\section{Introduction} Quantum CSS stabilizer codes\cite{css} can be understood in terms of homology\cite{csshomology1,csshomology2,csshomology3}, and different manifolds provide a rich source of different codes. The two-dimensional toric code\cite{csshomology1,csshomology2} and four-dimensional toric code\cite{4dtoric} are commonly considered examples; they are code families based on families of cellulations of two- and four-dimensional tori. Other manifolds\cite{fml} provide other interesting properties, such as greater distance, discussed below. In this paper, we consider families of codes based on high dimensional manifolds. We begin by considering some parameters that quantify a CSS code. The elementary degrees of freedom of a CSS code are qubits (or, more generally, qudits, for some $d\geq 2$). Let there be $N$ such qudits so that the Hilbert space has dimension $d^N$. CSS codes can be parametrized by several parameters, which we write as $[[N,K,D,W]]$. Here $N$ is the number of qudits. $K$ is the number of encoded qudits, so that the code has a code space which is a subspace of dimension $d^K$. $D$ is the ``distance" of the code, defined below, while $W$ is the ``weight" of the stabilizers, also defined below. Generally speaking, larger $K$ and $D$ are desirable, while smaller $W$ is also desirable (this discussion of desirability of certain values of the parameters ignores other questions like the ability to efficiently decode or encode states, which is a completely separate discussion that we do not consider in this paper). The best families of quantum codes obtained thus far have significantly worse scaling than the corresponding scaling for classical linear codes. Families of classical codes exist with $K=\Theta(N), D=\Theta(N), W={\mathcal O}(1)$ (so-called low density parity check codes provide such an example\cite{ldpc}). If we set $W={\mathcal O}(1)$, then the largest known distance for a quantum code family is $\Theta(\sqrt{N \log(N)})$ as in Ref.~\onlinecite{fml}, while if we want $D=\Theta(N)$, then the lowest known weight is $W=\Theta(\sqrt{N})$ as in Ref.~\onlinecite{bh}. These parameters refer to stabilizer codes; if one allows subsystem codes\cite{subsys}, then it is possible to achieve $D=\Theta(N^{1-\epsilon}),W={\mathcal O}(1)$ for $\epsilon={\mathcal O}(1/\sqrt{\log(N)})$ as in Ref.~\onlinecite{subsyslin}, but now the parameter $W$ does not refer to the weight of a set of commuting stabilizers but rather to the weight of a set of generators of the ``gauge group" and these generators need not commute with each other. If one requires that the stabilizer group be generated by local commuting operators, then currently no advantage is known for a subsystem code. Another notable stabilizer code family achieves $K=\Theta(N),D=\Theta(\sqrt{N}),W={\mathcal O}(1)$ and has an efficient local decoding algorithm\cite{tz}. In this paper, we construct code families that, assuming a conjecture in geometry, have almost linear distance and logarithmic weight generators. We review various concepts before giving an overview of the paper. \subsection{Review of CSS Codes and Relation to Homology} The code subspace is the subspace of the $d^N$-dimensional Hilbert space which is in the $+1$ eigenspace of several ``stabilizers". These stabilizers are of two types, called ``X-type" and ``Z-type".
The $Z$ operator on the $d$-dimensional Hilbert space of a single qudit is the operator \begin{equation} Z=\begin{pmatrix} 1 & \\ &\exp(\frac{2\pi i}{d})\\ && \exp(\frac{4\pi i}{d}) \\ &&&\ldots\end{pmatrix}, \end{equation} while the $X$ operator is the operator \begin{equation} X=\begin{pmatrix} 0 & 1 \\ & 0 & 1 \\ && 0 & 1 \\ &&&\ldots \\ 1 & 0 \ldots \end{pmatrix}. \end{equation} We write $Z_i$ or $X_i$ to indicate the operator $Z$ or $X$ acting on qudit $i$, tensored with the identity on all other qudits. Then, a Z-type stabilizer is the tensor product of $Z$ operators on some qudits, possibly raised to integer powers. Such a Z-type stabilizer might be written, for example, $Z_1 Z_3^2$ to indicate that it is the tensor product of $Z$ on qudit $1$ with the square of $Z$ on qudit $3$. These exponents can all be taken in the range $1,2,...,d-1$; if an operator on a given qudit is raised to power $0$, we simply do not write it when writing the $Z$ stabilizer. The X-type stabilizers are similar, with $Z$ replaced by $X$. We encode the Z-type stabilizers in a matrix that we denote $\partial_2$. This matrix has $N$ rows and has one column per Z-type stabilizer. The entries of the matrix are over the field ${\mathbb F}_d$. The entry in the $i$-th row and $j$-th column indicates which power of $Z_i$ appears in the $j$-th stabilizer; thus, for example, for the stabilizer $Z_1 Z_3^2$, the first row in the corresponding column would have a $1$ and the third row would have a $2$ and all other rows would be zero. We encode the X-type stabilizers also in a matrix, denoted by $\partial_1$. This matrix has $N$ columns and one row per X-type stabilizer, again with the entries over the field ${\mathbb F}_d$. The entry in the $i$-th row and $j$-th column indicates which power of $X_j$ appears in the $i$-th stabilizer. A final requirement on CSS codes is that the stabilizers commute with each other. Any pair of Z-type stabilizers trivially commute, as do any pair of X-type stabilizers. The requirement that the Z-type stabilizers commute with the X-type stabilizers can be simply expressed in terms of $\partial_2,\partial_1$ as \begin{equation} \partial_1 \partial_2=0. \end{equation} This requirement is equivalent to saying that there is a chain complex $${\mathcal C}_2 \stackrel{\partial_2}{\rightarrow} {\mathcal C}_1 \stackrel{\partial_1}{\rightarrow} {\mathcal C}_0,$$ where ${\mathcal C}_2,{\mathcal C}_1,{\mathcal C}_0$ are vector spaces over ${\mathbb F}_d$, with basis elements in one-to-one correspondence with Z-type stabilizers, qudits, and X-type stabilizers, respectively. We have ${\rm dim}({\mathcal C}_1)=N$. The number of encoded qudits $K$ is given by the first Betti number, which is equal to $N-{\rm dim}({\mathcal C}_2)-{\rm dim}({\mathcal C}_0)$ assuming that all stabilizers are independent of each other (i.e., that the columns of $\partial_2$ are linearly independent, as are the rows of $\partial_1$). The distance $D$ is defined as follows. Let us say that an operator $O$ is a Z-type logical operator if it is a tensor product of $Z$ operators on qudits which commutes with all X-type stabilizers and which is not itself a product of Z-type stabilizers. In the language of homology, such an operator is a representative of a nontrivial first homology class; write $$O=\prod_i Z_i^{a_i},$$ where the product ranges over all qudits and $a_i$ are in ${\mathbb F}_d$.
Define an $N$-component vector $v$ with entries $a_i$, so that the requirement that $O$ commutes with all X-type stabilizers is that $\partial_1 v=0$, while the requirement that $O$ not be a product of Z-type stabilizers is that $v$ is not in the image of $\partial_2$. An X-type logical operator is defined similarly, with $Z$ and $X$ interchanged everywhere in the definition. The weight of a Z-type (or X-type) logical operator $O$ is defined to be the number of qudits $i$ such that $Z_i$ (or $X_i$) appears in $O$ raised to a nonvanishing power mod $d$; we say that $Z_i$ or $X_i$ is in the support of the logical operator. We define $D_Z$ to be the minimum weight of a Z-type logical operator and $D_X$ to be the minimum weight of an X-type logical operator and define \begin{equation} D={\rm min}(D_X,D_Z). \end{equation} We define the weight $W$ of a code to be the least integer $W$ such that every row and every column of $\partial_2$ has at most $W$ nonvanishing entries and also every row and every column of $\partial_1$ has at most $W$ nonvanishing entries. Note that this means that not only does every stabilizer act on at most $W$ different qudits, but also every qudit is acted on by at most $W$ different $Z$-type stabilizers and $W$ different $X$-type stabilizers. We define the weight of an operator which is a product of $Z$ and $X$ operators to be the number of qudits on which the operator acts nontrivially; for example, the operator $X_1 X_3$ has weight $2$. Thus, every stabilizer has weight at most $W$. A vector $v$ in a vector space ${\mathcal C}_k$ is called a $k$-chain (or simply, a ``chain"). If $\partial_k v=0$, then $v$ is called a $k$-cycle. The weight of a vector is defined to be the number of nonzero entries in the vector. \subsection{CSS Codes from Manifolds and Systolic Freedom} Conversely, just as one can define a chain complex from a CSS code, one can use a chain complex to define a CSS code. Given any chain complex over some field ${\mathbb F}_d$, one can define a qudit CSS code: choose any vector space in the chain complex to correspond to the qudits, and then the vector spaces of one higher and one lower dimension correspond to the Z-type and X-type stabilizers. For example, given a triangulation (or cubulation or other discretization) of a four dimensional manifold one can define a chain complex $${\mathcal C}_4 \stackrel{\partial_4}{\rightarrow} {\mathcal C}_3 \stackrel{\partial_3}{\rightarrow} {\mathcal C}_2 \stackrel{\partial_2}{\rightarrow} {\mathcal C}_1 \stackrel{\partial_1}{\rightarrow} {\mathcal C}_0,$$ where the basis elements of ${\mathcal C}_k$ correspond to $k$-cells. Then, one can choose any integer $q$ and let the qudits correspond to the $q$-cells and the Z-type stabilizers correspond to $(q+1)$-cells and the X-type stabilizers correspond to $(q-1)$-cells. The case $q=2$ is the familiar four-dimensional toric code of Ref.~\onlinecite{4dtoric}, while the cases $q=0,4$ are classical repetition codes (Ising models) in the $Z$ or $X$ basis, respectively. Defining CSS codes from manifolds has several nice advantages. For one, often the distance of the code can be translated into geometric properties of the manifold and (up to some technical details that we discuss below) it can be geometrically interpreted as the least possible volume of a $q$-dimensional surface in a nontrivial homology class. Similarly, if the triangulation has a bounded local geometry, then this gives a bound on $W$.
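As a concrete illustration of this correspondence (our own illustration, not part of the constructions in this paper; the helper names and the choice $\ell=4$ are arbitrary), the following short Python sketch builds the boundary maps $\partial_2,\partial_1$ of an $\ell\times\ell$ square cellulation of the two-dimensional torus, checks that $\partial_1\partial_2=0$ over ${\mathbb F}_2$, and recovers $K=2$ encoded qubits from the ranks of the boundary maps.
\begin{verbatim}
# Illustrative sketch only: boundary maps of an l x l square cellulation of the
# 2-torus, checked to give a CSS code (partial_1 * partial_2 = 0 over F_2).
import numpy as np

l = 4
n_faces, n_edges, n_verts = l * l, 2 * l * l, l * l

def vert(x, y):              # vertex at integer coordinates (x, y), periodic
    return (x % l) * l + (y % l)

def edge(x, y, d):           # edge leaving (x, y) in direction d = 0 (x) or 1 (y)
    return 2 * vert(x, y) + d

d2 = np.zeros((n_edges, n_faces), dtype=int)   # partial_2 : C_2 -> C_1
d1 = np.zeros((n_verts, n_edges), dtype=int)   # partial_1 : C_1 -> C_0
for x in range(l):
    for y in range(l):
        f = vert(x, y)       # face with lower-left corner (x, y)
        for e in (edge(x, y, 0), edge(x, y, 1), edge(x, y + 1, 0), edge(x + 1, y, 1)):
            d2[e, f] = 1
        d1[vert(x, y), edge(x, y, 0)] = d1[vert(x + 1, y), edge(x, y, 0)] = 1
        d1[vert(x, y), edge(x, y, 1)] = d1[vert(x, y + 1), edge(x, y, 1)] = 1

assert np.all((d1 @ d2) % 2 == 0)              # X- and Z-type stabilizers commute

def rank_f2(mat):                              # Gaussian elimination over F_2
    m, r = mat.copy() % 2, 0
    for c in range(m.shape[1]):
        pivots = np.nonzero(m[r:, c])[0]
        if len(pivots) == 0:
            continue
        m[[r, r + pivots[0]]] = m[[r + pivots[0], r]]
        m[(m[:, c] == 1) & (np.arange(m.shape[0]) != r)] ^= m[r]
        r += 1
    return r

N = n_edges
K = N - rank_f2(d2) - rank_f2(d1)
print(N, K)                                    # prints 32 2
\end{verbatim}
Here the number of qubits is $N=2\ell^2$ and the two encoded qubits correspond to the two nontrivial one-dimensional homology classes of the torus.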
Naively, it might seem that such constructions will not be able to obtain a better-than-square-root distance, i.e., that they are limited to $D={\mathcal O}(\sqrt{N})$. We now give some intuition for this naive belief, and give a more detailed discussion of the relation between volume and number of qudits in one particular example, as it will be useful later. Consider an $n$-dimensional torus constructed from a hypercube of length $\ell$ on each side for some integer $\ell$ by gluing the opposite faces together. Introduce coordinates $(x_1,\ldots,x_n)$. Discretize the torus by hypercubes of unit length in the obvious way, so that the $0$-cells are at integer values of the coordinates. In this case, the volume of the torus is equal to the number of hypercubes in the discretization, which equals $\ell^n$. The number of qudits is given by $$N=\ell^n {n \choose q},$$ while $$D_Z=\ell^q, \quad D_X=\ell^{n-q}.$$ To see that $D_Z\leq \ell^q$, one can pick any $q$-dimensional plane where $q$ of the coordinates assume arbitrary values and the other coordinates are held fixed at integer values; then, the product of $Z$ over the $q$-cells in this plane gives a logical operator. We omit the proof that this upper bound for $D$ is tight in this case. The value of $D_X$ is obtained by picking any $(n-q)$-dimensional plane on the dual lattice; the product of $X$ over the $q$-cells that intersect this plane also gives a logical operator. Choosing the optimal value $q=n/2$ still leads only to $D=\Theta(\sqrt{N})$. Varying the geometry of the torus by changing the aspect ratio (i.e., keeping the sides of the torus orthogonal to each other but changing the relative lengths) does not lead to any improvement. However, this naive belief is false. ``Systolic freedom" is the term for a concept due to Gromov\cite{sysfree}, that one may have $n$-dimensional manifolds for which the product of the $q$-systole (the least volume of a surface representing a nontrivial element of the $q$-th homology) and the $(n-q)$-systole may be arbitrarily larger than the volume of the manifold. This phenomenon was originally observed for integer homology (corresponding to qudit quantum codes with large $d$), while only later in Ref.~\onlinecite{fml} was it constructed for ${\mathbb Z}_2$ homology. \subsection{Overview of Paper} In the original construction of systolic freedom\cite{sysfree}, the topology of the manifold was held fixed and the metric was varied to obtain a diverging ratio, while in the ${\mathbb Z}_2$ case\cite{fml}, the topology of the manifold was varied to obtain a diverging ratio. In this paper, we consider instead a family of manifolds of increasing dimension. Most of the paper is devoted to considering tori ${\mathbb R}^n/\Lambda$ for certain random lattices $\Lambda$. In section \ref{defns} we make various definitions of the random lattices and define Rankin invariants. In section \ref{overview} we give an overview of the construction, present a geometric conjecture \ref{conj1}, and state theorem \ref{mainth} that, assuming the conjecture, there exist quantum CSS codes with logarithmic weight and almost linear distance. In section \ref{rankininvar} we prove lower bounds on the Rankin invariant of certain random lattices, which is the main step in proving theorem \ref{mainth}. In section \ref{calibrationsec} we discuss some obstacles to proving even a weaker form of conjecture \ref{conj1} (involving oriented surfaces) and we consider shortest vectors in the exterior product of a lattice.
Finally, in section \ref{qltcsec} we give some alternative constructions which have only square-root distance but which have inverse polylogarithmic soundness parameters as quantum locally testable codes\cite{qltc}. To give some motivation for our lattice construction, consider the two-dimensional toric code. On a square lattice with length $\ell$ on each side, there are $2\ell^2$ qubits and the distance is $\ell$. Suppose we ignore the details of the cellulation and take an arbitrary torus ${\mathbb R}^2/\Lambda$, pretending that the number of qubits is equal to the area ($\ell^2$) and the distance is equal to the length of the shortest vector in the lattice $\Lambda$. Then, a slightly better geometry than the square lattice would be to take the hexagonal lattice, as the ratio of the square of the length of the shortest vector to the area of the torus is equal to $2/\sqrt{3}$ rather than $1$. This is only a slight constant improvement over the square lattice. However, in higher dimensions, the shortest vector in the lattice $\Lambda$ can be roughly a factor $\sqrt{n}$ longer than the $1/n$-th power of the volume of the torus ${\mathbb R}^n/\Lambda$. Further, if we consider least volume surfaces representing nontrivial homology for $q>1$, then larger improvements are possible (at least for surfaces which are hyperplanes). This motivates our construction and the consideration of so-called ``Rankin invariants"\cite{rankin}. \section{Random Lattices and Definitions} \label{defns} Consider a so-called LDA lattice\cite{lda,lda2} constructed as follows. We pick a prime $p$. We will construct a lattice which is a subset of ${\mathbb Z}^n$ for some even $n$. We first construct a linear code of length $n$ over the field ${\mathbb F}_p$. We define this code by a ``code generator matrix" $G$ which is an $n$-by-$k$ matrix such that the {\it column} vectors are a basis for the codewords. (We explicitly call it a ``code generator matrix" rather than just a ``generator matrix", as we will also consider lattice generator matrices later.) Usually in coding theory, it is conventional to let the {\it rows} of a code generator matrix be the basis for a code, but to maintain consistency with notation we use later, we instead use the {\it columns} as the basis. We choose the entries of $G$ independently and uniformly from ${\mathbb F}_p$. We will be interested in taking $n$ large at fixed ratio $k/n<1$. With high probability (i.e., with probability tending to $1$ as $n\rightarrow \infty$ with $k/n$ fixed), $G$ is non-degenerate (see next paragraph). Assuming that $G$ is indeed non-degenerate, one can find a permutation of the rows such that $G$ is in the form $$G=\begin{pmatrix} A \\ B \end{pmatrix}$$ where $A,B$ are $k$-by-$k$ and $(n-k)$-by-$k$ matrices with $A$ non-degenerate. Then, since $A$ is non-degenerate there exists a sequence of elementary column operations that brings $A$ to the identity matrix, where for a matrix over ${\mathbb F}_p$ an elementary column operation is one of: adding one column to another, multiplying a column by any nonzero element of the field, or interchanging two columns. These column operations bring $G$ to the form $$G=\begin{pmatrix} I \\ C \end{pmatrix},$$ where $I$ is the $k$-by-$k$ identity matrix and $C$ is some $(n-k)$-by-$k$ matrix.
Since the entries of $C$ are obtained by applying these column operations to the entries of $B$, the entries of $C$ are chosen independently of each other and uniformly from ${\mathbb F}_p$: applying any elementary column operation to an ensemble of matrices with entries chosen uniformly and independently leaves this ensemble invariant. This is the form of $G$ that we work with in the rest of this section. \begin{definition} Let the lattice $L_0$ be the set of points $x_1,...,x_n$ in ${\mathbb Z}^n$ such that the vector $(x_1 \mod p,...,x_n \mod p)$ is in the linear code defined by $G$. \end{definition} We now show that with high probability, $G$ is non-degenerate. With probability $1-(1/p)^n$, the first column of $G$ has a nonzero entry. By elementary column operations, adding a multiple of the first column to other columns, we can set all other columns equal to zero in the first row for which the first column has a nonzero entry. Then, with probability $1-(1/p)^{n-1}$, the second column has a nonzero entry in some other row. Add a multiple of the second column to the third, fourth,... columns to set them equal to zero in the first row for which the second column has a nonzero entry. Continuing in this fashion, the probability that $G$ is non-degenerate is $(1-(1/p)^n)(1-(1/p)^{n-1})\ldots(1-(1/p)^{n-k+1})$, which indeed is $1-o(1)$. The lattice $L_0$ is the set of integer linear combinations of the columns of $G$ (interpreted as vectors of integers, rather than as vectors of elements of ${\mathbb F}_p$) and of the $n$ vectors with a $p$ in one coordinate and zeroes elsewhere. Then, the lattice $L_0$ is the set of integer linear combinations of the columns of the matrix $$\begin{pmatrix} I & pI & 0 \\ C & 0 & pI \end{pmatrix},$$ where the row blocks have sizes $k$ and $n-k$ respectively, while the column blocks have sizes $k$, $k$, and $n-k$, respectively, and where $I$ is the identity matrix of appropriate size. However, any integer linear combination of column vectors of $\begin{pmatrix} pI \\ 0 \end{pmatrix}$ is also an integer linear combination of column vectors of $$\begin{pmatrix} I & 0 \\ C & pI \end{pmatrix}.$$ To see this, consider any vector of integers $\vec y=(y_1,...,y_k)$. Then, \begin{eqnarray} \begin{pmatrix} pI \\ 0 \end{pmatrix} \vec y&=&\begin{pmatrix} p \vec y \\ \vec 0\end{pmatrix}\\ \nonumber &=&\begin{pmatrix} I \\ C \end{pmatrix} p\vec y-\begin{pmatrix} 0 \\ pI \end{pmatrix} C\vec y. \end{eqnarray} Thus, $L_0$ is the set of integer linear combinations of columns of the matrix $$B_0=\begin{pmatrix} I & 0 \\ C &pI \end{pmatrix}.$$ A matrix $B$ such that the lattice is the set of integer combinations of columns of $B$ is called a generating matrix for the lattice. Two different generating matrices $B_1,B_2$ define the same lattice if and only if $B_1=B_2 T$ where $T$ is an integer matrix such that $T^{-1}$ also is an integer matrix. In this case, the matrix $B_1$ can be turned into the matrix $B_2$ by a sequence of elementary column operations, where an elementary column operation is one of: adding one column to another, changing the signs of all entries in a column, or interchanging two columns. Given a lattice $L$ with generating matrix $B$ which is an $n$-by-$k$ matrix, such that $B$ has rank $k$, we define the volume of the lattice to equal ${\rm vol}(L)={\rm det}(B^\dagger B)^{1/2}$. If $k=n$, then ${\rm vol}(L)=|{\rm det}(B)|$.
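To make the construction concrete, here is a small numerical sketch (our own illustration; the parameter values $p=5$, $n=8$, $k=4$ are arbitrary and not part of the construction). It draws $C$ uniformly over ${\mathbb F}_p$, forms the generating matrix $B_0$, checks numerically that ${\rm vol}(L_0)=|{\rm det}(B_0)|=p^{n-k}$ (as computed explicitly just below), and checks that a random lattice point reduces modulo $p$ to a codeword of $G$.
\begin{verbatim}
# Minimal numerical sketch of the LDA lattice L_0 (illustration only).
import numpy as np

rng = np.random.default_rng(0)
p, n, k = 5, 8, 4                       # arbitrary small example values
C = rng.integers(0, p, size=(n - k, k))

# Generating matrix B_0 = [[I, 0], [C, pI]] for L_0.
B0 = np.block([[np.eye(k, dtype=int), np.zeros((k, n - k), dtype=int)],
               [C, p * np.eye(n - k, dtype=int)]])

# vol(L_0) = |det(B_0)| = p^(n-k).
assert round(abs(np.linalg.det(B0))) == p ** (n - k)

# Any lattice point x = B_0 y reduces mod p to a codeword of G = [[I], [C]]:
# its reduction equals G times its first k coordinates, mod p.
y = rng.integers(-10, 10, size=n)
x = B0 @ y
G = np.vstack([np.eye(k, dtype=int), C])
assert np.all((G @ (x[:k] % p) - x) % p == 0)
\end{verbatim}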
\begin{definition} Given any linearly independent set of vectors $x_1,...,x_k$ in ${\mathbb Z}^n$ (or more generally in ${\mathbb R}^n$) we define their volume ${\rm vol}(x_1,...,x_k)$ to be the volume of the lattice generated by the $n$-by-$k$ matrix with columns $x_1,...,x_k$. \end{definition} The matrix $B_0$ is lower triangular and so ${\rm det}(B_0)$ is easily computed: \begin{equation} {\rm vol}(L_0)=|{\rm det}(B_0)|=p^{n-k}. \end{equation} \begin{definition} An ``integral lattice" is defined to be a lattice whose generating matrix has integer entries. A ``primitive lattice" is defined to be an integral lattice such that there is no other integral lattice of the same rank properly containing it. Equivalently, there is no integral lattice which spans the same subspace and properly contains it. \end{definition} Example: in two dimensions, the lattice generated by the vector $(2,1)$ is primitive, while that generated by $(4,2)$ is not. Unless specified otherwise, all lattices will be in $n$ dimensions. We use $|\ldots |$ to denote the $\ell_2$ norm of a vector. Finally, we define the Rankin invariant. \begin{definition} The Rankin invariant $\gamma_{n,m}(L)$ for a lattice $L$ with rank $n$ is defined to be \begin{equation} \gamma_{n,m}(L)={\rm min}_{\stackrel{v_1,...,v_m \in L}{{\rm vol}(v_1,...,v_m) \neq 0}} \Bigl( \frac{{\rm vol}(v_1,...,v_m)}{{\rm vol}(L)^{m/n}} \Bigr)^2. \end{equation} The square in the above definition is included for historical reasons. The factor $m/n$ in the exponent of ${\rm vol}(L)$ is such that the invariant is unchanged under rescaling the lattice $L$ by any constant factor. In the case $m=1$, the Rankin invariant $\gamma_{n,1}(L)$ is related to the length of the shortest vector: $\gamma_{n,1}(L)={\rm min}_{x \in L, x \neq 0} \frac{|x|^2}{{\rm vol}(L)^{2/n}}$. Clearly, $\gamma_{n,n}(L)=1$ for all $L$. \end{definition} To understand the higher Rankin invariants, consider a set of vectors $v_1,...,v_m \in L$ with ${\rm vol}(v_1,...,v_m)\neq 0$. Consider the torus ${\mathbb R}^n/L$. The $m$-dimensional hyperplane spanned by $v_1,...,v_m$ represents a nontrivial integer homology class and has an $m$-dimensional volume (using the Euclidean metric) equal to ${\rm vol}(v_1,...,v_m)$. \section{Overview of Construction: Conjectures and Main Result on Distance} \label{overview} We will consider a family of CSS codes obtained by choosing a fixed prime $p$ and taking LDA lattices with $k=n/2$ from the random ensemble above, for all (even) values of $n$. With high probability, this lattice has rank $n$. Given the lattice, we take a cellulation of the torus ${\mathbb R}^n/L_0$ by hypercubes of length $1$ on each side. Then, we consider a qubit toric code on this cellulation with degrees of freedom on $q$-cells for $q=n/2$. Then, the number of $q$-cells is equal to \begin{equation} N={n \choose n/2} p^{n/2}. \end{equation} The distance of the code is equal to the weight of the least weight logical $X$ or $Z$ operator. The vector corresponding to such an operator represents nontrivial homology or cohomology with ${\mathbb Z}_2$ coefficients.
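Before stating the conjecture, we give a small numerical illustration of the Rankin invariant defined above (our own illustration, not part of the construction). The brute-force Python sketch below computes $\gamma_{2,1}$ for the hexagonal lattice mentioned in the motivation earlier; for this lattice a search box of integer coefficients in $\{-3,\ldots,3\}$ suffices to find the shortest vector.
\begin{verbatim}
# Brute-force check of the Rankin invariant gamma_{2,1} for the hexagonal
# lattice (illustration only); the motivation above says it equals 2/sqrt(3).
import itertools
import numpy as np

B = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3) / 2]])        # columns are the basis vectors
vol = abs(np.linalg.det(B))                  # vol(L) = sqrt(3)/2

# m = 1: the minimum is over single nonzero lattice vectors; a small search
# box of integer coefficients is enough to find the shortest vector here.
shortest_sq = min(float(np.sum((B @ np.array(c)) ** 2))
                  for c in itertools.product(range(-3, 4), repeat=2)
                  if c != (0, 0))

gamma_21 = shortest_sq / vol ** (2 * 1 / 2)  # |x|^2 / vol(L)^(2m/n) with n=2, m=1
print(gamma_21, 2 / np.sqrt(3))              # both approximately 1.1547
\end{verbatim}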
We conjecture that: \begin{conjecture} \label{conj1} There exists a constant $C>0$, such that for any $n$-dimensional integer lattice $L$, for the toric code obtained by the cellulation using integer hypercubes and degrees of freedom on $q$-cells for $q=n/2$, the distance is lower bounded by $C^n {\rm min}_{\stackrel{v_1,...,v_q \in L}{{\rm vol}(v_1,...,v_q) \neq 0}} {\rm vol}(v_1,...,v_q)=C^n {\rm vol}(L)^{q/n} \gamma_{n,q}(L)^{1/2}$. \end{conjecture} Let us motivate this conjecture. The least volume hyperplane representing nontrivial homology has volume ${\rm vol}(L)^{q/n} \gamma_{n,q}(L)^{1/2}$, given by the Rankin invariant. This hyperplane need not lie on the $q$-cells that we have chosen. We can deform the hyperplane to get a surface that lies on the $q$-cells using the Federer-Fleming deformation theorem\cite{ff}: this theorem is based on deforming the surface to lie on the $(n-1)$-skeleton (i.e., the $(n-1)$-dimensional faces of the hypercubes of unit size), then on the $(n-2)$-skeleton, and so on, iteratively, until the surface lies on the $q$-skeleton. The deformation to move the surface from the $m$-skeleton to the $(m-1)$-skeleton is done by choosing a point randomly in an $m$-dimensional hypercube and then projecting the surface outwards from that point to the boundary. This deformation may increase the volume, but that is fine: what we are concerned with is lower bounding the volume. However, it is not clear that the optimal operator is obtained by such a deformation procedure starting from a hyperplane. There may be, for example, unoriented chains which are not hyperplanes but which represent nontrivial homology and have much smaller volume than the least volume hyperplane. The conjecture is that such surfaces can have at most exponentially smaller (i.e., smaller by a factor $C^n$) volume. Conjecture \ref{conj1} considers the distance of the code, which is equal to the least volume of a ${\mathbb Z}_2$ cycle representing nontrivial homology. The cycles are in the chain complex obtained from the cellulation using hypercubes. One may be tempted to make a (possibly stronger) conjecture that a similar inequality holds for more general chains, such as polyhedral chains. In this regard, we remark that the possible increase in volume from the Federer-Fleming deformation theorem may be superexponentially large: the upper bound on the increase is $2n^{n/2} {n \choose n/2}$ (see Ref.~\onlinecite{ffencyc}). We prove that: \begin{theorem} \label{mainth} Assume that conjecture \ref{conj1} holds. Then, for any $\epsilon>0$, there exists a family of quantum CSS codes on $N$ qubits with distance $D=\Omega(N^{1-\epsilon})$ and weight $w={\mathcal O}(\log(N))$ and with $\Theta(N^\delta)$ encoded qubits, where $\delta>0$ ($\delta$ depends on $\epsilon$). \end{theorem} This theorem will follow from a corollary of theorem \ref{countingthm}, which implies that for any constant $c<1/\sqrt{2\pi e}$, with high probability we have ${\rm min}_{\stackrel{v_1,...,v_q \in L}{{\rm vol}(v_1,...,v_q) \neq 0}} {\rm vol}(v_1,...,v_q) \geq (cp)^{n/2}$. Hence, with high probability, $d \geq (cC^2 p)^{n/2}$. Since $N={n \choose n/2} p^{n/2} \leq (4p)^{n/2}$, with high probability we have $$d \geq (cC^2 p)^{\log_{4p}(N)}=N^{\frac{\log(cC^2p)}{\log(4p)}}.$$ Fixing $c$ to be any constant slightly smaller than $1/\sqrt{2\pi e}$, we find for any $\epsilon>0$ that for all sufficiently large $p$ we have $$1-\epsilon \leq \frac{\log(cC^2p)}{\log(4p)}$$ so that $d\geq N^{1-\epsilon}$. We have $w={\mathcal O}(n)={\mathcal O}(\log(N))$.
The number of encoded qubits is equal to ${n \choose k}=2^{(1-o(1)) n}=2^{2(1-o(1))\log_{4p}(N)}=N^{2(1-o(1))/\log_2(4p)} \equiv N^{\delta}$. The main work will be theorem \ref{countingthm}, which lower bounds the Rankin invariant for this class of lattices. The reader may wonder why we introduce this class of lattices, instead of re-using previous results which show that there exist random lattices with a large Rankin invariant, $\gamma_{n,n/2}(L) \geq (\frac{k}{12})^{n/4}$. See theorem 3 in Ref.~\onlinecite{blockwise}. The reason is that the random lattices constructed there need not be integral lattices and so we do not have such an obvious cell decomposition to place on the lattices. We comment later on the relationship between the Rankin invariant for our lattice (which depends on $n,p$) and the invariant in Ref.~\onlinecite{blockwise}; this requires considering how large $n$ needs to be compared to $p$ in our construction. Note that we choose $p$ large so that the exponentially growing factor, $\approx 2^n$, arising from the factor ${n \choose n/2}$ in the number of cells will be polynomially smaller than the volume $p^{n/2}$. We have $2^n=(p^{n})^{1/\log_2(p)}$. We remark that similar code constructions can be made by choosing degrees of freedom on $q$-cells for $q\neq n/2$, taking $n$ large at a fixed ratio $q/n$. In this case, a natural generalization of conjecture \ref{conj1} is to assume that there is a constant $C$ such that $d_Z \geq C^n {\rm vol}(L)^{q/n} \gamma_{n,q}(L)^{1/2}$ and $d_X \geq C^n {\rm vol}(L)^{(n-q)/n} \gamma_{n,n-q}(L)^{1/2}$. Assuming this conjecture, our construction would give a code with $d_X d_Z$ polynomially larger than $N$. \section{Rankin Invariants} \label{rankininvar} In this section, we will prove lower bounds on the Rankin invariants\cite{rankin} $\gamma_{n,m}(L_0)$ of $L_0$. The proof uses the probabilistic method; in particular, we use the first moment method. To motivate the proof, let us first sketch a proof method for $\gamma_{n,1}(L_0)$; then, we sketch a possible extension of the proof method to $\gamma_{n,m}(L_0)$ and explain some difficulties with this extension; finally, we outline the approach we use, which is a modification of that extension. First, suppose we just want to lower bound $\gamma_{n,1}(L_0)$; i.e., we wish to lower bound the length of the shortest vector in the lattice. This can be done by a first moment method: estimate the number of integer vectors with length less than some given length $\ell$; then, compute the probability that any given integer vector is in the lattice (this probability is $p^{-(n-k)}$ for a randomly chosen code assuming $G$ is non-degenerate); so, for sufficiently small $\ell$, the average number of integer vectors with length less than $\ell$ in the lattice is small, so it is unlikely that any integer vector with length less than $\ell$ will be in the lattice. One might attempt to do something similar for the Rankin invariants: estimate the number of rank-$m$ integral lattices in $n$ dimensions with volume at most $V$ and then compute the probability that an integral lattice is in a randomly chosen linear code. Call this rank-$m$ lattice $K$ and call its generating matrix $M_K$. In fact, Ref.~\onlinecite{schmidt0} provides asymptotic estimates (large $V$) for the number of such lattices $K$, so it might seem that one could directly use the results there in a first moment method.
Indeed, this approach might work, but since the results of Ref.~\onlinecite{schmidt0} hold in the asymptotic limit (large $V$), some additional estimates would be needed (we do use many results in Ref.~\onlinecite{schmidt0}). However, the results we need are in some ways simpler than those of Ref.~\onlinecite{schmidt0} because we do not care about an exact estimate of the number of such lattices, only an upper bound. Further, rather than applying the first moment method by estimating the number of lattices $K$ with some given volume, estimating the probability that such a lattice is in the code, and then showing that the product is small for small $V$, we will apply the first moment method to each column of the generating matrix $M_K$ {\it separately} (with $M_K$ written in Hermite normal form). That is, we first estimate (this step is exactly analogous to the discussion at the start of this paragraph regarding how to lower bound $\gamma_{n,1}(L_0)$) the probability that there is a choice for the first column which has small length and which is in the code. Then, we estimate the probability that there is a choice for the second column which is also in the code such that the ratio of the volume of the lattice generated by the first two columns of $M_K$ to the volume of the lattice generated by the first column of $M_K$ is small. To do this calculation, we need the concept of a ``factor lattice"\cite{schmidt0}, reviewed below. We continue in this fashion over the other columns, showing that the ratio of the volume of the lattice generated by the first $a$ columns of $M_K$ to the volume of the lattice generated by the first $a-1$ columns of $M_K$ is likely to be large, for each $a=2,3,\ldots$. \subsection{Counting Points} Let $V_d(r)$ denote the volume of a ball of radius $r$ in $d$ dimensions: \begin{equation} V_d(r)=\frac{\pi^{d/2}}{\Gamma(\frac{d}{2}+1)}r^d. \end{equation} Given a rank-$l$ lattice $L$ which spans some space $E$, we define the Voronoi cell to be the set of points $y$ in $E$ such that $|y| \leq |y-v|$ for all lattice points $v \neq 0$. The $l$-dimensional volume of the Voronoi cell is equal to ${\rm vol}(L)$. \begin{definition} Given a lattice $L$, let $N(L,z,r)$ denote the number of points in lattice $L$ within distance $r$ of some given point $z$. \end{definition} \begin{lemma} \label{numpoints} Let $L$ be a rank-$l$ lattice in $d$ dimensions which spans some space $E$. Suppose the diameter of the Voronoi cell of $L$ is bounded by some given $D$. Then, for any $z,r$, \begin{equation} N(L,z,r) \leq \frac{1}{{\rm vol}(L)} V_l(r+D). \end{equation} \begin{proof} For every point $x\in L$ within distance $r$ of $z$, let $T_x$ be the set of points $y\in E$ such that $y-x$ is in the interior of the Voronoi cell. The sets $T_x$ are non-overlapping and each has $l$-dimensional volume ${\rm vol}(L)$. So, the volume of $\cup_{x,|x-z|\leq r} T_x$ is equal to $N(L,z,r) {\rm vol}(L)$. Every $x$ is within distance $r$ of $z$ and so every point in $\cup_{x,|x-z|\leq r} T_x$ is within distance $r+D$ of $z$, so $N(L,z,r) {\rm vol}(L) \leq V_l(r+D)$. \end{proof} \end{lemma} We make some more definitions. \begin{definition} Given a rank-$l$ lattice $L$ spanning a subspace $E$, the polar lattice $L^P$ is the lattice of all vectors in $E$ which have integral inner products with all vectors in $L$. The polar lattice also has rank $l$ and ${\rm vol}(L^P) {\rm vol}(L)=1$.
\end{definition} \begin{definition} Let $\Gamma_0^n$ be the rank-$n$ lattice in $n$ dimensions consisting of all vectors for which all coordinates are integral. \end{definition} \begin{definition} Given a primitive lattice $L$ spanning subspace $E$, the orthogonal lattice $L^\perp$ consists of all vectors in $\Gamma_0^n$ with vanishing inner product with all vectors in $L$. \end{definition} \begin{definition} Let $L$ be a rank-$l$ primitive sublattice of $\Gamma_0^n$ and let $E$ be the subspace spanned by $L$. Let $\pi$ project onto the orthogonal complement of $E$, which we write $E^\perp$. Let $\pi(\Gamma_0^n) \equiv \Gamma_0^n/L$. Then, $\Gamma_0^n/L$ is also a lattice, called the factor lattice. It has rank $n-l$. \end{definition} We have\cite{schmidt0} \begin{equation} \label{volinvert} {\rm vol}(L) {\rm vol}(\Gamma_0^n/L)=1. \end{equation} This equation follows from this lemma: \begin{lemma} \begin{equation} \Gamma_0^n/L=((L)^\perp)^P. \end{equation} \begin{proof} See Ref.~\onlinecite{schmidt0}. \end{proof} \end{lemma} \begin{lemma} \label{diamboundlemma} Let $L$ be a rank-$l$ primitive sublattice of $\Gamma_0^n$. Let $\pi$ and $\Gamma_0^n/L$ be as above. Then, the diameter of the Voronoi cell of $\Gamma_0^n/L$ is bounded by $\sqrt{n-l}$. \begin{proof} Since $L$ has rank $l<n$, there must be some vector $w_1$ which has a $1$ in one coordinate and zeroes in all other coordinates (i.e., $w_1$ is of the form $(0,\ldots,0,1,0,\ldots,0)$) which is not in $E$. Then, since the span of $E$ and $w_1$ has dimension $l+1$, if $l<n-1$, there must be some other vector $w_2$ of the same form which is not in the span of $E$ and $w_1$. Proceeding in this fashion, we construct vectors $w_1,...,w_{n-l}$, all of which have zeroes in all but one coordinate and a $1$ in that coordinate. The vectors $\pi(w_i)$ span $E^\perp$. So, every point $y$ in $E^\perp$ can be written as a linear combination $y=\pi(\sum_i a_i w_i)$. If the $a_i$ are integer, then $y$ is a lattice point in $\pi(\Gamma_0^n)$. Every linear combination $\sum_i a_i w_i$ is within distance $(1/2)\sqrt{n-l}$ of some linear combination $\sum_i b_i w_i$ with integer $b_i$ (to see this, simply round all $a_i$ to the nearest integer). Since the norm does not increase under projection, every $\pi(\sum_i a_i w_i)$ is also within distance $(1/2) \sqrt{n-l}$ of some $\pi(\sum_i b_i w_i)$ for integer $b_i$ and hence every point in $E^\perp$ is within distance $(1/2) \sqrt{n-l}$ of a lattice point. \end{proof} \end{lemma} We remark that the lattice with basis vectors $\pi(w_i)$ may not include all points in $\pi(\Gamma_0^n)$; as an example, consider $l=1$ and $n=2$ and let $L$ be the lattice with basis vector $(2,1)$ and let $w_1=(0,1)$. The vector $\pi((1,0))$ is then not included in the lattice with basis vector $\pi(w_1)$. \begin{lemma} \label{pointsboundlemma} Let $L$ be a rank-$l$ primitive sublattice of $\Gamma_0^n$. Let $\pi$ and $\Gamma_0^n/L$ be as above. The number of points in $\Gamma_0^n/L$ within distance $r$ of the origin is bounded by \begin{equation} N(\Gamma_0^n/L,0,r) \leq {\rm vol}(L) V_{n-l}(r+\sqrt{n-l}). \end{equation} \begin{proof} This follows from lemmas \ref{numpoints},\ref{diamboundlemma} and Eq.~(\ref{volinvert}). \end{proof} \end{lemma} \subsection{Hermite Normal Form For Lattices} Consider a rank-$m$ integral lattice $K$ in $n$ dimensions. If this lattice has basis vectors $v_1,...,v_m$, we write an $n$-by-$m$ matrix $M_K$ whose columns are these basis vectors.
We label the rows of the matrix by integers $1,\ldots,n$ and label the columns by integers $1,\ldots,m$. Such a matrix is called a lattice generator matrix for the lattice. Then, the set of points in the integral lattice is the image under $M_K$ of $\Gamma_0^{m}$. By a sequence of column operations (adding one column of $M_K$ to another column, which does not change the image, or changing the sign of a column, which also does not change the image), we can always bring the matrix $M_K$ to so-called ``Hermite normal form"; further, there is a unique matrix $M_K$ in Hermite normal form which generates $K$. Our definition of Hermite normal form differs from that of other authors because we will {\it reverse} the order of columns and {\it reverse} the order of rows compared to the usual order. This is because we will be doing induction later and with the reversed order of columns, the notation will be much more natural later. See Eq.~(\ref{example}) below for an example of Hermite normal form. \begin{definition} \label{HNF} A matrix $M_K$ is said to be in Hermite normal form if for every column $j$ there is a row $i_j$ with $1 \leq i_1 < i_2 < \ldots <i_m\leq n$ such that the entries of $M_K$ obey: \begin{equation} \label{zeroes} i>i_j \quad \rightarrow \quad (M_K)_{i,j}=0 \end{equation} and \begin{equation} \label{pos} (M_K)_{i_j,j}>0, \end{equation} and \begin{equation} \label{Sformnew} l > j \quad \rightarrow \quad 0 \leq (M_K)_{i_j,l} < (M_K)_{i_j,j}. \end{equation} We say that ``the first $a$ columns of $M_K$ are in Hermite normal form" if the submatrix of $M_K$ consisting of the first $a$ columns is in Hermite normal form. In this case, for every column $j$ with $j\leq a$ there is a row $i_j$ with $1 \leq i_{1}<i_{2} <\ldots <i_a \leq n$ such that Eqs.~(\ref{zeroes},\ref{pos},\ref{Sformnew}) hold when restricted to the case that $j \leq a$ and $l \leq a$. \end{definition} We introduce some notation. This notation defines various vector spaces and vectors in terms of the matrix $M_K$; we do not explicitly write $M_K$ in the definition, but rather the particular choice of $M_K$ should be clear in context. The last nonzero entry in the $j$-th column occurs in the $i_j$-th row. Define a sequence of lattices $K_1,K_2,...,K_m$, where $K_j$ has rank $j$ and $K_j$ is defined to be the lattice generated by the submatrix of $M_K$ containing the first $i_j$ rows and the first $j$ columns. Note that $K_m=K$. Note also that if $K_a$ is primitive then $K_b$ is primitive for all $b<a$. We let $\vec v_{j}$ be the vector given by the first $i_j$ rows of the $j$-th column. This notation can be clarified with an example with $n=5,m=3$, with $i_1=2,i_2=4,i_3=5$, where we write \begin{equation} \label{example} M_K=\begin{pmatrix} (\vec v_1)_1 & (\vec v_2)_1 & (\vec v_3)_1 \\ (\vec v_1)_2 &(\vec v_2)_2 & (\vec v_3)_2 \\ 0&(\vec v_2)_3 & (\vec v_3)_3 \\ 0 & (\vec v_2)_4 & (\vec v_3)_4\\ 0 & 0 & (\vec v_3)_5 \end{pmatrix}, \end{equation} with $(\vec v_j)_i$ denoting the $i$-th entry of vector $\vec v_j$. For this matrix to be in Hermite normal form, we have $0 \leq (\vec v_2)_2,(\vec v_3)_2 < (\vec v_1)_2$ and $0 \leq (\vec v_3)_4<(\vec v_2)_4$. The lattice $K_j$ is a sublattice of $\Gamma_0^{i_j}$. We let $M_{K_j}$ be the submatrix of $M_K$ consisting of the first $i_j$ rows and the first $j$ columns, so that $M_{K_j}$ generates $K_j$. We also define a lattice $\tilde K_j$ which is a sublattice of $\Gamma_0^{i_{j+1}}$.
The lattice $\tilde K_j$ will be the sublattice generated by the submatrix of $M_K$ consisting of the first $i_{j+1}$ rows and the first $j$ columns. We let $M_{\tilde K_j}$ be this submatrix of $M_K$, consisting of the first $i_{j+1}$ rows and the first $j$ columns. Hence, the last $i_{j+1}-i_j$ entries of every vector in $\tilde K_j$ are equal to $0$. Let $\pi_j$ project onto the orthogonal complement of the span of $\tilde K_j$. \begin{lemma} \label{factorlemma} Let $K$ be a rank-$m$ lattice in $n$ dimensions with generating matrix $M_K$ in Hermite normal form. Then, there exist an $n$-by-$m$ integer matrix $M_{K^P}$ which is a lattice generating matrix in Hermite normal form (with the same $i_j$ as $M_K$) for a primitive lattice, and an $m$-by-$m$ integer matrix $F$ which is upper triangular with positive diagonal entries such that we have \begin{equation} \label{factoreq} M_K=M_{K^P} F. \end{equation} Further, $F,M_{K^P}$ are unique. \begin{proof} Let $K^P$ be a primitive lattice spanning the same space as $K$ and containing the lattice $K$. (Note that such a primitive $K^P$ must exist and is unique: it is the lattice consisting of all integer points which are in the space spanned by $K$.) Let $K^P$ be generated by $M_{K^P}$ with $M_{K^P}$ in Hermite normal form; note that since $K^P$ is unique, $M_{K^P}$ is uniquely determined by $K$. Then, since $K$ is contained in $K^P$, every column of $M_K$ is an integer linear combination of columns of $M_{K^P}$. So, $M_K=M_{K^P} F$ for some integer matrix $F$. $M_K,M_{K^P}$ must have the same $i_j$ or their columns would not span the same space. Since $M_{K},M_{K^P}$ have the same $i_j$, it follows that $F$ is upper triangular with positive diagonal entries: restrict $M_K,M_{K^P}$ to the rows $i_1,i_2,\ldots,i_m$, giving upper triangular matrices of size $m$-by-$m$. Call these matrices $A,B$ respectively. Then $A=BF$, so $F=B^{-1} A$. Since $B$ is upper triangular, so is $B^{-1}$ and so is $F$; since the diagonal entries of $A$ and $B$ are positive, the diagonal entries of $F$ are positive as well.
\end{proof} \end{lemma} \subsection{Counting Column Choices} \begin{definition} A lattice $L$ is {\it consistent} with a code generator matrix $G$ if every point $(x_1,...,x_n)$ in the lattice has the property that $(x_1 \mod p,\ldots,x_n \mod p)$ is in the code defined by $G$. A lattice generator matrix $M_L$ is consistent with a code generator matrix $G$ if the lattice generated by $M_L$ is consistent with $G$. \end{definition} We will use $a$ to label a column choice, $1\leq a \leq m$. We will construct lattices $K_a$ in terms of $K_{a-1}$ and $\vec v_a$. \begin{lemma} \label{volumelemma} Let $M_K$ be in Hermite normal form. Then, \begin{equation} {\rm vol}(K_{a})={\rm vol}(K_{a-1}) |\pi_{a-1}(\vec v_{a})|. \end{equation} \begin{proof} Immediate from the definition of volume. \end{proof} \end{lemma} Assume $K_{a-1}$ is primitive. Then, the next lemma gives a one-to-one correspondence between vectors $\vec v_a$ obeying {\it one} of the conditions needed for Hermite normal form (the condition Eq.~(\ref{Sformnew})) and vectors in a certain factor lattice. In lemma \ref{corresp2lemma} we consider the case that $K_{a-1}$ is not primitive. Note that there is an additional condition on $\vec v_a$, namely that its entry in the $i_a$-th row be positive, in order for the matrix $M_{K_a}$ to be in Hermite normal form. \begin{lemma} \label{corresplemma} Let the first $a-1$ columns of $M_K$ be given and assume that the first $a-1$ columns of $M_K$ are in Hermite normal form and assume that $K_{a-1}$ is a primitive sublattice of $\Gamma_0^n$. Then, there is a one-to-one correspondence between vectors $\vec v_{a}$ such that \begin{equation} \label{Sft} j<a \quad \rightarrow \quad 0 \leq (M_K)_{i_j,a} < (M_K)_{i_j,j} \end{equation} and points $\vec x$ of the lattice $\Gamma_0^{i_a}/\tilde K_{a-1}$, such that if $\vec x$ corresponds to $\vec v_a$ then $\pi_{a-1}(\vec v_a)=\vec x$. \begin{proof} We will show that for every $\vec x \in \Gamma_0^{i_a}/\tilde K_{a-1}$, there exists a unique $\vec v_a$ obeying Eq.~(\ref{Sft}) such that $\pi_{a-1}(\vec v_a)=\vec x$. This gives a map ${\cal F}$ from $\Gamma_0^{i_a}/\tilde K_{a-1}$ to vectors obeying Eq.~(\ref{Sft}). This map is one-to-one since distinct vectors $\vec x_1 \neq \vec x_2$ cannot both be the image of the same vector $\vec v_a$ under the map $\pi_{a-1}$. This map ${\cal F}$ is onto since any vector $\vec v_a$ obeying Eq.~(\ref{Sft}) is the image of $\pi_{a-1}(\vec v_a)$ under this map. First we show existence of some vector $\vec v_a$. Every vector $\vec x$ in $\Gamma_0^{i_a}/\tilde K_{a-1}$ is given by $\vec x=\pi_{a-1}(\vec y)$ for some $\vec y\in \Gamma_0^{i_a}$. For any such vector $\vec y$, we can add lattice vectors in $\tilde K_{a-1}$ so that Eq.~(\ref{Sft}) holds (i.e., set $\vec v_a$ equal to $\vec y$ plus some sum of lattice vectors; this can be done iteratively, so that it holds first for $j=a-1$, then $j=a-2$, and so on). Adding these lattice vectors does not change the image of the result under $\pi_{a-1}$. Now uniqueness. Suppose that $\pi_{a-1}(\vec y)=\pi_{a-1}(\vec z)$ for $\vec y,\vec z$ being two possible choices of $\vec v_a$ such that Eq.~(\ref{Sft}) is obeyed. Then, $\pi_{a-1}(\vec y-\vec z)=0$, so $\vec y-\vec z$ is in the span of $\tilde K_{a-1}$. Since $K_{a-1}$ is primitive so is $\tilde K_{a-1}$ and so $\vec y-\vec z$ is in $\tilde K_{a-1}$. Let $M_K(i,j)$ denote the submatrix of $M_K$ containing the first $i$ rows and the first $j$ columns, so that $M_K(i_a,a-1)$ is a lattice generating matrix for $\tilde K_{a-1}$.
So, $\vec y-\vec z=M_K(i_a,a-1) \vec u$, where $\vec u\in \Gamma_0^{a-1}$. Then, Eq.~(\ref{Sft}) requires that $\vec u=0$. This follows inductively: if the last entry of $\vec u$ is nonzero, then it is not possible for both $\vec y$ and $\vec z$ to obey Eq.~(\ref{Sft}) for $j=a-1$; to see this, note that then the entries of $\vec y,\vec z$ in the $i_{a-1}$-th row must differ by a nonzero integer multiple of $(M_K)_{i_{a-1},a-1}$ and so they cannot both fall in the range $0,1,\ldots,(M_K)_{i_{a-1},a-1}-1$. So, the last entry of $\vec u$ is zero, $\vec y-\vec z$ is an element of the lattice generated by $M_K(i_a,a-2)$, and so $\vec y-\vec z=M_K(i_a,a-2) \vec u'$ for $\vec u'\in \Gamma_0^{a-2}$. Again, the last entry of $\vec u'$ must equal zero so that Eq.~(\ref{Sft}) will be obeyed for $j=a-2$. We continue inductively for $j=a-3,\ldots$. \end{proof} \end{lemma} The next lemma is similar to the previous one except that we no longer assume that $K_{a-1}$ is primitive. \begin{lemma} \label{corresp2lemma} Let the first $a-1$ columns of $M_K$ be given and assume that the first $a-1$ columns of $M_K$ are in Hermite normal form. Let $M_K(i,j)$ denote the submatrix of $M_K$ containing the first $i$ rows and the first $j$ columns, so that $M_{\tilde K_{a-1}}=M_K(i_a,a-1)$ is a lattice generating matrix for $\tilde K_{a-1}$. Use lemma \ref{factorlemma} to write $$M_{\tilde K_{a-1}}=M_{{\tilde K^P}_{a-1}} F.$$ Then, the possible choices of $\vec v_{a}$ such that \begin{equation} \label{Sft2} j<a \quad \rightarrow \quad 0 \leq (M_K)_{i_j,a} < (M_K)_{i_j,j} \end{equation} are in one-to-one correspondence with choices of tuples $(\vec x,f_1,\ldots,f_{a-1})$, where $\vec x$ is a point in $\Gamma_0^{i_a}/\tilde K^P_{a-1}$ and $f_1,\ldots,f_{a-1}$ are integers obeying $0 \leq f_i < F_{i,i}$, such that if $(\vec x,f_1,\ldots,f_{a-1})$ corresponds to $\vec v_a$ then $\pi_{a-1}(\vec v_a)=\vec x$. Thus, there are ${\rm det}(F)$ distinct vectors $\vec v_a$ corresponding to each $\vec x$. \begin{proof} We will show that for every $\vec x \in \Gamma_0^{i_a}/\tilde K^P_{a-1}$, there exist ${\rm det}(F)$ distinct vectors $\vec v_a$ obeying Eq.~(\ref{Sft2}) such that $\pi_{a-1}(\vec v_a)=\vec x$. These ${\rm det}(F)$ vectors will be labelled by $f_1,\ldots,f_{a-1}$. First we show existence. Every vector $\vec x$ in $\Gamma_0^{i_a}/\tilde K^P_{a-1}$ is given by $\vec x=\pi_{a-1}(\vec y)$ for some $\vec y\in \Gamma_0^{i_a}$. For any such vector $\vec y$, we can add lattice vectors in $\tilde K_{a-1}$ so that Eq.~(\ref{Sft2}) will hold (this can be done iteratively, so that it holds first for $j=a-1$, then $j=a-2$, and so on). Adding these lattice vectors does not change the image of the result under $\pi_{a-1}$. Now, for each $\vec x$, let $\vec z$ be some fixed vector such that Eq.~(\ref{Sft2}) holds for $\vec v_a=\vec z$ and such that $\vec x=\pi_{a-1}(\vec z)$. Suppose that $\pi_{a-1}(\vec y)=\pi_{a-1}(\vec z)$ for $\vec y$ some other possible choice of $\vec v_a$ such that Eq.~(\ref{Sft2}) is obeyed. We count the number of possible choices of $\vec y$. Then, $\pi_{a-1}(\vec y-\vec z)=0$, so $\vec y-\vec z$ is in the span of $\tilde K_{a-1}$. Since $\tilde K^P_{a-1}$ is primitive, $\vec y-\vec z=M_{{\tilde K^P}_{a-1}} \vec u$, where $\vec u\in \Gamma_0^{a-1}$. There are $F_{a-1,a-1}$ possible choices for the $(a-1)$-th entry of $\vec u$. To see this, note that $\vec y$ and $\vec z$ both obey Eq.~(\ref{Sft2}) for $j=a-1$. For $j=a-1$, this equation gives a constraint that the entry of $\vec y$ in the $i_{a-1}$-th row must fall in the range $0,\ldots,(M_K)_{i_{a-1},a-1}-1$.
The entry of $\vec y$ in the $i_{a-1}$-th row is determined by the $(a-1)$-th entry of $\vec u$, and shifting that entry of $\vec u$ by one shifts the entry of $\vec y$ in the $i_{a-1}$-th row by $(M_{\tilde K^P_{a-1}})_{i_{a-1},a-1}$. We have $(M_K)_{i_{a-1},a-1}=(M_{\tilde K^P_{a-1}})_{i_{a-1},a-1} F_{a-1,a-1}$ so that there are $F_{a-1,a-1}$ possible choices. Then, given this choice of the $(a-1)$-th entry of $\vec u$, there are $F_{a-2,a-2}$ possible choices for the $(a-2)$-th entry of $\vec u$, and so on. \end{proof} \end{lemma} \begin{lemma} \label{colcountlemma} Let $M_K$ be a matrix in Hermite normal form which is a lattice generating matrix for a rank-$m$ integral lattice $K$ in $n$ dimensions. Let $K_{a-1}$ be given and let $r$ be a real number. Let $C(r,K_{a-1})$ denote the number of choices of $K_a$ such that \begin{equation} {\rm vol}(K_{a}) \leq r \,{\rm vol}(K_{a-1}). \end{equation} Then, \begin{equation} C(r,K_{a-1}) \leq {\rm vol}(K_{a-1}) V_{i_a-a+1}(r+\sqrt{i_a-a+1}). \end{equation} If $r<1$ then $C(r,K_{a-1})=0$. \begin{proof} Let $\vec v_a$ be as defined above. By lemma \ref{volumelemma}, \begin{equation} {\rm vol}(K_a)={\rm vol}(K_{a-1}) |\pi_{a-1}(\vec v_a)|, \end{equation} so $|\pi_{a-1}(\vec v_a)| \leq r$. By Eq.~(\ref{pos}), the entry of $\vec v_a$ in the $i_a$-th row is $\geq 1$, and since all vectors in $\tilde K_{a-1}$ vanish in that row, we have $|\pi_{a-1}(\vec v_a)|\geq 1$, so indeed $C(r,K_{a-1})=0$ for $r<1$. By lemma \ref{corresp2lemma}, the vector $\vec v_a$ is in one-to-one correspondence with a tuple $(\vec x,f_1,\ldots,f_{a-1})$ where $\vec x$ is a vector in the lattice $\Gamma_0^{i_a}/\tilde K^P_{a-1}$. By lemma \ref{diamboundlemma}, the lattice $\Gamma_0^{i_a}/\tilde K^P_{a-1}$ has the diameter of its Voronoi cells bounded by $\sqrt{i_a-a+1}$. So, for given $\Gamma_0^{i_a}/\tilde K^P_{a-1}$ and given $r$, the number of possible choices of $\vec x$ such that $|\pi_{a-1}(\vec v_a)| \leq r$ is bounded by \begin{equation} N(\Gamma_0^{i_a}/\tilde K^P_{a-1},0,r) \leq \frac{1}{{\rm vol}(\Gamma_0^{i_a}/\tilde K^P_{a-1})} V_{i_a-a+1}(r+\sqrt{i_a-a+1}). \end{equation} So, by Eq.~(\ref{volinvert}), \begin{equation} N(\Gamma_0^{i_a}/\tilde K^P_{a-1},0,r) \leq {\rm vol}(K^P_{a-1}) V_{i_a-a+1}(r+\sqrt{i_a-a+1}). \end{equation} Factorize $M_K(i_a,a-1)=M_{{\tilde K^P}_{a-1}} F$, as in lemma \ref{corresp2lemma}. The number of possible choices of $f_1,\ldots,f_{a-1}$ is equal to ${\rm det}(F)={\rm vol}(K_{a-1})/{\rm vol}(K^P_{a-1})$. So, the total number of choices of $K_a$ is bounded by \begin{equation} {\rm det}(F) {\rm vol}(K^P_{a-1}) V_{i_a-a+1}(r+\sqrt{i_a-a+1})={\rm vol}(K_{a-1}) V_{i_a-a+1}(r+\sqrt{i_a-a+1}), \end{equation} as claimed. \end{proof} \end{lemma} \subsection{First Moment Bound} \begin{lemma} \label{fmb} Let $G$ be an $n$-by-$k$ code generator matrix for a code, chosen from the ensemble defined previously (entries chosen independently and uniformly from ${\mathbb F}_p$). Let $M_K$ be an $n$-by-$k$ lattice generator matrix. Let $K_{a-1}$ be given and assume the first $a-1$ columns of $M_K$ are in Hermite normal form. Let $Pr(K_{a-1},r)$ denote the probability that, conditioned on $K_{a-1}$ being consistent with $G$, there exists a choice of $\vec v_a$ such that $K_a$ is consistent with $G$ and such that the first $a$ columns of $M_K$ are in Hermite normal form and such that \begin{equation} {\rm vol}(K_{a}) \leq r\, {\rm vol}(K_{a-1}). \end{equation} Then, for $r<1$, $Pr(K_{a-1},r)=0$, and for $r< p$, \begin{equation} Pr(K_{a-1},r) \leq p^{-(n-k)} {\rm vol}(K_{a-1}) V_{i_a-a+1}(r+\sqrt{i_a-a+1}).
\end{equation} \begin{proof} By lemma \ref{colcountlemma}, there are indeed no choices of $\vec v_a$ if $r<1$. If $r< p$, then $0<(\vec v_a)_{i_a}<p$ so $(\vec v_a)_{i_a} \neq 0 \mod p$. So, the $a$-th column of $M_K$ is not in the span of the first $a-1$ columns of $M_K$ modulo $p$. So, even though we have conditioned on $K_{a-1}$ being consistent with $G$, the probability that a given choice of $\vec v_a$ is consistent with $G$ is bounded by $p^{-(n-k)}$. (The probability is $p^{-(n-k)}$ if we condition on $G$ being non-degenerate and smaller if $G$ may be degenerate.) So, by lemma \ref{colcountlemma}, the average number of choices of $\vec v_a$ consistent with $G$ is bounded by $p^{-(n-k)} {\rm vol}(K_{a-1}) V_{i_a-a+1}(r+\sqrt{i_a-a+1})$. \end{proof} \end{lemma} The next theorem estimates the probability that, for a randomly chosen code generator matrix, there is a rank-$m$ lattice $K$ of small volume which is consistent with that matrix. The bound becomes effective for volume smaller than $(cp)^{{\rm min}(m,n-k)}$ with $c<1/\sqrt{2\pi e}$. \begin{theorem} \label{countingthm} Let $P_{lat}(H,p,n,m)$ denote the probability that for a random code generator matrix $G$ for a code over ${\mathbb F}_p^n$ there is a rank-$m$ lattice $K$ consistent with $G$ such that ${\rm vol}(K) \leq H$. For any $p$, for any $c$ with $0<c<1$, and for any real number $x>\sqrt{2\pi e}$, for sufficiently large $n-m$, \begin{equation} P_{lat}((cp)^{{\rm min}(m,n-k)},p,n,m) \leq m c^{m} x^{n-m+1}. \end{equation} The required $n-m$ is quadratic in $p(x-\sqrt{2\pi e})^{-1}$. \begin{proof} Note that if there is a rank-$m$ lattice $K$ consistent with the code generator matrix, then the lattices $K_1,\ldots,K_{m-1}$ constructed above have ranks $1,\ldots,m-1$ respectively and are also consistent with the code generator matrix and have ${\rm vol}(K_a) \leq {\rm vol}(K)$. So, it suffices to consider the case $m\leq n-k$ (if $m>n-k$, then consider the lattice $K_{n-k}$ instead). For $M_K$ in Hermite normal form, since $i_1<i_2<\ldots<i_m\leq n$, we have $i_a \leq n-m+a$ and so $i_a-a+1 \leq n-m+1$. We use the bound (the inequality on the second line holds for all sufficiently large $n-m$) \begin{eqnarray} \label{vineq} V_{n-m+1}(r+\sqrt{n-m+1})&=&\frac{\pi^{\frac{n-m+1}{2}}}{\Gamma(\frac{n-m+1}{2}+1)} (r+\sqrt{n-m+1})^{n-m+1} \\ \nonumber &\leq &\Bigl( \frac{2\pi e}{n-m+1}\Bigr)^{\frac{n-m+1}{2}} (r+\sqrt{n-m+1})^{n-m+1} \\ \nonumber &=& \Bigl(\frac{r\sqrt{2\pi e}}{\sqrt{n-m+1}}+\sqrt{2 \pi e} \Bigr)^{n-m+1}. \end{eqnarray} Recall that $c$ is a real number with $0<c<1$. By lemma \ref{fmb} and Eq.~(\ref{vineq}), given $K_{a-1}$, if ${\rm vol}(K_{a-1}) \leq (cp)^m$, we have \begin{eqnarray} Pr(K_{a-1},r) & \leq & c^m p^{m-(n-k)} \Bigl(\frac{r\sqrt{2\pi e}}{\sqrt{n-m+1}}+\sqrt{2 \pi e} \Bigr)^{n-m+1} \\ \nonumber & \leq & c^m \Bigl(\frac{r\sqrt{2\pi e}}{\sqrt{n-m+1}}+\sqrt{2 \pi e} \Bigr)^{n-m+1}. \end{eqnarray} For $r<p$, this is bounded by $c^{m} \Bigl(\frac{p \sqrt{2\pi e}}{\sqrt{n-m+1}}+\sqrt{2 \pi e} \Bigr)^{n-m+1}$. For any $p$, for any real number $x>\sqrt{2\pi e}$, for sufficiently large $n-m$, this is bounded by $c^{m} x^{n-m+1}$. (The required $n-m$ is quadratic in $p(x-\sqrt{2\pi e})^{-1}$.) Suppose that ${\rm vol}(K) \leq (cp)^{m}$ for some $c<1$. Then, ${\rm vol}(K_a)\leq (cp)^{m}$ for all $a$ and for some $a$ we have ${\rm vol}(K_a)/{\rm vol}(K_{a-1})<p$.
However, for ${\rm vol}(K_a) \leq (cp)^{m}$, the above calculation bounds the probability for given $a$ that there is a choice of $K_a$ such that ${\rm vol}(K_a)/{\rm vol}(K_{a-1})<p$ by $c^m x^{n-m+1}$ for all sufficiently large $n-m$. By a union bound, the probability that for some $a$ there is a choice of $K_a$ such that ${\rm vol}(K_a)/{\rm vol}(K_{a-1})<p$ is bounded by $m c^m x^{n-m+1}$ for all sufficiently large $n-m$. So, $P_{lat}((cp)^{m},p,n,m) \leq m c^m x^{n-m+1}$ for all sufficiently large $n-m$. \end{proof} \end{theorem} This implies the following corollary for the Rankin invariant: \begin{corollary} \label{rankincor} For any $p,k$, for all sufficiently large $n$ at fixed ratio $m/n$, for any $c<1/\sqrt{2\pi e}$, with high probability we have \begin{equation} \gamma_{n,m}(L_0)\geq (cp)^{2{\rm min}(m,n-k)} p^{-2m(n-k)/n}. \end{equation} \end{corollary} (Recall that with high probability $G$ is non-degenerate so $L_0$ is rank $n$.) We remark that in the bounds of theorem \ref{countingthm}, the bounds on the constant $x$ may not be tight, especially for small $m$. One possible way to tighten the bounds is to use the fact that if ${\rm vol}(K)<p^{m-z}$ for some integer $z>0$ then there must be at least $z$ different $a$ such that ${\rm vol}(K_a)/{\rm vol}(K_{a-1})<p$; in the proof above we only used that there was at least one such $a$. We remark also that, up to the constant $c$, the value of the Rankin invariant at $m=k=n/2$ is optimal for an integer lattice; i.e., the dependence on $p$ is optimal. The reason is that it implies that an $n/2$-dimensional sublattice of $L_0$ has the same volume (again, up to factors of $c^{m}$) as $L_0$ does. It is also worth comparing the value of the Rankin invariant that we find to the Rankin invariant for random lattices (from a different ensemble) in Ref.~\onlinecite{blockwise}. The Rankin constant $\gamma_{n,m}$ is defined to be the maximum of $\gamma_{n,m}(L)$ over all lattices $L$. Those random lattices in Ref.~\onlinecite{blockwise} were used to lower bound the Rankin constant $\gamma_{n,n/2}$ by $\gamma_{n,n/2} \geq (\frac{n}{12})^{n/4}$. Since we need to take $n \sim p^2$ for the bounds of theorem \ref{countingthm} to be effective, if we choose $m=k=n/2$ and $p\sim \sqrt{n}$ we find that with high probability $\gamma_{n,n/2}(L_0) \geq ({\rm const.} \times n)^{n/4}$. Thus, we find the same leading behavior $n^{n/4}$, with the Rankin invariants differing only by factors ${\rm const.}^{n}$. \section{Volume of Oriented Systole} \label{calibrationsec} In this section, we consider a weaker conjecture than conjecture \ref{conj1}. Throughout this section, we consider the case of homology using integer coefficients, rather than ${\mathbb Z}_2$ coefficients. In this setting, there is a general method, called ``calibration"~\cite{calibration}, for lower bounding weights. We will show that this method gives an effective lower bound for homology classes which have a particular form, which we call ``split", but we will show that it does not give a useful lower bound in general. The reason for this is related to the existence of short vectors in the exterior $q$-th power of $L_0$. Given a rank-$n$ lattice $L$, we write its $m$-th exterior power as $\wedge^m L$. 
This exterior power is a lattice of vectors in ${n \choose m}$ dimensions; the vectors in this lattice are linear combinations (with integer coefficients) of vectors $v_1 \wedge v_2 \wedge \ldots \wedge v_m$, where $v_i \in L$ and the exterior product is anti-symmetric under interchange: $v_1 \wedge v_2 = - v_2 \wedge v_1$. \begin{definition} A vector $v$ in $\wedge^m L$ is called ``split" if $v=x_1 \wedge \ldots \wedge x_m$ for $x_1,\ldots, x_m \in L$. \end{definition} The $q$-th homology classes of the torus $T^n$ are in one-to-one correspondence with vectors in $\wedge^q {\mathbb Z}^n$. For the torus ${\mathbb R}^n/L_0$ that we consider, it will be more convenient to regard the classes as being in one-to-one correspondence with vectors in $\wedge^q L_0$. That is, the $q$-th homology class represented by a hyperplane which is a span of $q$ basis vectors will correspond to the vector which is the exterior product of these $q$ basis vectors. The lattice $\wedge^m L$ inherits an inner product: $$(x_1 \wedge \ldots \wedge x_m) \cdot (y_1 \wedge \ldots \wedge y_m)={\rm det}(S),$$ where $S$ has matrix elements $S_{i,j}=x_i \cdot y_j$. We write this norm as $|X|$, where $X\in \wedge^m L_0$. Calibration allows one to lower bound the volume of a representative of a homology class in $\wedge^q L_0$ using this inner product. We first explain this lower bound in the split case. The arguments are not new. \begin{lemma} \label{calsplit} Let ${\rm vol}(v_1,\ldots,v_q)\neq 0$. Then, the minimum volume of any closed chain (either a sum of $q$-faces of the unit hypercubes used in the cubulation or more generally an arbitrary sum of simplices) representing homology class $X=v_1 \wedge \ldots \wedge v_q$ is greater than or equal to $|v_1 \wedge \ldots \wedge v_q|$. \begin{proof} Let us write $v \cdot d \vec x$ to denote a differential $1$-form $\sum_i (v)_i d x^i$, where $i=1,\ldots,n$ are orthogonal basis directions in Euclidean space and $(v)_i$ are components of $v$. Consider the differential $q$-form $\omega=(v_1 \cdot d\vec x) \wedge (v_2 \cdot d \vec x) \wedge \ldots \wedge (v_q \cdot d \vec x)$. Let $S$ denote the hyperplane spanned by vectors $v_1,\ldots,v_q$ (the hyperplane is oriented, so the order of vectors matters). We have $\int_S \omega=|X|^2$. Further, for any chain $C$ in the same homology class as $S$, we have $\int_C \omega=\int_S \omega=|X|^2$, where the integral over $C$ is given by writing $C$ as a sum of $q$-faces of the unit hypercubes and integrating $\omega$ over each face. (Indeed, one can also consider more general $C$, such as sums of arbitrary simplices, and the same result holds). For a $q$-face (or indeed any sum of $q$-dimensional simplices), the integral of $\omega$ over that face is bounded by $|X|$ times the volume of the face. Hence, the volume of $C$ must be at least equal to $(\int_C \omega)/|X|=|X|$. \end{proof} \end{lemma} Now we consider the nonsplit case. In contrast to the split case where we were able to ``calibrate" the hyperplane $S$ (find a differential form assuming maximum value on that hyperplane), we might not be able to calibrate nonsplit homology classes. However, we can still obtain a lower bound. \begin{lemma} \label{calgen} Let $X\in \wedge^q L$, $X \neq 0$. Then, the minimum volume of any closed chain representing homology class $X$ is lower bounded by $|X|$. \begin{proof} Write $X=\sum_a X_a$, where $X_a$ are split vectors. 
For each $X_a=v_1^a \wedge \ldots \wedge v_q^a$, define a differential $q$-form $\omega_a=(v_1^a \cdot d\vec x) \wedge \ldots \wedge (v_q^a \cdot d \vec x)$. Let $\omega=\sum_a \omega_a$. Let $S_a$ denote the hyperplane spanned by vectors $v^a_1,\ldots,v_q^a$. Let $S$ denote the union of hyperplanes $S_a$. We have $\int_{S_a} \omega_b=(X_a,X_b)$. Hence, $\int_S \omega=|X|^2$. We now consider the maximum of the integral of $\omega$ over a $q$-face or $q$-dimensional simplex of unit volume. This is equal to $${\rm max}_{V \; {\rm split}, \; |V|=1} (V,X),$$ where we take the maximum over all split vectors $V\in \wedge^q {\mathbb R}^n$, with $V$ not necessarily in $\wedge ^q L$; i.e., $V=v_1 \wedge \ldots \wedge v_q$ for {\it arbitrary} $v_1,\ldots, v_q$, with $v_1,\ldots, v_q$ not necessarily in the lattice $L$ (i.e., we are upper bounding the integral over a unit volume square in the hyperplane spanned by $v_1,\ldots, v_q$). If we relax the requirement that $V$ be split, we have ${\rm max}_{V} (V,X)=|X|$. The restriction to split $V$ can only reduce the maximum, so the maximum over split $V$ is at most $|X|$. So, as in lemma \ref{calsplit}, since the integral of $\omega$ over any chain representing the homology class $X$ must be equal to $\int_S \omega=|X|^2$, the volume of such a chain must be at least $|X|$. \end{proof} \end{lemma} One may wonder whether the bound in lemma \ref{calgen} can be significantly improved if we do not relax the requirement that $V$ be split. Of course, if $X$ is split, then ${\rm max}_{V \; {\rm split}, \; |V|=1} (V,X) = |X|$ and the maximum is achieved for $V=X/|X|$. However, for $X$ not split, the maximum might be smaller and so the lower bound on the volume would be correspondingly larger: we can lower bound the volume of a closed chain representing homology class $X$ by $|X|^2/{\rm max}_{V \; {\rm split}, \; |V|=1} (V,X)$. Unfortunately, this at best only leads to a small improvement in the bound. We claim that \begin{equation} \label{splitV} {\rm max}_{V \; {\rm split}, \; |V|=1} (V,X) \geq |X|/\sqrt{{n \choose q}}, \end{equation} so that at best we would lower bound the volume by $\sqrt{{n \choose q}} |X|$, and since $\sqrt{{n \choose q}}< 2^{n/2}$, this leads to only a small improvement (recall that there are $N=p^{n/2}$ qubits and we choose $p>>1$). To see Eq.~(\ref{splitV}), consider the orthogonal basis for $\wedge^q {\mathbb R}^n$ of vectors $x_1 \wedge \ldots \wedge x_q$ where $x_1,\ldots, x_q$ are chosen from the $n$ different coordinate directions. These basis vectors are all split. Since $\wedge^q {\mathbb R}^n$ is ${n \choose q}$-dimensional, there must be some basis vector $V$ such that $|(V,X)| \geq |X|/\sqrt{{n \choose q}}$. Using this vector $V$ (or its negation if the inner product $(V,X)$ is negative) in the maximum gives Eq.~(\ref{splitV}). The Rankin invariant is the minimal value of the norm $|X|$ over nonzero split vectors. Thus, the results on the Rankin invariant give a lower bound on the volume of representatives of split homology classes. However, in Ref.~\onlinecite{coulangeon}, it was shown that for certain lattices $L$ the shortest nonzero vector in $\wedge^m L$ may be shorter than the Rankin invariant. Interestingly, the lattices we consider here provide another example where this occurs; in fact this occurs for any lattice with sufficiently large Rankin invariant. \begin{lemma} \label{shortsplit} Let $L$ be a rank-$n$ lattice. 
Then, the shortest nonzero vector in $\wedge^m L$ has norm at most $\sqrt{\vphantom{I} \gamma_{{n \choose m}} }{\rm vol}(L)^{m/n}$, where $\gamma_{{n \choose m}}$ denotes Hermite's constant in dimension ${n \choose m}$. Hence, if $\gamma_{n,m}(L) \geq \gamma_{{n \choose m}}$, then the shortest vector is not split. \begin{proof} We have ${\rm vol}(\wedge^m L)={\rm vol}(L)^{{n-1} \choose {m-1}}$ by Proposition 1.10.4 of Ref.~\onlinecite{perfect}. The lattice $\wedge^m L$ has rank $r={n \choose m}$, and so the shortest nonzero vector in $\wedge^m L$ has length at most $\sqrt{\gamma_{r}} {\rm vol}(\wedge^m L)^{1/r}$, where $\gamma_r$ is Hermite's constant. So, the shortest nonzero vector in $\wedge^m L$ has length at most \begin{equation} \sqrt{\gamma_r} {\rm vol}(L)^{{{n-1} \choose {m-1}}/{n \choose m}}=\sqrt{\gamma_r} {\rm vol}(L)^{m/n}. \end{equation} \end{proof} \end{lemma} For all $r$, we have $\gamma_r\leq 1+r/4$, with an asymptotic behavior $\gamma_r \lesssim \frac{2r}{\pi e}$\cite{MH}. So, $\sqrt{\vphantom{I}\gamma_{{n \choose m}}} \leq \sqrt{1+{n \choose m}/4}$. So, lemma \ref{shortsplit} has an interesting interpretation for the application to quantum codes. If the bound in lemma \ref{calgen} is saturated so that the least volume cycle representing a homology class has volume $|X|$, then we find that the code has roughly square-root distance. Thus, conjecture \ref{conj1} implies that for some homology classes, the bound of lemma \ref{calgen} is far from saturated. The possible improvement of Eq.~(\ref{splitV}) leads to only a small improvement here (though, it is possible that if the possible improvement of Eq.~(\ref{splitV}) holds for the homology classes with smallest $|X|$ and if the bound of lemma \ref{shortsplit} is saturated then one might be able to prove a slightly above square-root distance for integer homology). \section{Quantum Locally Testable Codes from High-Dimensional Constructions} \label{qltcsec} In this section, we give a construction of quantum codes which are ``locally testable"\cite{qltc} using high-dimensional constructions. The construction uses a different topology than above; the similarity in the constructions is simply that in both cases we consider a family of codes derived from manifolds of varying dimension, with the number of qubits in the code depending exponentially on the dimension of the manifold. Let us write ${\rm wt}(O)$ to indicate the weight of an operator $O$. Similarly, given a vector $v$ (in one of the vector spaces defining the chain complexes), we let ${\rm wt}(v)$ denote the number of nonzero entries in $v$. Given a CSS stabilizer code defined from a chain complex $\ldots {\mathcal C}_{q+1} \stackrel{\partial_{q+1}}{\rightarrow} {\mathcal C}_q \stackrel{\partial_q}{\rightarrow} {\mathcal C}_{q-1} \ldots$, with the qudits associated with $q$-cells and the $Z$-type and $X$-type stabilizers associated with $(q+1)$-cells and $(q-1)$-cells, respectively, we define soundness parameters $\epsilon_X(w),\epsilon_Z(w)$ as follows: \begin{definition} Define \begin{equation} \epsilon_Z(w)={\rm min}_{v\in {\mathcal C}_q,{\rm wt}(v)=w, \partial_q v \neq 0} \Bigl( {\rm max}_{u \in {\mathcal C}_q, \partial_q u=0} \frac{{\rm wt}(\partial v)}{{\rm wt}(v+u)}\Bigr). \end{equation} Define $\epsilon_X(w)$ similarly, with $\partial_q$ replaced with $\partial_{q+1}^T$, where the superscript $T$ denotes transpose. 
\end{definition} Equivalently, consider the minimum over all $Z$-type operators $O$, such that $O$ has weight $w$ and such that $O$ does not commute with at least one stabilizer, of the following quantity: take the maximum, over all $Z$-type operators $P$ which commute with all stabilizers, of the ratio of the number of stabilizers which do not commute with $O$ to the weight of $O+P$. This minimum is $\epsilon_Z(w)$. It is unclear whether or not families of codes exist which have distance which is $\Omega(1)$ and stabilizer weight ${\mathcal O}(1)$ and which have $\epsilon_{X,Z}(w)$ bounded away from zero for all $w$. However, the codes of Ref.~\onlinecite{tz} have distance $\Theta(\sqrt{N})$, stabilizer weight ${\mathcal O}(1)$ and have $\epsilon_{X,Z}(w)$ bounded away from zero for $w\lesssim \sqrt{N}$, as shown in Ref.~\onlinecite{tzqltc}. Here we give a simple construction of a family of qubit codes with $2$ encoded qubits and with distance $\Theta(\sqrt{N})$, $\epsilon_{X,Z}(w)$ only polylogarithmically small for all $w$, and with {\it logarithmic} weight stabilizers. We warm up with a construction of a qubit code family with no encoded qubits (and hence the notion of distance is meaningless for this code) but with $\epsilon_{X,Z}(w)$ bounded away from zero for all $w$ and with logarithmic weight stabilizers; we call this the ``simplex code". We then give the full construction, which is based on a product of hyperspheres. \subsection{Simplex Code} Of course, with no encoded qubits, there are some fairly trivial constructions of codes with $\epsilon_{X,Z}$ strictly bounded away from zero. For example, one can take a code with $N$ qubits and stabilizers $Z_1,Z_2,\ldots,Z_N$. Thus, every product of $Z$ operators commutes with all stabilizers (and so $\epsilon_Z(w)$ is a minimum over an empty set), while clearly $\epsilon_X(w)=1$ for all $w$. However, the simplex code construction that we give obeys Poincar\'e duality and has an entangled ground state. The code we consider is obtained by taking a toric code on an $n$-dimensional sphere, with the degrees of freedom on $q$-cells for $q =n/2$. The exact value of $q$ is not very important; the important thing is that $q/n$ is neither close to $0$ nor close to $1$ so that the number of $q$-cells will be exponential in $n$. However, the case $q=n/2$ is the self-dual case so this makes the proofs slightly simpler as we need to consider only one type of stabilizer. The cellulation of the $n$-sphere that we use is to take the boundary of an $(n+1)$-dimensional simplex. We label the $0$-cells by integers $1,\ldots,n+2$. For $0 \leq r \leq n$, there are ${n+2 \choose r+1}$ distinct $r$-cells, labelled by subsets of $\Lambda\equiv \{1,\ldots,n+2\}$ with $r+1$ elements. We use qubits so the vector spaces are all over ${\mathbb F}_2$. For $1 \leq r \leq n$, the boundary operator $\partial_r$ acting on an $r$-cell labelled by some $(r+1)$-element set $S \subset \Lambda$ gives the sum of $r+1$ different $(r-1)$-cells, labelled by the distinct $r$-element subsets of $S$. For example, for $n\geq 2$, $\partial_2 \{1,2,3\}=\{1,2\}+\{1,3\}+\{2,3\}$. We set $\partial_0=0$. One may verify that $\partial_{r-1} \partial_r=0$ for all $r$. For $q=n/2$, there are $N={n+2 \choose n/2+1}$ qubits, so $N$ is exponentially large in $n$. Remark: in previous sections, the number of qubits also had an exponential factor $p^{n-k}$ which, for large $p$, was the dominant exponential scaling; in this subsection, we do not have such a factor. 
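Since the boundary map of the simplex code is purely combinatorial, its basic properties are easy to check numerically. The following short Python sketch (an illustration added here, not part of the original construction) represents each $r$-cell by the $(r+1)$-element set labelling it and an ${\mathbb F}_2$ chain by a set of cells, builds the boundary map, verifies $\partial_{r-1}\partial_r=0$ on every $q$-cell in a small example, and counts the qubits $N={n+2 \choose n/2+1}$.
\begin{verbatim}
from itertools import combinations

def boundary(cell):
    # Boundary of an r-cell (an (r+1)-element frozenset): the formal sum,
    # over F_2, of its r-element subsets, represented as a set of cells.
    return {cell - {v} for v in cell}

def boundary_of_chain(chain):
    # Apply the boundary map to an F_2 chain (a set of cells); coefficients
    # are accumulated mod 2 via symmetric difference.
    out = set()
    for cell in chain:
        out ^= boundary(cell)
    return out

n = 4                    # dimension of the sphere (boundary of a 5-simplex)
Lam = range(1, n + 3)    # 0-cells are labelled 1, ..., n+2
q = n // 2

# Example from the text: the boundary of the 2-cell {1,2,3}
print(boundary(frozenset({1, 2, 3})))

# Check that the composition of two boundary maps vanishes on every q-cell,
# and count the qubits N = binomial(n+2, n/2+1).
q_cells = [frozenset(S) for S in combinations(Lam, q + 1)]
assert all(boundary_of_chain(boundary(c)) == set() for c in q_cells)
print("number of qubits N =", len(q_cells))
\end{verbatim}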
Each qubit is acted on by $q+1$ stabilizers of each type (as each $q$-cell has $q+1$ cells in its boundary and, for $q=n/2$, also $q+1$ cells in its coboundary) and each stabilizer acts on $q+2$ different qubits (as each $(q+1)$-cell has $q+2$ cells in its boundary and each $(q-1)$-cell has $q+2$ cells in its coboundary). Hence, the weight is indeed logarithmic in $N$, $w=(1/2+o(1)) \cdot \log_2(N)$. Finally, we show soundness. First, let us introduce notation. \begin{definition} Given an $r$-cell $\sigma$ labelled by some set $S$ and a set $T\subset \Lambda$, we define $\sigma \cup T$ to equal $0$ if $S\cap T\neq \emptyset$ and otherwise $\sigma \cup T$ is the $(r+|T|)$-cell labelled by $S \cup T$. Given a vector $v \in {\mathcal C}_r$, we define $v \cup T$ by linearity. We have $v \cup T \in {\mathcal C}_{r+|T|}$, and the coefficient of $v \cup T$ corresponding to an $(r+|T|)$-cell labelled by a set $U$ is equal to the coefficient of $v$ corresponding to the $r$-cell labelled by $U\setminus T$ if $T \subset U$ and is equal to $0$ if $T \not\subset U$. \end{definition} \begin{lemma} For the simplex code, for all $w$, $\epsilon_X(w)=\epsilon_Z(w) \geq 1$. \begin{proof} Consider any $v\in {\mathcal C}_q$ with $\partial_q v \neq 0$. Set $x=(\partial_q v) \cup \{1\}$. Then, one may verify that $\partial_q x = \partial_q v$ (and hence, setting $u=x-v$, $\partial_q u=0$) and that ${\rm wt}(x) \leq {\rm wt}(\partial_q v)$. \end{proof} \end{lemma} The proof of soundness above has a very simple geometric interpretation. We take the boundary $\partial_q v$ and shrink it to a point (arbitrarily choosing the vertex $\{1\}$ as the point that we shrink it to). \subsection{Hypersphere Product Code} The above construction had constant soundness, but had no encoded qubits. We now give a different construction with $2$ encoded qubits and distance $\Theta(\sqrt{N})$ and inverse polylogarithmic soundness. We now consider the toric code on a product of spheres, $S^n \times S^n$. We pick an integer $p \geq 1$ ($p$ need not be prime); $p$ will be chosen to equal $\log(N)$ below in order to achieve square-root distance. We choose a cellulation of $S^n$ as follows: consider an $(n+1)$-dimensional hypercube of side length $p$ on each side (we call this the ``large" hypercube). Cellulate that large hypercube using hypercubes of side length $1$ on each side (we call these the ``small" hypercubes), so that there are $p^{n+1}$ small hypercubes in the cellulation. Then, take the boundary of the large hypercube to get a cellulation of $S^n$. A small $(n+1)$-dimensional hypercube has ${n+1 \choose r} 2^{n+1-r}$ different $r$-cells in its boundary (each $r$-cell is a product of $1$-cells in $r$ out of the $n+1$ directions and then for each of the remaining directions there are $2$ possible choices of $0$-cells). The number of $r$-cells in the cellulation of the large hypercube is ${n+1 \choose r} (p+1)^{n+1-r} p^r$. To see this, assign coordinates $[0,p]$ for each side of the large hypercube. Then, each $r$-cell is a product of $1$-cells in $r$ out of the $n+1$ directions with $0$-cells in the remaining directions. The midpoints of the $1$-cells are at half-integer coordinates in the interval $[0,p]$ and so there are $p$ possible choices for each such coordinate. There are $p+1$ possible choices for the coordinates of each $0$-cell as these cells are at integer coordinates in the interval $[0,p]$. To determine the number of $r$-cells in the boundary of the large hypercube, restrict to the case that, in at least one of the directions in which the cell is a $0$-cell, the coordinate is $0$ or $p$. 
This gives a number of boundary $r$-cells equal to $${n+1 \choose r} (p+1)^{n+1-r} p^r \Bigl(1-(\frac{p+1-2}{p+1})^{n+1-r}\Bigr),$$ where the ratio in parentheses $\frac{p+1-2}{p+1}$ is the probability that for a random integer coordinate in the range $[0,p]$, the coordinate is not on the boundary $0$ or $p$. Thus, there are at most $2^{(1-o(1))\cdot n} p^n$ cells (if $n>>p$) and at least $2^{(1-o(1))\cdot n} p^{n-1}$ cells (if $n<<p$). We take the product of this cellulation with itself to get a cellulation of $S^n \times S^n$. The degrees of freedom will be on the $q$-cells for $q=n$, so that $N$ is again exponential in $n$. We have $\log_2(N) =2(1+\log_2(p)+o(1))\cdot n$ for $n>>p$ and $\log_2(N)=2(1+\log_2(p)+o(1))\cdot n - 2\log_2(p)$ for $n<<p$. We will take $p=\Theta(\log(N))$, $n=\Theta(\log(N)/\log(\log(N)))$. Each qubit is acted on by $2n$ stabilizers and each stabilizer acts on $2(n+1)$ qubits. Hence, the weight is logarithmic in $N$, $w=\Theta(\log(N)/\log(\log(N)))$. The number of encoded qubits is equal to $2$, as can be computed from the homology of $S^n \times S^n$ (by the K\"{u}nneth formula, ${\rm dim}\, H_i(S^n \times S^n;{\mathbb Z}_2)=2$ for $i=n$, ${\rm dim}\, H_i(S^n \times S^n;{\mathbb Z}_2)=1$ for $i=0,2n$, and $H_i(S^n \times S^n;{\mathbb Z}_2)=0$ otherwise). \begin{lemma} For the hypersphere product code, \begin{equation} d_X=d_Z=(p+1)^{n+1}\Bigl(1-(\frac{p+1-2}{p+1})^{n+1}\Bigr)=\Theta(N^{\frac{p}{p+2}}). \end{equation} For $p=\Omega(\log(N))$, \begin{equation} d_X=d_Z=\Theta(\sqrt{N}). \end{equation} \begin{proof} Let $a$ be a $0$-cell in the second $S^n$ in the product $S^n \times S^n$. Let $Z(a,2)$ be the logical $Z$ operator which is the product of $Z_i$ over all $i$ which are the product of an $n$-cell in the first $S^n$ with $0$-cell $a$. Then, any logical $X$ operator which anticommutes with $Z(a,2)$ must have some support on some cell $i$ which is a product of an $n$-cell in the first $S^n$ with $0$-cell $a$. However, since $Z(a,2)$ and $Z(b,2)$ are homologous for any two choices of $0$-cells $a,b$ in the second $S^n$ ($Z(a,2),Z(b,2)$ differ by a product of stabilizers), that logical $X$ operator must have some support on some cell $i$ which is a product of an $n$-cell in the first $S^n$ with $0$-cell $a$ for {\it all} $0$-cells $a$ in the second $S^n$. Hence, that logical $X$ operator must have a number of cells in its support at least equal to the number of $0$-cells in the second $S^n$. This number is equal to $(p+1)^{n+1}\Bigl(1-(\frac{p+1-2}{p+1})^{n+1}\Bigr).$ This number is also an upper bound to $d_X$, since the product of $X$ over all cells which are a product of a fixed $n$-cell in the first $S^n$ with an arbitrary $0$-cell in the second $S^n$ is a logical operator. We can similarly lower bound the number of cells in the support of any logical $X$ operator which anticommutes with the operator $Z(a,1)$, defined to be the logical $Z$ operator which is the product of $Z_i$ over all $i$ which are the product of an $n$-cell in the {\it second} $S^n$ with $0$-cell $a$ in the {\it first} $S^n$. \end{proof} \end{lemma} We now show soundness. Again, the geometric interpretation is to shrink the boundary to a point. \begin{lemma} For the hypersphere product code, $\epsilon_X(w)=\epsilon_Z(w) \geq \Omega(1/\log(N)^2)$. \begin{proof} Consider any $v\in {\mathcal C}_q$ with $\partial_q v \neq 0$. We place coordinates $[0,p]^{n+1}$ on the first large hypercube. Call the face where the first coordinate is equal to $p$ the ``top face". Call the face where the first coordinate is equal to $0$ the ``bottom face". 
Let $v_0=v$. We will construct a sequence $v_1,v_2,\ldots, v_f\in {\mathcal C}_q$ for some integer $f$, where we bound ${\rm wt}(v_{i+1}-v_i)$ at each step and the final vector $v_f$ satisfies $\partial_q v_f=0$. In this way, we will bound ${\rm wt}(v_0-v_f)={\rm wt}(v+u)$ for the choice $u=v_f$, which obeys $\partial_q u=0$. We construct the sequence so that the boundaries $\partial_q v_i$ are first removed from the top face of the first hypercube, then moved from the top face to the bottom face of the first hypercube, and finally moved to a point on the bottom face of the first hypercube. Throughout this proof, when we refer to coordinates, we refer to the first hypercube in the product. We regard an $r$-cell as being a product of $0$-cells and $1$-cells. That is, each cell in the product of hypercubes is a product of cells in each hypercube. Then, each $r$-cell in a hypercube is a product of $r$ $1$-cells and $n+1-r$ $0$-cells. The $n+1$ different terms in the product correspond to different coordinates. When we say that a cell ``is a $0$-cell" in a given coordinate, we mean that the cell is a product of a $0$-cell in that coordinate with some cells in other coordinates. We first explain the middle step, moving from top face to bottom face. Suppose that some $v_i$ has $\partial_q v_i$ vanishing on the top face. Indeed, suppose that $\partial_q v_i$ vanishes if the first coordinate is greater than $x$, for some integer $x$. Then, let $\pi_x(\partial_q v_i)$ be the projection of $\partial_q v_i$ onto cells with first coordinate equal to $x$. This projection consists only of cells which are $0$-cells in the first coordinate. Let $v_{i+1}-v_i$ be defined by taking $\pi_x(\partial_q v_i)$ and replacing every $0$-cell in the first coordinate at position $x$ with a $1$-cell at position $x-1/2$. Then, $\pi_x(\partial_q v_{i+1})=0$. Iterating this procedure, decreasing $x$ from $p$, to $p-1$, to $p-2$, and so on, we can construct a sequence $v_i,v_{i+1},\ldots$ so that the final vector in the sequence has boundary only on the bottom face. There are at most $p$ steps in the sequence. Note that because $\partial^2=0$, once we ensure that $\pi_x(\partial_q v_i)=0$, then we know that $\partial_q v_i$ has no cells which are a $1$-cell in the first coordinate with midpoint at $x-1/2$. Now we explain the first step, moving the boundary off the top face. We apply the above procedure to the {\it second} coordinate. We let $\pi_{p,x}(\partial_q v_i)$ be the projection of $\partial_q v_i$ onto cells with first coordinate equal to $p$ and second coordinate equal to $x$, for integer $x$. We then construct a sequence so that this projection vanishes for $x=p,p-1,\ldots$, following the same procedure as in the above paragraph. There are at most $p$ steps in this sequence. We then repeat this for the third coordinate, fourth coordinate, and so on, giving at most $pd$ steps in total. The final step is the same as the first step, with the top face replaced by the bottom face. So, there are at most ${\mathcal O}(pd)$ steps in the sequence. We have ${\rm wt}(\partial_q v_i) \leq {\rm wt}(\partial_q v)$ for all vectors in the sequence, and there are at most ${\mathcal O}(pd)$ steps, so this gives $\epsilon_X(w)\geq \Omega(1/pd)=\Omega(1/\log(N)^2)$. \end{proof} \end{lemma} In the above construction, we lost a factor of $d$ due to having $d$ steps in the sequence to move the boundary. Likely for this construction, this factor cannot be avoided since the diameter of the hypercube is $pd$. 
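The cell counting that enters the hypersphere product code is also easy to verify by brute force in low dimensions. The following Python sketch (again only an illustration, not part of the construction) enumerates the $r$-cells of the cellulated hypercube $[0,p]^{n+1}$ that lie on its boundary and compares the count with the closed-form expression ${n+1 \choose r}(p+1)^{n+1-r}p^r\bigl(1-(\frac{p-1}{p+1})^{n+1-r}\bigr)$ given earlier in this subsection.
\begin{verbatim}
from itertools import combinations, product
from math import comb

def boundary_r_cells(n, p, r):
    # Count r-cells on the boundary of the (n+1)-dimensional hypercube
    # [0,p]^(n+1) cellulated by unit hypercubes.  An r-cell is fixed by
    # choosing r directions carrying 1-cells (midpoints at half-integers,
    # p choices each) and integer 0-cell coordinates in [0,p] elsewhere;
    # it lies on the boundary iff some 0-cell coordinate equals 0 or p.
    count = 0
    for one_dirs in combinations(range(n + 1), r):
        n_zero = n + 1 - r
        for mids in product(range(p), repeat=r):
            for coords in product(range(p + 1), repeat=n_zero):
                if any(c in (0, p) for c in coords):
                    count += 1
    return count

def formula(n, p, r):
    # Integer rearrangement of the displayed formula:
    # C(n+1,r) (p+1)^(n+1-r) p^r (1 - ((p-1)/(p+1))^(n+1-r))
    return comb(n + 1, r) * p**r * ((p + 1)**(n + 1 - r) - (p - 1)**(n + 1 - r))

n, p = 2, 3
for r in range(n + 1):
    assert boundary_r_cells(n, p, r) == formula(n, p, r)
print("boundary cell counts verified for n =", n, ", p =", p)
\end{verbatim}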
One might wonder whether other geometries (such as a geometry that more closely approximates a sphere) would improve on this factor; note however that since the volume of a sphere of radius $r$ in $d$-dimensional Euclidean space scales roughly as $(r/d)^{d/2}$, one would need to take the radius proportional to $d$ in order to obtain a large volume, so again one would need to have a large diameter for the geometry. \section{Discussion} We have presented several different code constructions based on the toric code on families of higher-dimensional manifolds. Rather than varying the geometry or topology at fixed dimension, as is more commonly done, we have considered varying dimension. This leads to a scaling in which the number of qubits, $N$, scales exponentially with dimension, $n$, so that the weight of the stabilizers $w$ is proportional to $n \propto \log(N)$. Assuming conjecture \ref{conj1}, we have constructed a code family with almost linear distance and logarithmic weight. {\it Acknowledgments---} I thank L. Eldar and M. Freedman for useful discussions.
\section{Introduction} The holographic principle in modern physics has been introduced as a fundamental property of quantum gravity, originally speculated on the basis of the area scaling of the black hole entropy. After its concrete realization in the form of the AdS/CFT correspondence, it has become one of the main research arenas and has been studied in various contexts. In particular, the AdS/CFT correspondence has been used as a modern toolkit for strong coupling phenomena in the dual field theory. In this context holography has many interesting applications and implications even at the level of a classical theory of gravity, since a classical computation in gravity has a dual interpretation in terms of quantum phenomena on the field theory side. Conversely, it also provides new approaches to the classical theory of gravity through the perspective from the dual field theory. One such application is the holographic approach to conserved charges in the classical theory of gravity, which has been explored in a large body of literature. Holographic conserved charges in the asymptotic AdS space~\cite{Balasubramanian:1999re} are introduced along with the construction of the boundary stress tensor in gravity by using the Brown-York formalism~\cite{Brown:1992br}, which is now regarded as part of the AdS/CFT dictionary. Despite their successful applications to various cases, holographic charges need to be compared and/or matched to traditional bulk charges since their equivalence is not warranted a priori. In Einstein gravity with negative cosmological constant, the equivalence between the holographic and traditional bulk conserved charges of black holes is shown in Refs.~\cite{Hollands:2005wt,Papadimitriou:2005ii,Hollands:2005ya}. Interestingly, it was observed that holographic conserved charges of black holes might be different from those obtained by the covariant phase space method when the conformal anomaly of the dual field theory does not vanish. In particular, it has been noticed that the results from the conventional expression of holographic charges depend on the frame of the asymptotic AdS space in odd dimensions, while the charges in the covariant phase space method remain invariant. When the metric for the asymptotic AdS space in odd dimensions is taken in the standard non-rotating form, the Casimir energy is given just by a constant. On the other hand, the Casimir energy becomes dependent on the rotational parameters when the metric is taken in the rotating frame~\cite{Awad:1999xx,Papadimitriou:2005ii,Gibbons:2005jd}. Furthermore, the conventional expression for holographic charges depends on the counter term subtraction scheme~\cite{deHaro:2000xn,Bianchi:2001kw}. Since it was shown that conserved charges by the covariant phase space method should be completely consistent with the first law of black hole thermodynamics~\cite{Wald:1993nt}, the difference between holographic and covariant phase space charges means that conserved charges by the holographic method require a modification of the first law of black hole thermodynamics, although a minimal modification of the first law has been shown to be sufficient for a harmless physical interpretation of the holographic results~\cite{Papadimitriou:2005ii}. Still, it would be nice to have a construction of holographic charges such that they are identical with the bulk ones and thus satisfy the standard form of the first law of black hole thermodynamics. 
In this paper we would like to revisit the construction of the conventional holographic conserved charges and show how it can be modified to give results identical with the bulk constructions. Our approach is based on the recent works~\cite{Kim:2013zha,Kim:2013cor,Hyun:2014kfa,Gim:2014nba,Hyun:2014sha} which can be regarded as a generalization of the traditional Abbott-Deser-Tekin (ADT) formalism~\cite{Abbott:1981ff,Abbott:1982jh,Deser:2002rt,Deser:2002jk} to the holographic setup. It turns out that our construction is rather general and completely consistent with the bulk covariant expression of conserved charges under a very mild assumption. As a result, whenever the boundary stress tensor is well-defined and there is a continuous parameter in the black hole solution, our expression of holographic charges gives finite, frame- and scheme-independent results and is completely consistent with the standard form of the first law of black hole thermodynamics. \section{Modified holographic conserved charges} Let us start with a brief summary of holographic renormalization in this section. See~\cite{Skenderis:2002wp} for a review. In terms of the boundary values $(\gamma, \psi)$ of the bulk metric and matter fields $\Psi\equiv (g,\psi)$, the on-shell renormalized action is given by (see Ref.~\cite{Hyun:2014sha} for our conventions) \[ I^{on}_{r}[\gamma, \psi] = I [g, \psi]_{\rm on-shell} + I_{GH}[\gamma] + I_{ct}[\gamma,\psi]\,, \] where the Gibbons-Hawking and counter terms $I_{GH}, I_{ct}$ are defined on a hypersurface. The on-shell condition renders the renormalized action $I_{r}^{on}$ a functional of the boundary values $(\gamma, \psi)$ at the boundary ${\cal B}$. A generic variation of the on-shell renormalized action takes the form \begin{equation} \label{ReAction} \delta I^{on}_{r}[\gamma, \psi] = \frac{1}{16\pi G}\int_{{\cal B}} d^{d} x~ \sqrt{-\gamma}\Big[ T_{B}^{ij}\delta \gamma_{\,ij} + \Pi_{\psi}\delta \psi\Big]\,. \end{equation} In order to introduce the boundary ADT current in the renormalized boundary action, let us recall that a boundary diffeomorphism results in an identity of the form: \begin{equation} \label{Bid} \nabla_{i}(2{\bf T}^{ij}_{B}\zeta^{B}_{j})=T^{ij}_{B}\, \pounds_{\zeta_{B}}\gamma_{ij} + \Pi_{\psi}\, \pounds_{\zeta_{B}}\psi\,, \end{equation} where $\pounds_{\zeta_{B}}$ denotes the Lie derivative on the boundary and ${\bf T}^{ij}_{B}$ denotes the modified boundary stress tensor defined by \[ {\bf T}^{ij}_{B} \equiv T^{ij}_{B} + \frac{1}{2}{\cal Z}^{ij}_{B}\,, \qquad T^{ij}_{B} \equiv \frac{1}{\sqrt{-\gamma}}\frac{\delta I^{on}_{r}}{\delta \gamma_{ij}}\,. \] The above boundary identity can be regarded as the analog of the bulk Noether identity, an elementary derivation of which is given in~\cite{Hyun:2014sha}. Note that the ${\cal Z}$-tensor need not be symmetric and is given in terms of the $\Pi_{\psi}$'s. Let us introduce the boundary conserved current as \begin{align} \label{Bcur} {\cal J}^{i}_{B} (\xi_B) \equiv& - \,\delta{\bf T}^{ij}_{B} \xi^{B}_{j} - \frac{1}{2}\gamma^{kl}\delta\gamma_{kl} {\bf T}^{ij}_{B} \xi^{B}_{j} - {\bf T}^{ij}_{B} \delta\gamma_{jk}\xi_{B}^{k} + \frac{1}{2}\, \xi^{i}_{B}\big(T^{kl}_{B}\delta \gamma_{kl} + \Pi_{\psi}\delta \psi\big)\,, \end{align} where $\delta$ denotes a linearization with respect to the boundary fields, including the variations of Killing vectors. 
This current can be written in the form of \begin{align} \label{} \label{Bcur1} \sqrt{-\gamma} {\cal J}^{i}_{B} (\xi_B) =& - \delta \Big(\sqrt{-\gamma}\,{\bf T}^{ij}_{B}\xi^{B}_{j}\Big) + \sqrt{-\gamma}\, {\bf T}^{i}_{\!B\, j}\delta \xi^{j}_{B} + \frac{1}{2}\, \sqrt{-\gamma}\xi^{i}_{B}\big(T^{kl}_{B}\delta \gamma_{kl} + \Pi_{\psi}\delta \psi\big)\,. \end{align} One may note that the first term corresponds to the linearized form of the conserved currents in conventional holographic charges. For a boundary Killing vector $\xi_{B}$, the conservation of the first term is the simple result of the identity given in Eq.~(\ref{Bid}). Interestingly, this identity also leads to the conservation of the sum of the second and third terms as shown in the Appendix. After taking the linearization of the boundary fields along the black hole parameters and integrating the linearized form along the one-parameter path $ds$, the holographic charges are introduced by \begin{equation} \label{Bcharge} Q_{B}(\xi_{B}) \equiv \frac{1}{8 \pi G} \int ds \int d^{d-1}x_{i}\sqrt{-\gamma}{\cal J}^{i}_{B} \,. \end{equation} We would like to emphasize that our choice of the conserved boundary currents is motivated by the bulk off-shell extension of the conventional ADT formalism and its form in~Eq.~(\ref{Bcur}) is already written down in Ref.~\cite{Hyun:2014sha}. Our boundary current in Eq.~(\ref{Bcur1}) is a generalization in the case of boundary Killing vectors varying under a generic variation. It turns out that this generalization of conserved currents leads to the frame-independent expression of conserved charges, which is also free from the ambiguity in the counter term subtraction. This advantage becomes manifest by showing the equivalence of the boundary currents to the bulk ADT potential expressions for charges, which is given in the following section. \section{Scheme and frame independence} In this section we argue that our boundary construction of currents leads to the scheme independent results by showing their equivalence with covariant bulk expression for the ADT potential of conserved charges. To this purpose, we explain how to construct the off-shell ADT potential even when a bulk Killing vector is varied under a generic variation. In the bulk, there is an off-shell identity known as the Noether identity which can be written in the form of \begin{align} \label{} {\cal E}_{\Psi}\pounds_{\zeta}\Psi &\equiv {\cal E}_{\mu\nu}\, \pounds_{\zeta} g^{\mu\nu} + {\cal E}_{\psi}\,\pounds_{\zeta}\psi= -2\nabla_{\mu}({\bf E}^{\mu\nu}\zeta_{\nu})\,, \qquad {\bf E}^{\mu\nu}\equiv {\cal E}^{\mu\nu} + \frac{1}{2}{\cal Z}^{\mu\nu} \,, \end{align} where ${\cal E}_{\Psi} $ denotes the Euler-Lagrange expression for the field $\Psi$ and ${\cal Z}^{\mu\nu}$ tensor is given in terms of matter Euler-Lagrange expressions, ${\cal E}_{\psi}$. 
For a Killing vector $\xi$ which may be unpreserved under a generic variation, one can introduce the off-shell ADT current, just as in the non-varying case~\cite{Hyun:2014sha}, as \begin{align} \label{ADT} {\cal J}^{\mu}_{ADT} (\xi, \delta \Psi) =&~ \delta {\bf E}^{\mu\nu}\xi_{\nu} + \frac{1}{2}g^{\alpha\beta}\delta g_{\alpha\beta}\, {\bf E}^{\mu\nu}\xi_{\nu} + {\bf E}^{\mu\nu}\delta g_{\nu\rho}\, \xi^{\rho} + \frac{1}{2}\xi^{\mu}{\cal E}_{\Psi}\delta \Psi\,, \end{align} which can be rewritten as \begin{align} \label{ADTcpt} \sqrt{-g}{\cal J}^{\mu}_{ADT} (\xi, \delta \Psi) =&~ \delta\Big(\sqrt{-g}\, {\bf E}^{\mu\nu}\xi_{\nu}\Big) - \sqrt{-g}\, {\bf E}^{\mu}_{~\nu}\, \delta\xi^{\nu} + \frac{1}{2}\sqrt{-g}\,\xi^{\mu}{\cal E}_{\Psi}\delta \Psi\,. \end{align} This expression may be regarded as a slight generalization of the non-varying Killing vector case~\cite{Kim:2013zha,Hyun:2014sha}. Note that this current has the same structure as the boundary conserved current in the previous section. The off-shell conservation of this current ${\cal J}^{\mu}_{ADT}$ allows us to write it in terms of a potential as ${\cal J}^{\mu}_{ADT}=\nabla_{\nu}Q^{\mu\nu}_{ADT}$ at the off-shell level. For the bulk Killing vector $\xi$, one can see that the symplectic current~\cite{Lee:1990nz,Wald:1993nt,Iyer:1994ys} defined for a generic diffeomorphism parameter $\zeta$ by $ \omega(\pounds_{\zeta}\Psi, \delta \Psi) \equiv \pounds_{\zeta}\Theta^{\mu}(\delta \Psi\,;\, \Psi) - \delta \Theta^{\mu}(\pounds_{\zeta}\Psi\,;\, \Psi) $, reduces to \begin{equation} \omega(\pounds_{\xi}\Psi, \delta \Psi) = -\Theta^{\mu}(\pounds_{\delta \xi}\Psi\,;\, \Psi)\,, \end{equation} where $\Theta^{\mu}(\delta \Psi)$ is the surface term for a generic variation of the bulk Lagrangian ${\cal L}$ given by $\delta (\sqrt{-g}{\cal L}) = \sqrt{-g}{\cal E}_{\Psi}\delta \Psi + \partial_{\mu}\Theta^{\mu}(\delta \Psi)$. Through relations among the ADT current, symplectic current and the off-shell Noether current for a diffeomorphism variation ${\cal J}^{\mu}_{\zeta} \equiv 2\sqrt{-g}{\bf E}^{\mu\nu}\zeta_{\nu} + \zeta^{\mu}\sqrt{-g}{\cal L} - \Theta^{\mu}$, the final off-shell expression of the ADT potential, up to an irrelevant total derivative term, turns out to be \begin{align} \label{bulkADT} 2\sqrt{-g} Q^{\mu\nu}_{ADT}(\xi, \delta \Psi\,;\, \Psi) =&~ \delta K^{\mu\nu}(\xi\,;\, \Psi) - K^{\mu\nu}(\delta \xi\,;\,\Psi) -2\xi^{[\mu}\Theta^{\nu]}(\delta \Psi\,;\,\Psi)\,. \end{align} This final expression can be regarded as a slight generalization of covariant phase space results~\cite{Wald:1993nt,Iyer:1994ys}, which has already been obtained in Einstein gravity in~\cite{Barnich:2004uw}. The matching between the boundary current ${\cal J}^{i}_{B}$ and the bulk ADT potential $Q^{\mu\nu}_{ADT}$ goes in the same way as in the case of $\delta \xi^{\mu}=0$ and $\delta \xi^{i}_{B} =0$, as follows. Let us take the Fefferman-Graham coordinates for the asymptotic AdS space as $ds^{2} = d\eta^{2} + \gamma_{ij}dx^{i}dx^{j}$. Adding the Gibbons-Hawking and counter terms in holographic renormalization gives us the additional surface terms modifying the bulk surface term $\Theta^{\mu}$ as \begin{align} \tilde{\Theta}^{\eta}(\delta \Psi) =&~ \Theta^{\eta}(\delta \Psi) + \delta(2\sqrt{-\gamma}L_{GH}) + \delta(\sqrt{-\gamma}L_{ct}) = \sqrt{-\gamma}\Big(T^{ij}_{B}\delta \gamma_{ij} + \Pi_{\psi}\delta \psi\Big)\,, \end{align} where the second equality comes from Eq.~(\ref{ReAction}). 
The holographic renormalization condition and the $\tilde{\Theta}$-expression tell us that $\tilde{\Theta}^{\eta} \sim {\cal O}(1)$ in the radial expansion. Correspondingly, the modified on-shell Noether current $\tilde{J}^{\eta}$ for a diffeomorphism parameter $\zeta$ becomes \begin{equation} \label{ModRel} \tilde{J}^{\eta} = \partial_{i}\tilde{K}^{\eta i}(\zeta) = \zeta^{\eta}\sqrt{-\gamma}{\cal L}^{on}_{r} - \tilde{\Theta}^{\eta}(\pounds_{\zeta} \Psi)\,, \end{equation} where we have used the on-shell condition on the bulk background fields. Just as in the case of $\delta \xi^{\mu}=0$~\cite{Hyun:2014sha,Papadimitriou:2005ii}, the asymptotic behavior of a general diffeomorphism parameter $\zeta$ is given by $\zeta^{\eta} \sim {\cal O}(e^{-d\eta})$ and $\zeta^{i} \sim {\cal O}(1)$, in order to preserve the asymptotic gauge choice and the renormalized action. This asymptotic behavior of the diffeomorphism parameter $\zeta$ allows us to discard the first term on the right-hand side of Eq.~(\ref{ModRel}) when we approach the boundary. In the following we keep only the relevant boundary values of parameters such that a bulk Killing vector $\xi^{i}$ is replaced by its boundary value $\xi^{i}_{B}$. For the diffeomorphism variation $\pounds_{\zeta}\Psi$, the modified surface term $\tilde{\Theta}^{\eta}$ becomes \begin{equation} \tilde{\Theta}^{\eta}(\pounds_{\zeta}\Psi) = \sqrt{-\gamma}\Big(2T^{ij}_{B}\nabla_{i}\zeta_{j} + \Pi_{\psi}\pounds_{\zeta}\psi\Big) = \partial_{i}\Big(2\sqrt{-\gamma}\,{\bf T}^{ij}_{B} \,\zeta_j\Big)\,, \nonumber \end{equation} where we have used the identity given in Eq.~(\ref{Bid}). By using this result, one can see that the Noether potential $\tilde{K}^{\eta i}$, up to an irrelevant total derivative term, is given by $ \tilde{K}^{\eta i} = -2\sqrt{-\gamma}\,{\bf T}^{ij}_{B} \,\zeta_j$. As a result, the on-shell relation between the ADT and Noether potentials for a Killing vector $\xi_{B}$ is given by \begin{align} \label{equivalence} \sqrt{-g}Q^{\eta i}_{ADT}& |_{\eta\rightarrow \infty} = \sqrt{-\gamma}{\cal J}^{i}_{B}\,. \end{align} This shows us the scheme independence of the holographic charges since their currents are identified with covariant bulk ADT potentials, which are independent of the counter terms. We would like to emphasize that the above potential-current relation holds up to the total derivative terms which are irrelevant in the charge computation. Moreover this equality guarantees the Smarr relation since the relation was shown to hold in bulk formalisms~\cite{Barnich:2004uw,Hyun:2014kfa}. Since we have presented formal arguments, it would be illuminating to show the frame and scheme independence of mass and angular momentum of five-dimensional AdS Kerr black holes as an explicit example, which is done in the following section. \section{Five-dimensional example} As a specific example, let us focus on pure Einstein gravity in five dimensions. In the following we will set the radius of the asymptotic AdS space to unity, $L=1$. 
AdS Kerr black hole solutions in Boyer-Lindquist coordinates~\cite{Hawking:1998kw} are given by \begin{align} \label{} ds^{2}=&~ -\frac{\Delta_{r}}{\rho^{2}}\Big(dt - a \Delta_{\phi}d\phi - b\Delta_{\psi}d\psi\Big)^{2} + \frac{\rho^{2}}{\Delta_{r}}dr^{2} + \frac{\rho^{2}}{\Delta_{\theta}}d\theta^{2} \nonumber \\ &~+ \frac{\Delta_{\theta}\sin^{2}\theta}{\rho^{2}}\Big(adt - \frac{r^{2}+a^{2}}{1-a^{2}}d\phi\Big)^{2} + \frac{\Delta_{\theta}\cos^{2}\theta}{\rho^{2}}\Big(bdt - \frac{r^{2}+b^{2}}{1-b^{2}}d\psi\Big)^{2} \\ &~+ \frac{1+1/r^{2}}{\rho^{2}}\Big( abdt - b(r^{2}+a^{2})\Delta_{\phi} d\phi -a(r^{2}+b^{2}) \Delta_{\psi}d\psi \Big)^{2}\,, \nonumber \end{align} where $ \rho^{2} \equiv r^{2}+a^{2}\cos^{2}\theta + b^{2}\sin^{2}\theta$, \begin{align} \label{BLcoord} \Delta_{r}&\equiv (r^{2}+a^{2})(r^{2}+b^{2})\Big(1+ \frac{1}{r^{2}}\Big) - 2m\,, \nonumber\\ \Delta_{\theta} & \small \equiv 1-a^{2}\cos^{2}\theta -b^{2}\sin^{2}\theta\,, \quad \Delta_{\phi} \equiv \frac{\sin^{2}\theta}{1-a^{2}}\,, \quad \Delta_{\psi}\equiv \frac{\cos^{2}\theta}{1-b^{2}} \,.\nonumber \end{align} In order to use the holographic method, it is useful to take the radial expansion of the metric in Fefferman-Graham coordinates as \begin{equation} \label{} ds^{2}= d\eta^{2} + \gamma_{ij}dx^{i}dx^{j}\,, \quad \gamma_{ij} = \sum_{n=0} e^{-2(n-1)\eta}\gamma^{(n)}_{ij}\,, ~~ \end{equation} where the non-vanishing components of background metric $\gamma^{(0)}$ are given by \begin{align} \label{} \gamma^{(0)}_{tt} = -1\,,\quad \gamma^{(0)}_{t\phi}=a \Delta_{\phi}\,,\quad \gamma^{(0)}_{t\psi}= b \Delta_{\psi}\,,\quad \gamma^{(0)}_{\theta\theta} =\frac{1}{\Delta_{\theta}}\,,\quad \gamma^{(0)}_{\phi\phi}= \Delta_{\phi}\,,\quad \gamma^{(0)}_{\psi\psi} = \Delta_{\psi}\,. \nonumber \end{align} In the computation of conserved charges, it turns out that the expansion up to the second order is sufficient. 
The non-vanishing components of the first order $\gamma^{(1)}$ are given by \begin{align} \gamma^{(1)}_{tt} &= - \frac{1}{2}(a^{2}+b^{2}+\Delta_{\theta})\,, \quad \gamma^{(1)}_{t\phi}=\frac{a\Delta_{\phi}}{2}\big(a^{2}-b^{2}-\Delta_{\theta}\big)\,, \quad \gamma^{(1)}_{t\psi} = \frac{b\Delta_{\psi}}{2}\big(b^{2}-a^{2}-\Delta_{\theta}\big)\,, \nonumber \\ \gamma^{(1)}_{\theta\theta} &=\frac{(2-a^{2}-b^{2}-3\Delta_{\theta})}{2\Delta_{\theta}}\,, \quad \gamma^{(1)}_{\phi\phi} = \frac{\Delta_{\phi}}{2}\big(a^{2}-b^{2}-\Delta_{\theta}\big)\,, \quad \gamma^{(1)}_{\psi\psi} = \frac{\Delta_{\psi}}{2}\big(b^{2}-a^{2}-\Delta_{\theta}\big)\,, \nonumber \end{align} and those of the second order $\gamma^{(2)}$ are \begin{align} \gamma^{(2)}_{tt}&= 3 m - \frac{1}{8} (a^2 - b^2)^2 - \frac{1}{4}(2 - a^2 - b^2) \Delta_{\theta} + \frac{3}{8}\Delta_{\theta}^2\,, \nonumber\\ \gamma^{(2)}_{t\phi} &= a \Delta_{\phi}\Big[-3 m + \frac{1}{8} (a^2 - b^2)^2 - \frac{1}{4} (a^2 - b^2)\Delta_{\theta} + \frac{1}{8}\Delta_{\theta}^2\Big]\,, \nonumber\\ \gamma^{(2)}_{t\psi}&=b \Delta_{\psi}\Big[-3 m + \frac{1}{8} (a^2 - b^2)^2 - \frac{1}{4} (b^2 - a^2)\Delta_{\theta} + \frac{1}{8}\Delta_{\theta}^2\Big]\,, \nonumber\\ \gamma^{(2)}_{\theta\theta}&= \frac{1}{\Delta_{\theta}}\Big[m + \frac{(2 - a^2 - b^2)^2}{8} - \frac{3\Delta_{\theta} }{4} (2 - a^2 - b^2)+ \frac{9\Delta_{\theta}^2}{8}\Big]\,, \nonumber \\ \gamma^{(2)}_{\phi\phi}&= \Delta_{\phi}\Big[ m\big(1+ 4a^{2}\Delta_{\phi}\big)+\frac{(a^2 - b^2)^2}{8} -\frac{(a^2 - b^2) \Delta_{\theta} }{4} + \frac{\Delta^{2}_{\theta}}{8} \Big] \,, \nonumber\\ \gamma^{(2)}_{\psi\psi}&= \Delta_{\psi}\Big[m\big(1+ 4b^{2}\Delta_{\psi}\big)+\frac{(a^2 - b^2)^2}{8} -\frac{(b^2 - a^2) \Delta_{\theta}}{4} + \frac{\Delta^{2}_{\theta}}{8} \Big] \,, \nonumber \\ \gamma^{(2)}_{\phi\psi}&= 4abm\Delta_{\phi}\Delta_{\psi}\,. \nonumber \end{align} Now, it is straightforward to obtain the expression of $\sqrt{-\gamma} {\cal J}^{i}_{B} (\xi_B)$ by using Eq.~(\ref{Bcur1}). Since the first term in Eq.~(\ref{Bcur1}) was already given in~\cite{Papadimitriou:2005ii}, let us focus on the second and third terms. One may recall that the time-like Killing vector in this metric is given by $\xi_{T}^{i}\partial_{i} = \partial_{t} -a \partial_{\phi} - b\partial_{\psi}$. After some computations~\cite{xAct} with $0\le \theta <\frac{\pi}{2}$, $0 \le \phi, \psi < 2\pi$, it turns out that \begin{align} \int d^{3} x_{i} \sqrt{-\gamma} \Big[ {\bf T}^{i}_{\!B\, j}\delta \xi^{j}_{ T} +& \frac{1}{2}\, \xi^{i}_{T}\big(T^{kl}_{B}\delta \gamma_{kl} + \Pi_{\psi}\delta \psi\big) \Big] \nonumber \\ &= -\frac{\pi^{2}(a^{2}-b^{2})(2-a^{2}-b^{2})}{6(1-a^{2})(1-b^{2})}\bigg[ \frac{a\delta a}{1-a^{2}} - \frac{b\delta b}{1-b^{2}}\bigg] \,, \end{align} which results in the linearized mass expression of AdS Kerr black holes from the boundary current as \begin{align} \delta M &= \delta Q_{B}(\xi_{T}) \nonumber \\ &= \frac{\pi}{2G} \bigg[\frac{ ma\delta a (5-a^{2}-3b^{2}-a^{2}b^{2})}{(1-a^{2})^{3}(1-b^{2})^{2}} + \frac{mb\delta b (5-b^{2}-3a^{2}-a^{2}b^{2})}{(1-a^{2})^{2}(1-b^{2})^{3}}+ \frac{\delta m (3-a^{2}-b^{2} -a^{2}b^{2})}{2(1-a^{2})^{2}(1-b^{2})^{2}} \bigg]\,. \nonumber \end{align} One can check that the difference between our mass expression of $\delta M$ and the conventional one in~\cite{Papadimitriou:2005ii} resides only in the absence of the rotational parameter dependence of the Casimir energy part. 
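Since $\delta M$ must integrate to a mass function of the black hole parameters, its coefficients have to form an exact differential. This can be checked with a computer algebra system; the following minimal sympy sketch (an illustration added here, not part of the original computation) verifies that the cross-derivatives of the coefficients of $\delta a$, $\delta b$ and $\delta m$ written above agree.
\begin{verbatim}
import sympy as sp

a, b, m, G = sp.symbols('a b m G', positive=True)
pref = sp.pi / (2 * G)

# Coefficients of (delta a, delta b, delta m) in the linearized mass delta M
Ca = pref * m * a * (5 - a**2 - 3*b**2 - a**2*b**2) / ((1 - a**2)**3 * (1 - b**2)**2)
Cb = pref * m * b * (5 - b**2 - 3*a**2 - a**2*b**2) / ((1 - a**2)**2 * (1 - b**2)**3)
Cm = pref * (3 - a**2 - b**2 - a**2*b**2) / (2 * (1 - a**2)**2 * (1 - b**2)**2)

# delta M is an exact differential iff all cross-derivatives agree,
# which is what allows it to be integrated to a mass M(a, b, m).
checks = [sp.diff(Ca, b) - sp.diff(Cb, a),
          sp.diff(Ca, m) - sp.diff(Cm, a),
          sp.diff(Cb, m) - sp.diff(Cm, b)]
assert all(sp.simplify(expr) == 0 for expr in checks)
print("delta M is an exact differential")
\end{verbatim}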
The finite mass expression is given by \begin{align} M = \frac{3 \pi}{32 G} + \frac{ \pi m (3-a^{2}-b^{2} -a^{2}b^{2})}{4G(1-a^{2})^{2}(1-b^{2})^{2}} \,, \end{align} where we have added the constant Casimir energy part as an integration constant. For rotational Killing vectors $\xi_{R1}^{\mu}\partial_{\mu} = -\partial_{\phi}$ and $\xi_{R2}^{\mu}\partial_{\mu} = -\partial_{\psi}$, one can see that the additional terms, {\it i.e.} second and third ones in Eq.~(\ref{Bcur1}), vanish and so the angular momentum expressions are identical with those given in~\cite{Papadimitriou:2005ii}, which is also the case in the computation of Wald's entropy of black holes. Now, let us check the frame independence for our expression by considering different coordinates. In asymptotically canonical AdS coordinates, the metric of AdS Kerr black holes can be taken in the form of~\cite{Gibbons:2005jd} \begin{align} ds^{2} =& -(1+y^{2})dt^{2} + \frac{dy^{2}}{1+y^{2}-\frac{2m}{\Delta_{\hat{\theta}}^{2}y^{2}}} + y^{2}d\hat{\Omega}_{3}^{2} \\ &+ \frac{2m}{\Delta_{\hat{\theta}}^{3}y^{2}}(dt - a\sin^{2}\hat{\theta}d\hat{\phi} - b\cos^{2}\hat{\theta}d\hat{\psi})^{2} + \cdots\,, \nonumber \end{align} where \begin{align} \Delta_{\hat{\theta}} &\equiv 1 - a^{2}\sin^{2}\hat{\theta} - b^{2}\cos^{2}\hat{\theta}\,,\nonumber\\ d\hat{\Omega}_{3}^{2} &\equiv d\hat{\theta}^{2} + \sin^{2}\hat{\theta}\,d\hat{\phi}^{2} + \cos^{2}\hat{\theta}\,d\hat{\psi}^{2}\,.\nonumber \end{align} By using Fefferman-Graham coordinates, one can check explicitly that the mass and angular momenta in these non-rotating coordinates are given by the same expressions as in the rotating ones. (See also~\cite{Gibbons:2005jd}.) For comparison, let us turn to the bulk covariant expressions of ADT potentials. In Einstein gravity, the Noether potential $K^{\mu\nu}$ and the bulk surface term $\Theta^{\mu}$ can be taken respectively as $K^{\mu\nu}(g\,;\, \zeta) = 2\sqrt{-g}\,\nabla^{[\mu} \zeta^{\nu] }$ and $\Theta^{\mu} (g\,;\,\delta g)= 2\sqrt{-g} g^{\alpha[\mu} \nabla^{\beta]}\delta g_{\alpha\beta}$. The ADT potential $Q^{\mu\nu}_{ADT}(\xi_{T}\,;\,\delta a, \delta b, \delta m)$ for AdS Kerr black holes is composed of three terms which correspond to the variations of the parameters $m$, $a$ and $b$, respectively: $Q^{\mu\nu}_{ADT}(\xi_{T}\,;\,\delta m)$, $Q^{\mu\nu}_{ADT}(\xi_{T}\,;\,\delta a)$ and $Q^{\mu\nu}_{ADT}(\xi_{T}\,;\,\delta b)$. For the bulk Killing vector $\xi_{T}$ taken in the same form as the boundary time-like Killing vector, the relevant component of the $Q^{\mu\nu}_{ADT}(\xi_{T}\,;\,\delta m)$ term is given by \begin{align} &2\sqrt{-g}Q^{\eta t}_{ADT} (\xi_{T}\,;\,\delta m) = \frac{-\delta m\, \sin2\theta}{(1-a^{2})^{2}(1-b^{2})^{2}}\Big[(a^{2}+b^{2}+a^{2}b^{2}-3) +2 (a^{2}-b^{2})\cos 2\theta\Big]\,. \nonumber \end{align} The relevant component of the $Q^{\mu\nu}_{ADT}(\xi_{T}\,;\,\delta a)$ term is given by \begin{align} 2\sqrt{-g} Q^{\eta t}_{ADT} (\xi_{T}\,;\,\delta a) & = \frac{-a\delta a\, \sin2\theta}{(1-a^{2})(1-b^{2})} \bigg[\frac{(b^{2}-a^{2}) }{8} + \frac{2m(-5+3b^{2} + a^{2} + a^{2}b^{2})}{(1-a^{2})^{2}(1-b^{2})} \nonumber \\ & \qquad\qquad\quad +\Big\{ \frac{1}{2}(2-a^{2}-b^{2}-4e^{2\eta}) + \frac{2m(1-3(b^{2}-a^{2}) -a^{2}b^{2})}{(1-a^{2})^{2}(1-b^{2})}\Big\} \cos2\theta \nonumber \\ & \qquad\qquad\quad + \frac{3}{8}(b^{2}-a^{2})\cos4\theta\bigg]\,, \nonumber \end{align} where one may note that the potentially divergent term proportional to $e^{2\eta}$ corresponds to the irrelevant total derivative one. 
$Q^{\mu\nu}_{ADT}(\xi_{T};\delta b)$ is given just by exchanging $(a, \delta a)$ with $(b, \delta b)$ in the above $Q^{\mu\nu}_{ADT}(\xi_{T};\delta a)$ expression. One may note that the varying Killing vector contribution in Eq.~(\ref{bulkADT}) does not vanish and is given by \[ K^{\eta t}(\delta\xi_{T})=\frac{8ma\cos\theta\sin^{3}\theta}{(1-a^{2})^{2}(1-b^{2})}\delta a+ \frac{8mb\cos^{3}\theta\sin\theta}{(1-a^{2})(1-b^{2})^{2}}\delta b\,. \] Now, it is straightforward to check the matching with the linearized mass expression of AdS Kerr black holes: \begin{equation} \delta M_{ADT} = \frac{1}{16\pi G}\int d\theta d\phi d\psi\, 2\sqrt{-g} Q^{\eta t}_{ADT} = \delta M\,. \end{equation} It is also straightforward to obtain the ADT potentials for rotational Killing vectors and check their equivalence with the results from the boundary currents. \section{ Conclusion} In this paper, we have proposed how to modify the conventional expression of holographic conserved charges in order to give results identical with those from bulk formalisms. Our construction of holographic charges is based on a conserved boundary current whose form is motivated by the off-shell extension of the traditional ADT formalism for bulk charges. This boundary current is composed of two parts, one of which corresponds to the conventional expression of holographic charges, while the other consists of additional terms compensating for the frame and scheme dependence of the first part. We would like to emphasize that our modification of the holographic charge expression does not mean the change of the conventional AdS/CFT dictionary for the boundary stress tensor. Rather, our modification corresponds to another prescription, in the gravity context, of holographic charge construction from the boundary stress tensor in such a way that it does not depend on the frame of the asymptotic AdS space. On the bulk side, we have extended our previous covariant construction of quasi-local conserved charges to the case when Killing vectors are varied under a generic variation. By showing the equivalence of the modified holographic expression of conserved charges to the bulk covariant expression, we have argued the consistency of our holographic expression with the standard form of the first law of black hole thermodynamics and the Smarr relation. Through the example, it is explicitly shown that the boundary-bulk equivalence is satisfied up to irrelevant total derivative terms. It is also shown that the additional terms in the boundary current vanish in the case of the angular momentum and black hole entropy computation, while they remove the frame dependence in the mass computation. Since our boundary and bulk constructions of conserved charges are based on a single formalism which depends only on the Euler-Lagrange expression of the given Lagrangian, our construction can be presented in a unified manner and seems very natural. Furthermore, our bulk construction is completely consistent with the well-known formalisms. All in all, various constructions are naturally connected and their relationships are revealed in a unified way. It would be very interesting to generalize our construction to the case of more general asymptotic boundary spaces. \vskip 1cm \centerline{\large \bf Acknowledgments} \vskip0.5cm {SH was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MOE) with the grant number 2012046278 and the grant number NRF-2013R1A1A2011548. 
S.-H.Yi was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MOE) (No. 2012R1A1A2004410). J. Jeong was supported by the research grant ``ARISTEIA II", 3337 ``Aspects of three-dimensional CFTs", by the Greek General Secretariat of Research and Technology.} \section*{Appendix: Some formulae} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} In order to verify the conservation of boundary currents, let us start from the double variation of fields and actions. When the diffeomorphism parameter $\zeta$ is varied under a generic variation, the variation of any quantity $F^{\mu\nu\cdots}$ containing $\zeta$ is taken as $\delta F^{\mu\nu\cdots}(\zeta\,;\, \Psi) \equiv F^{\mu\nu\cdots}(\zeta+\delta \zeta\,;\, \Psi + \delta \Psi) - F^{\mu\nu\cdots}(\zeta\,;\, \Psi )$. For instance, the Killing conditions for the background field $\Psi$ and the varied field $\Psi+\delta \Psi$ are given respectively by$ \pounds_{\xi}\Psi=0$ and $\pounds_{\xi +\delta \xi}(\Psi + \delta \Psi) =0$. When a diffeomorphism parameter is transformed under a variation such that $\delta \zeta^{\mu} \neq 0$, one needs to modify the commutation of two generic variations as \begin{equation} \label{} (\delta\delta_{\zeta} - \delta_{\zeta}\delta) \Psi = \delta_{\delta \zeta}\Psi\,, \qquad (\delta \delta_{\zeta} - \delta_{\zeta}\delta)I[\Psi]= \delta_{\delta \zeta}I[\Psi]\,. \nonumber \end{equation} For a boundary Killing vector $\xi_{B}$, one can see that \begin{align} \label{dvar} (\delta_{\xi_{B}}\delta -\delta \delta_{\xi_{B}}) I^{on}_{r} [\Psi_{B}] &= \frac{1}{16\pi G}\int d^{d}x~ \delta_{\xi_{B}}\Big[\sqrt{-\gamma} \Big(T^{ij}_{B}\delta \gamma_{ij} +\Pi_{\psi}\delta \psi\Big) \Big] \nonumber \\ &= \frac{1}{16\pi G}\int d^{d}x~ \partial_{i}\Big[\xi^{i}_{B}\sqrt{-\gamma}\Big(T^{kl}_{B}\delta \gamma_{kl} +\Pi_{\psi}\delta \psi\Big) \Big]\,, \end{align} where we have used $\delta_{\xi_{B}}\Psi_{B}=0$ and thus $\delta_{\xi_{B}}I^{on}_{r}[\Psi_{B}]=0$ in the first equality and $\delta_{\xi_{B}}=\pounds_{\xi_{B}}$ in the second equality. The variation with respect to $\delta \xi^{i}_{B}$ can be written as \begin{align} \label{xivar} \delta_{\delta{\xi_{B}}}I^{on}_{r}[\Psi_{B} ] &= \frac{1}{16\pi G}\int d^{d}x \sqrt{-\gamma}\Big(-2\,T^{B}_{ij}\nabla^{i}\delta \xi^{j}_{B} +\Pi_{\psi}\pounds_{\xi_{B}} \psi\Big) \nonumber \\ &= \frac{1}{16\pi G}\int d^{d}x\, \partial_{i}\Big(-2\sqrt{-\gamma}\,{\bf T}^{i}_{\!B\, j}\delta \xi^{j}_{B}\Big) \,, \end{align} where we have used the identity Eq.~(\ref{Bid}) in the second equality. By identifying Eq.~(\ref{dvar}) and Eq.~(\ref{xivar}), one can finally see that \begin{equation} \label{} \nabla_{i}\left[{\bf T}^{i}_{\!B\, j}\delta \xi^{j}_{B} + \frac{1}{2}\, \xi^{i}_{B}\Big(T^{kl}_{B}\delta \gamma_{kl} + \Pi_{\psi}\delta \psi\Big)\right]=0\,. \end{equation}
\section{Introduction} One of the interesting achievements in the field of cold atomic gases is the realization of Fermi-Bose mixtures. For example, in Ref.\,\cite{Schreck2001} the first observation of a Bose-Einstein condensate immersed in a Fermi sea was reported. An interesting possibility is that where both components are superfluids. Such experiments have been reported in a $^{6}$Li-$^{7}$Li mixture~\cite{Abeelen, Kempen, Ferrier, Delehaye}, in a $^{40}$K-$^{41}$K mixture with a tunable interaction between the two species~\cite{Wang, Falke, CWu}, in a mixture of $^{6}$Li and $^{133}$Cs with broad interspecies Feshbach resonances~\cite{Repp, Tung}, and in two-component superfluid $^6$Li-$^{41}$K and $^6$Li-$^{174}$Yb mixtures~\cite{Yao,Roy}. In the problem of Fermi-Bose superfluid mixtures, various effects have been studied. These include Faraday waves~\cite{Abdullaev2013}, the existence of a super-counter-fluid phase~\cite{Kuklov}, the existence of dark-bright solitons~\cite{AdhikariPRA2007}, multiple periodic domain formation~\cite{TylutkiNJP2016}, and collective oscillations~\cite{Ferliano, Banerjee, Nascimbene, Wen, Wu, MitraJLTP2018, Abdullaev2018}. Finally, the ground state and the existence of vortices in rotating quasi-two-dimensional Fermi-Bose mixtures have also been investigated in~\cite{WenPRA2014}. Motivated by the studies mentioned above, in this article we study the ground state and the rotational properties of a mixture of a Bose superfluid with a (paired) Fermi superfluid, at zero temperature. We consider the problem where the two components are confined in a ring potential, i.e., we assume one-dimensional motion with periodic boundary conditions. This situation is realized experimentally in a very tight toroidal potential, where the transverse degrees of freedom are frozen. We stress that this problem is not only interesting theoretically, but it is also experimentally relevant, following early experiments on rotating fermions in a harmonic trap \cite{Ketferm}. For example, numerous experiments have been performed on single-component rotating Bose-Einstein condensed atoms in annular/toroidal traps, see, e.g., Refs.\,\cite{ann1}. What is even more relevant is the experiment of Ref.\,\cite{ann2}, where a mixture of two distinguishable species of Bose-Einstein condensed atoms has been investigated. One of the main results of the present study is the state of lowest energy of this coupled system for a fixed value of the angular momentum, together with the corresponding dispersion relation, which plays a central role in its rotational response. As shown below, this problem has a very rich and interesting structure. Given the large number of parameters, we have chosen to tune the relative strength of the energy scales that are associated with the Bose-Bose coupling, the Bose-pair coupling, and the Fermi energy, which results from the fermionic origin of the pairs. An experimentally-relevant assumption made in the present study is that the kinetic energy associated with the motion of the atoms around the ring is much smaller than all three of these energy scales. As a result, in the case of phase coexistence, it is energetically favourable for the system to reside in plane-wave states, with a homogeneous density distribution. This is simply due to the fact that, when the kinetic energy is negligible, the interaction energy is always minimized when the density of both components is homogeneous \cite{RoussouNJP2018}.
Clearly, this simplifies the problem significantly, but at the same time it gives it an interesting structure, as the cloud undergoes discontinuous transitions when the angular momentum increases. In addition, the dispersion relation has a quasi-periodicity, which is set by the minority component. The system that we have considered is ideal for the study of one of the most fundamental problems in cold atomic systems, namely, superfluidity. One of the main messages of our study is the richness of the phenomena associated with superfluidity that arise from mixing a Bose-Einstein condensate of bosonic atoms with a paired superfluid Fermi system. In what follows below, we start with our model in Sec.\,II. In Sec.\,III we derive the condition for phase coexistence and phase separation of the two superfluid components. We then turn to the rotational response of the system in Sec.\,IV. In this section we start with the phase-separated regime, and then turn to the case of phase coexistence. In Sec.\,V we investigate the more general problem where the boson mass is not equal to the mass of the pairs. Finally, we summarize our main results and present our conclusions in Sec.\,VI. \section{The model} \label{sec_The_model} The problem we have in mind is that of a fermionic and a bosonic superfluid, confined in a very tight toroidal trap. Following Ref.\,\cite{JMK}, since the transverse degrees of freedom are frozen, one may assume that the two order parameters of the bosonic atoms and of the fermionic pairs have a product form, of a Gaussian profile in the transverse direction times $\Psi_B(\theta)$ for the bosonic atoms and $\Psi_P(\theta)$ for the fermionic pairs, with both $\Psi_B$ and $\Psi_P$ depending on the azimuthal angle $\theta$ only. Then, integrating over the transverse direction, one ends up with the following energy functional, \begin{eqnarray} H = - \frac {\hbar^2 N_P} {2 m_P R^2} \int \Psi_P^* \partial_{\theta \theta} \Psi_P \, d x - \frac {\hbar^2 N_B} {2 m_B R^2} \int \Psi_B^* \partial_{\theta \theta} \Psi_B \, d x \nonumber \\ + \frac 1 3 G_P N_P^3 \int |\Psi_P|^6 \, d x + G_{FB} N_P N_B \int |\Psi_P|^2 |\Psi_B|^2 \, d x \nonumber \\ + \frac 1 2 G_B N_B^2 \int |\Psi_B|^4 \, d x, \nonumber \\ \label{Ham} \end{eqnarray} where $x = R \theta$, with $R$ being the radius of the ring. In the above equation we use the normalization $\int |\Psi_B|^2 d x = 1$ and $\int |\Psi_P|^2 dx = 1$. Here $N_B$ is the number of bosonic atoms, with a mass $m_B$. Assuming that we have $N_F$ fermionic atoms, divided equally between two spin states that pair up, we have $N_P = N_F/2$ pairs of fermions. Also, $m_P = 2 m_F$ is the mass of the fermion pairs, with $m_F$ being the mass of each fermionic atom. In the data presented below we first consider the case $m_B = m_P$, and then in Sec.\,V we examine the more general problem, $m_B \neq m_P$. Also, in Eq.\,(\ref{Ham}), $G_{B}$ and $G_{FB}$ are the intra- and inter-species one-dimensional interaction couplings~\cite{AdhikariPRA2007}, which are proportional to the scattering lengths $a_B$ and $a_{FB}$, respectively. Since in the quasi-one-dimensional scheme that we have adopted the transverse degrees of freedom of the order parameters are frozen, the form of the Hamiltonian of Eq.\,(\ref{Ham}) is essentially the same as that of the purely one-dimensional problem. The last term in $H$, which is proportional to $G_B$, is the usual quartic term, which corresponds to the boson-boson interaction.
This is the leading-order term of the more general expression derived by Lieb and Liniger \cite{LL}, for weak boson-boson coupling. The term which is proportional to $G_{FB}$, is similar to the last one and it describes the interaction between the bosons and the pairs, with the only difference being that it is proportional to the product of the densities of the bosons and of the pairs. Finally, the term which is proportional to $G_P$ comes from the Fermi energy of the fermionic atoms which constitute the pairs. This is also the leading-order term of a more general (Gaudin-Yang) Hamiltonian \cite{GY}, valid both in the BCS and in the molecular limits. The standard formula that connects $G_B$ with the scattering length $a_B$ is $G_{B}=2\hbar \omega_{\perp,B} a_{B}$. Here, $\omega_{\perp,B}$ is the frequency of the trap in the transverse direction. The formula for $G_{FB}$ is more complicated, since it depends on the atomic masses and, potentially, on the two different trap frequencies. Here we choose $G_{FB}$ -- compared with $G_B$ -- in such a way that we explore the physically-interesting regimes. Finally, $G_P = 4 \kappa \hbar^2 \pi^2/m_F$, with the dimensionless parameter $\kappa = 1/4$ in the BCS, weakly-attractive coupling limit, and $\kappa = 1/16$ in the molecular unitarity limit~\cite{AdhikariPRA2007}, i.e., close to a Feshbach resonance associated with the fermion-fermion interaction. In all the results which follow below $\kappa$ is set equal to $1/4$, however they are relatively insensitive to this value, in the sense that the results are not affected, at least qualitatively. Clearly $G_B$ and $G_{FB}$ have the same units, i.e., energy times length, while $G_P$ has units of energy times length squared. In order for the Hamiltonian of Eq.\,(\ref{Ham}) to be valid, the bosonic atoms have to be in the mean-field regime, which means that their density per unit length $n_B^0 = N_B/(2 \pi R)$ must be larger than $\approx a_B/a_{\perp,B}^2$, where $a_{\perp,B}$ is the oscillator length that corresponds to $\omega_{\perp,B}$. Furthermore, the assumption of quasi-one-dimensional motion requires that $n_B^0 a_B \ll 1$. Therefore, $n_B^0$ has to be in the range $a_B/a_{\perp,B}^2 \ll n_B^0 \ll 1/a_B$. For the fermionic component, the third term in Eq.\,(\ref{Ham}) is valid in both limits of weak $(\kappa = 1/4)$ and strong $(\kappa = 1/16)$ attraction. From Eq.\,(\ref{Ham}) it follows that in the rotating frame with some angular frequency $\Omega$ \cite{LLb}, $\Psi_P(\theta)$ and $\Psi_B(\theta)$ satisfy the two coupled equations \cite{AdhikariPRA2007, WenPRA2014}, \begin{widetext} \begin{eqnarray} \left[ -\frac{\hbar^2}{2 m_B R^2} \partial_{\theta \theta} + i \Omega \hbar \partial_{\theta} + G_B N_B |\Psi_B|^2 + G_{FB} N_P |\Psi_P|^2 \right] \Psi_B &=& \mu_B \Psi_{B} \; , \nonumber \\ \left[ -\frac{\hbar^2}{2 m_P R^2} \partial_{\theta \theta} + i \Omega \hbar \partial_{\theta} + G_P N_P^2 |\Psi_P|^4 + G_{FB} N_B |\Psi_B|^2 \right] \Psi_P &=& \mu_P \Psi_{P} . \label{sys1} \end{eqnarray} \end{widetext} Alternatively, we can view Eqs.~(\ref{sys1}) as the Euler-Lagrange equations for the minimization of the total energy, under the following three constraints: a constant total angular momentum $-i \hbar R \int [N_B \Psi_B^* \partial_{\theta} \Psi_B + N_P \Psi_P^* \partial_{\theta} \Psi_P] d \theta = L_B \hbar + L_P \hbar = L \hbar$, and a fixed number of $N_B$ bosonic atoms and $N_P$ fermionic pairs, with the three corresponding Lagrange multipliers being $\Omega$, $\mu_B$ and $\mu_P$. 
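As a concrete illustration of how the functional of Eq.\,(\ref{Ham}) and the angular momentum constraint can be evaluated in practice, the following minimal numerical sketch works in units $\hbar = m_B = m_P = R = 1$ and samples $\Psi_B$ and $\Psi_P$ on a uniform grid in $\theta$. The grid size, particle numbers, and coupling constants appearing in the example are illustrative placeholders, not the values used in our calculations.
\begin{verbatim}
import numpy as np

def ring_energy_and_L(psiB, psiP, NB, NP, GB, GP, GFB):
    """Energy of Eq. (Ham) and total angular momentum for hbar = m_B = m_P = R = 1.

    psiB and psiP are sampled on a uniform grid in theta and normalized so that
    sum(|psi|**2)*dtheta = 1.
    """
    n = len(psiB)
    dtheta = 2.0*np.pi/n
    k = np.fft.fftfreq(n, d=1.0/n)          # integer wave numbers on the ring

    def d_dtheta(psi, power):               # spectral derivative d^power/dtheta^power
        return np.fft.ifft((1j*k)**power*np.fft.fft(psi))

    E  = -0.5*NB*np.real(np.vdot(psiB, d_dtheta(psiB, 2)))*dtheta
    E += -0.5*NP*np.real(np.vdot(psiP, d_dtheta(psiP, 2)))*dtheta
    E += (GP*NP**3/3.0)*np.sum(np.abs(psiP)**6)*dtheta
    E += GFB*NP*NB*np.sum(np.abs(psiP)**2*np.abs(psiB)**2)*dtheta
    E += 0.5*GB*NB**2*np.sum(np.abs(psiB)**4)*dtheta
    L  = NB*np.real(np.vdot(psiB, -1j*d_dtheta(psiB, 1)))*dtheta
    L += NP*np.real(np.vdot(psiP, -1j*d_dtheta(psiP, 1)))*dtheta
    return E, L

# Example: the plane-wave state (k_B, k_P) = (0, 1) with illustrative parameters.
n = 256
theta = np.arange(n)*2.0*np.pi/n
psiB = np.ones(n, dtype=complex)/np.sqrt(2.0*np.pi)
psiP = np.exp(1j*theta)/np.sqrt(2.0*np.pi)
print(ring_energy_and_L(psiB, psiP, NB=90, NP=10, GB=1.0, GP=1.0, GFB=0.5))
\end{verbatim}
For homogeneous plane-wave states this reproduces the kinetic energy $N_B k_B^2/2 + N_P k_P^2/2$ plus the three interaction terms, and $L = N_B k_B + N_P k_P$, which is the situation discussed below.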
In the results that follow below we have thus fixed $L$ and we have evaluated the state of lowest energy, treating $\Omega$ as a Lagrange multiplier. We stress that in a harmonic trap, in two, or three spatial dimensions there is the following possibility: When rotated, a paired Fermi system, which is in the BCS regime (only) may form a shell of unpaired atoms, which undergo solid-body rotation, along with a core of non-rotating paired atoms, which are located near the center. As argued in Ref.\,\cite{str}, in this case breaking of the pairs is energetically inexpensive, since the density is low, while the cloud gains energy due to the centrifugal energy of the normal cloud, undergoing solid-body rotation. In the present problem such a ``decoupling" between the pair and the unpaired parts of the cloud is not possible, since the two parts would have to move together. Therefore, we do not expect such an effect to be present here (which would anyway be relevant for $\kappa \approx 1/4$, only). Finally, we should mention that in our model we have excluded the possibility of Efimov states \cite{ES}. Whether these play any role in our problem would be the subject of a separate publication. Such a study should also investigate the effect of losses. \section{Boundary between phase separation and phase coexistence of the two superfluid components} \label{Sec_Mixing} Depending on the value of the parameters, there are two phases. In the one, the two components have an inhomogeneous density distribution and prefer to reside in different parts of the torus/ring, i.e., we have phase separation. In the other, the two components have a homogeneous density distribution, and thus we have phase coexistence. We derive the condition for energetic stability of the homogeneous phase in Appendix~\ref{app_sec_for_mixing_condition} and the condition for its dynamic stability from the Bogoliubov spectrum, in Appendix B. The condition for energetic stability of the phase where the two components are distributed homogeneously and coexist is \begin{widetext} \begin{eqnarray} \left( \frac{\hbar^2}{2 m_B R^2} + 2 G_B n_B^0 \right) \left( \frac{\hbar^2}{2 m_P R^2} + 4 G_P (n_P^0)^2 \right) > 4 G_{FB}^2 n_B^0 n_P^0, \label{CriticalMixingDemixing} \end{eqnarray} \end{widetext} where $n_B^0 = N_B/(2 \pi R)$ and $n_P^0 = N_P/(2 \pi R)$. Let us denote as $\phi_k = e^{i k \theta}/\sqrt{{2\pi R}}$, with $k$ being an integer, the well-known eigensolutions of the (single-particle) kinetic-energy operator $- \hbar^2 \partial_{\theta \theta}/(2 m_P R^2) = - \hbar^2 \partial_{\theta \theta}/(2 m_B R^2)$, that corresponds to the kinetic energy of the particles, under periodic boundary conditions, with an energy $\hbar^2 k^2/(2 m_P R^2) = \hbar^2 k^2/(2 m_B R^2)$ and angular momentum $k \hbar$. It is clear that if Eq.\,(\ref{CriticalMixingDemixing}) is satisfied for the non-rotating state ($L=0$), where $\Psi_B = \phi_0$ and $\Psi_P = \phi_0$, it will also be satisfied for any plane-wave state with nonzero angular momentum $(k_B,k_P)$, where \begin{equation} (k_B,k_P) \equiv \left( \Psi_B = \phi_{k_B} , \ \Psi_P = \phi_{k_P} \right). \label{eq_PW} \end{equation} We stress that this analysis is local and not global. In other words, the condition of Eq.\,(\ref{CriticalMixingDemixing}) does not necessarily imply that any state $(k_B,k_P)$ is the absolute minimum of the energy for the given angular momentum per particle $\ell = L/N = x_B k_B + x_P k_P$, where $N = N_B + N_P$. 
Still, it becomes global when the nonlinear terms in the Hamiltonian are sufficiently large \cite{Zar}, or, equivalently, when the kinetic energy is sufficiently small (as in the present problem). In addition to the energetic stability examined above there is also the dynamic stability of the homogeneous solution. In Appendix B we derive the Bogoliubov spectrum, i.e., the excitation energy $\hbar \omega(k)$, as function of $k$, where $k \hbar$ is the angular momentum of the system, with $k$ being an integer. This is given by the smaller value of $\omega^2$ which solves the following equation, \begin{widetext} \begin{eqnarray} \left( \frac {\hbar^2 k^4} {2 m_B R^2} + 2 G_B n_B^0 k^2 - 2 m_B \omega^2 R^2 \right) \left( \frac {\hbar^2 k^4} {2 m_P R^2} + 4 G_P (n_P^0)^2 k^2 - 2 m_P \omega^2 R^2 \right) = 4 G_{FB}^2 n_B^0 n_P^0 k^4. \label{ds} \end{eqnarray} \end{widetext} The condition for dynamic stability which results from Eq.\,(\ref{ds}) with $k=1$, i.e., for real $\omega$, coincides with that of energetic stability, Eq.\,(\ref{CriticalMixingDemixing}). In addition, it follows from Eq.\,(\ref{ds}) that in the case of a purely bosonic component \begin{eqnarray} m_B c_B^2 = \frac {\hbar^2} {4 m_B R^2} + G_B n_B^0, \end{eqnarray} where $c_B$ is the bosonic speed of sound. Similarly, in a purely fermionic superfluid, \begin{eqnarray} m_P c_P^2 = \frac {\hbar^2} {4 m_P R^2} + 2 G_P (n_P^0)^2, \end{eqnarray} where $c_P$ is the speed of sound of the fermionic pairs. \section{Rotational response of the two superfluids} \label{Sec_NumYrastStates} Let us now turn to the rotational response of the system, under some fixed angular momentum, that we examine below. This problem depends on whether the two components are separated, or they coexist. For this reason, we examine each case separately, below. Before we proceed, it is instructive to identify the three energy scales $E_B$, $E_{FB}$, and $E_F$, which appear in the Hamiltonian of Eq.\,(\ref{Ham}), defined as \begin{eqnarray} \frac {E_B} N = \frac 1 2 x_B^2 G_B n_0 , \,\, \frac {E_{FB}} N = x_B x_P G_{FB} n_0, \label{ensc0} \end{eqnarray} and \begin{eqnarray} \frac {E_F} N = \frac 1 3 G_P x_P^3 n_0^2, \label{ensc} \end{eqnarray} where $n_0 = N/(2 \pi R)$, $x_P = N_P/N$, and $x_B = N_B/N$. Denoting as $K$ the kinetic energy per particle, $K = \hbar^2/(2 m_B R^2) = \hbar^2/(2 m_P R^2)$, we introduce the useful dimensionless quantities $\epsilon_B = {E_B}/(N K)$, $\epsilon_{FB} = {E_{FB}}/(N K)$, and $\epsilon_F = {E_F}/(N K)$. In what follows below we consider values of $\epsilon_B$, $\epsilon_{FB}$, and $\epsilon_F$ which are much larger than unity, as is also the case experimentally. The terms $E_B$ and $E_{FB}$ are the familiar ones, met also in the case of boson-boson mixtures. On the other hand, $E_F$ comes from the Fermi pressure of the ``underlying" fermionic origin of the pairs, and it acts as an effective repulsive potential, which, however, does not scale with the density in the usual, quadratic, way, that we are familiar with from the case of contact interactions. In this case of the pairs, the corresponding energy per unit length increases with the third power of the pair density, i.e., there is a stronger dependence of this term on the density. This is one of the major differences between the problem of Fermi-Bose, and Bose-Bose mixtures. Finally, we remark that while the contact potential corresponds to two-body collisions, $E_F$ resembles a term that corresponds to three-body collisions. 
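To make the use of Eq.\,(\ref{ds}) concrete, the short sketch below solves it as a quadratic equation in $\omega^2$ for a given integer $k$, and checks Eq.\,(\ref{CriticalMixingDemixing}) at $k=1$, where stability of the homogeneous phase requires the smaller branch of $\omega^2$ to be non-negative, i.e., $\omega$ real. The numerical values used in the example are placeholders.
\begin{verbatim}
import numpy as np

def bogoliubov_omega2(k, hbar, R, mB, mP, GB, GP, GFB, nB0, nP0):
    """The two omega^2 branches of Eq. (ds) at integer k."""
    A = hbar**2*k**4/(2*mB*R**2) + 2*GB*nB0*k**2
    B = hbar**2*k**4/(2*mP*R**2) + 4*GP*nP0**2*k**2
    C = 4*GFB**2*nB0*nP0*k**4
    # (A - 2 mB R^2 w2)(B - 2 mP R^2 w2) = C, a quadratic in w2 = omega^2
    a, b, c = 4*mB*mP*R**4, -2*R**2*(mP*A + mB*B), A*B - C
    disc = np.sqrt(b**2 - 4*a*c + 0j)
    return (-b - disc)/(2*a), (-b + disc)/(2*a)

def mixed_phase_stable(hbar, R, mB, mP, GB, GP, GFB, nB0, nP0):
    """Eq. (CriticalMixingDemixing); equivalent to real omega at k = 1."""
    lhs = (hbar**2/(2*mB*R**2) + 2*GB*nB0)*(hbar**2/(2*mP*R**2) + 4*GP*nP0**2)
    return lhs > 4*GFB**2*nB0*nP0

# illustrative numbers with hbar = mB = mP = R = 1
pars = dict(hbar=1.0, R=1.0, mB=1.0, mP=1.0, GB=2.0, GP=1.0, GFB=0.5,
            nB0=14.0, nP0=1.6)
w2_minus, w2_plus = bogoliubov_omega2(1, **pars)
print(mixed_phase_stable(**pars), np.sqrt(w2_minus).real)  # speed of sound ~ omega*R at k=1
\end{verbatim}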
Before we proceed, we stress that Bloch's theorem \cite{FB}, which refers to a single component system, is also valid in our two-component system, at least under certain conditions, which are examined in Sec.\,V \cite{prl, Anoshkin}. The easiest case is that of equal masses, $m_B = m_P$, where the energy is a periodic function, on top of a parabola, i.e., \begin{eqnarray} \frac E {N K} = \ell^2 + \frac {e(\ell)} K. \label{bl} \end{eqnarray} Here $e(\ell)$ is a periodic function, with a period equal to unity, $e(\ell + 1) = e(\ell)$. In what follows we measure the energy with respect to the ground-state energy of the non-rotating system, and therefore $e(\ell = 0) = 0$. \subsection{Regime of phase separation} In the regime of phase separation, already for zero angular momentum the density of the two components is inhomogeneous. In Fig.\,\ref{fig:Demixing} we show numerical solutions of Eqs.\,(\ref{sys1}), i.e., the results we have derived by minimizing the energy of the system at fixed angular momentum. More specifically, we show the density distribution of the two components, and the angular momentum carried by each component separately. In these results we choose a large enough value of $G_{FB}$, see Eq.\,(\ref{CriticalMixingDemixing}), so that the demixing is complete, i.e., the two densities have almost zero overlap, see the upper plot of Fig.~\ref{fig:Demixing}. In this case it is energetically favourable for the system to carry the angular momentum via center of mass excitation. In Fig.\,\ref{fig:Demixing} we also consider a population imbalance $x_P = N_P/N = 0.1$ and $x_B = N_B/N = 0.9$, while $\epsilon_B \approx 83.33$, $\epsilon_{FB} \approx 300.0$, and $\epsilon_F \approx 83.33$. From the lower plot of Fig.\,1 we see that the angular momentum is shared by the two components in a trivial way. More specifically, the total angular momentum $L \hbar = L_P \hbar + L_B \hbar$ is divided between the two components in proportion to the mass and the particle number, that is \begin{equation} \frac {L_P} L = \frac{m_P N_P}{m_B N_B + m_P N_P}, \, \frac {L_B} L = \frac{m_B N_B} {m_B N_B + m_P N_P}. \label{share} \end{equation} Since the rotational kinetic energy $K_r$ of the two components is \begin{equation} K_r = K_{r,P} + K_{r,B} = \frac {\hbar^2 L_P^2} {2 N_P m_P R^2} + \frac {\hbar^2 L_B^2} {2 N_B m_B R^2}, \end{equation} it follows trivially from Eq.\,(\ref{share}) that \begin{eqnarray} K_r = \frac {\hbar^2 L^2} {2 (m_B N_B + m_P N_P) R^2}. \label{kr} \end{eqnarray} Therefore, the dispersion relation is exactly parabolic, in agreement with our numerical results (this is also consistent with Bloch's theorem). Finally, we stress that the density distribution of the two components is unaffected by the rotation, since the system is excited via center of mass excitation. As a result, the density distribution shown in the upper plot of Fig.\,1 is independent of $\ell = L/N$. \begin{figure} \includegraphics[scale=0.6]{Fig_1_a} \includegraphics[scale=0.6]{Fig_1_b} \caption{(Color online) The density distribution $n_P(\theta) = N_P |\Psi_P(\theta)|^2$ and $n_B(\theta) = N_B |\Psi_B(\theta)|^2$ of the two components (upper plot), and the distribution of the angular momentum between the two components (lower plot), in the regime of phase separation. Here the blue color (solid line) corresponds to the bosons, and the red color (dashed line) corresponds to the pairs.
Also, $x_P=0.1$ and $x_B=0.9$, in the BCS regime ($\kappa = 1/4$), $\epsilon_B \approx 83.33$, $\epsilon_{FB} \approx 300.0$, and $\epsilon_F \approx 83.33$. The density of the two components is independent of the angular momentum. On the lower figure, $\ell_{B,P}/\ell = m_{B,P} N_{B,P}/(m_B N_B + m_P N_P)$, as explained in the text.} \label{fig:Demixing} \end{figure} \subsection{Regime of phase coexistence} We now turn to the problem where the two components have a homogeneous density distribution and thus they coexist. In this case the solution of Eqs.~(\ref{sys1}) for the (non-rotating) ground state is the trivial one, i.e., the plane-wave state $\Psi_B = \phi_0$, and $\Psi_P = \phi_0$. In Figs.\,\ref{fig:3} and \ref{fig:5} we show again numerical solutions of Eqs.\,(\ref{sys1}), i.e., the results we have derived minimizing the energy of the system, fixing the angular momentum to some values. In these data we consider a population imbalance of $x_P = N_P/N = 0.1$ and $x_B = N_B/N = 0.9$, in the BCS regime ($\kappa=1/4$). Finally, we choose $\epsilon_{FB} \approx 143.2$ for the Fermi-Bose coupling, $\epsilon_F \approx 83.33$ and two different values for the Bose-Bose coupling, $\epsilon_B \approx 3223$ in Fig.\,\ref{fig:3}, and $\epsilon_B \approx 644.6$ in Fig.\,\ref{fig:5}. Before we discuss each graph separately, let us start with some general remarks. From Eq.\,(\ref{bl}), subtracting from the energy of the system the energy due to the center of mass excitation, $E/(NK) - \ell^2$, we get the function $e(\ell)/K$, which is shown in the upper plot of Fig.\,2 and Fig.\,3. This has an exact periodicity of $\ell = 1$, due to Bloch's theorem. On top of this, it also has a quasi-periodicity, $\ell = 0.1$. As we show below, this is set by the minority component, and for the parameters we have chosen this is the paired fermionic superfluid, with $x_P = N_P/N = 0.1$. Furthermore, when $\ell = L/N$ is an integer multiple of $x_P = N_P/N$, in their lowest energy, the two components are always in plane-wave states $(k_B, k_P)$, due to the large value of $\epsilon_B, \epsilon_{FB}$, and $\epsilon_F$ that we have considered \cite{RoussouNJP2018, Zar}. The actual value of $k_B$ and $k_P$ is determined by the minimization of the corresponding kinetic energy, $x_B k_B^2 + x_P k_P^2$, under the constraint of the fixed angular momentum, $\ell = x_B k_B + x_P k_P$, that we want the system to have. Clearly this does not depend on the value of the rest of the parameters, and for this reason the value of $k_B$ and $k_P$ is the same (for some given value of $\ell$) in both figures. When $\epsilon_B$ and $\epsilon_F$ are much larger than unity, and we have phase coexistence, it is energetically favourable for the system to have a homogeneous density distribution in both components, for any value of $\ell$ \cite{RoussouNJP2018}. With the constraint of angular momentum this is not always possible, though. For small values of the angular momentum, where we have sound waves (and the dispersion relation is linear in $\ell$), there is predominantly excitation of only one of the components. This may be seen from the (smaller) solution for $\omega R$ of Eq.\,(\ref{ds}), which is the speed of sound, and the corresponding eigenvector of the matrix of Eq.\,(\ref{matrix}). 
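The selection of $(k_B,k_P)$ described above is a small integer optimization, which may be illustrated with the brute-force sketch below; the particle numbers and the search window used in it are illustrative.
\begin{verbatim}
def best_plane_waves(L, NB, NP, kmax=30):
    """Minimize NB*kB**2 + NP*kP**2 over integers kB, kP with NB*kB + NP*kP = L."""
    best = None
    for kB in range(-kmax, kmax + 1):
        rem = L - NB*kB
        if rem % NP == 0:
            kP = rem // NP
            cost = NB*kB**2 + NP*kP**2
            if best is None or cost < best[0]:
                best = (cost, kB, kP)
    return best

# NB = 90, NP = 10 (x_P = 0.1): at L = 50, i.e. l = 1/2, the states (0, 5) and (1, -4)
# are degenerate in kinetic energy, which is where the bosonic order parameter jumps.
for L in (10, 20, 50, 60):
    print(L/100, best_plane_waves(L, NB=90, NP=10))
\end{verbatim}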
\begin{figure} \includegraphics[scale=0.6]{Fig_2_a} \includegraphics[scale=0.6]{Fig_2_b} \caption{(Color online) The periodic function $e(\ell)$ (upper plot), and the distribution of the angular momentum between the two components (lower plot), in the regime of phase coexistence. Here the blue color (solid line) corresponds to the bosons, and the red color (dashed line) corresponds to the pairs. Also, $x_P=0.1$ and $x_B = 0.9$, in the BCS regime ($\kappa=1/4$), $\epsilon_B \approx 3223$, $\epsilon_{FB} \approx 143.2$, and $\epsilon_F \approx 83.33$. The quasi-periodic behaviour in $L/N$ is set by $N_P/N = 0.1$. In the lower figure we see that the fermionic pairs carry (almost) all the angular momentum up to $\ell = 1/2$, since $\Psi_B \simeq \phi_0$ in this interval. For $1/2 < \ell < 3/2 $ the bosonic order parameter $\Psi_B$ is $\simeq \phi_1$. Finally, for $3/2 < \ell < 5/2$, $\Psi_B \simeq \phi_2$, etc. The indices, e.g., $(0,0)$ denote the order parameter of the two superfluids at each local minimum, in the notation of Eq.\,(\ref{eq_PW}).} \label{fig:3} \end{figure} \begin{figure} \includegraphics[scale=0.6]{Fig_3_a} \includegraphics[scale=0.6]{Fig_3_b} \caption{(Color online) The periodic function $e(\ell)$ (upper plot), and the distribution of the angular momentum between the two components (lower plot), in the regime of phase coexistence. Here the blue color (solid line) corresponds to the bosons, and the red color (dashed line) corresponds to the pairs. Also, $x_P=0.1$ and $x_B = 0.9$, in the BCS regime ($\kappa=1/4$), $\epsilon_B \approx 644.6$, $\epsilon_{FB} \approx 143.2$, and $\epsilon_F \approx 83.33$. The pairs are now in plane-wave states (as opposed to Fig.~\ref{fig:3}). The indices, e.g., $(0,0)$ denote the order parameter of the two superfluids at each local minimum, in the notation of Eq.\,(\ref{eq_PW}).} \label{fig:5} \end{figure} \begin{figure} \includegraphics[scale=0.6]{Fig_4} \vspace{1cm} \caption{The periodic function $e(\ell)$, derived from the two parabolas of Eqs.\,(\ref{E1}) and (\ref{E2}), for $\epsilon_B = 5$. Here $x_P = 0.1$ and $x_B = 0.9$. There is a level crossing at $\ell_0 \approx x_P/2 = 0.05$. The system follows the lower, solid curves, undergoing a discontinuous transition at the crossing point, i.e., at $\ell = \ell_0$. \label{fig:cross}} \end{figure} For the chosen parameters, we evaluate from Eq.\,(\ref{ds}) the speed of sound, i.e., the slope of the dispersion relation, to be $c_P \approx 50.0 \, \hbar/(m_P R)$ in Fig.\,\ref{fig:3}, in agreement with our numerical data. In this case we have (predominantly) excitation of the fermion pairs. In Fig.\,\ref{fig:5} the speed of sound is $c_B \approx 26.17 \, \hbar/(m_B R)$, again in agreement with our numerical results, and in this case we have (predominantly) excitation of the bosonic component. These results already help us explain Figs.\,\ref{fig:3} and \ref{fig:5}, at least for sufficiently small values of $\ell$. Let us now examine each plot separately, starting with Fig.\,\ref{fig:3}. In this case it is energetically favourable for the bosonic component to reside always in plane-wave states and thus retain its homogeneous density distribution, for all values of the angular momentum. In that sense, the ``interesting'' component here is the fermionic one.
As a result, the density of the pairs changes as the angular momentum is varied, having the usual density distribution of solitary-wave excitation under periodic boundary conditions \cite{solf}, while the bosonic density is (to a rather good precision) constant. For $0 \le \ell < 1/2$, $\Psi_B \simeq \phi_0$ and the fermionic pairs carry (almost) all the angular momentum (as seen in the lower plot of Fig.\,2). Within this interval, we observe subintervals with a width of $\Delta \ell = N_P/N = x_P = 0.1$ (as seen in the upper plot). For $0 < \ell < x_P$ the system shows all the characteristics of an ``ordinary'' solitary wave (in the pairs), with a dispersion relation that has a negative curvature. This is due to the effective repulsive potential, discussed above (see Eq.\,(\ref{ensc}) and the discussion that follows it). In order to get some insight, it is instructive to write down the trial (two-state) order parameter for the pairs in the limit of ``weak'' interactions, i.e., when the three energy scales of Eqs.\,(\ref{ensc0}) and (\ref{ensc}) are much smaller than the kinetic energy $K$. Then, for $0 < \ell < x_P$, \begin{eqnarray} \Psi_B = \phi_0, \,\, \Psi_P = \sqrt{1 - \frac {\ell} {x_P}} \phi_0 + \sqrt{\frac {\ell} {x_P}} \phi_1. \end{eqnarray} In the second subinterval, $x_P < \ell < 2 x_P$, $\Psi_P$ changes trivially. More specifically, we have center of mass excitation, and thus $\Psi_P$ is simply multiplied by the phase $e^{i \theta}$. We stress that this does not affect the interaction energy, altering only the kinetic energy (and, obviously, the angular momentum). Therefore, \begin{eqnarray} \Psi_B = \phi_0, \,\, \Psi_P = \sqrt{1 - \frac {\tilde{\ell}} {x_P}} \phi_1 + \sqrt{\frac {\tilde{\ell}} {x_P}} \phi_2, \end{eqnarray} where $0 < \tilde{\ell} < x_P$. This continues all the way up to the interval $4 x_P < \ell < 5 x_P = 1/2$. For $1/2 < \ell < 6 x_P$, instead of the pair order parameter $\Psi_P$ being multiplied by the phase $e^{5 i \theta}$, it is more favourable for the bosonic order parameter $\Psi_B$ to undergo a discontinuous transition and jump to the next plane-wave state, $\Psi_B \simeq \phi_1$. The fermionic order parameter then adjusts to this change, and is multiplied by the phase $e^{-4 i \theta}$. As a result, \begin{eqnarray} \Psi_B = \phi_1, \,\, \Psi_P = \sqrt{1 - \frac {\tilde{\ell}} {x_P}} \phi_{-4} + \sqrt{\frac {\tilde{\ell}} {x_P}} \phi_{-3}. \end{eqnarray} The same situation continues all the way up to $\ell = 1$. Then, the rest of the spectrum, for $\ell > 1$, is determined by Bloch's theorem \cite{FB}, in agreement with the numerical results of Fig.\,2. Turning to Fig.\,\ref{fig:5}, the situation is more subtle. In a sense the role of the two components is reversed, since it is now energetically more favourable to keep the pairs -- and not the bosons -- in plane-wave states. There is one important difference, though, compared with the previous case. Although in Fig.\,\ref{fig:3} the slope of the dispersion relation at $\ell = x_P/2 = 0.05$ is continuous, in the present case, at $\ell \approx x_P/2 = 0.05$, it has a discontinuity, as seen in the upper plot of Fig.\,\ref{fig:5}. (This discontinuity also appears, approximately, at all the odd-integer multiples of $x_P/2 = 0.05$, i.e., at $\ell \approx 0.05, 0.15, 0.25$, etc.) In order to understand this qualitatively, let us consider again the limit of weak interactions.
For the trial states \begin{eqnarray} \Psi_B = \sqrt{1 - \frac {\ell} {x_B}} \phi_0 + \sqrt{\frac {\ell} {x_B}} \phi_1, \,\, \Psi_P = \phi_0 \label{tr1} \end{eqnarray} the allowed values of $\ell$ are $0 \le \ell \le x_B$. Considering also the trial order parameters \begin{eqnarray} \Psi_B = \sqrt{\frac {x_P - \ell} {x_B}} \phi_{-1} + \sqrt{1 - \frac {x_P - \ell} {x_B}} \phi_0, \,\, \Psi_P = \phi_1 \label{tr2} \end{eqnarray} the allowed values of $\ell$ are $x_P-x_B \le \ell \le x_P$. Thus the common range of $\ell$ of the states of Eqs.\,(\ref{tr1}) and (\ref{tr2}) is $0 \le \ell \le x_P$. Evaluating the energy in the states of Eq.\,(\ref{tr1}) we find that, \begin{eqnarray} \frac E {N K} = \ell + 2 \epsilon_B \left[ \frac {\ell} {x_B} \left( 1 - \frac {\ell} {x_B} \right) \right]. \label{E1} \end{eqnarray} Similarly, for the states of Eq.\,(\ref{tr2}), \begin{eqnarray} \frac E {N K} = 2 x_P - \ell + 2 \epsilon_B \left[ \frac {x_P - \ell} {x_B} \left( 1 - \frac {x_P - \ell} {x_B} \right) \right]. \nonumber \\ \label{E2} \end{eqnarray} Figure 4 shows the two parabolas of Eqs.\,(\ref{E1}) and (\ref{E2}). There is a clear level crossing, which leads to a discontinuous transition and also to the discontinuity in the slope of the dispersion relation. In the limit where the radius of the ring increases, with $n_B^0$ kept fixed, this takes place exactly at $\ell = x_P/2$. At this value of $\ell$ also the order parameter of the pairs undergoes a discontinuous transition from $\Psi_P \simeq \phi_0$ to the state $\Psi_P \simeq \phi_1$, up to $\ell = 3 x_P/2$, etc. Having understood the behaviour of the system at the interval $0 \le \ell < x_P$, the rest of the spectrum follows by center of mass excitation, according to Bloch's theorem, as in the case examined earlier. \section{Effect of the mass imbalance between the bosonic atoms and the paired fermions} Up to now we have assumed that $m_B = m_P$. We examine now the more general problem, where $m_B \neq m_P$ \cite{Anoshkin}. Let us consider the many-body wavefunction of the bosonic atoms and of the fermion pairs in some interval of the total angular momentum $0 \le L_0 \le L_{\rm per}$, \begin{eqnarray} \Psi_{L_0} = \Psi_{L_0}(\theta_1, \dots \theta_{N_B}, \varphi_1, \dots, \varphi_{N_P}). \end{eqnarray} Here the coordinates $\theta_i$, with $1 \le i \le N_B$, refer to the bosonic component, while $\varphi_i$, with $1 \le i \le N_P$, refer to the fermion pairs. Motivated by the case of equal masses, let us investigate now the conditions which allow us to excite the center of mass of this two-component system. First of all, the center of mass coordinate $\Theta_{\rm CM}$ is \begin{eqnarray} \Theta_{\rm CM} = \frac 1 {m_B N_B + m_P N_P} \left(m_B \sum_{i=1}^{N_B} \theta_i + m_P \sum_{i=1}^{N_P} \varphi_i \right). \nonumber \\ \end{eqnarray} In order to achieve center of mass excitation with some integer multiple of $L_{\rm per}$, say $n L_{\rm per}$, we have to act with $e^{i n L_{\rm per} \Theta_{\rm CM}}$ on $\Psi_{L_0}$. This operation will give $\Psi_{L_0 + n L_{\rm per}}$. In order to do that, and since we have to satisfy the periodic boundary conditions (without loss of generality we set $n=1$ for the moment), the two combinations which appear in the exponent, $L_{\rm per} m_B/(m_B N_B + m_P N_P)$ and $L_{\rm per} m_P/(m_B N_B + m_P N_P)$, have to be integers, say $p$ and $q$, respectively, i.e., \begin{eqnarray} \frac {L_{\rm per} m_B} {m_B N_B + m_P N_P} = p, \,\,\, \frac {L_{\rm per} m_P} {m_B N_B + m_P N_P} = q. 
\end{eqnarray} From the above two equations it follows that \begin{eqnarray} \frac {m_B} {m_P} = \frac p q \label{pcn} \end{eqnarray} and also \begin{eqnarray} L_{\rm per} = p N_B + q N_P. \label{laper} \end{eqnarray} Therefore, in order to be able to excite the center of mass motion of the system, the ratio between the masses has to be a rational number. In addition, the period in $L$, $L_{\rm per}$, is no longer $N$, as in the symmetric model, i.e., $m_B = m_P$, but rather $L_{\rm per} = p N_B + q N_P$. Actually, the (smallest) period is the one that results from the values of $p$ and $q$ divided by their greatest common divisor. Clearly, for $p = q = 1$ we recover the symmetric case, where $L_{\rm per} = N$. We stress that the above results coincide with the ones in Sec.\,IV A, when Eq.\,(\ref{pcn}) is valid. The difference is that in the case of phase separation there is no restriction on the masses, while here the ratio between the masses has to be a rational number. The reason for this is the following. When we have phase separation, the density of the two components is sufficiently small over a certain part of the ring, which allows the phases of $\Psi_B$ and $\Psi_P$ to vary and satisfy the boundary conditions without any effect on any physical observable. On the contrary, in the case of phase coexistence this freedom in the phase is no longer available, and the periodic boundary conditions require that Eq.\,(\ref{pcn}) holds. Let us now turn to the energy spectrum. From the previous discussion, the bosonic component takes an angular momentum $L_B = n p N_B$, while the pairs take $L_P = n q N_P$ (we now consider the more general case, with $n$ being any positive integer). Within the mean-field approximation, if the order parameters of the two components for $0 \le L_0 \le L_{\rm per}$ are expanded in the plane-wave states $\phi_m$ \begin{eqnarray} \Psi_B^0 = \sum_m c_m \phi_m, \,\,\, \Psi_P^0 = \sum_m d_m \phi_m, \label{boo1} \end{eqnarray} then at any other interval with $n L_{\rm per} \le L \le (n+1) L_{\rm per}$, \begin{eqnarray} \Psi_B^n = \sum_m c_{m} \phi_{m + n p}, \,\,\, \Psi_P^n = \sum_m d_m \phi_{m + n q}. \label{boo2} \end{eqnarray} It turns out that the total angular momentum $L_n$ in these states is, indeed, $L_n = L_0 + n L_{\rm per}$. Also, if $K_0$ is the total kinetic energy of the states of Eq.\,(\ref{boo1}), and $K_n$ is the total kinetic energy of the states of Eq.\,(\ref{boo2}), then \begin{eqnarray} K_n - K_0 = \frac {\hbar^2} {2 M R^2} (L_{\rm per} n^2 + 2 L_0 n), \end{eqnarray} where $M = m_B/p = m_P/q$. The form of $K_n$ is \begin{eqnarray} K_n = \frac {\hbar^2} {2 M R^2} \left( n + \frac {L_0} {L_{\rm per}} \right)^2 L_{\rm per} = \frac {\hbar^2} {2 M R^2} \frac {L_n^2} {L_{\rm per}}, \end{eqnarray} and finally, the energy spectrum for the total energy $E$ per particle is \begin{eqnarray} \frac E N = \frac {\hbar^2} {2 M R^2} \frac {L^2} {N L_{\rm per}} + e(L), \label{blm} \end{eqnarray} where we have dropped the index $n$ in $L$. Here, $e(L)$ is a periodic function, with period $L_{\rm per}$. Finally, introducing ${\tilde K} = \hbar^2/(2 M R^2)$, Eq.\,(\ref{blm}) may be written as \begin{eqnarray} \frac {E} {N {\tilde K}} = \frac {\ell^2} {p x_B + q x_P} + \frac {e(\ell)} {\tilde K}. \label{blmmm} \end{eqnarray} The first term on the right coincides with Eq.\,(\ref{kr}). Furthermore, for equal masses the above expression reduces to Eq.\,(\ref{bl}).
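In practice, $p$, $q$ and $L_{\rm per}$ can be extracted as in the short sketch below; the use of \texttt{limit\_denominator} reflects the fact that a physical mass ratio is only approximately rational, and the particle numbers in the example are illustrative.
\begin{verbatim}
from fractions import Fraction
from math import gcd

def rotational_period(mB, mP, NB, NP, max_den=1000):
    """Return p, q and L_per = p*NB + q*NP for a (nearly) rational mass ratio mB/mP."""
    ratio = Fraction(mB/mP).limit_denominator(max_den)
    p, q = ratio.numerator, ratio.denominator
    g = gcd(p, q)   # Fraction already returns coprime p, q; kept to mirror the text
    return p//g, q//g, (p//g)*NB + (q//g)*NP

# m_B = M and m_P = 3M, with N_B = 90 and N_P = 10: L_per = 120, i.e. L_per/N = 1.2
print(rotational_period(1.0, 3.0, NB=90, NP=10))
\end{verbatim}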
In Fig.\,5 we have considered an example of unequal masses, with $m_B/m_P = 1/3$, i.e., $p = 1$ and $q = 3$, and we have evaluated the dispersion relation, which is in agreement with Eq.\,(\ref{blmmm}). More specifically, we have considered the same $G_B$ and $G_{FB}$ as in Fig.\,2, $x_P=0.1$ and $x_B = 0.9$, and $\kappa=1/4$, i.e., the BCS regime. As in the upper plots of Figs.\,2 and 3, we subtract again the energy due to the center of mass excitation, i.e., we plot $e(\ell)/{\tilde K}$, while in the lower plot we also show how the angular momentum is distributed between the two species. From these plots we see the expected exact periodicity $L_{\rm per}/N = p x_B + q x_P = 1.2$ of $e(\ell)$. On top of that, we still have the quasi-periodicity, equal to $0.1$, set by the minority component, seen also in Figs.\,2 and 3, which was analysed in the previous section. Clearly, in a real system, the ratio between the two masses is not a rational number in general. Still, even if this ratio is only close to some rational number, one expects the deviations from the derived spectrum to be perturbatively small \cite{Anoshkin}. \begin{figure} \includegraphics[scale=0.6]{Fig_5_a} \includegraphics[scale=0.6]{Fig_5_b} \caption{(Color online) The periodic function $e(\ell)$ (upper plot), and the distribution of the angular momentum between the two components (lower plot), in the regime of phase coexistence, for unequal boson and fermion-pair masses, $m_B = M$, $m_P = 3 M$. Here, the blue color (solid line) corresponds to the bosons, and the red color (dashed line) corresponds to the pairs. Also, $x_P=0.1$ and $x_B = 0.9$, in the BCS regime ($\kappa=1/4$), with $G_B$ and $G_{FB}$ being the same as in Fig.\,2. The periodicity $L_{\rm per}/N$, according to Eq.\,(\ref{laper}), is 1.2, as seen in the figures. The unit of energy ${\tilde K}$ used here on the $y$ axis is $\hbar^2/(2 M R^2)$. The indices, e.g., $(0,0)$ denote the order parameter of the two superfluids at each local minimum, in the notation of Eq.\,(\ref{eq_PW}).} \label{fig:difm} \end{figure} \section{Summary and Conclusions} In the present study we have considered a mixture of a Bose-Einstein condensate with a paired fermionic superfluid, at zero temperature. We have assumed that these two components are confined in a ring potential, as in a very tight toroidal trapping potential. The periodic boundary conditions, combined with the degrees of freedom of the two superfluids, give rise to interesting effects. In the non-rotating ground state of the system there are clearly two phases that one may identify. In the one, the two components are distributed homogeneously around the ring, while in the other, the components separate spatially. The rotational response of the system, which is the main question that we have investigated here, depends crucially on the ground state. When the two components separate, they carry their angular momentum via center of mass excitation. The more interesting case is the one where the two components coexist uniformly (in the non-rotating ground state). For small values of the angular momentum, we solved the problem via linearisation of the two coupled equations. This approach also allowed us to identify the nature of the sound-wave excitation of the system. For the more general case, we solved the problem numerically.
Interestingly enough, our results show that for a rather wide range of the parameters, and also for a relatively large population imbalance, the vast majority of the angular momentum is carried by one of the two components. This is not a surprise, since, for the rather strong non-linear terms we have considered (as in real experiments), it is energetically favourable for the components to maintain their homogeneous density distribution. As the angular momentum increases, the static component starts rotating, too. In certain cases (see Figs.\,3 and 4), this is accomplished via discontinuous transitions. Another interesting consequence of the derived dispersion relation is related to the local minima, which show up at integer multiples of the concentration of the minority component. These minima may give rise to persistent currents. The high degree of tunability of these minima, which depend on the population imbalance, the strength of the nonlinear terms, and the masses of the two components, is not only an interesting theoretical result, but may also have technological applications. In all of our displayed results we have assumed that the majority of the particles are the bosonic atoms. Still, the derived results are representative -- at least qualitatively -- of the phases that show up also in the opposite limit, where the pairs are the dominant component. Actually, we argue that only in the special case where the populations of the two components are rather close to each other may the picture presented here be altered significantly (at least when the ratio between $L_{\rm per}$ and the population of the minority component is an integer, as in the results considered in this study). In addition, according to Sec.\,V, no dramatic change occurs in the dispersion relation in the case of a mass imbalance, apart from the period of $e(\ell)$, provided that the mass ratio is a rational number, or close to it. Therefore, the present results are representative not only in terms of the population imbalance, but also in terms of the mass imbalance between the bosonic atoms and the fermionic pairs. We thus come to the conclusion that, despite the large parameter space that one has to cover in order to get the full picture, the present results cover a substantial fraction of the full phase diagram. Compared with the problem of a bosonic mixture, the present problem has qualitative similarities. The main difference lies in the nonlinear term that appears for the pairs of fermions. While in Bose-Bose mixtures the energy per unit length scales quadratically with the density (as a result of the assumed s-wave collisions), here the nonlinear term that corresponds to the fermionic component has a stronger density dependence, which goes as the third power of the density. This dependence comes from the Fermi pressure of the fermionic atoms which constitute the pairs, and in that sense it is of a very different nature. Interestingly enough, this term also resembles a three-body collision term in the Hamiltonian. It would definitely be interesting to investigate this problem also experimentally, in order to confirm the richness of the phases seen here. To make contact with experiment, for a ring of radius $R = 100$ $\mu$m, $N = 10^3$ atoms, scattering lengths $a_B$ and $a_{FB}$ of 100 \AA, a transverse width of the torus of $1 \, \mu$m, and a population imbalance $N_P/N_B = 10$, one gets $\epsilon_B \approx 10^3$, $\epsilon_{FB} \approx 10^2$, and $\epsilon_F \approx 10^2$.
Furthermore, all three energy scales $E_B/N$, $E_{FB}/N$, and $E_F/N$ are at least an order of magnitude smaller than the oscillator energy quantum associated with the transverse degrees of freedom, and thus the motion of the atoms should be, to a rather good degree, quasi-one-dimensional. Finally, the typical value of the speed of sound for these parameters is a few tens of mm/sec. \section{Acknowledgments} M.~\"{O}. acknowledges partial support from ORU-RR-2019. The authors wish to thank M. Magiropoulos and J. Smyrnakis for useful discussions.
\section{Introduction} \par Let ${\mathcal P}_n$ be the space of all polynomials on the complex plane ${\Bbb C}$ whose degree is at most $n$. Let ${\mathcal R}_{nm}$ be the space of rational functions $R_{nm}=P_n/Q_m$ where $P_n\in{\mathcal P}_n$ and $Q_m\in{\mathcal P}_m$. \par If $f$ is a function on a compact set $K\subset{\Bbb C}$, then we denote by $N_K(n)$ and $N_K(n,m)$ the maximal number of zeros on $K$ of the functions $f-p$, where $p\in{\mathcal P}_n$, respectively $p\in{\mathcal R}_{nm}$. Since functions in ${\mathcal P}_n$ or ${\mathcal R}_{nm}$ have $n+1$ or, respectively, $n+m+2$ coefficients, $N_K(n)\ge n+1$ and $N_K(n,m)\ge n+1$. \par In this paper we consider the situations when for a fixed function $f$ we have either polynomial or rational {\it overinterpolation}. This means that $\lim_{n\to\infty}N_K(n)/n=\infty$ or $\lim_{n\to\infty}N_K(n,m)/n=\infty$. \par One can expect that in the case of overinterpolation, the function $f$ must be either polynomial or rational. We prove two theorems of this kind. Before we state them, let us introduce some notation. \par Let $\Delta_r\subset{\Bbb C}$ be the open disk of radius $r$ centered at the origin, and $\Delta\subset{\Bbb C}$ be the open unit disk. We denote by $O(\Delta_r)$ and $O(\overline\Delta_r)$ the set of holomorphic functions on $\Delta_r$, respectively on neighborhoods of $\overline\Delta_r$. \par The first theorem proved in Section \ref{S:obp} states that for an analytic function $f$ overinterpolation by polynomials implies that $f$ is a polynomial. \begin{Theorem}\label{T:Lagr}Let $f\in O(\Delta)$ and $K=\overline\Delta_r$, where $r<1$. If $\lim_{n\to\infty}N_K(n)/n=\infty$ then $f$ is a polynomial. \end{Theorem} \par For a function $f$ as above, the second theorem states that overinterpolation in ${\mathcal R}_{n1}$ implies that either $f$ is entire or it belongs to ${\mathcal R}_{n1}$. \begin{Theorem}\label{T:rational} Let $f\in O(\Delta)$ and $K=\overline\Delta_r$, where $r<1$. If $\lim_{n\to\infty}N_K(n,1)/n=\infty$ then either $f$ is entire or $f=P/Q$, where $P,\,Q$ are polynomials, $\deg Q=1$ and $Q$ does not divide $P$. \end{Theorem} \par This theorem is proved in Section \ref{S:obrf}, where we also consider the case of Pad\'e interpolation, i.e. when $K=\{0\}$. For any germ of an analytic function $f$ at 0 and any fixed $m\in{\Bbb N}$, we show that $N_K(n,m)\leq n+m+1$ for infinitely many $n$, unless $f$ is the germ of a rational function. \par The expected rate of rational approximation of continuous or analytic functions is at most geometric, but in some cases functions can be approximated faster. This phenomenon is called {\it overconvergence}. In \cite{Go} and \cite{Ch} Gonchar and Chirka have shown that in this case the functions have specific forms. In Section \ref{S:oao} we prove that overinterpolation implies overconvergence on some circle and, therefore, overinterpolated functions have the same specific forms as in the results of Gonchar and Chirka. \par For the entire function $f(z)=\sum 2^{-n!}z^n$, its Taylor series is overconvergent but by Theorem \ref{T:Lagr} $f$ cannot be overinterpolated by polynomials. Hence overconvergence does not imply overinterpolation. \par The assumption in all our results that $f\in O(\Delta)$ seems to be a technical necessity. In the last section we consider the interpolation of a general set $S$ in ${\Bbb C}^2$ by algebraic functions, i. e., we are looking for the maximal number $N_S^a(n)$ of zeros on $S$ of a polynomial of degree $n$ which does not vanish on $S$. 
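As a numerical aside, the counting function $N_K(n)$ introduced above is easy to probe on a computer. The following sketch is purely illustrative: the choices $f=e^z$, $r=1/2$, $n=6$ and the random interpolation nodes are ours, and the zeros of $f-L_nf$ in $\overline\Delta_r$ are counted with the argument principle.
\begin{verbatim}
import numpy as np

f, fp = np.exp, np.exp          # an entire, non-polynomial test function and its derivative
r, n = 0.5, 6                   # illustrative disk radius and interpolation degree
rng = np.random.default_rng(1)
nodes = r*np.sqrt(rng.random(n + 1))*np.exp(2j*np.pi*rng.random(n + 1))

# Lagrange interpolant L_n f as a degree-n polynomial (coefficients, highest power first)
coef = np.linalg.solve(np.vander(nodes, n + 1), f(nodes))

# zeros of g = f - L_n f inside |z| <= r, counted by the argument principle
t = np.linspace(0.0, 2.0*np.pi, 4096, endpoint=False)
z = r*np.exp(1j*t)
g = f(z) - np.polyval(coef, z)
gp = fp(z) - np.polyval(np.polyder(coef), z)
count = np.sum(gp/g*1j*z)*(t[1] - t[0])/(2j*np.pi)
print(round(count.real))        # typically n + 1 = 7, matching N_K(n) >= n + 1
\end{verbatim}
Theorem~\ref{T:Lagr} states that a growth $N_K(n)/n\to\infty$ of this count would force $f$ to be a polynomial.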
The desirable estimate is $N_S^a(n)\le An^\alpha$, where $A$ and $\alpha$ are some constants. We show that either $S$ is finite, or $\alpha=1$ and $S$ is contained in an irreducible algebraic curve, or $\alpha\ge2$. \par It should be noted that in \cite{CP3} we proved for a large class of meromorphic functions $f$ on ${\Bbb C}$ with finitely many poles, including the Riemann $\zeta$-function, that if $S$ is the graph of $f$ over $\Delta_r$, then $N^a_S(n)\le An^2\log r$. \section{Overinterpolation by polynomials}\label{S:obp} If $f\in O(\overline\Delta_R)$ we set $$M(r,f)=\max\{|f(z)|:\,|z|=r\},\;r\leq R.$$ \par We will need the following lemmas: \begin{Lemma}\label{L:Lagr} Let $f\in O(\overline\Delta_R)$ and $L_nf$ denote the Lagrange interpolating polynomial of $f$ at the (not necessarily distinct) points $z_0,\dots,z_n\in\overline\Delta_r$, where $r<R$. If $0<s<R$ then $$M(s,f-L_nf)\leq M(R,f)\frac{R}{R-s}\left(\frac{s+r}{R-r}\right)^{n+1}.$$\end{Lemma} \begin{pf} Let $\omega(z)=(z-z_0)\dots(z-z_n)$. By \cite[p. 59, (1.4)]{G} we have $$f(z)-L_nf(z)=\frac{1}{2\pi i}\int_{|t|=R} \frac{\omega(z)f(t)}{\omega(t)(t-z)}\,dt.$$ The lemma follows since $|\omega(t)|\geq(R-r)^{n+1}$ for $|t|=R$, and since $M(s,\omega)\leq(s+r)^{n+1}$.\end{pf} \par For $R>0$ let $$R_+=\max\{R,1\}.$$ We have the following estimate of Taylor coefficients. \begin{Lemma}\label{L:eftc} Let $f\in O(\Delta)$, $f(z)=\sum_{k\ge0}f_kz^k$. Suppose that $0<r<1$ and the function $f-P_n$ has $N$ zeros in $\overline\Delta_r$, where $P_n$ is a polynomial of degree at most $n$. There exist positive constants $A\ge 1$, $a<1$ and $\delta$, depending only on $r$, with the following property: If $N\ge A(n+1)$, then $$|f_k|\le \frac{M(R,f)}{R_+^{n+1}}\;a^N,$$ for $n<k\le\delta N$ and every $R\ge(r+2)/3$ such that $f\in O(\overline\Delta_R)$. \end{Lemma} \begin{pf} Let $s=(2r+1)/3$ and fix $R$ as in the statement. Let $z_0,\dots,z_{N-1}$ be zeros of $f-P_n$ in $\overline\Delta_r$. Since $N\ge n+1$ the polynomial $P_n=L_nf$ is the Lagrange interpolating polynomial of $f$ at $z_0,\dots,z_n$. Since $f-P_n$ has $N$ zeros in $\overline\Delta_r$, we have by \cite[Theorem 2.2]{CP2} (see the formula on p. 578) $$M(r,f-P_n)\leq M(s,f-P_n)\left(\frac{2rs}{r^2+s^2}\right)^{N}.$$ Hence by Lemma \ref{L:Lagr} \begin{eqnarray*}M(r,f-P_n)&\leq& M(R,f)\frac{R}{R-s}\left(\frac{s+r}{R-r}\right)^{n+1} \left(\frac{2rs}{r^2+s^2}\right)^{N}\\ &=&\frac{M(R,f)}{R^{n+1}}\frac{1}{1-s/R} \left(\frac{s+r}{1-r/R}\right)^{n+1} \left(\frac{2rs}{r^2+s^2}\right)^{N}. \end{eqnarray*} Notice that $$\frac{1}{1-s/R}<\frac3{1-r}\;,\; \frac{s+r}{1-r/R}<\frac3{1-r}\;,$$ and $$a_1:=\frac{2rs}{r^2+s^2}<1.$$ Since $R>2/3$ we obtain \begin{eqnarray}\label{e:mfe0}M(r,f-P_n)&\leq&\frac{M(R,f)}{R^{n+1}} \left(\frac{3}{1-r}\right)^{n+2}a^N_1\\ &<&\frac{M(R,f)}{R_+^{n+1}}\left(\frac{3}{2}\right)^{n+1} \left(\frac3{1-r}\right)^{n+2}a^N_1\notag\\ &<&\frac{M(R,f)}{R_+^{n+1}}\left(\frac{5}{1-r}\right)^{n+2}a^N_1. \notag\end{eqnarray} Let $$A=\max\left\{-4\,\frac{\log 5-\log(1-r)}{\log a_1}\,,1\right\}\;.$$ As $N\ge A(n+1)$ we obtain $$M(r,f-P_n)\leq\frac{M(R,f)}{R_+^{n+1}}\;a_2^N,$$ where $a_2=a_1^{1/2}$. Since $k>\deg P_n$ it follows by Cauchy's inequalities that $$|f_k|\leq\frac{M(r,f-P_n)}{r^k}\leq \frac{M(R,f)}{R_+^{n+1}}\;a_2^Nr^{-k}.$$ We define $a=a^{1/2}_2$ and $\delta$ by $r^\delta=a$. If $k\le\delta N$ then $$|f_k|\leq\frac{M(R,f)}{R_+^{n+1}}\;a^N.$$ \end{pf} \vspace{2mm}\noindent{\em Proof of Theorem \ref{T:Lagr}.} Let $f(z)=\sum_{n\geq0}f_nz^n$. 
We can find an increasing sequence of integers $N(n)\leq N_K(n)$ such that $N(n)/n\rightarrow\infty$ and the function $f-P_n$ has at least $N(n)$ zeros in $\overline\Delta_r$, where $P_n\in{\mathcal P}_n$. \par Fix $R\geq(r+2)/3$ so that $f\in O(\overline\Delta_R)$. Let $a<1\le A$ be the constants from Lemma \ref{L:eftc} and $n_0$ be so that $N(n)\geq A(n+1)$ if $n\geq n_0$. Lemma \ref{L:eftc} implies that for $n\geq n_0$ \begin{equation}\label{E:Lagr} |f_{n+1}|\le \frac{M(R,f)}{R_+^{n+1}}\;a^{N(n)}.\end{equation} Therefore $|f_n|^{1/n}\rightarrow0$, so $f$ is entire, hence (\ref{E:Lagr}) holds for any $R\geq1$. By Cauchy's inequalities we have $|f_n|\leq M(R,f)/R^n$ for $n\leq n_0$. Using these estimates of the coefficients, we obtain the following bound for $M(2R,f)$, $R\geq1$: \begin{equation}\label{E:doubling}M(2R,f)\leq\sum_{n\geq0} |f_n|(2R)^n\leq CM(R,f),\end{equation} where $$C=\sum_{n=0}^{n_0}2^n+\sum_{n=n_0}^\infty 2^{n+1}a^{N(n)}$$ is independent on $R$. Note that $$2^na^{N(n)}=\left(2a^{N(n)/n}\right)^n\leq2^{-n},$$ provided that $n$ is sufficiently large, thus $C$ is finite. \par Applying the doubling inequality (\ref{E:doubling}) successively we obtain $$M(2^j,f)\leq C^jM(1,f),$$ for any $j>0$. Hence $$|f_n|\leq \frac{C^jM(1,f)}{2^{nj}}\rightarrow0\;{\rm as}\;j\rightarrow\infty,$$ provided that $2^n>C$. We conclude that $f$ is a polynomial of degree at most $\log C/\log 2$. $\Box$ \par Theorem \ref{T:Lagr} has the following immediate corollary: \begin{Corollary}\label{C:Lagr} Let $\{n_k\}_{k\geq0}$ be an increasing sequence of natural numbers such that $n_{k+1}/n_k\leq C$ for some constant $C$. Let $f\in O(\Delta)$ and $K=\overline\Delta_r$, where $r<1$. If $\lim_{k\to\infty}N_K(n_k)/n_k=\infty$ then $f$ is a polynomial.\end{Corollary} \begin{pf} Let $N(n)=N_K(n)$. If $n_k\leq n<n_{k+1}$ then $$\frac{N(n)}{n}>\frac{N(n_k)}{n_{k+1}}\geq\frac{N(n_k)}{Cn_k}\;.$$ \end{pf} \section{Overinterpolation by rational functions}\label{S:obrf} \par We prove here Theorem \ref{T:rational}. We can find an increasing sequence of integers $N(n)\leq N_K(n,1)$ such that $N(n)/n\rightarrow\infty$ and the function $Q_nf-P_n$ has at least $N(n)$ zeros in $\overline\Delta_r$, where $P_n\in{\mathcal P}_n$, $Q_n\in{\mathcal P}_1$ and $Q_n\neq0$. \par Let us write $$f(z)=\sum_{k\geq0}f_kz^k,\;Q_n(z)=\alpha_nz-\beta_n.$$ Let $$\rho=\frac{1}{\limsup|f_k|^{1/k}}\geq1$$ be the radius of convergence of the power series of $f$ at the origin. \par By considering functions $c(Q_nf-P_n)$, where $c\in{\Bbb C}\setminus\{0\}$, we can identify $Q_n$ with the point $[\alpha_n:\beta_n]\in{\Bbb P}^1$. Thus $$Q_n(z)=\alpha_nz-1,\;\alpha_n\in{\Bbb C},\; {\rm or}\;Q_n(z)=z.$$ The latter case corresponds to $\alpha_n=\infty$ in the extended complex plane. \par We begin with a few lemmas. \begin{Lemma}\label{L:eftc2} There exist constants $a<1$, $\delta<1$ and an integer $n_0$, depending only on $r$, with the following property: If $n\geq n_0$, then one of the inequalities $$|\alpha_nf_{k-1}-f_k|\leq\frac{M(R,f)}{R_+^{n+1}}\; (|\alpha_n|R_++1)a^{N(n)}\;,\;\;|f_{k-1}|\leq \frac{M(R,f)}{R_+^n}\;a^{N(n)},$$ holds for every $k$, $n<k\leq\delta N(n)$, and every $R\geq(r+2)/3$ such that $f\in O(\overline\Delta_R)$. \end{Lemma} \begin{pf} Let $a<1\le A$, $\delta>0$, be the constants from Lemma \ref{L:eftc}, and let $n_0=n_0(r)$ be an integer such that $N(n)\geq A(n+1)$ for $n\geq n_0$. We fix such an $n$, and apply Lemma \ref{L:eftc} to the function $Q_nf$ and the polynomial $P_n$. 
If $Q_n(z)=\alpha_nz-1$ then $$Q_n(z)f(z)=-f_0+\sum_{k\geq1}(\alpha_nf_{k-1}-f_k)z^k,$$ and $M(R,Q_nf)\leq M(R,f)(|\alpha_n|R_++1)$. This yields the first inequality of the lemma. The second one is obtained in a similar way, in the case when $Q_n(z)=z$ (or by letting $\alpha_n\rightarrow\infty$). \end{pf} \begin{Lemma}\label{L:liminf} If $f\in O(\Delta_s)$, $1\leq s\leq\infty$, and if $\;\liminf_{n\rightarrow\infty} |\alpha_n|>1/s$, then $f$ is a polynomial.\end{Lemma} \begin{pf} There exist $n_1\geq n_0$ and $\epsilon>0$ such that $|\alpha_n|>1/s+\epsilon$, for $n\geq n_1$. Let $$c=\left(\frac{1}{s}+\epsilon\right)^{-1},\;d=1+c.$$ By Lemma \ref{L:eftc2} with $k=n+1$ we have $$|f_n|\leq\frac{|f_{n+1}|}{|\alpha_n|}+ \frac{M(R,f)}{R_+^n}\left(1+\frac{1}{R_+|\alpha_n|}\right)a^{N(n)} \leq c|f_{n+1}|+\frac{d M(R,f)}{R_+^n}\;a^{N(n)},$$ for every $R\geq(r+2)/3$ such that $f\in O(\overline\Delta_R)$. Note that this estimate obviously holds in the case $\alpha_n=\infty$. Applying it successively we obtain \begin{equation}\label{E:liminf}|f_n|\leq c^k|f_{n+k}|+\frac{d M(R,f)}{R_+^n}\;\sum_{j=0}^{k-1}\frac{c^j}{R_+^j}\;a^{N(n+j)}, \end{equation} for every $k\geq1$. Fix $s_1\geq(r+2)/3$ such that $c<s_1<s$. Since $f\in O(\Delta_s)$ we have $$|f_{n+k}|\leq\left(\frac{1}{s}+\frac{\epsilon}{2}\right)^{n+k},$$ for $k$ sufficiently large. Since $N(n)$ is increasing, and if $R\geq s_1$, we obtain by (\ref{E:liminf}) $$|f_n|\leq c^k|f_{n+k}|+\frac{d M(R,f)}{R_+^n}\;a^{N(n)}\sum_{j=0}^\infty\frac{c^j}{s_1^j} \leq\frac{\left(\frac{1}{s}+\frac{\epsilon}{2}\right)^{n+k}} {\left(\frac{1}{s}+\epsilon\right)^k}+\frac{ds_1 M(R,f)}{(s_1-c)R_+^n}\;a^{N(n)}.$$ Letting $k\rightarrow\infty$ we conclude that $$|f_n|\leq\frac{ds_1M(R,f)}{(s_1-c)R_+^n}\;a^{N(n)}$$ holds for all $n\geq n_1$ and $R\geq s_1$ such that $f\in O(\overline\Delta_R)$. This is a similar estimate to (\ref{E:Lagr}) from the proof of Theorem \ref{T:Lagr}. Therefore, by the same argument as in the proof of Theorem \ref{T:Lagr}, it follows that $f$ is entire and $M(2R,f)\leq CM(R,f)$ for every $R\geq s_1$, where $$C=\sum_{n=0}^{n_1-1}2^n+\frac{ds_1}{s_1-c}\; \sum_{n=n_1}^\infty 2^na^{N(n)}$$ is independent on $R$. Hence $f$ is a polynomial.\end{pf} \begin{Lemma}\label{L:limsup} $\limsup_{n\rightarrow\infty} |\alpha_n|\geq1/\rho$.\end{Lemma} \begin{pf} We assume for a contradiction that there exist $n_1\geq n_0$ and $0<\epsilon<1/\rho$ such that $$|\alpha_n|<c:=\rho^{-1}-\epsilon,\;n\geq n_1.$$ As $c<1$, we obtain by Lemma \ref{L:eftc2}, applied with $k=n+1$ and $R=(r+2)/3<1$, that $$|f_{n+1}|\leq|\alpha_nf_n|+M(|\alpha_n|+1)a^{N(n)}\leq c|f_n|+2Ma^{N(n)},$$ where $M=M(R,f)$ and $n\geq n_1$. Hence $$|f_{n+k}|\leq c^k|f_n|+2M\sum_{j=0}^{k-1}c^ja^{N(n+k-1-j)} \leq c^k|f_n|+2Ma^{N(n)}\sum_{j=0}^\infty c^j,$$ for all $k\geq1$. \par Let $C=2M/(1-c)$. Then for $n\geq n_1$ we have $$|f_{2n}|\leq c^n|f_n|+Ca^{N(n)}\;,\;\;|f_{2n+1}|\leq c^{n+1}|f_n|+Ca^{N(n)}.$$ Since, for $n$ large, $|f_n|\leq(\rho^{-1}+\epsilon)^n$, it follows that \begin{eqnarray*} |f_{2n}|^{1/(2n)}&\leq&c^{1/2}(\rho^{-1}+\epsilon)^{1/2}+ C^{1/(2n)}a^{N(n)/(2n)},\\|f_{2n+1}|^{1/(2n+1)}&\leq& c^{(n+1)/(2n+1)}(\rho^{-1}+\epsilon)^{n/(2n+1)}+ C^{1/(2n+1)}a^{N(n)/(2n+1)}.\end{eqnarray*} Note that $a^{N(n)/n}\rightarrow0$. Therefore $$\rho^{-1}=\limsup_{j\rightarrow\infty}|f_j|^{1/j}\leq (\rho^{-1}-\epsilon)^{1/2}(\rho^{-1}+\epsilon)^{1/2},$$ a contradiction.\end{pf} \vspace{2mm}\noindent{\em Proof of Theorem \ref{T:rational}.} We can assume $\rho<\infty$, otherwise $f$ is entire. 
The radius of convergence of the power series of $f(\rho z)$ at the origin is 1, and the function $Q_n(\rho z)f(\rho z)-P_n(\rho z)$ has $N(n)$ zeros in the disk $\overline\Delta_{r/\rho}$. Therefore we may assume that $\rho=1$. \par Let $R=(r+2)/3$ and $M=M(R,f)$. By Lemma \ref{L:eftc2}, one of the estimates \begin{equation}\label{E:mainest}|\alpha_nf_{k-1}-f_k|\leq M(|\alpha_n|+1)a^{N(n)}, \;|f_{k-1}|\le Ma^{N(n)},\end{equation} holds for $n<k\leq\delta N(n)$, provided that $n\geq n_0$. \par By Lemma \ref{L:limsup}, $|\alpha_n|>1/3$ or $\alpha_n=\infty$ for infinitely many $n$. We show that there exists a sequence $m_j\to\infty$ such that $$\alpha_{m_j}\in{\Bbb C},\; |\alpha_{m_j}|>1/3,\;|f_{m_j+1}|>2^{-m_j-2}.$$ Fix any $n$ large with $|\alpha_n|>1/3$ or $\alpha_n=\infty$. Let $k\ge n$ be the smallest integer such that $|f_k|>2^{-k}$. Such $k$ exists since $\rho=1$. If $k>n$ then $\alpha_{k-1}$ is finite. Otherwise by (\ref{E:mainest}) $$2^{-k}<|f_k|\le Ma^{N(k-1)},$$ which is impossible as $n$ is large. By the definition of $k$, $|f_{k-1}|\leq2^{-k+1}$. We claim that $|\alpha_{k-1}|>1/3$. If not, then using (\ref{E:mainest}) $$2^{-k}<|f_k|\le|\alpha_{k-1}f_{k-1}|+2Ma^{N(k-1)}\le \frac{2^{-k+1}}{3}+2Ma^{N(k-1)},$$ so $2^{-k}<6Ma^{N(k-1)}$. This is a contradiction since $n$ is large. \par If $k=n$, then $|f_n|>2^{-n}>Ma^{N(n)}$ shows that $\alpha_n\in{\Bbb C}$, so $|\alpha_n|>1/3$. We have by (\ref{E:mainest}) \begin{eqnarray*}|f_{n+1}|&\ge&|\alpha_nf_n|-4M|\alpha_n|a^{N(n)}> |\alpha_n|(2^{-n}-4Ma^{N(n)})\\ &>&\frac{2^{-n}}{3}\left(1-2^{n+2}Ma^{N(n)}\right)\ge 2^{-n-2}.\end{eqnarray*} This establishes the existence of the desired sequence $m_j$. \par Since $N(n)/n\rightarrow\infty$ and $\rho=1$, we can find $n_1\geq n_0$ with the property that \begin{equation}\label{E:ass} 2^{3n+6k+16}Ma^{N(n+k)}<1,\;|f_n|<2^n, \end{equation} hold for every $n\ge n_1$ and for every $k\ge0$. Then we fix $n\ge n_1$ such that $\alpha_n\in{\Bbb C}$, $|\alpha_n|>1/3$ and $|f_{n+1}|>2^{-n-2}$. We have using (\ref{E:mainest}) that $$2^{-n-2}|\alpha_n|<|\alpha_{n}||f_{n+1}|\le|f_{n+2}|+ 4M|\alpha_n|a^{N(n)}\leq2^{n+2}+4M|\alpha_n|a^{N(n)},$$ so by (\ref{E:ass}) $$|\alpha_{n}|\leq\frac{2^{2n+4}}{1-2^{n+4}Ma^{N(n)}}\leq2^{2n+5}.$$ \par We will show by induction that for every $k\geq0$ \begin{equation}\label{E:ind}|f_{n+k+1}|>2^{-n-2}6^{-k}\;,\;\; \frac16+6^{-k-1}<|\alpha_{n+k}|<2^{2n+6}-6^{-k-1}.\end{equation} Evidently, these inequalities hold for $k=0$. Suppose that they are true for some $k\geq0$. Then using (\ref{E:mainest}) \begin{eqnarray*}|f_{n+k+2}|&\geq&|\alpha_{n+k}f_{n+k+1}|- 7M|\alpha_{n+k}|a^{N(n+k)}\\&\geq& 2^{-n-2}6^{-k-1}+2^{-n-2}6^{-2k-1}-2^{2n+9}Ma^{N(n+k)}.\end{eqnarray*} By (\ref{E:ass}) $$2^{-n-2}6^{-2k-1}-2^{2n+9}Ma^{N(n+k)}>0,$$ so we see that $|f_{n+k+2}|>2^{-n-2}6^{-k-1}$. \par Since $|f_{n+k+1}|>2^{-n-2}6^{-k}$, we have in view of (\ref{E:mainest}) and (\ref{E:ass}) that $\alpha_{n+k+1}\in{\Bbb C}$. Therefore by (\ref{E:mainest}) $$|\alpha_{n+k}f_{n+k+1}-f_{n+k+2}|\le M(|\alpha_{n+k}|+1)a^{N(n+k)},$$ $$|\alpha_{n+k+1}f_{n+k+1}-f_{n+k+2}|\le M(|\alpha_{n+k+1}|+1)a^{N(n+k+1)}.$$ As $|\alpha_{n+k}|>1/6$ and $N(n)$ is increasing, it follows that $$|f_{n+k+1}||\alpha_{n+k}-\alpha_{n+k+1}|\le M(13|\alpha_{n+k}|+|\alpha_{n+k+1}|)a^{N(n+k)}.$$ Hence $$|\alpha_{n+k+1}|\left(1-\frac{Ma^{N(n+k)}}{|f_{n+k+1}|}\right)\le |\alpha_{n+k}|\left(1+13\,\frac{Ma^{N(n+k)}}{|f_{n+k+1}|}\right).$$ So, by (\ref{E:ass}) and (\ref{E:ind}), $|\alpha_{n+k+1}|<4|\alpha_{n+k}|$. 
\par Thus $$|\alpha_{n+k}-\alpha_{n+k+1}|\le 17M2^{2n+6}a^{N(n+k)}2^{n+2}6^k\le2^{3n+3k+13}Ma^{N(n+k)},$$ and by (\ref{E:ass}) $$|\alpha_{n+k}-\alpha_{n+k+1}|\le2^{-3k-3}<6^{-k-1}-6^{-k-2}.$$ Using the bounds for $|\alpha_{n+k}|$, this yields the desired estimates for $|\alpha_{n+k+1}|$. \par The inductive proof of the inequalities (\ref{E:ind}) is now concluded. Moreover, we have shown that $$|\alpha_m-\alpha_{m+1}|\le2^{3m+13}Ma^{N(m)},$$ for all $m\ge n$. This implies that $\alpha_m\rightarrow\alpha\in{\Bbb C}$, and for $m\geq n$ $$|\alpha-\alpha_m|\le2^{13}M\sum_{j=m}^\infty2^{3j}a^{N(j)}\le 2^{13}Ma^{N(m)/2}\sum_{j=m}^\infty2^{3j}a^{N(j)/2}.$$ Hence \begin{equation}\label{E:est} |\alpha-\alpha_m|\le Ba^{N(m)/2},\; B=2^{13}M\sum_{j=0}^\infty2^{3j}a^{N(j)/2}. \end{equation} \par Let $Q(z)=\alpha z-1$. Lemma \ref{L:limsup} implies that $|\alpha|\geq1$. If $|\alpha|>1$ then by Lemma \ref{L:liminf} $f$ is a polynomial, which is in contradiction to $\rho=1$. Thus $|\alpha|=1$. We let $$P(z)=Q(z)f(z)=\sum_{k\ge0}c_kz^k.$$ Note that $$P(z)-P_m(z)=Q_m(z)f(z)-P_m(z)+(\alpha-\alpha_m)zf(z).$$ It follows, using (\ref{E:mainest}), (\ref{E:ass}), (\ref{E:ind}) and (\ref{E:est}), that \begin{eqnarray*}|c_{m+1}|&\le&|\alpha_mf_m-f_{m+1}|+ |\alpha-\alpha_m||f_m|\\&\leq&M\left(2^{2n+6}+1\right)a^{N(m)}+ 2^mBa^{N(m)/2},\end{eqnarray*} for all $m\ge n$. This implies that $|c_m|^{1/m}\rightarrow0$, hence $P$ is an entire function. \par Observe that $Q_m(z)P(z)-(\alpha z-1)P_m(z)$ has $N(m)$ zeros in $\overline\Delta_r$. Since $P$ is entire, it follows by Lemma \ref{L:liminf} that $P$ is in fact a polynomial. So $f=P/Q$, and $Q$ does not divide $P$ since $f$ is not entire. This finishes the proof. $\Box$ \par Theorem \ref{T:rational} has the following corollary, which is proved exactly as Corollary \ref{C:Lagr}. \begin{Corollary}\label{C:rational} Let $\{n_k\}_{k\geq0}$ be an increasing sequence of natural numbers such that $n_{k+1}/n_k\leq C$ for some constant $C$. Let $f\in O(\Delta)$ and $K=\overline\Delta_r$, where $r<1$. If $\lim_{k\to\infty}N_K(n_k,1)/n_k=\infty$, then either $f$ is entire or $f=P/Q$, where $P,\,Q$ are polynomials, $\deg Q=1$ and $Q$ does not divide $P$.\end{Corollary} \par We conclude this section with a remark about Pad\'e overinterpolation. Let $f$ be a germ of a holomorphic function at the origin. A rational function $R\in{\mathcal R}_{nm}$ is called a Pad\'e interpolator (or Pad\'e approximant) of type $(m,n)$ of $f$ if $f-R$ has a zero of the highest possible order at the origin, i.e. of order $N_K(n,m)$, where $K=\{0\}$. We prove the following simple fact about overinterpolation in the $m$-th row of the Pad\'e table. \begin{Proposition}\label{T:Pade} Let $f$ be a holomorphic germ at the origin and $m\in{\Bbb N}$. If, for all $n\ge k$, there exist functions $R_n\in{\mathcal R}_{nm}$ so that $f-R_n$ vanishes to order at least $n+m+2$ at the origin, then $f\in{\mathcal R}_{km}$. \end{Proposition} \begin{pf} Let us write $R_n=P_n/Q_n$, where $P_n\in{\mathcal P}_n$ and $Q_n\in{\mathcal P}_m$, $Q_n\neq0$. For $n\geq k$ the function $R_n-R_{n+1}$ vanishes to order at least $n+m+2$ at the origin. Since $\deg(P_nQ_{n+1}-P_{n+1}Q_n)\leq n+m+1$, this implies $R_n=R_{n+1}=R\in{\mathcal R}_{km}$, for $n\geq k$. It follows that $f=R$.\end{pf} \section{Overinterpolation and overconvergence}\label{S:oao} \par Throughout this section we assume that $f\in O(\overline\Delta)$ and that $0<r<1$ is fixed. 
For a compact set $E\subset{\Bbb C}$ and a continuous complex-valued function $g$ on $E$, we denote by $\|g\|_E$ the uniform norm of $g$ on $E$. \par The following theorem shows that, in the presence of overinterpolation, the functions $R_{nm}$ quickly approximate $f$ on some circle $S_t=\{z\in{\Bbb C}:\,|z|=t\}$. \begin{Theorem}\label{T:cl} Let $m(n)\in{\Bbb N}$, and $d_n>0$ be so that $\sum d_n$ converges. Suppose that for all $n$ there are polynomials $P_n\in{\mathcal P}_n$ and $Q_{m(n)}\in{\mathcal P}_{m(n)}$, $Q_{m(n)}\ne0$, so that the function $Q_{m(n)}f-P_n$ has $N(n)$ zeros in $\overline\Delta_r$. There exist positive constants $b<1$, $c$, depending only on $r$, and $t\in[r,(1+r)/2]$, such that $$\|f-R_n\|_{S_t}\le M\left(\frac{c}{d_n}\right)^{m(n)}b^{N(n)}$$ holds for all $n$ sufficiently large, where $R_n=P_n/Q_{m(n)}$ and $M=M(1,f)$. \end{Theorem} \begin{pf} We may assume that $M(r/2,Q_{m(n)})=1$. Following \cite{CP2}, we define the $n$-th diameter of a set $G\subset{\Bbb C}$ by $$\operatorname{diam}_n(G)=\inf\left\{r_1+\dots+r_k:\,k\leq n,\;G\subset\bigcup_{j=1}^kC_j(r_j)\right\},$$ where $C_j(r_j)$ are closed disks of radii $r_j>0$. If $H_n(z)=Q_{m(n)}(rz/2)$ then by Lemma 3.3 from \cite{CP2}, for every $0<h\leq 1/(8e)$, the $n$-th diameter of the set $$G'=\left\{z\in{\Bbb C}:\,|H_n(z)|\leq \left(\frac{hr^2|z|}{(1+r)^2}\right)^{m(n)}, 2\leq|z|\leq \frac{1+r}r\right\}$$ does not exceed $36eh$. Hence the $n$-th diameter of the set $$G=\left\{z\in{\Bbb C}:\,|Q_{m(n)}(z)|\leq \left(\frac{2hr|z|}{(1+r)^2}\right)^{m(n)}, r\leq|z|\leq \frac{1+r}2\right\}$$ does not exceed $18ehr$. This means that the measure of the set $$F_n=\left\{t\in\left[r,\frac{1+r}{2}\right]:\, |Q_{m(n)}(z)|\ge\left(\frac{2hr|z|}{(1+r)^2}\right)^{m(n)}, \;\forall\,z\in S_t\right\}$$ is at least $(1-r)/2-36ehr$. \par Since $M(r/2,Q_{m(n)})=1$, the classical Bernstein-Walsh inequality implies that $$M(1,Q_{m(n)})\leq(2/r)^{m(n)}.$$ If $t\in F_n$ then by (\ref{e:mfe0}) we have $$M(t,Q_{m(n)}f-P_n)\leq M\left(\frac{2}{r}\right)^{m(n)} \left(\frac3{1-t}\right)^{m(n)+2}a^{N(n)}_1,$$ where $$a_1=a_1(t)=\frac{12t^2+6t}{13t^2+4t+1}<1.$$ The function $a_1(t)$ is increasing on $[0,1]$ and, therefore, it does not exceed $$b=a_1((1+r)/2)$$ on $F_n$. Hence for $t\in F_n$ we have $$\|f-R_n\|_{S_t}\le \frac{9M}{(1-t)^2}\left(\frac{3(1+r)^2}{hr^2t(1-t)}\right)^{m(n)} b^{N(n)}\le M\left(\frac{c_1}h\right)^{m(n)}b^{N(n)},$$ where $c_1$ is a constant depending only on $r$. \par If we let $h=d_n/(36er)$ then the measure of $F_n$ is at least $(1-r)/2-d_n$ and for $t\in F_n$ we have $$\|f-R_n\|_{S_t}\le M\left(\frac{c}{d_n}\right)^{m(n)}b^{N(n)},$$ where $c=36erc_1$. Since $\sum d_n<\infty$ there is $n_0$ such that the set $F=\bigcap_{n=n_0}^\infty F_n$ is not empty. If $t\in F$ then the conclusion of the theorem holds for $t$ and for all $n\ge n_0$.\end{pf} \par If $g$ is a continuous function on a compact set $E\subset{\Bbb C}$, we let $$\rho(n,m)=\inf\|g-R\|_E,$$ where the infimum is taken over all $R\in{\mathcal R}_{nm}$. We say that rational functions {\it overconverge} to $g$ on $E$ if $$\lim_{n\to\infty}\rho(n,m(n))^{1/n}=0,$$ for some sequence $m(n)\in{\Bbb N}$. \par The following corollary shows that, under suitable conditions, overinterpolation implies overconvergence. 
\begin{Corollary}\label{C:oioc} Under the assumptions of Theorem \ref{T:cl}, suppose that there is a sequence $\{a_n\}$ of positive numbers converging to 0 such that $$\sum_{n=1}^\infty\frac{b^{N(n)/m(n)}}{a_n^{n/m(n)}}<\infty.$$ Then there exists $t\in[r,(1+r)/2]$ for which $$\lim_{n\to\infty}\|f-R_n\|^{1/n}_{S_t}=0.$$ \end{Corollary} \begin{pf} For the proof, take $d_n=c\,b^{N(n)/m(n)}/a_n^{n/m(n)}$. \end{pf} \par The fact that overinterpolation implies overconvergence allows us to use results of Gonchar and Chirka to prove other results about overinterpolation. Let us first recall some definitions from \cite{Ch}. The class ${\mathcal R}_{n,(m)}$ consists of all rational functions of degree at most $n$ and with at most $m$ geometrically distinct poles. The class ${\mathcal A}^0_m$ consists of all functions meromorphic on ${\Bbb P}^1$ except for at most $m$ singularities of finite order. This means that for every singular point $a$ there is a number $p$ such that $|f(z)|<\exp(1/|z-a|^p)$ near $a$. \begin{Theorem}\label{T:gc} Let $m\ge0$ be an integer. If for all $n$ there are functions $R_n\in{\mathcal R}_{nm}$ such that $f-R_n$ has $N(n)$ zeros in $\overline\Delta_r$, where $N(n)/n\rightarrow\infty$, then $f$ extends to a meromorphic function on ${\Bbb C}$ with at most $m$ poles. \par If the functions $R_n\in{\mathcal R}_{n,(m)}$ and $$\liminf_{n\to\infty}\frac{N(n)}{n\log n}>-\frac1{\log b}\;,$$ where $b$ is the constant from Theorem \ref{T:cl}, then $f$ has an extension in ${\mathcal A}^0_m$. \end{Theorem} \begin{pf} To prove the first statement, we take a number $\alpha$ such that $b<\alpha<1$ and let $a_n=\alpha^{N(n)/n}$. By Corollary \ref{C:oioc}, there is $t\in[r,(1+r)/2]$ for which $$\lim_{n\to\infty}\|f-R_n\|^{1/n}_{S_t}=0.$$ By Theorem 1 from \cite{Go}, $f$ extends to a meromorphic function to ${\Bbb C}$ with at most $m$ poles. \par A result of Chirka and Gonchar (see \cite[Theorem 1]{Ch}) states that if $f$ is analytic in a neighborhood of a compact set $E\subset{\Bbb C}$ of positive capacity then $f$ has an extension in ${\mathcal A}^0_m$ if and only if there are a sequence of rational functions $R_n\in {\mathcal R}_{n,(m)}$ and a number $\lambda>0$ such that $$\|f-R_n\|^{1/n}_E<\frac1{n^\lambda}$$ for all $n$ sufficiently large. (The theorem is stated for $E=\overline\Delta_s$, but see the note after the statement.) \par Take numbers $\alpha$ and $\lambda$ such that $$\liminf_{n\to\infty}\frac{N(n)}{n\log n}>\alpha>-\frac1{\log b} \;,\;\;0<\lambda<-1-\alpha\log b.$$ Let $$d_n=cb^{\alpha\log n}n^\lambda=cn^{\alpha\log b+\lambda},$$ where $c$ is the constant from Theorem \ref{T:cl}. Then $\sum d_n<\infty$ and by Theorem \ref{T:cl} there is $t\in[r,(1+r)/2]$ such that $$\|f-R_n\|^{1/n}_{S_t}\le M^{1/n}\frac{c}{d_n}\,b^{N(n)/n}= M^{1/n}\frac{b^{N(n)/n-\alpha\log n}}{n^\lambda}< \frac1{n^\lambda}\;,$$ for all $n$ sufficiently large. Now the second statement of the theorem follows from the result of Chirka and Gonchar mentioned above. \end{pf} \section{Interpolation by algebraic functions}\label{S:obaf} \begin{Proposition}\label{P:revBezout} Let $S$ be an infinite set in ${\Bbb C}^2$ with the following property: There exist positive constants $A\ge1$ and $\alpha<2$ such that $$|S\cap X|\leq A(\deg X)^\alpha,$$ for any algebraic curve $X\subset{\Bbb C}^2$ not containing $S$. Then $\alpha\geq1$ and $S$ is contained in an irreducible algebraic curve of degree at most $(2A)^{1/(2-\alpha)}$. 
Moreover, $$|S\cap X|\leq(2A)^{1/(2-\alpha)}\,\deg X,$$ for any algebraic curve $X\subset{\Bbb C}^2$ not containing $S$. \end{Proposition} \begin{pf} Suppose $\alpha<1$. Assume that $\{z_1,\dots,z_n\}\subseteq S$, where $n\geq2$, and let $L_j$, $1\leq j<n$, be a complex line passing through $z_j$ and not containing $z_n$. If $X=L_1\cup\dots\cup L_{n-1}$, then $X$ does not contain $S$, hence $$n-1\leq|S\cap X|\leq A(n-1)^\alpha.$$ Thus $|S|\leq1+A^{1/(1-\alpha)}$, which is a contradiction. \par Let $k$ denote the greatest integer in $x=(2A)^{1/(2-\alpha)}$. Then $$2Ak^\alpha= x^{2-\alpha}k^\alpha<(k+1)^2\leq k^2+3k.$$ Note that the dimension of the space of polynomials in ${\Bbb C}^2$ of degree at most $n$ is $(n+1)(n+2)/2$. Therefore there exists a curve $C$ of degree at most $k$ so that \begin{equation}\label{E:est2}|S\cap C|\geq(k^2+3k)/2>Ak^\alpha. \end{equation} It follows that $S\subseteq C$. Assume that $C=C_1\cup\dots\cup C_m$, where $C_j$ is an irreducible algebraic curve of degree $k_j$, $k_1+\dots+k_m\leq k$. If no curve $C_j$ contains $S$ then, since $\alpha\geq1$, $$|S\cap C|\leq\sum_{j=1}^m|S\cap C_j|\leq A\sum_{j=1}^mk_j^\alpha\leq Ak^\alpha,$$ which contradicts (\ref{E:est2}). We conclude that $S$ is contained in an irreducible curve $\Gamma$ of degree at most $k$. Hence by Bezout's theorem, $$|S\cap X|\leq|\Gamma\cap X|\leq(2A)^{1/(2-\alpha)}\,\deg X,$$ for any algebraic curve $X\subset{\Bbb C}^2$ not containing $S$. \end{pf}
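\par As a small numerical sanity check of the counting step in the above proof (an illustration added here, not part of the argument), one can verify for sample values of $A$ and $\alpha$ that $k=\lfloor(2A)^{1/(2-\alpha)}\rfloor$ satisfies both inequalities used there: the space of polynomials of degree at most $k$ in ${\Bbb C}^2$ has dimension $(k+1)(k+2)/2>(k^2+3k)/2$, and $(k^2+3k)/2>Ak^{\alpha}$.
\begin{verbatim}
import math

def check(A, alpha):
    k = math.floor((2 * A) ** (1.0 / (2 - alpha)))  # greatest integer in (2A)^{1/(2-alpha)}
    dim = (k + 1) * (k + 2) // 2   # dimension of polynomials of degree <= k in C^2
    pts = (k * k + 3 * k) // 2     # number of points of S that are interpolated
    return dim > pts and pts > A * k ** alpha

print(all(check(A, a) for A in (1, 3, 10, 50) for a in (1.0, 1.25, 1.5, 1.75)))  # True
\end{verbatim}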
\section{Introduction} Flattening is a crucial operation in computer vision, converting a multi-dimensional feature map or image into a one-dimensional vector. In deep learning, when we need to extract semantics from learned feature maps or tokens, we flatten these high-dimensional inputs into a vector of high-level representation, neglecting localization information. Although localization information is largely lost in the flattened result, the specific flattening approach still retains some of it. Taking the most commonly used ``zigzag'' flattening (ZF) as an example, most feature points in zigzag-flattened vectors remain spatially neighboring (1-pixel away) in their original 2D formation, except for the turning points of the zigzag scan. As the size of the feature map to be flattened increases, the 1D distances from these outcast points to their 2D neighbors become increasingly distorted. In some vision applications, \textit{e.g.,} \ the patch embedding of vision transformers, position compensation techniques (such as position encodings) are therefore required to preserve model performance when flattening is used. Generally, a well-designed position encoding brings non-trivial performance gains at negligible cost. From this perspective, we wonder whether there exists a flattening manner that preserves as much localization information as theoretically possible when converting a matrix into a vector, and, if so, whether such a flattening approach could benefit current position encoding studies. Further, it may change the research paradigm, as one could simply flatten input signals (of any form) into a vector for processing, highlighting the significance of 1D operators such as the fully connected layer and the depthwise separable convolution. \begin{figure*}[!htb] \begin{center} \includegraphics[width=\linewidth]{fig1} \end{center} \caption{Multi-scale transformation of dimensional space with Zigzag flattening and Hilbert flattening, respectively. } \label{Hilbert-and-Zigzag-curve} \end{figure*} Inspired by Hilbert fractal theories, we propose Hilbert Flattening (HF) as an alternative for sequence ordering in computer vision. In our investigation, HF has proven superior to other methods in maintaining spatial locality when performing multi-scale transformations of dimensional space in mathematics \cite{tkde_analysis_hilbert}. In this paper, we give a concrete analysis of the theoretical guarantees of HF and ZF in position preservation, formulate the nature of the Hilbert fractal in image dimensional transformation, and derive its scale robustness accordingly. Further, beyond its theoretical merits, we validate the practical effectiveness of HF through its applications as Hilbert patch embedding, Hilbert feature down-sampling, and Hilbert image interpolation. Our contributions can be summarized as follows: \begin{itemize} \item Through theoretical analysis and experimental support, we present an alternative flattening approach, named Hilbert flattening, for the vision community. It preserves much more position information than zigzag and shows robustness under dimension scaling. \item We present theoretical evidence that parts that are consecutive in the HF sequence are close in the corresponding image. Specifically, we theoretically estimate the square-to-linear dilation factor of the finite approximation of the Hilbert curve.
Meanwhile, the Average Square Distance is proposed to give a quantitative comparison between inverse Hilbert flattening and inverse Zigzag flattening in terms of the probability that points close in 2D remain close in the linear sequence. Additionally, we give both theoretical and empirical evidence that HF maintains feature consistency across multi-scale images. \item We propose a new patch embedding method based on HF, named Hilbert Patch Embedding (HPE), dedicated to preserving local relative position information when flattening a 2D image into a 1D sequence, considering both effectiveness and simplicity. Experiments demonstrate that, without introducing additional hyperparameters, it achieves consistent performance gains in image classification tasks across multiple network architectures (\textit{e.g.,} ViTs and Multi-layer Perceptron Networks (MLPs)). In particular, it has better length extrapolation \cite{press2021train} in dense prediction tasks (which rely more on absolute position encoding), i.e., it reduces the effect of inconsistent token lengths between training and prediction. \end{itemize} \section{Related Works} \paragraph{Hilbert Curves. } Prominent works in mathematics such as \cite{linear_clustering,tip96space_filling_curves,tkde_analysis_hilbert} have shown that the locality between objects in multi-dimensional space is well preserved by the Hilbert space-filling curve in the linear ordering. Inspired by this idea, recent works such as \cite{hilbert_curve_for_sEMG, tip19hybrid_LSTM, thinking_in_patch} have introduced Hilbert curves into computer vision (CV) applications. \cite{tip19hybrid_LSTM} noted that the order of image patches has a significant impact on the performance of the Long Short-Term Memory (LSTM): if the zigzag flattening is performed in the horizontal direction, neighboring blocks in the vertical direction end up far apart. As a result, the LSTM may not establish the connections between those patches well. They therefore utilized Hilbert curves to arrange image patches before the block sequences were fed into the LSTM. Similarly, to extract better spatial features, FDPT \cite{thinking_in_patch} also utilized Hilbert curves to flatten image patches before feeding them into a Gated Recurrent Unit (GRU). By contrast, \cite{hilbert_curve_for_sEMG} employed Hilbert curves to generate 2D image representations from 1D surface electromyography (sEMG) signals, after which the features of the sEMG signals were extracted by CNN-based backbones. The above methods only apply the Hilbert curve to a CV task, without theoretical analysis or empirical proof. \paragraph{ViTs and MLPs. } Vision Transformer \cite{ViT} inspires a new architectural paradigm that differs from CNNs by utilizing patch embedding instead of taking images directly as input. Swin Transformer \cite{swin_T} proposes shifted windows to handle the large variations of the input caused by multi-scale objects and high resolution. By contrast, MLP-Mixer \cite{MLP-Mixer} proposes a new architecture that differs from CNNs and Transformers by eliminating the need for convolution and self-attention, relying only on multi-layer perceptrons repeatedly applied across the spatial or feature channels. All the works above employ zigzag flattening to expand 2D images or features into 1D patch or pixel sequences. However, ZF moves initially adjacent image blocks (semantically related patches) away from each other, whereas HF does not; see Fig.
\ref{Hilbert-and-Zigzag-curve} for details. Hence, we explore Hilbert curves, whose clustering properties outperform those of zigzag curves, for ViT- and MLP-based architectures. \section{Hilbert Flattening} Hilbert flattening is built upon the Hilbert curve, so we begin this section with an introduction to the Hilbert curve and then define Hilbert flattening. After that, we present the properties of Hilbert flattening and its applications. The definitions and known theorems used in this paper mainly come from \cite{sagan2012space}; please refer to it for more details. \subsection{Definition} \paragraph{Hilbert Curve. } We denote by $\mathcal{I}$ and $\mathcal{Q}$ the interval $[0,1]$ and the square $[0,1]\times [0,1]$, respectively. The generating process of the Hilbert curve is given by the following: \begin{equation} \begin{aligned} \mathcal{H}:t& \in[0,1]\mapsto \mathcal{H}(t)\in[0,1]\times [0,1],\\ t&=0.q_{1}q_{2}\cdots, 0\le q_{j}\le 3,\\ \mathcal{H}(t)&=\left( \begin{aligned} &\mathcal{R}e\\ &\mathcal{I}m \end{aligned}\right)\lim\limits_{n\rightarrow \infty}T_{q_{1}}T_{q_{2}}\cdots T_{q_{n}}\mathcal{Q}, \end{aligned} \end{equation} where $t$ is represented in quaternary form. The transformations $\{T_{i}\mid 0\le i\le 3\}$ are defined as follows: \begin{equation} \begin{aligned} &T_{i}z=\frac{1}{2}H_{i}z+h_{i}, 0\le i\le 3,\\ &\begin{aligned} &H_{0}z=\bar{z}i,H_{1}z=z,H_{2}z=z,H_{3}z=-\bar{z}i,\\ &h_{0}=0,h_{1}=\frac{i}{2},h_{2}=\frac{1+i}{2},h_{3}=\frac{2+i}{2}, \end{aligned} \end{aligned} \end{equation} where we consider complex numbers $z\in\mathbb{C}$ as $(Re(z),Im(z))\in [0,1]\times [0,1]$. The transformations $\{T_{i}\mid 0\le i\le 3\}$ defined above correspond to different geometric deformations. Take transformation $T_{0}$ as an example: we first shrink the original $\mathcal{Q}$ towards the origin with ratio $\frac{1}{2}$, then reflect it about the real axis by complex conjugation and rotate the square through $90^{\circ}$ by multiplying with the imaginary unit $i$. As the order of the Hilbert curve increases, the sub-squares shrink into points, which shows that $\mathcal{H}(t)$ is a point in $\mathbb{R}^{2}$. Moreover, for \textbf{finite quaternaries}, which are the endpoints of the sub-intervals in the partition of $\mathcal{I}$, we have \begin{equation}\label{Hilbert Formula} \begin{aligned} \mathcal{H}(0.q_{1}q_{2}& \cdots q_{n})=\left( \begin{aligned} &\mathcal{R}e\\ &\mathcal{I}m \end{aligned} \right)\sum\limits_{j=1}^{n}\frac{1}{2^{j}}H_{q_{1}}H_{q_{2}}\cdots H_{q_{j-1}}h_{q_{j}},\\ &=\sum\limits_{j=1}^{n}\frac{1}{2^{j}}(-1)^{e_{0j}}\text{sgn}(q_{j})\left( \begin{aligned} &(1-d_{j})q_{j}-1\\ &1-d_{j}q_{j} \end{aligned} \right) \end{aligned} \end{equation} \begin{equation*} \begin{aligned} &\text{sgn}(x)=\left\{ \begin{aligned} &1, \text{ if } x>0,\\ &0, \text{ if } x=0, \end{aligned} \right.\\ &e_{kj}=\#\{i<j:\,q_{i}=k\} \text{ mod } 2,\\ &d_{j}=e_{0j} + e_{3j} \text{ mod } 2, \end{aligned} \end{equation*} where $\#$ is the counting function and $k\in \{0,3\}$ (for $j=1$ the product of the $H$'s is empty, i.e., the identity). We have drawn the image points of finite quaternaries ($3\le n\le 7$) connected by straight lines in Figure \ref{Hilbert-and-Zigzag-curve}. We call the $n$-th iteration the order-$n$ approximation of the Hilbert curve. As shown in Figure \ref{Hilbert-and-Zigzag-curve}, the order-$n$ approximation of the Hilbert curve originates in the lower-left sub-square and terminates in the lower-right sub-square. The exit point of each sub-square coincides with the entry point of the following sub-square.
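\par For readers who prefer code, the discrete cell ordering induced by the order-$n$ approximation can be computed with the standard iterative index conversion sketched below (an illustration we add here, not the authors' released implementation; the helper names \textit{xy2d} and \textit{hilbert\_flatten} are ours, and the orientation of the curve is fixed only up to a rotation/reflection). Sorting the pixels of a $2^{n}\times 2^{n}$ image by this index, so that each step moves to a 4-adjacent cell, is exactly the flattening operation formalized in the next paragraph.
\begin{verbatim}
import numpy as np

def xy2d(side, x, y):
    """Hilbert index of cell (x, y) on a side-by-side grid, side = 2**n.
    Standard iterative conversion: descend from the coarsest quadrant,
    rotating/reflecting the frame so each quadrant is traversed in order."""
    d, s = 0, side // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/flip the current quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_flatten(img):
    """Flatten a (side, side, C) array along the order-n Hilbert curve."""
    side = img.shape[0]
    order = np.argsort([xy2d(side, x, y)
                        for x in range(side) for y in range(side)])
    return img.reshape(side * side, -1)[order]
\end{verbatim}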
\paragraph{Hilbert Flattening. } With the order-$n$ approximation of the Hilbert curve, we can define the Hilbert Flattening operation. Consider an image with resolution $2^{n}\times 2^{n}$; the inverse map of the order-$n$ approximation of the Hilbert curve provides the mechanism of Hilbert Flattening of order $n$: \begin{equation} \mathcal{H}^{-1}: \left( \begin{aligned} & \frac{i}{2^{n}}+\frac{1}{2^{n+1}}\\ &\frac{j}{2^{n}}+\frac{1}{2^{n+1}} \end{aligned} \right)\mapsto 0.q_{1}q_{2}\cdots q_{n}, \end{equation} where $\mathcal{H}(0.q_{1}q_{2}\cdots q_{n})=(\frac{i}{2^{n}}+\frac{1}{2^{n+1}},\frac{j}{2^{n}}+\frac{1}{2^{n+1}})^{T}$. Then the pixel of the image that contains the point $(\frac{i}{2^{n}}+\frac{1}{2^{n+1}},\frac{j}{2^{n}}+\frac{1}{2^{n+1}})^{T}$ is assigned the value $0.q_{1}q_{2}\cdots q_{n}$. All pixels of the image are then ordered by their values, which in fact gives the definition of Hilbert Flattening. The relative position information of the pixels is preserved under Hilbert Flattening, as we explore in the following. \subsection{Properties} Compared with the commonly used zigzag, Hilbert flattening has the nice property of preserving as much locality as possible when flattening a multi-dimensional feature map into a 1D vector. It has the further merit that this locality preservation holds steady when the target feature maps are scaled up or down. \subsubsection{Position Information Preservation} \label{PIP} Given two points $t^{1},t^{2}\in[0,1]$, represent them in quaternary form as $t^{1}=0.q_{1}^{1}q_{2}^{1}\cdots$ and $t^{2}=0.q_{1}^{2}q_{2}^{2}\cdots$. If these two points are close, then there is a large integer $j$ such that $q_{k}^{1}=q_{k}^{2}$ for all $1\le k\le j$. By applying the formula in Equation \ref{Hilbert Formula}, we obtain the following bound on the distance between the points $\mathcal{H}(t^{1}),\mathcal{H}(t^{2})$: \begin{equation} |\mathcal{H}(t^{1})-\mathcal{H}(t^{2})|^{2}\le \sum\limits_{k=j+1}^{\infty}\frac{8}{2^{k}}\le \frac{8}{2^{j}}. \end{equation} The dilation bound of the Hilbert curve is given in Theorem \ref{Dilation_HF}. \begin{theorem}\label{Dilation_HF} The square-to-linear \textbf{dilation factor} of the Peano-Hilbert curve is equal to 6 \cite{article}, which means that $\frac{|\mathcal{H}(t^1)-\mathcal{H}(t^2)|^{2}}{|t^1-t^2|}\le 6$ for all $t^1\neq t^2$. \end{theorem} \begin{table*}[!htb] \begin{center} \caption{The percentage of points with ASD$^{\frac{1}{2}}$ smaller than a given ASD$^{\frac{1}{2}}$ threshold. Pixel points are indexed within the same 2D neighborhood; a larger percentage indicates better position information preservation. } \scalebox{0.8}{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \toprule[1pt] $\text{ASD}^{\frac{1}{2}}\text{ threshold}$ & 1.4000e-2 & 1.3778e-2 & 1.3556e-2 & 1.3333e-2 & 1.3111e-2 & 1.2889e-2 & 1.2667e-2 & 1.2444e-2 & 1.2222e-2&1.2000e-2\\ \toprule[0.5pt] ZF& 100.00\% &96.97\% & 96.97\% & 3.13\% & 3.13\% & 3.13\% & 3.08\% & 3.03\% & 3.03\% & 0.00\% \\ HF & 80.57\% & 80.57\% & 80.57\% & 80.57\% & 80.57\% & 79.54\% & 78.76\% & 78.76\% & 78.76\% & 76.71\% \\ \toprule[1pt] \end{tabular}} \end{center} \label{table:DSP} \end{table*} The HF operation thus yields a sequence ordering of the image/feature map which guarantees that consecutive parts of the sequence are close in the original image. Now we study the ZF operation on an image of size $H\times W$. For convenience, we assume that both $H$ and $W$ equal $2^{n}$.
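\par Before formalizing ZF, a quick numerical contrast (again an illustrative sketch of ours, reusing the \textit{xy2d} helper from the previous snippet) measures the largest squared 2D distance between pixels that are consecutive in the flattened sequence of a $2^{n}\times 2^{n}$ grid: the Hilbert ordering moves to an adjacent pixel at every step, whereas the row-major ``zigzag'' ordering jumps across the whole row width at the end of each row, which is exactly the worst case computed in closed form below.
\begin{verbatim}
import numpy as np

def max_step_sq(path):
    """Largest squared Euclidean distance between consecutive points of a path."""
    d = np.diff(np.asarray(path), axis=0)
    return int((d ** 2).sum(axis=1).max())

n = 6
side = 2 ** n
hilbert = sorted(((x, y) for x in range(side) for y in range(side)),
                 key=lambda p: xy2d(side, *p))                 # Hilbert ordering
raster = [(k % side, k // side) for k in range(side * side)]   # "zigzag" (raster) ordering

print(max_step_sq(hilbert))  # 1: consecutive tokens are always adjacent pixels
print(max_step_sq(raster))   # 3970 = 4**n - 2**(n+1) + 2: the jump at each row end
\end{verbatim}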
Given a real number $t\in [0,1]$ that can be represented in quaternary form with finite length, $t=0.q_{1}q_{2}\cdots q_{n}$, the ZF is defined through the map $\mathcal{Z}$ and its inverse $\mathcal{Z}^{-1}$ as follows: \begin{equation} \mathcal{Z}:0.q_{1}q_{2}\cdots q_{n} \mapsto \left( \begin{aligned} & (\sum\limits_{k=1}^{n}q_{k}4^{n-k}\% 2^{n}) * \frac{1}{2^{n}} + \frac{1}{2^{n+1}}\\ & \lfloor\frac{\sum\limits_{k=1}^{n}q_{k}4^{n-k}}{2^{n}}\rfloor* \frac{1}{2^{n}}+\frac{1}{2^{n+1}} \end{aligned} \right) \end{equation} \begin{equation} \mathcal{Z}^{-1}: [\frac{i}{2^{n}},\frac{j}{2^{n}}]\mapsto 0.q_{1}q_{2}\cdots q_{n}=\mathcal{Z}^{-1}([\frac{i}{2^{n}},\frac{j}{2^{n}}]), \end{equation} where $0\le i,j\le 2^{n}-1$. Let $t^{1}=0.\underbrace{00\cdots 0}_{\frac{n}{2}}\underbrace{33\cdots 3}_{\frac{n}{2}}$ and $t^{2}=0.\underbrace{00\cdots 0}_{\frac{n}{2}-1}1\underbrace{00\cdots 0}_{\frac{n}{2}}$, which are consecutive points in the interval $[0,1]$ with distance $\frac{1}{4^{n}}$. Then we have $\frac{|\mathcal{Z}(t^{1})-\mathcal{Z}(t^{2})|^{2}}{\frac{1}{4^{n}}}=\frac{(1-\frac{1}{2^{n}})^{2}+\frac{1}{4^{n}}}{\frac{1}{4^{n}}}=4^{n}-2^{n+1}+2$, and we obtain Remark \ref{ZF_dia} (this remark has been utilized in point cloud classification and segmentation tasks \cite{chen2022efficient}). \begin{remark}\label{ZF_dia} The square-to-linear dilation factor of the ZF curve is $\infty$ ($\lim\limits_{n\rightarrow\infty}(4^n-2^{n+1}+2)=\infty$). \end{remark} \begin{figure}[!tb] \begin{center} \includegraphics[width=\linewidth]{asd} \end{center} \caption{The heatmap of $-\log\text{ASD}^\frac{1}{2}$; the brightness of a pixel indicates how well 2D position information is preserved. The closer the pixels are to white, the better. } \label{Fig.ASD} \end{figure} We conclude that the ZF operation yields a sequence ordering of the image/feature map in which some consecutive points are distant in the original image. Now we explore to what extent HF/ZF preserve the original 2D structure. For each pixel $p$ at position $(i,j)$, we collect the neighbors within $K$ steps of $p$. In order to measure the extent of 2D position information preservation, we define the Average Square Distance (ASD) of those pixels with respect to $p$ as follows: \begin{equation} \textbf{ASD}(p)=\frac{\sum\limits_{k=i-K}^{i+K}\sum\limits_{l=j-K}^{j+K}(\mathcal{F}^{-1}(p_{kl})-\mathcal{F}^{-1}(p))^{2}}{\#(K \text{ step neighbors})}, \end{equation} where $\mathcal{F}^{-1}$ denotes the flattening index map under consideration ($\mathcal{H}^{-1}$ or $\mathcal{Z}^{-1}$). As shown in Figure \ref{Fig.ASD}, most points have a uniformly high $\text{ASD}$ value under the ZF mechanism, while a large percentage of points under HF have a small ASD value. Moreover, we give a quantitative comparison between ZF and HF in Table \ref{table:DSP} (order $6$) in terms of the percentage of points whose ASD$^{\frac{1}{2}}$ is smaller than a given threshold. This shows that HF has an advantage over ZF in preserving the original relative position information. \subsubsection{Scale Robustness} Take $\Omega=\mathbb{Z}_{2^n}\times \mathbb{Z}_{2^n}$ to be a two-dimensional $2^n\times 2^n$ grid, and regard an RGB image as a mapping $\Omega\rightarrow \mathbb{R}^{3}$. As noted by \cite{bronstein2021geometric}, the convolutional layers of CNNs are shift-equivariant. General equivariance is defined as follows. \begin{definition} A function $f:\mathcal{X}(\Omega)\rightarrow \mathcal{X}(\Omega)$ is $\mathcal{G}$-equivariant if $f(\rho(g)x)=\rho(g)f(x)$ for all $g\in \mathcal{G}$, i.e., a group action on the input affects the output in the same way, where $\mathcal{X}(\Omega)$ denotes all signals on domain $\Omega$.
\end{definition} \begin{definition} A function $f:\mathcal{X}(\Omega)\rightarrow \mathcal{X}(\Omega)$ is $\mathcal{G}$-robust if $f(\rho(g)x)\approx\rho(g)f(x)$ for all $g\in \mathcal{G}$, i.e., a group action on the input affects the output in approximately the same way, where $\mathcal{X}(\Omega)$ denotes all signals on domain $\Omega$. \end{definition} Consider the $n$-th order and $(n+1)$-th order approximations of the Hilbert curve mapping: {\small \begin{equation} \begin{aligned} &0.q_{1}q_{2}\cdots q_{n}\mapsto \sum\limits_{j=1}^{n}\frac{1}{2^{j}}(-1)^{e_{0j}}\text{sgn}(q_{j})\left( \begin{aligned} &(1-d_{j})q_{j}-1\\ &1-d_{j}q_{j} \end{aligned} \right)\\ &0.q_{1}q_{2}\cdots q_{n}q_{n+1}\mapsto \sum\limits_{j=1}^{n+1}\frac{1}{2^{j}}(-1)^{e_{0j}}\text{sgn}(q_{j})\left( \begin{aligned} &(1-d_{j})q_{j}-1\\ &1-d_{j}q_{j} \end{aligned} \right). \end{aligned} \end{equation}} Geometrically, passing from order $n$ to order $n+1$ divides the order-$n$ approximation of the Hilbert curve uniformly between every pair of end points into three parts, moves the middle part away from the original curve by a distance of $\frac{1}{2^{n+1}}$, and finally reconnects the moved part to the end points of the other two parts. (See Figure \ref{Hilbert-and-Zigzag-curve} with $n=3$ and $n=4$). Given an image $I$ of size $2^{n+1}\times 2^{n+1}$, we utilize the $(n+1)$-th order HF to flatten it. We denote the image/feature after flattening as $\mathcal{H}_{n+1}(I)$. On the other hand, we scale down the image $I$ into an image $I_{\frac{1}{2}}$ of size $2^{n}\times 2^{n}$. We denote the image/feature after the $n$-th order HF as $\mathcal{H}_{n}(I_{\frac{1}{2}})$. Commonly, adjacent pixels contain similar information with high probability. Hence $\mathcal{H}_{n}(I_{\frac{1}{2}})$ and $\mathcal{H}_{n+1}(I)$ satisfy the following condition: \begin{equation} (\mathcal{H}_{n+1}(I))_{\frac{1}{2}}\approx \mathcal{H}_{n}(I_{\frac{1}{2}}), \end{equation} where the subscript $\frac{1}{2}$ denotes downsampling the image with ratio $\frac{1}{2}$. Considering the scale operation group $\mathcal{S}=\{2^{-m}\mid m\in \mathbb{Z}\}$, we have \begin{equation} (\mathcal{H}_{n+m}(I))_{2^{-m}}\approx \mathcal{H}_{n}(I_{2^{-m}}). \end{equation} In conclusion, as $n$ approaches infinity, the Hilbert curve mapping is $\mathcal{S}$-robust. \subsection{Applications} \label{app} With the above analysis, we suppose that Hilbert Flattening can be applied to computer vision tasks. For example, we propose a new patch embedding approach for vision transformer models based on the position information preservation of HF. We also introduce a Hilbert sampling method for the commonly used feature down-/up-sampling operations in neural network structures, based on the scale robustness of HF. The proposed Hilbert sampling can also be utilized in traditional digital image processing tasks, e.g., image interpolation. \subsubsection{Hilbert Patch Embedding} \label{HPE} \begin{figure}[!tb] \begin{center} \includegraphics[width=\linewidth]{HPE.pdf} \end{center} \caption{Hilbert patch embedding (HPE). The proposed HPE consists of two critical modules, i.e., Hilbert Patch Flattening (HPF) and Token Aggregator (TA). HPF does not move initially adjacent image blocks (semantically related patches) away from each other; e.g., the head of the cat (red dotted line) remains clustered together after slicing, and its position in the sequence does not change. See Section \ref{HPE} for a more detailed introduction.
} \label{Fig.HPE} \end{figure} Tokens extracted from image patches in transformers are ordered in zigzag by default. We employ HF to reorder these tokens, as in Fig. \ref{Fig.HPE}, and introduce a new patch embedding method for Transformer models named Hilbert patch embedding. As shown in Fig. \ref{Fig.HPE}, the Hilbert patch flattening and token aggregator modules collaborate to complete the encoding of patches and positions for the subsequent self-attention layers. Different from RNNs and CNNs, position encoding is essential for Transformer models because the self-attention module only calculates the similarity between tokens. Two similar tokens may be far apart, but the distance information between these tokens is vital for some computer vision tasks (e.g., object localization). Generally, existing position encoding methods use either relatively simple and direct absolute encoding, which is crucial for detection and localization tasks, or relative encoding. Absolute encoding is more sensitive than relative position encoding to both the size of the input image and the data inductive bias (e.g., the object always being in the center). From this perspective, we suppose that Hilbert Flattening can alleviate the scale transformation problem of existing absolute encoding methods. Hilbert patch flattening is an unfolding operation for image patches which, unlike the orthodox ``zigzag'' unfolding (\textit{e.g.,} \textit{torch.flatten(·)}), flattens image patches in the order of the Hilbert fractal, as shown in Fig. \ref{Fig.HPE}. Two adjacent tokens in an image tend to be closely related \cite{coatnet, t2t}, which is a widely used assumption in computer vision (a.k.a. locality). As introduced in Remark \ref{ZF_dia}, the ``zigzag'' unfolding can move adjacent patches far away from each other, while HF does not. After HPF, it is guaranteed that similar tokens will be close in the flattened one-dimensional sequence. This is necessary for our token aggregator module to work. To let self-attention take advantage of this inductive bias (i.e., locality), we introduce a generic token aggregator, which enhances the similarity between adjacent tokens. Specifically, consider one input sequence, e.g., the embedding of image patches $ {\cal X}=\{x_1,x_2, \cdots, x_n\}$, where $x_i \in \mathbb{R}^{D} $ and $n$ is the number of patches. The output of TA is ${\cal X'}=\{x'_1,x'_2, \cdots, x'_n\}$, and each output element $x'_i$ is computed by Eq. \ref{eq:TA} below. \begin{equation} \label{eq:TA} x'_i = {\cal F}(x_i, k)U_j, \end{equation} where $j \in \{1,2, \cdots, D\}$ indexes the output channels, $U \in \mathbb{R}^{D \times k}$ is a learnable parameter matrix used to integrate the embedded tokens, and ${\cal F}$ is a function that, given $x_i$, returns the neighborhood of $x_i$ of radius $k$ in the flattened sequence. In this way we obtain the aggregated embeddings of image patches $x'_i \in \mathbb{R}^{D}$ and feed them into the subsequent self-attention blocks. \subsubsection{Hilbert Feature Sampling} In the feature learning of deep models, it is inevitable to perform up- and down-sampling operations on the feature maps, \textit{e.g.,} \ in super-resolution. Common down-sampling operations include pooling, convolution with a stride larger than or equal to 2, etc.
Down-sampling operations inevitably lead to information loss; hence a good down-sampling method is required to preserve as much consistent information of the original feature map as possible, where consistency refers to location information or local/global inductive bias. From this perspective, we introduce the Hilbert feature down-sampling approach for feature extraction, based on the scale robustness of the Hilbert curve in dimensional transformation. Specifically, unlike traditional ``Zigzag''-based sampling, we discard or select features or pixels along the Hilbert index to ensure that the selected feature map has better multi-scale consistency (see Appendix \ref{Experiment Details} for details). \begin{table}[!tb] \caption{Recognition accuracy of different patch embedding methods on ImageNet with multiple models. \dag\ and \ddag\ denote results reported in \cite{ViT} and \cite{swin2}, respectively. $^{\ast}$ means that some parameters are reduced. } \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{l|cccc} \toprule[1pt] \multirow{2}*{Models} & \multirow{2}*{Methods} & \multirow{1}*{\#Param} & \multirow{1}*{Top-1} & \multirow{1}*{Top-5} \\ ~ & ~ & (M) & (\%) & (\%) \\ \toprule[0.5pt] ViT-S/16 & Original &22.1 & 78.10\dag & - \\ ViT-B/16 & Original &86 & 79.80\dag & - \\ ViT-L/16 & Original &304 & 81.10\dag & - \\ DeiT-S & Original &22.1 & 79.90\ddag & - \\ T2T-ViT-14 &Original &21.5 & 80.70\ddag & - \\ Swin-T & Original &28.3 &81.30\ddag & - \\ Swin-S3-T & Original &28.1 & 82.10\ddag & 95.80 \\ \toprule[0.5pt] ViT-T/16 & Original &5.7 & 70.38 & 88.75 \\ ViT-S/14 & Original &22 & 80.10 & 95.19 \\ ViT-B/16 & Original &86 & 82.02 & 95.78 \\ ViT-L/14 & Original &304 & 82.98 & 96.17 \\ Swin-S3-T & Original &28.1 & 82.08 & 95.63 \\ \toprule[0.5pt] ViT-T/16 & HPE &5.7$^{\ast}$ & 70.91 & 89.22 \\ ViT-S/14 & HPE &22$^{\ast}$ & 80.41 & 95.33 \\ ViT-B/16 & HPE &86$^{\ast}$ & 82.47 & 96.00 \\ ViT-L/14 & HPE &304$^{\ast}$ & 83.43 & 96.44 \\ Swin-S3-T & HPE &28.1$^{\ast}$ & 82.34 & 95.85 \\ \toprule[1pt] \end{tabular}} \end{center} \label{table:vit_imagenet} \end{table} \section{Experiments} We implement the applications mentioned in Sec. \ref{app} and validate them on several mainstream benchmarks with popular model architectures and settings. Specifically, we verify the effectiveness of the proposed HF in Hilbert patch embedding (Sec. \ref{exp_HPE}) and Hilbert feature sampling (Sec. \ref{exp_FS}). Moreover, we compare the Hilbert Flattening and Zigzag Flattening methods using the same metrics, e.g., image scaling (Sec. \ref{exp_IS}) and dynamic time warping distance (Sec. \ref{exp_DTW}). More experimental settings, results, and visualizations can be found in the Appendix. \textbf{Experimental Setup.} We utilize the same model architectures in all baseline settings to compare performance fairly. With limited computational resources, we do not target practice-oriented CV benchmarks and do not rely heavily on training tricks. Notably, we only change the flattening method in all experiments; other settings, including software and hardware, are strictly consistent. \subsection{Hilbert Patch Embedding} \label{exp_HPE} As shown in Fig.\ref{Fig.HPE}, the proposed HPE utilizes the Hilbert flattening strategy and a token aggregator module for better image patch embedding. We employ ViTs and Swin as the backbones for image classification on ImageNet \cite{deng2009imagenet}.
We conduct all experiments by utilizing the same comparison protocols and data augmentation as \cite{Radosavovic2020,deit}. The results are reported in Table \ref{table:vit_imagenet}. For ViTs at different scales (\textit{e.g.,} ViT-T, ViT-S, ViT-B, and ViT-L), our HPE improves them by 0.53\%, 0.31\%, 0.45\%, and 0.45\% (Top-1 Acc) over their original methods, respectively. Those improvements are clear and consistent regardless of model size, demonstrating the advantages of the proposed HPE over the original patch embedding. Also, we show that our HPE can bring gains (0.24\%) to the Swin model, which applies both relative and absolute position encoding methods. One can note that the image classification task is relatively reliant on relative position encoding. \begin{table}[!tb] \caption{Ablation study. The effect of Hilbert patch flattening (HPF) and token aggregator (TA) on the recognition accuracy of ViT-B/16 and ViT-T/2 on ImageNet and CIFAR-10, respectively. } \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{l|lcccc} \toprule[1pt] Methods & Dataset & HPF & TA & Top-1\% &Top-5\% \\ \toprule[0.5pt] ViT-B/16 & ImageNet & - &- & 82.02 & 95.78 \\ ViT-B/16 & ImageNet &$\checkmark$&- & 82.19 & 95.82 \\ ViT-B/16 & ImageNet &- &$\checkmark$ & 82.14 & 95.81 \\ ViT-B/16 & ImageNet &$\checkmark$&$\checkmark$ & \textbf{82.47} & \textbf{96.00} \\ \toprule[0.5pt] ViT-T/2 & CIFAR-10 & - & - & 83.85 & 99.00 \\ ViT-T/2 & CIFAR-10 & - & $\checkmark$ & 85.25 & 99.08 \\ ViT-T/2 & CIFAR-10 & $\checkmark$ & - & \textbf{86.32} & \textbf{99.15} \\ ViT-T/2 & CIFAR-10 & $\checkmark$ & $\checkmark$ & 85.71 & 99.10 \\ \toprule[1pt] \end{tabular}} \end{center} \label{vit_abla} \end{table} \paragraph{Ablation Study. } To verify the effectiveness of the two proposed modules in HPE separately, we carry out image classification experiments on both ImageNet and CIFAR-10 \cite{krizhevsky2014cifar}. As shown in Table \ref{vit_abla}, both HPF and TA achieve non-negligible improvements over the original patch embedding (i.e., HPF gains 0.17\% on ImageNet and 2.47\% on CIFAR-10, and TA gains 0.12\% on ImageNet and 1.40\% on CIFAR-10). Moreover, HPF and TA are orthogonal on ImageNet: their combination results in better performance than either one alone. Specifically, the best score on CIFAR-10 is achieved using only HPF, which indicates that Transformer models suffer from overfitting on small datasets (when training from scratch). \begin{table}[!tb] \caption{ Finetune recognition accuracy of different patch embedding methods on ImageNet with the ViT-S/14 and ViT-B/16 models. W/o finetune means taking the original model trained with image size $224^2$ and directly testing it with larger inputs.} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{l|l|c|c|c|c|cc} \toprule[1pt] \multirow{2}*{Models} & \multirow{2}*{Methods} & \multirow{1}*{Train} & \multirow{1}*{Test} & \multirow{1}*{Val. Top-1\%} & \multirow{1}*{Finetune} & \multicolumn{2}{c}{W/o Finetune Top-1\%} \\ ~ & ~ & Res. & Res. & Res. $224^2$ & Top-1\% & W/ 1D Interp. & W/ 2D Interp.
\\ \toprule[0.7pt] ViT-S/14 & Original &$224^2$ &$336^2$ &80.10 & 81.99 & 69.89 & 80.50 \\ ViT-S/14 & HPE &$224^2$ &$336^2$ &80.41 &82.91 & \textbf{78.61} & 80.89 \\ ViT-B/16 & Original &$224^2$ &$384^2$ &81.88 &83.37 & 66.03 & 81.51 \\ ViT-B/16 & HPE &$224^2$ &$384^2$ &\textbf{82.13} &\textbf{83.70} & 69.69 & \textbf{81.67} \\ \toprule[1pt] \end{tabular}} \end{center} \label{table:vit_finetune} \end{table} \begin{figure*}[!tb] \begin{center} \includegraphics[width=\linewidth]{pes.pdf} \end{center} \caption{ Similarity of position embeddings of ViT-B/16 for different patch embedding approaches. The brightness of a pixel indicates the cosine similarity between the position embedding of the patch at the indicated row and column and those of the other patches. For clarity, we extract the central $4\times 4$ positions of all $14\times 14$ patches for visualization. The two figures on the right are obtained by linear interpolation of the original position embedding; similarly to the left figures, we extract the central $6\times 6$ positions of all $24\times 24$ patches for visualization. See the Appendix for more details. } \label{fig:pe_plot} \end{figure*} To compare the transfer ability of the models at different input image scales, we choose pre-trained models trained with Hilbert patch embedding and with the original patch embedding, respectively, and test the fine-tuning performance and the position-encoding-interpolation-based performance separately. As reported in Table \ref{table:vit_finetune}, the experimental results show that our HPE has better multi-scale transfer capability. Also, without fine-tuning, using only 1D position encoding interpolation, our method achieves an absolute lead (i.e., 8.72\% on ViT-S and 3.66\% on ViT-B). To check that our method has better positional encoding capability, we visualize the similarity of position embeddings, as shown in Fig. \ref{fig:pe_plot}. We can see that the absolute position encoding learned with our HPE method is more accurate, which is one of the reasons why our method brings classification performance improvements on ImageNet. We also plot the similarity of the position embeddings after linear interpolation (the two figures on the right), and we can see that, compared to the original patch embedding method, Hilbert flattening retains most of the position information. \begin{table}[!tb] \begin{center} \caption{Recognition accuracy of different feature sampling methods on CIFAR-10. ``HF'' indicates sampling image patches along the Hilbert flattening index. ``ZF'' means using the ``Zigzag'' flattening index. ``Overlap'' indicates that there is an overlap of sampling points.} \resizebox{\linewidth}{!}{ \begin{tabular}{l|ccccc} \toprule[1pt] Models& Methods & Overlap &Top-1\% &Top-5\% \\ \toprule[0.5pt] Mixer-B/4 &ZF & - &79.73 & 98.34 \\ Mixer-B/4 &HF & - &\textbf{80.59} & \textbf{98.55} \\ Mixer-B/4 &ZF & $\checkmark$ &80.48 & 98.32 \\ Mixer-B/4 &HF & $\checkmark$ &\textbf{81.68} & \textbf{98.57} \\ Mixer-B/8 &ZF & - &83.58 & 98.71 \\ Mixer-B/8 &HF & - &\textbf{84.52} & \textbf{98.83} \\ \toprule[0.5pt] FPN-MLP &ZF & - &81.42 & 99.18 \\ FPN-MLP &HF & - &\textbf{85.71} & \textbf{99.58} \\ \toprule[1pt] \end{tabular}} \end{center} \label{table:mixer_cifar10} \end{table} \subsection{Semantic Segmentation } Our dense prediction (i.e., semantic segmentation) experiments are conducted on the ADE20K dataset, as reported in Table \ref{table:seg_ade}.
We adopt the popular Upernet framework (based on mmseg \cite{mmseg2020}) for a fair comparison. We simply apply the settings of Swin Transformer and train the models for 80k iterations. The drop path rates are set to 0.15/0.35 for the small/base variants with Upernet, respectively. As shown in Table \ref{table:seg_ade}, we can see that the performance impact of position encoding on segmentation is non-trivial. Compared with the original ViT models, our HPE method yields stable gains (0.71\% mIoU on ViT-S and 0.86\% mIoU on ViT-B). We also plot the learning process of the position encoding during model training for our HPE (right) and the original (left) patch embedding methods, as shown in Appendix Fig. \ref{fig:pos_iter}. These results further suggest that our HPE has better multi-scale adaptability. \begin{table*}[!tb] \caption{ Semantic segmentation with the Upernet 80K \protect\cite{upernet} framework on ADE20K \cite{zhou2017scene}. Backbones are taken from our HPE and the original ViT pre-trained on ImageNet, respectively. ``Z-Linear'' means that the position embedding is down-/up-sampled with linear interpolation along the Zigzag index. \dag\ denotes the results reported in \protect\cite{uniformer} with Upernet 160K. The FLOPs are measured at resolution $512 \times 512$; \ddag\ means the input image size is $512 \times 2048$. $^*$ denotes without adding deconvolution. } \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{l|l|cccc} \toprule[1pt] \multirow{2}*{Backbones} & \multirow{2}*{Methods} & \multicolumn{4}{c}{Upernet 80K} \\ ~ & ~ & \#Param.(M) & FLOPs(G) & mIoU(\%) & mAcc(\%) \\ \toprule[0.7pt] ResNet-50\cite{swin2} & - & 67 & 951\ddag & 42.05\dag & - \\ DeiT-B$^{*}$\cite{swin2} & Original & 121 & 2772\ddag & 44.09\dag & - \\ Swin-T\cite{swin} & Original & 60 & 945\ddag & 44.50\dag & - \\ Swin-S3-T\cite{swin2} & Original & 60 & 954\ddag & 44.87\dag & - \\ Focal-T\cite{focal} & Original & 62 & 998\ddag & 45.80\dag & - \\ Twins-S\cite{twins} & Original & 54 & 901\ddag & 46.20\dag & - \\ \toprule[0.5pt] ViT-S/14-336 & Original & 58 & 326 & 44.05 & 55.02 \\ ViT-S/14-336 & HPE & 58 & 326 & 44.76 & 56.22 \\ ViT-S/14-336 & Z-Linear & 58 & 326 & 42.22 & 53.22 \\ \toprule[0.5pt] ViT-B/16-384 & Original & 144 & 395 & 45.48 & 57.15 \\ ViT-B/16-384 & HPE & 144 & 395 & 46.34 & 57.54 \\ ViT-B/16-384 & Z-Linear & 144 & 395 & 44.28 & 55.80 \\ \toprule[1pt] \end{tabular}} \end{center} \label{table:seg_ade} \end{table*} \begin{figure*}[!tb] \begin{center} \includegraphics[width=\linewidth]{segmentation.pdf} \end{center} \caption{ Qualitative results of semantic segmentation on ADE20K \cite{zhou2017scene}. } \label{fig:seg} \end{figure*} \subsection{Feature Sampling} \label{exp_FS} To show that better feature sampling can enhance the stability of multi-scale representations, an image-pyramid-based MLP architecture is employed (see Appendix Fig. \ref{fig:fpn_mlp} and Appendix Table \ref{table:fpn_mlp} for details). We conduct experiments on CIFAR-10 with the commonly used protocols and data augmentation \cite{lee2015deeply}. Table \ref{table:mixer_cifar10} presents the results of different feature sampling methods. The HF-based method outperforms the ZF baseline by obvious margins (4.29\%). We suppose that, compared to ZF, HF provides the model with stronger stability of multi-scale representations. We also conduct experiments on the MLP model (i.e., MLP-Mixer) with different patch flattening approaches (a.k.a. embedded patch sampling).
As shown in Table \ref{table:mixer_cifar10}, the proposed HF-based sampling is effective for MLP-Mixer and achieves significant improvement over the Zigzag flattening method. Notably, when we utilize the overlapping strategy, the gap widens. \section{Conclusion} In this paper, we propose a flattening alternative named Hilbert Flattening. Compared with ``zigzag'', HF preserves locality in the original 2D form well and is robust to scale changes. Regarding the merits of HF, we theoretically evaluate the square-to-linear dilation factor of the finite approximation of the Hilbert curve, and propose the Average Square Distance to compare inverse HF with inverse ZF. All these advantages are supported by theoretical analysis and experimental results. Based on the above theory, we propose Hilbert patch embedding and Hilbert feature sampling methods for ViTs and other networks. The code will be released upon acceptance. \bibliographystyle{unsrtnat}
\section{Introduction} \label{remmorepictures} \subsection{Historical background} The notion of heat kernel has a long history. The oldest and the best-known heat kernel is the Gauss--Weierstrass function \[ p_{t}( x,y) =\frac{1}{( 4\pi t) ^{n/2}}\exp \biggl( -\frac{\vert x-y\vert^{2}}{4t}\biggr) , \] where $t>0$ and $x,y\in\mathbb{R}^{n}$, which is a fundamental solution of the heat equation \begin{equation} \label{heat} \frac{\partial u}{\partial t}=\Delta u, \end{equation} where $\Delta$ is the Laplace operator in $\mathbb{R}^{n}$. A more general parabolic equation $\frac{\partial u}{\partial t}=Lu$, where \[ L=\sum_{i,j=1}^{n}\frac{\partial}{\partial x_{i}}\biggl( a_{ij}( x)\, \frac{\partial}{\partial x_{j}}\biggr) \] is a uniformly elliptic operator with measurable coefficients $a_{ij}=a_{ji}$, also has a positive fundamental solution $p_{t}( x,y) $, and the latter admits the Gaussian bound \begin{equation} \label{Gaussian} p_{t}( x,y) \asymp\frac{C}{t^{n/2}}\exp\biggl( -\frac {\vert x-y\vert^{2}}{ct}\biggr) , \end{equation} where the sign $\asymp$ means that both $\leq$ and $\geq$ are true, but the positive constants $c$ and $C$ may be different for upper and lower bounds. The estimate~(\ref{Gaussian}) was proved by Aronson \cite{Aron} using the parabolic Harnack inequality of Moser \cite{Moser}. The next chapter in the history of heat kernels was opened in differential geometry. Consider the heat equation (\ref{heat}) on a Riemannian manifold~$M$, where $\Delta$ is now the Laplace--Beltrami operator on $M$. The heat kernel $p_{t}( x,y) $ is defined as the minimal positive fundamental solution of~(\ref{heat}), which always exists and is a smooth nonnegative function of $t,x,y$; cf. \cite{Chavbook,Grigbook,SchoenYauBook}. The question of estimating the heat kernel on Riemannian manifolds was addressed by many authors (see, e.g., \cite{Davbook,Grigbook,MolchSurvey,Varbook}). Apart from obvious analytic and geometric motivation, a strong interest in heat kernel estimates persists in stochastic analysis because the heat kernel coincides with the transition density of Brownian motion on $M$ generated by the Laplace--Beltrami operator. One of the most powerful estimates of heat kernels was proved by Li and Yau~\cite{LiYau}: if $M$ is a complete Riemannian manifold of nonnegative Ricci curvature, then \begin{equation}\label{LiYau} p_{t}( x,y) \asymp\frac{C}{V( x,\sqrt{t}) }\exp \biggl( -\frac{d^{2}( x,y) }{ct}\biggr) , \end{equation} where $d( x,y) $ is the geodesic distance on $M$, and $V( x,r) $ is the Riemannian volume of the geodesic ball $B( x,r) =\{ y\in M\dvtx d( x,y) <r\} $. Similar estimates were obtained by Gushchin with coauthors \cite{Gush,GushM} for certain unbounded domains in $\mathbb{R}^{n}$ with the Neumann boundary condition. An interesting question is what minimal geometric assumptions imply~(\ref{LiYau}). The upper bound in (\ref{LiYau}) is known to be equivalent to a certain \textit{Faber--Krahn}-type inequality (see Section \ref{SecEF}). The geometric background of the lower bound in (\ref{LiYau}) is more complicated and is closely related to the Harnack inequalities. In fact, the full estimate (\ref{LiYau}) is equivalent, on one hand, to the parabolic Harnack inequality of Moser (see \cite{FS}), and, on the other hand, to the conjunction of the volume doubling property and the Poincar\'{e} inequality (see \cite{GrigHar,SalHar}).
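As an elementary illustration of the form of (\ref{LiYau}): on $M=\mathbb{R}^{n}$ one has $d( x,y) =\vert x-y\vert$ and $V( x,\sqrt{t}) =\omega_{n}t^{n/2}$, where $\omega_{n}$ is the volume of the unit ball, so (\ref{LiYau}) reduces, up to the values of the constants $c,C$, to the Gaussian bound (\ref{Gaussian}); the Gauss--Weierstrass kernel itself corresponds to $c=4$ and $C=\omega_{n}( 4\pi) ^{-n/2}$.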
For a more detailed account of heat kernel bounds on manifolds we refer the reader to the books and surveys~\cite{Chavbook,Davbook,GrigNotes,GrigW,Grigbook,Jorg,SchoenYauBook,Varbook}. New dimensions in the history of heat kernels were literally discovered in analysis on fractals. Fractals are typically subsets of $\mathbb{R}^{n}$ with certain self-similarity properties, like the Sierpinski gasket ($\mathrm{SG}$) or the Sierpinski carpet ($\mathrm{SC}$). One makes a fractal into a metric measure space by choosing appropriately a metric $d$ (e.g., the extrinsic metric from the ambient~$\mathbb{R}^{n}$) and a~measure~$\mu$ (usually the Hausdorff measure). The next crucial step is the introduction of a strongly\vspace*{1pt} local regular \textit{Dirichlet form} on a fractal, that is, an analogy of the Dirichlet integral $\int\vert\nabla f \vert ^{2} $ on manifolds, which is equivalent to the construction of Brownian motion on the fractal in question; cf. \cite{FOT}. This step is highly nontrivial and its implementation depends on the particular class of fractals. On $\mathrm{SG}$ Brownian motion was constructed by Goldstein~\cite{Goldstein} and Kusuoka \cite{Kus}, and on $\mathrm{SC}$ by Barlow and Bass \cite{BarBasCon}. Kigami \cite{Kigami,Kigamibook} introduced a~class of \textit{post-critically finite} (p.c.f.) fractals, containing $\mathrm{SG}$, and constructed the Dirichlet form on such a fractal as a scaled limit of the discrete Dirichlet forms on the graph approximations. A strongly local regular Dirichlet form canonically leads to the notion of the heat semigroup and the heat kernel, where the latter can be defined either as the integral kernel of the heat semigroup or as the transition density of Brownian motion. Surprisingly enough, the Dirichlet forms on many families of fractals admit continuous heat kernels that satisfy the \textit{sub-Gaussian} estimate \begin{equation}\label{SubGauss} p_{t}( x,y) \asymp\frac{C}{t^{\alpha/\beta}}\exp\biggl( -c\biggl( \frac{d^{\beta}(x,y)}{t}\biggr) ^{{1}/({\beta -1})}\biggr) , \end{equation} where $\alpha>0$ and $\beta>1$ are two parameters that come from the geometric properties of the underlying fractal. Estimate (\ref{SubGauss}) was proved by Barlow and Perkins \cite{BarPerGas} on $\mathrm{SG}$, by Kumagai \cite{Kumagai} on nested fractals, by Fitzsimmons, Hambly and Kumagai \cite{FHK} on affine nested fractals and by Barlow and Bass on $\mathrm{SC}$ \cite{BarBasTran} and on generalized Sierpinski carpets \cite{BarBas} (see also \cite{Barlow,BBKStab,Kigamibook,KigamiVD,KusZhou}). In fact, $\alpha$ is the Hausdorff dimension of the space, while $\beta$ is a new quantity that is called the \textit{walk dimension} and that can be characterized either in terms of the exit time of Brownian motion from balls or as the critical exponent of a family of Besov function spaces on the fractal (cf.~\cite{Barlow,GrigHuLau,GrigIHP,GrigHGA}).\looseness=-1 \subsection{Description of the results} The purpose of this paper is to find convenient equivalent conditions for sub-Gaussian estimates of the heat kernels on abstract metric measure spaces. Let $( M,d) $ be a locally compact separable metric space, $\mu$~be a Radon measure on $M$ with full support and $( \mathcal{E},\mathcal{F}) $ be a strongly local regular Dirichlet form on $M$ (see Section \ref{SecBasic} for the details).
We are interested in the conditions that ensure the existence of the heat kernel $p_{t}( x,y) $ as a measurable or continuous function of $x,y$, and the estimates of the following type: \begin{equation} \label{RFi} p_{t}( x,y) \asymp\frac{C}{V( x,\mathcal{R} ( t) ) }\exp\biggl( -ct\Phi\biggl( c\frac{d( x,y) }{t}\biggr) \biggr) , \end{equation} where $V( x,r) =\mu( B( x,r) ) $ and $\mathcal{R}( t) $, $\Phi( s) $ are some nonnegative increasing functions on $[0,\infty)$. For example, (\ref{LiYau}) has the form (\ref{RFi}) with $\mathcal{R}( t) =\sqrt{t}$ and $\Phi ( s) =s^{2}$, while (\ref{SubGauss}) has the form (\ref{RFi}) with $\mathcal{R}( t) =t^{1/\beta}$ and $\Phi( s) =s^{{\beta}/({\beta-1})}$ [assuming that\setcounter{footnote}{2}\footnote{The sign $\simeq$ means that the ratio of both sides is bounded between two positive constants.} $V( x,r) \simeq r^{\alpha}$, which, in fact, follows from (\ref{SubGauss})]. To describe the results of the paper, let us introduce some hypotheses. First, we assume that the metric space $( M,d) $ is unbounded and that all metric balls are precompact\footnote{The precompactness of balls implies that $( M,d) $ is a complete metric space. The following partial converse is also true: if $( M,d) $ is complete and the volume doubling property (\ref{VD}) holds, then all balls are precompact. However, since we do not always assume (\ref{VD}), we make an independent assumption of precompactness of the balls.} (although these assumptions are needed only for a part of the results). Next, define the following conditions: $\bullet$ the volume doubling property (\ref{VD}): there is a constant $C$ such that {\renewcommand{\theequation}{\textit{VD}} \begin{equation}\label{VD} V( x,2r) \leq CV( x,r) \end{equation}} for all $x\in M$ and $r>0$;\vadjust{\goodbreak} $\bullet$ the elliptic Harnack inequality (\ref{H}): there is a constant $C$ such that, for any nonnegative harmonic function $u$ in any ball $B( x,r) \subset M$, {\renewcommand{\theequation}{$H$} \begin{equation}\label{H} \limfunc{esup}_{B( x,r/2) }u\leq C\limfunc{einf}_{B( x,r/2) }u, \end{equation}} \vspace*{-8pt} \noindent where $\limfunc{esup}$ and $\limfunc{einf}$ are the essential supremum and infimum, respectively (see Section \ref{SecHarnack} for more details); $\bullet$ the estimate of the mean exit time (\ref{EF}), {\renewcommand{\theequation}{${E}_{F}$} \begin{equation}\label{EF} \mathbb{E}_{x}\tau_{B( x,r) }\simeq F( r) , \end{equation}} \vspace*{-\baselineskip} \noindent where $\tau_{B( x,r) }$ is the first exit time from the ball $B( x,r) $ of the associated diffusion process, started at the center $x$, and $F( r) $ is a given function with a certain regularity (see Section \ref{SecEF} for more details). A typical example is $F( r) =r^{\beta}$ for some constant $\beta>1$. The conditions $\mbox{(\ref{H})} + \mbox{(\ref{VD})} + \mbox{(\ref{EF})} $ are known to be true on p.c.f. fractals (see \cite{Kigamibook,HamblyKum}) as well as on generalized Sierpinski carpets (see \cite{BarBas,BBKT}), so that our results apply to such fractals. Another situation where $\mbox{(\ref{H})} +\mbox{(\ref{VD})} +\mbox{(\ref{EF})} $ are satisfied is the setting of \textit{resistance forms} introduced by Kigami~\cite{KigamiRes}. A resistance form is a specific Dirichlet form that corresponds to a~strongly recurrent Brownian motion. Kigami showed that, in the setting of resistance forms on self-similar sets, (\ref{VD}) alone implies (\ref{H}) and (\ref{EF}) with $F( r) =r^{\beta}$, for a suitable choice of a distance function.
Examples with more general functions $F( r) $ appear in \cite{BarHam} and \cite{Telcsbook}. Let us emphasize in this connection that our results do not depend on the recurrence or transience hypotheses and apply to both cases, which partly explains the complexity of the proofs. A transient case occurs, for example, for some generalized Sierpinski carpets. Another point worth mentioning is that we do not assume specific properties of the metric $d$ such as being geodesic; the latter is quite a common assumption in the fractal literature. This level of generality enables applications to resistance forms where the distance function is usually the resistance metric that is not geodesic. Our first main result, which is stated in Theorem \ref{Tmain} and which, in fact, is a combination of Theorems \ref{TG=>FK}, \ref{TDUE}, \ref{THolder}, \ref{TNLE}, says the following: if the hypotheses $\mbox{(\ref{VD})} +\mbox{(\ref{H})} +\mbox{(\ref{EF})}$ are satisfied, then the heat kernel $p_{t}( x,y) $ exists, is H\"{o}lder continuous in $x,y$ and satisfies the following upper estimate {\renewcommand{\theequation}{\textit{UE}} \begin{equation}\label{UE} p_{t}( x,y) \leq\frac{C}{V( x,\mathcal{R}( t) ) }\exp\biggl( -\frac{1}{2}t\Phi\biggl( c\frac{d( x,y) }{t}\biggr) \biggr), \end{equation}} \vspace*{-8pt} \noindent where $\mathcal{R} = F^{-1}$ and \[ \Phi( s) :=\sup_{r>0}\biggl\{ \frac{s}{r}-\frac {1}{F( r) }\biggr\} , \] and the \textit{near-diagonal} lower estimate {\renewcommand{\theequation}{\textit{NLE}} \begin{equation}\label{NLE} p_{t}( x,y) \geq\frac{c}{V( x,\mathcal{R}( t) ) } \qquad\mbox{provided }d( x,y) \leq\eta\mathcal {R}(t), \end{equation}} \vspace*{-13pt} \noindent where $\eta>0$ is a small enough constant. Furthermore, assuming that (\ref{VD}) holds a priori, we have the equivalence\footnote{For comparison, let us observe that, under the same standing assumptions, it was proved in \cite{BarGrigKumHar} that \[ \mbox{(\ref{UE})} + \mbox{(\ref{NLE})} \Leftrightarrow( \mathit{PHI}_{F}), \] where $(\mathit{PHI}_{F}) $ stands for the \textit{parabolic} Harnack inequality for caloric functions. Hence, we see that the ``difference'' between $( \mathit{PHI}_{F}) $ and (\ref{H}) is the condition (\ref{EF}), that in particular provides a necessary space/time scaling for $( \mathit{PHI}_{F}) $.} \setcounter{equation}{5} \begin{equation} \label{eq} \mbox{(\ref{UE})} + \mbox{(\ref{NLE})} \Leftrightarrow\mbox{(\ref{H})} +\mbox{(\ref{EF})} \end{equation} (Theorem \ref{Tconv}). For example, if $F( r) =r^{\beta}$ for some $\beta>1$, then $\mathcal{R}( t) =t^{1/\beta}$ and $\Phi( s) =\func{const}\,s^{{\beta}/({\beta-1})}$. Hence, (\ref{UE}) and (\ref{NLE}) become as follows: \begin{equation}\label{subu} p_{t}( x,y) \leq\frac{C}{V( x,t^{1/\beta}) }\exp\biggl( -c\biggl( \frac{d^{\beta}(x,y)}{t}\biggr) ^{{1}/({\beta-1})}\biggr) \end{equation} and \[ p_{t}( x,y) \geq\frac{c}{V( x,t^{1/\beta})}\qquad \mbox{provided }d( x,y) \leq\eta t^{1/\beta}. \] It is desirable to have a lower bound of $p_{t}( x,y) $ for all $x,y$ that would match the upper bound (\ref{subu}). However, such a lower bound fails in general. The reason for that is the lack of \textit{chaining properties} of the distance function, where by chaining properties we loosely mean a possibility to connect any two points $x,y\in M$ by a chain of balls of controllable radii so that the number of balls in this chain is also under control.
More precisely, this property can be stated in terms of the modified distance $d_{\varepsilon}( x,y) $, where $\varepsilon>0$ is a~parameter. The exact definition of $d_{\varepsilon}$ is given in Section \ref{Secde}, where it is also shown that \[ d_{\varepsilon}( x,y) \simeq\varepsilon N_{\varepsilon }(x,y) , \] where $N_{\varepsilon}( x,y) $ is the smallest number of balls in a chain of balls of radii $\varepsilon$ connecting $x$ and $y$. As $\varepsilon$ goes to $0$, $d_{\varepsilon}( x,y) $ increases and can go to $\infty$ or even become equal to $\infty$. If the distance function $d$ is geodesic then $d_{\varepsilon}\equiv d$, which corresponds to the best possible chaining property. In general, the rate of growth of $d_{\varepsilon}( x,y) $ as $\varepsilon\rightarrow0$ can be regarded as a quantitative description of the chaining properties of~$d$. For this part of our work, we assume that \begin{equation}\label{ad} \frac{F( \varepsilon) }{\varepsilon}d_{\varepsilon} (x,y) \rightarrow0 \qquad\mbox{as }\varepsilon\rightarrow0, \end{equation} which allows us to define a function $\varepsilon( t,x,y) $ from the identity \begin{equation} \label{ede} \frac{F( \varepsilon) }{\varepsilon}d_{\varepsilon} (x,y) =t. \end{equation} Our second main result states the following: if (\ref{ad}) and $\mbox {(\ref{VD})} +\mbox{(\ref{H})} + \mbox{(\ref{EF})}$ are satisfied, then \begin{eqnarray} \label{twoe} p_{t}( x,y) &\asymp&\frac{C}{V(x,\mathcal{R}( t) ) }\exp\biggl( -ct\Phi\biggl( c\frac{d_{\varepsilon}( x,y) }{t}\biggr) \biggr) \\ \label{twohk} &\asymp&\frac{C}{V(x,\mathcal{R}( t) )}\exp( -cN_{\varepsilon}) , \end{eqnarray} where $\varepsilon=\varepsilon( ct,x,y) $ (Theorem \ref{Ttwo}). For example, the above hypotheses and, hence, the estimates (\ref{twoe}) and (\ref{twohk}) hold on connected p.c.f. fractals endowed with the resistance distance, where one has $V( x,r) \simeq r^{\alpha}$ and $F( r) =r^{\alpha+1}$ for some constant $\alpha$. The estimate (\ref{twohk}) on p.c.f. fractals was first proved by Hambly and Kumagai \cite{HamblyKum}. In fact, we use the argument from \cite{HamblyKum} to verify our hypotheses (see Remark \ref{ExHK}). Note that the dependence on $t,x,y$ in the estimates (\ref{twoe}) and (\ref{twohk}) is very implicit and is hidden in $\varepsilon( ct,x,y) $. One can loosely interpret the use of this function in (\ref{twoe}) and (\ref{twohk}) as follows. In order to find a most probable path for Brownian motion to go from $x$ to $y$ in time $t$, one determines the optimal size $\varepsilon=\varepsilon( ct,x,y) $ of balls and then the optimal chain of balls of radii~$\varepsilon$ connecting $x$ and $y$, and this chain provides an optimal route between~$x$ and~$y$. This phenomenon was discovered by Hambly and Kumagai in the setting of p.c.f. fractals, where they used the construction cells of the fractal instead of balls. As it follows from our results, this phenomenon is generic and independent of self-similar structures. If the distance function satisfies the \textit{chain condition} $d_{\varepsilon}\leq Cd$, which is stronger than (\ref{ad}), then one can replace $d_{\varepsilon}$ by $d$ in (\ref{twoe}) and obtain (\ref{RFi}) (Corollary \ref{Cortwo}). In fact, in this case we have the equivalence \begin{equation} \label{3+2} \mbox{(\ref{VD})} +\mbox{(\ref{H})} +\mbox{(\ref{EF})} \Leftrightarrow\mbox{(\ref{RFi})} \end{equation} (Corollary \ref{Corunbound}).
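To make the exponents in these examples explicit, here is the elementary computation in the model case $F( r) =r^{\beta}$, $\beta>1$ (an illustrative check; the constants are not optimized). For $s>0$ the supremum in the definition of $\Phi$ is attained at $r=( \beta/s) ^{1/( \beta-1) }$, which gives \[ \Phi( s) =\sup_{r>0}\biggl\{ \frac{s}{r}-\frac{1}{r^{\beta}}\biggr\} =( \beta-1) \beta^{-{\beta}/({\beta-1})}s^{{\beta}/({\beta-1})}. \] If, in addition, the chain condition $d_{\varepsilon}\leq Cd$ holds, then (\ref{ede}) becomes $\varepsilon^{\beta-1}d_{\varepsilon}( x,y) =t$, so that $\varepsilon\simeq( t/d( x,y) ) ^{1/( \beta-1) }$ and \[ N_{\varepsilon}\simeq\frac{d_{\varepsilon}( x,y) }{\varepsilon}\simeq\biggl( \frac{d^{\beta}( x,y) }{t}\biggr) ^{1/( \beta-1) }, \] so that (\ref{twohk}) recovers the sub-Gaussian bound (\ref{subu}).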
In the setting of random walks on infinite graphs, the equivalence (\ref{3+2}) was proved by the authors in \cite{GrigTelTran,GrigTelLoc}. Of course, in this case all the conditions have to be adjusted to the discrete setting. For the sake of applications (cf., e.g., \cite{BBKT}), it is desirable to replace the probabilistic condition (\ref{EF}) in all the above results by an analytic condition, namely, by a certain estimate of the capacity between two concentric balls. This type of result requires different techniques and will be treated elsewhere. \subsection{Structure of the paper and interconnection of the results} In Section~\ref{SecHH} we revise the basic properties of the heat semigroups and heat kernels and prove the criterion for the existence of the heat kernel in terms of local ultracontractivity of the heat semigroup (Theorem \ref{TptOm}). In Section \ref{Secaux} we prove two preparatory results: \begin{longlist}[(2)] \item[(1)] $\mbox{(\ref{VD})} +\mbox{(\ref{H})} +\mbox{(\ref{EF})} \Rightarrow \mbox{(\ref{FK})}$, where (\ref{FK}) stands for a certain \textit{Faber--Krahn inequality}, which provides a lower bound for the bottom eigenvalue in any bounded open set $\Omega\subset M$ via its measure (Theorem \ref{TG=>FK}). In turn,~(\ref{FK}) implies the local ultracontractivity of the heat semigroup, which by Theorem~\ref{TptOm} ensures the existence of the heat kernel. \item[(2)] $( E_{F}) $ implies the following estimate of the tail of the exit time from balls: \begin{equation} \label{Psi2} \mathbb{P}_{x}\bigl( \tau_{B( x,R) }\leq t\bigr) \leq C\exp \biggl( -t\Phi\biggl( c\frac{R}{t}\biggr) \biggr) \end{equation} (Theorem \ref{TEF}). \end{longlist} In Section \ref{SecDUE} we prove the upper estimate of the heat kernel, more precisely, the implication \[ \mbox{(\ref{VD})} + \mbox{(\ref{FK})} +\mbox{(\ref{EF})} \Rightarrow\mbox{(\ref{UE})} \] (Theorem \ref{TDUE}). The main difficulty lies already in the proof of the diagonal upper bound {\renewcommand{\theequation}{\textit{DUE}} \begin{equation}\label{DUE} p_{t}( x,x) \leq\frac{C}{V( x,\mathcal{R}( t) ) }. \end{equation}} \vspace*{-8pt} \noindent Using (\ref{FK}), we obtain first some diagonal upper bound for the Dirichlet heat kernels in balls, and then use Kigami's iteration argument and (\ref{Psi2}) to pass to (\ref{DUE}). The latter argument is borrowed from \cite{GrigHuUpper}. The full upper estimate (\ref{UE}) follows from (\ref{DUE}) and (\ref{Psi2}). In Section \ref{SecLow} we prove the lower bounds of the heat kernel. The diagonal lower bound {\renewcommand{\theequation}{\textit{DLE}} \begin{equation}\label{DLE} p_{t}( x,x) \geq\frac{C}{V( x,\mathcal{R}( t) ) } \end{equation}} \vspace*{-8pt} \noindent follows directly from (\ref{Psi2}) (Lemma \ref{LemDLE}). To obtain the near diagonal lower estimate (\ref{NLE}), one estimates from above the difference \setcounter{equation}{13} \begin{equation}\label{di} \vert p_{t}( x,x) -p_{t}( x,y) \vert, \end{equation} where $y$ is close to $x$, which requires the following two ingredients: \begin{longlist}[(2)] \item[(1)] the oscillation inequalities that are consequences of the elliptic Harnack inequality (\ref{H}) (Lemma \ref{Lemosc} and Proposition \ref{Posc}); \item[(2)] the upper estimate of the time derivative $\partial_{t}p_{t}( x,y) $ (Corollary \ref{Cdtpt}). \end{longlist} Combining them with (\ref{UE}), one obtains an upper bound for (\ref{di}), which together with (\ref{DLE}) yields (\ref{NLE}) (Theorem \ref{TNLE}).
The same method gives also the H\"{o}lder continuity of the heat kernel (Theorem~\ref{THolder}). In Section \ref{SecTwo} we prove the two-sided estimates (\ref{twoe}) and (\ref{twohk}) (Theorem~\ref{Ttwo}). For the upper bound, we basically repeat the proof of (\ref{UE}) by tracing the use of the distance function $d$ and replacing it by $d_{\varepsilon}$. The lower bound for large $d( x,y) $ is obtained from (\ref{NLE}) by a standard chaining argument using the semigroup property of the heat kernel and the chaining property of the distance function. In Section \ref{Secconv} we prove the converse Theorem \ref{Tconv}, which essentially consists of the equivalence (\ref{eq}).\vspace*{-3pt} \begin{notation} We use the letters $C,c,C^{\prime},c^{\prime}$ etc. to denote positive constants whose values are unimportant and can change at each occurrence. Note that the values of such constants in the conclusions depend on the values of the constants in the hypotheses (and, perhaps, on some other explicit parameters). In this sense, all our results are quantitative. The relation $f\simeq g$ means that $C^{-1}g\leq f\leq Cg$ for some positive constant~$C$ and for a specified range of the arguments of the functions $f$ and $g$. The relation $f\asymp g$ means that both inequalities $f\leq g$ and $f\geq g$ hold but possibly with different values of the constants $c,C$ that are involved in the expressions~$f$ and/or $g$.\vspace*{-3pt} \end{notation} \section{Heat semigroups and heat kernels}\vspace*{-3pt} \label{SecHH} \subsection{Basic setup} \label{SecBasic}Throughout the paper, we assume that $( M,d) $ is a~locally compact separable metric space, and $\mu$ is a Radon measure on $M$ with full support. As usual, denote by $L^{q}( M) $, where $q\in[ 1,+\infty] $, the Lebesgue function space with respect to the measure $\mu$, and by $\Vert\cdot\Vert_{q}$ the norm in $L^{q}( M) $. The inner product in $L^{2}( M) $ is denoted by $( \cdot,\cdot) $. All functions on $M$ are supposed to be real valued. Denote by $C_{0}( M) $ the space of all continuous functions on $M$ with compact supports, equipped with the $\sup$-norm. Let $( \mathcal{E},\mathcal{F}) $ be a Dirichlet form in $L^{2}( M) $. This means that $\mathcal{F}$ is a dense subspace of $L^{2}( M) $, and $\mathcal{E}( f,g) $ is a bilinear, nonnegative definite, closed\footnote{The form $(\mathcal{E},\mathcal{F})$ is called closed if $\mathcal{F}$ is a Hilbert space with respect to the following inner product: \[ \mathcal{E}_{1}(f,g)=\mathcal{E}(f,g)+(f,g). \]} form defined for functions $f,g\in\mathcal{F}$, which satisfies, in addition, the Markovian property.\footnote{The Markovian property (which could be also called the Beurling--Deny property) means that if $f\in\mathcal{F}$, then also the function $\hat{f}=f_{+}\wedge1$ belongs to $\mathcal{F}$ and $\mathcal{E}(\hat{f},\hat{f})\leq\mathcal{E}( f,f)$.} The Dirichlet form $( \mathcal{E},\mathcal{F}) $ is called \textit{regular} if $\mathcal{F}\cap C_{0}( M) $ is dense both in $\mathcal{F}$ and in $C_{0}( M) $. The Dirichlet form is called \textit{strongly local} if $\mathcal{E}( f,g) =0$ for all functions $f,g\in\mathcal{F}$ such that $g$ has a compact support and $f\equiv\func{const}$ in a neighborhood of $\limfunc{supp}g$. In this paper, we assume by default that $( \mathcal{E},\mathcal{F}) $ is a regular, strongly local Dirichlet form.
A general theory of Dirichlet forms can be found in \cite{FOT}.\vadjust{\goodbreak} Let $\mathcal{L}$ be the generator of $( \mathcal{E},\mathcal {F}) $; that is, $\mathcal{L}$ is a self-adjoint nonnegative definite operator in $L^{2}( M) $ with the domain $\func{dom}( \mathcal{L ) $ that is a dense subset of $\mathcal{F}$ and such that, for all f\in\func{dom}( \mathcal{L}) $ and $g\in\mathcal{F} \[ \mathcal{E}( f,g) =( \mathcal{L}f,g) . \] The associated \textit{heat semigroup} \[ P_{t}=e^{-t\mathcal{L}}, \qquad t\geq0, \] is a family of bounded self-adjoint operators in $L^{2}( M ) $. The Markovian properties allow the extension of $P_{t}$ to a bounded operator in $L^{q}( M) $, with the norm $\leq1$, for any $q\in \lbrack 1,+\infty]$. Denote by $\mathcal{B}( M) $ the class of all Borel functions on M$, by $\mathcal{B}_{b}$ the class of bounded Borel functions, by $\mathcal{ }_{+}( M) $ the class of nonnegative Borel functions and by \mathcal{B}L^{q}( M) $ the class of Borel functions that belong to $L^{q}( M) $. By \cite{FOT}, Theorem 7.2.1, for any local Dirichlet form, there exists a diffusion process $\{ \{ X_{t}\} _{t\geq0},\{ \mathbb{ }_{x}\} _{x\in M\setminus\mathcal{N}_{0}}\} $ with the initial point $x$ outside some properly exceptional set\footnote A set $\mathcal{N}\subset M$ is called properly exceptional if it is Borel, \mu( \mathcal{N}) =0$ an \[ \mathbb{P}_{x}( X_{t}\in\mathcal{N}\mbox{ for some }t\geq 0) =0 \] for all $x\in M\setminus\mathcal{N}$ (see \cite{FOT}, page 134 and Theorem 4.1.1 on page 137).} $\mathcal{N}_{0}\subset M$, which is associated with the heat semigroup $\{ P_{t}\} $ as follows: for any $f\in \mathcal{B}L^{q}( M) $, $1\leq q\leq\infty$ \begin{equation}\label{Ee} \mathbb{E}_{x}f( X_{t}) =P_{t}f( x) \qquad\mbox{for }\mu\mbox{-a.a. } x\in M. \end{equation} Consider the family of operators $\{ \mathcal{P}_{t}\} _{t\geq0}$ defined b \begin{equation}\label{PtEx} \mathcal{P}_{t}f( x) :=\mathbb{E}_{x}f (X_{t}),\qquad x\in M\setminus\mathcal{N}_{0}, \end{equation} for all functions $f\in\mathcal{B}_{b}( M) $ (if $X_{t}$ has a finite lifetime, then $f$ is to be extended by $0$ at the cemetery). The function $\mathcal{P}_{t}f( x) $ is a bounded Borel function on M\setminus\mathcal{N}_{0}$. \label{remdoweneedthisextension?} It is convenient to extend it to all $x\in M$ by settin \begin{equation}\label{PtEx0} \mathcal{P}_{t}f( x) =0,\qquad x\in\mathcal{N}_{0}, \end{equation} so that $\mathcal{P}_{t}$ can be considered as an operator in $\mathcal{B _{b}( M) $. Obviously, $\mathcal{P}_{t}f\geq0$ if $f\geq 0$ and \mathcal{P}_{t}1\leq1$. Moreover, the family $\{ \mathcal{P _{t}\} _{t\geq0}$ satisfies the semigroup identity \[ \mathcal{P}_{t}\mathcal{P}_{s}=\mathcal{P}_{t+s}. \] Indeed, if $x\in M\setminus\mathcal{N}_{0}$, then we have by the Markov property, for any $f\in\mathcal{B}_{b}( M) $ \[ \mathcal{P}_{t+s}f( x) =\mathbb{E}_{x}( f( X_{t+s}) ) =\mathbb{E}_{x}( \mathbb {E}_{X_{t}}( f( X_{s}) ) ) =\mathbb{E}_{x}( \mathcal{P _{s}f( X_{t}) ) =\mathcal{P}_{t}( \mathcal {P _{s}f) ( x), \] where we have used that $X_{t}\in M\setminus\mathcal{N}_{0}$ with $\mathbb{ }_{x}$-probability $1$. If $x\in\mathcal{N}_{0}$, then we have agai \[ \mathcal{P}_{t+s}f( x) =\mathcal{P}_{t}( \mathcal {P _{s}f) ( x), \] because the both sides are $0$. 
By considering increasing sequences of bounded functions, one extends the definition of $\mathcal{P}_{t}f$ to all $f\in\mathcal{B}_{+}( M) $ so that the defining identities (\ref{PtEx}) and (\ref{PtEx0}) remain valid also for $f\in\mathcal{B}_{+}( M) $ [allowing value +\infty$ for $\mathcal{P}_{t}f( x) $]. For a signed function f\in\mathcal{B}( M) $, define $\mathcal{P}_{t}f$ b \[ \mathcal{P}_{t}f( x) =\mathcal{P}_{t}( f_{+} ) (x) -\mathcal{P}_{t}( f_{-}) ( x) , \] provided at least one of the functions $\mathcal{P}_{t}( f_{+}) , $\mathcal{P}_{t}( f_{-}) $ is finite. Obviously, identities (\ref{PtEx}), (\ref{PtEx0}) are satisfied for such functions as well. If follows from the comparison of (\ref{Ee}) and (\ref{PtEx}) that, for all f\in\mathcal{B}L^{q}( M) $, \[ \mathcal{P}_{t}f( x) =P_{t}f( x) \qquad\mbox {for }\mu \mbox{-a.a. }x\in M. \] It particular, $\mathcal{P}_{t}f$ is finite almost everywhere. The set of the above assumptions will be referred to as the \textit{basic hypotheses}, and they are assumed by default in all parts of this paper. Sometimes we need also the following property. \begin{definition} The Dirichlet form $( \mathcal{E},\mathcal{F}) $ is called \textit{conservative} (or \textit{stochastically complete}) if $\mathcal {P _{t}1\equiv1$ for all $t>0$. \end{definition} \begin{example} Let $M$ be a connected Riemannian manifold, $d$ be the geodesic distance on $M$, $\mu$ be the Riemannian volume. Define the Sobolev spac \[ W^{1}=\{ f\in L^{2}( M) \dvtx\nabla f\in L^{2}( M)\}, \] where $\nabla f$ is the Riemannian gradient of $f$ understood in the weak sense. For all $f,g\in W^{1}$, one defines the energy for \[ \mathcal{E}( f,g) =\int_{M}( \nabla f,\nabla g) \,d\mu. \] Let $\mathcal{F}$ be the closure of $C_{0}^{\infty}( M) $ in W^{1}$. Then $( \mathcal{E},\mathcal{F}) $ is a regular strongly local Dirichlet form in $L^{2}( M) $. \label {remexamplesofDirichletformonfractals} \end{example} \subsection{The heat kernel and the transition semigroup} \label{SecHeat} \begin{definition} \label{DefHK} The \textit{heat kernel} (or the \textit{transition density}) of the transition semigroup $\{ \mathcal{P}_{t}\} $ is a function $p_{t}( x,y) $ defined for all $t>0$ and $x,y\in D:=M\setminus \mathcal{N}$, where $\mathcal{N}$ is a properly exceptional set containing \mathcal{N}_{0}$, and such that the following properties are satisfied: \begin{longlist}[(2)] \item[(1)] for any $t>0$, the function $p_{t}( x,y) $ is measurable jointly in $x,y$;\vadjust{\goodbreak} \item[(2)] for all $f\in\mathcal{B}_{+}( M) $, $t>0$ and x\in D$ \begin{equation}\label{Pt=pt} \mathcal{P}_{t}f( x) =\int_{D}p_{t}( x,y) f( y) \,d\mu( y) ; \end{equation} \item[(3)] for all $t>0$ and $x,y\in D$, \begin{equation}\label{sym} p_{t}( x,y) =p_{t}( y,x) ; \end{equation} \item[(4)] for all $t,s>0$ and $x,y\in D$ \begin{equation} \label{semi} p_{t+s}( x,y) =\int_{D}p_{t}( x,z) p_{s}( z,y) \,d\mu( z) . \end{equation} The set $D$ is called the domain of the heat kernel. \end{longlist} \end{definition} Let us extend $p_{t}( x,y) $ to all $x,y\in M$ by setting $ p_{t}( x,y) =0$ if $x$ or $y$ is outside~$D$. Then (\ref{sym}) and (\ref{semi}) hold for all $x,y\in M$, and the domain of integration in \ref{Pt=pt}) and (\ref{semi}) can be extended to $M$. 
The existence of the heat kernel allows us to extend the definition of $\mathcal{P}_{t}f$ to all measurable functions~$f$ by choosing a Borel measurable version of~$f$ and noticing that the integral~(\ref{Pt=pt}) does not change if function $f$ is changed on a set of measure~$0$. It follows from (\ref{Ee}) and (\ref{Pt=pt}) that, for any $f\in L^{2}( M) $, \begin{equation}\label{ept} P_{t}f( x) =\int_{M}p_{t}( x,y) f( y) \,d\mu ( y) \end{equation} for all $t>0$ and $\mu$-a.a. $x\in M$. A measurable function p_{t}( x,y) $ that satisfies~(\ref{ept}) is called the heat kernel of the semigroup $P_{t}$. It is well known that the heat kernel of P_{t}$ satisfies (\ref{sym}) and (\ref{semi}) although for \textit {almost} all $x,y\in M$ (see \cite{GrigHuUpper}, Section~3.3). Hence, the relation between the heat kernels of $\mathcal{P}_{t}$ and $P_{t}$ is as follows: the former is defined as a pointwise function of $x,y$, while the latter is defined almost everywhere, and the former is a pointwise realization of the latter, where the defining identities (\ref{Pt=pt}), (\ref{sym}), (\ref{ept}) must be satisfied pointwise. In this paper the heat kernel is understood exclusively in the sense of Definition~\ref{DefHK}. The existence of the heat kernel is not obvious at all and will be given a special treatment. Those who are interested in the settings where the pointwise existence of the heat kernel is known otherwise, can skip the rest of this section and go to Section \ref{Secaux}. \begin{lemma} \label{Lemuniq}Let $p_{t}$ be the heat kernel of $\mathcal{P}_{t}$. \begin{longlist}[(a)] \item[(a)] The function $p_{t}( x,\cdot) $ belongs to \mathcal{B}L^{2}( M) $ for all $t>0$ and $x\in M$. \item[(b)] For all $t>0$, $x,y\in M$, we have $p_{t}( x,y) \geq0$ and \begin{equation}\label{int1} \int_{M}p_{t}( x,z) \,d\mu( z) \leq1. \end{equation} Consequently, $p_{t}( x,\cdot) \in\mathcal{B}L^{1}( M) $.\vadjust{\goodbreak} \item[(c)] If $q_{t}$ is another heat kernel, then $p_{t}=q_{t}$ in the common part of their domains. \end{longlist} \end{lemma} \begin{pf} (a) Set $f=p_{t/2}( x,\cdot) $ and observe that, by (\ref{sym}) and (\ref{semi}) \begin{equation}\label{t2} p_{t}( x,y) =\int_{M}p_{t/2}( x,\cdot) p_{t/2}( y,\cdot) \,d\mu=\mathcal{P}_{t/2}f( y) \end{equation} for all $t>0$ and $x,y\in D$. Since $\mathcal{P}_{t/2}f$ is a Borel function, we obtain that $p_{t}( x,\cdot) $ is Borel. The latter is true also if $x\in\mathcal{N}$ since in this case $p_{t}( x,\cdot ) =0$. Setting in (\ref{t2}) $x=y$, we obtai \begin{equation}\label{pt2} \int_{M}p_{t/2}( x.\cdot) ^{2}\,d\mu=p_{t}( x,x) <\infty, \end{equation} whence $p_{t/2}( x,\cdot) \in L^{2}( M) $. (b) By (\ref{PtEx}), (\ref{PtEx0}) we have $\mathcal {P _{t}f( x) \geq0$ for all $t>0$, $x\in M$ provided $f\geq0$. Setting $f=[ p_{t}( x,\cdot) ] _{-}$, we obtai \[ 0\leq\mathcal{P}_{t}f( x) =\int_{M}p_{t}( x,\cdot ) [ p_{t}( x,\cdot) ] _{-}\,d\mu=-\int _{M}[ p_{t}( x,\cdot) ] _{-}^{2}\,d\mu, \] whence it follows that $[ p_{t}( x,\cdot) ] _{-}=0$ a.e., that is, $p_{t}( x,\cdot) \geq0$ a.e. on $M$. It follows from (\ref{t2}) that, for all $x,y\in M$ \[ p_{t}( x,y) =\int_{M}p_{t/2}( x,\cdot) p_{t/2}( y,\cdot) \,d\mu\geq0. \] Inequality (\ref{int1}) is trivial if $x\in\mathcal{N}$, and if $x\in D$ then it follows fro \[ \int_{M}p_{t}( x,\cdot) \,d\mu=\mathcal{P}_{t}1( x) \mathbb{E}_{x}1\leq1. \] (c) Let $D$ be the intersection of the domains of $p_{t}$ and $q_{t}$. 
For all $f\in\mathcal{B}_{+}( M) $ and $t>0$, $x\in D$, we have \[ \int_{D}p_{t}( x,\cdot) f\,d\mu=\mathcal{P}_{t}f( x) =\int_{D}q_{t}( x,\cdot) f\,d\mu. \] Applying this identity to function $f=p_{t}( y,\cdot) $ where y\in D$, and using (\ref{t2}), we obtai \[ p_{2t}( x,y) =\int_{D}q_{t}( x,\cdot) p_{t}( y,\cdot) \,d\mu. \] Similarly, we hav \[ q_{2t}( x,y) =\int_{D}p_{t}( y,\cdot) q_{t}( x,\cdot) \,d\mu, \] whence $p_{2t}( x,y) =q_{2t}( x,y) $. \end{pf} Following \cite{FOT}, page 67, a sequence $\{ F_{n}\} _{n=1}^{\infty }$ of subsets of $M$ will be called a \textit{regular nest} if: \begin{longlist}[(2)] \item[(1)] each $F_{n}$ is closed;\vadjust{\goodbreak} \item[(2)] $F_{n}\subset F_{n+1}$ for all $n\geq1$; \item[(3)] $\limfunc{Cap}(M\setminus F_{n})\rightarrow0$ as n\rightarrow\infty$ (see \cite{FOT} for the definition of capacity); \item[(4)] measure $\mu|_{F_{n}}$ has full support in $F_{n}$ (in the induced topology of $F_{n}$). \end{longlist} \begin{definition} \label{DefN} A set $\mathcal{N}\subset M$ is called \textit{truly exceptional} if: \begin{longlist}[(2)] \item[(1)] $\mathcal{N}$ is properly exceptional; \item[(2)] $\mathcal{N}\supset\mathcal{N}_{0}$; \item[(3)] there is a regular nest $\{ F_{n}\} $ in $M$ such that M\setminus\mathcal{N}=\tbigcup_{n=1}^{\infty}F_{n}$ and that the function \mathcal{P}_{t}f\vert_{F_{n}}$ is continuous for all $f\in \mathcal{B}L^{1}( M) $, $t>0$, and $n\in\mathbb{N}$. \end{longlist} \end{definition} The conditions under which a truly exceptional set exists, will be discussed later on. Let us mention some important consequences of the existence of such a set. \begin{lemma} \label{LemPt<fi}Let $\mathcal{N}$ be a truly exceptional set. If, for some f\in\mathcal{B}L^{1}( M) $, \mbox{$t>0$}, and for an upper semicontinuous function $\varphi\dvtx M\rightarrow(-\infty,+\infty]$, the inequalit \[ \mathcal{P}_{t}f( x) \leq\varphi( x) \] holds for $\mu$-a.a. $x\in M$, then it is true for all $x\in M\setminus\mathcal{N}$. Similarly, if $\psi\dvtx M\rightarrow\lbrack -\infty ,+\infty)$ is a lower semicontinuous function an \[ \mathcal{P}_{t}f( x) \geq\psi( x) \] holds for $\mu$-a.a. $x\in M$, then it is true for all $x\in M\setminus\mathcal{N}$. \end{lemma} \begin{pf} This proof is essentially the same as in \cite{FOT}, Theorem 2.1.2(ii). Assume that $\mathcal{P}_{t}f( x_{0}) >\varphi( x_{0}) $ for some $x_{0}\in M\setminus\mathcal{N}$. By Definition \ref{DefN}, $x_{0}$ belongs to one of the sets $F_{n}$. Since $\mathcal{P _{t}f|_{F_{n}}$ is continuous and, hence, $( \mathcal {P}_{t}f-\varphi ) |_{F_{n}}$ is lower semicontinuous, the condition $( \mathcal{P _{t}f-\varphi) ( x_{0}) >0$ implies that $( \mathcal{ }_{t}f-\varphi) ( x) >0$ for all $x$ in some open neighborhood $U$ of $x_{0}$ in $F_{n}$. Since measure $\mu$ has full support in $F_{n}$, we have $\mu( U) >0$ so that $\mathcal{P _{t}f( x) >\varphi( x) $ in a set of positive measure, that contradicts the hypothesis. The second claim follows from the first one with $\varphi=-\psi$. \end{pf} Denote by $\limfunc{esup}_{A}f$ the $\mu$-essential supremum of a function f$ on a set $A\subset M$, and by $\limfunc{einf}_{A}f$---the $\mu -essential infimum. \begin{corollary} \label{Cesup=sup1}Let $\mathcal{N}$ be a truly exceptional set. 
Then, for any $f\in\mathcal{B}L^{1}( M) $, $t>0$, and an open set $ X\subset M$ \begin{equation} \label{esi} \limfunc{esup}_{X}\mathcal{P}_{t}f=\sup_{X\setminus\mathcal {N}}\mathcal{P _{t}f \quad\mbox{and}\quad\limfunc{einf}_{X}\mathcal{P}_{t}f=\inf _{X\setminus \mathcal{N}}\mathcal{P}_{t}f. \end{equation} \end{corollary} \begin{pf} Functio \[ \varphi( x) =\cases{ \displaystyle \limfunc{esup}_{X}\mathcal{P}_{t}f, &\quad$x\in X$, \vspace*{2pt}\cr +\infty, &\quad$x\notin X$,}\vadjust{\goodbreak} \] is upper semicontinuous. Since $\mathcal{P}_{t}f( x) \leq \varphi ( x) $ for $\mu$-a.a. $x\in M$, we conclude by Lemma \ref{LemPt<fi} that this inequality is true for all $x\in M\setminus \mathcal{N}$, whenc \[ \sup_{X\setminus\mathcal{N}}\mathcal{P}_{t}f\leq\limfunc {esup}_{X}\mathcal P}_{t}f. \] The opposite inequality follows trivially from the definition of the essential supremum. The second identity in (\ref{esi}) follows from the first one by changing $f$ to~$-f$. \end{pf} Note that if $p_{t}( x,y) $ is the heat kernel with domain D=M\setminus\mathcal{N}$, then we have by (\ref{semi}) that, for all $x,y\in D$, $0<s<t$ \begin{equation}\label{ptPt} p_{t}( x,y) =\mathcal{P}_{s}f( x) , \end{equation} where $f=p_{t-s}( \cdot,y) $. Hence, if $\mathcal{N}$ is truly exceptional, then the claims of Lem\-ma~\ref{LemPt<fi} and Corollary \ref{Cesup=sup1} apply to function $p_{t}( x,y) $ in place of $ \mathcal{P}_{t}f( x) $, with any fixed $y\in D$. \begin{lemma} \label{LemBGK} Let $p_{t}(x,y)$ be the heat kernel with the domain D=M\setminus\mathcal{N}$ such that $\mathcal{N}$ is a truly exceptional set. Let $\varphi\dvtx D\times D\rightarrow[ 0,+\infty] $ be an upper semicontinuous function and $\psi\dvtx D\times D\rightarrow\lbrack 0,+\infty)$ be a lower semicontinuous function. If, for some fixed $t>0$, the following inequality: \begin{equation}\label{ptA} \psi( x,y) \leq p_{t}(x,y)\leq\varphi(x,y) \end{equation} holds for $\mu\times\mu$-almost all $x,y\in D$, then (\ref{ptA}) holds for all $x,y\in D$. \end{lemma} This lemma is a generalization of \cite{BarGrigKum}, Lemma 2.2, and the proof follows the argument in \cite{BarGrigKum}. \begin{pf*}{Proof of Lemma \ref{LemBGK}} Consider the se \[ D^{\prime}=\{ y\in D\dvt\mbox{(\ref{ptA}) holds for }\mu\mbox {-a.a. }x\in D\} . \] If $y\in D^{\prime}$ then applying Lemma \ref{LemPt<fi} to the function p_{t}( \cdot,y) $, we obtain that~(\ref{ptA}) holds for all x\in D$. Now fix $x\in D$. Since by Fubini's theorem $\mu(D\setminus D^{\prime })=0$,~(\ref{ptA}) holds for $\mu$-a.a. $y\in M$. Applying Lemma~\ref{LemPt<fi} to the function $p_{t}( x,\cdot) $, we conclude that~(\ref{ptA}) holds for all $y\in D$. \end{pf*} \begin{corollary} \label{Cesup=sup}Under the hypotheses of Lemma \ref{LemBGK}, if $X,Y$ are two open subsets of $M$ the \begin{equation}\label{esup=sup} \mathop{\limfunc{esup}_{x\in X}}_{y\in Y}p_{t}( x,y) =\mathop{\sup _{x\in X\setminus\mathcal{N}}}_{y\in Y\setminus\mathcal{N} p_{t}( x,y) \end{equation} an \begin{equation}\label{einf=inf} \mathop{\limfunc{einf}_{x\in X}}_{y\in Y}p_{t}( x,y) =\mathop{\inf _{x\in X\setminus\mathcal{N}}}_{y\in Y\setminus\mathcal{N} p_{t}( x,y) . \end{equation} \end{corollary} \begin{pf} This follows from Lemma \ref{LemBGK} with functions \[ \varphi( x,y) =\cases{ \func{const}, &\quad$x\in X,y\in Y$, \cr +\infty, &\quad otherwise, \] an \[ \psi( x,y) =\cases{ \func{const}, &\quad$x\in X,y\in Y$, \cr 0, &\quad otherwise. 
\] \upqed\end{pf} In conclusion of this section, let us state a result that ensures the existence of the heat kernel outside a truly exceptional set. \begin{theorem}[(\cite{BBCK}, Theorem 2.1)] \label{TBBCK} Assume that there is a positive left-continuous function $\gamma( t) $ such that for all $f\in L^{1}\cap L^{2}( M) $ and $t>0$ \begin{equation} \label{ultra} \Vert{}P_{t}f\Vert_{\infty}\leq\gamma( t) \Vert {}f\Vert _{1}. \end{equation} Then the transition semigroup $\mathcal{P}_{t}$ possesses the heat kernel p_{t}( x,y) $ with domain $D=M\setminus\mathcal{N}$ for some truly exceptional set $\mathcal{N}$, and $p_{t}( x,y) \leq\gamma ( t) $ for all $x,y\in D$ and $t>0$. \end{theorem} If the semigroup $\{ P_{t}\} $ satisfies (\ref{ultra}), then it is called \textit{ultracontractive} (cf.~\cite{Davbook}). It was proved in \cite{BBCK} that the ultracontractivity implies the existence of a function $ p_{t}( x,y) $ that satisfies all the requirements of Definition~\ref{DefHK} except for the joint measurability in $x,y$. Let us prove the latter so that $p_{t}( x,y) $ is indeed the heat kernel in our strict sense. Given that $p_{t}( x,y) $ satisfies conditions (2)--(4) of Definition~\ref{DefHK}, we see that the statement of Lemma~\ref {Lemuniq} remains true because the proof of that lemma does not use the joint measurability.\vspace*{1pt} In particular, for any $t>0$, $x\in D$, the function p_{t}( x,\cdot) $ is in $L^{2}( M) $. Also, the mapping $x\mapsto p_{t}( x,\cdot) $ is weakly measurable as a mapping from $D$ to $L^{2}( M) $ because for any $f\in L^{2}( M) $, the function $x\mapsto( p_{t,x},f) \mathcal{P}_{t}f(x)$ is measurable. Since $L^{2}( M) $ is separable, by Pettis's measurability theorem (see \cite{Yosida}, Chapter V,\vspace*{1pt} Section 4) the mapping $x\mapsto p_{t}( x,\cdot) $ is strongly measurable in $L^{2}( M)$. It follows that the functio \[ p_{2t}( x,y) =( p_{t}( x,\cdot) ,p_{t}( y,\cdot) ) \] is jointly measurable in $x,y\in D$ as the composition of two strongly measurable mappings $D\rightarrow L^{2}( M) $ and a continuous mapping $ f,g\mapsto( f,g)$. \subsection{Restricted heat semigroup and local ultracontractivity} Any open subset $\Omega$ of $M$ can be considered as a metric measure space $( \Omega,d,\mu) $. Let us identify\vadjust{\goodbreak} $L^{2}( \Omega ) $ as a subspace in $L^{2}( M) $ by extending functions outside~$\Omega$ by~$0$. Define~$\mathcal{F}( \Omega) $ as the closure of $\mathcal{F}\cap C_{0}( \Omega) $ in $\mathcal{F}$ so that \mathcal{F}( \Omega) $ is a~subspace of both $\mathcal {F}$ and $L^{2}( \Omega) $. Then $( \mathcal {E},\mathcal{F ( \Omega) ) $ is a regular strongly local Dirichlet form in $L^{2}( \Omega) $, which is called the restriction of $( \mathcal{E},\mathcal{F}) $ to $\Omega$. Let~$\mathcal{L}^{\Omega}$ be the generator of the form $( \mathcal{E},\mathcal{F}( \Omega ) ) $ and $P_{t}^{\Omega}=e^{-t\mathcal{L}^{\Omega}}$, $t\geq 0 $, be the \textit{restricted} heat semigroup. Define the \textit{first exit time} from $\Omega$ b \[ \tau_{\Omega}=\inf\{ t>0\dvtx X_{t}\notin\Omega\} . \] The diffusion process associated with the restricted Dirichlet form, can be canonically obtained from $\{ X_{t}\} $ by killing the latter outside $\Omega$, that is, by restricting the life time of $X_{t}$ by $\tau _{\Omega}$ (see \cite{FOT}). 
It follows that the transition operator $ \mathcal{P}_{t}^{\Omega}$ of the killed diffusion is given b \begin{equation}\label{PtOm} \mathcal{P}_{t}^{\Omega}f( x) =\mathbb{E}_{x}\bigl( \mathbf{1 _{\{ t<\tau_{\Omega}\} }f( X_{t}) \bigr) \qquad \mbox{for all }x\in\Omega\setminus\mathcal{N}_{0}, \end{equation} for all $f\in\mathcal{B}_{+}( \Omega) $. Then $\mathcal {P _{t}^{\Omega}f$ is defined for $f$ from other function classes in the same way as $\mathcal{P}_{t}$. Also, extend $\mathcal{P}_{t}^{\Omega }f( x) $ to all $x\in\Omega$ by setting it to be $0$ if $x\in \mathcal{N _{0}$. \begin{definition} We say that the semigroup $P_{t}$ is \textit{locally ultracontractive} if the restricted heat semigroup $P_{t}^{B}$ is ultracontractive for any metric ball $B$ of $( M,d) $. \end{definition} \begin{theorem} \label{TptOm}Let the semigroup $P_{t}$ be locally ultracontractive. Then the following is true. \begin{longlist}[(a)] \item[(a)] There exists a properly exceptional set $\mathcal{ }\subset M$ such that, for any open subset $\Omega\subset M$, the semigroup $\mathcal{P}_{t}^{\Omega}$ possesses the heat kernel $p_{t}^{\Omega }( x,y) $ with the domain $\Omega\setminus\mathcal{N}$. \item[(b)] If $\Omega_{1}\subset\Omega_{2}$ are open subsets of $M$, then $p_{t}^{\Omega_{1}}( x,y) \leq p_{t}^{\Omega _{2}}( x,y) $ for all $t>0$, $x,y\in\Omega_{1}\setminus \mathcal{N}$. \item[(c)] If $\{ \Omega_{k}\} _{k=1}^{\infty}$ is an increasing sequence of open subsets of $M$ and $\Omega =\tbigcup_{k}\Omega_{k}$, then $p_{t}^{\Omega_{k}}( x,y) \rightarrow p_{t}^{\Omega}( x,y) $ as $k\rightarrow \infty$ for all $t>0$, $x,y\in\Omega\setminus\mathcal{N}$. \item[(d)] Set $D=M\setminus\mathcal{N}$. Let $\varphi ( x,y) \dvtx D\times D\rightarrow[ 0,+\infty] $ be an upper semi-continuous function such that, for some open set $\Omega \subset M $ and for some $t>0$ \begin{equation}\label{ptfi} p_{t}^{\Omega}( x,y) \leq\varphi( x,y) \end{equation} for almost all $x,y\in\Omega$. Then (\ref{ptfi}) holds for all x,y\in\Omega\setminus\mathcal{N}$. \end{longlist} \end{theorem} For simplicity of notation, set $p_{t}^{\Omega}( x,y) $ to be $0$ for all $x,y$ outside $\Omega$ (which, however, does not mean the extension of the domain of $p_{t}^{\Omega}$). \begin{pf*}{Proof of Theorem \ref{TptOm}} (a) Since the metric space $( M,d) $ is separable, there is a countable family of balls that form a base. Let $\mathcal {U}$ be the family of all finite unions of such balls so that $\mathcal{U}$ is countable and any open set $\Omega\subset M$ can be represented\vadjust{\goodbreak} as an increasing union of sets of $\mathcal{U}$. Since any set $U\in \mathcal{U}$ is contained in a metric ball, the semigroup $P_{t}^{U}$ is dominated by~$P_{t}^{B}$ and, hence, is ultracontractive. By Theorem \ref{TBBCK}, there is a truly exceptional set $\mathcal{N}_{U}\subset U$ such that the $\mathcal{P _{t}^{U}$ has the heat kernel $p_{t}^{U}$ in the domain $U\setminus \mathcal N}_{U}$. Since the family $\mathcal{U}$ is countable, the se \begin{equation}\label{N=} \mathcal{N}=\tbigcup_{U\in\mathcal{U}}\mathcal{N}_{U} \end{equation} is properly exceptional. Let us first show that if $U_{1}$, $U_{2}$ are the sets from $\mathcal{U}$ and $U_{1}\subset U_{2}$, then \begin{equation} \label{pt12} p_{t}^{U_{1}}( x,y) \leq p_{t}^{U_{2}}( x,y) \qquad\mbox{for all }t>0,x,y\in U_{1}\setminus\mathcal{N}. 
\end{equation} It follows from (\ref{PtOm}) that, for any $f\in\mathcal{B}_{+}( U_{1}) $ \[ \mathcal{P}_{t}^{U_{1}}f( x) \leq\mathcal {P}_{t}^{U_{2}}f( x) \qquad\mbox{for all }t>0\mbox{ and }x\in U_{1}, \] that is \begin{equation}\label{Om12} \int_{U_{1}}p_{t}^{U_{1}}( x,\cdot) f\,d\mu\leq \int_{U_{2}}p_{t}^{U_{2}}( x,\cdot) f\,d\mu. \end{equation} Setting here $f=P_{t}^{U_{1}}( y,\cdot) $ where $y\in U_{1}\setminus\mathcal{N}$, we obtai \[ p_{2t}^{U_{1}}( x,y) \leq\int_{U_{1}}p_{t}^{U_{2}} ( x,\cdot ) p_{t}^{U_{1}}( y,\cdot) \,d\mu. \] Setting in (\ref{Om12}) $f=P_{t}^{U_{2}}( y,\cdot) $, we obtai \[ \int_{U_{1}}p_{t}^{U_{1}}( x,\cdot) P_{t}^{U_{2}}( y,\cdot ) \,d\mu\leq p_{2t}^{U_{2}}( x,y) . \] Combining the above two lines gives (\ref{pt12}). Let $\Omega$ be any open subset of $M$ and $\{ U_{n}\} _{n=1}^{\infty}$ be an increasing sequence of sets from $\mathcal{U}$ such that $\Omega=\tbigcup_{n=1}^{\infty}U_{n}$. Let us se \begin{equation}\label{ptOmlim} p_{t}^{\Omega}( x,y) =\lim_{n\rightarrow\infty }p_{t}^{U_{n}}( x,y) \qquad\mbox{for all }t>0\mbox{ and }x,y\in \Omega\setminus\mathcal{N}. \end{equation} This limit exists (finite or infinite) by the monotonicity of the sequence \{ p_{t}^{U_{n}}\hspace*{-0.2pt}( x,y) \} $. It follows from (\ref{PtOm}) that, for any $f\in\mathcal{B}_{+}( \Omega) $ \[ \mathcal{P}_{t}^{U_{n}}f( x) \uparrow\mathcal {P}_{t}^{\Omega }f( x) \qquad\mbox{for all }t>0\mbox{ and }x\in\Omega \setminus \mathcal{N}. \] By the monotone convergence theorem, we obtai \[ \mathcal{P}_{t}^{U_{n}}f( x) =\int_{\Omega }p_{t}^{U_{n}}( x,y) f( y) \,d\mu( y) \rightarrow\int _{\Omega }p_{t}^{\Omega}( x,y) f( y) \,d\mu( y) \] for all $t>0$ and $x\in\Omega\setminus\mathcal{N}$. Comparing the above two lines, we obtai \[ \mathcal{P}_{t}^{\Omega}f( x) =\int_{\Omega }p_{t}^{\Omega }( x,y) f( y) \,d\mu( y) \qquad\mbox {for all }t> \mbox{ and }x\in\Omega\setminus\mathcal{N}. \] The symmetry of $p_{t}^{\Omega}( x,y) $ is obvious from (\ref{ptOmlim}), and the semigroup property of $p_{t}^{\Omega}$ follows from that of $p_{t}^{U_{n}}$ by the monotone convergence theorem. Note that $ p_{t}^{\Omega}$ does not depend on the choice of $\{ U_{n} \} $ by the uniqueness of the heat kernel (Lemma \ref{Lemuniq}). (b) For two arbitrary open sets $\Omega_{1}\subset \Omega _{2}$ let $\{U_{n}\}_{n=1}^{\infty}$ and $\{W_{n}\}_{n=1}^{\infty}$ be increasing sequences of sets from $\mathcal{U}$ that exhaust $\Omega_{1}$ and $\Omega_{2}$, respectively. Set $V_{n}=U_{n}\cup W_{n}$ so that V_{n}\in\mathcal{U}$ and $\Omega_{2}$ is the\vspace*{1pt} increasing union of sets V_{n}$ (see Figure \ref{pic6}). Then $U_{n}\subset V_{n}$ and, hence, $ p_{t}^{U_{n}}\leq p_{t}^{V_{n}}$, which implies as $n\rightarrow\infty$ that $p_{t}^{\Omega_{1}}\leq p_{t}^{\Omega_{2}}$. \begin{figure} \includegraphics{645f01.eps} \caption{Sets $U_{n},W_{n},V_{n}$.}\label{pic6} \end{figure} (c) Let $\{ \Omega_{k}\} _{k=1}^{\infty }$ be an increasing sequence of open sets whose union is~$\Omega$. Let \{U_{n}^{( k) }\}_{n=1}^{\infty}$ be an increasing sequence of sets from $\mathcal{U}$ that exhausts~$\Omega_{k}$. As in the previous argument,\vspace*{1pt} we can replace $U_{n}^{( 2) }$ by $V_{n}^{( 2) }=U_{n}^{( 1) }\cup U_{n}^{( 2) }$ so that U_{n}^{( 1) }\subset V_{n}^{( 2) }$. Rename V_{n}^{( 2) }$ back to $U_{n}^{( 2) }$, and assume\vspace*{1pt} in the sequel that $U_{n}^{( 1) }\subset U_{n}^{( 2) }$. 
Similarly, replace $U_{n}^{( 3) }$ by $U_{n}^{( 1) }\cup U_{n}^{( 2) }\cup U_{n}^{( 3) }$ and assume in the sequel that $U_{n}^{( 2) }\subset U_{n}^{( 3) }$. Arguing by induction, we redefine the double sequence $U_{n}^{( k) }$ in the way that it is monotone increasing not only in $n$ but also in $k$. Then we claim that \[ \Omega=\tbigcup_{m=1}^{\infty}U_{m}^{( m) }. \] Indeed, if $x\in\Omega, $ then $x\in\Omega_{k}$ for some $k$ and, hence, x\in U_{n}^{( k) }$ for some $n$, which implies $x\in U_{m}^{( m) }$ for $m=\max( k,n) $. Finally, we have p_{t}^{\Omega}\geq p_{t}^{\Omega_{m}}$ an \[ p_{t}^{\Omega}=\lim_{m\rightarrow\infty}p_{t}^{U_{m}^{( m) }}\leq\lim_{m\rightarrow\infty}p^{\Omega_{m}}, \] whence it follows tha \[ p_{t}^{\Omega}=\lim_{m\rightarrow\infty}p^{\Omega_{m}}. \] (d) Let $U\in\mathcal{U}$ be subset of $\Omega$. Then the semigroup $P_{t}^{U}$ is ultracontractive and possesses the heat kernel p_{t}^{U}$ with the domain $U\setminus\mathcal{N}_{U}$ where $\mathcal{N _{U}$ is a~truly exceptional\vadjust{\goodbreak} set as in part (a). Note that \mathcal{N}_{U}\subset\mathcal{N}$. Since $p_{t}^{U}\leq p_{t}^{\Omega}$ in $U\setminus\mathcal{N}$, we obtain by hypothesis that \[ p_{t}^{U}( x,y) \leq\varphi( x,y) \] for almost all $x,y\in U$. By Lemma \ref{LemBGK}, we conclude that this inequality is true for all $x,y\in U\setminus\mathcal{N}$. Exhausting \Omega$ be a sequence of subsets $U\in\mathcal{U}$ and using (\ref {ptOmlim ), we obtain (\ref{ptfi}). \end{pf*} \section{Some preparatory results} \label{Secaux} \subsection{Green operator} \label{SecFK} A priori we assume here only the basic hypotheses. All necessary additional assumptions are explicitly stated. The main result of this section is Theorem \ref{TG=>FK}. Given an open set $\Omega\subset M$, define the Green operator $G^{\Omega}$ first for all $f\in\mathcal{B}_{+}( \Omega) $ b \begin{equation}\label{Gdef} G^{\Omega}f( x) =\int_{0}^{\infty}\mathcal {P}_{t}^{\Omega }f ( x) \,dt \end{equation} for all $x\in M\setminus\mathcal{N}_{0}$, where we admit infinite values of the integral. If $f$ $\in\mathcal{B}( \Omega) $ and $G^{\Omega }\vert f\vert<\infty, $ then $G^{\Omega}f$ is also defined by \[ G^{\Omega}f=G^{\Omega}f_{+}-G^{\Omega}f_{-} . \] \begin{lemma} We have, for any open $\Omega\subset M$ and all $f\in\mathcal {B}_{+}( \Omega) $ \begin{equation} \label{GOm} G^{\Omega}f( x) =\mathbb{E}_{x}\biggl( \int_{0}^{\tau _{\Omega }}f( X_{t}) \,dt\biggr) \end{equation} for any $x\in\Omega\setminus\mathcal{N}_{0}$. In particular \begin{equation}\label{Gtau} G^{\Omega}1( x) =\mathbb{E}_{x}\tau_{\Omega}. \end{equation} \end{lemma} \begin{pf} Indeed, integrating (\ref{PtOm}) in $t$, we obtai \begin{eqnarray*} G^{\Omega}f( x) &=&\int_{0}^{\infty}\mathcal {P}_{t}^{\Omega }f( x) \,dt \\ &=&\int_{0}^{\infty}\mathbb{E}_{x}\bigl( \mathbf{1}_{\{ t<\tau _{\Omega}\} }f( X_{t}) \bigr) \,dt \\ &=&\mathbb{E}_{x}\int_{0}^{\infty}\bigl( \mathbf{1}_{\{ t<\tau _{\Omega}\} }f( X_{t}) \bigr) \,dt \\ &=&\mathbb{E}_{x}\biggl( \int_{0}^{\tau_{\Omega}}f( X_{t}) \,dt\biggr) . \end{eqnarray*} Obviously, (\ref{Gtau}) follows from (\ref{GOm}) for $f\equiv1$. 
\end{pf} Denote by $\lambda_{\min}( \Omega) $ the bottom of the spectrum of $\mathcal{L}^{\Omega}$ in $L^{2}( \Omega) $, that is \begin{equation}\label{ladef} \lambda_{\min}( \Omega) :=\inf\func{spec}\mathcal {L}^{\Omega }=\inf_{f\in\mathcal{F}( \Omega) \setminus\{ 0\} \frac{\mathcal{E}( f,f) }{( f,f) }.\vadjust{\goodbreak} \end{equation} For any open set $\Omega\subset M$, we will consider the \textit{mean exit time} $\mathbb{E}_{x}\tau_{\Omega}$ from $\Omega$ as a function of $x\in \Omega\setminus\mathcal{N}_{0}$. Also, se \begin{equation}\label{Ebar} \widetilde{E}( \Omega) :=\limfunc{esup}_{x\in\Omega }\mathbb{E _{x}\tau_{\Omega}. \end{equation} \begin{lemma} \label{LG1-1}If $\widetilde{E}( \Omega) <\infty, $ then G^{\Omega}$ is a bounded operator on $\mathcal{B}_{b}( \Omega ), $ and it uniquely extends to each of the spaces $L^{\infty}( \Omega ) $, $L^{1}( \Omega) $, $L^{2}( \Omega ) $, with the following norm estimates \begin{eqnarray}\label{Gi-i} \Vert G^{\Omega}\Vert_{L^{\infty}\rightarrow L^{\infty}}&\leq& \widetilde{ }( \Omega) , \\[-1pt] \label{G1-1} \Vert G^{\Omega}\Vert_{L^{1}\rightarrow L^{1}}&\leq&\widetilde {E}( \Omega) , \\[-1pt] \label{G2-2} \Vert G^{\Omega}\Vert_{L^{2}\rightarrow L^{2}}&\leq&\widetilde {E}( \Omega) . \end{eqnarray} Moreover, \begin{equation}\label{lamin<} \lambda_{\min}( \Omega) ^{-1}\leq\widetilde{E}( \Omega) , \end{equation} and $G^{\Omega}$ is the inverse in $L^{2}( \Omega) $ to the operator $\mathcal{L}^{\Omega}$. \end{lemma} \begin{pf} It follows from (\ref{Gtau}) tha \begin{equation}\label{GOm1} \Vert G^{\Omega}1\Vert_{\infty}=\widetilde{E}( \Omega) , \end{equation} which implies that for any $f\in\mathcal{B}_{b}( \Omega ) $, \[ \Vert G^{\Omega}f\Vert_{\infty}\leq\widetilde{E}( \Omega ) \Vert f\Vert_{\infty}. \] Hence, $G^{\Omega}$ can be considered as a bounded operator in $L^{\infty}$ with the norm estimate (\ref{Gi-i}). Estimate (\ref{G1-1}) follows from (\ref{Gi-i}) by duality. Indeed, for any two functions $f,h\in\mathcal{B}_{+}( \Omega) $, we hav \begin{equation}\label{Gfh} \int_{\Omega}( G^{\Omega}f ) h\,d\mu=\int_{\Omega }fG^{\Omega }h \,d\mu, \end{equation} which follows from (\ref{Gdef}) and the symmetry of $\mathcal {P}_{t}^{\Omega }$. By linearity, (\ref{Gfh}) extends to all $f,h\in\mathcal {B}_{b}( \Omega) $. Then, for any $f\in C_{0}( \Omega) $, we hav \begin{eqnarray*} \Vert G^{\Omega}f\Vert_{1} &=&\sup_{h\in\mathcal{B}_{b}( \Omega ) \setminus\{ 0\} }\frac{\int_{\Omega}( G^{\Omega }f) h\,d\mu}{\Vert h\Vert_{\infty}} \\[-1pt] &=&\sup_{h\in\mathcal{B}_{b}( \Omega) \setminus \{ 0\} }\frac{\int_{\Omega}fG^{\Omega}h\,d\mu}{\Vert h\Vert _{\infty}} \\[-1pt] &\leq&\sup_{h\in\mathcal{B}_{b}( \Omega) \setminus \{ 0\} }\frac{\Vert G^{\Omega}h\Vert_{\infty}\Vert f\Vert _{1}}{\Vert h\Vert_{\infty}} \\[-1pt] &\leq&\widetilde{E}( \Omega) \Vert f\Vert_{1}, \end{eqnarray*} whence it follows that $G^{\Omega}$ uniquely extends to a bounded operator in $L^{1}$ with the norm estimate (\ref{G1-1}).\vadjust{\goodbreak} The estimate (\ref{G2-2}) [as well as a similar estimate for $ \Vert G\Vert_{L^{p}\rightarrow L^{p}}$ for any $p\in( 1,\infty ) $] follows from (\ref{Gi-i}) and (\ref{G1-1}) by the Riesz--Thorin interpolation theorem. To prove (\ref{lamin<}), let us consider the following ``cut-down'' version of the Green operator \[ G_{T}^{\Omega}f=\int_{0}^{T}\mathcal{P}_{t}^{\Omega}f \,dt, \] where $T\in(0,+\infty)$. 
The same argument as above shows that $G_{T}^{\Omega}$ can be considered as an operator in $L^{2}$ with the same norm bound \[ \Vert G_{T}^{\Omega}\Vert_{L^{2}\rightarrow L^{2}}\leq\widetilde{E}( \Omega) . \] On the other hand, using the spectral resolution $\{ E_{\lambda}\} _{\lambda\geq0}$ of the generator~$\mathcal{L}^{\Omega}$, we obtain, for any $f\in C_{0}( \Omega) $, \begin{eqnarray} \label{fiT} G_{T}^{\Omega}f &=&\int_{0}^{T}\biggl( \int_{0}^{\infty}e^{-\lambda t}\,dE_{\lambda}f\biggr) \,dt \nonumber\\ &=&\int_{0}^{\infty}\biggl( \int_{0}^{T}e^{-\lambda t}\,dt\biggr)\, dE_{\lambda}f \nonumber\\[-8pt]\\[-8pt] &=&\int_{0}^{\infty}\varphi_{T}( \lambda) \,dE_{\lambda}f \nonumber\\ &=&\varphi_{T}( \mathcal{L}^{\Omega}) f,\nonumber \end{eqnarray} where \[ \varphi_{T}( \lambda) =\int_{0}^{T}e^{-\lambda t}\,dt=\frac{1-e^{-T\lambda}}{\lambda}. \] Since $\varphi_{T}$ is a bounded function on $[0,+\infty)$, the operator $\varphi_{T}( \mathcal{L}^{\Omega}) $ is a~bounded operator in $L^{2}$. By the spectral mapping theorem, we obtain \begin{eqnarray*} \sup\varphi_{T}( \func{spec}\mathcal{L}^{\Omega}) &=&\sup \func{spec}\varphi_{T}( \mathcal{L}^{\Omega}) \\ &=&\Vert\varphi_{T}( \mathcal{L}^{\Omega}) \Vert _{L^{2}\rightarrow L^{2}} \\ &=&\Vert G_{T}^{\Omega}\Vert_{L^{2}\rightarrow L^{2}} \\ &\leq&\widetilde{E}( \Omega) . \end{eqnarray*} On the other hand, since $\varphi_{T}( \lambda) $ is decreasing in $\lambda$, \[ \sup\varphi_{T}( \func{spec}\mathcal{L}^{\Omega}) =\varphi _{T}( \lambda_{\min}( \Omega) ) , \] whence \[ \varphi_{T}( \lambda_{\min}( \Omega) ) \leq \widetilde{E}( \Omega) . \] By letting $T\rightarrow\infty$ and observing that $\varphi_{T}( \lambda) \rightarrow\frac{1}{\lambda}$, we obtain \[ \lambda_{\min}( \Omega) ^{-1}\leq\widetilde{E}( \Omega ) , \] which in particular means that $\lambda_{\min}( \Omega) >0$. Consequently, the operator $\mathcal{L}^{\Omega}$ has a bounded inverse. Passing in (\ref{fiT}) to the limit as $T\rightarrow\infty$, we obtain $G^{\Omega}=( \mathcal{L}^{\Omega}) ^{-1}$. \end{pf} \subsection{Harmonic functions and Harnack inequality} \label{SecHarnack} Let $\Omega$ be an open subset of~$M$. \begin{definition} We say that a function $u\in\mathcal{F}$ is \textit{harmonic} in $\Omega$ if \[ \mathcal{E}( u,v) =0 \qquad\mbox{for any }v\in\mathcal{F}( \Omega) . \] \end{definition} \begin{lemma} \label{LemG-G}Let $\Omega$ be an open subset of $M$ such that $\widetilde{E}( \Omega) <\infty$, and let $U$ be an open subset of $\Omega$. \begin{longlist}[(a)] \item[(a)] For any $f\in L^{2}( \Omega\setminus U) $, the function $G^{\Omega}f$ is harmonic in $U$. \item[(b)] For any $f\in L^{2}( \Omega) $, the function $G^{\Omega}f-G^{U}f$ is harmonic in $U$. \end{longlist} \end{lemma} \begin{remark} If $f\in L^{2}( \Omega) $, then $G^{U}f$ is defined as the extension of $G^{U}( f|_{U}) $ to $\Omega$ by setting it to be equal to $0$ in $\Omega\setminus U$. \end{remark} \begin{pf*}{Proof of Lemma \ref{LemG-G}} (a) Set $u=G^{\Omega}f$. To prove that $u$ is harmonic in~$U$, we need to show that $\mathcal{E}( u,v) =0$, for any $v\in \mathcal{F}( U) $. Since by Lemma~\ref{LG1-1} $G^{\Omega}=( \mathcal{L}^{\Omega}) ^{-1}$, we have $u\in\func{dom}( \mathcal{L}^{\Omega}) $. Therefore, by the definition of $\mathcal{L}^{\Omega}$, \[ \mathcal{E}( u,v) =( \mathcal{L}^{\Omega }u,v) =( f,v) =0. \] (b) Set $u=G^{\Omega}f-G^{U}f$.
Any function $v\in\mathcal{F}( U) $ can be considered as an element of $\mathcal{F}( \Omega) $ by setting it to be $0$ in $\Omega\setminus U$. Then both $u$ and $v$ are in~$\mathcal{F}( \Omega) $, whence \begin{eqnarray*} \mathcal{E}( u,v) &=&\mathcal{E}( G^{\Omega }f,v) -\mathcal{E}( G^{U}f,v) \\ &=&( f,v) _{L^{2}( \Omega) }-( f,v) _{L^{2}( U) } \\ &=&0. \end{eqnarray*} \upqed\end{pf*} Denote by \[ B( x,r) =\{ y\in M\dvtx d( x,y) <r\} \] the open metric ball of radius $r>0$ centered at a point $x\in M$, and set \[ V( x,r) =\mu( B( x,r) ) . \] That $\mu$ has full support implies $V( x,r) >0$ whenever $r>0$. Whenever we use the function $V( x,r) $, we always assume that \[ V( x,r) <\infty\qquad\mbox{for all }x\in M\mbox{ and }r>0. \] For example, this condition is automatically satisfied if all balls are precompact. However, we do not assume precompactness of all balls unless otherwise explicitly stated. \begin{definition} We say that the \textit{elliptic Harnack inequality} (\ref{condH}) holds on~$M$, if there exist constants $C>1$ and $\delta\in( 0,1) $ such that, for any ball~$B( x,r) $ in $M$ and for any function $u\in\mathcal{F}$ that is nonnegative and harmonic in~$B(x,r) $, {\renewcommand{\theequation}{$H$} \begin{equation}\label{condH} \limfunc{esup}_{B( x,\delta r) }u\leq C \mathop{\func{einf}}_{B( x,\delta r) }u . \end{equation}} \end{definition} \begin{definition} We say that the \textit{volume doubling} property (\ref{VD}) holds if there exists a constant $C$ such that, for all $x\in M$ and $r>0$,\label{condVD} {\renewcommand{\theequation}{\textit{VD}} \begin{equation}\label{VD} V( x,2r) \leq CV( x,r) . \end{equation}} \end{definition} It is known that (\ref{VD}) implies that, for all $x,y\in M$ and $0<r<R$, \setcounter{equation}{12} \begin{equation}\label{Va} \frac{V( x,R) }{V( y,r) }\leq C\biggl( \frac {R+d( x,y) }{r}\biggr) ^{\alpha} \end{equation} for some $\alpha>0$ (see \cite{GrigHuUpper}). \begin{lemma} Assume that $\mbox{(\ref{VD})} +\mbox{(\ref{condH})} $ hold. Let $\Omega $ be an open subset of~$M$ such that $\widetilde{E}( \Omega) <\infty$, and let $B=B( x,r) $ be a ball contained in $\Omega$. \begin{longlist}[(a)] \item[(a)] \label{LgOm} For any nonnegative function $\varphi\in L^{1}( \Omega\setminus B) $, \begin{equation}\label{gOm<E} \limfunc{esup}_{B( x,\delta r) }G^{\Omega}\varphi\leq C\frac{\widetilde{E}( \Omega) }{V( x,r) }\Vert\varphi\Vert_{1}. \end{equation} \item[(b)]\label{LemGB-GB}For any nonnegative function $\varphi\in L^{1}( \Omega) $, \begin{equation}\label{gBB} \limfunc{esup}_{B( x,\delta r) }( G^{\Omega }\varphi -G^{B}\varphi) \leq\frac{C\widetilde{E}( \Omega) }{V( x,r) }\Vert\varphi\Vert_{1}. \end{equation} \end{longlist} \end{lemma} \begin{pf} (a) Since inequality (\ref{gOm<E}) survives monotone increasing limits of sequences of functions $\varphi$, it suffices to prove (\ref{gOm<E}) for any nonnegative function $\varphi\in L^{1}\cap L^{2}( \Omega\setminus B) $.
Then, by Lemma \ref{LemG-G}, the function $u=G^{\Omega}\varphi$ is harmonic in $B( x,r)$. Since $u\geq0$, we can use the Harnack inequality (\ref{condH}) in the ball $B$, which yields \begin{eqnarray}\label{fi1} \limfunc{esup}_{B( x,\delta r) }u &\leq &C\limfunc{einf}_{B( x,\delta r) }u\leq\frac{C}{V( x,r ) }\Vert u\Vert_{1} \nonumber\\ &\leq&\frac{C}{V( x,r) }\Vert G^{\Omega}\Vert _{L^{1}\rightarrow L^{1}}\Vert\varphi\Vert_{1} \\ &\leq&\frac{C\widetilde{E}( \Omega) }{V( x,r) }\Vert \varphi\Vert_{1}.\nonumber \end{eqnarray} (b) Assume first that $\varphi\in L^{1}\cap L^{2}( \Omega) $. By Lemma \ref{LemG-G}, the function $u=G^{\Omega }\varphi-G^{B}\varphi$ is harmonic in $B( x,r) $. Since $u\geq 0 $, applying the argument (\ref{fi1}) to this function, we obtain (\ref{gBB}). An arbitrary nonnegative function $\varphi\in L^{1}( \Omega ) $ can be represented as a sum in $L^{1}( \Omega) $: \[ \varphi=\sum_{k=0}^{\infty}\varphi_{k}, \] where $\varphi_{k}:=( \varphi-k) _{+}\wedge1\in L^{1}\cap L^{\infty}( \Omega) $. Applying (\ref{gBB}) to each $\varphi _{k}$ and summing up, we obtain (\ref{gBB}) for $\varphi$. \end{pf} \subsection{Faber--Krahn inequality and mean exit time} \label{SecEF} A classical theorem of Faber and Krahn says that for any bounded open set $\Omega\subset\mathbb{R}^{n}$, \[ \lambda_{\min}( \Omega) \geq\lambda_{\min}(B), \] where $B$ is a ball in $\mathbb{R}^{n}$ of the same volume as $\Omega $. If the radius of $B$ is $r$, then $\lambda_{\min}( B) =\frac{c}{r^{2}}$ where $c$ is a positive constant depending only on $n$, which implies that \begin{equation}\label{FKRn} \lambda_{\min}( \Omega) \geq c\mu( \Omega ) ^{-2/n}; \end{equation} cf. \cite{Chavbook,Chavnotes}. We refer to lower estimates of $\lambda_{\min}( \Omega) $ via a function of $\mu ( \Omega ) $ as \textit{Faber--Krahn inequalities}. A more general type of a Faber--Krahn inequality holds on a complete $n$-dimensional Riemannian manifold $M$ of nonnegative Ricci curvature: for any bounded open set $\Omega\subset M$ and for any ball $B$ of radius $r$ containing~$\Omega$,\label{condFK0} \begin{equation}\label{FKrel} \lambda_{\min}( \Omega) \geq\frac{c}{r^{2}}\biggl( \frac{\mu ( B) }{\mu( \Omega) }\biggr) ^{\nu}, \end{equation} where $\nu=2/n$ and $c=c( n) >0$ (see \cite{GrigHar}). Note that (\ref{FKRn}) follows from~(\ref{FKrel}) (apart from the sharp value of the constant $c$) because in $\mathbb{R}^{n}$ we have $\mu( B) =\func{const}r^{n}$. It was proved in \cite{GrigHeat} that, on any complete Riemannian manifold, \[ \mbox{(\ref{FKrel})}\Leftrightarrow\mbox{(\ref{VD})} + \mbox{(\textit{UE})}, \] where (\textit{UE}) is here the upper bound of the heat kernel in the Li--Yau estimate (\ref{LiYau}). In Section \ref{SecDUE} we will derive a general upper bound (\ref{UEE}) from a~set of hypotheses containing a suitable version of (\ref{FKrel}). In this section, we will deduce a Faber--Krahn inequality from the main hypotheses. We fix from now on a function $F\dvtx( 0,\infty) \rightarrow ( 0,\infty) $ that is a continuous increasing bijection of $(0,\infty)$ onto itself, such that, for all $0<r\leq R$, \begin{equation}\label{Fb} C^{-1}\biggl( \frac{R}{r}\biggr) ^{\beta}\leq\frac{F( R) }{F( r) }\leq C\biggl( \frac{R}{r}\biggr) ^{\beta^{\prime}} \end{equation} for some constants $1<\beta\leq\beta^{\prime}$, $C>1$. In the sequel we will use the inverse function $\mathcal{R}=F^{-1}$.
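For instance, in the model case $F( r) =r^{\beta}$ with $\beta>1$, condition (\ref{Fb}) holds with $\beta=\beta^{\prime}$ and any $C>1$, and the inverse function is simply \[ \mathcal{R}( t) =t^{1/\beta}. \] The piecewise power function (\ref{F120}) considered in Example \ref{ExF120} below is another admissible choice, with $\beta=\beta_{1}\wedge\beta_{2}$ and $\beta^{\prime}=\beta_{1}\vee\beta_{2}$.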
It follows from (\ref{Fb}) that \begin{equation} \label{Rb} C^{-1}\biggl( \frac{T}{t}\biggr) ^{1/\beta^{\prime}}\leq\frac{\mathcal{R}( T) }{\mathcal{R}( t) }\leq C\biggl( \frac{T}{t}\biggr) ^{1/\beta} \end{equation} for all $0<t\leq T$. \begin{definition} We say that the Faber--Krahn inequality (\ref{FK}) holds if, for any ball $B$ in $M$ of radius $r$ and any open set $\Omega\subset B$,\label{condFK} {\renewcommand{\theequation}{\textit{FK}} \begin{equation}\label{FK} \lambda_{\min}( \Omega) \geq\frac{c}{F( r ) }\biggl( \frac{\mu( B) }{\mu( \Omega) }\biggr) ^{\nu} \end{equation}} \vspace*{-8pt} \noindent with some positive constants $c,\nu$. \end{definition} \begin{definition} We say that the mean exit time estimate (\ref{EF}) holds if, for all $x\in M\setminus\mathcal{N}_{0}$ and $r>0$,\label{condEF} {\renewcommand{\theequation}{${E}_{F}$} \begin{equation}\label{EFF} C^{-1}F( r) \leq\mathbb{E}_{x}\tau_{B( x,r ) }\leq CF( r) \end{equation}} \vspace*{-8pt} \noindent with some constant $C>1$. \end{definition} We denote by $({E}_{F}\mbox{$\leq$}) $ and $({E}_{F}\mbox{$\geq$}) $ the upper and lower bounds of $\mathbb{E}_{x}\tau_{B( x,r) }$ in~(\ref{EFF}), respectively. \begin{theorem} \label{TG=>FK}The hypotheses $\mbox{(\ref{VD})} +\mbox{(\ref{condH})} +({E}_{F}\mbox{$\leq$}) $ imply (\ref{FK}). \end{theorem} \begin{pf} We have by (\ref{lamin<}) and (\ref{Gtau}) \setcounter{equation}{20} \begin{equation}\label{laG} \lambda_{\min}( \Omega) ^{-1}\leq\widetilde{E}( \Omega ) =\limfunc{esup}_{x\in\Omega}G^{\Omega}1_{\Omega}. \end{equation} It will be convenient to rename $R$ to $R/2$ and let the original ball $B$ be $B( z,R/2) $ and $\Omega\subset B( z,R/2) $. Fix a point $x\in\Omega$ so that $\Omega\subset B( x,R) $, consider a numerical sequence $R_{k}=\delta^{k}R$, $k=0,1,2,\ldots,$ where $\delta $ is the parameter from (\ref{condH}), and the balls $B_{k}=B( x,R_{k}) $. We have \[ G^{\Omega}1_{\Omega}\leq G^{B_{0}}1_{\Omega}=\sum_{k=0}^{n-1}( G^{B_{k}}-G^{B_{k+1}}) 1_{\Omega}+G^{B_{n}}1_{\Omega}, \] where $n$ is to be chosen (see Figure \ref{pic3}), whence \[ \limfunc{esup}_{B_{n+1}}G^{\Omega}1_{\Omega}\leq\sum _{k=0}^{n-1}\limfunc{esup}_{B_{k+2}}( G^{B_{k}}-G^{B_{k+1}}) 1_{\Omega }+\limfunc{esup}_{B_{n}}G^{B_{n}}1_{\Omega}. \] \begin{figure} \includegraphics{645f02.eps} \caption{Balls $B_{k}$.}\label{pic3} \end{figure} Setting $V( r) =V( x,r) $ and using $\widetilde{E}( B_{k}) \leq CF( R_{k}) $, we obtain, by Lemma~\ref{LemGB-GB}, \[ \limfunc{esup}_{B_{k+2}}( G^{B_{k}}-G^{B_{k+1}}) 1_{\Omega}\leq \frac{CF( R_{k}) }{V( R_{k}) }\mu( \Omega). \] Also, by (\ref{GOm1}), \[ \limfunc{esup}_{B_{n}}G^{B_{n}}1_{\Omega}\leq\limfunc{esup}_{B_{n}}G^{B_{n}}1=\widetilde{E}( B_{n}) \leq CF( R_{n}) . \] Hence, collecting together the previous lines, we obtain \[ \limfunc{esup}_{B_{n+1}}G^{\Omega}1_{\Omega}\leq C\sum _{k=0}^{n-1}\frac{F( R_{k}) }{V( R_{k}) }\mu( \Omega ) +CF( R_{n}) . \] Choose any $\nu\in( 0,1) $ so that $\nu<\beta/\alpha $.
Using (\ref{Fb}), (\ref{Va}) and the monotonicity of $V( s) $, we obtain \begin{eqnarray*} \sum_{k=0}^{n-1}\frac{F( R_{k}) }{V( R_{k}) } &=&\frac{F( R) }{V( R_{n}) ^{1-\nu}V( R) ^{\nu}}\sum_{k=0}^{n-1}\frac{F( R_{k}) }{F( R) }\biggl( \frac{V( R) }{V( R_{k}) }\biggr) ^{\nu}\biggl( \frac{V( R_{n}) }{V( R_{k}) }\biggr) ^{1-\nu} \\ &\leq&\frac{CF( R) }{V( R_{n}) ^{1-\nu }V( R) ^{\nu}}\sum_{k=0}^{n-1}\biggl( \frac{R_{k}}{R}\biggr) ^{\beta }\biggl( \frac{R}{R_{k}}\biggr) ^{\alpha\nu} \\ &=&\frac{CF( R) }{V( R_{n}) ^{1-\nu}V ( R) ^{\nu}}\sum_{k=0}^{n-1}\delta^{k( \beta-\alpha\nu) } \\ &\leq&\frac{CF( R) }{V( R_{n}) ^{1-\nu }V( R) ^{\nu}}. \end{eqnarray*} Now choose $n$ from the condition \[ V( R_{n+1}) <\mu( \Omega) \leq V( R_{n}), \] and set $r=R_{n}$. We then obtain \begin{eqnarray} \label{esupG2} \limfunc{esup}_{B( x,\delta r) }G^{\Omega}1_{\Omega} &\leq& \frac{CF( R) }{V( r) ^{1-\nu}V( R) ^{\nu}}\mu( \Omega) +CF( r) \nonumber\\[-8pt]\\[-8pt] &\leq&CF( R) \biggl( \frac{V( r) }{V( R) }\biggr) ^{\nu}+CF( r) .\nonumber \end{eqnarray} Using again (\ref{Va}), (\ref{Fb}) and $\alpha\nu<\beta$, we obtain \[ \frac{F( r) }{F( R) }\leq C\biggl( \frac {r}{R}\biggr) ^{\beta}\leq C\biggl( \frac{r}{R}\biggr) ^{\alpha\nu}\leq C\biggl( \frac {V( r) }{V( R) }\biggr) ^{\nu}, \] which implies that the second term in (\ref{esupG2}) can be absorbed by the first one, thus giving \[ \limfunc{esup}_{B( x,\delta r) }G^{\Omega}1_{\Omega }\leq CF( R) \biggl( \frac{V( r) }{V( R) }\biggr) ^{\nu}\leq CF( R) \biggl( \frac{\mu( \Omega ) }{V( R) }\biggr) ^{\nu}. \] Since the point $x\in\Omega$ was arbitrary, covering $\Omega$ by a countable family of balls like $B( x,\delta r) $, we obtain \[ \limfunc{esup}_{\Omega}G^{\Omega}1_{\Omega}\leq CF( R) \biggl( \frac{\mu( \Omega) }{V( R) }\biggr) ^{\nu}, \] which together with (\ref{laG}) finishes the proof. \end{pf} \subsection{Estimates of the exit time} Our main result in this section is Theorem \ref{TEF} saying that the condition (\ref{EFF}) implies a certain upper bound for the tail $\mathbb{P}_{x}( \tau_{B}\leq t) $ of the exit time from balls. Results of this type go back to Barlow \cite{Barlow}, Theorem~3.11. Here we give a self-contained proof in the present setting, which is based on the ideas of \cite{Barlow}. An alternative analytic approach can be found in \cite{GrigHuUpper}. For any open set $\Omega\subset M$, set \begin{equation}\label{Ebarsup} \overline{E}( \Omega) =\sup_{x\in\Omega\setminus\mathcal{N}_{0}}\mathbb{E}_{x}\tau_{\Omega}. \end{equation} \begin{lemma} \label{lTtail}For any open $\Omega\subset M$ such that $\overline{E}( \Omega) <\infty$, we have, for all $t>0$ and $x\in\Omega \setminus \mathcal{N}_{0}$, \begin{equation}\label{Px<} \mathbb{P}_{x}( \tau_{\Omega}<t) \leq1-\frac{\mathbb{E}_{x}( \tau_{\Omega}) }{\overline{E}( \Omega ) }+\frac{t}{\overline{E}( \Omega) }. \end{equation} \end{lemma} \begin{pf} Denote $\tau=\tau_{\Omega}$, and observe that \[ \tau\leq t+( \tau-t) \mathbf{1}_{\{ \tau\geq t\} }=t+( \tau\circ\Theta_{t}) \mathbf{1}_{\{ \tau \geq t\} }, \] where $\Theta_{t}$ is the time shift of trajectories.
Using the Markov property, we obtain, for any $x\in\Omega\setminus\mathcal{N}_{0}$, \[ \mathbb{E}_{x}\tau\leq t+\mathbb{E}_{x}\bigl( ( \tau\circ \Theta _{t}) \mathbf{1}_{\{ \tau\geq t\} }\bigr) =t+\mathbb{E}_{x}\bigl( \mathbb{E}_{X_{t}}( \tau) \mathbf{1}_{ \{ \tau \geq t\} }\bigr), \] whence \[ \mathbb{E}_{x}\tau\leq t+\mathbb{P}_{x}( \tau\geq t) \sup_{y\in \Omega\setminus\mathcal{N}_{0}}\mathbb{E}_{y}\tau=t+\mathbb{P}_{x}( \tau\geq t) \overline{E}( \Omega) \] (see Figure \ref{pic7}), \begin{figure} \includegraphics{645f03.eps} \caption{Illustration to the proof of Lemma \protect\ref{lTtail}.}\label{pic7} \end{figure} and (\ref{Px<}) follows. \end{pf} \begin{lemma} \label{lE<1-e}Assume that the condition (\ref{EFF}) is satisfied. Then there are constants $\varepsilon,\sigma>0$ such that, for all $x\in M\setminus\mathcal{N}_{0}$, $R>0$, and $\lambda\geq\frac{\sigma}{F( R) }$, \begin{equation}\label{Exe} \mathbb{E}_{x}\bigl( e^{-\lambda\tau_{B( x,R) }} \bigr) \leq 1-\varepsilon. \end{equation} \end{lemma} \begin{pf} Denoting $B=B( x,R) $ and using Lemma \ref{lTtail}, we have, for any \mbox{$t>0$}, \begin{eqnarray*} \mathbb{E}_{x}( e^{-\lambda\tau_{B}}) &=&\mathbb{E}_{x}\bigl( e^{-\lambda\tau_{B}}\mathbf{1}_{\{ \tau_{B}<t\} }\bigr) +\mathbb{E}_{x}\bigl( e^{-\lambda\tau_{B}}\mathbf{1}_{\{ \tau _{B}\geq t\} }\bigr) \\ &\leq&\mathbb{P}_{x}( \tau_{B}<t) +e^{-\lambda t} \\ &\leq&1-\frac{\mathbb{E}_{x}\tau_{B}}{\overline{E}( B ) }+\frac{t}{\overline{E}( B) }+e^{-\lambda t}. \end{eqnarray*} The condition (\ref{EFF}) implies that \[ \overline{E}( B) =\sup_{z\in B( x,R) \setminus \mathcal{N}_{0}}\mathbb{E}_{z}\tau_{B( x,R) }\leq\sup _{z\in M\setminus\mathcal{N}_{0}}\mathbb{E}_{z}\tau_{B( z,2R) }\leq CF( 2R) , \] whence \begin{equation}\label{EbarF} \overline{E}( B) \leq C\mathbb{E}_{x}\tau_{B}. \end{equation} Using these two estimates of $\overline{E}( B) $, we obtain \[ \mathbb{E}_{x}( e^{-\lambda\tau_{B}}) \leq1-\frac {1}{C}+\frac{Ct}{F( R) }+e^{-\lambda t}. \] Setting $\varepsilon=\frac{1}{3C}$ and choosing $t=\frac{\varepsilon }{C}F( R) $, we obtain \[ \mathbb{E}_{x}( e^{-\lambda\tau_{B}}) \leq 1-3\varepsilon +\varepsilon+e^{-\lambda t}. \] If also $e^{-\lambda t}\leq\varepsilon$, then we obtain (\ref{Exe}). Clearly, this condition will be satisfied provided \[ \lambda\geq\frac{\log({1/\varepsilon})}{t}=\frac{({C}/{\varepsilon})\log({1/\varepsilon})}{F( R) }, \] which finishes the proof. \end{pf} \begin{lemma} \label{pf<exp}Assume that the condition (\ref{EFF}) is satisfied. Then there exists a constant $\gamma>0$ such that, for all precompact balls $B( x,R) $ with $x\in M\setminus\mathcal{N}_{0}$ and for all $\lambda>0$, \begin{equation}\label{Ex0} \mathbb{E}_{x}\bigl( e^{-\lambda\tau_{B( x,R) }} \bigr) \leq C\exp\biggl( -\gamma\frac{R}{\mathcal{R}( 1/\lambda)}\biggr), \end{equation} where $\mathcal{R}=F^{-1}$. \end{lemma} \begin{pf} Rename the center $x$ of the ball to $z$ so that the letter $x$ will be used to denote a variable point. Fix some $0<r<R$ to be specified later, and set $n=[ \frac{R}{r}] $. Set also $\tau=\tau_{B( z,R) }$, \[ u( x) =\mathbb{E}_{x}( e^{-\lambda\tau}) \] and \[ m_{k}=\sup_{\overline{B}( z,kr) \setminus\mathcal{N}_{0}}u, \] where $k=1,2,\ldots,n$. Note that all $m_{k}$ are bounded by $1$. Choose $0<\varepsilon^{\prime}<\varepsilon$ where $\varepsilon$ is the constant from Lemma \ref{lE<1-e}, and let $x_{k}$ be a point in $\overline{B}( z,kr) \setminus\mathcal{N}_{0}$ for which \[ ( 1-\varepsilon^{\prime}) m_{k}\leq u( x_{k} ) \leq m_{k}.
\] Fix $k\leq n-1$, observe that \[ B( x_{k},r) \subset B\bigl( z,( k+1) r \bigr) \subset B( z,R) \] and consider the following function in $B( x_{k},r) $: \[ v_{k}( x) =\mathbb{E}_{x}( e^{-\lambda\tau _{k}}) , \] where $\tau_{k}=\tau_{B( x_{k},r) }$ (see Figure \ref{pic4}). Since the ball $B( x_{k},r) $ is precompact, we have $X_{\tau _{k}}\in\overline{B}( x_{k},r) $ (while for noncompact balls the exit point could have been at the cemetery). \begin{figure} \includegraphics{645f04.eps} \caption{Exit times from $B( x_{k},r) $ and $B( z,R) $.}\label{pic4} \end{figure} Let us show that, for all $x\in B( x_{k},r) \setminus \mathcal{N}_{0}$, \begin{equation}\label{u<uu} u( x) \leq v_{k}( x) \sup_{\overline {B}( x_{k},r) \setminus\mathcal{N}_{0}}u. \end{equation} Indeed, we have by the strong Markov property \begin{eqnarray*} u( x) &=&\mathbb{E}_{x}\bigl( e^{-\lambda\tau_{k}}e^{-\lambda( \tau -\tau _{k}) }\bigr) \\ &=&\mathbb{E}_{x}\bigl( e^{-\lambda\tau_{k}}( e^{-\lambda \tau}\circ \Theta_{\tau_{k}}) \bigr) \\ &=&\mathbb{E}_{x}( e^{-\lambda\tau_{k}}\mathbb{E}_{X_{\tau _{k}}}( e^{-\lambda\tau}) ) \\ &=&\mathbb{E}_{x}( e^{-\lambda\tau_{k}}u( X_{\tau _{k}}) ) \\ &\leq&\mathbb{E}_{x}( e^{-\lambda\tau_{k}}) \sup _{\overline{B}( x_{k},r) \setminus\mathcal{N}_{0}}u, \end{eqnarray*} which proves (\ref{u<uu}). It follows from (\ref{u<uu}) that \[ u( x_{k}) \leq v_{k}( x_{k}) \sup_{\overline {B}( z,( k+1) r) \setminus\mathcal{N}_{0}}u=v_{k}( x_{k}) m_{k+1} , \] whence \[ ( 1-\varepsilon^{\prime}) m_{k}\leq v_{k}( x_{k}) m_{k+1}. \] By Lemma \ref{lE<1-e}, if \begin{equation}\label{lar} \lambda\geq\frac{\sigma}{F( r) } , \end{equation} then $v_{k}( x_{k}) \leq1-\varepsilon$. Therefore, under hypothesis (\ref{lar}), we have \[ ( 1-\varepsilon^{\prime}) m_{k}\leq( 1-\varepsilon ) m_{k+1}, \] whence it follows by iteration that \begin{equation}\label{n} u( z) \leq m_{1}\leq\biggl( \frac{1-\varepsilon }{1-\varepsilon ^{\prime}}\biggr) ^{n-1}m_{n}\leq C\exp\biggl( -c\frac{R}{r}\biggr) , \end{equation} where in the last inequality we have used that $n\geq\frac{R}{r}-1$ and $c:=\log\frac{1-\varepsilon^{\prime}}{1-\varepsilon}>0$. Condition (\ref{lar}) can be satisfied by choosing \begin{equation} \label{n=} r=\mathcal{R}\biggl( \frac{\sigma}{\lambda}\biggr) . \end{equation} This value of $r$ is legitimate only if $r<R$, that is, if \begin{equation} \label{R>} R>\mathcal{R}\biggl( \frac{\sigma}{\lambda}\biggr) . \end{equation} If (\ref{R>}) is not fulfilled, then (\ref{Ex0}) is trivially satisfied by choosing the constant $C$ large enough. Assuming that (\ref{R>}) is satisfied and defining $r$ by~(\ref{n=}), we obtain from (\ref{n}) that \[ u( z) \leq C\exp\biggl( -c\frac{R}{\mathcal{R}( \sigma /\lambda) }\biggr) , \] whence (\ref{Ex0}) follows. \end{pf} \begin{theorem} \label{TEF}Assume that (\ref{EF}) holds. Then, for any precompact ball~$B( x,R) $ with $x\in M\setminus\mathcal{N}_{0}$ and for any $t>0$, \begin{equation} \label{Psi} \mathbb{P}_{x}\bigl( \tau_{B( x,R) }\leq t\bigr) \leq C\exp ( -\Phi( \gamma R,t) ), \end{equation} where $\gamma>0$ is the constant from Lemma \ref{pf<exp} and \begin{equation} \label{Fidef} \Phi( R,t) =\sup_{r>0}\biggl\{ \frac{R}{r}-\frac {t}{F( r) }\biggr\}.
\end{equation} \end{theorem} Changing the variable $r$ in (\ref{Fidef}), we obtain the following equivalent definitions of $\Phi$: \begin{equation}\label{Fidef2} \Phi( R,t) =\sup_{\xi>0}\biggl\{ \frac{R}{\mathcal{R}( \xi ) }-\frac{t}{\xi}\biggr\} =\sup_{\lambda>0}\biggl\{ \frac{R}{\mathcal{R}( 1/\lambda) }-\lambda t\biggr\} , \end{equation} where $\mathcal{R}=F^{-1}$. \begin{pf*}{Proof of Theorem \ref{TEF}} Denoting $B=B( x,R) $ and using Lemma~\ref{pf<exp}, we obtain that, for any $\lambda>0$, \begin{eqnarray} \label{3a} \mathbb{P}_{x}( \tau_{B}\leq t) &=&\mathbb{P}_{x}( e^{-\lambda\tau_{B}}\geq e^{-\lambda t}) \nonumber\\ &\leq&e^{\lambda t}\mathbb{E}_{x}( e^{-\lambda\tau_{B}}) \\ &\leq&C\exp\biggl( -\gamma\frac{R}{\mathcal{R}( 1/\lambda ) }+\lambda t\biggr) .\nonumber \end{eqnarray} Taking the supremum in $\lambda$ and using (\ref{Fidef2}), we obtain (\ref{Psi}). \end{pf*} \begin{remark} \label{RemFi} It is clear from (\ref{Fidef}) that the function $\Phi ( R,t) $ is increasing in $R$ and decreasing in $t$. Also, we have, for any constants $a,b>0$, \begin{equation}\label{Fiab} \Phi( aR,bt) =ab\Phi\biggl( \frac{R}{b}, \frac{t}{a}\biggr) . \end{equation} In particular, it follows that \begin{equation} \label{FiFi} \Phi( R,t) =t\Phi\biggl( \frac{R}{t},1\biggr) =t\Phi \biggl(\frac{R}{t}\biggr) , \end{equation} where \begin{equation} \label{Fidef1} \Phi( s) :=\Phi( s,1) =\sup_{r>0}\biggl\{ \frac{s}{r}-\frac{1}{F( r) }\biggr\}. \end{equation} Hence, (\ref{Psi}) can be written also in the form \begin{equation} \label{Psi0} \mathbb{P}_{x}\bigl( \tau_{B( x,R) }\leq t\bigr) \leq C\exp \biggl( -t\Phi\biggl( \gamma\frac{R}{t}\biggr) \biggr) . \end{equation} Clearly, $\Phi( 0) =0$. Let us show that $0<\Phi( s) <\infty$ for all $s>0$. Since \[ \lim_{r\rightarrow\infty}\biggl( \frac{s}{r}-\frac{1}{F( r) }\biggr) =0, \] we see from (\ref{Fidef1}) that $\Phi( s) \geq0$. It follows from (\ref{Fb}) and $\beta>1$ that \begin{equation} \label{Frr} \lim_{r\rightarrow0}\frac{r}{F( r) }=\infty \quad\mbox{and}\quad \lim_{r\rightarrow+\infty}\frac{r}{F( r) }=0. \end{equation} Given $s>0$, choose $r$ so big that $\frac{r}{F( r) }<s$ [such $r$ exists by the second condition in (\ref{Frr})]. Then \[ \Phi( s) \geq\frac{s}{r}-\frac{1}{F( r) }>0. \] In order to prove that $\Phi( s) <\infty$, it suffices to show that \[ \lim_{r\rightarrow0}\biggl( \frac{s}{r}-\frac{1}{F( r) }\biggr) \leq0. \] Indeed, if $r$ is sufficiently small, then by the first condition in (\ref{Frr}), $\frac{r}{F( r) }>s$ whence $\frac{s}{r}<\frac{1}{F( r) }$.\vspace*{2pt} Another useful property of the function $\Phi( s) $ is the inequality \begin{equation} \label{Fi2s} \Phi( as) \geq a\Phi( s) \qquad\mbox{for all }s\geq0 \mbox{ and }a\geq1. \end{equation} Indeed, we have for any $r>0$ \[ \frac{as}{r}-\frac{1}{F( r) }\geq a\biggl( \frac {s}{r}-\frac{1}{F( r) }\biggr) , \] whence (\ref{Fi2s}) follows by taking $\sup$ in $r$. \end{remark} \begin{example} If $F( r) $ is differentiable, then the supremum in (\ref{Fidef1}) is attained at the value of $r$ that solves the equation \[ \frac{r^{2}F^{\prime}( r) }{F^{2}( r) }=s. \] For example, if $F( r) =r^{\beta}$, then we obtain $r=( \frac{\beta}{s}) ^{{1}/({\beta-1})}$, whence $\Phi( s ) =cs^{{\beta}/({\beta-1})}$ and \[ \Phi( R,t) =c\biggl( \frac{R^{\beta}}{t}\biggr) ^{{1}/({\beta -1})}.
\] Consequently, (\ref{Psi}) becomes \[ \mathbb{P}_{x}\bigl( \tau_{B( x,R) }\leq t\bigr) \leq C\exp \biggl( -c\biggl( \frac{R^{\beta}}{t}\biggr) ^{{1}/({\beta -1})}\biggr) . \] \end{example} \begin{example} \label{ExF120} Consider the following example of the function $F$: \begin{equation}\label{F120} F( r) =\cases{ r^{\beta_{1}},&\quad$r<1$, \cr r^{\beta_{2}},&\quad$r\geq1$,} \end{equation} where $\beta_{1},\beta_{2}>1$. It is easy to see that (\ref{Fb}) is satisfied with $\beta=\beta_{1}\wedge\beta_{2}$ and $\beta^{\prime }=\beta_{1}\vee\beta_{2}$. Similarly to the previous example, one obtains that \begin{equation} \label{Fi120} \Phi( s) \simeq\cases{ s^{{\beta_{1}}/({\beta_{1}-1})}, &\quad$s>1$, \cr s^{{\beta_{2}}/({\beta_{2}-1})}, &\quad$s\leq1$,} \end{equation} so that (\ref{Psi}) becomes \[ \mathbb{P}_{x}\bigl( \tau_{B( x,R) }\leq t\bigr) \leq C\cases{ \displaystyle \exp\biggl( -c\biggl( \frac{R^{\beta_{1}}}{t}\biggr) ^{{1}/({\beta_{1}-1})}\biggr) , &\quad$t<R$, \vspace*{2pt}\cr \displaystyle \exp\biggl( -c\biggl( \frac{R^{\beta_{2}}}{t}\biggr) ^{{1}/({\beta_{2}-1})}\biggr) , &\quad$t\geq R$.} \] \end{example} \begin{lemma} \label{LemtiFi}The function $\Phi( R,t) $ satisfies the following inequality: \begin{equation} \label{tFi} \Phi( R,t) \geq c\min\biggl\{ \biggl( \frac{F( R) }{t}\biggr) ^{{1}/({\beta^{\prime}-1})},\biggl( \frac{F( R) }{t}\biggr) ^{{1}/({\beta-1})}\biggr\} \end{equation} for all $R,t>0$. \end{lemma} \begin{pf} By (\ref{Fidef}), we have, for any $r>0$, \[ \Phi( R,t) \geq\frac{R}{r}-\frac{t}{F( r) }. \] We claim that there exists $r>0$ such that \begin{equation} \label{rt} \frac{t}{F( r) }=\frac{1}{2}\frac{R}{r}. \end{equation} Indeed, the function $\frac{F( r) }{r}$ is continuous on $( 0,+\infty) $, tends to $0$ as $r\rightarrow0$ and to $\infty$ as $r\rightarrow\infty$, so that $\frac{F( r) }{r}$ takes all\vspace*{1pt} positive values, whence the claim follows. With the value of $r$ as in (\ref{rt}), we have \begin{equation} \label{lati} \Phi( R,t) \geq\frac{t}{F( r) }. \end{equation} If $r\leq R$, then using the left-hand side inequality of (\ref{Fb}), we obtain \[ \frac{R}{r}\geq c\biggl( \frac{F( R) }{F( r ) }\biggr) ^{1/\beta}, \] which together with (\ref{rt}) yields \[ F( r) \leq C\biggl( \frac{t^{\beta}}{F( R) }\biggr) ^{{1}/({\beta-1})}. \] Substituting into (\ref{lati}), we obtain \[ \Phi( R,t) \geq c\biggl( \frac{F( R) }{t}\biggr) ^{{1}/({\beta-1})}. \] Similarly, if $r>R$, then using the right-hand side inequality in (\ref{Fb}) we obtain \[ \frac{R}{r}\geq c\biggl( \frac{F( R) }{F( r ) }\biggr) ^{1/\beta^{\prime}}, \] whence it follows that \[ \Phi( R,t) \geq c\biggl( \frac{F( R) }{t}\biggr) ^{{1}/({\beta^{\prime}-1})}. \] \upqed\end{pf} \begin{corollary} \label{CEF}Under the hypotheses of Theorem \ref{TEF}, we have, for any $x\in M\setminus\mathcal{N}_{0}$, $R>0$, $t>0$, \begin{equation} \label{Ptau} \mathbb{P}_{x}\bigl( \tau_{B( x,R) }\leq t\bigr) \leq C\exp \biggl( -c\biggl( \frac{F( R) }{t}\biggr) ^{{1}/({\beta^{\prime }-1})}\biggr) . \end{equation} \end{corollary} \begin{pf} Indeed, if $\frac{F( R) }{t}\geq1$, then (\ref{Ptau}) follows from Theorem \ref{TEF}, Lemma~\ref{LemtiFi} and (\ref{Fb}). If $\frac{F( R) }{t}<1$, then (\ref{Ptau}) is trivial. \end{pf} \section{Upper bounds of heat kernel} \label{SecDUE} The following result will be used in the proof of Theorem \ref{TDUE} below.
\begin{proposition}[(\cite{GrigHuUpper}, Lemma 5.5)] \label{Pdirhk} Let $U$ be an open subset of $M$, and assume that, for any nonempty open set $\Omega \subset U$, \[ \lambda_{\min}( \Omega) \geq a\mu( \Omega ) ^{-\nu} \] for some positive constants $a,\nu$. Then the semigroup $\{P_{t}^{U}\} $ is ultracontractive with the following estimate: \begin{equation} \label{Uultra} \Vert{}P_{t}^{U}f\Vert_{\infty}\leq C( at) ^{-1/\nu }\Vert {}f\Vert_{1} \end{equation} for any $f\in L^{1}( U) $. \end{proposition} The next theorem provides pointwise upper bounds for the heat kernel. \begin{theorem} \label{TDUE}\label{TUE}If the conditions $\mbox{(\ref{VD})} + \mbox{(\ref{FK})} +\mbox{(\ref{EFF})}$ are satisfied and all balls are precompact, then the heat kernel exists with the domain $M\setminus\mathcal{N}$ for some properly exceptional set $\mathcal{N}$, and satisfies the upper bound\label{condUE} {\renewcommand{\theequation}{\textit{UE}} \begin{equation}\label{UEE} p_{t}( x,y) \leq\frac{C}{V( x,\mathcal{R}( t))}\exp\biggl( -\frac{1}{2}\Phi( cd( x,y) ,t)\biggr) \end{equation}} \vspace*{-8pt} \noindent for all $t>0$ and $x,y\in M\setminus\mathcal{N}$, where $\mathcal{R}=F^{-1}$ and $\Phi$ is defined by (\ref{Fidef}). \end{theorem} \begin{remark} As it follows from Theorem \ref{TG=>FK}, the hypotheses $\mbox{(\ref{VD})} + \mbox{(\ref{FK})} + \mbox{(\ref{EFF})}$ here can be replaced by $\mbox{(\ref{VD})} +\mbox{(\ref{condH})} + \mbox{(\ref{EFF})}$. Also, using (\ref{FiFi}), one can write (\ref{UEE}) in the form \[ p_{t}( x,y) \leq\frac{C}{V( x,\mathcal{R}( t) ) }\exp\biggl( -\frac{1}{2}t\Phi\biggl( c\frac{d( x,y) }{t}\biggr) \biggr) \] as it was stated in the \hyperref[remmorepictures]{Introduction}. \end{remark} \begin{remark} A version of Theorem \ref{TDUE} was proved by Kigami \cite{KigamiNash} under the additional assumptions that the heat kernel is a priori continuous and ultracontractive, and using instead of (\ref{FK}) a local Nash inequality. In the case $F( r) =r^{\beta}$, another version of Theorem \ref{TDUE} was proved in \cite{GrigHuUpper}, where the upper bound (\ref{UEE}) was understood for \textit{almost all} $x,y$. The proof below uses a combination of techniques from \cite{GrigHuUpper} and \cite{KigamiNash}. \end{remark} \begin{example} If the function $F( r) $ is given by (\ref{F120}) as in Example \ref{ExF120}, then $\Phi( s) $ is given by (\ref{Fi120}) and (\ref{UEE}) becomes \[ p_{t}( x,y) \leq\frac{C}{V( x,\mathcal{R}( t) ) }\cases{ \displaystyle \exp\biggl( -c\biggl( \frac{r^{\beta_{1}}}{t}\biggr) ^{{1}/({\beta_{1}-1})}\biggr) , &\quad$t<r$, \vspace*{2pt}\cr \displaystyle \exp\biggl( -c\biggl( \frac{r^{\beta_{2}}}{t}\biggr) ^{{1}/({\beta_{2}-1})}\biggr) , &\quad$t\geq r$,} \] where $r=d( x,y) $. \end{example} \begin{pf*}{Proof of Theorem \protect\ref{TDUE}} The hypothesis (\ref{FK}) can be stated as follows: for any ball $B=B( x,r) $ where $x\in M$ and $r>0$, and for any nonempty open set $\Omega\subset B$, we have \setcounter{equation}{1} \begin{equation} \label{laa} \lambda_{\min}( \Omega) \geq a( B) \mu ( \Omega) ^{-\nu}, \end{equation} where \begin{equation}\label{aB} a( B) =\frac{c}{F( r) }\mu( B) ^{\nu} \end{equation} and $\nu,c$ are positive constants.
Hence, (\ref{FK}) implies by Proposition \ref{Pdirhk} that \begin{equation} \label{Bultra} \Vert{}P_{t}^{B}\Vert_{L^{1}\rightarrow L^{\infty}}\leq\frac {C}{( a( B) t) ^{1/\nu}}. \end{equation} In particular, the semigroup $\{ P_{t}^{B}\} $ is ultracontractive and $\{ P_{t}\} $ is locally ultracontractive. By Theorem \ref{TptOm}, there exists a properly exceptional set $\mathcal{N}\subset M$ (containing $\mathcal{N}_{0}$) such that, for any open subset $\Omega\subset M$, the semigroup $\mathcal{P}_{t}^{\Omega}$ possesses the heat kernel $p_{t}^{\Omega}( x,y) $ with the\vspace*{1pt} domain $\Omega \setminus\mathcal{N}$. Fix this set $\mathcal{N}$ for what follows. By Theorem \ref{TBBCK}, (\ref{aB}) and (\ref{Bultra}) imply the following estimate: \begin{equation} \label{ptB} p_{t}^{B}( x,y) \leq\frac{C}{( a( B) t) ^{1/\nu}}=\frac{C}{\mu( B) }\biggl( \frac{F( r) }{t}\biggr) ^{1/\nu} \end{equation} for any ball $B$ of radius $r$, and for all $t>0$, $x,y\in B\setminus\mathcal{N}$. Our next step is to prove the on-diagonal estimate {\renewcommand{\theequation}{\textit{DUE}} \begin{equation}\label{DUE} p_{t}( x,x) \leq\frac{C}{V( x,\mathcal{R}( t) ) } \end{equation}} \vspace*{-8pt} \noindent for all $x\in M\setminus\mathcal{N}$ and $t>0$. To understand the difficulties, let us first consider a particular case when the volume function satisfies the following estimate \setcounter{equation}{5} \begin{equation} \label{VF} V( x,R) \simeq F( R) ^{1/\nu} \end{equation} for all $x\in M$ and $R>0$, where $\nu$ is the exponent in (\ref{FK}) [e.g., (\ref{VF}) holds, if $V( x,R) \simeq R^{\alpha}$, $F( R) =R^{\beta}$ and $\nu=\beta/\alpha$]. In this case, the value $F( R) $ in~(\ref{FK}) cancels out, and we obtain \begin{equation} \label{FKc} \lambda_{\min}( \Omega) \geq c\mu( \Omega ) ^{-\nu}. \end{equation} Hence, by Proposition \ref{Pdirhk}, the semigroup $\{ P_{t}\} $ is ultracontractive, and by Theorem~\ref{TBBCK} we obtain the estimate \begin{equation} \label{t1nu} p_{t}( x,x) \leq Ct^{-1/\nu} \end{equation} for all $x\in M\setminus\mathcal{N}$ and $t>0$. Observing that \[ V( x,\mathcal{R}( t) ) \simeq F( \mathcal{R}( t) ) ^{1/\nu}=t^{1/\nu}, \] we see that (\ref{t1nu}) is equivalent to (\ref{DUE}). Although this argument works only under the restriction (\ref{VF}), it has the advantage that it can be localized as follows. Assuming that (\ref{VF}) is satisfied for all $R<R_{0}$ with some fixed constant $R_{0}$,~(\ref{FKc}) is satisfied for all open sets $\Omega$ with a bounded value of~$\mu( \Omega) $, and~(\ref{EFF}) is satisfied for all balls with a bounded value of the radius, one can prove in the same way that (\ref{t1nu}) is true for $t<t_{0}$ for some $t_{0}>0$. The proof below does not allow such a localization in the general case. In the general case, without the hypothesis (\ref{VF}), the heat semigroup~$\{ P_{t}\} $ is not necessarily ultracontractive, which requires other tools for obtaining~(\ref{DUE}). In the case of Riemannian manifolds, one can obtain~(\ref{DUE}) from~(\ref{FK}) using a certain mean value inequality (see \cite{GrigNotes,CouGgraph}), but this method heavily relies on a specific property of the distance function, namely that \mbox{$\vert\nabla d\vert\leq1$}, which is not available in our generality. We will use Kigami's iteration argument that allows us to obtain (\ref{DUE}) from (\ref{ptB}) using, in addition, the hypothesis~(\ref{EFF}). This argument is presented in an abstract form in~\cite{GrigHuUpper}, Lemma 5.6, that says the following.
Assume that the following two conditions are satisfied: \begin{longlist}[(2)] \item[(1)] for any ball $B=B( x,r) $, \begin{equation}\label{Psit} \limfunc{esup}_{B}p_{t}^{B}\leq\Psi_{t}( x,r), \end{equation} where the function $\Psi_{t}( x,r) $ satisfies certain conditions\footnote{Function $\Psi_{t}( x,r) $ should be monotone decreasing in $t$ and should satisfy the following doubling condition: if $r\leq r^{\prime }\leq2r$ and $t^{\prime}\geq t/2$, then \[ \Psi_{t^{\prime}}( x,r^{\prime}) \leq K\Psi_{t}( x,r) \] for some constant $K$. This is obviously satisfied for the function $\Psi$ given by (\ref{Psidef}).}; \item[(2)] for all $x\in M\setminus\mathcal{N}_{0}$, $t>0$, and $r\geq \varphi ( t) $, \begin{equation}\label{Pxtau} \mathbb{P}_{x}( \tau_{B}\leq t) \leq\varepsilon, \end{equation} where $\varepsilon>0$ is a sufficiently small\footnote{More precisely, this means that $\varepsilon\leq\frac{1}{2K}$ where $K$ is the constant from the conditions for $\Psi$.} constant and $\varphi$ is a positive increasing function on $( 0,+\infty) $ such that \begin{equation}\label{intfi} \int_{0}\varphi( t) \,\frac{dt}{t}<\infty. \end{equation} \end{longlist} Then the heat kernel on $M$ satisfies the estimate \begin{equation} \label{ptxx} \limfunc{esup}_{B( x,\varphi( t) ) }p_{t}\leq C\Psi _{t}( x,\varphi( t) ) . \end{equation} Obviously, (\ref{ptB}) implies (\ref{Psit}) with the function \begin{equation}\label{Psidef} \Psi_{t}( x,r) =\frac{C}{V( x,r) }\biggl( \frac{F( r) }{t}\biggr) ^{1/\nu}. \end{equation} By Corollary \ref{CEF}, (\ref{EFF}) implies (\ref{Ptau}), which means that (\ref{Pxtau}) is satisfied provided $Ct\leq F( r) $ for a sufficiently large constant $C$; hence, the function~$\varphi ( t) $ can be chosen as follows: \[ \varphi( t) =\mathcal{R}( Ct) , \] which clearly satisfies (\ref{intfi}) [indeed, by (\ref{Rb}), we have $\mathcal{R}( t) \leq Ct^{1/\beta^{\prime}}$ for all $0<t<1$, whence (\ref{intfi}) follows]. By (\ref{ptxx}), we obtain \[ \limfunc{esup}_{B( x,\varphi( t) ) }p_{t}\leq C\Psi _{t}( x,\mathcal{R}( Ct) ) \leq\frac {C}{V( x,\mathcal{R}( Ct) ) }\leq\frac{C}{V( x,\mathcal{R}( t) ) }, \] where we have also used (\ref{Rb}) and (\ref{Va}). By Theorem \ref{TptOm}(d), we can replace here $\limfunc{esup}p_{t}$ by $\sup p_{t}$ outside $\mathcal{N}$, whence (\ref{DUE}) follows. Now we prove the full upper estimate (\ref{UEE}). Fix two disjoint open subsets~$U,V$ of $M$ and use the following inequality proved in \cite{BarGrigKum}, Lemma 2.1: for all functions $f,g\in\mathcal{B}_{+}( M) $, \begin{eqnarray} \label{Ptfgb} (\mathcal{P}_{t}f,g)&\leq&\bigl(\mathbb{E}_{\cdot}\bigl(\mathbf{1}_{\{\tau _{U}\leq t/2\}}\mathcal{P}_{t-\tau_{U}}f(X_{\tau_{U}})\bigr),g\bigr)\nonumber\\[-9pt]\\[-9pt] &&{} +\bigl(\mathbb{E}_{\cdot}\bigl( \mathbf{1}_{\{\tau_{V}\leq t/2\}}\mathcal{P}_{t-\tau_{V}}g(X_{\tau _{V}})\bigr),f\bigr)\nonumber \end{eqnarray} (see Figure \ref{pic5}). \begin{figure} \includegraphics{645f05.eps} \caption{Illustration to (\protect\ref{Ptfgb}).}\label{pic5} \vspace*{-3pt} \end{figure} Assume in addition that $f\in\mathcal{B}L^{1}( V) $ and $g\in \mathcal{B}L^{1}( U) $. Then, under the condition $\tau _{U}\leq t/2$, we have \[ \mathcal{P}_{t-\tau_{U}}f(X_{\tau_{U}})=\int_{V\setminus\mathcal{N}}p_{t-\tau_{U}}( X_{\tau_{U}},y) f( y) \,d\mu ( y) \leq S\Vert{}f\Vert_{1} \] almost surely, where \begin{equation} \label{S} S:=\sup_{t/2\leq s\leq t}\mathop{\sup_{u\in\overline{U}\setminus \mathcal{N}}}_{v\in\overline{V}\setminus\mathcal{N}}p_{s}( u,v) .
\end{equation} Here we have used that $X_{\tau_{U}}\in\overline{U}\setminus \mathcal{N}$ almost surely, which is due to the fact that $\{ X_{t}\} $ is a diffusion and the set $\mathcal{N}$ is properly exceptional. It follows~that \[ \bigl(\mathbb{E}_{\cdot}\bigl(\mathbf{1}_{\{\tau_{U}\leq t/2\}}\mathcal{P}_{t-\tau_{U}}f(X_{\tau_{U}})\bigr),g\bigr)\leq S\Vert{}f\Vert_{1}\int_{U}\mathbb{P}_{x} \biggl( \tau _{U}\leq\frac{t}{2}\biggr) g( x) \,d\mu( x ) . \] Estimating similarly the second term in (\ref{Ptfgb}), we obtain from (\ref{Ptfgb}) \begin{eqnarray*} &&\int_{U}\int_{V}p_{t}( x,y) f( y) g( x) \,d\mu( x) \,d\mu( y) \\[-2pt] &&\qquad\leq S\Vert f\Vert _{1}\int_{U}\mathbb{P}_{x}\biggl( \tau_{U}\leq\frac{t}{2}\biggr) g( x) \,d\mu( x)\\[-2pt] &&\qquad\quad{} +S\Vert g\Vert_{1}\int_{V}\mathbb{P}_{y}\biggl( \tau _{V}\leq \frac{t}{2}\biggr) f( y) \,d\mu( y) . \end{eqnarray*} By \cite{GrigHuUpper}, Lemma 3.4, we conclude that, for $\mu $-a.a. $x\in V$ and $y\in U$, \begin{equation}\label{p<pB2} p_{t}( x,y) \leq S\mathbb{P}_{x}\biggl( \tau_{V}\leq \frac{t}{2}\biggr) +S\mathbb{P}_{y}\biggl( \tau_{U}\leq\frac{t}{2}\biggr) . \end{equation} A slightly different inequality (which is also sufficient for our purposes) was proved in~\cite{GrigHuComp}. For the case of heat kernels on Riemannian manifolds, (\ref{p<pB2}) was proved in \cite{GrigSal}, Lemma 3.3. By Theorem \ref{TEF}, we have \begin{equation} \label{Psi1} \mathbb{P}_{x}\bigl( \tau_{B( x,R) }\leq t\bigr) \leq C\exp ( -\Phi( \gamma R,t) ) \end{equation} for all $x\in M\setminus\mathcal{N}$ and $t,R>0$. Let \[ V_{R}=\{ x\in V\dvtx d( x,V^{c}) >R\} . \] Then, for any $x\in V_{R}\setminus\mathcal{N}$, we obtain by (\ref{Psi1}) \[ \mathbb{P}_{x}\biggl( \tau_{V}\leq\frac{t}{2}\biggr) \leq\mathbb{P}_{x}\bigl( \tau_{B( x,R) }\leq t\bigr) \leq C\exp ( -\Phi ( \gamma R,t) ) . \] Using a similar estimate for $y\in U_{R}$, we obtain from (\ref{p<pB2}) that, for $\mu$-a.a. $x\in V_{R}$ and $y\in U_{R}$, \begin{equation} \label{ptro} p_{t}( x,y) \leq CS\exp( -\Phi( \gamma R,t) ) . \end{equation} Since the right-hand side here is a constant in $x,y$, we conclude by Theorem~\ref{TptOm}(d) that (\ref{ptro}) holds for all $x\in V_{R}\setminus\mathcal{N}$ and $y\in U_{R}\setminus\mathcal{N}$. Now fix two distinct points $x,y\in M\setminus\mathcal{N}$, set \begin{equation}\label{14} R=\tfrac{1}{4}d( x,y) \end{equation} and observe that the balls $V=B( x,2R) $ and $U=B( y,2R) $ are disjoint. Since $x\in V_{R}$ and $y\in U_{R}$, we conclude that (\ref{ptro}) is satisfied for these points $x,y$ with the above value of $R$. Let us estimate the quantity $S$ defined by (\ref{S}). Using the semigroup property and (\ref{DUE}), we obtain, for all $u,v\in M\setminus\mathcal{N}$, \[ p_{s}( u,v) \leq\sqrt{p_{s}( u,u) p_{s}( v,v) }\leq\frac{C}{\sqrt{V( u,\mathcal{R}( s) ) V( v,\mathcal{R}( s) ) }}. \] Observe that by (\ref{Va}), for all $z\in M$, \begin{equation}\label{VRR} \frac{V( x,\mathcal{R}( s) ) }{V( z,\mathcal{R}( s) ) }\leq C\biggl( 1+\frac{d( x,z) }{\mathcal{R}( s) }\biggr) ^{\alpha}. \end{equation} Applying this for $z=u\in\overline{U}$ and $z=v\in\overline{V}$, so that $d( x,u) \leq6R$ and $d( x,v) \leq2R$, and substituting into the above estimate of $p_{s}( u,v) $, we obtain \[ p_{s}( u,v) \leq\frac{C}{V( x,\mathcal{R}( s) ) }\biggl( 1+\frac{R}{\mathcal{R}( s) }\biggr) ^{\alpha}.
\] Using that $s\in[ t/2,t] $ as well as (\ref{Rb}) and (\ref{Va}), we obtain \[ S\leq\frac{C}{V( x,\mathcal{R}( t) ) }\biggl( 1+\frac{R}{\mathcal{R}( t) }\biggr) ^{\alpha}, \] which together with (\ref{ptro}) yields \begin{equation} \label{ptS} p_{t}( x,y) \leq\frac{C}{V( x,\mathcal{R}( t) ) }\biggl( 1+\frac{R}{\mathcal{R}( t) }\biggr) ^{\alpha }\exp( -\Phi( \gamma R,t) ) . \end{equation} On the other hand, we have by (\ref{Fidef2}) \[ \Phi( R,t) =\sup_{\xi>0}\biggl\{ \frac{R}{\mathcal{R}( \xi ) }-\frac{t}{\xi}\biggr\} \geq\frac{R}{\mathcal{R}( t) }-1, \] where we have chosen $\xi=t$. Using the elementary estimate \[ 1+z\leq\frac{1}{a}\exp( az) ,\qquad z>0, 0<a\leq1, \] and its consequence \[ 2+z\leq\frac{2}{a}\exp( az) , \] we obtain \[ 1+\frac{R}{\mathcal{R}( t) }\leq2+\Phi( R,t) \leq \frac{2}{a}\exp\biggl( \frac{a}{\gamma}\Phi( \gamma R,t) \biggr), \] whence \begin{equation}\label{1+} \biggl( 1+\frac{R}{\mathcal{R}( t) }\biggr) ^{\alpha }\leq\biggl( \frac{2}{a}\biggr) ^{\alpha}\exp\biggl( \frac{\alpha a}{\gamma }\Phi( \gamma R,t) \biggr) . \end{equation} Choosing $a$ small enough and substituting this estimate into (\ref{ptS}), we obtain~(\ref{UEE}). \end{pf*} \begin{remark} \label{Rembdd}It is desirable to have a localized version of Theorem \ref{TDUE} when the hypotheses are assumed for balls of bounded radii and the conclusions are proved for a bounded range of time. As was already mentioned in the proof, Kigami's argument requires the ultracontractivity of $P_{t}^{B}$ for \textit{all} balls, and (\ref{EFF}) should also be satisfied for all balls, because, loosely speaking, one deals with the estimate of $p_{t}^{B_{k+1}}-p_{t}^{B_{k}}$ for an exhausting sequence of balls $\{ B_{k}\} $ (see \cite{GrigHuUpper} or \cite{KigamiNash}). As we will see in Section \ref{SecRVD}, the hypotheses $\mbox{(\ref{VD})} +\mbox{(\ref{condH})} + \mbox{(\ref{EFF})}$ for all balls imply that the space $( M,d) $ is unbounded. Note that all other arguments used in this paper do admit localization. \end{remark} \section{Lower bounds of heat kernel} \label{SecLow} \subsection{Oscillation inequalities} The Harnack inequality (\ref{condH}) is a standing assumption in this subsection. The main result is Proposition \ref{Posc}, which is heavily based on the oscillation inequality of Lemma \ref{Lemosc}. The latter is considered as a standard consequence of (\ref{condH}), but we still provide a full proof to emphasize the use of the precompactness of balls. \begin{lemma} \label{LemH}Let $B$ be a precompact ball of radius $r$ in $M$. If $u\in \mathcal{F}$ is harmonic in $B$, and if $u\geq a$ in $B$ for some real constant $a$, then \begin{equation}\label{H-a} \limfunc{esup}_{\delta B}( u-a) \leq C\limfunc {einf}_{\delta B}( u-a) , \end{equation} where $C$ and $\delta$ are the same constants as in (\ref{condH}). \end{lemma} \begin{pf} Let $\psi$ be a cutoff function of $B$ in $M$, that is, $\psi\in \mathcal{F}\cap C_{0}( M) $ and $\psi\equiv1$ in an open neighborhood of $\overline{B}$. The function $v=u-a\psi$ belongs to $\mathcal{F}$ and is equal to $u-a$ in $B$. Let us show that $v$ is harmonic in $B$. Indeed, for any $\varphi\in\mathcal{F}( B) $, we have \begin{equation}\label{EEE} \mathcal{E}( v,\varphi) =\mathcal{E}( u-a\psi ,\varphi ) =\mathcal{E}( u,\varphi) -a\mathcal{E}( \psi ,\varphi) =0, \end{equation} because $\mathcal{E}( u,\varphi) =0$ by the harmonicity of $u$ in $B$, and $\mathcal{E}( \psi,\varphi) =0$ by the strong locality as $\psi\equiv1$ in a neighborhood of $\limfunc {supp}\varphi$.
Applying (\ref{condH}) to $v$, we obtain (\ref{H-a}). \end{pf} For any function $f$ on any set $S\subset M$, define the essential oscillation of $f$ in $S$ by \[ \limfunc{eosc}_{S}f=\limfunc{esup}_{S}f-\limfunc{einf}_{S}f. \] The following statement is well known for functions in $\mathbb{R}^{n}$ (see, e.g., \cite{MoserEl} and~\cite{Salbook}, Lemma 2.3.2). \begin{lemma} \label{Lemosc}There exists $\theta>0$ such that, for any precompact ball $B( x,r) \subset M$, for any nonnegative harmonic function $u$ in $B( x,r) $, and any $0<\rho\leq r$, \begin{equation} \label{eosc} \limfunc{eosc}_{B( x,\rho) }u\leq2\biggl( \frac{\rho }{r}\biggr) ^{\theta}\limfunc{eosc}_{B( x,r) }u, \end{equation} where $\theta$ is a positive constant that depends on the constants in (\ref{condH}). \end{lemma} \begin{pf} Fix $x\in M$, and write for simplicity $B_{r}=B( x,r) $. Consider first the case when $\rho=\delta r$ [where $\delta$ is the parameter from (\ref{condH})] and set \[ a=\limfunc{esup}_{B_{r}}u,\qquad b=\limfunc{einf}_{B_{r}}u \] and \[ a^{\prime}=\limfunc{esup}_{B_{\rho}}u,\qquad b^{\prime}=\limfunc{einf} _{B_{\rho}}u. \] Clearly, $b\leq b^{\prime}\leq a^{\prime}\leq a$. By Lemma \ref{LemH}, we have \begin{equation}\label{esiu} \limfunc{esup}_{B_{\rho}}( u-b) \leq C\limfunc {einf}_{B_{\rho }}( u-b), \end{equation} that is, \[ a^{\prime}-b\leq C( b^{\prime}-b) . \] Similarly, applying Lemma \ref{LemH} to the function $-u$, we obtain \[ \limfunc{esup}_{B_{\rho}}( a-u) \leq C\limfunc {einf}_{B_{\rho }}( a-u) , \] whence \[ a-b^{\prime}\leq C( a-a^{\prime}) . \] Adding up the two inequalities yields \[ ( a-b) +( a^{\prime}-b^{\prime}) \leq C( a-b) -C( a^{\prime}-b^{\prime}), \] whence \[ a^{\prime}-b^{\prime}\leq\frac{C-1}{C+1}( a-b) , \] that is, \begin{equation}\label{der} \limfunc{eosc}_{B_{\delta r}}u\leq\gamma\limfunc{eosc}_{B_{r}}u, \end{equation} where $\gamma:=\frac{C-1}{C+1}<1$. For an arbitrary $\rho\leq r$, find a nonnegative integer $k$ such that \[ \delta^{k+1}r<\rho\leq\delta^{k}r. \] Iterating (\ref{der}), we obtain \begin{eqnarray*} \limfunc{eosc}_{B_{\rho}}u&\leq&\limfunc{eosc}_{B_{\delta ^{k}r}}u\leq \gamma^{k}\limfunc{eosc}_{B_{r}}u\leq\gamma^{{\log({r/\rho})}/{\log({1}/{\delta})}-1}\limfunc{eosc}_{B_{r}}u\\ &=&\frac{1}{\gamma }\biggl( \frac{\rho}{r}\biggr) ^{({\log({1/\gamma})})/({\log({1/\delta})})}\limfunc{eosc}_{B_{r}}u. \end{eqnarray*} Note that the constant $C$ in (\ref{esiu}) can be assumed to be big enough, say $C>3$. Then $\gamma>1/2$ and (\ref{eosc}) follows from the previous line with $\theta=({\log({1/\gamma})})/({\log({1/\delta})})$. \end{pf} \begin{proposition} \label{Posc}Let $\Omega$ be an open subset of $M$ such that $\widetilde{E}( \Omega) <\infty$. Fix a function $f\in\mathcal{B}_{b}( \Omega) $, and set $u=G^{\Omega}f$. Then, for any precompact ball $B( x,r) \subset\Omega$ and all $\rho\in(0,r]$, \begin{equation} \label{oscu} \limfunc{eosc}_{B( x,\rho) }u\leq2\widetilde{E}( x,r) \limfunc{esup}_{B( x,r) }\vert f \vert +4\biggl( \frac{\rho}{r}\biggr) ^{\theta}\limfunc{esup}_{B( x,r) }\vert u\vert, \end{equation} where $\theta$ is the same constant as in Lemma \ref{Lemosc} and \[ \widetilde{E}( x,r) :=\widetilde{E}( B( x,r) ) . \] \end{proposition} \begin{pf} Write for simplicity $B_{r}:=B( x,r) $.
Let us first prove that if $f\geq0$, then \begin{equation} \label{osc+} \limfunc{eosc}_{B_{\rho}}u\leq\widetilde{E}( x,r) \limfunc{esup}_{B_{r}}f+2\biggl( \frac{\rho}{r}\biggr) ^{\theta}\limfunc {esup}_{B_{r}}u. \end{equation} By Lemma \ref{LG1-1}, we have for the function $v=G^{B_{r}}f$ that \[ \limfunc{eosc}_{B_{\rho}}v\leq\limfunc{esup}_{B_{r}}v\leq \widetilde{E}( x,r) \limfunc{esup}_{B_{r}}f. \] The function $w:=u-v$ is harmonic in $B_{r}$ by Lemma \ref{LemG-G} and nonnegative by Theorem \ref{TptOm}(b). By Lemma \ref{Lemosc}, we obtain \[ \limfunc{eosc}_{B_{\rho}}w\leq2\biggl( \frac{\rho}{r}\biggr) ^{\theta}\limfunc{eosc}_{B_{r}}w\leq2\biggl( \frac{\rho}{r}\biggr) ^{\theta}\limfunc{esup}_{B_{r}}u. \] Since $u=v+w$, (\ref{osc+}) follows by adding up the two previous lines. For a signed function $f$, write $f=f_{+}-f_{-}$ and set \[ \overline{u}:=G^{\Omega}f_{+} \quad\mbox{and}\quad\underline {u}:=G^{\Omega }f_{-}. \] Then $\overline{u}$ and $\underline{u}$ are nonnegative and $u=\overline{u}-\underline{u}$, whence it follows that \[ \limfunc{eosc}u=\limfunc{eosc}( \overline{u}-\underline {u}) \leq \limfunc{eosc}\overline{u}+\limfunc{eosc}\underline{u}. \] Applying (\ref{osc+}) separately to $\overline{u}$ and $\underline {u}$ and adding up the inequalities, we obtain~(\ref{oscu}). \end{pf} \subsection{Time derivative} In this section, we assume only the basic hypotheses. If $f\in L^{2}( M) $, then, for any $t>0$, the function $u_{t}=P_{t}f$ is in $\func{dom}( \mathcal{L}) $ and satisfies the heat equation \begin{equation} \label{tL} \partial_{t}u_{t}+\mathcal{L}u_{t}=0, \end{equation} where $\partial_{t}u_{t}$ is the strong derivative in $L^{2}( M) $ of the mapping $t\mapsto u_{t}$; cf. \cite{GrigHu} and~\cite{Grigbook}, Section 4.3. The argument in the next lemma is well known in the context of semigroup theory (see \cite{Davbooko,Davbook}), and we reproduce it here for the sake of completeness. \begin{lemma} For any $f\in L^{2}( M) $ and all $0\leq s<t$, we have \begin{equation}\label{dudt} \Vert\partial_{t}u_{t}\Vert_{2}\leq\frac{1}{t-s}\Vert u_{s}\Vert_{2}, \end{equation} where $u_{t}=\mathcal{P}_{t}f$. \end{lemma} \begin{pf} Let $\{ E_{\lambda}\} _{\lambda\geq0}$ be the spectral resolution in $L^{2}( M) $ of the operator~$\mathcal{L}$. Then we have \begin{eqnarray*} u_{t} &=&e^{-t\mathcal{L}}f=\int_{0}^{\infty}e^{-t\lambda }\,dE_{\lambda}f,\\ \partial_{t}u_{t} &=&-\mathcal{L}e^{-t\mathcal{L}}f=\int _{0}^{\infty }( -\lambda) e^{-t\lambda}\,dE_{\lambda}f, \end{eqnarray*} whence \begin{eqnarray*} \Vert u_{t}\Vert_{2}^{2} &=&\int_{0}^{\infty}e^{-2t\lambda}\,d\Vert E_{\lambda}f\Vert^{2}, \\ \Vert\partial_{t}u_{t}\Vert_{2}^{2} &=&\int_{0}^{\infty}\lambda ^{2}e^{-2t\lambda}\,d\Vert E_{\lambda}f\Vert^{2}. \end{eqnarray*} Since \[ \lambda^{2}e^{-2t\lambda}=\bigl( \lambda e^{-( t-s) \lambda }\bigr) ^{2}e^{-2s\lambda}\leq\frac{1}{( t-s) ^{2}}e^{-2s\lambda}, \] we obtain \[ \Vert\partial_{t}u_{t}\Vert_{2}^{2}\leq\frac{1}{( t-s ) ^{2}}\int_{0}^{\infty}e^{-2s\lambda}\,d\Vert E_{\lambda}f\Vert^{2}=\frac{1}{( t-s) ^{2}}\Vert u_{s}\Vert_{2}^{2}, \] which was to be proved. \end{pf} In the rest of this section, assume that $p_{t}( x,y) $ is the heat kernel with domain~$D$. \begin{corollary} \label{CdtptL2}For any $t>0$ and $y\in D$, the function $t\mapsto p_{t}( \cdot,y) $ is strongly differentiable in $L^{2}( M) $ and, for all $0<s<t$, \[ \Vert{}\partial_{t}p_{t}( \cdot,y) \Vert_{2}\leq\frac{1}{t-s}\sqrt{p_{2s}( y,y) }.
\] \end{corollary} \begin{pf} Setting $f=p_{\varepsilon}( \cdot,y) $ for some $\varepsilon>0$ and using (\ref{ptPt}), we obtain that the function \[ u_{t}=\mathcal{P}_{t}f=p_{t+\varepsilon}( \cdot,y) \] satisfies (\ref{dudt}), that is, \[ \Vert{}\partial_{t}p_{t+\varepsilon}( \cdot,y) \Vert _{2}\leq \frac{1}{t-s}\Vert p_{s+\varepsilon}( \cdot,y) \Vert _{2}=\frac{1}{t-s}\sqrt{p_{2( s+\varepsilon) }( y,y) }. \] Renaming $t+\varepsilon$ by $t$ and $s+\varepsilon$ by $s$, we finish the proof. \end{pf} Set \[ p_{t}^{\prime}( x,y) \equiv\partial_{t}p_{t}( x,y) , \] where the strong derivative $\partial_{t}$ is taken in $L^{2}( M) $ with respect to the variable $x$. Hence, for any $y\in D$ and $t>0$, $p_{t}^{\prime }( x,y) $ is an $L^{2}$-function of $x$. \begin{lemma} \label{LemqtPt}For all $0<s<t$ and $y\in D$, we have \begin{equation} \label{p'} p_{t}^{\prime}( \cdot,y) =\mathcal{P}_{s}p_{t-s}^{\prime }( \cdot,y) . \end{equation} \end{lemma} \begin{pf} Indeed, we have by (\ref{ptPt}) \[ p_{t}( \cdot,y) =\mathcal{P}_{s}p_{t-s}( \cdot ,y) . \] Since $\mathcal{P}_{s}$ is a bounded operator in $L^{2}$, it commutes with the operator $\partial_{t}$ of strong differentiation. Applying the latter to both sides of the above identity, we obtain~(\ref{p'}). \end{pf} \begin{corollary} \label{Cdtpt}For all $t>0$, $y\in D$ and $\mu$-a.a. $x\in D$, \begin{equation} \label{dtpt} \vert\partial_{t}p_{t}( x,y) \vert\leq \frac{2}{t}\sqrt{p_{t/2}( x,x) p_{t/2}( y,y) }. \end{equation} \end{corollary} \begin{pf} By Lemma \ref{LemqtPt}, we have, for all $y\in D$ and $\mu$-a.a. $x\in D$, \[ p_{t}^{\prime}( x,y) =( p_{s}( x,\cdot) ,p_{t-s}^{\prime}( \cdot,y) ) , \] whence by Corollary \ref{CdtptL2}, \[ \vert p_{t}^{\prime}( x,y) \vert\leq\Vert {}p_{s}( x,\cdot) \Vert_{2}\Vert p_{t-s}^{\prime} ( \cdot ,y) \Vert_{2}\leq\frac{1}{t-s-r}\sqrt{p_{2s}( x,x) p_{2r}( y,y) } \] for any $0<r<t-s$. Choosing $s=r=t/4$, we finish the proof of (\ref{dtpt}). \end{pf} \begin{remark} It follows easily from the identity \[ p_{t}( x,y) =( p_{s}( x,\cdot) ,p_{t-s}( \cdot,y) ) , \] that the function $t\mapsto p_{t}( x,y) $ is differentiable for all fixed $x,y\in D$ and \[ \frac{\partial}{\partial t}p_{t}( x,y) =( p_{s}( x,\cdot) ,\partial_{t}p_{t-s}( \cdot,y) ) =( p_{s}( x,\cdot) ,p_{t-s}^{\prime}( \cdot,y) ) . \] Arguing as in the previous proof, we obtain \[ \biggl\vert\frac{\partial}{\partial t}p_{t}( x,y) \biggr\vert \leq\frac{2}{t}\sqrt{p_{t/2}( x,x) p_{t/2}( y,y) } \] for all $t>0$ and $x,y\in D$. However, for applications we need estimate (\ref{dtpt}) for the strong derivative $\partial_{t}p_{t}$ rather than for the partial derivative $\frac{\partial}{\partial t}p_{t}( x,y) $. \end{remark} \begin{lemma} \label{LemGu}If $\Omega$ is an open subset of $M$ and if $\widetilde{E}( \Omega) <\infty$, then, for all $t>0$ and $z\in M$, the function $u_{t}:=p_{t}^{\Omega}( \cdot,z) $ satisfies in $( 0,+\infty) \times\Omega$ the equation \[ G^{\Omega}( \partial_{t}u_{t}) +u_{t}=0. \] \end{lemma} \begin{pf} By Lemma \ref{LG1-1}, the Green operator $G^{\Omega}$ is a bounded operator in $L^{2}( \Omega) $, and $G^{\Omega}$ is the inverse operator to $\mathcal{L}^{\Omega}$. Since the function $u_{t}$ satisfies the equation $\partial_{t}u_{t}+\mathcal{L}^{\Omega}u_{t}=0$, applying $G^{\Omega}$ proves the claim. \end{pf} \subsection{\texorpdfstring{The H\"{o}lder continuity}{The Holder continuity}} In this subsection we use the hypotheses $\mbox{(\ref{VD})} +\mbox{(\ref{condH})} +({E}_{F}\mbox{$\leq$}) $.
As it follows from Theorem \ref{TG=>FK} and Proposition \ref{Pdirhk}, under these hypotheses the heat semigroup $\{ P_{t}\} $ is locally ultracontractive. Hence, by Theorem~\ref{TptOm}, for any open set $\Omega\subset M$, the heat kernel $p_{t}^{\Omega}$ exists with the domain $\Omega\setminus\mathcal{N}$ where $\mathcal {N \subset M$ is a fixed properly exceptional set; cf. the beginning of the proof of Theorem \ref{TDUE}.\vadjust{\goodbreak} \begin{lemma} \label{LemoscpB}Let the hypotheses $\mbox{(\ref{VD})} +\mbox{(\ref{condH})} +({E}_{F}\mbox{$\leq$}) $ be satisfied, and let $\Omega$ be an open subset of $M$. Fix $t>0$, $0<\rho\leq\mathcal{R}( t), $ and set \begin{equation} \label{r=} r=( \mathcal{R}( t) ^{\beta}\rho^{\theta} ) ^{{ }/({\beta+\theta})}, \end{equation} where $\beta$ is the exponent from (\ref{Fb}), and $\theta$ is the constant from Lemma \ref{Lemosc}. Fix also a point $x\in \Omega \setminus\mathcal{N}$ and assume that the ball $B( x,r) $ is precompact, and its closure is contained in $\Omega$. The \begin{equation} \label{oscpB} \limfunc{osc}_{y\in B( x,\rho) \setminus\mathcal{N }p_{t}^{\Omega}( x,y) \leq C\biggl( \frac{\rho }{\mathcal{R ( t) }\biggr) ^{\Theta}\sup_{y\in B( x,r) \setminus \mathcal{N}}p_{t/2}^{\Omega}( y,y) , \end{equation} where $\Theta=\frac{\beta\theta}{\beta+\theta}$ and $C$ depends on the constants in $({E}_{F}\mbox{$\leq$}) $ and (\ref{Fb}). \end{lemma} \begin{pf} By construction in the proof of Theorem \ref{TptOm}, the heat kernel~ p_{t}^{\Omega}$ is obtained as a monotone increasing limit of $p_{t}^{U_{n}}$ as $n\rightarrow\infty$ where $\{ U_{n}\} $ is an exhaustion of $\Omega$ by sets $U_{n}$ that are finite union of balls from a countable base and the convergence is pointwise in $\Omega\setminus\mathcal{N}$. Suppose for a~moment that we have proved (\ref{oscpB}) for $U_{n}$ instead of $\Omega$, that is \begin{eqnarray} \label{ptUn} &&\sup_{y\in B( x,\rho) \setminus\mathcal {N}}p_{t}^{U_{n}}( x,y)\nonumber\\[-8pt]\\[-8pt] &&\qquad \leq\inf_{y\in B( x,\rho) \setminus \mathcal{N }p_{t}^{U_{n}}( x,y) +C\biggl( \frac{\rho}{\mathcal {R}( t) }\biggr) ^{\Theta}\sup_{y\in B( x,r) \setminus\mathcal N}}p_{t/2}^{U_{n}}( y,y)\nonumber \end{eqnarray} [note that if $n$ is large enough, then $B( x,r) \Subset U_{n}$]. Replacing on the right-hand side of (\ref{ptUn}) $p_{t}^{U_{n}}$ by a larger value $p_{t}^{\Omega}$ and letting $n\rightarrow\infty$ on the left-hand side, we obtain (\ref{oscpB}). To prove (\ref{ptUn}), rename for simplicity $U_{n}$ into $U$ and recall that, by construction in the proof of Theorem \ref{TptOm}, the domain of p_{t}^{U}$ is $U\setminus\mathcal{N}_{U}$ where~$\mathcal{N}_{U}$ is a truly exceptional set in $U$, that is contained in $\mathcal{N}$. It follows from Corollary \ref{Cesup=sup1}, that, for any $x\in U\setminus \mathcal{N}$ \[ \sup_{y\in B( x,\rho) \setminus\mathcal {N}}p_{t}^{U}( x,y) =\limfunc{esup}_{y\in B( x,\rho) }p_{t}^{U}( x,y) \] and a similar identity for $\inf$ and $\limfunc{einf}$. Hence, it suffices to prove tha \[ \limfunc{eosc}_{y\in B( x,\rho) }p_{t}^{U}( x,y) \leq C\biggl( \frac{\rho}{\mathcal{R}( t) }\biggr) ^{\Theta}A, \] wher \[ A=\sup_{y\in B( x,r) \setminus\mathcal {N}}p_{t/2}^{U}( y,y) . \] Se \[ u( y) =p_{t}^{U}( x,y) \quad\mbox{and}\quad f( y) =\partial_{t}p_{t}^{U}( x,y) , \] where $\partial_{t}$ is the strong derivative in $L^{2}( U ) $ with respect to the variable~$y$. Applying Corollary \ref{Cdtpt} to the heat kernel $p_{t}^{U}$, we obtain, for $\mu$-a.a. 
$y\in B( x,r) $ \[ \vert f( y) \vert\leq\frac{2}{t}\sqrt p_{t/2}^{U}( x,x) p_{t/2}^{U}( y,y) }\leq \frac{2}{t}A. \] By Lemma \ref{LemGu}, we have $u=-G^{U}f$. Since for all $y\in B( x,r) \setminus\mathcal{N} \[ u( y) \leq\sqrt{p_{t/2}^{U}( x,x) p_{t/2}^{U}( y,y) }\leq A \] and $\rho\leq r$, we obtain by Proposition \ref{Posc} and $( {E}_{F}\mbox{$\leq$} ) $ tha \begin{eqnarray*} \limfunc{eosc}_{B( x,\rho) }u &\leq&2\widetilde {E}( x,r) \limfunc{esup}_{B( x,r) }\vert f \vert +4\biggl( \frac{\rho}{r}\biggr) ^{\theta}\limfunc{esup}_{B( x,r) }\vert u\vert\\ &\leq&CF( r) \frac{A}{t}+4\biggl( \frac{\rho}{r} \biggr) ^{\theta }A. \end{eqnarray*} Since $r\leq\mathcal{R}( t) $, we have by (\ref{Fb} \[ F( r) \leq C\biggl( \frac{r}{\mathcal{R}( t ) }\biggr) ^{\beta}F( \mathcal{R}( t) ) =C\biggl( \frac{r} \mathcal{R}( t) }\biggr) ^{\beta}t, \] whence it follows tha \[ \limfunc{eosc}_{B( x,\rho) }u\leq C\biggl( \biggl( \frac {r} \mathcal{R}( t) }\biggr) ^{\beta}+\biggl( \frac{\rho }{r}\biggr) ^{\theta}\biggr) A. \] Note that this inequality is true for any $r$ such that $B( x,r) \Subset U$ and $\rho\leq r\leq\mathcal{R}( t) $. Choosing r=( \mathcal{R}( t) ^{\beta}\rho^{\theta} ) ^{{ }/({\beta+\theta})}$, we obtain (\ref{oscpB}). \end{pf} \begin{theorem} \label{THolder}Let the hypotheses $\mbox{(\ref{VD})} + \mbox{(\ref{condH})} + \mbox{(\ref{EFF})}$ be satisfied. Then, for any open set $\Omega\subset M$, the heat kernel $p_{t}^{\Omega}( x,y) $ is H\"{o}lder continuous in x$ and $y$ in $\Omega\setminus\mathcal{N}$. \end{theorem} \begin{pf} Fix $x\in\Omega\setminus\mathcal{N}$, $t>0$, and choose $\rho>0$ so small that $B( x,r) \Subset\Omega$ where $r=r( t,\rho ) $ is defined by (\ref{r=}). Using Theorem \ref{TDUE}, (\ref{VD}), and (\ref{Fb}), we obtain that, for any $y\in B( x,r) \setminus \mathcal{N}$ \begin{eqnarray*} p_{t/2}^{\Omega}( y,y) &\leq&p_{t/2}( y,y) \leq \frac{C}{V( y,\mathcal{R}( t) ) } \\ &=&\frac{C}{V( x,\mathcal{R}( t) ) }\frac {V( x \mathcal{R}( t) ) }{V( y,\mathcal{R}( t) ) } \\ &\leq&\frac{C}{V( x,\mathcal{R}( t) ) }\biggl( 1+\frac d( x,y) }{\mathcal{R}( t) }\biggr) ^{\alpha } \\ &\leq&\frac{C}{V( x,\mathcal{R}( t) ) }, \end{eqnarray*} where we have used that $d( x,y) <r\leq\mathcal{R}( t) $. Therefore, by Lemma \ref{LemoscpB} \begin{equation} \label{oscptOm} \limfunc{osc}_{y\in B( x,\rho) \setminus\mathcal{N }p_{t}^{\Omega}( x,y) \leq\biggl( \frac{\rho}{\mathcal {R}( t) }\biggr) ^{\Theta}\frac{C}{V( x,\mathcal{R}( t) ) }. \end{equation} In particular, if $y\in\Omega\setminus\mathcal{N}$ is close enough to $x$, then we hav \[ \vert p_{t}^{\Omega}( x,x) -p_{t}^{\Omega}( x,y) \vert\leq\biggl( \frac{d( x,y) }{\mathcal{R ( t) }\biggr) ^{\Theta}\frac{C}{V( x,\mathcal {R}( t) ) }, \] which means that $p_{t}^{\Omega}( x,\cdot) $ is H\"{o}lder continuous in $\Omega\setminus\mathcal{N}$. \end{pf} \begin{corollary} \label{CorHolder}Let the hypotheses $\mbox{(\ref{VD})} + \mbox{(\ref{condH})} + \mbox{(\ref{EFF})}$ be satisfied, and let $B( x,R) $ be a precompact ball, such that $x\in M\setminus\mathcal{N}$. Then for all \rho$ and $t$, such that \begin{equation} \label{troR} 0<\rho\leq\mathcal{R}( t) <R, \end{equation} and for all $y\in B( x,\rho) \setminus\mathcal{N}$, the following estimate holds: \begin{equation} \label{pt-pt} \bigl\vert p_{t}^{B( x,R) }( x,x) -p_{t}^{B( x,R) }( x,y) \bigr\vert\leq\biggl( \frac{\rho }{\mathcal{ }( t) }\biggr) ^{\Theta}\frac{C}{V( x,\mathcal {R}( t) ) }. \end{equation} \end{corollary} \begin{pf} Set $\Omega=B( x,R) $. 
Then the condition $B( x,r) \Subset\Omega$ from the previous proof is satisfied because $r\leq \mathcal{R}( t) <R$ by (\ref{r=}) and (\ref{troR}). Hence,~(\ref{pt-pt}) follows from (\ref{oscptOm}). \end{pf} \subsection{Proof of the lower bounds} \label{Seclow} \begin{lemma} \label{LemDLE}Assume that $\mbox{(\ref{VD})} + \mbox{(\ref{EFF})}$ are satisfied. Then there exists $\varepsilon>0$ such that, for all precompact balls $B( x,R) $ with $x\in M\setminus\mathcal{N}$ and for all 0<t\leq\varepsilon F( R) $ \begin{equation} \label{pB>} p_{t}^{B( x,R) }( x,x) \geq\frac{c}{V ( x \mathcal{R}( t) ) }. \end{equation} \end{lemma} \begin{pf} Choose $r$ from the condition $t=\varepsilon F( r) $ which implies $R\geq r$ and, hence, $p_{t}^{B( x,R) }\geq p_{t}^{B( x,r) }$. Hence, it suffices to prove tha \[ p_{t}^{B( x,r) }( x,x) \geq\frac{c}{V ( x \mathcal{R}( t) ) }. \] Setting $B=B( x,r) $, we have by (\ref{PtOm} \[ \int_{B\setminus\mathcal{N}}p_{t}^{B}( x,y) \,d\mu ( y) =\mathcal{P}_{t}^{B}1=\mathbb{P}_{x}( t<\tau_{B}) =1-\mathbb{P _{x}( \tau_{B}\leq t) . \] By Corollary \ref{CEF}, we obtai \[ \mathbb{P}_{x}( \tau_{B}\leq t) \leq C\exp\biggl( -c\biggl( \frac F( r) }{t}\biggr) ^{{1}/({\beta^{\prime}-1})} \biggr) =C\exp \bigl( -c\varepsilon^{-{1}/({\beta^{\prime}-1})}\bigr) ,\vadjust{\goodbreak} \] whence it follows that, for small enough $\varepsilon\in( 0,1) \[ \int_{B}p_{t}^{B}( x,y) \,d\mu( y) \geq\frac{1}{2}. \] Therefore \begin{eqnarray*} p_{2t}^{B}( x,x) &=&\int_{B\setminus\mathcal {N}}p_{t}^{B}( x,y) ^{2}\,d\mu( y) \\ &\geq&\frac{1}{\mu( B) }\biggl( \int_{B\setminus \mathcal{N }p_{t}^{B}( x,y) \,d\mu( y) \biggr) ^{2} \\ &\geq&\frac{1/4}{V( x,\mathcal{R}( t/\varepsilon ) ) , \end{eqnarray*} where we have used that $r=\mathcal{R}( t/\varepsilon) $. Finally, using (\ref{Rb}) and (\ref{Va}) we obtain (\ref{pB>}). \end{pf} \begin{theorem} \label{TNLE}Assume that the hypotheses $\mbox{(\ref{VD})} + \mbox{(\ref{condH})} + \mbox{(\ref{EFF})}$ are satisfied, and all metric balls in $( M,d) $ are precompact. Then there exist~$\varepsilon,\eta>0$ such that\label{condLLE \begin{equation} \label{LLE} p_{t}^{B( x,R) }( x,y) \geq\frac{c}{V ( x \mathcal{R}( t) ) } \end{equation} for all $R>0$, $0<t\leq\varepsilon F( R) $ and $x,y\in M\setminus\mathcal{N}$, provided \begin{equation} \label{dd} d( x,y) \leq\eta\mathcal{R}( t) \end{equation} (see Figure \ref{pic9}). Consequently, the following \begin{figure} \includegraphics{645f06.eps} \caption{Illustration to Theorem \protect\ref{TNLE}.}\label{pic9} \end{figure} inequality:\label{condNLE {\renewcommand{\theequation}{\textit{NLE}} \begin{equation} p_{t}( x,y) \geq\frac{c}{V( x,\mathcal{R}( t) ) } \end{equation}} \vspace*{-8pt} \noindent holds for all $t>0$ and $x,y\in M\setminus\mathcal{N}$ satisfying (\ref{dd}). \end{theorem} \begin{pf} Obviously, (\ref{NLE}) follows from (\ref{LLE}) by letting $R\rightarrow\infty$, so that it suffices to prove (\ref{LLE}). Let $\rho$ and $t$ be such tha \[ 0<\rho\leq\eta\mathcal{R}( t) \quad\mbox{and}\quad t\leq \varepsilon F( R), \] where $\varepsilon\in( 0,1) $ is the constant from Lemma \ref{LemDLE}, and $\eta\in( 0,1) $ is to be defined below. Then the hypotheses of Lemma \ref{LemDLE} and Corollary \ref{CorHolder} are satisfied. 
Writing for simplicity $B=B( x,R) $, we obtain by (\ref{LLE}) and (\ref{pt-pt}) that, for any $y\in B( x,\rho) \setminus\mathcal{N}$ \begin{eqnarray*} p_{t}^{B}( x,y) &\geq&p_{t}^{B}( x,x) - \vert p_{t}^{B}( x,x) -p_{t}^{B}( x,y) \vert \\ &\geq&\frac{c}{V( x,\mathcal{R}( t) ) }-\biggl( \frac \rho}{\mathcal{R}( t) }\biggr) ^{\Theta}\frac {C}{V( x \mathcal{R}( t) ) } \\ &\geq&\frac{c-C\eta^{\Theta}}{V( x,\mathcal{R}( t) ) }. \end{eqnarray*} Choosing $\eta$ sufficiently small, we obtain (\ref{LLE}). \end{pf} Combining Theorems \ref{TUE}, \ref{THolder} and \ref{TNLE}, we obtain the main result: \begin{theorem} \label{Tmain}If the hypotheses $\mbox{(\ref{VD})} +\mbox{(\ref{condH})} + \mbox{(\ref{EFF})}$ are satisfied and all metric balls are precompact, then the heat kernel exists, is H\"{o}lder continuous in $x,y$, and satisfies (\ref{UEE}) and (\ref{NLE}). \end{theorem} \begin{example} Under the hypotheses of Theorem \ref{Tmain}, assume that the volume function $V( x,r) $ satisfies the uniform estimat \[ V( x,r) \simeq r^{\alpha} \] with some $\alpha>0$, and function $F$ be as follows \setcounter{equation}{20} \begin{equation} \label{F12} F( r) =\cases{ r^{\beta_{1}}, &\quad$r<1$, \cr r^{\beta_{2}}, &\quad$r\geq1$,} \end{equation} where $\beta_{1}>\beta_{2}>1$.\label{remwhysuchexampleexists?} Then \[ \mathcal{R}( t) = \cases{ t^{1/\beta_{1}}, &\quad$t<1$, \cr t^{1/\beta_{2}}, &\quad$t\geq1$,} \] and the heat kernel satisfies the estimat \[ p_{t}( x,y) \leq\frac{C}{V( x,\mathcal{R}( t) ) }\simeq C \cases{ t^{-\alpha/\beta_{1}}, &\quad$t<1$,\cr t^{-\alpha/\beta_{2}}, &\quad$t\geq1$.} \] It follows that \begin{equation} \label{pupbad} p_{t}( x,y) \leq Ct^{-\alpha/\beta} \end{equation} for any $\beta$ from the interval $\beta_{2}<\beta<\beta_{1}$. Let us verify that the following upper bound fails \begin{equation} \label{ptupbad} p_{t}( x,y) \leq Ct^{-\alpha/\beta}\exp\biggl( -\biggl( \frac r^{\beta}}{Ct}\biggr) ^{{1}/({\beta-1})}\biggr) , \end{equation} where $r=d( x,y) $. Indeed, by (\ref{NLE}) we hav \[ p_{t}( x,y) \geq\frac{c}{V( x,\mathcal{R}( t) ) } \] provided $r\leq\eta\mathcal{R}( t) $. Assuming that $t<1$ and setting $r=\eta\mathcal{R}( t) =\eta t^{1/\beta_{1}}$ we obtai \footnote The existence of a couple $x,y$ with a prescribed distance $r=d( x,y) $ can be guaranteed, provided the space $( M,d ) $ is connected.} \begin{equation} \label{plowab} p_{t}( x,y) \geq\frac{c}{t^{\alpha/\beta_{1}}}, \end{equation} while it follows from (\ref{ptupbad}) tha \begin{equation}\label{pupab} p_{t}( x,y) \leq\frac{C}{t^{\alpha/\beta}}\exp\bigl( -c( t^{{\beta}/{\beta_{1}}-1}) ^{{1}/({\beta-1})}\bigr) . \end{equation} Since $\beta/\beta_{1}<1$, the exponent of $t$ under the exponential is negative so that the right-hand side of (\ref{pupab}) becomes as t\rightarrow0$ much smaller than that of~(\ref{plowab}), which is a contradiction. Another way to see a contradiction is to observe that (\ref{ptupbad}) implies (\ref{EFF}) with function $F( r) \simeq r^{\beta}$ (cf. \cite{KigamiNash,GrigHuUpper}), which is incompatible with (\ref{EFF}) with function (\ref{F12}) [although this argument requires the conservativeness of $( \mathcal {E},\mathcal{ }) $]. The conclusion is that in general (\ref{pupbad}) does not imply (\ref {ptupbad}). For comparison, let us note that if $\beta=2$ and the underlying space is a Riemannian manifold, then (\ref{pupbad}) does imply \ref{ptupbad}); cf. \cite{GrigSuper}. 
\end{example} \section{Matching upper and lower bounds} \label{SecTwo} \subsection{\texorpdfstring{Distance $d_{\varepsilon}$}{Distance d_epsilon}} \label{Secde} \begin{definition} We say that a sequence $\{ x_{i}\} _{i=0}^{N}$ of points in M $ is an $\varepsilon$-\textit{chain} between points $x,y\in M$ i \[ x_{0}=x,\qquad x_{N}=y\quad \mbox{and}\quad d(x_{i},x_{i-1})<\varepsilon \qquad\mbox{for all }i=1,2,\ldots,N. \] \end{definition} One can view an $\varepsilon$-chain as a sequence of \textit{chained} balls \{ B_{i}\} _{i=0}^{N}$ of radii~$\varepsilon$, that connect $x$ and~$y$; that is, the center of $B_{0}$ is $x$, the center of $B_{N}$ is~$y , and the center of $B_{i}$ is contained in $B_{i-1}$ for any $i=1,\ldots,N$ \begin{figure} \includegraphics{645f07.eps} \caption{An $\protect\varepsilon$-chain connecting $x$ and $y$.}\label{pic8} \end{figure} (see Figure~\ref{pic8}). \begin{definition} For any $\varepsilon>0$ and all $x,y\in M$, defin \begin{equation}\label{dedef} d_{\varepsilon}( x,y) =\inf_{\{ x_{i}\} \ \mathrm{is}\ \varepsilon\mbox{{-}}\mathrm{chain}}\sum_{i=1}^{N}d( x_{i},x_{i-1}), \end{equation} where the infimum is taken over all $\varepsilon$-chains $\{ x_{i}\} _{i=0}^{N}$ between $x,y$ with arbitrary~$N$. \end{definition} It is obvious that $d_{\varepsilon}( x,y) $ is a decreasing left-continuous function of $\varepsilon$ and \begin{equation}\label{de>} d_{\varepsilon}( x,y) \geq d( x,y) . \end{equation} Furthermore, \begin{equation} \label{eps>} \varepsilon>d( x,y) \quad\Rightarrow\quad d_{\varepsilon}( x,y) =d( x,y) . \end{equation} It is clear that $d_{\varepsilon}$ is an extended metric in the sense that d_{\varepsilon}$ satisfies all the axioms of a metric except for finiteness. If an $\varepsilon$-chain exists for any two points $x,y$, then d_{\varepsilon}( x,y) <\infty$, and hence $d_{\varepsilon}$ is a true metric. \begin{lemma} \label{LemN}If $0<d_{\varepsilon}( x,y) <\infty$ for some x,y\in M$ and $\varepsilon>0$, then there exists an $\varepsilon $-chain \{ x_{i}\} _{i=0}^{N}$ between $x,y$ such that \begin{equation}\label{N<} N\leq9\biggl\lceil\frac{d_{\varepsilon}( x,y) }{\varepsilon }\biggr\rceil. \end{equation} \end{lemma} Here \mbox{$\lceil\cdot\rceil$} stands for the least integer upper bound of the argument. It follows from (\ref{dedef}) by the triangle inequality that always \[ N\geq\biggl\lceil\frac{d_{\varepsilon}( x,y) }{\varepsilon }\biggr\rceil. \] Hence, denoting by $N_{\varepsilon}( x,y) $ the minimal value of $N$ for which there exists an $\varepsilon$ chain $\{ x_{i}\} _{i=0}^{N}$ between $x$ and $y$, we obtai \begin{equation} \label{Nede} N_{\varepsilon}( x,y) \simeq\biggl\lceil\frac{d_{\varepsilon }( x,y) }{\varepsilon}\biggr\rceil. \end{equation} The number $N_{\varepsilon}( x,y) $ can be also viewed as the minimal number in a~sequence of chained balls of radii $\varepsilon$ connecting $x$ and $y$.\vadjust{\goodbreak} \begin{pf*}{Proof of Lemma \protect\ref{LemN}} If $d_{\varepsilon}( x,y) <\varepsilon$, then also $d( x,y) <\varepsilon$, and hence $\{ x,y\} $ is an \varepsilon$-chain with $N=1$. Assume $d_{\varepsilon}( x,y) \geq\varepsilon$, and let $\{ x_{i}\} _{i=0}^{n}$ be a~ \varepsilon$-chain between $x,y$, such tha \begin{equation} \label{xixi+1} \sum_{i=1}^{n}d( x_{i},x_{i-1}) \leq2d_{\varepsilon }( x,y) , \end{equation} which exists by hypothesis. Set $r_{i}=d( x_{i},x_{i-1}) $. 
Then \ref{xixi+1}) implie \[ \#\{ i\dvtx r_{i}\geq\varepsilon/2\} \leq\frac {4d_{\varepsilon }( x,y) }{\varepsilon}, \] whenc \[ \#\{ i\dvtx r_{i}<\varepsilon/2\} \geq n-\frac {4d_{\varepsilon }( x,y) }{\varepsilon}. \] If $n>9\lceil\frac{d_{\varepsilon}( x,y) }{\varepsilon }\rceil$, then $n>9$ and $n>9\frac{d_{\varepsilon}( x,y) }{\varepsilon}$, whence it follows tha \[ \#\{ i\dvtx r_{i}<\varepsilon/2\} >\frac{5n}{9}>\frac{n+1}{2}. \] Hence, there is an index $i$ such that both $r_{i-1}$ and $r_{i}$ are smaller than $\varepsilon/2$. This implies that $d( x_{i-1},x_{i+1}) <\varepsilon$ so that by removing the point $x_{i}$ from the chain we still have an $\varepsilon$-chain. Continuing this way, we finally obtain an $\varepsilon$-chain satisfying~(\ref{N<}). \end{pf*} \subsection{Two-sided estimates of the heat kernel} If $x\neq y$, then it follows from~(\ref{de>}) and (\ref{Fb}) tha \begin{equation}\label{Fei} \frac{F( \varepsilon) }{\varepsilon}d_{\varepsilon }(x,y) \rightarrow\infty\qquad\mbox{as }\varepsilon\rightarrow \infty. \end{equation} In this section, we make an additional assumption that, for all $x,y\in M$ \begin{equation}\label{Fe} \frac{F( \varepsilon) }{\varepsilon}d_{\varepsilon }(x,y) \rightarrow0 \qquad\mbox{as }\varepsilon\rightarrow0. \end{equation} In particular, (\ref{Fe}) implies the finiteness of $d_{\varepsilon }$ for all $\varepsilon>0$. Define the function $\varepsilon( t,x,y) $ as follows \begin{equation} \label{epsdef} \varepsilon( t,x,y) =\sup\biggl\{ \varepsilon>0\dvtx\frac {F( \varepsilon) }{\varepsilon}d_{\varepsilon}( x,y) \leq t\biggr\}. \end{equation} If $x=y$, then $\varepsilon( t,x,x) =\infty$. If $x\neq y$, then it follows from (\ref{Fe}) and (\ref{Fei}) that $0<\varepsilon ( t,x,y) <\infty$. \begin{lemma} \label{LemFe}If (\ref{Fe}) is satisfied, then the function $ \varepsilon( t,x,y) $ satisfies the identit \begin{equation} \label{te} \frac{F( \varepsilon) }{\varepsilon}d_{\varepsilon }( x,y) =t \end{equation} for all distinct $x,y\in M$ and $t>0$. \end{lemma} \begin{pf} Since the function $F( \varepsilon) $ is continuous and d_{\varepsilon}( x,y) $ is left-contin\-uous in $\varepsilon$, we have \[ \frac{F( \varepsilon) }{\varepsilon}d_{\varepsilon }( x,y) \leq t. \] Assume from the contrary that \[ \frac{F( \varepsilon) }{\varepsilon}d_{\varepsilon }( x,y) <t, \] and note that, for any $\varepsilon^{\prime}>\varepsilon$, we have by definition of $\varepsilon$ tha \begin{equation} \label{eps} \frac{F( \varepsilon^{\prime}) }{\varepsilon^{\prime}} d_{\varepsilon^{\prime}}( x,y) >t. \end{equation} On the other hand, $d_{\varepsilon^{\prime}}( x,y) \leq d_{\varepsilon}( x,y) $ and \[ \frac{F( \varepsilon^{\prime}) }{\varepsilon^{\prime}} \rightarrow\frac{F( \varepsilon) }{\varepsilon} \qquad\mbox {as \varepsilon^{\prime}\rightarrow\varepsilon+. \] Hence \[ \limsup_{\varepsilon^{\prime}\rightarrow\varepsilon+}\frac{F( \varepsilon^{\prime}) }{\varepsilon^{\prime}}\leq\frac {F( \varepsilon) }{\varepsilon}d_{\varepsilon}( x,y) <t, \] which contradicts (\ref{eps}). \end{pf} \begin{theorem} \label{Ttwo}Let all metric balls be precompact. Let the hypotheses $\mbox{(\ref{VD})} + \mbox{(\ref{EFF})} +\mbox{(\ref{condH})} $ and (\ref {Fe}) be satisfied, and let $\varepsilon( t,x,y) $ be the function from (\ref{epsdef}). 
Then, for all $x,y\in M\setminus\mathcal{N}$ and t>0 $, we have \begin{equation}\label{two} p_{t}( x,y) \asymp\frac{C}{V(x,\mathcal{R}( t) )}\exp ( -c\Phi( cd_{\varepsilon}( x,y) ,t)) , \end{equation} where $\varepsilon=\varepsilon( \kappa t,x,y) $ and $\kappa=8$ for the upper bound in (\ref{two}) while $\kappa$ is a~small enough positive constant for the lower bound. \end{theorem} The proof of Theorem \ref{Ttwo} is preceded by a lemma. \begin{lemma} \label{Lemdee}For all distinct $x,y\in M$ and $t>0$, we hav \begin{equation}\label{dF} \Phi( cd_{\varepsilon}( x,y) ,t) \leq\frac d_{\varepsilon}( x,y) }{\varepsilon}\leq\Phi( Cd_{\varepsilon}( x,y) ,t) , \end{equation} where $\varepsilon=\varepsilon( t,x,y) $. \end{lemma} \begin{pf} Let us first show that, for all $\varepsilon>0$ and some $c\in( 0,1) $, \begin{equation} \label{FiF} \Phi\biggl( c\frac{\varepsilon}{F( \varepsilon) }\biggr) \leq \frac{1}{F( \varepsilon) }\leq\Phi\biggl( 2\frac {\varepsilon} F( \varepsilon) }\biggr) . \end{equation} By (\ref{Fidef1}), we have, for all $r>0$ \[ \Phi\biggl( \frac{2\varepsilon}{F( \varepsilon) }\biggr) \geq \frac{2\varepsilon}{F( \varepsilon) r}-\frac{1}{F( r) }. \] Choosing $r=\varepsilon$ we obtain the right-hand side inequality in (\ref{FiF}). By (\ref{Fidef1}), the left-hand side inequality in (\ref {FiF}) is equivalent t \[ \frac{c\varepsilon}{F( \varepsilon) r}-\frac{1}{F( r) }\leq\frac{1}{F( \varepsilon) } \qquad\mbox{for all }r>0, \] that is, t \[ \frac{F( \varepsilon) }{F( r) }\geq\frac c\varepsilon}{r}-1. \] If $r\geq\varepsilon$, then this is trivially satisfied provided $c\leq1$. If $r<\varepsilon$, then by (\ref{Fb}) we hav \[ \frac{F( \varepsilon) }{F( r) }\geq c \biggl( \frac \varepsilon}{r}\biggr) ^{\beta}\geq c\frac{\varepsilon}{r}, \] which proves the previous inequality and, hence, (\ref{FiF}). Putting in (\ref{FiF}) $\varepsilon=\varepsilon( t,x,y) $ and using $\frac{\varepsilon}{F( \varepsilon) }=\frac d_{\varepsilon}( x,y) }{t}$, which is true by Lemma \ref{LemFe , we obtai \[ \Phi\biggl( c\frac{d_{\varepsilon}( x,y) }{t}\biggr) \leq\frac d_{\varepsilon}( x,y) }{\varepsilon t}\leq\Phi\biggl( 2\frac d_{\varepsilon}( x,y) }{t}\biggr) , \] whence (\ref{dF}) follows. \end{pf} \begin{pf*}{Proof of Theorem \protect\ref{Ttwo}} If $x=y$, then $d_{\varepsilon}( x,y) =0$, and (\ref{two}) follows from Theorems \ref{TUE} and \ref{TNLE}. Assume in the sequel that $x\neq y$. Let us first prove the lower bound in (\ref{two}), that is \begin{equation}\label{twol} p_{t}( x,y) \geq\frac{c}{V(x,\mathcal{R}( t ) )}\exp ( -C\Phi( Cd_{\varepsilon}( x,y) ,t)) . \end{equation} By Theorem \ref{TNLE}, we hav {\renewcommand{\theequation}{\textit{NLE}} \begin{equation} p_{t}( x,y) \geq\frac{c}{V( x,\mathcal{R}( t) ) } \end{equation}} \vspace*{-8pt} \noindent provided \setcounter{equation}{15} \begin{equation} \label{detR} d( x,y) \leq\eta\mathcal{R}( t) \end{equation} for some $\eta>0$. Set $\varepsilon=\varepsilon( \kappa t,x,y) $ where $\kappa\in( 0,1) $ will be chosen later. Consider first the case $\varepsilon\geq d_{\varepsilon}( x,y) . By (\ref{dF}), we hav \[ \Phi( cd_{\varepsilon}( x,y) ,\kappa t) \leq1. \] Applying (\ref{tFi}) with $R=cd_{\varepsilon}( x,y) $, we obtai \[ F( cd_{\varepsilon}( x,y) ) \leq C\kappa t,\vadjust{\goodbreak} \] whence by (\ref{Rb} \[ d_{\varepsilon}( x,y) \leq c^{-1}\mathcal{R}( C\kappa t) \leq\eta\mathcal{R}( t) , \] provided $\kappa$ is small enough. 
Since $d( x,y) \leq d_{\varepsilon}( x,y) $, we see that the condition~(\ref{detR}) is satisfied and, hence, (\ref{twol}) follows from (\ref{NLE}). Assume now that $\varepsilon<d_{\varepsilon}( x,y) $. By Lemma \ref{LemN}, there is an $\varepsilon$-chain $\{ x_{i}\} _{i=1}^{N}$ connecting $x$ and $y$ and such tha \begin{equation}\label{Ne} N\simeq\frac{d_{\varepsilon}( x,y) }{\varepsilon}. \end{equation} By (\ref{semi}), we hav \begin{eqnarray} \label{chain} p_{t}( x,y) &=&\int_{M}\cdots\int _{M}p_{{t/N }(x,z_{1})p_{{t/N}}(z_{1},z_{2})\cdots\nonumber\\ &&\hspace*{41.5pt}{}\times p_{{t/N }(z_{N-1},y)\,dz_{1}\cdots dz_{N-1} \nonumber\\[-8pt]\\[-8pt] &\geq&\int_{B_{1}}\cdots\int_{B_{N-1}}p_{{t/N} }(z_{0},z_{1})p_{{t/N}}(z_{1},z_{2})\cdots\nonumber\\ &&\hspace*{54.8pt}{}\times p_{{t/N }(z_{N-1},z_{N})\,dz_{1}\cdots dz_{N-1},\nonumber \end{eqnarray} where $z_{0}=x$, $z_{N}=y$, $B_{i}=B( x_{i},\varepsilon) $. We will estimate $p_{t/N}( z_{i},z_{i+1}) $ from below by means of (\ref{NLE}). For that, we need to verify the conditio \[ d( z_{i},z_{i+1}) \leq\eta\mathcal{R}( t/N ) . \] By (\ref{te}), we have \begin{equation} \label{Tze} \frac{F( \varepsilon) }{\varepsilon}=\frac{\kappa t} d_{\varepsilon}( x,y) }. \end{equation} It follows from (\ref{Tze}) and (\ref{Ne}) tha \[ F( \varepsilon) \simeq\frac{\kappa t}{N}, \] whence by (\ref{Rb}) \[ \varepsilon\simeq\mathcal{R}\biggl( \frac{\kappa t}{N}\biggr) . \] Clearly, if $\kappa$ is small enough, the \begin{equation}\label{ee} \varepsilon\leq\frac{\eta}{3}\mathcal{R}\biggl( \frac{t}{N}\biggr) . \end{equation} Since in (\ref{chain}) $z_{i}\in B( x_{i},\varepsilon) $ and d( x_{i},x_{i+1}) \leq\varepsilon$, it follows from (\ref{ee}) tha \[ d( z_{i},z_{i+1}) \leq3\varepsilon\leq\eta\mathcal {R}( t/N) . \] Hence, by (\ref{NLE}) and (\ref{Va}) \[ p_{{t/N}}( z_{i},z_{i+1}) \geq\frac{c}{V( z_{i} \mathcal{R}( t/N) ) }\geq\frac{c}{V( x_{i},\mathcal{R ( t/N) ) }\geq\frac{c}{V( x_{i},\varepsilon ) }.\vadjust{\goodbreak} \] Therefore, (\ref{chain}) implie \begin{eqnarray}\label{4} p_{t}( x,y) &\geq&\frac{c}{V( x,\mathcal{R}( t/N) ) }\prod_{i=1}^{N-1}\frac{c}{V( x_{i},\varepsilon ) }V( x_{i},\varepsilon) \nonumber\vadjust{\goodbreak}\\ &\geq&\frac{c^{-N}}{V( x,\mathcal{R}( t/N) ) } \nonumber\\[-8pt]\\[-8pt] &\geq&\frac{\exp( -CN) }{V( x,\mathcal{R}( t) ) } \nonumber\\ &\geq&\frac{\exp( -C({d_{\varepsilon}( x,y) })/ \varepsilon}) }{V( x,\mathcal{R}( t) ) }.\nonumber \end{eqnarray} By Lemma \ref{Lemdee}, we hav \[ \frac{d_{\varepsilon}( x,y) }{\varepsilon}\leq\Phi ( Cd_{\varepsilon}( x,y) ,\kappa t) =\kappa\Phi \biggl( \frac{C}{\kappa}d_{\varepsilon}( x,y) ,t\biggr) . \] Substituting into (\ref{4}), we obtain (\ref{twol}). To prove the upper bound in (\ref{two}), we basically repeat the proof of Theorem~\ref{TUE} with $d$ being replaced by $d_{\varepsilon}$ for an appropriate $\varepsilon$. Fix some $\varepsilon>0$ and denote by B_{\varepsilon}( x,r) $ the ball in the metric $d_{\varepsilon} . It follows from (\ref{eps>}) that \begin{equation} \label{B=B} B_{\varepsilon}( x,r) =B( x,r) \qquad\mbox{for all r\leq\varepsilon, \end{equation} which allows to modify Lemma \ref{pf<exp} as follows: for all $x\in M\setminus\mathcal{N}_{0}$ and $R>0 \begin{equation} \label{Rr} \mathbb{E}_{x}\bigl( e^{-\lambda\tau_{B_{\varepsilon}(x,R)}} \bigr) \leq C\exp\biggl( -c\frac{R}{r}\biggr) , \end{equation} provided the values of parameters $r$ and $\lambda$ satisfy the condition \begin{equation}\label{rl} 0<r\leq\varepsilon\quad\mbox{and}\quad\lambda\geq\frac{\sigma}{ F( r) }. 
\end{equation} Indeed, (\ref{Rr}) is analogous to estimate (\ref{n}) from the proof of Lemma \ref{pf<exp} for $d$-balls, which was proved using $\lambda \geq \frac{\sigma}{F( r) }$. To repeat\vspace*{1pt} the proof for the metric d_{\varepsilon}$, we need the precompactness of $d_{\varepsilon}$-balls, that follows from (\ref{de>}), and the condition (\ref{EFF}) for d_{\varepsilon}$-balls of radii $\leq r$, that follows from (\ref{B=B}), provided $r\leq\varepsilon$. Consequently, the statement of Theorem \ref{TEF} is modified as follows: for all $x\in M\setminus\mathcal{N}_{0}$ and $R,t>0$, \begin{equation}\label{PG} \mathbb{P}_{x}\bigl( \tau_{B_{\varepsilon}( x,R) }\leq t\bigr) \leq C\exp( -c\Phi_{\varepsilon}( cR,t) ), \end{equation} where \begin{equation} \label{G} \Phi_{\varepsilon}( R,t) :=\sup_{0<r\leq\varepsilon }\biggl\{ \frac{R}{r}-\frac{t}{F( r) }\biggr\}. \end{equation} Indeed, arguing as in (\ref{3a}) and using (\ref{Rr}) we obtain that, under the assumptions of (\ref{rl}) \[ \mathbb{P}_{x}\bigl( \tau_{B_{\varepsilon}( x,R) }\leq t\bigr) \leq C\exp\biggl( -c\frac{R}{r}+\lambda t\biggr) . \] Setting here $\lambda=\sigma/F( r) $ yield \begin{equation} \label{g} \mathbb{P}_{x}\bigl( \tau_{B_{\varepsilon}( x,R) }\leq t\bigr) \leq C\exp\biggl( -\biggl( c\frac{R}{r}-\frac{\sigma t}{F( r) \biggr) \biggr) . \end{equation} Finally, minimizing the right-hand side of (\ref{g}) in $r\leq \varepsilon , we obtain (\ref{PG}). Let us show that if \begin{equation}\label{te2} t\leq\frac{1}{2}\frac{F( \varepsilon) }{\varepsilon}R, \end{equation} the \begin{equation}\label{p} \Phi( R,t) \leq2\Phi_{\varepsilon}( R,t) . \end{equation} We hav \[ \sup_{r>\varepsilon}\biggl\{ \frac{R}{r}-\frac{t}{F( r ) }\biggr\} \leq\frac{R}{\varepsilon}, \] wherea \[ \sup_{0<r\leq\varepsilon}\biggl\{ \frac{R}{r}-\frac{t}{F( r) \biggr\} \geq\frac{R}{\varepsilon}-\frac{t}{F( \varepsilon ) \geq\frac{R}{\varepsilon}-\frac{1}{2}\frac{R}{\varepsilon}=\frac {1}{2 \frac{R}{\varepsilon}. \] It follows tha \[ \Phi( R,t) =\sup_{r>0}\biggl\{ \frac{R}{r}-\frac {t}{F( r) }\biggr\} \leq2\sup_{0<r\leq\varepsilon}\biggl\{ \frac {R}{r} \frac{t}{F( r) }\biggr\}, \] which proves (\ref{p}). Hence, we can rewrite (\ref{PG}) in the form \begin{equation} \label{PG1} \mathbb{P}_{x}\bigl( \tau_{B_{\varepsilon}( x,R) }\leq t\bigr) \leq C\exp( -c\Phi( cR,t) ), \end{equation} provided the relation (\ref{te2}) between $\varepsilon,t,R$ is satisfied. As in the last part of the proof of Theorem \ref{TUE}, we apply (\ref{PG1}) with $R=\frac{1}{4}d_{\varepsilon}( x,y) $ for fixed $x,y\in M\setminus\mathcal{N}$. Note that in (\ref{VRR}) $d( x,z ) $ can be replaced by a larger value $d_{\varepsilon}( x,z) $. The rest of the argument goes through unchanged, and we obtain \begin{equation} \label{ptupe} p_{t}( x,y) \leq\frac{C}{V( x,\mathcal{R}( t) ) }\exp( -c\Phi( cd_{\varepsilon}( x,y) ,t) ) , \end{equation} provided \begin{equation} \label{18} t\leq\frac{1}{8}\frac{F( \varepsilon) }{\varepsilon d_{\varepsilon}( x,y) . \end{equation} By (\ref{te}), condition (\ref{18}) can be satisfied by setting \varepsilon=\varepsilon( 8t,x,y) $. 
\end{pf*} \begin{corollary} \label{Ctwo}Under the hypotheses of Theorem \ref{Ttwo}, we have \begin{eqnarray} \label{twod} p_{t}( x,y) &\asymp&\frac{C}{V(x,\mathcal{R}( t) ) \exp\biggl( -c\frac{d_{\varepsilon}( x,y) }{\varepsilon }\biggr) \\ \label{twoN} &\asymp&\frac{C}{V(x,\mathcal{R}( t) )}\exp( -cN_{\varepsilon}( x,y) ) , \end{eqnarray} where $\varepsilon=\varepsilon( \kappa t,x,y) $ and $\kappa$ is a large enough constant for the upper bound and a small enough positive constant for the lower bound. \end{corollary} \begin{pf} The lower bound in (\ref{twod}) follows from (\ref{4}), while the upper bound follows from (\ref{ptupe}) and \[ \frac{d_{\varepsilon}( x,y) }{\varepsilon}\leq\Phi ( Cd_{\varepsilon}( x,y) ,\kappa t) =\kappa\Phi \biggl( \frac{C}{\kappa}d_{\varepsilon}( x,y) ,t\biggr) , \] provided $\kappa$ is chosen large enough to ensure $C/\kappa\leq c$ where c$ is the constant from (\ref{ptupe}). Estimate (\ref{twoN}) follows then from (\ref{Nede}). \end{pf} \begin{remark} \label{ExHK}A good example to illustrate Theorem \ref{Ttwo} and Corollary~\ref{Ctwo} is the class of post critically finite (p.c.f.) fractals, where the heat kernel estimate (\ref{twoN}) was proved by Hambly and Kumagai \cite{HamblyKum}. Without going into the details of \cite{HamblyKum}, let us mention that $d( x,y) $ is the resistance metric on such a fractal $M$, and $\mu$ is the Hausdorff measure of $M$ of dimension $\alpha :=\dim_{H}M$. One has in this setting $V( x,r) \simeq r^{\alpha } $, in particular,~(\ref{VD}) is satisfied. Hambly and Kumagai proved that (\ref{EFF}) is satisfied with $F( r) =r^{\beta}$ where $\beta=\alpha+1$; cf. \cite{HamblyKum}, Theorem 3.8. Condition (\ref{Fe}) follows from the estimat \begin{equation} \label{Ne<} N_{\varepsilon}( x,y) \leq C\biggl( \frac{d( x,y) } \varepsilon}\biggr) ^{\gamma}, \end{equation} proved in \cite{HamblyKum}, Lemma 3.3, with $\gamma=\beta/2$, as (\ref{Ne< ) implies tha \[ \frac{F( \varepsilon) }{\varepsilon}d_{\varepsilon }( x,y) \leq C\varepsilon^{\beta}N_{\varepsilon}( x,y) \leq Cd( x,y) ^{\gamma}\varepsilon^{\beta-\gamma }\rightarrow0 \qquad\mbox{as }\varepsilon\rightarrow0. \] The Harnack inequality~(\ref{condH}) on p.c.f. fractals was proved by Kigami~\cite{Kigamibook}. Hence, Corollary~\ref{Ctwo} applies and gives on unbounded p.c.f. fractals estimate~(\ref{twoN}). The same estimate was proved in~\cite{HamblyKum}, Theorem 1.1, for bounded p.c.f. fractals using a different method. Note that (\ref{VD}) implies (\ref{Ne<}) with $\gamma =\alpha$ [where $\alpha$ comes from (\ref{Va})], provided all balls in $M$ are connected. Indeed, (\ref{VD}) implies by the classical ball covering argument that any ball of radius $r$ can be covered by at most~$C( \frac{r}{\varepsilon}) ^{\alpha}$ balls of radii $\varepsilon\in( 0,r)$. Consequently, any point $y\in B(x,r) $ can be connected to $x$ by a chain of $\varepsilon$-balls containing at most $C( \frac{r}{\varepsilon}) ^{\alpha}$ balls. Taking $r\simeq d( x,y) $ we obtain (\ref{Ne<}) with $\gamma =\alpha$. Therefore, hypothesis~(\ref{Fe}) is satisfied automatically for $F( r) =r^{\beta}$ with $\beta>\alpha$.\vadjust{\goodbreak} Estimate (\ref{twoN}) means that the diffusion process goes from $x$ to y$ in time~$t$ in the following way. 
The process first ``computes'' the value\footnote For example, in the above setting, when (\ref{Ne<}) is satisfied with $ \gamma<\beta$, we obtain from~(\ref{te}) $\varepsilon^{\beta }N_{\varepsilon}\simeq t$ whenc \[ \varepsilon^{\beta}\biggl( \frac{d( x,y) }{\varepsilon }\biggr) ^{\gamma}\geq ct \] an \begin{eqnarray*} \varepsilon\geq c\biggl( \frac{t}{d( x,y) ^{\gamma }}\biggr) ^ {1}/({\beta-\gamma})}.\\[-25pt] \end{eqnarray*}} of $\varepsilon$ as a~function of $t,x,y$, then ``detects'' a shortest chain of $\varepsilon$-balls connecting $x$ and $y$ and finally goes along that chain (see Figure \ref{pic10}). \begin{figure} \includegraphics{645f08.eps} \caption{Two shortest chains of $\protect\varepsilon$-balls for two distinct values of $\protect \varepsilon $ provide different routes for the diffusion from $x$ to $y$ for two distinct values of time $t$.}\label{pic10} \end{figure} This phenomenon was first observed by Hambly and Kumagai on p.c.f. fractals, but it seems to be generic. Hence, to obtain matching upper and lower bounds, one needs, in addition to the usual hypotheses, also the information encoded in the function $N_{\varepsilon}( x,y) $, namely, the graph distance between~$x$ and $y$ on any $\varepsilon$-net approximation of $M$. \end{remark} \subsection{Chain condition} \label{SecChain} The statement of Theorem \ref{Ttwo} can be simplified if the metric space $( M,d) $ possesses an additional property as follows. \begin{definition} We say that a metric space $( M,d) $ satisfies the \textit chain condition} if there exists a constant $C\geq1$ such that, for any positive integer~$n$ and for all $x,y\in M$, there is a sequence $ \{ x_{k}\} _{k=0}^{n}$ of points in $M$ such that $x_{0}=x$, $x_{n}=y$ an \[ d( x_{k-1},x_{k}) \leq C\frac{d( x,y) }{n} \qquad\mbox{for all }k=1,\ldots,n. \] \end{definition} For example, any geodesic metric satisfies the chain condition. \begin{lemma} \label{Lemde}If $( M,d) $ satisfies the chain condition, then d_{\varepsilon}\leq Cd$ for any $\varepsilon>0$. \end{lemma} \begin{pf} Indeed, fix $\varepsilon>0$ and two distinct points $x,y\in M$, and choose n$ so big that $C\frac{d( x,y) }{n}<\varepsilon$. Let $\{ x_{k}\} _{k=0}^{n}$ be a sequence from the chain condition. Then it is also an $\varepsilon$-chain, whenc \[ d_{\varepsilon}( x,y) \leq\sum_{k=1}^{n}d( x_{k-1},x_{k}) \leq Cd( x,y) , \] which was to be proved. \end{pf} \begin{corollary} \label{Cortwo}Let the metric space $( M,d) $ satisfy the chain condition, and let all metric balls be precompact. If the hypotheses $\mbox{(\ref{VD})} + \mbox{(\ref{EFF})} +\mbox{(\ref{condH})}$ are satisfied, then, for all $x,y\in M\setminus\mathcal{N}$ and $t>0$, \begin{equation}\label{twosided} p_{t}( x,y) \asymp\frac{C}{V(x,\mathcal{R}( t) )}\exp \biggl( -ct\Phi\biggl( c\frac{d( x,y) }{t}\biggr)\biggr) , \end{equation} where $\Phi( s) $ is defined by (\ref{Fidef1}). \end{corollary} \begin{pf} Since by Lemma \ref{Lemde} $d_{\varepsilon}\leq Cd$, condition (\ref {Fe}) is obviously satisfied. Since $d_{\varepsilon}\simeq d$, we can replace in (\ref{two}) $d_{\varepsilon}$ by $d$, which together with~(\ref{FiFi}) yields (\ref{twosided}). \end{pf} \begin{remark} \label{RemNLE} $\!\!\!$Obviously, estimate (\ref{twosided}) (should it be true) implies~(\ref{UEE}). We claim that (\ref{twosided}) implies also (\ref{NLE}); moreover, the parameter $\eta$ in~(\ref{NLE}) can be chosen to be arbitrarily large, say $\eta>1$. 
Indeed, we need to show that i \[ d( x,y) \leq\eta\mathcal{R}( t), \] where $\eta$ is a (large) given constant, the \[ t\Phi\biggl( c\frac{d( x,y) }{t}\biggr) \leq\func{const}, \] which amounts t \begin{equation} \label{txi} \Phi\biggl( \eta\frac{\mathcal{R}( t) }{t}\biggr) \leq\frac \func{const}}{t}, \end{equation} where we have renamed $c\eta$ to $\eta$. Indeed, by (\ref{Fidef1}) we hav \[ \Phi( s) =\sup_{\xi>0}\biggl\{ \frac{s}{\mathcal {R}( \xi ) }-\frac{1}{\xi}\biggr\} \] so that (\ref{txi}) is equivalent t \begin{equation} \label{t/xi} \frac{\eta\mathcal{R}( t) }{\mathcal{R}( \xi ) }\leq \frac{t}{\xi}+\func{const.}\vadjust{\goodbreak} \end{equation} If $\xi\leq t$, then by (\ref{Rb} \[ \frac{\mathcal{R}( t) }{\mathcal{R}( \xi) }\leq C\biggl( \frac{t}{\xi}\biggr) ^{1/\beta}. \] If the ratio $\frac{t}{\xi}$ is large enough then, using $1/\beta <1$, we obtain that \[ \eta C\biggl( \frac{t}{\xi}\biggr) ^{1/\beta}\leq\frac{t}{\xi}, \] whence (\ref{t/xi}) follows. If $\frac{t}{\xi}$ is bounded by a constant, say $\frac{t}{\xi}\leq C^{\prime}$ (which includes also the case $\xi>t$), then by (\ref{Rb} \[ \frac{\eta\mathcal{R}( t) }{\mathcal{R}( \xi ) }\leq \eta\frac{\mathcal{R}( C^{\prime}\xi) }{\mathcal {R}( \xi ) }\leq\func{const}, \] whence (\ref{txi}) follows again. \end{remark} \section{Consequences of heat kernel bounds} \label{Secconv} \subsection{Harmonic function and the Dirichlet problem} \label{SecHarm} We assume only basic hypotheses in this subsection. Moreover, we use neither the locality of $( \mathcal{E},\mathcal{F} ) $ nor the existence of the process $\{ X_{t}\} $. We state and prove some basic properties of the Dirichlet problem in the abstract setting, that will be used in the proof of Theorem~\ref{Tconv}. Fix an open set $\Omega\subset M$ such that $\lambda_{\min}( \Omega ) >0$, and consider the following weak Dirichlet problem in $\Omega$: given a function $f\in\mathcal{F}$, find a function $u\in\mathcal {F}$ such tha \begin{equation} \label{D} \cases{ u\mbox{ is harmonic in }\Omega, \cr u=f \func{mod}\mathcal{F}( \Omega) , \end{equation} where the second condition is a weak boundary condition and means that $ u-f\in\mathcal{F}( \Omega) $. \begin{lemma} \label{LemDir1} \textup{(a)} For any $f\in\mathcal{F}$, problem (\ref{D}) has a unique solution $u$. \textup{(b)} If $u$ solves (\ref{D}) and $w\in \mathcal{F}$ is another function such that $w=f$ $\func{mod}\mathcal{F}( \Omega ), $ then $\mathcal{E}( u) \leq\mathcal{E}( w ) $. Moreover, the identity $\mathcal{E}( u) =\mathcal {E}( w) $ holds if and only if $u=w$. \end{lemma} \begin{pf} (a) The condition $\lambda_{\min}( \Omega ) >0$ implies tha \[ \mathcal{E}( \varphi) \simeq\mathcal{E}( \varphi ) +\Vert\varphi\Vert_{2}^{2} \] for all $\varphi\in\mathcal{F}( \Omega) $. Hence, $\mathcal{F ( \Omega) $ is a Hilbert space also with respect to the inner product $\mathcal{E}( \varphi,\psi) $. The harmonicity of $u$ in (\ref{D}) means that \begin{equation} \label{ufi} \mathcal{E}( u,\varphi) =0 \qquad\mbox{for all }\varphi\in \mathcal{ }( \Omega) . 
\end{equation} Equivalently, this means for the function $v=f-u$ tha \begin{equation}\label{vfi} \mathcal{E}( v,\varphi) =\mathcal{E}( f,\varphi ) \qquad\mbox{for all }\varphi\in\mathcal{F}( \Omega) .\vadjust{\goodbreak} \end{equation} Since $\mathcal{E}( f,\varphi) \leq\mathcal{E}( f) ^{1/2}\mathcal{E}( \varphi) ^{1/2}$, the functional $\varphi \mapsto\mathcal{E}( f,\varphi) $ is a bounded linear functional in $\mathcal{F}( \Omega) $, and equation (\ref{vfi}) has a unique solution $v\in\mathcal{F}( \Omega) $ by the Riesz representation theorem. Then $u=f-v$ is a unique solution of~(\ref{D}). (b) Setting $\varphi=w-u$ and noticing that $\varphi \in \mathcal{F}( \Omega) $, we obtain using (\ref{ufi} \[ \mathcal{E}( w) =\mathcal{E}( u+\varphi) =\mathcal{E ( u) +2\mathcal{E}( u,\varphi) +\mathcal {E}( \varphi) =\mathcal{E}( u) +\mathcal{E}( \varphi ) . \] Hence, $\mathcal{E}( u) \leq\mathcal{E}( w ) $, and the equality is attained when $\mathcal{E}( \varphi) =0$, that is, when $\varphi=0$. \end{pf} In what follows, denote by $R$ the resolvent operator of (\ref{D}), that is, $u=Rf$. Obviously, $R$ is a linear operator in $\mathcal{F}$. Since by Lemma \ref{LemDir1} $\mathcal{E}( Rf) \leq\mathcal{E}( f) , we see that the norm of the operator $R$ in $\mathcal{F}$ is bounded by $1$. \begin{lemma} \label{LemDir2} \textup{(a)} If $f\leq g$, then $Rf\leq Rg$. In particular, if $f\geq0$, then $Rf\geq0$. \textup{(b)} If $0\leq f\leq1$, then also $0\leq Rf\leq1$. \textup{(c)} If $\{ f_{n}\} _{n=1}^{\infty}$ is an increasing sequence from $\mathcal{F}$ and $f_{n}\stackrel{\mathcal {F}} \rightarrow}f$ as $n\rightarrow\infty$, then $Rf_{n}\rightarrow Rf$ a.e. in $\Omega$ as $n\rightarrow\infty$. \end{lemma} \begin{pf} (a) The function $u=Rf-Rg$ is harmonic in $B$ and satisfies the boundary condition $u\leq0$ $\func{mod}\mathcal{F}( \Omega ) $. By \cite{GrigHu}, Lemma 4.4, the latter condition implies $u_{+}\in \mathcal{F}( \Omega) $. Substituting $\varphi=u_{+}$ into (\ref{ufi}), we obtain $\mathcal{E}( u,u_{+}) =0$. On the other hand, by \cite{GrigHu}, Lemma 4.3, $\mathcal{E}( u,u_{+}) \geq \mathcal{ }( u_{+}) $, whence it follows that $\mathcal{E}( u_{+}) =0$ and, hence, $u_{+}=0$. Consequently, $u\leq0$ and $Rf\leq Rg$. (b) Set $u=Rf$ and $w=u\wedge1$ so that $u,w\in \mathcal{F}$ and $\mathcal{E}( w) \leq\mathcal{E}( u) $. Setting \varphi=u-f$ and $\psi=w-f$, we see that $\varphi\in\mathcal {F}( \Omega) $, $\psi\in\mathcal{F}$ and $\psi\leq \varphi$. By~\cite{GrigHu}, Lemma 4.4, we conclude that $\psi_{+}\in\mathcal {F ( \Omega) $. On the other hand, we have $\psi _{-}=\varphi _{-}\in\mathcal{F}( \Omega) $ whence $\psi\in\mathcal {F ( \Omega) $. It follows that $w=f \func{mod}\mathcal {F}( \Omega) $. By Lemma~\ref{LemDir1} we conclude that $\mathcal {E}( u) \leq\mathcal{E}( w) $. Since the opposite inequality is true by the definition of a Dirichlet form, we see that $\mathcal {E}( w) =\mathcal{E}( u) $. It follows from Lemma \ref{LemDir1} that $w=u$, which implies $u\leq1$. (c) Since\vspace*{-1pt} $R$ is a bounded operator in $\mathcal{F}$, we see that $Rf_{n}\stackrel{\mathcal{F}}{\rightarrow}Rf$ as $n\rightarrow \infty . It follows that also $Rf_{n}\stackrel{L^{2}( \Omega) }{ \rightarrow}Rf$. Then\vspace*{1pt} there is a subsequence of $\{ Rf_{n} \} $ that converges to $Rf$ almost everywhere in $\Omega$. Finally, since the sequence $\{ Rf_{n}\} $ is monotone increasing, the entire sequence $\{ Rf_{n}\} $ also converges to~$Rf$ almost everywhere in $\Omega$. 
\end{pf} \subsection{Some consequences of the main hypotheses} \label{SecRVD} The next lemma states useful consequences of the main hypotheses and motivates the statement of Theorem \ref{Tconv} below. It is also used in the proof of Corollary \ref{CorUE}.\vadjust{\goodbreak} \begin{lemma} \label{Lemconv}Let all metric balls be precompact. Then the following implications are true: \begin{longlist}[(a)] \item[(a)] (\ref{condH}) implies that the metric space ( M,d) $ is connected; \item[(b)] (\ref{EFF}) implies that the Dirichlet form $( \mathcal{E},\mathcal{F}) $ is conservative; \item[(c)] (\ref{EFF}) implies that $\func{diam M=\infty$. \end{longlist} \end{lemma} \begin{pf} (a) Assume that $( M,d) $ is disconnected, and let \Omega$ be a nonempty open subset of $M$ such that $\Omega^{c}$ is also nonempty and open. There is a big enough ball $B\subset M$ such that the intersections of $\delta B$ both with $\Omega$ and $\Omega^{c}$ are nonempty, where $\delta$ is the parameter from (\ref{condH}). Since \overline{B}\cap\Omega$ is a compact set, there is a cutoff function $u$ of $\overline{B}\cap\Omega$ in $\Omega$; that is, $u\in\mathcal {F}\cap C_{0}( \Omega) $ and $u\equiv1$ in a neighborhood of $\overline B}\cap\Omega$. Obviously, $u\equiv0$ in $\Omega^{c}$. We claim that $u$ is harmonic in $B$. Indeed, for every function $v\in\mathcal{F}\cap C_{0}( B) $, we have $uv\in\mathcal{F}\cap C_{0}( B\cap \Omega) $ an \[ \mathcal{E}( u,v) =\mathcal{E}( u,uv) +\mathcal{E ( u,v-uv) . \] Since $\limfunc{supp}( uv) \subset\overline{B}\cap \Omega$ and, hence, $u\equiv1$ in a neighborhood of $\limfunc{supp}( uv) $, we obtain by the strong locality of $( \mathcal{E},\mathcal {F}) $ that $\mathcal{E}( u,uv) =0$. Since \[ \limfunc{supp}\bigl( v( 1-u) \bigr) \subset\overline {B}\cap ( \overline{B}\cap\Omega) ^{c}=\overline{B}\cap\Omega^{c} \] and $u=0$ in $\Omega^{c}$, it follows that $\mathcal{E}( u,v-uv) =0$. Hence $\mathcal{E}( u,v) =0$ and $u$ is a nonnegative harmonic function in $B$. However, the function $u$ does not satisfy (\ref{condH}) because $u$ takes in $\delta B$ the values $1$ and $0$. (b) By Corollary \ref{CEF}, (\ref{EFF}) implies tha \[ \mathbb{P}_{x}\bigl( \tau_{B( x,R) }\leq t\bigr) \leq C\exp \biggl( -c\biggl( \frac{F( R) }{t}\biggr) ^{ {1}/({\beta^{\prime }-1})}\biggr) \] for any $x\in M\setminus\mathcal{N}_{0}$, $R>0$, $t>0$. Using this estimate and (\ref{PtOm}), we obtai \begin{eqnarray*} \mathcal{P}_{t}1( x) &\geq&\mathcal{P}_{t}^{B( x,R) }1( x) \\ &=&\mathbb{P}_{x}\bigl( \tau_{B( x,R) }>t\bigr) \\ &\geq&1-C\exp\biggl( -c\biggl( \frac{F( R) }{t}\biggr) ^{{1}/( \beta^{\prime}-1})}\biggr) . \end{eqnarray*} As $R\rightarrow\infty$, we see that $\mathcal{P}_{t}1( x) \geq 1$, which proves the stochastic completeness. (c) If $\func{diam}M=R<\infty, $ then $M=B_{R}$ so that the exit time from $B_{R}$ is~$\infty$ and $({E}_{F}\mbox{$\leq$}) $ fails. \end{pf} \subsection{The converse theorem} In the next statement, we use weaker versions of (\ref{UEE}) and (\ref{NLE}) that will be denoted by $(\mathit{UE}_{\mathrm{weak}} ) $ and (\mathit{NLE}_{\mathrm{weak}}) $. Namely, in each of these conditions we assume that the heat kernel exists as a measurable integral kernel of the heat semigroup $\{ P_{t}\} $ and satisfies the estimates (\ref{UEE}) and (\ref{NLE})\vadjust{\goodbreak} for all $t>0$ and for \textit {almost} all $x,y\in M$. Note that unlike the conditions (\ref{UEE}) and (\ref{NLE}), their weak versions do not use the diffusion process $\{ X_{t}\} $. 
\begin{theorem} \label{Tconv}Assume that all metric balls are precompact and\break $\func {diam M=\infty$. Then the following sets of conditions are equivalent: \begin{longlist} \item $\mbox{(\ref{VD})} + \mbox{(\ref{condH})} + \mbox{(\ref{EFF})}$; \item $\mbox{(\ref{VD})} + \mbox{(\ref{UEE})} +\mbox{(\ref{NLE})}$, and the heat kernel is H\"{o}lder continuous outside a properly exceptional set; \item $\mbox{(\ref{VD})} +( \mathit{UE}_{\mathrm{weak}}) +(\mathit{NLE}_{\mathrm{weak}}) $. \end{longlist} \end{theorem} Note that, by Lemma \ref{Lemconv}, (i) implies that $\func diam}M=\infty$. However, neither of conditions (ii) or (iii) implies that $M$ is unbounded because (ii) is satisfied on any compact Riemannian manifold. \begin{pf*}{Proof of Theorem \ref{Tconv}} The implication $\mbox{(i)} \Rightarrow\mbox{(ii)}$ is contained in Theorem \ref{Tmain}, and the implication $\mbox{(ii)} \Rightarrow \mbox{(iii)}$ is trivial. In what follows we prove the implication $\mbox{(iii)} \Rightarrow \mbox{(i)}$. Assuming (iii), let us first show that $M$ is connected. Indeed, let $M$ split into a disjoint union of two nonempty open sets $ \Omega_{1}$ and $\Omega_{2}$. By the continuity of the paths of $\{ X_{t}\} $, we have $p_{t}( x,y) =0$ for all $t>0$ and $x\in \Omega_{1}\setminus\mathcal{N}$, $y\in\Omega_{2}\setminus\mathcal{N}$, whereas by (\ref{NLE}) we have $p_{t}( x,y) >0$ whenever $t>\eta^{-1}d( x,y) $. This contradictions proves the connectedness of $M$. By \cite{GrigHuUpper}, Corollary 5.3,~(\ref{VD}), the connectedness, and the unboundedness of $M$ imply the \textit reverse volume doubling} (\ref{RVD}); that is, the following inequality holds:\label{condRVD {\renewcommand{\theequation}{\textit{RVD}} \begin{equation}\label{RVD} \frac{V( x,R) }{V( x,r) }\geq c\biggl( \frac {R}{r \biggr) ^{\alpha^{\prime}}, \end{equation}} \vspace*{-8pt} \noindent which holds for all $x\in M$, $0<r\leq R$, with some positive constants c,\alpha^{\prime}$. By~\cite{GrigHuUpper}, Theorem 2.2 and Section 6.4 (see also \cite{KigamiNash}), $\mbox{(\ref{VD})} + \mbox{(\ref{RVD})} +(\mathit{UE}_{\mathrm{weak}}) $ imply $({E}_{F}\mbox{$\leq$}) $.\footnote Note that (\ref{RVD}) is essential for $({E}_{F}\mbox{$\leq$} ) $ (see \cite{GrigHuUpper}, Theorem 2.2). In fact, it was shown in \cite {GrigHuUpper} and \cite{KigamiNash} that $\mbox{(\ref{VD})} + \mbox {(\ref{RVD})} +(\mathit{UE}_{\mathrm{weak}}) $ imply also $({E}_{F}\mbox{$\geq$} ) $ provided the Dirichlet form is conservative. In our setting the conservativeness of the Dirichlet form can also be proved but a direct proof of $({E}_{F}\mbox{$\geq$}) $ is shorter.} Let us now prove $({E}_{F}\mbox{$\geq$}) $, that is, \setcounter{equation}{3} \begin{equation} \label{PF} \int_{0}^{\infty}\mathcal{P}_{t}^{B( x,R) }1( x) \,dt\geq cF( R) \end{equation} for all $x\in M\setminus\mathcal{N}$ and $R>0$, where $\mathcal{N}$ is a properly exceptional set. It suffices to prove that there is a constant \zeta>0$ such that, for any ball $B=B( x_{0},R) $ \begin{equation} \label{PBB} \int_{0}^{\infty}P_{t}^{B}1\,dt\geq cF( R) \qquad\mbox{a.e. in }\zeta B. \end{equation} Indeed, the functio \[ u=\int_{0}^{\infty}\mathcal{P}_{t}^{B}1\,dt=G^{B}1 \] is quasi-continuous by \cite{FOT}, Theorem 4.2.3.\label{remcheckthisinFOT} By \cite{GrigHuUpper}, Proposition 6.1, if $u( x) \geq a$ for almost all $x\in\Omega$, where $a$ is a constant and $\Omega$ is an open set, then $u( x) \geq a$ for all $x\in\Omega \setminus \mathcal{N}$ where $\mathcal{N}$ is a properly exceptional set. 
Hence,~(\ref{PBB}) implies tha \begin{equation} \label{PFN} \int_{0}^{\infty}\mathcal{P}_{t}^{B}1( x) \,dt\geq cF( R) \qquad\mbox{for all }x\in\zeta B\setminus\mathcal{N} \end{equation} for some properly exceptional set $\mathcal{N}=\mathcal{N}_{B}$. Taking the union of such sets $\mathcal{N}_{B}$ where $B$ varies over a countable family $S$ of all balls with rational radii and whose centers form a dense subset of $M$, we obtain a properly exceptional set $\mathcal{N}$ such that \ref{PFN}) holds for any ball $B\in S$. Approximating any ball $B$ from inside by balls of the family $S$, we obtain (\ref{PFN}) for all balls, which implies (\ref{PF}). Now let us prove (\ref{PBB}). By the comparison principle of \cite{GrigHuUpper}, Proposition~4.7 (see also \cite{GrigHu}, Lemma 4.18), we have, for any nonnegative function $f\in L^{2}\cap L^{\infty}( B) $, \begin{equation}\label{eqs-2} P_{t}f( x) \leq P_{t}^{B}f( x) +\sup_{s\in (0,t] \limfunc{esup}_{y\in B\setminus({1/2})B}P_{s}f( y) \end{equation} for almost all $x\in B$. Let $\zeta$ be a small positive constant to be specified below, and set $f=\mathbf{1}_{\zeta B}$. It follows from $(\mathit{NLE}_{\mathrm{weak}}) $ and (\ref{Va}) tha \begin{equation} \label{infp} p_{t}( x,z) \geq\frac{c}{V( x_{0},\mathcal {R}( t) ) }\qquad\mbox{for a.a. }x,z\in B\biggl(x_{0},\frac {1}{2}\eta \mathcal{R}( t) \biggr), \end{equation} provided $0<t\leq\varepsilon F( R) $. The initial value of \varepsilon$ is given by the condition $(\mathit{NLE}_{\mathrm{weak}}) $ but we are going to further reduce this value of $\varepsilon$ in the course of the proof. Assume that $t$ varies in the following interval \begin{equation} \label{tR} \tfrac{1}{2}\varepsilon F( R) \leq t\leq\varepsilon F(R) . \end{equation} The left-hand side inequality in (\ref{tR}) implies by (\ref{Fb}) that \begin{equation} \label{Rt} R\leq C\biggl( \frac{1}{\varepsilon}\biggr) ^{1/\beta}\mathcal {R}( t) . \end{equation} Chose $\zeta$ from the identit \begin{equation}\label{zeta} \zeta C\biggl( \frac{1}{\varepsilon}\biggr) ^{1/\beta}=\frac {1}{2}\eta \end{equation} so that (\ref{Rt}) implie \[ B( x_{0},\zeta R) \subset B\bigl(x_{0},\tfrac{1}{2}\eta \mathcal{R ( t) \bigr). \] Integrating (\ref{infp}) over $B( x_{0},\zeta R) $ and using (\ref{VD}) and (\ref{zeta}), we obtai \begin{eqnarray} \label{eqs-4} P_{t}f( x) &=&\int_{B( x_{0},\zeta R) }p_{t}( x,z) \,d\mu( z) \nonumber\\ &\geq&\frac{cV( x_{0},\zeta R) }{V( x_{0},\mathcal {R}( t) ) } \nonumber\\[-8pt]\\[-8pt] &\geq&c\zeta^{\alpha} \nonumber\\ &=&c^{\prime}\varepsilon^{\alpha/\beta}\nonumber \end{eqnarray} for almost all $x\in B( x_{0},\zeta R) $. On the other hand, for almost all $y\in B\setminus\frac{1}{2}B$, we have by $( \mathit{UE}_{\mathrm{weak}}) $ and Lemma \ref{LemtiFi} \begin{eqnarray*} P_{s}f( y) &=&\int_{B( x_{0},\zeta R) }p_{s}( y,z) \,d\mu( z) \\ &\leq&C\frac{V( x_{0},R) }{V( y,\mathcal{R} ( s) ) }\exp\biggl( -c\biggl( \frac{F( R) }{s}\biggr) ^{{1}/( \beta^{\prime}-1})}\biggr) , \end{eqnarray*} where we have used that $d( y,z) \simeq R$ and $s\leq t<F( R) $. Using (\ref{Va}) and~(\ref{Fb}) we obtai \[ \frac{V( x_{0},R) }{V( y,\mathcal{R}( s) ) \leq C\biggl( \frac{R}{\mathcal{R}( s) }\biggr) ^{\alpha}\leq C^{\prime}\biggl( \frac{F( R) }{s}\biggr) ^{\alpha /\beta}. 
\] Finally, it follows from (\ref{tR}) and $s\leq t$ that $\frac{F( R) }{s}\geq\frac{1}{\varepsilon}$ whenc \begin{equation} \label{eqs-3} P_{s}f( y) \leq C\biggl( \frac{1}{\varepsilon}\biggr) ^{\alpha /\beta}\exp\biggl( -c\biggl( \frac{1}{\varepsilon}\biggr) ^{ {1}/({\beta ^{\prime}-1})}\biggr) \end{equation} for almost all $y\in B\setminus\frac{1}{2}B$. Combining (\ref{eqs-2}), (\ref{eqs-4}) and (\ref{eqs-3}), we obtain, for almost all $x\in B( x_{0},\zeta R) $ \begin{eqnarray*} P_{t}^{B}f( x) &\geq&P_{t}f( x) -\sup_{s\in (0,t] \limfunc{esup}_{B\setminus K}P_{s}f \\ &\geq&c^{\prime}\varepsilon^{\alpha/\beta}-C\biggl( \frac {1}{\varepsilon }\biggr) ^{\alpha/\beta}\exp\biggl( -c\biggl( \frac{1}{\varepsilon }\biggr) ^{{1}/({\beta^{\prime}-1})}\biggr) \\ &\geq&\frac{1}{2}c{^{\prime}}\varepsilon^{\alpha/\beta}, \end{eqnarray*} provided $\varepsilon$ is chosen small enough. The path $t\mapsto P_{t}^{B}f $ is a continuous path in $L^{2}( B) $ and, hence, can be integrated in $t$. It follows from the previous inequality tha \[ \int_{0}^{\infty}P_{t}^{B}1\,dt\geq\int_{({1/2})\varepsilon F( R) }^{\varepsilon F( R) }P_{t}^{B}f \,dt\geq c\varepsilon ^{\alpha/\beta+1}F( R) , \] which finishes the proof of $({E}_{F}\mbox{$\geq$}) $.\vadjust{\goodbreak} We are left to prove that $\mbox{(iii)} \Rightarrow\mbox{(\ref{condH})}$. By \cite{BarGrigKumHar}, Theorem 3.1 (see also \cite{FS} and \cite{HebSChar}, Theorem 5.3), $\mbox{(\ref {VD})} +( \mathit{UE}_{\mathrm{weak}}) +(\mathit{NLE}_{\mathrm{weak}}) $ imply the \textit {parabolic} Harnack inequality for bounded caloric function and, hence, the Harnack inequality (\ref{condH}) for bounded harmonic functions (note that this result uses the precompactness of the balls). We still have to obtain (\ref{condH}) for all nonnegative harmonic functions. Note that by \cite{GrigHuUpper}, Theorem 2.1, \[ \mbox{(\ref{VD})} + \mbox{(\ref{RVD})} +(\mathit{UE}_{\mathrm{weak}}) \Rightarrow \mbox{(\ref{FK})}. \] In particular, for any ball $B$, we have $\lambda_{\min}( B) >0 . Given a function $u\in\mathcal{F}$ that is nonnegative and harmonic in a ball $B\subset M$, set $f_{n}=u\wedge n$ for any $n\in\mathbb{N}$, and denote by $u_{n}$ the solution of the Dirichlet proble \[ \cases{ u_{n}\mbox{ is harmonic in }B, \cr u_{n}=f_{n} \func{mod}\mathcal{F}( B);} \] cf. Section \ref{SecHarm}. Since $0\leq f_{n}\leq n$, we have also $0\leq u_{n}\leq n$. Since the sequence~$\{ f_{n}\} $ increases and f_{n}\stackrel{\mathcal{F}}{\rightarrow}u$ (cf. \cite{FOT}, Theorem 1.4.2), it follows by Lemma~\ref{LemDir2} that $u_{n}\rightarrow u$ almost everywhere in $B$. Each function $u_{n}$ is bounded and, hence, satisfies the Harnack inequality in $B$, that is \[ \limfunc{esup}_{\delta B}u_{n}\leq C\limfunc{einf}_{\delta B}u_{n}. \] Replacing in the right-hand side $u_{n}$ by a larger function $u$ and passing to the limit in the left-hand side as $n\rightarrow\infty$, we obtain the same inequality for~$u$, which was to be proved. \end{pf*} \begin{corollary} \label{CorUE}Assume that all metric balls are precompact, $\func{diam} M=\infty$, and the Dirichlet form $( \mathcal{E},\mathcal {F}) $ is conservative. Then the following sets of conditions are equivalent: \begin{longlist} \item $\mbox{(\ref{VD})} +\mbox{(\ref{condH})} +(\mathit{UE} _{\mathrm{weak}}) $; \item $\mbox{(\ref{VD})} +\mbox{(\ref{UEE})} + \mbox{(\ref{NLE})}$. \end{longlist} \end{corollary} \begin{pf} In the view of Theorem \ref{Tconv}, it suffices to prove that $\mbox{(i)} \Rightarrow\mbox{(\ref{EFF})} $. 
By Lemma \ref{Lemconv}, (\ref{condH}) implies the connectedness of $M$. By \cite{GrigHuUpper}, Corollary~5.3, $\mbox{(\ref{VD})} \Rightarrow\mbox{(\ref{RVD})}$ provided $M$ is connected and unbounded, which is the case now. By \cite{GrigHuUpper}, Theorem 2.2, the conservativeness and $\mbox{(\ref{VD})} + \mbox{(\ref{RVD})} +( \mathit{UE}_{\mathrm{weak}}) $ imply (\ref{EFF}). \end{pf}

Many equivalent conditions for $(\mathit{UE}_{\mathrm{weak}}) $ were proved in \cite{GrigHuUpper} under the standing assumptions $\mbox{(\ref{VD})} + \mbox{(\ref{RVD})}$ and the conservativeness of $( \mathcal{E},\mathcal{F}) $. Of course, each of these conditions can replace $( \mathit{UE}_{\mathrm{weak}}) $ in the statement of Corollary \ref{CorUE}.

\begin{corollary} \label{Corunbound}Assume that all metric balls are precompact, $\func{diam} M=\infty$, and $( M,d) $ satisfies the chain condition. Then the following two sets of conditions are equivalent:
\begin{longlist} \item $\mbox{(\ref{VD})} +\mbox{(\ref{condH})} + \mbox{(\ref{EFF})}$; \item The heat kernel exists and satisfies the two-sided estimate (\ref{twosided}). \end{longlist} \end{corollary}

\begin{pf} The implication $\mbox{(i)} \Rightarrow\mbox{(ii)} $ is contained in Corollary \ref{Cortwo}. Let us prove the implication $\mbox{(ii)}\Rightarrow\mbox{(i)} $. Estimate (\ref{twosided}) implies (\ref{UEE}) as well as~(\ref{NLE}) with any value of $\eta$, in particular, $\eta>1$; cf. Remark \ref{RemNLE}. By \cite{GrigHuLauD}, Lemma 4.1, (\ref{NLE}) with $\eta>1$ implies (\ref{VD}). Finally, by Theorem \ref{Tconv}, we obtain $\mbox{(\ref{condH})} + \mbox{(\ref{EFF})}$. \end{pf}

\begin{appendix} \section*{Appendix: List of conditions} We briefly list the lettered conditions used in this paper with references to the appropriate places in the main body.
\begin{longlist}[$(\mathrm{RVD}) $]
\item[$(H)$] $\mathop{\limfunc{esup}}_{B( x,\delta r) }u\leq C \mathop{\func{einf}}_{B( x,\delta r) }u$ (Section \ref{SecHarnack});
\item[$(\mathit{VD})$] $V( x,2r) \leq CV( x,r) $ (Section \ref{SecHarnack});
\item[$(E_{F})$] $\mathbb{E}_{x}\tau_{B( x,r) }\simeq F( r) $ (Section \ref{SecEF});
\item[$(\mathit{FK})$] $\lambda_{\min}( \Omega) \geq\frac{c}{F( R) }( \frac{\mu( B) }{\mu( \Omega) }) ^{\nu}$ (Section \ref{SecEF});
\item[$(\mathit{UE})$] $p_{t}( x,y) \leq\frac{C}{V( x,\mathcal{R}( t) ) }\exp( -\frac{1}{2}\Phi( cd( x,y) ,t) ) $ (Section \ref{SecDUE});
\item[$(\mathit{NLE})$] $p_{t}( x,y) \geq\frac{c}{V( x,\mathcal{R}( t) ) } $ provided $d( x,y) \leq\eta\mathcal{R}( t) $ (Section \ref{Seclow});
\item[$(\mathit{RVD}) $] $\frac{V( x,R) }{V( x,r) }\geq c( \frac{R}{r}) ^{\alpha^{\prime}}$ (Section \ref{SecRVD}).
\end{longlist} \end{appendix}

\section*{Acknowledgments} The authors thank Martin Barlow, Jiaxin Hu, Jun Kigami, Takashi Kumagai and Alexander Teplyaev for valuable conversations on the topics of this paper. The authors are grateful to the anonymous referees for their careful reading of the manuscript and for their useful remarks.
\section*{References} \bibliographystyle{iopart-num}
\section{Introduction} Solvable lattice models, which originate from statistical physics, have been playing an important role in various areas of mathematics and mathematical physics, including algebraic combinatorics (e.g. \cite{BBBG,BSW,Kup,Kup2,WZ}), quantum field theory (e.g. \cite{Bax3,BPZ,DMS}), and integrable probability (e.g. \cite{BP,BW,CP,OP}). A solvable lattice model is usually based on a finite lattice, and a state of the model is a labeling of the edges of the lattice. Each state of the model is then associated with a locally determined Boltzmann weight. The partition function of a lattice model, which is defined as the sum of the Boltzmann weights of all admissible states of the model, is of great importance. From the point of view of statistical physics, the partition function encodes thermodynamic properties of the lattice model. Recent works (see e.g. \cite{Bor,BBF,MS2}) have also found realizations of various symmetric functions--such as the Schur, Hall-Littlewood, and Grothendieck polynomials--as partition functions of certain solvable lattice models. The Yang-Baxter equation (cf. \cite{Jim,Maj}), also known as the ``star-triangle relation'', plays a key role in solvable lattice models. It reveals symmetries of the partition function of the lattice model. In Baxter's seminal work \cite{Bax2,Bax1} and many later works, the Yang-Baxter equation is crucially utilized to obtain explicit expressions for partition functions of solvable lattice models. However, going from the Yang-Baxter equation to an explicit evaluation of the partition function often requires quite a bit of non-trivial work, such as the combinatorics of Gelfand-Tsetlin patterns and Proctor patterns (see e.g. \cite{BBF,Iva}) and the Izergin-Korepin technique (see e.g. \cite{Ize,Kor,Whe}). A natural question is, for a given lattice model that satisfies the Yang-Baxter equation, is there a \underline{general} route to the computation of the partition function? This paper provides an answer for a class of solvable lattice models called ``free fermionic solvable lattice models'', which we will review in Section \ref{Sect.1.1}. A high-level overview of the method will be given in Section \ref{Sect.1.2}. \subsection{Free fermionic solvable lattice models}\label{Sect.1.1} Recently, there has been a series of works that relate solvable lattice models to representation theory. This originates in the seminal work \cite{BBF}, where a parametrized Yang-Baxter equation with non-abelian parameter group is introduced. The Yang-Baxter equation corresponds to a six-vertex model with \underline{free fermionic} Boltzmann weights. ``Free fermionic'' means that, if we denote by $a_1,a_2,b_1,b_2,c_1,c_2$ the Boltzmann weights of the six-vertex model (see Section \ref{Sect.2} for details), then the constraint $a_1a_2+b_1b_2-c_1c_2=0$ is satisfied. The partition function of the six-vertex model is shown to be equal to the product of a Schur polynomial and a deformation of the Weyl denominator of the general linear group, which provides an alternative proof of Tokuyama's formula \cite{Tok}. The result of \cite{BBF} is generalized to factorial Schur functions in \cite{BMN}. Later works \cite{BBBG3,BBBG4,BBBG5,BBCFG} construct solvable lattice models whose partition functions represent metaplectic Whittaker functions and Iwahori Whittaker functions. These culminate in the work \cite{BBBG2} that constructs a supersymmetric solvable lattice model whose partition function gives metaplectic Iwahori Whittaker functions. 
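As a quick sanity check of the free fermionic condition just stated (anticipating the explicit weights of Section \ref{Sect.2}): the $\Gamma$ ice weights of Figure \ref{Figure2.1} are $a_1=1$, $a_2=z$, $b_1=-v$, $b_2=z$, $c_1=z(1-v)$, $c_2=1$, so that
\[
a_1a_2+b_1b_2-c_1c_2=z+(-v)z-z(1-v)=0,
\]
and the $\Delta$ ice weights of Figure \ref{Figure2.2} satisfy the same identity.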
The above developments are for Cartan type A. There is also a parallel line of works for other Cartan types. Hamel and King \cite{HK1,HK2} and Ivanov \cite{Iva} constructed lattice models whose partition functions equal the product of an irreducible character and a deformation of Weyl's denominator of the symplectic group $\mathrm{Sp}(2n,\mathbb{C})$. The Yang-Baxter equation as developed in \cite{BBF} is used in Ivanov's work. Brubaker et al. \cite{BBCG} constructed a solvable lattice model and made the conjecture that its partition function represents the metaplectic Whittaker function on the double cover of $\mathrm{Sp}(2n,F)$, where $F$ is a non-archimedean local field. In later works \cite{Mot,MSW}, a dual version of the model in \cite{Iva} and generalizations of the models in \cite{BBCG,Iva} are studied. In \cite{Gra}, Ivanov's model is generalized to metaplectic ice for Cartan type C. Further developments include \cite{BS,BS2,BSn}. The lattice model in \cite{BBF} is based on a finite rectangular lattice. Each state of the model is represented by a labeling of the edges of the lattice by $\pm$ signs, which we refer to as ``spins'' in the following. Given a labeling, each vertex of the lattice is associated with a Boltzmann weight determined by the spins on its adjacent edges. The Boltzmann weight of the corresponding state is defined as the product of the Boltzmann weights of all the vertices. The lattice models in \cite{Iva} and \cite{BBCG} are also based on a finite rectangular lattice, but involve two types of vertices called $\Delta$ ice and $\Gamma$ ice that alternate in rows. There are also U-turn vertices, which we call ``cap vertices'' in this paper, that connect two adjacent rows of $\Delta$ ice and $\Gamma$ ice on the right boundary. The Boltzmann weights for these vertices will be reviewed in Section \ref{Sect.2}. The lattice models in \cite{BBF} and \cite{BBCG} will be reviewed in Sections \ref{Sect.4} and \ref{Sect.6}, respectively. \subsection{Overview of the strategy}\label{Sect.1.2} In this subsection, we give a high-level overview of our strategy for computing partition functions of free fermionic solvable lattice models. We focus on the lattice models in \cite{BBF} and \cite{BBCG} to illustrate the method. We expect our method to apply to a broad class of free fermionic solvable lattice models. Throughout the paper, we identify the $+$ spin with $0$ and the $-$ spin with $1$. We start with the model in \cite{BBF}. Suppose that the rectangular lattice has $N$ rows and $\lambda_1+N$ columns. For each column with a given assignment of spins on the top and bottom boundary (the boundary condition encountered in this model is $\alpha\in\{0,1\}$ on the top and $0$ at the bottom), we view it as an operator called the ``column operator''. Specifically, for every $a\in\{1,\cdots,N\}$, we denote by $W_a$ a two-dimensional vector space over $\mathbb{C}$ spanned by the basis vectors $|0\rangle$ and $|1\rangle$. Then the column operator is an element of $End(W_1\otimes\cdots \otimes W_N)$. The precise definition will be given in Section \ref{Sect.3.2}. The partition function of the lattice model can be written as a certain component of the product of $\lambda_1+N$ column operators. The idea is to \underline{conjugate} the column operators so that the components of the conjugated operators have an explicit form. The key concepts involved in this conjugation are the ``permutation graph'' and the ``$F$-matrix'', which we will introduce in Sections \ref{Sect.3.3} and \ref{Sect.3.4}. 
The permutation graph is an $N$-site generalization of the $R$-matrix, which corresponds to the rotated vertex in the Yang-Baxter equation. It is an element of $End(W_1\otimes\cdots\otimes W_N)$ that depends on two permutations $\rho_1,\rho_2\in S_N$. The Yang-Baxter equation and another relation, the unitarity relation, are used to ensure that the permutation graph is well-defined. The $F$-matrix is then constructed based on the permutation graph. It is also an element of $End(W_1\otimes \cdots \otimes W_N)$, and is used to conjugate the column operators. The explicit form of the conjugated column operators is given in Proposition \ref{P3} below, based on which the partition function of the lattice model can be evaluated.

For the model in \cite{BBCG}, in Section \ref{Sect.5}, we generalize the definitions of the column operator, the permutation graph, and the $F$-matrix to incorporate both $\Delta$ ice and $\Gamma$ ice. Another complication comes from the cap vertices on the right boundary. Suppose that the rectangular lattice has $N=2r$ rows. There are $r$ cap vertices in total, and we define a ``cap vector'' $K$ based on all the cap vertices. In view of the conjugation procedure, if we denote the $F$-matrix by $F$, we need to analyze the components of the vector $FK$. This is done with the help of the caduceus relation, an additional relation that is more specific to Cartan types B and C. The components of the corresponding operators are given in Proposition \ref{P5}. The evaluation of the partition function of this lattice model in Theorem \ref{Theorem2} gives a new proof of a conjecture made in \cite{BBCG} (the conjecture was first proved by Motegi et al. \cite{MSW} using the Izergin-Korepin technique; see below for details).

Previously, for Boltzmann weights that are related to the quantum group $U_q(\widehat{\mathfrak{sl}_2})$, the permutation graph and the $F$-matrix were introduced in \cite{MS}. These are used in \cite{MZ} to evaluate the partition functions of certain vertex models that are related to Hall-Littlewood polynomials. Those Boltzmann weights are quite different from the free fermionic Boltzmann weights considered in this paper. For free fermionic lattice models, the definition and application of the permutation graph and the $F$-matrix are considerably more involved than in the $U_q(\widehat{\mathfrak{sl}_2})$ case.

In the following, we discuss previous approaches to the computation of partition functions of free fermionic solvable lattice models and compare them with the method introduced in this paper. One approach is based on the combinatorics of Gelfand-Tsetlin patterns (e.g. \cite{BBF}) and Proctor patterns (e.g. \cite{Iva}). Specifically, it relies on the Yang-Baxter equation (and two additional relations, the caduceus relation and the fish relation, for the model in \cite{Iva}) to establish symmetries of the partition function normalized by certain factors. Based on such symmetries, it can be shown that the normalized partition function is independent of a certain parameter of the model. Specializing this parameter to a particular value reduces the six-vertex model to a five-vertex model, and the combinatorics of Gelfand-Tsetlin patterns (or Proctor patterns) and the Weyl character formula are used to evaluate the partition function of the five-vertex model.
This approach requires non-trivial combinatorial arguments tailored to each specific model, and does not lead to an evaluation of the partition function of the lattice model in \cite{BBCG}. Other combinatorial approaches include methods based on bijections (see e.g. \cite{HamelKing}) and methods based on lattice paths (see e.g. \cite{Okada}), which are also tailored to specific models. The method introduced in this paper is more general (for example, it applies to the lattice model in \cite{BBCG}) and less specific to the details of the models.

Another approach is based on the Izergin-Korepin technique. The Izergin-Korepin technique was introduced by Korepin \cite{Kor} and Izergin \cite{Ize}, and involves obtaining sufficiently many properties that are satisfied by a class of partition functions to uniquely identify them (these properties include initial conditions and recursive relations). In the context of free fermionic solvable lattice models, the Izergin-Korepin technique was applied in \cite{Mot2} to obtain a generalization of the Tokuyama formula for factorial Schur functions. This was later extended in \cite{Mot3,MSW} to elliptic Felderhof models and generalizations of the models in \cite{BBCG,Iva}. The partition function of \cite{BBCG} is computed through this approach in \cite{MSW} based on the generalized models. In applying the Izergin-Korepin technique, one needs to find the correct set of properties that can uniquely identify the partition function. This may require generalizations of the lattice model at hand and can be problem-specific. Also, the Izergin-Korepin technique is a method for \underline{verifying} a given form of the partition function, and does not provide much insight into how the explicit form can be \underline{discovered}. In comparison, the method introduced here is general and leads to a \underline{direct} computation of the partition function of a particular lattice model.

\bigskip

The rest of this paper is organized as follows. In Section \ref{Sect.2}, we review the Boltzmann weights and the Yang-Baxter equations that are involved in the lattice models from \cite{BBF} and \cite{BBCG}. Two additional relations, the unitarity relation and the caduceus relation, are also discussed in this section. Then we introduce the column operator, the permutation graph, and the $F$-matrix for type A lattice models--which include the model in \cite{BBF}--in Section \ref{Sect.3}. Equipped with these tools, we apply the method outlined in Section \ref{Sect.1.2} to the lattice model in \cite{BBF} and obtain an explicit evaluation of its partition function in Section \ref{Sect.4}. Section \ref{Sect.5} extends the concepts in Section \ref{Sect.3} to type B and C lattice models, which include the model in \cite{BBCG}. These, combined with the strategy described in Section \ref{Sect.1.2}, lead to an explicit expression for the partition function of the model in \cite{BBCG}, giving a new proof of the conjecture made in \cite{BBCG}.

\subsection{Acknowledgement} The author wishes to thank Daniel Bump for his encouragement and many helpful conversations.

\section{Boltzmann weights and Yang-Baxter equations}\label{Sect.2} In Section \ref{Sect.2.1}, we review the Boltzmann weights of vertices that are involved in the lattice models from \cite{BBF} and \cite{BBCG}. Then we review the Yang-Baxter equations and the unitarity relation that are satisfied by these Boltzmann weights in Section \ref{Sect.2.2}.
For the model from \cite{BBCG}, an additional relation, the caduceus relation, is also discussed. There are three types of vertices that we consider: the ordinary vertex, the $R$-vertex, and the cap vertex. Ordinary vertices are vertices in the rectangular lattice on which the lattice model is based. There are two types of ordinary vertices called $\Delta$ ice and $\Gamma$ ice. Only $\Gamma$ ice is involved in the model from \cite{BBF}. $R$-vertices are auxiliary rotated vertices that appear in the Yang-Baxter equations. There are four types of $R$-vertices called $\Delta\Delta$ ice, $\Delta\Gamma$ ice, $\Gamma\Delta$ ice, and $\Gamma\Gamma$ ice. Only $\Gamma\Gamma$ ice is involved in the model from \cite{BBF}. Finally, cap vertices are U-turn vertices on the right boundary of the rectangular lattice that connect two adjacent rows of $\Delta$ ice and $\Gamma$ ice. They are only involved in the model from \cite{BBCG}. \subsection{Boltzmann weights}\label{Sect.2.1} In this subsection, we review the Boltzmann weights of the three types of vertices following \cite{BBF,BBCG}. We start with the ordinary vertices. The Boltzmann weight of an ordinary vertex depends on the spins on the four edges that are adjacent to it and two parameters called the deformation parameter (denoted by $v\in \mathbb{C}$) and the spectral parameter (denoted by $z\in\mathbb{C}$). The deformation parameter is fixed for a given lattice model, while the spectral parameter can vary across different rows. The Boltzmann weights for $\Gamma$ ice and $\Delta$ ice are listed in Figures \ref{Figure2.1}-\ref{Figure2.2}. Here for $\Delta$ ice the signs of the spins on the horizontal edges are switched (from $+$ to $-$ and from $-$ to $+$) compared to those in \cite{BBF,BBCG} in order to simplify the presentation in later sections. \begin{figure}[!h] \[ \begin{array}{|c|c|c|c|c|c|} \hline a_1 & a_2 & b_1 & b_2 & c_1 & c_2\\ \hline \gammaice{+}{+}{+}{+} & \gammaice{-}{-}{-}{-} & \gammaice{+}{-}{+}{-} & \gammaice{-}{+}{-}{+} & \gammaice{-}{+}{+}{-} & \gammaice{+}{-}{-}{+}\\ \hline 1 & z & -v & z & z(1-v) & 1\\ \hline\end{array}\] \caption{Boltzmann weights for $\Gamma$ ice with deformation parameter $v$ and spectral parameter $z$} \label{Figure2.1} \end{figure} \begin{figure}[!h] \[ \begin{array}{|c|c|c|c|c|c|} \hline a_1 & a_2 & b_1 & b_2 & c_1 & c_2\\ \hline \gammaice{+}{+}{+}{+} & \gammaice{-}{-}{-}{-} & \gammaice{+}{-}{+}{-} & \gammaice{-}{+}{-}{+} & \gammaice{-}{+}{+}{-} & \gammaice{+}{-}{-}{+}\\ \hline 1 & -v z & 1 & z & z(1-v) & 1\\ \hline\end{array}\] \caption{Boltzmann weights for $\Delta$ ice with deformation parameter $v$ and spectral parameter $z$} \label{Figure2.2} \end{figure} Now we introduce the $R$-vertices. They are rotated vertices that appear in the Yang-Baxter equations (see Section \ref{Sect.2.2} for details). The Boltzmann weight of an $R$-vertex depends on the spins on the four edges that are adjacent to it, the deformation parameter $v\in\mathbb{C}$, and two spectral parameters $z,z'\in\mathbb{C}$. The Boltzmann weights for the four types of $R$-vertices ($\Gamma\Gamma$ ice, $\Gamma\Delta$ ice, $\Delta\Gamma$ ice, and $\Delta\Delta$ ice) are shown in Figures \ref{Figure2.3}-\ref{Figure2.6}. Here for $\Gamma\Delta$ ice, $\Delta\Gamma$ ice, and $\Delta\Delta$ ice we switch the signs of certain spins (from $+$ to $-$ and from $-$ to $+$) compared to those in \cite{BBF,BBCG} in accordance with the change for $\Delta$ ice. 
We also take a different normalization of the Boltzmann weights compared to that in \cite{BBF,BBCG} so that the Boltzmann weight of the $a_1$ pattern equals $1$ for the four types of $R$-vertices. \begin{figure}[!h] \[\begin{array}{|c|c|c|c|c|c|} \hline a_1 & a_2 & b_1 & b_2 & c_1 & c_2\\ \hline \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$+$}; \node at (2,2) {$+$}; \node at (2,0) {$+$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$-$}; \node at (2,2) {$-$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$-$}; \node at (2,2) {$+$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$+$}; \node at (2,2) {$-$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$+$}; \node at (2,2) {$+$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$-$}; \node at (2,2) {$-$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}\\ \hline 1&\frac{z-vz'}{z'-vz} &\frac{v(z-z')}{z'-vz} &\frac{z-z'}{z'-vz} &\frac{(1-v)z}{z'-vz} &\frac{(1-v)z'}{z'-vz}\\ \hline \end{array}\] \caption{Boltzmann weights for $\Gamma\Gamma$ ice with deformation parameter $v$ and spectral parameters $z,z'$} \label{Figure2.3} \end{figure} \begin{figure}[!h] \[\begin{array}{|c|c|c|c|c|c|} \hline a_1 & a_2 & b_1 & b_2 & c_1 & c_2\\ \hline \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] 
(2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$+$}; \node at (2,2) {$+$}; \node at (2,0) {$+$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$-$}; \node at (2,2) {$-$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$-$}; \node at (2,2) {$+$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$+$}; \node at (2,2) {$-$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$+$}; \node at (2,2) {$+$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$-$}; \node at (2,2) {$-$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}\\ \hline 1&\frac{z'-vz}{z-vz'} &\frac{v(z-z')}{z-vz'} &\frac{z-z'}{z-vz'} &\frac{(1-v)z}{z-vz'} &\frac{(1-v)z'}{z-vz'}\\ \hline \end{array}\] \caption{Boltzmann weights for $\Delta\Delta$ ice with deformation parameter $v$ and spectral parameters $z,z'$} \label{Figure2.4} \end{figure} \begin{figure}[!h] \[\begin{array}{|c|c|c|c|c|c|} \hline a_1 & a_2 & b_1 & b_2 & c_1 & c_2\\ \hline \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$+$}; \node at (2,2) {$+$}; \node at (2,0) {$+$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) 
circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$-$}; \node at (2,2) {$-$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$-$}; \node at (2,2) {$+$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$+$}; \node at (2,2) {$-$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$+$}; \node at (2,2) {$+$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$-$}; \node at (2,2) {$-$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}\\ \hline 1&1 &\frac{z'-v^2z}{z'-vz} &\frac{z-z'}{z'-vz} &\frac{(1-v)z}{z'-vz} &\frac{(1-v)z'}{z'-vz}\\ \hline \end{array}\] \caption{Boltzmann weights for $\Delta\Gamma$ ice with deformation parameter $v$ and spectral parameters $z,z'$} \label{Figure2.5} \end{figure} \begin{figure}[!h] \[\begin{array}{|c|c|c|c|c|c|} \hline a_1 & a_2 & b_1 & b_2 & c_1 & c_2\\ \hline \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$+$}; \node at (2,2) {$+$}; \node at (2,0) {$+$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$-$}; \node at (2,2) {$-$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$-$}; \node at (2,2) {$+$}; \node 
at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$+$}; \node at (2,2) {$-$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$-$}; \node at (0,2) {$+$}; \node at (2,2) {$+$}; \node at (2,0) {$-$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}& \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$+$}; \node at (0,2) {$-$}; \node at (2,2) {$-$}; \node at (2,0) {$+$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{z,z'}$}; \end{tikzpicture}\\ \hline 1&1 &\frac{v^2z'-z}{z-vz'} &\frac{z-z'}{z-vz'} &\frac{(1-v)z}{z-vz'} &\frac{(1-v)z'}{z-vz'}\\ \hline \end{array}\] \caption{Boltzmann weights for $\Gamma\Delta$ ice with deformation parameter $v$ and spectral parameters $z,z'$} \label{Figure2.6} \end{figure} Finally we introduce the cap vertices. These are used in the lattice model from \cite{BBCG}. The Boltzmann weight of a cap vertex depends on the spins on the two edges that are adjacent to it, the deformation parameter $v\in\mathbb{C}$, and the spectral parameter $z\in\mathbb{C}$. The Boltzmann weights are shown in Figure \ref{Figure2.7}. Here, the sign of the spin on the top edge is switched (from $+$ to $-$ and from $-$ to $+$) in accordance with the change for $\Delta$ ice. We note that the model from \cite{Iva} involves a different set of Boltzmann weights for the cap vertices. Our method applies to that model as well. \begin{figure}[!h] \[ \begin{array}{|c|c|c|c|c|c|} \hline \text{Cap} &\caps{-}{+} & \caps{+}{-} \\ \hline \text{Boltzmann weight} & -\sqrt{v} z^{1\slash 2} & z^{-1\slash 2} \\ \hline\end{array}\] \caption{Boltzmann weights for a cap vertex with deformation parameter $v$ and spectral parameter $z$} \label{Figure2.7} \end{figure} \subsection{Yang-Baxter equations, unitarity relation, and caduceus relation}\label{Sect.2.2} In this subsection, we review several relations that are satisfied by the vertices introduced in Section \ref{Sect.2.1}. These include two sets of Yang-Baxter equations known as the ``$RTT$ relation'' and the ``$RRR$ relation'', the unitarity relation, and the caduceus relation. We start with the Yang-Baxter equations. Two ordinary vertices and an $R$-vertex satisfy the following set of Yang-Baxter equations known as the ``$RTT$ relation''. These relations were obtained in \cite[Theorem 9]{BBF2} (\cite{BBF2} is the arXiv version of \cite{BBF}). \begin{proposition}[\cite{BBF2}, Theorem 8]\label{YBE1} For any $X,Y\in \{\Gamma,\Delta\}$ the following holds. 
Assume that $S$ is $X$ ice with spectral parameter $z_i$, $T$ is $Y$ ice with spectral parameter $z_j$, and $R$ is $XY$ ice with spectral parameters $z_i,z_j$. Then the partition functions of the following two configurations are equal for any fixed combination of spins $a,b,c,d,e,f$. \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,1) to [out = 0, in = 180] (2,3) to (4,3); \draw (0,3) to [out = 0, in = 180] (2,1) to (4,1); \draw (3,0) to (3,4); \draw[fill=white] (0,1) circle (.3); \draw[fill=white] (0,3) circle (.3); \draw[fill=white] (3,4) circle (.3); \draw[fill=white] (4,3) circle (.3); \draw[fill=white] (4,1) circle (.3); \draw[fill=white] (3,0) circle (.3); \draw[fill=white] (2,3) circle (.3); \draw[fill=white] (2,1) circle (.3); \draw[fill=white] (3,2) circle (.3); \node at (0,1) {$a$}; \node at (0,3) {$b$}; \node at (3,4) {$c$}; \node at (4,3) {$d$}; \node at (4,1) {$e$}; \node at (3,0) {$f$}; \node at (2,3) {$g$}; \node at (3,2) {$h$}; \node at (2,1) {$i$}; \filldraw[black] (3,3) circle (2pt); \node at (3,3) [anchor=south west] {$S$}; \filldraw[black] (3,1) circle (2pt); \node at (3,1) [anchor=north west] {$T$}; \filldraw[black] (1,2) circle (2pt); \node at (1,2) [anchor=west] {$R$}; \end{tikzpicture}\qquad\qquad \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,1) to (2,1) to [out = 0, in = 180] (4,3); \draw (0,3) to (2,3) to [out = 0, in = 180] (4,1); \draw (1,0) to (1,4); \draw[fill=white] (0,1) circle (.3); \draw[fill=white] (0,3) circle (.3); \draw[fill=white] (1,4) circle (.3); \draw[fill=white] (4,3) circle (.3); \draw[fill=white] (4,1) circle (.3); \draw[fill=white] (1,0) circle (.3); \draw[fill=white] (2,3) circle (.3); \draw[fill=white] (1,2) circle (.3); \draw[fill=white] (2,1) circle (.3); \node at (0,1) {$a$}; \node at (0,3) {$b$}; \node at (1,4) {$c$}; \node at (4,3) {$d$}; \node at (4,1) {$e$}; \node at (1,0) {$f$}; \node at (2,3) {$j$}; \node at (1,2) {$k$}; \node at (2,1) {$l$}; \filldraw[black] (1,3) circle (2pt); \node at (1,3) [anchor=south west] {$T$}; \filldraw[black] (1,1) circle (2pt); \node at (1,1) [anchor=north west]{$S$}; \filldraw[black] (3,2) circle (2pt); \node at (3,2) [anchor=west] {$R$}; \end{tikzpicture} \end{equation} \end{proposition} Three $R$-vertices satisfy another set of Yang-Baxter equations known as the ``$RRR$ relation''. These relations were obtained in \cite[Theorem 10]{BBF2}. \begin{proposition}[\cite{BBF2}, Theorem 10]\label{YBE2} For any $X,Y,Z\in\{\Gamma,\Delta\}$ the following holds. Assume that $R$ is $XY$ ice with spectral parameters $z_i,z_j$, $S$ is $XZ$ ice with spectral parameters $z_i,z_k$, and $T$ is $YZ$ ice with spectral parameters $z_j,z_k$. Then the partition functions of the following two configurations are equal for any fixed combination of spins $a,b,c,d,e,f$. 
\begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,0) to [out = 0, in = 180] (1.5,1.5) to [out = 0, in = 180] (3,3) to (4.5,3); \draw (0,3) to (1.5,3) to [out = 0, in = 180] (3,1.5) to [out=0,in=180] (4.5,0); \draw (0,1.5) to [out=0,in=180] (1.5,0) to (3,0) to [out=0, in=180] (4.5,1.5); \draw[fill=white] (0,0) circle (.3); \draw[fill=white] (0,1.5) circle (.3); \draw[fill=white] (0,3) circle (.3); \draw[fill=white] (2.25,0) circle (.3); \draw[fill=white] (1.5,1.5) circle (.3); \draw[fill=white] (3,1.5) circle (.3); \draw[fill=white] (4.5,0) circle (.3); \draw[fill=white] (4.5,1.5) circle (.3); \draw[fill=white] (4.5,3) circle (.3); \node at (0,0) {$a$}; \node at (0,1.5) {$b$}; \node at (0,3) {$c$}; \node at (4.5,3) {$d$}; \node at (4.5,1.5) {$e$}; \node at (4.5,0) {$f$}; \node at (1.5,1.5) {$g$}; \node at (2.25,0) {$h$}; \node at (3,1.5) {$i$}; \filldraw[black] (0.75,0.75) circle (2pt); \node at (0.75,0.75) [anchor=west] {$R$}; \filldraw[black] (2.25,2.25) circle (2pt); \node at (2.25,2.25) [anchor=west] {$S$}; \filldraw[black] (3.75,0.75) circle (2pt); \node at (3.75,0.75) [anchor=west] {$T$}; \end{tikzpicture}\qquad\qquad \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,0) to (1.5,0) to [out = 0, in = 180] (3,1.5) to [out=0, in=180] (4.5,3); \draw (0,3) to [out=0,in=180] (1.5,1.5) to [out = 0, in = 180] (3,0) to (4.5,0); \draw (0,1.5) to [out=0,in=180] (1.5,3) to (3,3) to [out=0, in=180] (4.5,1.5); \draw[fill=white] (0,0) circle (.3); \draw[fill=white] (0,1.5) circle (.3); \draw[fill=white] (0,3) circle (.3); \draw[fill=white] (4.5,0) circle (.3); \draw[fill=white] (4.5,1.5) circle (.3); \draw[fill=white] (4.5,3) circle (.3); \draw[fill=white] (2.25,3) circle (.3); \draw[fill=white] (1.5,1.5) circle (.3); \draw[fill=white] (3,1.5) circle (.3); \node at (0,0) {$a$}; \node at (0,1.5) {$b$}; \node at (0,3) {$c$}; \node at (4.5,3) {$d$}; \node at (4.5,1.5) {$e$}; \node at (4.5,0) {$f$}; \node at (2.25,3) {$j$}; \node at (1.5,1.5) {$k$}; \node at (3,1.5) {$l$}; \filldraw[black] (0.75,2.25) circle (2pt); \node at (0.75,2.25) [anchor=west] {$T$}; \filldraw[black] (2.25,0.75) circle (2pt); \node at (2.25,0.75) [anchor=west]{$S$}; \filldraw[black] (3.75,2.25) circle (2pt); \node at (3.75,2.25) [anchor=west] {$R$}; \end{tikzpicture} \end{equation} \end{proposition} The $R$-vertices also satisfy the unitarity relation as given by the following theorem. We note again that the normalization of the $R$-vertices here is different from that of \cite{BBF,BBCG}. \begin{proposition}\label{Unit} For any $X,Y\in\{\Gamma,\Delta\}$ the following holds. Assume that $S$ is $XY$ ice with spectral parameters $z_i,z_j$, and $T$ is $YX$ ice with spectral parameters $z_j,z_i$. Then the partition function of the following two configurations are equal for any fixed combination of spins $a,b,c,d$. Here, the partition function of the right configuration is $\mathbbm{1}_{a=d} \mathbbm{1}_{b=c}$. 
\begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,0) to [out = 0, in = 180] (1.5,1.5) to [out = 0, in = 180] (3,0); \draw (0,1.5) to [out=0,in=180] (1.5,0) to [out=0,in=180] (3,1.5); \draw[fill=white] (0,0) circle (.3); \draw[fill=white] (0,1.5) circle (.3); \draw[fill=white] (1.5,0) circle (.3); \draw[fill=white] (1.5,1.5) circle (.3); \draw[fill=white] (3,0) circle (.3); \draw[fill=white] (3,1.5) circle (.3); \node at (0,0) {$a$}; \node at (0,1.5) {$b$}; \node at (1.5,0) {$e$}; \node at (1.5,1.5) {$f$}; \node at (3,0) {$d$}; \node at (3,1.5) {$c$}; \filldraw[black] (0.75,0.75) circle (2pt); \node at (0.75,0.75) [anchor=west] {$S$}; \filldraw[black] (2.25,0.75) circle (2pt); \node at (2.25,0.75) [anchor=west] {$T$}; \end{tikzpicture}\qquad\qquad \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,0) to (3,0); \draw (0,1.5) to (3,1.5); \draw[fill=white] (0,0) circle (.3); \draw[fill=white] (0,1.5) circle (.3); \draw[fill=white] (3,0) circle (.3); \draw[fill=white] (3,1.5) circle (.3); \node at (0,0) {$a$}; \node at (0,1.5) {$b$}; \node at (3,0) {$d$}; \node at (3,1.5) {$c$}; \end{tikzpicture} \end{equation} \end{proposition} \begin{proof} The relation is checked using a SAGE program. \end{proof} The four types of $R$-vertices and the cap vertices also satisfy the following relation called the ``caduceus relation''. It is used in the lattice model from \cite{BBCG}. Note that the normalization of the $R$-vertices here is different from that in \cite{BBCG}. \begin{proposition}[\cite{BBCG}]\label{caduc} Assume that $A$ is $\Delta\Delta$ ice of spectral parameters $z_i,z_j$, $B$ is $\Gamma\Gamma$ ice of spectral parameters $z_i^{-1},z_j^{-1}$, $C$ is $\Gamma\Delta$ ice of spectral parameters $z_i^{-1},z_j$, and $D$ is $\Delta\Gamma$ ice of spectral parameters $z_i,z_j^{-1}$. Also assume that the cap vertices $K_1,K_2$ have spectral parameters $z_i$ and $z_j$, respectively. Denote by $Z(I_1(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4))$ the partition function of the following configuration with fixed combination of spins $\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4$. \begin{equation} \label{eqn:caduceus1} \hfill I_1(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4)= \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,0) to (0.6,0) to [out=0, in=180] (3,2); \draw (0,3) to (0.6,3) to [out=0, in=180] (3,1); \draw (0,1) to [out=0, in=180] (2.4,3) to (3,3) ; \draw (0,2) to [out=0, in=180] (2.4,0) to (3,0); \draw (3,2) arc(-90:90:0.5); \draw (3,0) arc(-90:90:0.5); \filldraw[black] (3.5,0.5) circle (2pt); \filldraw[black] (3.5,2.5) circle (2pt); \filldraw[black] (0.9,1.5) circle (2pt); \filldraw[black] (2.1,1.5) circle (2pt); \filldraw[black] (1.5,0.5) circle (2pt); \filldraw[black] (1.5,2.5) circle (2pt); \node at (0.9,1.5) [anchor=south] {$D$}; \node at (2.1,1.5) [anchor=south] {$C$}; \node at (1.5,0.5) [anchor=south] {$B$}; \node at (1.5,2.5) [anchor=south] {$A$}; \node at (0,0) [anchor=east] {$\epsilon_4$}; \node at (0,1) [anchor=east] {$\epsilon_3$}; \node at (0,3) [anchor=east] {$\epsilon_1$}; \node at (0,2) [anchor=east] {$\epsilon_2$}; \node at (3.5,2.5) [anchor=west] {$K_1$}; \node at (3.5,0.5) [anchor=west] {$K_2$}; \end{tikzpicture} \end{equation} Also denote by $Z(I_2(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4))$ the partition function of the following configuration with fixed combination of spins $\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4$. 
\begin{equation} \hfill I_2(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4)= \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,2) arc(-90:90:0.5); \draw (0,0) arc(-90:90:0.5); \filldraw[black] (0.5,0.5) circle (2pt); \filldraw[black] (0.5,2.5) circle (2pt); \node at (0,0) [anchor=east] {$\epsilon_4$}; \node at (0,1) [anchor=east] {$\epsilon_3$}; \node at (0,3) [anchor=east] {$\epsilon_1$}; \node at (0,2) [anchor=east] {$\epsilon_2$}; \node at (0.5,2.5) [anchor=west] {$K_2$}; \node at (0.5,0.5) [anchor=west] {$K_1$}; \end{tikzpicture} \end{equation} Then for any fixed combination of spins $\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4$, we have \begin{equation} Z(I_1(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4))=\frac{z_j-vz_i}{z_i-vz_j}Z(I_2(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4)). \end{equation} \end{proposition} \section{Column operator, permutation graph, and $F$-matrix}\label{Sect.3} In this section, we introduce three key concepts that are used in our method for computing the partition function: the column operator, the permutation graph, and the $F$-matrix. We focus on type A lattice models in this section, and defer the generalization to type B and C lattice models to Section \ref{Sect.5}. Therefore, in this section, only $\Gamma$ ice is involved for ordinary vertices, and only $\Gamma\Gamma$ ice is involved for $R$-vertices. Some basic notations are given in Section \ref{Sect.3.1}. Then we introduce the three concepts in Sections \ref{Sect.3.2}-\ref{Sect.3.4}, respectively. In Section \ref{Sect.3.5}, we derive some basic properties of the $F$-matrix. \subsection{Basic notations}\label{Sect.3.1} In this subsection, we set up some basic notations. For any $a\in\{1,2,\cdots\}$, we let $W_a \cong \mathbb{C}^2$ be a 2-dimensional vector space over $\mathbb{C}$ spanned by two basis vectors $|0\rangle$ and $|1\rangle$ (here, as mentioned before, $0$ corresponds to the $+$ spin and $1$ corresponds to the $-$ spin). For any $i,j\in\{0,1\}$, we denote by $E_a^{(i,j)}$ the $2\times 2$ elementary matrix acting on $W_a$ with $1$ at position $(i,j)$ and $0$ elsewhere (the rows and columns of the matrix are labeled by $0,1$). Now we discuss the $R$-matrix. For any $a,b,c,d\in\{0,1\}$ and any $x_i,x_j\in\mathbb{C}$, we denote by $R(a,b,c,d;x_i,x_j)$ the Boltzmann weight of the following $R$-vertex with spectral parameters $x_i,x_j$: \begin{equation} \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$a$}; \node at (0,2) {$b$}; \node at (2,2) {$c$}; \node at (2,0) {$d$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{x_i,x_j}$}; \end{tikzpicture} \end{equation} For any two distinct positive integers $i,j$, we define the $R$-matrix $R_{i,j}(x_i,x_j)$ with spectral parameters $x_i,x_j$ that acts on $W_i \otimes W_j$ as follows: \begin{equation} R_{i,j}(x_i,x_j)=\sum_{a,b,c,d\in\{0,1\}}R(a,b,c,d;x_i,x_j) E_i^{(a,c)} E_j^{(b,d)}. \end{equation} We also denote $R(x_1,x_2):=R_{12}(x_1,x_2)$. We also use the following notations for the Boltzmann weights. For $\Gamma$ ice with spectral parameter $x_i$, we denote by $a_1(x_i)$ the Boltzmann weight of the $a_1$ state (see Figure \ref{Figure2.1}), and similarly for the other states. 
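The computer verifications mentioned in the proof of Proposition \ref{Unit} are easy to reproduce from the notation just introduced. As a minimal sketch (written for this exposition and not taken from \cite{BBF,BBCG}; the function and variable names below are ours), the following \texttt{SymPy} session builds the matrix of $R_{12}(z,z')$ for $\Gamma\Gamma$ ice in the ordered basis $|0,0\rangle,|0,1\rangle,|1,0\rangle,|1,1\rangle$ of $W_1\otimes W_2$, using the weights of Figure \ref{Figure2.3}, and then checks the free fermionic relation $a_1a_2+b_1b_2-c_1c_2=0$ together with a matrix identity equivalent to the $\Gamma\Gamma$ case of the unitarity relation, namely $\bigl(PR_{12}(z,z')\bigr)\bigl(PR_{12}(z',z)\bigr)=\mathrm{id}$, where $P$ denotes the flip of the two tensor factors.
\begin{verbatim}
import sympy as sp

z, zp, v = sp.symbols('z zp v')   # zp plays the role of z'

def gg(z1, z2):
    # Gamma-Gamma Boltzmann weights of Figure 2.3, spectral parameters (z1, z2)
    d = z2 - v*z1
    return dict(a1=sp.Integer(1), a2=(z1 - v*z2)/d,
                b1=v*(z1 - z2)/d, b2=(z1 - z2)/d,
                c1=(1 - v)*z1/d,  c2=(1 - v)*z2/d)

def R(z1, z2):
    # matrix of R_{12}(z1, z2) in the ordered basis |00>, |01>, |10>, |11>
    w = gg(z1, z2)
    return sp.Matrix([[w['a1'], 0,       0,       0],
                      [0,       w['b1'], w['c2'], 0],
                      [0,       w['c1'], w['b2'], 0],
                      [0,       0,       0,       w['a2']]])

# flip of the two tensor factors W_1 <-> W_2
P = sp.Matrix([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

w = gg(z, zp)
# free fermionic condition
assert sp.simplify(w['a1']*w['a2'] + w['b1']*w['b2'] - w['c1']*w['c2']) == 0
# unitarity relation (Gamma-Gamma case) in matrix form
U = P*R(z, zp)*P*R(zp, z) - sp.eye(4)
assert U.applyfunc(sp.simplify) == sp.zeros(4, 4)
\end{verbatim}
The other cases of Proposition \ref{Unit} can be checked in the same way by substituting the corresponding weight tables.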
For $\Gamma\Gamma$ ice with spectral parameters $x_i,x_j$, we denote by $a_1(x_i,x_j)$ the Boltzmann weight of the $a_1$ state (see Figure \ref{Figure2.3}), and similarly for the other states. In the following, we fix a positive integer $N$. For any $(i_1,i_2,\cdots,i_N)\in \{0,1\}^N$, we denote by $|i_1,i_2,\cdots,i_N\rangle=|i_1\rangle\otimes |i_2\rangle \otimes \cdots \otimes |i_N\rangle$ the corresponding basis vector of $W_1 \otimes \cdots \otimes W_N$, and $\langle i_1,i_2,\cdots,i_N|\in (W_1\otimes \cdots \otimes W_N)^{*}$ the dual vector of $|i_1,i_2,\cdots,i_N\rangle$. Then for any operator $A\in End(W_1\otimes \cdots \otimes W_N)$, we define the component $(A)_{i_1\cdots i_N}^{j_1\cdots j_N}$ by \begin{equation}\label{Oped} A|j_1,\cdots,j_N\rangle =\sum_{(i_1,\cdots,i_N)\in\{0,1\}^{N}}(A)_{i_1\cdots i_N}^{j_1\cdots j_N}|i_1,\cdots,i_N\rangle,\text{ for any }(j_1,\cdots,j_N)\in\{0,1\}^N. \end{equation} Similarly, for any $a\in W_1\otimes \dots \otimes W_N$, the component $(a)_{i_1,\cdots,i_N}$ is defined by \begin{equation} (a)_{i_1,\cdots,i_N}=\langle i_1,\cdots,i_N|a, \text{ for any }(i_1,\cdots,i_N)\in\{0,1\}^N; \end{equation} for any $a\in (W_1\otimes \cdots \otimes W_N)^{*}$, the component $(a)^{i_1\cdots i_N}$ is defined by \begin{equation} (a)^{i_1,\cdots,i_N}=a |i_1,\cdots,i_N\rangle, \text{ for any }(i_1,\cdots,i_N)\in\{0,1\}^N. \end{equation} \subsection{Column operator}\label{Sect.3.2} In this subsection, we introduce the concept of ``column operator''. Namely, for any $\alpha\in\{0,1\}$ and $\vec{x}=(x_1,\cdots,x_N)\in\mathbb{C}^N$, we define the column operator $S^{[\alpha]}(\vec{x})\in End(W_1\otimes \cdots \otimes W_N)$ by specifying its components $(S^{[\alpha]}(\vec{x}))_{i_1\cdots i_N}^{j_1\cdots j_N}$ for any $(i_1,\cdots,i_N),(j_1,\cdots,j_N)\in\{0,1\}^N$. To define $(S^{[\alpha]}(\vec{x}))_{i_1\cdots i_N}^{j_1\cdots j_N}$, recall that we have identified $+$ spin with $0$ and $-$ spin with $1$. Consider a column of ordinary vertices whose spectral parameters are given by $x_1,\cdots,x_N$ from bottom to top. We also specify the boundary condition as follows: the top edge is labeled $\alpha$, the bottom edge is labeled $0$, the left edges are labeled $i_1,\cdots,i_N$ (from bottom to top), and the right edges are labeled $j_1,\cdots,j_N$ (from bottom to top). The component $(S^{[\alpha]}(\vec{x}))_{i_1\cdots i_N}^{j_1\cdots j_N}$ is defined as the partition function of this configuration. An illustration of this component is given below. 
\begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (-0.5,0.5) to (0.5,0.5); \draw (-0.5,1) to (0.5,1); \draw (-0.5,2) to (0.5,2); \draw (-0.5,2.5) to (0.5,2.5); \draw (0,0) to (0,3); \node at (-0.5,0.5) [anchor=east] {$i_1$}; \node at (-0.5,1) [anchor=east] {$i_2$}; \node at (-0.5,1.5) [anchor=east] {$\cdots$}; \node at (-0.5,2) [anchor=east] {$i_{N-1}$}; \node at (-0.5,2.5) [anchor=east] {$i_N$}; \node at (0.5,0.5) [anchor=west] {$j_1$}; \node at (0.5,1) [anchor=west] {$j_2$}; \node at (0.5,1.5) [anchor=west] {$\cdots$}; \node at (0.5,2) [anchor=west] {$j_{N-1}$}; \node at (0.5,2.5) [anchor=west] {$j_N$}; \node at (0,0) [anchor=north] {$0$}; \node at (0,3) [anchor=south] {$\alpha$}; \end{tikzpicture}\quad\quad \end{equation}
More generally, for any $\sigma\in S_N$, $\alpha\in\{0,1\}$, and $\vec{x}=(x_1,\cdots,x_N)\in\mathbb{C}^N$, we define the column operator $S^{[\alpha]}_{\sigma}(\vec{x})\in End(W_1\otimes \cdots \otimes W_N)$ by specifying its components $(S^{[\alpha]}_{\sigma}(\vec{x}))_{i_1\cdots i_N}^{j_1\cdots j_N}$ for any $(i_1,\cdots,i_N),(j_1,\cdots,j_N)\in \{0,1\}^N$. To define $(S^{[\alpha]}_{\sigma}(\vec{x}))_{i_1\cdots i_N}^{j_1\cdots j_N}$, consider a column of ordinary vertices whose spectral parameters are given by $x_1,\cdots,x_N$ from bottom to top. We also specify the boundary condition as follows: the top edge is labeled $\alpha$, the bottom edge is labeled $0$, the left edges are labeled $i_{\sigma(1)},\cdots, i_{\sigma(N)}$ (from bottom to top), and the right edges are labeled $j_{\sigma(1)},\cdots,j_{\sigma(N)}$ (from bottom to top). The component $(S^{[\alpha]}_{\sigma}(\vec{x}))_{i_1\cdots i_N}^{j_1\cdots j_N}$ is defined as the partition function of this configuration. An illustration of this component is given below.
\begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (-0.5,0.5) to (0.5,0.5); \draw (-0.5,1) to (0.5,1); \draw (-0.5,2) to (0.5,2); \draw (-0.5,2.5) to (0.5,2.5); \draw (0,0) to (0,3); \node at (-0.5,0.5) [anchor=east] {$i_{\sigma(1)}$}; \node at (-0.5,1) [anchor=east] {$i_{\sigma(2)}$}; \node at (-0.5,1.5) [anchor=east] {$\cdots$}; \node at (-0.5,2) [anchor=east] {$i_{\sigma(N-1)}$}; \node at (-0.5,2.5) [anchor=east] {$i_{\sigma(N)}$}; \node at (0.5,0.5) [anchor=west] {$j_{\sigma(1)}$}; \node at (0.5,1) [anchor=west] {$j_{\sigma(2)}$}; \node at (0.5,1.5) [anchor=west] {$\cdots$}; \node at (0.5,2) [anchor=west] {$j_{\sigma(N-1)}$}; \node at (0.5,2.5) [anchor=west] {$j_{\sigma(N)}$}; \node at (0,0) [anchor=north] {$0$}; \node at (0,3) [anchor=south] {$\alpha$}; \end{tikzpicture}\quad\quad \end{equation}
\subsection{Permutation graph}\label{Sect.3.3} In this subsection, we introduce the concept of ``permutation graph'' for free fermionic Boltzmann weights. The permutation graph is a generalization of the $R$-matrix. For any two permutations $\rho_1,\rho_2\in S_N$ and any vector of spectral parameters $\vec{x}=(x_1,\cdots,x_N)\in\mathbb{C}^N$, the ``permutation graph'' $R_{\rho_1}^{\rho_2}(\vec{x})$ is an element of $End(W_1\otimes \cdots \otimes W_N)$ as defined below. We first consider the case where $\rho_1=s_i=(i,i+1)$ and $\rho_2=id$ for some $1\leq i\leq N-1$. We let \begin{equation} R_{s_i}^{id}(\vec{x})=R_{i, i+1}(x_i,x_{i+1}).
\end{equation} More generally, for any $\rho_1\in S_N$, we let \begin{equation} R_{\rho_1}^{\rho_1}(\vec{x})=1,\quad R_{\rho_1\circ s_i}^{\rho_1}(\vec{x})=R_{\rho_1(i),\rho_1(i+1)}(x_{\rho_1(i)},x_{\rho_1(i+1)}), \end{equation} and recursively for any $\rho_1,\rho_2\in S_N$, \begin{equation} R_{\rho_1 \circ s_i}^{\rho_2}(\vec{x})=R_{\rho_1}^{\rho_2}(\vec{x}) R_{\rho_1\circ s_i}^{\rho_1}(\vec{x}). \end{equation} For general $\rho_1,\rho_2$, $R_{\rho_1}^{\rho_2}(\vec{x})$ can be constructed from the above definition. By the ``$RRR$'' Yang-Baxter equations and the unitarity relation (Propositions \ref{YBE2} and \ref{Unit}), $R_{\rho_1}^{\rho_2}(\vec{x})$ is well-defined.

As a simple example, consider the case where $N=4$, $\rho_1=id$, and $\rho_2=(132)$. Then for any $(i_1,\cdots,i_N),(j_1,\cdots,j_N)\in \{0,1\}^N$, the component $(R_{\rho_1}^{\rho_2}(\vec{x}))_{i_1\cdots i_N}^{j_1 \cdots j_N}$ is given by the partition function of the following configuration. As shown below, $S$ is $\Gamma\Gamma$ ice with spectral parameters $x_3,x_1$, and $T$ is $\Gamma\Gamma$ ice with spectral parameters $x_3,x_2$.
\begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,3) to (5,3); \draw (0,0) to [out=0,in=180] (4,2) to (5,2); \draw (0,1) to [out=0,in=180] (4,0) to (5,0); \draw (0,2) to [out=0,in=180] (4,1) to (5,1); \node at (0,0) [anchor=east] {$i_3$}; \node at (0,1) [anchor=east] {$i_1$}; \node at (0,2) [anchor=east] {$i_2$}; \node at (0,3) [anchor=east] {$i_4$}; \node at (5,0) [anchor=west] {$j_1$}; \node at (5,1) [anchor=west] {$j_2$}; \node at (5,2) [anchor=west] {$j_3$}; \node at (5,3) [anchor=west] {$j_4$}; \node at (4.5,3) [anchor=south] {$\Gamma,x_4$}; \node at (4.5,2) [anchor=south] {$\Gamma, x_3$}; \node at (4.5,1) [anchor=south] {$\Gamma,x_2$}; \node at (4.5,0) [anchor=south] {$\Gamma,x_1$}; \filldraw[black] (1.55,0.65) circle (2pt); \node at (1.6,0.65) [anchor=west] {$S$}; \filldraw[black] (2.4,1.3) circle (2pt); \node at (2.45,1.3) [anchor=west] {$T$}; \end{tikzpicture} \end{equation}
\subsection{$F$-matrix}\label{Sect.3.4} Based on the permutation graph, we construct the ``$F$-matrix'' as follows. The $F$-matrix introduced in this paper is inspired by the $F$-matrix (also called the ``factorization matrix'') introduced in \cite{MS} for the $U_q(\widehat{\mathfrak{sl}}_2)$ $R$-matrix, but works for free fermionic Boltzmann weights. First we set up some relevant notation. For any permutation $\rho\in S_N$, we define \begin{eqnarray}\label{I} I(\rho)&:=&\{(k_1,\cdots,k_N)\in \{0,1\}^N: 0\leq k_N\leq \cdots\leq k_1\leq 1; \text{ for any }1\leq t \leq N-1, \text{ if }\rho(t)>\rho(t+1),\nonumber \\ &&\quad \text{ then }k_t>k_{t+1}\}, \end{eqnarray} \begin{eqnarray}\label{I'} I'(\rho)&:=&\{(k_1,\cdots,k_N)\in\{0,1\}^N: 0\leq k_1\leq \cdots\leq k_N\leq 1; \text{ for any }1\leq t \leq N-1, \text{ if }\rho(t)<\rho(t+1),\nonumber\\ &&\quad \text{ then }k_t<k_{t+1}\}. \end{eqnarray} As a simple example, consider the case where $N=2$. When $\rho=id$, we have $I(\rho)=\{(0,0),(1,0),(1,1)\}$ and $I'(\rho)=\{(0,1)\}$. When $\rho=(12)$, we have $I(\rho)=\{(1,0)\}$ and $I'(\rho)=\{(0,0),(0,1),(1,1)\}$.
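Both sets are straightforward to enumerate by machine. The following short Python sketch (ours, for illustration only; it is not part of the construction) implements the definitions (\ref{I}) and (\ref{I'}) literally, encoding a permutation $\rho$ as the tuple $(\rho(1),\cdots,\rho(N))$, and reproduces the $N=2$ example above.
\begin{verbatim}
from itertools import product

def I_set(rho):
    # definition (I): k_N <= ... <= k_1, and rho(t) > rho(t+1) forces k_t > k_{t+1}
    N = len(rho)
    return [k for k in product((0, 1), repeat=N)
            if all(k[t] >= k[t + 1] for t in range(N - 1))
            and all(k[t] > k[t + 1] for t in range(N - 1) if rho[t] > rho[t + 1])]

def Iprime_set(rho):
    # definition (I'): k_1 <= ... <= k_N, and rho(t) < rho(t+1) forces k_t < k_{t+1}
    N = len(rho)
    return [k for k in product((0, 1), repeat=N)
            if all(k[t] <= k[t + 1] for t in range(N - 1))
            and all(k[t] < k[t + 1] for t in range(N - 1) if rho[t] < rho[t + 1])]

assert I_set((1, 2)) == [(0, 0), (1, 0), (1, 1)] and Iprime_set((1, 2)) == [(0, 1)]
assert I_set((2, 1)) == [(1, 0)] and Iprime_set((2, 1)) == [(0, 0), (0, 1), (1, 1)]
\end{verbatim}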
Now we define the $F$-matrices $F(\vec{x})=F_{1\cdots N}(\vec{x}),F^{*}(\vec{x})=F^{*}_{1\cdots N}(\vec{x})\in End(W_1 \otimes \cdots \otimes W_N)$ by \begin{equation} F(\vec{x}):=\sum_{\rho\in S_N}\sum_{(k_1,\cdots,k_N)\in I(\rho)}\prod_{i=1}^N E_{\rho(i)}^{(k_i,k_i)} R_{id}^{\rho}(\vec{x}), \end{equation} \begin{equation} F^{*}(\vec{x}):=\sum_{\rho\in S_N}\sum_{(k_1,\cdots,k_N)\in I'(\rho)}R_{\rho}^{id}(\vec{x})\prod_{i=1}^N E_{\rho(i)}^{(k_i,k_i)}. \end{equation} Again, consider the case where $N=2$. We have \begin{equation*} F(x_1,x_2)=\sum_{(k_1,k_2)\in I(id)}E_1^{(k_1,k_1)}E_2^{(k_2,k_2)}+E_2^{(1,1)}E_1^{(0,0)}R_{id}^{(12)}(x_1,x_2). \end{equation*} We order the basis vectors of $W_1\otimes W_2$ as $|0,0\rangle,|0,1\rangle,|1,0\rangle,|1,1\rangle$. Note that \begin{equation*} R_{id}^{(12)}(x_1,x_2)=\begin{bmatrix} a_1(x_2,x_1) & & & \\ & b_2(x_2,x_1) & c_1(x_2,x_1) &\\ & c_2(x_2,x_1) & b_1(x_2,x_1) &\\ &&& a_2(x_2,x_1) \end{bmatrix}. \end{equation*} Hence we have \begin{equation}\label{e1} F(x_1,x_2)= \begin{bmatrix} 1 & & & \\ & b_2(x_2,x_1) & c_1(x_2,x_1) &\\ & 0 & 1 &\\ &&&1 \end{bmatrix}. \end{equation} Similarly, \begin{equation*} F^{*}(x_1,x_2)=E_1^{(0,0)}E_2^{(1,1)}+\sum_{(k_1,k_2)\in I'((12))}R_{(12)}^{id}(x_1,x_2)E_2^{(k_1,k_1)}E_1^{(k_2,k_2)}, \end{equation*} which leads to \begin{equation}\label{e2} F^{*}(x_1,x_2)=\begin{bmatrix} a_1(x_1,x_2) & &&\\ & 1 & c_2(x_1,x_2) \\ & 0 & b_2(x_1,x_2) \\ &&&a_2(x_1,x_2) \end{bmatrix}. \end{equation} Using the relations \begin{equation*} b_2(x_2,x_1)c_2(x_1,x_2)+c_1(x_2,x_1)b_2(x_1,x_2)=0,\quad a_1(x_1,x_2)=1, \end{equation*} we have \begin{equation}\label{Eq} F(x_1,x_2) F^{*}(x_1,x_2)=\begin{bmatrix} 1 &&&\\ & b_2(x_2,x_1) &&\\ && b_2(x_1,x_2)&\\ &&&a_2(x_1,x_2) \end{bmatrix}. \end{equation} More generally, for any $\sigma\in S_N$, we can define $F_{\sigma(1)\cdots\sigma(N)}(x_1,\cdots,x_N)$ as follows. We define the permutation operator $P_{1\cdots N}^{\sigma}$ by \begin{equation*} P_{1\cdots N}^{\sigma}|i_1,\cdots,i_N\rangle=|i_{\sigma^{-1}(1)},\cdots, i_{\sigma^{-1}(N)}\rangle. \end{equation*} Now we define \begin{equation} F_{\sigma(1)\cdots\sigma(N)}(x_1,\cdots,x_N)=P_{1\cdots N}^{\sigma} F_{1\cdots N}(x_1,\cdots,x_N) P_{1\cdots N}^{\sigma^{-1}}. \end{equation} When $N=2$, we have \begin{equation}\label{e3} F_{21}(x_2,x_1)=\begin{bmatrix} 1 &&&\\ &1&0&\\ &c_1(x_1,x_2)&b_2(x_1,x_2)&\\ &&&1 \end{bmatrix}. \end{equation} Hereafter, we may omit the argument $\vec{x}$ from the $F$-matrix when it is clear from the context. \subsection{Basic properties of the $F$-matrix}\label{Sect.3.5} In this subsection, we derive some basic properties of the $F$-matrix introduced in Section \ref{Sect.3.4}. The following proposition gives the inverse $F^{-1}$ of $F$ in terms of $F^{*}$. \begin{proposition}\label{P1} $\Delta:=F F^{*}$ is a diagonal matrix. The diagonal entries of $\Delta$ are given by \begin{eqnarray*} (\Delta)_{i_1\cdots i_N}^{i_1\cdots i_N}=\prod_{(a,b):i_a=1,i_b=0} b_2(x_a,x_b)\prod_{(a,b):a<b,i_a=1,i_b=1}a_2(x_a,x_b) \end{eqnarray*} for every $(i_1,\cdots,i_N)\in\{0,1\}^N$. \end{proposition} \begin{remark} The special case where $N=2$ is computed in (\ref{Eq}). \end{remark} \begin{remark} The proposition implies that $F^{-1}=F^{*}\Delta^{-1}$ if $\Delta$ is invertible. \end{remark} \begin{proof} Let $\sigma_0\in S_N$ be defined by $\sigma_0(i)=N+1-i$ for every $1\leq i\leq N$. 
By elementary computations (similar to those in \cite[Proposition 3.2]{ABFR}), we obtain that \begin{equation}\label{E1} \Delta=F F^{*}=\sum_{\rho\in S_N}\sum_{(k_1,\cdots,k_N)\in I(\rho)}\prod_{i=1}^N E_{\rho(i)}^{(k_i,k_i)} R_{\rho\sigma_0}^{\rho}(\vec{x})\prod_{i=1}^N E_{\rho(i)}^{(k_i,k_i)}. \end{equation} This shows that $\Delta$ is a diagonal matrix. Now we compute the component $(\Delta)_{i_1\cdots i_N}^{i_1\cdots i_N}$ for every $(i_1,\cdots,i_N)\in\{0,1\}^N$. For any $\rho\in S_N$, the only element $(k_1,\cdots,k_N)\in I(\rho)$ in the sum of the right hand side of (\ref{E1}) that contributes to $(\Delta)_{i_1\cdots i_N}^{i_1\cdots i_N}$ is determined by $k_t=i_{\rho(t)}$ for every $1\leq t\leq N$. From the definition of $I(\rho)$ in (\ref{I}), in order for this term to be non-vanishing, we must have \begin{eqnarray}\label{E2} && 0\leq i_{\rho(N)}\leq \cdots \leq i_{\rho(1)}\leq 1, \nonumber \\ && i_{\rho(t)}=i_{\rho(t+1)}\text{ implies }\rho(t)<\rho(t+1), \text{ for every }1\leq t\leq N-1. \end{eqnarray} There is a unique permutation, denoted by $\rho(i_1,\cdots,i_N)\in S_N$, that satisfies the condition (\ref{E2}). Therefore we conclude that \begin{equation}\label{E3} (\Delta)_{i_1\cdots i_N}^{i_1\cdots i_N}=(R_{\rho(i_1,\cdots,i_N)\sigma_0}^{\rho(i_1,\cdots,i_N)}(\vec{x}))_{i_1\cdots i_N}^{i_1\cdots i_N}. \end{equation} To compute the right hand side of (\ref{E3}), we note that there is only one admissible state for the corresponding permutation graph, as indicated below (in which $\rho=\rho(i_1,\cdots,i_N)$). \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,3) to [out=0, in=120] (1.5,1.5) to [out=-60, in=180] (5,0); \draw (0,0) to [out=0,in=-120] (3.5,1.5) to [out=60, in=180] (5,3); \draw (0,1) to [out=0,in=180] (5,2); \draw (0,2) to [out=0,in=180] (5,1); \node at (0,0.5) [anchor=east] {$\cdots$}; \node at (0,0) [anchor=east] {$i_{\rho(1)}=1$}; \node at (0,1) [anchor=east] {$i_{\rho(s)}=1$}; \node at (0,2) [anchor=east] {$i_{\rho(s+1)}=0$}; \node at (0,2.5) [anchor=east] {$\cdots$}; \node at (0,3) [anchor=east] {$i_{\rho(N)}=0$}; \node at (5,0) [anchor=west] {$i_{\rho(N)}=0$}; \node at (5,0.5) [anchor=west] {$\cdots$}; \node at (5,1) [anchor=west] {$i_{\rho(s+1)}=0$}; \node at (5,2.5) [anchor=west] {$\cdots$}; \node at (5,2) [anchor=west] {$i_{\rho(s)}=1$}; \node at (5,3) [anchor=west] {$i_{\rho(1)}=1$}; \node at (2,1.6) [anchor=south] {$0$}; \node at (3,1.6) [anchor=south] {$1$}; \node at (1.5,1.8) [anchor=north east] {$0$}; \node at (3.5,1.8) [anchor=north west] {$1$}; \node at (2.2,1.4) [anchor=north] {$1$}; \node at (2.8,1.4) [anchor=north ] {$0$}; \node at (1.9,0.9) [anchor=north] {$0$}; \node at (3.1,0.9) [anchor=north ] {$1$}; \end{tikzpicture}\quad\quad \end{equation} The Boltzmann weight of the unique admissible state is (note that there are only crossings of $a_1$, $a_2$, or $b_2$ patterns, as can be seen from the above figure) \begin{equation} \prod_{(a,b): i_a=1,i_b=0}b_2(x_a,x_b)\prod_{(a,b):a<b,i_a=1,i_b=1}a_2(x_a,x_b). \end{equation} Hence for every $(i_1,\cdots,i_N)\in\{0,1\}^N$, we have \begin{eqnarray*} (\Delta)_{i_1\cdots i_N}^{i_1\cdots i_N}=\prod_{(a,b):i_a=1,i_b=0} b_2(x_a,x_b)\prod_{(a,b):a<b,i_a=1,i_b=1}a_2(x_a,x_b). \end{eqnarray*} \end{proof} Using similar arguments, we obtain the following proposition on the components of $F$ and $F^{*}$.
\begin{proposition}\label{P2} For any given $(i_1,\cdots,i_N)\in\{0,1\}^N$, let $\rho=\rho(i_1,\cdots,i_N)\in S_N$ be the unique permutation determined by the following condition \begin{eqnarray*} && 0\leq i_{\rho(N)}\leq \cdots \leq i_{\rho(1)} \leq 1,\\ && i_{\rho(t)}=i_{\rho(t+1)}\text{ implies }\rho(t)<\rho(t+1), \text{ for every } 1\leq t\leq N-1. \end{eqnarray*} Similarly, for any given $(j_1,\cdots,j_N)\in \{0,1\}^N$, let $\rho^{*}=\rho^{*}(j_1,\cdots,j_N)\in S_N$ be the unique permutation determined by the following condition \begin{eqnarray*} && 0\leq j_{\rho^{*}(1)}\leq \cdots \leq j_{\rho^{*}(N)} \leq 1,\\ && j_{\rho^{*}(t)}=j_{\rho^{*}(t+1)}\text{ implies }\rho^{*}(t)>\rho^{*}(t+1), \text{ for every }1\leq t\leq N-1. \end{eqnarray*} Then for any $(i_1,\cdots,i_N),(j_1,\cdots,j_N)\in\{0,1\}^N$, the components of $F$ and $F^{*}$ are given by \begin{equation} (F)_{i_1\cdots i_N}^{j_1\cdots j_N}=(R_{id}^{\rho})_{i_1\cdots i_N}^{j_1 \cdots j_N}, \end{equation} \begin{equation} (F^{*})_{i_1 \cdots i_N}^{j_1 \cdots j_N}=(R_{\rho^{*}}^{id})_{i_1\cdots i_N}^{j_1 \cdots j_N}. \end{equation} \end{proposition} \begin{remark} For the special case where $N=2$, this can be directly checked from (\ref{e1}) and (\ref{e2}). \end{remark} \section{Ice model related to Tokuyama's formula}\label{Sect.4} In this section, based on the concepts introduced in Section \ref{Sect.3}, we apply the method outlined in Section \ref{Sect.1.2} to give a new derivation of the partition function of the lattice model in \cite{BBF}. As discussed in the Introduction, computing the partition function of this lattice model leads to an alternative proof of Tokuyama's formula \cite{Tok}. In Section \ref{Sect.4.1}, we review the setup of the model in \cite{BBF}. Then we present the computation of its partition function in Section \ref{Sect.4.2}. \subsection{The lattice model}\label{Sect.4.1} We review the lattice model in \cite{BBF} as follows. Let $\lambda=(\lambda_1,\cdots,\lambda_N)$ be a given partition (meaning that $\lambda_1\geq \cdots\geq\lambda_N\geq 0$), and $\rho=(N-1,\cdots,0)$. Consider a rectangular lattice with $N$ rows and $\lambda_1+N$ columns. The columns are labeled $0,1,\cdots,\lambda_1+N-1$ from right to left, and the rows are labeled $1,2,\cdots,N$ from bottom to top. Note that our notation differs from that of \cite{BBF} in that the ordering of the rows is reversed. The vertices are all $\Gamma$ ice, and the spectral parameter of the vertices in the $i$th row is given by $z_i$ for every $1\leq i\leq N$. The boundary conditions are specified as follows. On the left and bottom boundaries we assign $0$ ($+$ spin). On the right boundary we assign $1$ ($-$ spin). On the top boundary, we assign $1$ to every column labeled $\lambda_i+N-i$ for $1\leq i\leq N$, and $0$ to the rest of the columns. Let $z=(z_1,\cdots,z_N)$. We denote by $Z(\mathcal{S}_{\lambda,z})$ the partition function of the above lattice model.
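For concreteness, the top-boundary data can be generated mechanically from $\lambda$. The following minimal Python sketch (the variable names are ours, chosen only for illustration) lists, for each column $j=0,1,\cdots,\lambda_1+N-1$, the spin assigned to its top edge; for $N=3$, $\lambda=(3,1,0)$ it reproduces the top row of the illustration that follows.
\begin{verbatim}
def top_boundary(lam):
    """Top boundary of the lattice: the columns lambda_i + N - i
    (labeled from the right, starting at 0) carry spin 1."""
    N = len(lam)
    ones = {lam[i] + N - 1 - i for i in range(N)}   # 0-based i here
    ncols = lam[0] + N
    # list m_0, m_1, ..., m_{lambda_1 + N - 1}; column j is counted
    # from the right boundary
    return [1 if j in ones else 0 for j in range(ncols)]

# N = 3, lambda = (3,1,0): columns 5, 2, 0 carry spin 1.
print(top_boundary((3, 1, 0)))   # [1, 0, 1, 0, 0, 1]
\end{verbatim}
This list, read with the column labels $0,1,\cdots,\lambda_1+N-1$, is exactly the data $m_j$ that enters the column-operator expression (\ref{E4}) in Section \ref{Sect.4.2}.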
As an illustration, when $N=3$ and $\lambda=(3,1,0)$, the model configuration is given below: \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,1)--(7,1); \draw (0,2)--(7,2); \draw (0,3)--(7,3); \draw (1,0.5)--(1,3.5); \draw (2,0.5)--(2,3.5); \draw (3,0.5)--(3,3.5); \draw (4,0.5)--(4,3.5); \draw (5,0.5)--(5,3.5); \draw (6,0.5)--(6,3.5); \filldraw[black] (1,1) circle (1pt); \filldraw[black] (2,1) circle (1pt); \filldraw[black] (3,1) circle (1pt); \filldraw[black] (4,1) circle (1pt); \filldraw[black] (5,1) circle (1pt); \filldraw[black] (6,1) circle (1pt); \filldraw[black] (3,2) circle (1pt); \filldraw[black] (2,2) circle (1pt); \filldraw[black] (1,2) circle (1pt); \filldraw[black] (4,2) circle (1pt); \filldraw[black] (5,2) circle (1pt); \filldraw[black] (6,2) circle (1pt); \filldraw[black] (1,3) circle (1pt); \filldraw[black] (2,3) circle (1pt); \filldraw[black] (3,3) circle (1pt); \filldraw[black] (4,3) circle (1pt); \filldraw[black] (5,3) circle (1pt); \filldraw[black] (6,3) circle (1pt); \node at (0,1) [anchor=east] {$0$}; \node at (0,2) [anchor=east] {$0$}; \node at (0,3) [anchor=east] {$0$}; \node at (7,1) [anchor=west] {$1$}; \node at (7,2) [anchor=west] {$1$}; \node at (7,3) [anchor=west] {$1$}; \node at (-0.5,1) [anchor=east] {$1$}; \node at (-0.5,2) [anchor=east] {$2$}; \node at (-0.5,3) [anchor=east] {$3$}; \node at (-1,1) [anchor=east] {row}; \node at (0.5,1) [anchor=south] {$\Gamma$}; \node at (0.5,2) [anchor=south] {$\Gamma$}; \node at (0.5,3) [anchor=south] {$\Gamma$}; \node at (6.5,1) [anchor=south] {$z_1$}; \node at (6.5,2) [anchor=south] {$z_2$}; \node at (6.5,3) [anchor=south] {$z_3$}; \node at (1,3.5) [anchor=south] {$1$}; \node at (2,3.5) [anchor=south] {$0$}; \node at (3,3.5) [anchor=south] {$0$}; \node at (4,3.5) [anchor=south] {$1$}; \node at (5,3.5) [anchor=south] {$0$}; \node at (6,3.5) [anchor=south] {$1$}; \node at (1,4) [anchor=south] {$5$}; \node at (2,4) [anchor=south] {$4$}; \node at (3,4) [anchor=south] {$3$}; \node at (4,4) [anchor=south] {$2$}; \node at (5,4) [anchor=south] {$1$}; \node at (6,4) [anchor=south] {$0$}; \node at (0,4) [anchor=south] {column}; \node at (1,0.5) [anchor=north] {$0$}; \node at (2,0.5) [anchor=north] {$0$}; \node at (3,0.5) [anchor=north] {$0$}; \node at (4,0.5) [anchor=north] {$0$}; \node at (5,0.5) [anchor=north] {$0$}; \node at (6,0.5) [anchor=north] {$0$}; \end{tikzpicture} \end{equation} \subsection{Computation of the partition function}\label{Sect.4.2} In this subsection, we use the method outlined in Section \ref{Sect.1.2} and the concepts introduced in Section \ref{Sect.3} to provide a new derivation of the partition function of the lattice model as reviewed in Section \ref{Sect.4.1}. The partition function was originally derived in \cite{BBF} using a different approach, which is based on combinatorics of Gelfand-Tsetlin patterns. The main result is the following theorem. \begin{theorem}\label{Theorem1} \begin{equation} Z(\mathcal{S}_{\lambda,z})=\prod_{1\leq i< j\leq N}(z_j-vz_i)s_{\lambda}(z_1,\cdots,z_N), \end{equation} where $s_{\lambda}(z_1,\cdots,z_N)$ is the Schur polynomial. \end{theorem} \paragraph{Outline of the proof.} Instead of writing the partition function in terms of the product of row transfer matrices as usual, we write it in terms of \underline{column operators} as introduced in Section \ref{Sect.3.1}. See (\ref{E4}) below for the concrete expression.
Then, using the $F$-matrix as introduced in Section \ref{Sect.3.4} (which is based on the permutation graph introduced in Section \ref{Sect.3.3}), we conjugate each column operator by the $F$-matrix, as in (\ref{Par}) below. Thanks to the conjugation procedure, the components of the \underline{conjugated column operators} have simple explicit forms as given in Proposition \ref{P3}. The computation of these components is based on basic properties of the $F$-matrix as established in Section \ref{Sect.3.5}, as well as the Yang-Baxter equations and properties of permutation graphs. Finally, based on these components, we can write the partition function as a sum over the symmetric group $S_N$, which leads to the proof of Theorem \ref{Theorem1}. For all computations below, we implicitly assume that every quantity appearing in a denominator is non-zero; the general case then follows by varying $v$ and $z_1,\cdots,z_N$ and using continuity. Let $\vec{x}=(z_1,\cdots,z_N)$. Below we omit the argument $\vec{x}$ from the notation $S^{[\alpha]}(\vec{x})$. To compute the partition function $Z(\mathcal{S}_{\lambda,z})$, we first note that it can be written in terms of column operators \begin{equation}\label{E4} Z(\mathcal{S}_{\lambda,z})=\langle 0,\cdots,0|S^{[m_{\lambda_1+N-1}]}\cdots S^{[m_0]}|1,\cdots,1\rangle, \end{equation} where for every $0\leq j\leq \lambda_1+N-1$, $m_j=1$ if $j\in\{\lambda_i+N-i:i\in\{1,\cdots,N\}\}$ and $m_j=0$ otherwise. Here, $\langle 0,\cdots,0|$ and $|1,\cdots,1\rangle$ are determined from the left and right boundary conditions. Note that by Proposition \ref{P1}, we have \begin{equation} F^{-1}=F^{*}\Delta^{-1}. \end{equation} Hence we can write (\ref{E4}) in the following form by conjugating each column operator by the $F$-matrix: \begin{equation}\label{Par} Z(\mathcal{S}_{\lambda,z})=(\langle 0,\cdots,0| F^{-1}) (F S^{[m_{\lambda_1+N-1}]}F^{-1}) \cdots (F S^{[m_0]}F^{-1}) (F|1,\cdots,1\rangle). \end{equation} Therefore, it suffices to compute $\langle 0,\cdots,0| F^{-1}$, $F|1,\cdots,1\rangle$, and the components of the conjugated column operators $F S^{[m_{j}]}F^{-1}$ for $0\leq j\leq \lambda_1+N-1$. The motivation for introducing these conjugations is that the components of the conjugated operators can be explicitly computed. The following proposition explicitly computes $\langle 0,\cdots,0| F^{-1}$, $F|1,\cdots,1\rangle$, and the components of $F S^{[m]}F^{-1}$ for $m\in\{0,1\}$. \begin{proposition}\label{P3} We have \begin{equation} \langle 0,\cdots,0 | F^{-1}=\langle 0,\cdots,0|, \end{equation} \begin{equation} F|1,\cdots,1\rangle=|1,\cdots,1\rangle. \end{equation} For any $(i_1,\cdots,i_N),(j_1,\cdots,j_N)\in \{0,1\}^N$, we have \begin{equation} (F S^{[0]} F^{-1})_{i_1\cdots i_N}^{j_1 \cdots j_N}=\prod_{t=1}^N \mathbbm{1}_{i_t=j_t}\prod_{t: i_t=1}z_t, \end{equation} \begin{eqnarray} (F S^{[1]} F^{-1})_{i_1\cdots i_N}^{j_1\cdots j_N}&=&\sum_{m=1}^N \mathbbm{1}_{i_m=0,j_m=1}\prod_{t:1\leq t\leq N, t\neq m} \mathbbm{1}_{i_t=j_t}\prod_{t: i_t=1,j_t=1}z_t\nonumber\\ &&\times \prod_{t:t>m,i_t=1,j_t=1}\frac{z_t-v z_m}{z_m-vz_t}\prod_{t:i_t=0,j_t=0}\frac{z_t-v z_m}{z_m-z_t}.
\end{eqnarray} Equivalently, we have \begin{equation} F S^{[0]} F^{-1}=\bigotimes_{t\in\{1,\cdots,N\}}\begin{pmatrix} 1 & 0\\ 0 & z_t \end{pmatrix}_t, \end{equation} \begin{eqnarray} F S^{[1]} F^{-1}=\sum_{m=1}^N \bigotimes_{t\in \{1,\cdots,m-1\}} \begin{pmatrix} \frac{z_t-vz_m}{z_m-z_t} & 0\\ 0 &z_t \end{pmatrix}_t \bigotimes \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}_m \bigotimes_{t\in\{m+1,\cdots,N\}} \begin{pmatrix} \frac{z_t-vz_m}{z_m-z_t} & 0\\ 0 &\frac{z_t(z_t-vz_m)}{z_m-vz_t} \end{pmatrix}_t, \end{eqnarray} where the basis vectors of each $W_t$ for $t\in\{1,\cdots,N\}$ are ordered as $|0\rangle$, $|1\rangle$. \end{proposition} \begin{proof} We start with the computation of $\langle 0,\cdots,0 | F^{-1}$. For any $(j_1,\cdots,j_N)\in\{0,1\}^N$, as $F^{-1}=F^{*}\Delta^{-1}$ and $\Delta$ is a diagonal matrix, we have \begin{eqnarray*} (\langle 0,\cdots,0| F^{-1} )^{j_1\cdots j_N}&=& \langle 0,\cdots,0| F^{*}\Delta^{-1} |j_1,\cdots,j_N\rangle\\ &=& (F^{*})_{0\cdots 0}^{j_1\cdots j_N}(\Delta^{-1})_{j_1\cdots j_N}^{j_1\cdots j_N}. \end{eqnarray*} By Proposition \ref{P1}, \begin{eqnarray}\label{Delt} (\Delta^{-1})_{j_1\cdots j_N}^{j_1\cdots j_N} &=& \prod_{(a,b):j_a=1,j_b=0} b_2(z_a,z_b)^{-1} \prod_{(a,b):a<b,j_a=1,j_b=1}a_2(z_a,z_b)^{-1} \nonumber\\ &=& \prod_{(a,b):j_a=1, j_b=0}\frac{z_b-v z_a}{z_a-z_b} \prod_{(a,b): a<b,j_a=1,j_b=1}\frac{z_b-vz_a}{z_a-vz_b}. \end{eqnarray} By Proposition \ref{P2}, \begin{equation} (F^{*})_{0\cdots 0}^{j_1\cdots j_N}=(R_{\rho^{*}}^{id})_{0\cdots 0}^{j_1\cdots j_N}, \end{equation} where $\rho^{*}\in S_N$ is the unique permutation determined by \begin{eqnarray}\label{Con2} && 0\leq j_{\rho^{*}(1)}\leq\cdots \leq j_{\rho^{*}(N)}\leq 1,\nonumber\\ && j_{\rho^{*}(t)}=j_{\rho^{*}(t+1)}\text{ implies }\rho^{*}(t)>\rho^{*}(t+1), \text{ for every }1\leq t\leq N-1. \end{eqnarray} By the definition of permutation graph and the Boltzmann weights of the $R$-vertices, in order for $(F^{*})_{0\cdots 0}^{j_1\cdots j_N}$ to be non-vanishing, we necessarily have \begin{equation} j_1=\cdots=j_N=0. \end{equation} In this case, $(F^{*})_{0\cdots 0}^{0\cdots 0}=1$, $(\Delta^{-1})_{0\cdots 0}^{0\cdots 0}=1$. Therefore we conclude that \begin{equation} \langle 0,\cdots,0|F^{-1}=\langle 0,\cdots,0|. \end{equation} Now we compute $F|1,\cdots,1\rangle$. For any $(i_1,\cdots,i_N)\in\{0,1\}^N$, we have by Proposition \ref{P2}, \begin{eqnarray*} (F|1,\cdots,1\rangle)_{i_1\cdots i_N}= (F)_{i_1\cdots i_N}^{1\cdots 1} =(R_{id}^{\rho})_{i_1\cdots i_N}^{1\cdots 1}, \end{eqnarray*} where $\rho\in S_N$ is the unique permutation that satisfies the condition \begin{eqnarray}\label{Con1} && 0\leq i_{\rho(N)}\leq \cdots \leq i_{\rho(1)} \leq 1,\nonumber\\ && i_{\rho(t)}=i_{\rho(t+1)}\text{ implies }\rho(t)<\rho(t+1), \text{ for every } 1\leq t\leq N-1. \end{eqnarray} In order for $(F)_{i_1\cdots i_N}^{1\cdots 1}$ to be non-vanishing, necessarily \begin{equation} i_1=\cdots=i_N=1. \end{equation} In this case, $\rho=id$, and $(F)_{1\cdots 1}^{1\cdots 1}=1$. Therefore we conclude that \begin{equation} F|1,\cdots,1\rangle=|1,\cdots,1\rangle. \end{equation} Finally, we compute the components of $F S^{[\alpha]} F^{-1}$ for $\alpha\in \{0,1\}$.
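Before carrying out this computation, we record a quick numerical sanity check: the tensor-product forms stated in the proposition, substituted into (\ref{Par}), can be compared directly against the closed formula of Theorem \ref{Theorem1} for small $N$. The following minimal Python sketch (all function and variable names are ours, introduced only for this illustration; generic numerical values are used for $v$ and $z_1,\cdots,z_N$) performs this comparison.
\begin{verbatim}
import numpy as np

def conj_S(alpha, z, v):
    """F S^[alpha] F^{-1} for alpha in {0,1}, in the tensor-product
    form of Proposition P3 (site 1 = leftmost Kronecker factor)."""
    N = len(z)
    if alpha == 0:
        M = np.ones((1, 1), dtype=complex)
        for t in range(N):
            M = np.kron(M, np.diag([1.0, z[t]]).astype(complex))
        return M
    total = 0
    for m in range(N):
        M = np.ones((1, 1), dtype=complex)
        for t in range(N):
            if t == m:
                blk = np.array([[0, 1], [0, 0]], dtype=complex)
            elif t < m:
                blk = np.diag([(z[t] - v*z[m]) / (z[m] - z[t]), z[t]])
            else:
                blk = np.diag([(z[t] - v*z[m]) / (z[m] - z[t]),
                               z[t]*(z[t] - v*z[m]) / (z[m] - v*z[t])])
            M = np.kron(M, blk.astype(complex))
        total = total + M
    return total

def Z_lattice(lam, z, v):
    """Partition function via (E4)/(Par):
    <0,...,0| prod_j (F S^[m_j] F^{-1}) |1,...,1>."""
    N = len(z)
    ones = {lam[i] + N - 1 - i for i in range(N)}
    vec = np.zeros(2**N, dtype=complex)
    vec[-1] = 1.0                                 # |1,...,1>
    for j in range(lam[0] + N):                   # apply S^[m_0] first
        vec = conj_S(1 if j in ones else 0, z, v) @ vec
    return vec[0]                                 # <0,...,0| component

def Z_formula(lam, z, v):
    """prod_{i<j} (z_j - v z_i) * s_lambda(z), with the Schur polynomial
    evaluated via the bialternant (ratio of determinants) formula."""
    N = len(z)
    num = np.linalg.det(np.array([[z[a]**(lam[b] + N - 1 - b)
                                   for b in range(N)] for a in range(N)]))
    den = np.linalg.det(np.array([[z[a]**(N - 1 - b)
                                   for b in range(N)] for a in range(N)]))
    pref = np.prod([z[b] - v*z[a] for a in range(N)
                    for b in range(a + 1, N)])
    return pref * num / den

rng = np.random.default_rng(0)
z = rng.uniform(0.5, 1.5, size=3)
v = 0.3
# the two printed values agree up to rounding, illustrating Theorem 1
print(Z_lattice((3, 1, 0), z, v), Z_formula((3, 1, 0), z, v))
\end{verbatim}
We now return to the proof of the proposition.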
For any $(i_1,\cdots,i_N),(j_1,\cdots,j_N)\in\{0,1\}^N$, by Proposition \ref{P2}, we have \begin{eqnarray*} (F S^{[\alpha]} F^{-1})_{i_1\cdots i_N}^{j_1\cdots j_N} &=& (F S^{[\alpha]} F^{*})_{i_1\cdots i_N}^{j_1\cdots j_N} (\Delta^{-1})_{j_1\cdots j_N}^{j_1\cdots j_N}\\ &=& (R_{id}^{\rho} S^{[\alpha]} R_{\rho^{*}}^{id})_{i_1\cdots i_N}^{j_1\cdots j_N} (\Delta^{-1})_{j_1\cdots j_N}^{j_1 \cdots j_N}, \end{eqnarray*} where $\rho,\rho^{*}$ are determined by the conditions (\ref{Con1}) and (\ref{Con2}), respectively. We note that $(\Delta^{-1})_{j_1\cdots j_N}^{j_1 \cdots j_N}$ is already computed in (\ref{Delt}). Now we compute the component $(R_{id}^{\rho} S^{[\alpha]} R_{\rho^{*}}^{id})_{i_1\cdots i_N}^{j_1\cdots j_N}$. By the Yang-Baxter equations (the ``$RTT$'' version, see Proposition \ref{YBE1}), we can sequentially push the $R$-braids to the right, which leads to \begin{eqnarray}\label{P} (R_{id}^{\rho} S^{[\alpha]} R_{\rho^{*}}^{id})_{i_1\cdots i_N}^{j_1\cdots j_N}&=& (S^{[\alpha]}_{\rho}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{id}^{\rho} R_{\rho^{*}}^{id})_{i_{1}\cdots i_{N}}^{j_1\cdots j_N}\nonumber\\ &=& (S^{[\alpha]}_{\rho}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{\rho^{*}}^{\rho})_{i_{1}\cdots i_{N}}^{j_1\cdots j_N}. \end{eqnarray} An illustration of the permutation graph corresponding to the right hand side of (\ref{P}) is given in the following figure. \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (-0.5,0) to (0.5,0); \draw (-0.5,1) to (0.5,1); \draw (-0.5,2) to (0.5,2); \draw (-0.5,3) to (0.5,3); \draw (0,-0.5) to (0,3.5); \node at (0,3.5) [anchor=south] {$\alpha$}; \node at (0,-0.5) [anchor=north] {$0$}; \draw (0.5,3) to [out=0, in=180] (5,0); \draw (0.5,0) to [out=0, in=180] (5,1); \draw (0.5,1) to [out=0,in=180] (5,3); \draw (0.5,2) to [out=0,in=180] (5,2); \node at (-0.5,1.5) [anchor=east] {$\cdots$}; \node at (-0.5,0) [anchor=east] {$i_{\rho(1)}$}; \node at (-0.5,1) [anchor=east] {$i_{\rho(2)}$}; \node at (-0.5,2) [anchor=east] {$i_{\rho(N-1)}$}; \node at (-0.5,3) [anchor=east] {$i_{\rho(N)}$}; \node at (5,0) [anchor=west] {$j_{\rho^{*}(1)}$}; \node at (5,1.5) [anchor=west] {$\cdots$}; \node at (5,1) [anchor=west] {$j_{\rho^{*}(2)}$}; \node at (5,2) [anchor=west] {$j_{\rho^{*}(N-1)}$}; \node at (5,3) [anchor=west] {$j_{\rho^{*}(N)}$}; \filldraw[black] (0.5,0) circle (1pt); \filldraw[black] (0.5,1) circle (1pt); \filldraw[black] (0.5,2) circle (1pt); \filldraw[black] (0.5,3) circle (1pt); \filldraw[black] (0,0) circle (1pt); \filldraw[black] (0,1) circle (1pt); \filldraw[black] (0,2) circle (1pt); \filldraw[black] (0,3) circle (1pt); \end{tikzpicture}\quad\quad \end{equation} Now we compute the components $(S_{\rho}^{[\alpha]}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{\rho^{*}}^{\rho})_{i_{1}\cdots i_{N}}^{j_1\cdots j_N}$. We consider the two cases $\alpha=0$ and $\alpha=1$ separately. \textbf{First consider the case where $\alpha=0$.} By the definitions of $\rho, \rho^{*}$ and spin conservation, there exists $0\leq s \leq N$, such that \begin{eqnarray*} && i_{\rho(1)}=\cdots=i_{\rho(s)}=1,\quad i_{\rho(s+1)}=\cdots=i_{\rho(N)}=0,\\ && j_{\rho^{*}(1)}=\cdots=j_{\rho^{*}(N-s)}=0,\quad j_{\rho^{*}(N-s+1)}=\cdots=j_{\rho^{*}(N)}=1. \end{eqnarray*} Moreover, by the conditions (\ref{Con1}) and (\ref{Con2}), we can deduce that \begin{eqnarray*} && \rho(1)<\cdots<\rho(s),\quad \rho(s+1)<\cdots<\rho(N),\\ && \rho^{*}(1)>\cdots>\rho^{*}(N-s),\quad \rho^{*}(N-s+1)>\cdots>\rho^{*}(N). 
\end{eqnarray*} By spin conservation, we can further deduce that in order for $(S_{\rho}^{[0]}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{\rho^{*}}^{\rho})_{i_{1}\cdots i_{N}}^{j_1\cdots j_N}$ to be non-vanishing, we necessarily have $j_a=i_a$ for any $1\leq a\leq N$ (which we assume in the following). It can be checked that there is a unique admissible state of the lattice model corresponding to $(S_{\rho}^{[0]}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{\rho^{*}}^{\rho})_{i_{1}\cdots i_{N}}^{j_1\cdots j_N}$, as indicated below. \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (-0.5,0) to (0.5,0); \draw (-0.5,1) to (0.5,1); \draw (-0.5,3) to (0.5,3); \draw (-0.5,4) to (0.5,4); \draw (0,-0.5) to (0,4.5); \node at (0,4.5) [anchor=south] {$0$}; \node at (0,-0.5) [anchor=north] {$0$}; \draw (0.5,0) to [out=0, in=-120] (1.5,1) to [out=60, in=180] (7,4); \draw (0.5,1) to [out=0,in=180] (7,3); \draw (0.5,4) to [out=0, in=120] (1.5,3) to [out=-60, in=180] (7,0); \draw (0.5,3) to [out=0,in=180] (7,1); \node at (-0.5,0.5) [anchor=east] {$\cdots$}; \node at (-0.5,0) [anchor=east] {$i_{\rho(1)}=1$}; \node at (-0.5,1) [anchor=east] {$i_{\rho(s)}=1$}; \node at (-0.5,3.5) [anchor=east] {$\cdots$}; \node at (-0.5,3) [anchor=east] {$i_{\rho(s+1)}=0$}; \node at (-0.5,4) [anchor=east] {$i_{\rho(N)}=0$}; \node at (7,0) [anchor=west] {$j_{\rho^{*}(1)}=0$}; \node at (7,0.5) [anchor=west] {$\cdots$}; \node at (7,1) [anchor=west] {$j_{\rho^{*}(N-s)}=0$}; \node at (7,3) [anchor=west] {$j_{\rho^{*}(N-s+1)}=1$}; \node at (7,4) [anchor=west] {$j_{\rho^{*}(N)}=1$}; \node at (7,3.5) [anchor=west] {$\cdots$}; \filldraw[black] (0.5,0) circle (1pt); \filldraw[black] (0.5,1) circle (1pt); \filldraw[black] (0.5,3) circle (1pt); \filldraw[black] (0.5,4) circle (1pt); \filldraw[black] (0,0) circle (1pt); \filldraw[black] (0,1) circle (1pt); \filldraw[black] (0,3) circle (1pt); \filldraw[black] (0,4) circle (1pt); \node at (0.5,0) [anchor=south] {$1$}; \node at (0.5,1) [anchor=south] {$1$}; \node at (0.5,4) [anchor=south] {$0$}; \node at (0.5,3) [anchor=south] {$0$}; \node at (1.7,1.4) [anchor=south] {$1$}; \node at (2,1.3) [anchor=north] {$1$}; \node at (1.7,2.6) [anchor=north] {$0$}; \node at (2,2.7) [anchor=south] {$0$}; \node at (2.3,1.4) [anchor=south] {$0$}; \node at (2.3,2.6) [anchor=north] {$1$}; \node at (3.3,1.4) [anchor=south] {$1$}; \node at (3.3,2.6) [anchor=north] {$0$}; \node at (0,3.5) [anchor=east] {$0$}; \node at (0,2) [anchor=east] {$0$}; \node at (0,0.5) [anchor=east] {$0$}; \end{tikzpicture}\quad\quad \end{equation} The Boltzmann weight of this unique state is (note that there are only crossings of $a_1$, $a_2$, or $b_2$ pattern in the permutation graph part as in the above figure) \begin{eqnarray*} \prod_{t=1}^N \mathbbm{1}_{i_t=j_t}\prod_{a:i_a=1}b_2(z_a)\prod_{(a,b):a<b,i_a=1,i_b=1}a_2(z_a,z_b)\prod_{(a,b):i_a=1,i_b=0}b_2(z_a,z_b). \end{eqnarray*} Hence \begin{eqnarray*} &&(S_{\rho}^{[0]}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{\rho^{*}}^{\rho})_{i_{1}\cdots i_{N}}^{j_1\cdots j_N}\\ &=&\prod_{t=1}^N \mathbbm{1}_{i_t=j_t}\prod_{a:i_a=1}b_2(z_a)\prod_{(a,b):a<b,i_a=1,i_b=1}a_2(z_a,z_b)\prod_{(a,b):i_a=1,i_b=0}b_2(z_a,z_b). \end{eqnarray*} Therefore, we conclude that \begin{equation} (F S^{[0]} F^{-1})_{i_1\cdots i_N}^{j_1 \cdots j_N}=\prod_{t=1}^N \mathbbm{1}_{i_t=j_t}\prod_{a: i_a=1}z_a. 
\end{equation} \textbf{Now we consider the case where $\alpha=1$.} Again, by the definitions of $\rho, \rho^{*}$ and spin conservation, there exists $0\leq s\leq N-1$, such that \begin{eqnarray*} && i_{\rho(1)}=\cdots=i_{\rho(s)}=1,\quad i_{\rho(s+1)}=\cdots=i_{\rho(N)}=0,\\ && j_{\rho^{*}(1)}=\cdots=j_{\rho^{*}(N-s-1)}=0,\quad j_{\rho^{*}(N-s)}=\cdots=j_{\rho^{*}(N)}=1. \end{eqnarray*} Moreover, by the conditions (\ref{Con1}) and (\ref{Con2}), \begin{eqnarray*} && \rho(1)<\cdots<\rho(s),\quad \rho(s+1)<\cdots<\rho(N),\\ && \rho^{*}(1)>\cdots>\rho^{*}(N-s-1),\quad \rho^{*}(N-s)>\cdots>\rho^{*}(N). \end{eqnarray*} By spin conservation, we can deduce that in order for $(S_{\rho}^{[1]}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{\rho^{*}}^{\rho})_{i_{1}\cdots i_{N}}^{j_1\cdots j_N}$ to be non-vanishing, there is a unique integer $1\leq m\leq N$, such that $i_m=0$, $j_m=1$, and $j_a=i_a$ for any $a\neq m$ (which we assume in the following). For any admissible state corresponding to $(S_{\rho}^{[1]}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{\rho^{*}}^{\rho})_{i_{1}\cdots i_{N}}^{j_1\cdots j_N}$, the spin of some edges has to take a fixed value, as indicated below. \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (-0.5,0) to (0.5,0); \draw (-0.5,1) to (0.5,1); \draw (-0.5,3) to (0.5,3); \draw (-0.5,4) to (0.5,4); \draw (-0.5,5) to (0.5,5); \draw (0,-0.5) to (0,5.5); \node at (0,5.5) [anchor=south] {$1$}; \node at (0,-0.5) [anchor=north] {$0$}; \draw (0.5,0) to [out=0,in=-120] (3.5,3.5) to [out=60,in=180] (7,5); \draw (0.5,1) to [out=0,in=180] (7,3); \draw (0.5,4) to (7,4); \draw (0.5,5) to [out=0,in=180] (7,0); \draw (0.5,3) to [out=0,in=150] (3,2) to [out=-30,in=180] (7,1); \node at (-0.5,0.5) [anchor=east] {$\cdots$}; \node at (-0.5,0) [anchor=east] {$i_{\rho(1)}=1$}; \node at (-0.5,1) [anchor=east] {$i_{\rho(s)}=1$}; \node at (-0.5,3.5) [anchor=east] {$\cdots$}; \node at (-0.5,3) [anchor=east] {$i_{\rho(s+1)}=0$}; \node at (-0.5,4.5) [anchor=east] {$\cdots$}; \node at (-0.5,5) [anchor=east] {$i_{\rho(N)}=0$}; \node at (-0.5,4) [anchor=east] {$i_{m}=0$}; \node at (7,0) [anchor=west] {$j_{\rho^{*}(1)}=0$}; \node at (7,0.5) [anchor=west] {$\cdots$}; \node at (7,1) [anchor=west] {$j_{\rho^{*}(N-s-1)}=0$}; \node at (7,3) [anchor=west] {$j_{\rho^{*}(N-s)}=1$}; \node at (7,4) [anchor=west] {$j_m=1$}; \node at (7,5) [anchor=west] {$j_{\rho^{*}(N)}=1$}; \node at (7,3.5) [anchor=west] {$\cdots$}; \node at (7,4.5) [anchor=west] {$\cdots$}; \filldraw[black] (0.5,0) circle (1pt); \filldraw[black] (0.5,1) circle (1pt); \filldraw[black] (0.5,3) circle (1pt); \filldraw[black] (0.5,4) circle (1pt); \filldraw[black] (0,0) circle (1pt); \filldraw[black] (0,1) circle (1pt); \filldraw[black] (0,3) circle (1pt); \filldraw[black] (0,4) circle (1pt); \filldraw[black] (0,5) circle (1pt); \filldraw[black] (0.5,5) circle (1pt); \node at (0,0.5) [anchor=east] {$0$}; \node at (0,2) [anchor=east] {$0$}; \node at (0.5,0) [anchor=south] {$1$}; \node at (0.5,1) [anchor=south] {$1$}; \node at (2.9,1.6) [anchor=north] {$1$}; \node at (2.8,1.6) [anchor=south east] {$1$}; \node at (3.9,1.6) [anchor=north] {$0$}; \node at (4.2,1.5) [anchor=south west] {$0$}; \node at (3.2,4) [anchor=south] {$1$}; \node at (3.4,3.8) [anchor=north west] {$1$}; \node at (1.8,2.6) [anchor=south] {$0$}; \node at (3.1,3.7) [anchor=north east] {$0$}; \node at (3.2,2.5) [anchor=south east] {$1$}; \node at (3.5,2.5) [anchor=south west] {$0$}; \node at (2.9,1.8) [anchor=south west] {$0$}; \node at (3.9,1.8) [anchor=south east] 
{$1$}; \end{tikzpicture}\quad\quad \end{equation} Therefore, we can remove the lines corresponding to $1\leq a\leq N$ such that $i_a=j_a=1$ (together with the corresponding part of the column configuration) up to a factor of \begin{eqnarray}\label{Q} &&\prod_{a:i_a=1,j_a=1}b_2(z_a)\prod_{(a,b):a<b,i_a=1,j_a=1,i_b=1,j_b=1}a_2(z_a,z_b)\nonumber\\ &\times& \prod_{a: a<m,i_a=1,j_a=1}a_2(z_a,z_m)\prod_{(a,b): i_a=1,j_a=1,i_b=0,j_b=0}b_2(z_a,z_b). \end{eqnarray} That is, $(S_{\rho}^{[1]}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{\rho^{*}}^{\rho})_{i_{1}\cdots i_{N}}^{j_1\cdots j_N}$ is equal to the factor (\ref{Q}) times the partition function of the following configuration \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (-0.5,0) to (0.5,0); \draw (-0.5,1) to (0.5,1); \draw (-0.5,2) to (0.5,2); \node at (0,2.5) [anchor=south] {$1$}; \draw (0,-0.5) to (0,2.5); \node at (0,-0.5) [anchor=north] {$0$}; \draw (0.5,0) to [out=0,in=180] (5,1); \draw (0.5,1) to [out=0,in=180] (5,2); \draw (0.5,2) to [out=0,in=180] (5,0); \node at (-0.5,0.5) [anchor=east] {$\cdots$}; \node at (-0.5,1) [anchor=east] {$i_m=0$}; \node at (-0.5,0) [anchor=east] {$i_{\rho(s+1)}=0$}; \node at (-0.5,1.5) [anchor=east] {$\cdots$}; \node at (-0.5,2) [anchor=east] {$i_{\rho(N)}=0$}; \node at (5,0) [anchor=west] {$j_{\rho^{*}(1)}=0$}; \node at (5,0.5) [anchor=west] {$\cdots$}; \node at (5,1) [anchor=west] {$j_{\rho^{*}(N-s-1)}=0$}; \node at (5,2) [anchor=west] {$j_m=1$}; \node at (5,1.5) [anchor=west] {$\cdots$}; \filldraw[black] (0.5,0) circle (1pt); \filldraw[black] (0.5,1) circle (1pt); \filldraw[black] (0.5,2) circle (1pt); \filldraw[black] (0,0) circle (1pt); \filldraw[black] (0,1) circle (1pt); \filldraw[black] (0,2) circle (1pt); \end{tikzpicture}\quad\quad \end{equation} By the Yang-Baxter equations (the ``$RTT$'' version, see Proposition \ref{YBE1}), by sequentially pushing the $R$-braids to the left, we derive that the partition function of the above configuration is equal to the partition function of the following configuration \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (5,0) to (6,0); \draw (5,1) to (6,1); \draw (5,2) to (6,2); \node at (5.5,2.5) [anchor=south] {$1$}; \draw (5.5,-0.5) to (5.5,2.5); \node at (5.5,-0.5) [anchor=north] {$0$}; \draw (0.5,0) to [out=0,in=180] (5,1); \draw (0.5,1) to [out=0,in=180] (5,2); \draw (0.5,2) to [out=0,in=180] (5,0); \node at (0.5,0.5) [anchor=east] {$\cdots$}; \node at (0.5,1) [anchor=east] {$i_m=0$}; \node at (0.5,0) [anchor=east] {$i_{\rho(s+1)}=0$}; \node at (0.5,1.5) [anchor=east] {$\cdots$}; \node at (0.5,2) [anchor=east] {$i_{\rho(N)}=0$}; \node at (6,0) [anchor=west] {$j_{\rho^{*}(1)}=0$}; \node at (6,0.5) [anchor=west] {$\cdots$}; \node at (6,1) [anchor=west] {$j_{\rho^{*}(N-s-1)}=0$}; \node at (6,2) [anchor=west] {$j_m=1$}; \node at (6,1.5) [anchor=west] {$\cdots$}; \filldraw[black] (5.5,0) circle (1pt); \filldraw[black] (5.5,1) circle (1pt); \filldraw[black] (5.5,2) circle (1pt); \filldraw[black] (5,0) circle (1pt); \filldraw[black] (5,1) circle (1pt); \filldraw[black] (5,2) circle (1pt); \end{tikzpicture}\quad\quad \end{equation} Note that there is only one admissible state of this configuration, as indicated below \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (5,0) to (6,0); \draw (5,1) to (6,1); \draw (5,2) to (6,2); \node at (5.5,2.5) [anchor=south] {$1$}; \draw (5.5,-0.5) to (5.5,2.5); \node at (5.5,-0.5) [anchor=north] {$0$}; \draw (0.5,0) 
to [out=0,in=180] (5,1); \draw (0.5,1) to [out=0,in=180] (5,2); \draw (0.5,2) to [out=0,in=180] (5,0); \node at (0.5,0.5) [anchor=east] {$\cdots$}; \node at (0.5,1) [anchor=east] {$i_m=0$}; \node at (0.5,0) [anchor=east] {$i_{\rho(s+1)}=0$}; \node at (0.5,1.5) [anchor=east] {$\cdots$}; \node at (0.5,2) [anchor=east] {$i_{\rho(N)}=0$}; \node at (6,0) [anchor=west] {$j_{\rho^{*}(1)}=0$}; \node at (6,0.5) [anchor=west] {$\cdots$}; \node at (6,1) [anchor=west] {$j_{\rho^{*}(N-s-1)}=0$}; \node at (6,2) [anchor=west] {$j_m=1$}; \node at (6,1.5) [anchor=west] {$\cdots$}; \filldraw[black] (5.5,0) circle (1pt); \filldraw[black] (5.5,1) circle (1pt); \filldraw[black] (5.5,2) circle (1pt); \filldraw[black] (5,0) circle (1pt); \filldraw[black] (5,1) circle (1pt); \filldraw[black] (5,2) circle (1pt); \node at (4.9,0) [anchor=south] {$0$}; \node at (2.6,0.8) [anchor=south west] {$0$}; \node at (4.9,1) [anchor=south] {$0$}; \node at (4.9,2) [anchor=south] {$0$}; \node at (5.4,0.5) [anchor=west] {$0$}; \node at (5.4,1.5) [anchor=west] {$0$}; \end{tikzpicture}\quad\quad \end{equation} The Boltzmann weight of this state is \begin{equation} c_2(z_m)=1. \end{equation} Hence we have \begin{eqnarray*} &&(S_{\rho}^{[1]}((z_{\rho(1)},\cdots,z_{\rho(N)})) R_{\rho^{*}}^{\rho})_{i_1\cdots i_{N}}^{j_1\cdots j_N}\\ &=& \prod_{a:i_a=1,j_a=1}b_2(z_a)\prod_{(a,b):a<b,i_a=1,j_a=1,i_b=1,j_b=1}a_2(z_a,z_b)\\ &\times& \prod_{a: a<m,i_a=1,j_a=1}a_2(z_a,z_m)\prod_{(a,b): i_a=1,j_a=1,i_b=0,j_b=0}b_2(z_a,z_b). \end{eqnarray*} Therefore, we conclude that \begin{eqnarray} (F S^{[1]} F^{-1})_{i_1\cdots i_N}^{j_1\cdots j_N}&=&\sum_{m=1}^N \mathbbm{1}_{i_m=0,j_m=1}\prod_{t:1\leq t\leq N, t\neq m} \mathbbm{1}_{i_t=j_t}\prod_{a: i_a=1,j_a=1}z_a\nonumber\\ &&\times \prod_{a:a>m,i_a=1,j_a=1}\frac{z_a-v z_m}{z_m-vz_a}\prod_{a:i_a=0,j_a=0}\frac{z_a-v z_m}{z_m-z_a}. \end{eqnarray} \end{proof} Now we finish the proof of Theorem \ref{Theorem1} based on Proposition \ref{P3}. \begin{proof}[Proof of Theorem \ref{Theorem1}] Note that by Proposition \ref{P3} and (\ref{Par}), $Z(\mathcal{S}_{\lambda,z})$ can be written as the sum of the following terms \begin{equation} (F S^{[m_{\lambda_1+N-1}]} F^{-1})_{0\cdots 0}^{i_1^{(\lambda_1+N-1)}\cdots i_N^{(\lambda_1+N-1)}}\cdots (F S^{[m_1]} F^{-1})_{i_1^{(2)}\cdots i_N^{(2)}}^{i_1^{(1)}\cdots i_N^{(1)}}(F S^{[m_0]} F^{-1})_{i_1^{(1)}\cdots i_N^{(1)}}^{1\cdots 1}, \end{equation} where $(i_1^{(l)},\cdots,i_N^{(l)})\in\{0,1\}^N$ for $0\leq l\leq \lambda_1+N-1$ satisfy the following condition: if $m_l=0$, then $i_k^{(l)}=i_k^{(l+1)}$ for every $1\leq k\leq N$; if $m_l=1$, then there is a unique index $\alpha_l\in\{1,2,\cdots,N\}$ such that $i_{\alpha_l}^{(l)}=1$, $i_{\alpha_l}^{(l+1)}=0$, and $i_k^{(l)}=i_k^{(l+1)}$ for every $k\neq \alpha_l$. Here, we have assumed that $(i_1^{(\lambda_1+N)},\cdots,i_N^{(\lambda_1+N)})=(0,\cdots,0)$ and $(i_1^{(0)},\cdots,i_N^{(0)})=(1,\cdots,1)$. Let $\beta_t:=\alpha_{\lambda_t+N-t}$ for every $1\leq t\leq N$. Note that $(\beta_1,\cdots,\beta_N)$ corresponds to a permutation $\sigma\in S_N$ such that $\sigma(t)=\beta_t$ for every $1\leq t\leq N$. Based on this observation and Proposition \ref{P3}, we can deduce that \begin{eqnarray*} Z(\mathcal{S}_{\lambda,z}) &=& \sum_{\sigma\in S_N} \prod_{i=1}^N z_{\sigma(i)}^{\lambda_i+N-i}\prod_{(i,j):1\leq i<j\leq N,\sigma(i)>\sigma(j)}\frac{z_{\sigma(i)}-vz_{\sigma(j)}}{z_{\sigma(j)}-vz_{\sigma(i)}}\prod_{(i,j):1\leq i<j\leq N}\frac{z_{\sigma(j)}-v z_{\sigma(i)}}{z_{\sigma(i)}-z_{\sigma(j)}}. 
\end{eqnarray*} Note that for any $\sigma\in S_N$, \begin{eqnarray*} &&\prod_{(i,j):1\leq i<j\leq N,\sigma(i)>\sigma(j)}\frac{z_{\sigma(i)}-vz_{\sigma(j)}}{z_{\sigma(j)}-vz_{\sigma(i)}}\prod_{(i,j):1\leq i<j\leq N}(z_{\sigma(j)}-v z_{\sigma(i)})\\ &=& \prod_{(i,j):1\leq i<j\leq N,\sigma(i)<\sigma(j)}(z_{\sigma(j)}-v z_{\sigma(i)})\prod_{(i,j):1\leq i<j\leq N,\sigma(i)>\sigma(j)}(z_{\sigma(i)}-v z_{\sigma(j)})\\ &=& \prod_{1\leq i<j\leq N}(z_j-vz_i). \end{eqnarray*} Hence we have \begin{eqnarray*} Z(\mathcal{S}_{\lambda,z}) &=& \prod_{1\leq i<j\leq N}(z_j-vz_i)\sum_{\sigma\in S_N}\prod_{i=1}^N z_{\sigma(i)}^{\lambda_i+N-i}\prod_{(i,j):1\leq i<j\leq N}(z_{\sigma(i)}-z_{\sigma(j)})^{-1}\\ &=& \prod_{1\leq i<j\leq N}(z_j-vz_i)\frac{\sum_{\sigma\in S_N}(-1)^{inv(\sigma)}\prod_{i=1}^N z_{\sigma(i)}^{\lambda_i+N-i}}{\prod_{1\leq i<j\leq N}(z_i-z_j)}\\ &=& \prod_{1\leq i<j\leq N}(z_j-vz_i) s_{\lambda}(z_1,\cdots,z_N), \end{eqnarray*} where $inv(\sigma):=\#\{(i,j):1\leq i<j\leq N,\sigma(i)>\sigma(j)\}$ is the number of inversions of $\sigma\in S_N$. \end{proof} \section{Extension to lattice models related to Cartan types B and C}\label{Sect.5} In this section, we generalize the concepts in Section \ref{Sect.3} to lattice models that are related to Cartan types B and C. Examples of such models include the models in \cite{BBCG,Iva}. In such models, both $\Gamma$ and $\Delta$ vertices appear, and there are U-turn cap vertices on the right boundary. The generalized concepts in this section will be used in Section \ref{Sect.6} to compute the partition function of the lattice model in \cite{BBCG}. The main difference between this section and Section \ref{Sect.3} is that here we need to distinguish between $\Gamma$ ice and $\Delta$ ice. As in Section \ref{Sect.3}, we fix a positive integer $N$. To each site $i$ (where $1\leq i\leq N$), we associate a number $\epsilon_i\in\{1,-1\}$ to indicate $\Gamma$\slash $\Delta$ type, as detailed below. We make the following modifications to the setup in Section \ref{Sect.3.1}. For any $a,b,c,d\in\{0,1\}$, any $x_i,x_j\in\mathbb{C}$, and any $\epsilon_i,\epsilon_j\in\{1,-1\}$, we denote by $R(a,b,c,d;x_i,x_j;\epsilon_i,\epsilon_j)$ the Boltzmann weight of the following $R$-vertex (with spectral parameters $x_i,x_j$ and $\Gamma$\slash $\Delta$ type determined by $\epsilon_i,\epsilon_j$). The $\Gamma$\slash $\Delta$ type of the $R$-vertex is determined as follows: if $(\epsilon_i,\epsilon_j)=(1,1)$, it is $\Gamma\Gamma$ ice; if $(\epsilon_i,\epsilon_j)=(-1,-1)$, it is $\Delta\Delta$ ice; if $(\epsilon_i,\epsilon_j)=(1,-1)$, it is $\Gamma\Delta$ ice; if $(\epsilon_i,\epsilon_j)=(-1,1)$, it is $\Delta\Gamma$ ice. \begin{equation} \begin{tikzpicture}[scale=0.7] \draw (0,0) to [out = 0, in = 180] (2,2); \draw (0,2) to [out = 0, in = 180] (2,0); \draw[fill=white] (0,0) circle (.35); \draw[fill=white] (0,2) circle (.35); \draw[fill=white] (2,0) circle (.35); \draw[fill=white] (2,2) circle (.35); \node at (0,0) {$a$}; \node at (0,2) {$b$}; \node at (2,2) {$c$}; \node at (2,0) {$d$}; \path[fill=white] (1,1) circle (.3); \node at (1,1) {$R_{x_i,x_j}$}; \end{tikzpicture} \end{equation} For any two distinct positive integers $i,j$, we define the $R$-matrix $R_{i,j}(x_i,x_j;\epsilon_i,\epsilon_j)$ that acts on $W_i \otimes W_j$ (with spectral parameters $x_i,x_j$) by \begin{equation} R_{i,j}(x_i,x_j;\epsilon_i,\epsilon_j)=\sum_{a,b,c,d\in\{0,1\}}R(a,b,c,d;x_i,x_j;\epsilon_i,\epsilon_j) E_i^{(a,c)} E_j^{(b,d)}.
\end{equation} We also denote $R(x_1,x_2;\epsilon_1,\epsilon_2):=R_{12}(x_1,x_2;\epsilon_1,\epsilon_2)$. For an ordinary vertex with $\Gamma$\slash $\Delta$ type determined by $\epsilon_i$ ($\epsilon_i=1$ means $\Gamma$ ice and $\epsilon_i=-1$ means $\Delta$ ice) and spectral parameter $x_i$, we denote by $a_1(x_i;\epsilon_i)$ its Boltzmann weight for the $a_1$ state (see Figures \ref{Figure2.1}-\ref{Figure2.2}), and similarly for the other states. For an $R$-vertex with $\Gamma$\slash $\Delta$ type determined by $\epsilon_i,\epsilon_j$ and spectral parameters $x_i,x_j$, we denote by $a_1(x_i,x_j;\epsilon_i,\epsilon_j)$ its Boltzmann weight for the $a_1$ state (see Figures \ref{Figure2.3}-\ref{Figure2.6}), and similarly for the other states. The parallels of Sections \ref{Sect.3.2}-\ref{Sect.3.4} are given in Sections \ref{Sect.5.1}-\ref{Sect.5.3} below. \subsection{Column operator}\label{Sect.5.1} For any $\alpha\in\{0,1\}$, $\vec{x}=(x_1,\cdots,x_N)\in\mathbb{C}^N$, and $\vec{\epsilon}=(\epsilon_1,\cdots,\epsilon_N)\in\{1,-1\}^N$, the column operator $S^{[\alpha]}(\vec{x};\vec{\epsilon})\in End(W_1\otimes \cdots \otimes W_N)$ is defined by specifying its components $(S^{[\alpha]}(\vec{x};\vec{\epsilon}))_{i_1\cdots i_N}^{j_1\cdots j_N}$ for any $(i_1,\cdots,i_N),(j_1,\cdots,j_N)\in\{0,1\}^N$. We define $(S^{[\alpha]}(\vec{x};\vec{\epsilon}))_{i_1\cdots i_N}^{j_1\cdots j_N}$ as follows. Consider a column of ordinary vertices whose $\Gamma$\slash $\Delta$ types and spectral parameters are given by $\epsilon_1,\cdots,\epsilon_N$ and $x_1,\cdots,x_N$ from bottom to top. The boundary conditions are specified as follows: the top edge is labeled $\alpha$, the bottom edge is labeled $0$, the left edges are labeled $i_1,\cdots,i_N$ (from bottom to top), and the right edges are labeled $j_1,\cdots,j_N$ (from bottom to top). The component $(S^{[\alpha]}(\vec{x};\vec{\epsilon}))_{i_1\cdots i_N}^{j_1\cdots j_N}$ is defined as the partition function of this configuration. More generally, for any $\sigma\in S_N$, $\alpha\in\{0,1\}$, $\vec{x}=(x_1,\cdots,x_N)\in\mathbb{C}^N$, and $\vec{\epsilon}=(\epsilon_1,\cdots,\epsilon_N)\in\{1,-1\}^N$, we define the column operator $S^{[\alpha]}_{\sigma}(\vec{x};\vec{\epsilon})\in End(W_1\otimes \cdots \otimes W_N)$ by specifying its components $(S^{[\alpha]}_{\sigma}(\vec{x};\vec{\epsilon}))_{i_1\cdots i_N}^{j_1\cdots j_N}$ for any $(i_1,\cdots,i_N),(j_1,\cdots,j_N)\in \{0,1\}^N$. To define $(S^{[\alpha]}_{\sigma}(\vec{x};\vec{\epsilon}))_{i_1\cdots i_N}^{j_1\cdots j_N}$, consider a column of ordinary vertices whose $\Gamma$\slash $\Delta$ types and spectral parameters are given by $\epsilon_1,\cdots,\epsilon_N$ and $x_1,\cdots,x_N$ from bottom to top. We also specify the boundary condition as follows: the top edge is labeled $\alpha$, the bottom edge is labeled $0$, the left edges are labeled $i_{\sigma(1)},\cdots, i_{\sigma(N)}$ (from bottom to top), and the right edges are labeled $j_{\sigma(1)},\cdots,j_{\sigma(N)}$ (from bottom to top). The component $(S^{[\alpha]}_{\sigma}(\vec{x};\vec{\epsilon}))_{i_1\cdots i_N}^{j_1\cdots j_N}$ is defined as the partition function of this configuration. \subsection{Permutation graph}\label{Sect.5.2} For any two permutations $\rho_1,\rho_2\in S_N$, any two vectors $\vec{x}=(x_1,\cdots,x_N)\in\mathbb{C}^N$ and $\vec{\epsilon}=(\epsilon_1,\cdots,\epsilon_N) \in\{1,-1\}^N$, the ``permutation graph'' $R_{\rho_1}^{\rho_2}(\vec{x};\vec{\epsilon})$ is an element of $End(W_1\otimes \cdots \otimes W_N)$ as defined below.
For the case where $\rho_1=s_i=(i,i+1)$ and $\rho_2=id$ for some $1\leq i\leq N-1$, we let \begin{equation} R_{s_i}^{id}(\vec{x};\vec{\epsilon})=R_{i(i+1)}(x_i,x_{i+1};\epsilon_i,\epsilon_{i+1}). \end{equation} For general $\rho_1\in S_N$, we let \begin{equation} R_{\rho_1}^{\rho_1}(\vec{x};\vec{\epsilon})=1,\quad R_{\rho_1\circ s_i}^{\rho_1}(\vec{x};\vec{\epsilon})=R_{\rho_1(i),\rho_1(i+1)}(x_{\rho_1(i)},x_{\rho_1(i+1)};\epsilon_{\rho_1(i)},\epsilon_{\rho_1(i+1)}), \end{equation} and recursively for any $\rho_1,\rho_2\in S_N$, \begin{equation} R_{\rho_1 \circ s_i}^{\rho_2}(\vec{x};\vec{\epsilon})=R_{\rho_1}^{\rho_2}(\vec{x};\vec{\epsilon}) R_{\rho_1\circ s_i}^{\rho_1}(\vec{x};\vec{\epsilon}). \end{equation} For general $\rho_1,\rho_2$, $R_{\rho_1}^{\rho_2}(\vec{x};\vec{\epsilon})$ can be constructed from the above definition. By the ``$RRR$'' Yang-Baxter equations and the unitarity relation (Theorems \ref{YBE2}-\ref{Unit}), $R_{\rho_1}^{\rho_2}(\vec{x};\vec{\epsilon})$ is well-defined. \subsection{$F$-matrix}\label{Sect.5.3} Now we define the $F$-matrix. For any $\rho\in S_N$, $I(\rho)$ and $I'(\rho)$ are defined as in Section \ref{Sect.3.4}. The $F$-matrices $F(\vec{x};\vec{\epsilon}),F^{*}(\vec{x};\vec{\epsilon})\in End(W_1 \otimes \cdots \otimes W_N)$ are defined by \begin{equation} F(\vec{x};\vec{\epsilon}):=\sum_{\rho\in S_N}\sum_{(k_1,\cdots,k_N)\in I(\rho)}\prod_{i=1}^N E_{\rho(i)}^{(k_i,k_i)} R_{id}^{\rho}(\vec{x};\vec{\epsilon}), \end{equation} \begin{equation} F^{*}(\vec{x};\vec{\epsilon}):=\sum_{\rho\in S_N}\sum_{(k_1,\cdots,k_N)\in I'(\rho)}R_{\rho}^{id}(\vec{x};\vec{\epsilon})\prod_{i=1}^N E_{\rho(i)}^{(k_i,k_i)}. \end{equation} Hereafter, the arguments $\vec{x},\vec{\epsilon}$ may be omitted from the $F$-matrix when they are clear from the context. Proposition \ref{P2} directly generalizes to the setting here (with the newly defined permutation graph and $F$-matrix), and Proposition \ref{P1} generalizes to the following proposition. \begin{proposition} $\Delta:=F F^{*}$ is a diagonal matrix. The diagonal entries of $\Delta$ are given by \begin{eqnarray*} (\Delta)_{i_1\cdots i_N}^{i_1\cdots i_N}=\prod_{(a,b):i_a=1,i_b=0} b_2(x_a,x_b;\epsilon_a,\epsilon_b)\prod_{(a,b):a<b,i_a=1,i_b=1}a_2(x_a,x_b;\epsilon_a,\epsilon_b) \end{eqnarray*} for every $(i_1,\cdots,i_N)\in\{0,1\}^N$. \end{proposition} \section{Ice model representing a Whittaker function on the metaplectic double cover of $\mathrm{Sp}(2r,F)$}\label{Sect.6} In this section, we apply our method to compute the partition function of the lattice model introduced in \cite{BBCG}. As reviewed in the Introduction, the partition function of this model represents a Whittaker function on the metaplectic double cover of $\mathrm{Sp}(2r,F)$ with $F$ a non-archimedean local field. The computation of this partition function is more involved than that in Section \ref{Sect.4}, as both types of ordinary vertices ($\Gamma$ ice and $\Delta$ ice) are involved in the model, and there are cap vertices connecting adjacent rows of $\Delta$ ice and $\Gamma$ ice on the right boundary. \subsection{The lattice model}\label{Sect.6.1} In this subsection, we briefly introduce the lattice model in \cite{BBCG}. Let $\lambda=(\lambda_1,\cdots,\lambda_{r})$ be a given partition (meaning that $\lambda_1\geq\cdots\geq \lambda_r\geq 0$). Consider a rectangular lattice with $2r$ rows and $\lambda_1+r$ columns.
The columns are labeled $\frac{1}{2},\frac{3}{2},\cdots,\lambda_1+r-\frac{1}{2}$ from right to left, and the rows are labeled $1,2,\cdots, 2r$ from bottom to top. Note that the ordering of the rows here is reversed from that in \cite{BBCG}. Every odd-numbered row is a row of $\Gamma$ ice, and every even-numbered row is a row of $\Delta$ ice. For every $1\leq i\leq r$, the spectral parameter of the vertices in the $i$th row of $\Gamma$ ice is $z_i^{-1}$, and that of the vertices in the $i$th row of $\Delta$ ice is $z_i$. The boundary conditions are given as follows. On the left boundary, we assign $0$ ($+$ spin) to each row (note that this is different from the notations in \cite{BBCG} as we switch the signs of spins on horizontal edges of $\Delta$ ice); on the bottom boundary, we assign $0$ to each boundary edge; on the top boundary, we assign $1$ ($-$ spin) to each column labeled $\lambda_i+r+\frac{1}{2}-i$ for every $1\leq i \leq r$, and $0$ to the rest of the columns; on the right, the $i$th row of $\Gamma$ ice and the $i$th row of $\Delta$ ice are connected by a cap vertex with spectral parameter $z_i$. Recall that the Boltzmann weights of the caps are given in Figure \ref{Figure2.7}. Let $z=(z_1,\cdots,z_r)$. Hereafter, we denote by $Z(\mathcal{T}_{\lambda,z})$ the partition function of the above lattice model. As a simple example, when $r=2$, $\lambda=(2,1)$, the model configuration is shown as below. \begin{equation} \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,1)--(5,1); \draw (0,2)--(5,2); \draw (0,3)--(5,3); \draw (0,4)--(5,4); \draw (1,0.5)--(1,4.5); \draw (2,0.5)--(2,4.5); \draw (3,0.5)--(3,4.5); \draw (4,0.5)--(4,4.5); \filldraw[black] (1,1) circle (1pt); \filldraw[black] (2,1) circle (1pt); \filldraw[black] (3,1) circle (1pt); \filldraw[black] (4,1) circle (1pt); \filldraw[black] (3,2) circle (1pt); \filldraw[black] (2,2) circle (1pt); \filldraw[black] (1,2) circle (1pt); \filldraw[black] (4,2) circle (1pt); \filldraw[black] (1,3) circle (1pt); \filldraw[black] (2,3) circle (1pt); \filldraw[black] (3,3) circle (1pt); \filldraw[black] (4,3) circle (1pt); \filldraw[black] (1,4) circle (1pt); \filldraw[black] (2,4) circle (1pt); \filldraw[black] (3,4) circle (1pt); \filldraw[black] (4,4) circle (1pt); \filldraw[black] (5.5,1.5) circle (1pt); \filldraw[black] (5.5,3.5) circle (1pt); \draw (5,1) arc(-90:90:0.5); \draw (5,3) arc(-90:90:0.5); \node at (0,1) [anchor=east] {$0$}; \node at (0,2) [anchor=east] {$0$}; \node at (0,3) [anchor=east] {$0$}; \node at (0,4) [anchor=east] {$0$}; \node at (-0.5,1) [anchor=east] {$1$}; \node at (-0.5,2) [anchor=east] {$2$}; \node at (-0.5,3) [anchor=east] {$3$}; \node at (-0.5,4) [anchor=east] {$4$}; \node at (-1,1) [anchor=east] {row}; \node at (0.5,1) [anchor=south] {$\Gamma$}; \node at (0.5,2) [anchor=south] {$\Delta$}; \node at (0.5,3) [anchor=south] {$\Gamma$}; \node at (0.5,4) [anchor=south] {$\Delta$}; \node at (4.5,1) [anchor=south] {$z_1^{-1}$}; \node at (4.5,2) [anchor=south] {$z_1$}; \node at (4.5,3) [anchor=south] {$z_2^{-1}$}; \node at (4.5,4) [anchor=south] {$z_2$}; \node at (1,4.5) [anchor=south] {$1$}; \node at (2,4.5) [anchor=south] {$0$}; \node at (3,4.5) [anchor=south] {$1$}; \node at (4,4.5) [anchor=south] {$0$}; \node at (1,5) [anchor=south] {$\frac{7}{2}$}; \node at (2,5) [anchor=south] {$\frac{5}{2}$}; \node at (3,5) [anchor=south] {$\frac{3}{2}$}; \node at (4,5) [anchor=south] {$\frac{1}{2}$}; \node at (0,5) [anchor=south] {column}; \node at (1,0.5) [anchor=north] {$0$}; \node at (2,0.5) 
[anchor=north] {$0$}; \node at (3,0.5) [anchor=north] {$0$}; \node at (4,0.5) [anchor=north] {$0$}; \end{tikzpicture} \end{equation} \subsection{Computation of the partition function}\label{Sect.6.2} The explicit form of the partition function of the lattice model reviewed in Section \ref{Sect.6.1} was conjectured in \cite{BBCG} and first proved in \cite{MSW}. In this subsection, we use the method outlined in Section \ref{Sect.1.2} and the concepts introduced in Section \ref{Sect.5} to give a new proof of this conjecture. The argument is considerably simpler than that of \cite{MSW}, and directly leads to the expression of the partition function (without the need to guess the formula as in \cite{MSW}). First we set up some notations. Let $[\pm r]:=\{1,\overline{1},\cdots,r,\overline{r}\}$, and let $B_r$ be the hyperoctahedral group of degree $r$. We identify $\overline{\overline{i}}$ with $i$ for $1\leq i\leq r$. Any element $\sigma\in B_r$ can be viewed as a permutation of $[\pm r]$ such that $\sigma(\overline{i})=\overline{\sigma(i)}$ for any $1\leq i\leq r$. For any rational function $f(z)$ of $z=(z_1,\cdots,z_r)$, we define $\sigma f(z):=f(\sigma z)$, where $\sigma z:=(z_{\sigma(1)},\cdots,z_{\sigma(r)})$ with $z_{\overline{i}}:=z_i^{-1}$ for every $1\leq i\leq r$. The main result is the following theorem. \begin{theorem}\label{Theorem2} \begin{equation} Z(\mathcal{T}_{\lambda,z})=z^{-\rho_B}\prod_{i=1}^r(1-\sqrt{v}z_i)\prod_{1\leq i<j\leq r}((1-vz_iz_j)(1-vz_jz_i^{-1}))\sum_{\sigma\in B_r}\sigma(z^{\lambda+\rho_C}\prod_{i=1}^r (1+\sqrt{v}z_i^{-1})\Delta_C(z)^{-1}), \end{equation} where $\rho_B=(r-\frac{1}{2},r-\frac{3}{2},\cdots,\frac{1}{2})$, $\rho_C=(r,r-1,\cdots,1)$, and \begin{equation*} \Delta_C(z)=\prod_{1\leq i<j\leq r}((z_i^{\frac{1}{2}}z_j^{-\frac{1}{2}}-z_i^{-\frac{1}{2}}z_j^{\frac{1}{2}})(z_i^{\frac{1}{2}}z_j^{\frac{1}{2}}-z_i^{-\frac{1}{2}} z_j^{-\frac{1}{2}}))\prod_{i=1}^r (z_i-z_i^{-1}). \end{equation*} \end{theorem} Throughout the rest of this section, we take $N=2r$, $\vec{x}=(z_1^{-1},z_1,\cdots,z_r^{-1},z_r)$ and $\vec{\epsilon}=(1,-1,\cdots,1,-1)$. For any $j\in \{1,2,\cdots,2r\}$, we denote by $x_j$ the $j$th entry of $\vec{x}$. Below we omit the arguments $\vec{x},\vec{\epsilon}$ from the notations $S^{[\alpha]}(\vec{x};\vec{\epsilon})$, $F(\vec{x};\vec{\epsilon})$, etc. In order to treat the cap vertices, we define the ``cap vector'' $K\in W_1\otimes \cdots \otimes W_N$ as follows. For every $\alpha,\beta\in\{0,1\}$, we denote by $C(\alpha,\beta;z)$ the Boltzmann weight of the cap of spectral parameter $z$ with $\alpha$ on the bottom edge and $\beta$ on the top edge. We define $K\in W_1\otimes \cdots \otimes W_N$ by specifying its component \begin{equation} K_{i_1 \cdots i_N}:=\prod_{l=1}^r C(i_{2l-1},i_{2l};z_l) \end{equation} for every $(i_1,\cdots,i_N)\in \{0,1\}^N$. Note that the partition function $Z(\mathcal{T}_{\lambda,z})$ can be written in terms of the column operators \begin{equation}\label{E04} Z(\mathcal{T}_{\lambda,z})=\langle 0,\cdots,0|S^{[m_{\lambda_1+r-\frac{1}{2}}]}\cdots S^{[m_{\frac{1}{2}}]} K, \end{equation} where for every $j\in\{\frac{1}{2},\frac{3}{2},\cdots, \lambda_1+r-\frac{1}{2}\}$, $m_j=1$ if $j\in\{\lambda_i+r-i+\frac{1}{2}: i\in\{1,2,\cdots,r\}\}$ and $m_j=0$ otherwise.
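Before turning to the conjugated form of (\ref{E04}), we make the $B_r$-sum appearing in Theorem \ref{Theorem2} concrete. Each $\sigma\in B_r$ is determined by an ordinary permutation $\tau$ of $\{1,\cdots,r\}$ together with a choice of sign for each entry, and $\sigma z$ replaces the $i$th entry of $z$ by $z_{\tau(i)}^{\pm 1}$. A minimal Python sketch of this encoding (the names are ours, purely illustrative) is given below.
\begin{verbatim}
from itertools import permutations, product
from fractions import Fraction

def B_r_elements(r):
    """Elements of the hyperoctahedral group B_r, encoded as pairs
    (underlying permutation of {1,...,r}, sign vector in {+1,-1}^r)."""
    return [(perm, signs)
            for perm in permutations(range(1, r + 1))
            for signs in product((1, -1), repeat=r)]

def act_on_z(element, z):
    """sigma z = (z_{sigma(1)}, ..., z_{sigma(r)}) with z_{bar i} = 1/z_i:
    entry i becomes z_{perm[i]} or its inverse, according to the sign."""
    perm, signs = element
    return tuple(z[perm[i] - 1] if signs[i] == 1 else 1 / z[perm[i] - 1]
                 for i in range(len(z)))

z = (Fraction(2), Fraction(3))          # exact arithmetic for readability
for g in B_r_elements(2):
    print(g, act_on_z(g, z))            # 2^r * r! = 8 elements for r = 2
\end{verbatim}
In this encoding, the sum over $\sigma\in B_r$ in Theorem \ref{Theorem2} has $2^r r!$ terms, each obtained by substituting $\sigma z$ for $z$ in the summand.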
Conjugating each column operator by the $F$-matrix, we write (\ref{E04}) in the following form \begin{equation}\label{Par2} Z(\mathcal{T}_{\lambda,z})=(\langle 0,\cdots,0| F^{-1}) (F S^{[m_{\lambda_1+r-\frac{1}{2}}]}F^{-1}) \cdots (F S^{[m_{\frac{1}{2}}]}F^{-1}) (FK). \end{equation} Therefore, it suffices to compute $\langle 0,\cdots,0| F^{-1}$, the components of the conjugated column operator $F S^{[m_{j}]}F^{-1}$ for $j\in\{\frac{1}{2},\frac{3}{2},\cdots,\lambda_1+r-\frac{1}{2}\}$, and the components of $FK$. By Proposition \ref{caduc}, we can deduce the following proposition. \begin{proposition}\label{P4} Assume that $t\in\{1,2,\cdots\}$ and $K_0,K_1,\cdots,K_t$ are cap vertices with spectral parameters $z_0,z_1,\cdots,z_t$. For any fixed $\alpha_0,\beta_0,\alpha_1,\beta_1,\cdots,\alpha_t,\beta_t\in \{0,1\}$ as indicated below, if $\alpha_0=\beta_0$, then the partition function of the following configuration is $0$. \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,0) to (3.5,1.75); \draw (3.5,1.75) to [out=30,in=180] (4,2); \draw (4,2) to (5,2); \draw (0,1) to (3.5,2.75); \draw (3.5,2.75) to [out=30,in=180] (4,3); \draw (4,3) to (5,3); \draw (0,2) to (3.5,3.75); \draw (3.5,3.75) to [out=30,in=180] (4,4); \draw (4,4) to (5,4); \draw (0,3) to (3.5,4.75); \draw (3.5,4.75) to [out=30,in=180] (4,5); \draw (4,5) to (5,5); \draw (0,4) to (3.5,0.25); \draw (3.5,0.25) to [out=-30,in=180] (4,0); \draw (4,0) to (5,0); \draw (0,5) to (3.5,1.25); \draw (3.5,1.25) to [out=-30,in=180] (4,1); \draw (4,1) to (5,1); \draw (5,0) arc(-90:90:0.5); \draw (5,2) arc(-90:90:0.5); \draw (5,4) arc(-90:90:0.5); \filldraw[black] (5.5,0.5) circle (2pt); \filldraw[black] (5.5,2.5) circle (2pt); \filldraw[black] (5.5,4.5) circle (2pt); \node at (0,0) [anchor=east] {$\alpha_1$}; \node at (0,1) [anchor=east] {$\beta_1$}; \node at (0,2) [anchor=east] {$\alpha_t$}; \node at (0,3) [anchor=east] {$\beta_t$}; \node at (0,4) [anchor=east] {$\alpha_0$}; \node at (0,5) [anchor=east] {$\beta_0$}; \node at (0,1.5) [anchor=east] {$\cdots$}; \node at (5.5,4.5) [anchor=west] {$K_t$}; \node at (5.5,3.5) [anchor=west] {$\cdots$}; \node at (5.5,0.5) [anchor=west] {$K_0$}; \node at (5.5,2.5) [anchor=west] {$K_1$}; \filldraw[black] (0.64,3.3) circle (2pt); \filldraw[black] (1.275,3.625) circle (2pt); \filldraw[black] (1.275,2.625) circle (2pt); \filldraw[black] (1.925,2.95) circle (2pt); \filldraw[black] (1.92,1.945) circle (2pt); \filldraw[black] (2.55,2.275) circle (2pt); \filldraw[black] (2.55,1.275) circle (2pt); \filldraw[black] (3.2,1.59) circle (2pt); \node at (4.7,-0.1) [anchor=south] {$\Gamma:z_0^{-1}$}; \node at (4.6,0.9) [anchor=south] {$\Delta:z_0$}; \node at (4.7,1.9) [anchor=south] {$\Gamma:z_1^{-1}$}; \node at (4.6,2.9) [anchor=south] {$\Delta:z_1$}; \node at (4.7,3.9) [anchor=south] {$\Gamma:z_t^{-1}$}; \node at (4.6,4.9) [anchor=south] {$\Delta:z_t$}; \end{tikzpicture} \end{equation} \end{proposition} \begin{proof} Applying the caduceus relation (Proposition \ref{caduc}) to the caduceus braid at the bottom right corner, we deduce that the partition function is a constant times the partition function of the following \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,0) to (3.5,1.75); \draw (3.5,1.75) to [out=30,in=180] (4,2); \draw (4,2) to (5,2); \draw (0,1) to (3.5,2.75); \draw (3.5,2.75) to [out=30,in=180] (4,3); \draw (4,3) to (5,3); \draw (0,2) to (3.5,3.75); \draw (3.5,3.75) to [out=30,in=180] (4,4); \draw (4,4) to (5,4); \draw 
(0,3) to (3.5,4.75); \draw (3.5,4.75) to [out=30,in=180] (4,5); \draw (4,5) to (5,5); \draw (0,4) to (3.5,0.25); \draw (3.5,0.25) to [out=-30,in=180] (4,0); \draw (4,0) to (5,0); \draw (0,5) to (3.5,1.25); \draw (3.5,1.25) to [out=-30,in=180] (4,1); \draw (4,1) to (5,1); \draw (5,0) arc(-90:90:0.5); \draw (5,2) arc(-90:90:0.5); \draw (5,4) arc(-90:90:0.5); \filldraw[black] (5.5,0.5) circle (2pt); \filldraw[black] (5.5,2.5) circle (2pt); \filldraw[black] (5.5,4.5) circle (2pt); \node at (0,0) [anchor=east] {$\alpha_2$}; \node at (0,1) [anchor=east] {$\beta_2$}; \node at (0,2) [anchor=east] {$\alpha_t$}; \node at (0,3) [anchor=east] {$\beta_t$}; \node at (0,4) [anchor=east] {$\alpha_0$}; \node at (0,5) [anchor=east] {$\beta_0$}; \node at (0,1.5) [anchor=east] {$\cdots$}; \node at (5.5,4.5) [anchor=west] {$K_t$}; \node at (5.5,3.5) [anchor=west] {$\cdots$}; \node at (5.5,0.5) [anchor=west] {$K_0$}; \node at (5.5,2.5) [anchor=west] {$K_2$}; \filldraw[black] (0.64,3.3) circle (2pt); \filldraw[black] (1.275,3.625) circle (2pt); \filldraw[black] (1.275,2.625) circle (2pt); \filldraw[black] (1.925,2.95) circle (2pt); \filldraw[black] (1.92,1.945) circle (2pt); \filldraw[black] (2.55,2.275) circle (2pt); \filldraw[black] (2.55,1.275) circle (2pt); \filldraw[black] (3.2,1.59) circle (2pt); \node at (4.7,-0.1) [anchor=south] {$\Gamma:z_0^{-1}$}; \node at (4.6,0.9) [anchor=south] {$\Delta:z_0$}; \node at (4.7,1.9) [anchor=south] {$\Gamma:z_2^{-1}$}; \node at (4.6,2.9) [anchor=south] {$\Delta:z_2$}; \node at (4.7,3.9) [anchor=south] {$\Gamma:z_t^{-1}$}; \node at (4.6,4.9) [anchor=south] {$\Delta:z_t$}; \end{tikzpicture} \end{equation} Repeating the procedure, we conclude that the original partition function is a constant times the Boltzmann weight of the following configuration, which is $0$ as $\alpha_0=\beta_0$. Hence the original partition function is $0$. \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,0) arc(-90:90:0.5); \filldraw[black] (0.5,0.5) circle (2pt); \node at (0,0) [anchor=east] {$\alpha_0$}; \node at (0,1) [anchor=east] {$\beta_0$}; \node at (0.5,0.5) [anchor=west] {$K_0$}; \end{tikzpicture} \end{equation} \end{proof} Based on Proposition \ref{P4}, we obtain the following proposition. It explicitly computes $ \langle 0,\cdots,0|F^{-1}$, the components of $F S^{[m]}F^{-1}$ for $m\in\{0,1\}$, and the components of $FK$. \begin{proposition}\label{P5} We have \begin{equation}\label{EE1} \langle 0,\cdots,0|F^{-1}=\langle 0,\cdots,0|. 
\end{equation} For any $(i_1,\cdots,i_N),(j_1,\cdots,j_N)\in\{0,1\}^N$, we have \begin{equation}\label{EE2} (F S^{[0]} F^{-1})_{i_1\cdots i_N}^{j_1\cdots j_N}=\prod_{t=1}^N \mathbbm{1}_{i_t=j_t}\prod_{t: i_t=1}b_2(x_t;\epsilon_t), \end{equation} \begin{eqnarray}\label{EE3} (F S^{[1]} F^{-1})_{i_1\cdots i_N}^{j_1\cdots j_N}&=&\sum_{m=1}^N \mathbbm{1}_{i_m=0,j_m=1}\prod_{t:1\leq t\leq N, t\neq m}\mathbbm{1}_{i_t=j_t}\prod_{t: i_t=1,j_t=1}b_2(x_t;\epsilon_t)\nonumber\\ &&\times \prod_{t:t>m,i_t=1,j_t=1}a_2^{-1}(x_m,x_t;\epsilon_m,\epsilon_t)\prod_{t:i_t=0,j_t=0}b_2^{-1}(x_m,x_t;\epsilon_m,\epsilon_t), \end{eqnarray} \begin{eqnarray}\label{EE4} (FK)_{i_1,\cdots,i_{N}} &=& \prod_{a=1}^r \mathbbm{1}_{i_{2a-1}\neq i_{2a}} \prod_{a=1}^r z_a^{-\frac{1}{2}}\prod_{a:1\leq a\leq r, i_{2a-1}=0,i_{2a}=1}\frac{z_a+\sqrt{v}}{z_a^{-1}+\sqrt{v}}\nonumber\\ &\times& \prod_{(a,b):1\leq a<b\leq r, i_{2a-1}=1,i_{2b-1}=1}b_2(z_b^{-1},z_a;1,-1)\prod_{(a,b):1\leq a<b\leq r,i_{2a-1}=1,i_{2b}=1}b_2(z_b,z_a;-1,-1)\nonumber\\ &\times& \prod_{(a,b):1\leq a<b\leq r, i_{2a}=1,i_{2b-1}=1}b_2(z_b^{-1},z_a^{-1};1,1)\prod_{(a,b):1\leq a<b\leq r,i_{2a}=1,i_{2b}=1}b_2(z_b,z_a^{-1};-1,1). \end{eqnarray} Equivalently, we have \begin{equation} F S^{[0]} F^{-1}=\bigotimes_{t\in\{1,\cdots,N\}}\begin{pmatrix} 1 & 0\\ 0 & b_2(x_t;\epsilon_t) \end{pmatrix}_t, \end{equation} \begin{eqnarray} F S^{[1]} F^{-1}=\sum_{m=1}^N &&\bigotimes_{t\in \{1,\cdots,m-1\}} \begin{pmatrix} b_2^{-1}(x_m,x_t;\epsilon_m,\epsilon_t) & 0\\ 0 & b_2(x_t;\epsilon_t) \end{pmatrix}_t \bigotimes \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}_m\nonumber\\ && \bigotimes_{t\in\{m+1,\cdots,N\}} \begin{pmatrix} b_2^{-1}(x_m,x_t;\epsilon_m,\epsilon_t) & 0\\ 0 & \frac{b_2(x_t;\epsilon_t)}{a_2(x_m,x_t;\epsilon_m,\epsilon_t)} \end{pmatrix}_t, \end{eqnarray} \begin{eqnarray} FK=(\prod_{a=1}^r z_a^{-\frac{1}{2}})\sum_{(e_1,\cdots,e_r)\in \{\pm 1\}^r}&&\prod_{a=1}^r\frac{z_a^{-e_a}+\sqrt{v}}{z_a^{-1}+\sqrt{v}} \prod_{1\leq a<b\leq r} b_2(z_b^{-e_b},z_a^{e_a};e_b,-e_a)\nonumber\\ &&\bigotimes_{a\in\{1,\cdots,r\}}|e_a\rangle_{2a-1}\otimes |-e_a\rangle_{2a}, \end{eqnarray} where the basis vectors of each $W_t$ for $t\in\{1,\cdots,N\}$ are ordered as $|0\rangle$, $|1\rangle$, and for every $1\leq t\leq N$, \begin{equation*} |+1\rangle_t=\begin{pmatrix} 0\\ 1 \end{pmatrix}_t , \quad |-1\rangle_t=\begin{pmatrix} 1\\ 0 \end{pmatrix}_t. \end{equation*} \end{proposition} \begin{proof} The results (\ref{EE1}), (\ref{EE2}), and (\ref{EE3}) can be proved in a similar manner as Proposition \ref{P3}. Below we present the proof of (\ref{EE4}). We note that by an analog of Proposition \ref{P2}, \begin{equation} (FK)_{i_1\cdots i_N}=(R_{id}^{\rho}K)_{i_1\cdots i_N}, \end{equation} where $\rho\in S_N$ is the unique permutation that satisfies the condition \begin{eqnarray} && 0\leq i_{\rho(N)}\leq \cdots \leq i_{\rho(1)} \leq 1,\nonumber\\ && i_{\rho(t)}=i_{\rho(t+1)}\text{ implies }\rho(t)<\rho(t+1), \text{ for every } 1\leq t\leq N-1. \end{eqnarray} By spin conservation and the Boltzmann weights of the caps, in order for $(R_{id}^{\rho} K)_{i_1\cdots i_N}$ to be non-vanishing, we must have \begin{eqnarray*} &&i_{\rho(1)}=\cdots=i_{\rho(r)}=1,\quad i_{\rho(r+1)}=\cdots=i_{\rho(2r)}=0,\\ && \rho(1)<\cdots<\rho(r),\quad \rho(r+1)<\cdots<\rho(2r). \end{eqnarray*} Note that $(R_{id}^{\rho} K)_{i_1\cdots i_N}$ is the partition function of a lattice model. An illustration is given below. 
\begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (4,2) to (5,2); \draw (4,3) to (5,3); \draw (4,4) to (5,4); \draw (4,5) to (5,5); \draw (4,0) to (5,0); \draw (4,1) to (5,1); \draw (0,0) to (4,0); \draw (0,3) to [out=0,in=180] (4,1); \draw (0,1) to [out=0, in=180] (4,3); \draw (0,5) to [out=0,in=180] (4,2); \draw (0,2) to [out=0,in=180] (4,4); \draw (0,4) to [out=0,in=180] (4,5); \draw (5,0) arc(-90:90:0.5); \draw (5,2) arc(-90:90:0.5); \draw (5,4) arc(-90:90:0.5); \filldraw[black] (5.5,0.5) circle (2pt); \filldraw[black] (5.5,2.5) circle (2pt); \filldraw[black] (5.5,4.5) circle (2pt); \node at (0,0) [anchor=east] {$i_{\rho(1)}=1$}; \node at (0,1) [anchor=east] {$i_{\rho(2)}=1$}; \node at (0,3) [anchor=east] {$i_{\rho(r+1)}=0$}; \node at (0,2) [anchor=east] {$i_{\rho(r)}=1$}; \node at (0,4) [anchor=east] {$i_{\rho(2r-1)}=0$}; \node at (0,5) [anchor=east] {$i_{\rho(2r)}=0$}; \node at (0,3.5) [anchor=east] {$\cdots$}; \node at (0,1.5) [anchor=east] {$\cdots$}; \node at (5.5,4.5) [anchor=west] {$K_r$}; \node at (5.5,3.5) [anchor=west] {$\cdots$}; \node at (5.5,0.5) [anchor=west] {$K_1$}; \node at (5.5,2.5) [anchor=west] {$K_2$}; \node at (4.7,-0.1) [anchor=south] {$\Gamma:z_1^{-1}$}; \node at (4.6,0.9) [anchor=south] {$\Delta:z_1$}; \node at (4.7,1.9) [anchor=south] {$\Gamma:z_2^{-1}$}; \node at (4.6,2.9) [anchor=south] {$\Delta:z_2$}; \node at (4.7,3.9) [anchor=south] {$\Gamma:z_{r}^{-1}$}; \node at (4.6,4.9) [anchor=south] {$\Delta:z_{r}$}; \end{tikzpicture} \end{equation} By spin conservation, the Boltzmann weights of the cap vertices, and Proposition \ref{P4}, we can deduce that for any admissible state of the above lattice model, the situation that $i_{1}=i_{2}=1$ or $i_{1}=i_{2}=0$ cannot happen. Hence for any admissible state, either $i_{1}=1,i_{2}=0$, or $i_{1}=0,i_{2}=1$. For either case, by a similar argument as that in the proof of Proposition \ref{P3}, we can remove the cap vertex $K_1$ together with the two lines associated with it up to a constant factor. Repeating the above argument, we can deduce that for any admissible state and any $1\leq a\leq r$, either $i_{2a-1}=1,i_{2a}=0$, or $i_{2a-1}=0,i_{2a}=1$. Note that using a similar argument as that in the proof of Proposition \ref{P3}, if $i_{1}=1,i_2=0$, we can remove the cap vertex $K_1$ together with the two lines associated with it up to a factor of \begin{equation} z_1^{-\frac{1}{2}}\prod_{a:1<a\leq r, i_{2a-1}=1,i_{2a}=0}b_2(z_a^{-1},z_1;1,-1)\prod_{a: 1<a\leq r,i_{2a-1}=0,i_{2a}=1}b_2(z_a,z_1;-1,-1). \end{equation} Now note that the partition function of the following configuration can be computed as \begin{equation} \hfill \begin{tikzpicture}[baseline=(current bounding box.center)] \draw (0,-0.5) to [out=0,in=180] (2,0.5); \draw (0,0.5) to [out=0,in=180] (2,-0.5); \draw (2,-0.5) arc(-90:90:0.5); \filldraw[black] (2.5,0) circle (2pt); \node at (0,-0.5) [anchor=east] {$1$}; \node at (0,0.5) [anchor=east] {$0$}; \node at (2.5,0) [anchor=west] {$K_1$}; \end{tikzpicture} \end{equation} \begin{eqnarray} && c_1(z_1,z_1^{-1};-1,1)C(1,0;z_1)+b_2(z_1,z_1^{-1};-1,1)C(0,1;z_1)\nonumber \\ &=& \frac{(1-v)z_1}{z_1^{-1}-vz_1} z_1^{-\frac{1}{2}}-\frac{z_1-z_1^{-1}}{z_1^{-1}-vz_1}\sqrt{v} z_1^{\frac{1}{2}}=\frac{z_1^{-\frac{1}{2}}(z_1+\sqrt{v})}{z_1^{-1}+\sqrt{v}}. 
\end{eqnarray} Therefore, if $i_1=0,i_2=1$, we can remove the cap $K_1$ together with the two lines associated with it up to a factor of \begin{equation} \frac{z_1^{-\frac{1}{2}}(z_1+\sqrt{v})}{z_1^{-1}+\sqrt{v}}\prod_{a:1<a\leq r, i_{2a-1}=1,i_{2a}=0}b_2(z_a^{-1},z_1^{-1};1,1)\prod_{a: 1<a\leq r,i_{2a-1}=0,i_{2a}=1}b_2(z_a,z_1^{-1};-1,1). \end{equation} By repeatedly removing each cap and its two associated lines (from bottom to top), we conclude that \begin{eqnarray} (FK)_{i_1,\cdots,i_{N}} &=& \prod_{a=1}^r \mathbbm{1}_{i_{2a-1}\neq i_{2a}} \prod_{i=1}^r z_i^{-\frac{1}{2}}\prod_{a:1\leq a\leq r, i_{2a-1}=0,i_{2a}=1}\frac{z_a+\sqrt{v}}{z_a^{-1}+\sqrt{v}}\nonumber\\ &\times& \prod_{(a,b):1\leq a<b\leq r, i_{2a-1}=1,i_{2b-1}=1}b_2(z_b^{-1},z_a;1,-1)\prod_{(a,b):1\leq a<b\leq r,i_{2a-1}=1,i_{2b}=1}b_2(z_b,z_a;-1,-1)\nonumber\\ &\times& \prod_{(a,b):1\leq a<b\leq r, i_{2a}=1,i_{2b-1}=1}b_2(z_b^{-1},z_a^{-1};1,1)\prod_{(a,b):1\leq a<b\leq r,i_{2a}=1,i_{2b}=1}b_2(z_b,z_a^{-1};-1,1). \end{eqnarray} \end{proof} Now we finish the proof of Theorem \ref{Theorem2} using Proposition \ref{P5}. \begin{proof}[Proof of Theorem \ref{Theorem2}] Note that by Proposition \ref{P5} and (\ref{Par2}), $Z(\mathcal{T}_{\lambda,z})$ can be written as the sum of the following terms \begin{equation} (F S^{[m_{\lambda_1+r-\frac{1}{2}}]} F^{-1})_{0\cdots 0}^{i_1^{(\lambda_1+r-\frac{1}{2})}\cdots i_N^{(\lambda_1+r-\frac{1}{2})}}\cdots (F S^{[m_{\frac{3}{2}}]} F^{-1})_{i_1^{(\frac{5}{2})}\cdots i_N^{(\frac{5}{2})}}^{i_1^{(\frac{3}{2})}\cdots i_N^{(\frac{3}{2})}}(F S^{[m_{\frac{1}{2}}]} F^{-1})_{i_1^{(\frac{3}{2})}\cdots i_N^{(\frac{3}{2})}}^{i_1^{(\frac{1}{2})} \cdots i_N^{(\frac{1}{2})}}(FK)_{i_1^{(\frac{1}{2})}\cdots i_N^{(\frac{1}{2})}}, \end{equation} where $(i_1^{(l)},\cdots,i_N^{(l)})\in\{0,1\}^N$ for $l=\frac{1}{2},\frac{3}{2},\cdots, \lambda_1+r-\frac{1}{2}$ satisfy the following condition: if $m_l=0$, then $i_k^{(l)}=i_k^{(l+1)}$ for every $1\leq k\leq N$; if $m_l=1$, then there is a unique index $\alpha_l\in\{1,2,\cdots,r\}$ and a unique integer $e_l\in\{0,1\}$, such that $i_{2\alpha_l-e_l}^{(l)}=1,i_{2\alpha_l-e_l}^{(l+1)}=0,i_{2\alpha_l-1+e_l}^{(l)}=0,i_{2\alpha_l-1+e_l}^{(l+1)}=0$, and $i_k^{(l)}=i_k^{(l+1)}$ for every $k\neq 2\alpha_l-1,2\alpha_l$. Here, we have assumed that $(i_1^{(\lambda_1+r+\frac{1}{2})},\cdots,i_N^{(\lambda_1+r+\frac{1}{2})})=(0,\cdots,0)$. Let $\beta_t:=\alpha_{\lambda_t+r-t+\frac{1}{2}}$, $f_t:=2e_t-1$ for every $1\leq t\leq r$. Let $f:=(f_1,\cdots,f_r)\in\{\pm 1\}^r $. Note that $(\beta_1,\cdots,\beta_r)$ corresponds to a permutation $\sigma\in S_r$ such that $\sigma(t)=\beta_t$ for every $1\leq t\leq r$. Based on this observation and Proposition \ref{P5}, we can deduce that \begin{eqnarray*} Z(\mathcal{T}_{\lambda,z}) &=& \sum_{\sigma\in S_r}\sum_{f\in \{\pm 1\}^r}\prod_{i=1}^r z_{\sigma(i)}^{-f_i (\lambda_i+r-i)}\prod_{i=1}^r z_i^{-\frac{1}{2}}\prod_{i=1}^r\frac{z_{\sigma(i)}^{-f_i}+\sqrt{v}}{z_i^{-1}+\sqrt{v}}\prod_{i=1}^r\frac{z_i^{-1}-vz_i}{z_{\sigma(i)}^{-f_i}-z_{\sigma(i)}^{f_i}}\prod_{1\leq i<j\leq r}B(i,j,\sigma,f), \end{eqnarray*} where $B(i,j,\sigma,f)$ is given as follows. 
If $\sigma(i)<\sigma(j)$, then \begin{equation} B(i,j,\sigma,f):=b_2^{-1}(z_{\sigma(i)}^{-f_i},z_{\sigma(j)}^{-f_j};f_i,f_j) b_2^{-1}(z_{\sigma(i)}^{-f_i},z_{\sigma(j)}^{f_j}; f_i,-f_j); \end{equation} if $\sigma(i)>\sigma(j)$, then \begin{equation} B(i,j,\sigma,f):=a_2^{-1}(z_{\sigma(j)}^{-f_j},z_{\sigma(i)}^{-f_i};f_j,f_i)b_2^{-1}(z_{\sigma(i)}^{-f_i},z_{\sigma(j)}^{-f_j};f_i,f_j) b_2^{-1}(z_{\sigma(j)}^{-f_j},z_{\sigma(i)}^{f_i}; f_j,-f_i). \end{equation} By computation, we obtain that if $\sigma(i)<\sigma(j)$, \begin{eqnarray*} B(i,j,\sigma,f)&=& (z_{\sigma(i)}^{\frac{1}{2}}z_{\sigma(j)}^{-\frac{1}{2}}-v z_{\sigma(i)}^{-\frac{1}{2}} z_{\sigma(j)}^{\frac{1}{2}}) (z_{\sigma(i)}^{-\frac{1}{2}}z_{\sigma(j)}^{-\frac{1}{2}}-v z_{\sigma(i)}^{\frac{1}{2}} z_{\sigma(j)}^{\frac{1}{2}})\\ &&\times (z_{\sigma(i)}^{-\frac{1}{2}f_i}z_{\sigma(j)}^{\frac{1}{2}f_j}-z_{\sigma(i)}^{\frac{1}{2}f_i}z_{\sigma(j)}^{-\frac{1}{2}f_j})^{-1}(z_{\sigma(i)}^{-\frac{1}{2}f_i}z_{\sigma(j)}^{-\frac{1}{2}f_j}-z_{\sigma(i)}^{\frac{1}{2}f_i}z_{\sigma(j)}^{\frac{1}{2}f_j})^{-1}; \end{eqnarray*} if $\sigma(i)>\sigma(j)$, \begin{eqnarray*} B(i,j,\sigma,f)&=& (z_{\sigma(i)}^{-\frac{1}{2}}z_{\sigma(j)}^{\frac{1}{2}}-v z_{\sigma(i)}^{\frac{1}{2}} z_{\sigma(j)}^{-\frac{1}{2}}) (z_{\sigma(i)}^{-\frac{1}{2}}z_{\sigma(j)}^{-\frac{1}{2}}-v z_{\sigma(i)}^{\frac{1}{2}} z_{\sigma(j)}^{\frac{1}{2}})\\ &&\times (z_{\sigma(i)}^{-\frac{1}{2}f_i}z_{\sigma(j)}^{\frac{1}{2}f_j}-z_{\sigma(i)}^{\frac{1}{2}f_i}z_{\sigma(j)}^{-\frac{1}{2}f_j})^{-1}(z_{\sigma(i)}^{-\frac{1}{2}f_i}z_{\sigma(j)}^{-\frac{1}{2}f_j}-z_{\sigma(i)}^{\frac{1}{2}f_i}z_{\sigma(j)}^{\frac{1}{2}f_j})^{-1}. \end{eqnarray*} Thus we have \begin{eqnarray*} \prod_{1\leq i<j\leq r}B(i,j,\sigma,f) &=& \prod_{1\leq i<j\leq r}\frac{(z_i^{\frac{1}{2}}z_j^{-\frac{1}{2}}-vz_i^{-\frac{1}{2}}z_j^{\frac{1}{2}})(z_i^{-\frac{1}{2}}z_j^{-\frac{1}{2}}-vz_i^{\frac{1}{2}}z_j^{\frac{1}{2}})}{(z_{\sigma(i)}^{-\frac{1}{2}f_i}z_{\sigma(j)}^{\frac{1}{2}f_j}-z_{\sigma(i)}^{\frac{1}{2}f_i}z_{\sigma(j)}^{-\frac{1}{2}f_j})(z_{\sigma(i)}^{-\frac{1}{2}f_i}z_{\sigma(j)}^{-\frac{1}{2}f_j}-z_{\sigma(i)}^{\frac{1}{2}f_i}z_{\sigma(j)}^{\frac{1}{2}f_j})}. \end{eqnarray*} Therefore, we conclude that \begin{equation} Z(\mathcal{T}_{\lambda,z})=z^{-\rho_B}\prod_{i=1}^r(1-\sqrt{v}z_i)\prod_{1\leq i<j\leq r}((1-vz_iz_j)(1-vz_jz_i^{-1}))\sum_{\sigma\in B_r}\sigma(z^{\lambda+\rho_C}\prod_{i=1}^r (1+\sqrt{v}z_i^{-1})\Delta_C(z)^{-1}). \end{equation} \end{proof} \newpage \bibliographystyle{acm}
\section{Conclusion} \label{sec:conc} In this paper, we have advocated the TIIS mission for quick realization of ideas as working systems and the modernization of our techniques and tools to better support programming in the coming decades. Given the ubiquity of connected computer systems --- mostly in the form of smartphones --- we are just at the beginning of an explosion of ideas and applications waiting to be realized by professional developers or end users. Consequently, it is even more important that we do our best as a community to improve our programming practice to adapt it for the future challenges we will likely face. Achieving the TIIS mission will require significant efforts spanning many directions. We have identified, as a first step, several directions centered around four principles. We hope that the community will unite to move the state of the art forward toward the TIIS vision along these and other pertinent directions.

\section{Directions and Challenges} \label{sec:idea} The vision for quick transformation of ideas into software is broad, and advances in a number of directions are necessary and can move the state of affairs forward. We discuss several directions that we have identified that can be influential toward our goal. We have done early work along some of these directions and hope the community as a whole can help accelerate the progress toward improving programming and, in particular, the pace of concretizing ideas.

\subsection{Quick Experimentation} Live programming has gained momentum following Bret Victor's presentation~\cite{bretvictor:talk2012}, in which he highlighted the importance of an immediate connection between an idea and observing its effect, not just as a catalyst, but as an enabler, in an effective creative process. Since then, several live programming environments, \hbox{\emph{e.g.}}\xspace Xcode~\cite{xcode} (via its Playground feature) and LightTable~\cite{lighttable}, have been influenced by this principle. Prorogued programming~\cite{prorogue:onward2012} is a programming paradigm that explicitly deals with the issue of quick experimentation. It focuses on liberating the programmer from programming concerns that would otherwise have to be addressed before a partial, incomplete program can be run, so that the programmer can meaningfully experiment with it and observe its behavior. It does so by providing the ability to annotate function calls or type instantiations with a special keyword, \texttt{prorogue}. The \texttt{prorogue} keyword acts as a hint that lets the compiler know that the implementation of the particular method being called is unavailable. At runtime, after a prorogued call is executed, a lazy \emph{future} object is returned in lieu of the return value and the program execution continues. Later, if the value of that object is consulted during the program execution, the user will be asked to provide a concrete return value for the call interactively, while being presented with the actual arguments of that specific invocation. The user interaction will then be recorded and persisted for the rest of the program execution and for subsequent runs, so that the program can be run and experimented with in spite of the unimplemented method body. In effect, prorogued programming aids quick experimentation and top-down design by letting programmers freely rearrange their workflow as they see fit, rather than having to follow an order imposed by the toolchain they are using.
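To make the workflow concrete, the following is a minimal sketch of the idea in Python. The names (\texttt{prorogue}, \texttt{Future}, the example \texttt{sentiment} call, and the answer file) are purely illustrative; the published design is realized as a language extension with compiler support rather than as a library.

\begin{verbatim}
import json
from pathlib import Path

# Illustrative sketch only: every name below is hypothetical.
STORE = Path("prorogued_answers.json")    # answers persist across runs
answers = json.loads(STORE.read_text()) if STORE.exists() else {}

class Future:
    """Lazy placeholder returned by a prorogued call."""
    def __init__(self, name, args):
        self.key = f"{name}{args!r}"
    def value(self):
        if self.key not in answers:       # first consultation: ask the user,
            raw = input(f"Value for {self.key} (as JSON)? ")
            answers[self.key] = json.loads(raw)
            STORE.write_text(json.dumps(answers))  # record and persist
        return answers[self.key]

def prorogue(func):
    """Defer the concern: calling func yields a Future instead of failing."""
    return lambda *args: Future(func.__name__, args)

@prorogue
def sentiment(text):
    pass  # unimplemented; its concern is prorogued

score = sentiment("I love this design")   # execution continues past the hole
print(2 * score.value())                  # value consulted: user supplies it once
\end{verbatim}

On a second run the persisted answer is reused, so experimentation no longer blocks on the missing implementation.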
More interestingly, through \emph{hybrid computation}, prorogued calls can act as hooks to glue a program written in an imperative textual programming language into more domain-specific programming systems that would capture the human intent much better and in a more concise fashion for particular purposes. The other end of hybrid computation does not even have to be an imperative program. It can be a machine learning model that is trained to provide the desired function that would be hard to express in the host language. Alternatively, it can be an interactive system that computes the desired output through some user interaction. It is possible to have a hybrid computation engine that is mostly similar to mainstream textual programming languages, except it is much \emph{softer} when it comes to interpreting programmer intent, leaving room for the compiler to make educated guesses and at the same time be more lenient toward programmer mistakes, at the expense of precision.

\subsection{Programming Knowledge Reuse} Software is rarely written from scratch. Rather, programs are generally composed of smaller pieces. That makes software engineering activity largely a system integration process. Software engineers build more complex abstractions out of simpler ones, and that lets them build increasingly sophisticated systems. While seeing the effects of a program live helps, the question remains: given the vast amounts of source code available on the Internet, should we move from writing new code to casting programming as a search problem? The programming knowledge publicly available today comes in various forms, such as questions and answers on Stack Overflow~\cite{stackoverflow}, which sometimes include code snippets, or publicly accessible code repositories such as the ones hosted on GitHub~\cite{github}. Commercial software development endeavors also collect internal data about their development process, including the version history of the code base, data about bugs and defects, and free-form knowledge in the form of comments written on the code review tool, wikis, and sometimes in other forms, like tracking the time the programmers spend on various tasks, storing the search queries they perform~\cite{caitlin:codesearch2015}, or looking at their behavior within the development environment. In software engineering practice, major effort is expended to integrate various systems and assemble a program from building blocks. Given the large amount of code available, it is conceivable that what a programmer plans to write is already written and available in some shape or form. Effective code search can help the programmer discover existing functionality in existing code bases and import it into the code being written~\cite{bingcodesearch}. With a mechanism to locate pieces of functionality through existing APIs or code snippets mined from the Internet, we need to be able to run the resulting \emph{mashup} consisting of the different pieces and quickly experiment with them. A programming paradigm like prorogued programming is well-suited for this task. Proroguing programming concerns not only helps in piecing together the building blocks of functionality discovered in the existing code bases, but also provides a way to effectively insert \emph{holes} in the program, which can be filled later.
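As a rough illustration of one way such holes and mined building blocks could come together, the sketch below filters candidate snippets against input/output pairs recorded while experimenting; the candidate functions, the examples, and all names are made up for illustration rather than drawn from an actual search engine.

\begin{verbatim}
# Hypothetical sketch: validate code-search candidates against recorded
# input/output examples. A real engine would mine candidates from
# repositories and gather examples from interactions with prorogued calls.

def candidate_a(s):               # mined candidate 1
    return s.strip().lower()

def candidate_b(s):               # mined candidate 2
    return s.upper()

recorded_examples = [             # (input, expected output) pairs seen at runtime
    ("  Hello ", "hello"),
    ("WORLD", "world"),
]

def consistent(func, examples):
    """Keep a candidate only if it reproduces every recorded example."""
    try:
        return all(func(x) == y for x, y in examples)
    except Exception:             # crashing candidates are filtered out too
        return False

survivors = [f.__name__ for f in (candidate_a, candidate_b)
             if consistent(f, recorded_examples)]
print(survivors)                  # -> ['candidate_a']
\end{verbatim}

Anything that survives such a filter could then be offered to the programmer as a way to fill the corresponding hole.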
Filling these holes can be done through traditional implementation, \hbox{\emph{i.e.}}\xspace writing a body for the unimplemented method, or it can be done through more innovative means, like using the hole as a signal, in addition to the search query, to help the search engine know the context in which the code snippet being searched for is going to live. In addition to providing that context, the input/output examples persisted during runtime invocation of prorogued calls are a great source of input for an I/O-based code search engine and act as a final filter for validation of code found by a simple keyword-based code search engine. Collecting data about the programmer's actions is helpful in other ways as well. By looking at the actions the programmer performs within their development environment, for example, it is possible to predict what they intend to accomplish and propose shortcuts to achieve what they are aiming for more efficiently~\cite{ideplusplus,Murphy-Hill:recommending:fse12}, thereby educating the programmer and making them more effective in the future. Obviously, this can help the IDE designer improve the development environment and simplify its user interface as well. Reusing programming knowledge is also beneficial in activities beyond writing code. For instance, we are able to leverage debugging knowledge accumulated over previous debugging sessions to automatically help the programmer fix new, similar issues~\cite{oscilloscope:oopsla2012}. One way this has been accomplished is by collecting program traces that exhibit buggy behavior and pattern-matching new traces against the ones in the bug database, revealing information about the nature of the bug and how it was previously fixed, potentially helping the programmer understand and fix the new issue.

\subsection{Proactive Programming Assistant} Many program analysis tools have been developed. In practice, program analyses are primarily left to compile time and later. We believe that we should surface as much relevant information as possible to the programmer as soon as possible. Programming tools should capture runtime data and run background static or dynamic analysis while the code is being written, and guide the programmer throughout the coding process. With the popularity of compiler-as-a-library solutions like libclang~\cite{clangtooling} or Roslyn~\cite{roslyn}, we are already seeing this shift accelerating. Our editors are indeed becoming more proactive in issuing compiler warnings and providing safe refactoring tools. That said, the potential for capturing dynamic information and surfacing it in useful ways while coding remains largely untapped. Among other things, the captured data can feed into the live programming aspects of the system, providing the user with a concrete view of the program, instead of a purely abstract one relying solely on static analysis. We believe that determining what information is useful to the programmer, and how best to surface it, will be an exciting and impactful avenue for further research. Speculative analysis~\cite{BrunHEN2010:FOSER} and its follow-up work can perhaps be viewed as a specific instance of this direction, where the focus is on using speculative analysis in the background to help developers make certain decisions.

\subsection{Human Interface Innovation} Textual code is a precise and expressive medium for communicating intent. Looking back at the past half century of programming history, it is hard to see it going away anytime soon.
However, most of the computing devices shipped today are phones that do not have a physical keyboard or mouse. While it is conceivable that most of the professional programming activity would not be done on such devices, at least without some external accessories, it is almost certain that end-users would want to use them to accomplish custom computational goals or control systems by defining actions that would happen in response to specific events. Accomplishing this requires innovation both on the human interface front and in the backend engine. It is likely that many of these functionalities will be exposed via artificial intelligence-based assistants and will be expressed as interactive voice conversations. On the backend, we need to build more interactive programming systems that can make educated guesses and synthesize programs from incomplete specifications, and interactively adapt them as the specification is refined by gradually asking for and capturing additional user input. Moreover, even on more traditional computers, \hbox{\emph{e.g.}}\xspace desktops and laptops, we need fundamental interface innovations to support alternative programmers~\cite{TobyS:onward2012:recursive_drawing}, \hbox{\emph{i.e.}}\xspace people who are not professional programmers and who write programs that do computation and produce a result, which is the object of interest to them, as opposed to the program itself. An important class of people who would benefit from such interface innovations is people doing analysis on various data sets. Already, tools like IPython~\cite{ipython} that have more interactive characteristics and suit domain-specific use-cases well have gained widespread adoption in that community. We believe that there is enormous potential in this area to carry out research that would substantially and positively impact the lives of alternative programmers.

\section{Introduction} \label{sec:intro} There has been no shortage of creative technological ideas, but few have been realized --- it is a daunting task to transform an idea into a working prototype. Indeed, software engineering --- the process of expressing and refining ideas in a programming language --- has been regarded as one of the most challenging human endeavors. Programming innovations, such as procedural abstraction and object orientation, have helped increase programmer productivity. However, we still build software essentially the same way as we did decades ago. As a community, we should rethink and redesign methodologies and techniques for programming to make software development more natural and painless, and to help people realize their creative ideas. We believe that \emph{transforming ideas into software (TIIS)} should be identified as a long-term, catalytic mission for the software engineering community. Decades of research and development have led to better languages, methodologies, tools, environments, and processes. However, it is fair to say that most have been incremental improvements and do not promise the significant advances demanded by the mission. Identifying and highlighting the TIIS mission can help unite the community and clarify important research focuses to achieve significant innovations.
The TIIS mission requires a multi-faceted approach, which we organize around several key principles: \begin{itemize} \item \emph{Quick experimentation}: to provide developers with immediate feedback on their code modifications and allow them to experiment with incomplete systems; \item \emph{Programming knowledge reuse}: to allow developers quick access to the vast amount of accumulated programming knowledge and wisdom; \item \emph{Proactive programming assistant}: to monitor the developers' actions and proactively feed them relevant information about the program; and \item \emph{Intelligent, conversational interfaces}: to provide alternative interfaces that allow developers to express their intentions and conduct interactive exchanges with the system. \end{itemize} The two core questions in programming are ``What'' and ``How'': (1) ``What'' specifies the intention, and (2) ``How'' concerns the solution. The first three principles center around the ``How'' question, while the last principle centers around the ``What''. Next, we discuss the above principles, and pinpoint specific research problems and challenges.
\begin{acks} We thank Dr Maria Angela Ferrario for her comments on an early version of this paper, and the anonymous reviewers for their constructive feedback. \end{acks} \bibliographystyle{ACM-Reference-Format}

\section{Introduction} \label{section:introduction} Software engineers generate and transform abstractions but they are not themselves abstractions. Software engineers have the same human characteristics as non--software engineers. They have goals \cite{van2001goal} and intentions \cite{ghapanchi2011antecedents, lenberg2015behavioral}; they wrestle with motivation \cite{beecham2008motivation}, happiness \cite{graziotin2017consequences}, stress \cite{ostberg2020methodology}, politics \cite{bergman2002large}, ethics \cite{singer2002ethical} and human values \cite{winter2018measuring}. They work individually, and in teams, projects, organisations, the wider software industry, and society \cite{curtis1988field, defranco2017review}. The software engineer's work with abstractions therefore takes place within the common sphere of human activity and experience. In other words, software engineering (SE) is a human--centric activity \cite{grundy2020towards, lenberg2015behavioral}. Haidt writes that the human mind is a story processor, not a logic processor (\cite{haidt2012righteous}, p. 328) and that, ``\dots [a]mong the most important stories we know are stories about ourselves\dots'' (\cite{haidt2012righteous}, p. 328). Storytelling would therefore be a natural way for software engineers to make sense of their own and others' behaviour, and for researchers to better understand this human--centric activity. Given the above, this paper explores the following simply--stated question: what contribution can storytelling make to human--centric software engineering research? The paper identifies several opportunities for storytelling to contribute to software engineering research, recognises there are risks (that can be managed) in using storytelling, and proposes next steps. The remainder of this paper is organised as follows: in Section \ref{section:foundations} we provide foundations, including concepts, challenges in SE research and a brief review of prior research; in Section \ref{section:contribution-of-storytelling} we discuss several ways in which storytelling can contribute to SE research; and in Section \ref{section:discussion} we briefly consider next steps, and then conclude.

\section{Foundations} \label{section:foundations} \subsection{Concepts} There are many definitions for the terms \textit{story}, \textit{storytelling} and \textit{narrative}, and disciplines use other terms too. For example, Shaffer et al. \cite{shaffer2018usefulness} observe that journalism uses the term \textit{exemplar}, marketing uses \textit{testimonial}, psychology uses \textit{case study} or \textit{case history}, sociologists use \textit{personal stories} and medicine uses \textit{narrative}. Polkinghorne \cite{polkinghorne1995narrative} explores contrasting definitions of the term \textit{narrative}, distinguishing between the following: narrative as a prosaic discourse differentiated from poetic discourse, narrative as the qualitative inquiry into naturally produced linguistic expressions in their context, narrative as the collected body of data for analysis, narrative as the research output from qualitative inquiry, and narrative as a special type of discourse production, i.e., the story.
We use Polkinghorne's \cite{polkinghorne1995narrative} definition/s of storytelling, summarised in Table \ref{table:definitions_of_story}. It can be convenient to distinguish between story and storytelling, where story is the (static) output from a storytelling process. An exemplar of story is the novel. Although the story may in some sense be static, the story remains a dynamic interaction between the output and the reader. So although in some sense a printed story is fixed, it is available as input to a re--telling process: the story\textit{telling} continues in the reading of the story. Digressing briefly, this suggests an intriguing metaphor for software: software is static until it is read -- `retold' -- by a processor. \begin{table*} \caption{Example descriptions of storied narrative (from Polkinghorne \cite{polkinghorne1995narrative})} \label{table:definitions_of_story} \begin{tabular}{l p{13cm}} \toprule \# & Description\\ \midrule 1 & ``A story is a special type of discourse production. In a story, events and actions are drawn together into an organized whole by means of a plot. A plot is a type of conceptual scheme by which a contextual meaning of individual events can be displayed.'' (\cite{polkinghorne1995narrative}, p. 7)\\ 2 & ``\dots a specific kind of prose text (the story) and\dots the particular kind of configuration that generates a story (emplotment).'' (p. 5) \\ 3 & ``A storied narrative is the linguistic form that preserves the complexity of human action with its interrelationship of temporal sequence, human motivation, chance happenings, and changing interpersonal and environmental contexts. In this context, \textit{story} refers not only to fictional accounts but also to narratives describing `ideal' life events such as biographies, autobiographies, histories, case studies, and reports of remembered episodes that have occurred.'' (\cite{polkinghorne1995narrative}, p. 7; emphasis in original), and\\ 4 & ``The subject--matter of stories is human action. Stories are concerned with human attempts to progress to a solution, clarification, or unraveling of an incomplete situation.'' (\cite{polkinghorne1995narrative}, p. 7)\\ \bottomrule \end{tabular} \end{table*} Just as researchers and story--analysts do not agree on definitions for narrative and for story, so there is disagreement on the necessary elements of a story narrative. Indicative elements of storytelling include: a plot, a narrative structure, one or more protagonists, one or more antagonists, conflict, inciting incident/s, locations in time/s and space/s, and `space' for the reader to `inhabit' the story. (See, for example, \cite{coyne2015story,storr2020science} for further information.) Shaffer and Zikmund--Fisher \cite{shaffer2013all} identify several purposes and outcomes for storied narrative, and Shaffer et al. \cite{shaffer2018usefulness} identify nine effects of narratives. These purposes, outcomes and effects are summarised in Table \ref{table:purpose_outcome_effects_of_story}. As the table indicates, storytelling has many outcomes relevant to software engineering practice and research beyond entertainment.
\begin{table} \caption{Purpose, outcome \cite{shaffer2013all}, \& effect \cite{shaffer2018usefulness} of storytelling} \label{table:purpose_outcome_effects_of_story} \begin{tabular}{l p{2.6cm} p{4.2cm}} \toprule \# & Purpose & Outcome\\ \midrule a & To inform & Improved knowledge\\ & & Improved affective forecasting \\ b & To engage & Greater engagement\\ & & Greater transportation \\ & & Greater time spent with materials \\ c & To model behaviour & Increased participation\\ & & Increased shared decision making \\ & & Altered behavioural intentions \\ & & Increased uptake of behaviours \\ d & To persuade & Altered behavioural intentions\\ & & Increased uptake of behaviours \\ e & To provide comfort & Reduced psychological distress \\ & & Reduced anxiety\\ \midrule \# & Effect & \\ \midrule \multicolumn{3}{l}{Communicate information more effectively}\\ 1 & \multicolumn{2}{l}{More engaging} \\ 2 & \multicolumn{2}{l}{Better recall} \\ 3 & \multicolumn{2}{l}{Develop fewer counter--arguments}\\ \multicolumn{3}{l}{Change attitudes, judgements and behaviours}\\ 4 & \multicolumn{2}{l}{Increase attitudes} \\ 5 & \multicolumn{2}{l}{Reduce prejudice} \\ 6 & \multicolumn{2}{l}{Promote positive behaviour} \\ 7 & \multicolumn{2}{l}{Reduce negative behaviour} \\ 8 & \multicolumn{2}{l}{Improve work performance} \\ 9 & \multicolumn{2}{l}{Ignore base rate information} \\ & \multicolumn{2}{l}{and increase narrative information}\\ \bottomrule \end{tabular} \end{table}

\subsection{Challenges to software engineering} Like all disciplines, software engineering faces fundamental challenges. We very briefly consider some of those challenges here, shown in bold in the following discussion. In Section \ref{section:contribution-of-storytelling}, we suggest that storytelling can help us to make progress on at least some of these challenges. Software engineering practice and research is \textbf{multidisciplinary}, integrating many different disciplines from social science through to discrete, and now quantum, mathematics. There is the challenge of respecting the contrasting ways of knowing and types of knowledge from these disciplines. Evidence Based Software Engineering (EBSE) aims ``\dots to improve decision making related to software development and maintenance by integrating current best evidence from research with practical experience and human values'' (\cite{dyba2005evidence}, p. 59). To these we may add system constraints. These different elements -- research, practical experience, human values and constraints -- imply different kinds of knowledge, and there is the challenge of \textbf{integrating different kinds of knowledge}. The \textbf{context} of software practice undermines our ability to generalise findings from research. Context also introduces postmodernist perspectives into software engineering research, e.g., that there is no objective truth at the levels of reality relevant to software engineering. Practitioners and researchers value different kinds of \textbf{evidence}. Different kinds of evidence align with different ways of knowing and support different kinds of knowledge. This introduces the challenge of persuading software practice to improve on the basis of research. There is increasing \textbf{methodological diversity}, which brings a challenge in evaluating the quality of research. Finally, there is the challenge of \textbf{retaining, even honouring, meaning} in software engineering in the face of abstraction.
In two recent tweets, Michael ``GeePaw'' Hill writes the following: ``The software trade has a great many problems, but among the most debilitating and dangerous is the steadfast refusal to adequately incorporate the humanity of the makers into its culture, organization, and reasoning.'' \cite{hill2021a}, and, ``By a process of relentless abstraction and ruthless compartmentalization, we seek time and again to suppress, distract from, and minimize the most central fact of software development: humans make software.'' \cite{hill2021b}.

\subsection{Story and storytelling in research} \label{subsection:prior-research-on-story} Storytelling is recognised as a legitimate focus of study in other scientific disciplines, e.g., energy and climate change research \cite{moezzi2017using}, law \cite{anderson2005analysis, twining1994rethinking}, organisational research \cite{weick1995sensemaking}, and behavioural medicine \cite{shaffer2018usefulness}. And whilst storytelling is recognised in a variety of disciplines related to software engineering -- e.g., information systems \cite{Schwabe2019specialissue}, human--computer interaction \cite{blythe2017research}, computer supported cooperative work \cite{orr1986narratives}, information visualisation \cite{gershon2001storytelling}, multimedia systems \cite{lugmayr2017serious} and computer science education \cite{kelleher2007using, naul2020story} -- software engineering practice and research has tended to direct attention only at one particular type of \textit{story}, i.e., the user story. In behavioural medicine, Shaffer et al. \cite{shaffer2018usefulness} cite prior work to assert several benefits of narrative--as--story. But Shaffer et al. \cite{shaffer2018usefulness} also write that, ``\dots interventions with narratives appear to escape the scrutiny that interventions with statistical evidence receive from people who disagree with the message. This has important implications for health behavior change interventions [and, for the current paper, there are important implications for interventions in software engineering practice], where the goal is often to change attitudes towards an unhealthy or harmful health behavior [or, for the current paper, the goal of improving a suboptimal or counterproductive software practice].'' (\cite{shaffer2018usefulness}, p. 431). We return to Shaffer et al.'s work \cite{shaffer2018usefulness, shaffer2013all} later in this paper. There is some attention in software engineering research directed at narrative analysis and synthesis (e.g., \cite{cruzes2015case}); however, that work uses the term \textit{narrative} in the sense of the qualitative inquiry into and with texts rather than as \textit{story}. We find several papers that explicitly recognise story and storytelling in software engineering research: Ahonen and Sihvonen's \cite{ahonen2005things} story of software process improvement; Lenberg et al.'s unpublished manuscript \cite{lenberg2017behavioral} on guidelines for qualitative studies of behavioural software engineering; and several papers (e.g., \cite{lutters2007revealing,sim2011getting}) that study war stories using storytelling as a method of data collection. We consider these publications in the next section of the paper.
\section{The contribution of storytelling to human--centric SE research} \label{section:contribution-of-storytelling} We present several arguments below for why and how storytelling can contribute to software engineering research and practice.

\subsection{Storytelling as collected data} Lutters and Seaman \cite{lutters2007revealing} developed a simple protocol for collecting war stories from SE practitioners. That protocol guided practitioners on how to tell their stories. Lutters and Seaman \cite{lutters2007revealing} already demonstrate that software engineering research collects some kinds of story from software practitioners. The stories collected have been stories of exception, e.g., of something that has gone wrong. What constitutes an exception will vary from practitioner to practitioner, e.g., by definition an expert experiences different kinds of exception to a novice. Shaffer et al. \cite{shaffer2018usefulness} show that non--exceptional stories, e.g., of process, can also be valuable. Sim and Alspaugh \cite{sim2011getting} show that collecting war stories is not necessarily just about acquiring a text for analysis. So although stories have been collected in software engineering research, there is much greater opportunity for collecting stories than has been undertaken to date.

\subsection{Storytelling as analysis} \label{subsection:storytelling-as-analysis} Sim and Alspaugh \cite{sim2011getting} show that software engineering research has, to date, constrained the way it analyses stories and therefore limited the opportunities for learning from stories. They demonstrate richer ways of analysing stories, drawing on the humanities. They provide example approaches that, due to space, we are unable to review here.

\subsection{Storytelling and ways of knowing} There are many different classifications of knowledge and therefore of ways of knowing. We use Heron's \cite{heron1996co} four ways of knowing, briefly summarised here in Table \ref{table:heron_four_ways_of_knowing}. For a detailed exploration of these four ways of knowing, see Heron's book, \textit{Co--operative inquiry: research into the human condition} \cite{heron1996co}. A research field such as software engineering research that is, or seeks to be, multidisciplinary (e.g., \cite{sim2001beg}) must be open to different kinds of knowledge, and therefore different ways of knowing, and not only to propositional knowledge. \subsubsection{Grounding ways of knowing} In Table \ref{table:heron_four_ways_of_knowing}, the four ways of knowing are ordered, with experiential knowing positioned `lowest' and practical knowing positioned `highest'. This positioning is because the higher--positioned ways of knowing are grounded in the lower--positioned ways of knowing. In software engineering research, for example, we `ground' our propositional knowledge in empirical evidence, and we seek to ground recommendations to practitioners (cf. practical knowledge) in propositional knowledge. \subsubsection{Intuition and presentational knowing} Heron's description of presentational knowing as ``\dots an intuitive grasp of the significance of patterns'' might misleadingly suggest that presentational knowing is an unconscious or semi--conscious activity. Storytelling is a form of conscious, intentional presentational knowledge. This is of course most obvious with published novels.
\subsubsection{Assumptions about the status of ways of knowing} Heron is challenging at least two assumptions, i.e., 1) that propositional knowledge should be the pre--eminent way of knowing, and 2) that propositional knowledge is self--sufficient. Sim et al. \cite{sim2011getting,sim2008marginal} discuss methodical and amethodical knowledge, also challenging the pre--eminence of any particular kind of knowledge. \subsubsection{The status of experiential and presentational ways of knowing in SE research} It is not clear whether these two types of knowing are distinctly recognised in software engineering research. When we conduct interviews, surveys, focus groups, etc., we appear to be accessing presentational ways of knowing. It is frequently hard to directly access the experiential knowledge of others. \begin{table} \caption{Heron's \cite{heron1996co} four ways of knowing} \label{table:heron_four_ways_of_knowing} \begin{tabular}{l p{12cm}} \toprule Way & Brief description\\ \midrule Practical & ``\dots knowing how to exercise a skill\dots'' (p. 52)\\ Propositional & ``\dots intellectual statements, both verbal and numeric, conceptually organized in ways that do not infringe the rules of logic and evidence. Propositional knowledge is regarded both as pre--eminent and self--sufficient.'' (p. 33)\\ Presentational & ``\dots an intuitive grasp of the significance of patterns as expressed in graphic, plastic, moving, musical and verbal art--forms\dots'' (p. 52), e.g., story as presentational knowing.\\ Experiential & ``\dots imaging [not imagining] and feeling the presence of some energy, entity, person, place, process or thing.'' (p. 52)\\ \bottomrule \end{tabular} \end{table}

\subsection{Integrating storytelling with evidence and argument for evaluation and assessment} \label{subsection:story-evidence-argument} Anderson et al. \cite{anderson2005analysis} write: ``\dots in factual enquiries\dots for a story to be accepted as true it needs to be warranted by (anchored in) evidence. A well--informed story needs to be coherent, but to be true, it must be both plausible and backed by particular evidence.'' (\cite{anderson2005analysis}, p. 283) Rainer \cite{rainer2017using} developed a preliminary methodology for identifying, extracting and analysing arguments, evidence and explanations from texts, and for presenting those in a structured, integrated way. One type of explanation was the story. He demonstrated the application of the methodology to several examples taken from a blog post by Joel Spolsky \cite{Spolsky2006}, these examples being `micro--war stories'. The methodology seeks to address Anderson's position, e.g., to evaluate the story using evidence. The methodology is open to the limitation that Sim and Alspaugh \cite{sim2011getting} observed with Lutters and Seaman's \cite{lutters2007revealing} approach, i.e., it extracts facts and information from a text. There is therefore the opportunity to extend the methodology. For example, there are opportunities to use story to help evaluate and assess software engineering. Anderson \cite{anderson2005analysis} shows how storytelling can help identify gaps in evidence and can help generate hypotheses.

\subsection{Storytelling as output} There appear to be very few examples where SE researchers publish their research output \textit{as a story}. Ahonen and Sihvonen's \cite{ahonen2005things} paper is the only complete example we can find.
Ahonen and Sihvonen \cite{ahonen2005things} present a real--world story over a 2.5yr period, told from the point of view of an individual software engineer, of a Software Process Improvement (SPI) effort. They write, ``The story\dots is based mainly on the personal experiences of a single software engineer. Those experiences have been documented by the engineer\dots, but those experiences have [also] been checked by interviewing several other people\dots'' and, ``The reason why a story like this should be interesting for others is that the mistakes made in SPI efforts during the documented time are quite universal.'' There are many examples of where researchers present fragments of story, e.g., Sim and Alspaugh \cite{sim2011getting, lutters2007revealing}. One explanation for the limited recounting of stories is the amount of publication `real estate' they require. \subsection{Storytelling as intervention} \label{subsection:story-intervention} To explain how storytelling can act as an intervention we draw on Shaffer et al.'s \cite{shaffer2018usefulness} explanatory model. We choose this model because it has been developed by and for scientists. Shaffer et al. \cite{shaffer2018usefulness} developed the Narrative Immersion Model (NIM) to better understand how storied narrative works so that storied narrative might be used to help behaviour change, e.g., in patients. The NIM may therefore be understood as a model to support intervention. Whilst an intervention might occur \textit{after} the research has taken place it is often the case that we design for intervention, e.g., design our studies with the intention of using the results to improve software practice. One implication is that researchers consider storytelling as part of the design of the study. Drawing on their prior work \cite{shaffer2013all}, Shaffer et al. \cite{shaffer2018usefulness} define three types of story (see Table \ref{table:how-story-works}). These types are not intended to be exclusive: an actual story might fit more than one type. \textit{Outcome} stories describe how a situation ends. \textit{Process} stories describe how a decision was made. \textit{Experience} stories describe what a real--world situation was really like. Each type of story has a different effect on behaviour. Depending on the kind of effect the researchers seek (cf. Table \ref{table:how-story-works}), the researchers might design for different kinds of story. \begin{table} \caption{Shaffer et al.'s types of story \cite{shaffer2018usefulness, shaffer2013all}} \label{table:how-story-works} \begin{tabular}{l p{8cm}} \toprule Types & Effect\\ \midrule Outcome & Ability to persuade\\ & Change attitudes \\ & Alter intentions \\ & Alter behaviours \\ Process & Identify relevant decision attributes\\ & Model decision processes\\ & Change how people search for information \\ Experience & Reduce affective forecasting errors\\ & Facilitate more accurate perspective--taking\\ & Improve resilience\\ \bottomrule \end{tabular} \end{table} Comparing Table \ref{table:how-story-works} with Table \ref{table:purpose_outcome_effects_of_story}, notice that the three types of story identified by Shaffer et al. \cite{shaffer2018usefulness} in Table \ref{table:how-story-works} do not seem to have a direct impact on improving knowledge; or in other words, these stories do not seem to have the function to inform. This may be because Shaffer et al. have not had the opportunity to investigate this outcome. 
We make this observation because software engineering research often focuses on knowledge and, more specifically, on propositional knowledge. Storytelling does not seem to naturally fit that type of knowledge, but instead storytelling complements it. As well as talking about types of story and their effects, Shaffer et al. \cite{shaffer2018usefulness} also talk about the magnitude of effect of a story. Shaffer et al. \cite{shaffer2018usefulness} propose a continuum through which a reader may `travel' from Interest through Involvement to Immersion. For Shaffer et al., the deeper into the narrative a reader `travels', the more powerful the narrative will be in influencing behaviour. Characteristics of the narrative promote deeper involvement. These characteristics appear to align the NIM with elements of storytelling (cf. Section \ref{section:foundations}) and with advice on creative writing.

\subsection{Storytelling as advocacy} \label{storytelling-as-advocacy} Stories and storytelling provide a way to advocate for, and/or to raise awareness of, important issues such as human values. Consider a software engineering team developing a software system for managing information about children in state care homes. What insights might Sissay's \cite{sissay2019my} memoir, of his life in managed care, offer to that software engineering team to help ensure they develop a human--centric software system, and not simply develop an algorithmic bureaucracy? Or, as another example, what insights might Kafka's \textit{The Trial} and \textit{The Castle} provide on AI--based decision--making (e.g., \cite{beck2012weber})? Other examples include: Ford's recently published memoir, \emph{Think Black}~\cite{ford2021thinkblack}, about his father, John Stanley Ford, hired by Thomas J. Watson to become IBM's first black software engineer; Wiener's memoir, \emph{Uncanny Valley}~\cite{Wiener2020uncanny}, describing her experiences working in technology companies in Silicon Valley; and Kim's two fictional accounts, one co--written with colleagues, about DevOps \cite{kim2018phoenix} and software development \cite{kim2019unicorn}.

\subsection{The risks of storytelling} Anderson et al. \cite{anderson2005analysis} write, ``\dots story telling is vulnerable to abuse. It may be true that stories and storytelling are psychologically necessary to decision--making in legal contexts, but they are dangerous in that they often can be used to violate logical standards, appeal to emotion rather than reason, and subvert legal principles and conventions.'' (\cite{anderson2005analysis}, p. 280). Anderson et al. \cite{anderson2005analysis} urge careful use of story but do not argue \emph{against} the use of story. They list a number of examples of dangers with story. They also develop a `rough working protocol' for assessing the plausibility, coherence and evidentiary support for a story. All research methods have strengths and weaknesses, and the researcher seeks to increase their awareness of the threats to validity that arise with each method. One contribution of storytelling is to help researchers remain aware of threats. Such threats don't just occur in the telling of stories. For example, Sim and Alspaugh \cite{sim2011getting} observe that storytelling is performance: interviewees select and filter what and how they share information. Such selection and filtering doesn't just occur with the telling of a story, however. Any interview can be understood as a performance, with the threats to validity that might arise.
A highly--influential paper in software engineering research, Parnas' \cite{parnas1986rational} \textit{A rational design process: how and why to fake it}, explicitly recognises the value of `faking' something: ``We will never find a process that allows us to design software in a perfectly rational way. The good news is that we can \textit{fake it}. We can \emph{present} our system to others as \emph{if we} had been rational designers and it pays to \textit{pretend} [to] do so during development and maintenance. (\cite{parnas1986rational}, p. 251; emphasis added). Parnas explains the scope of the pretence and argues for its value in certain circumstances. In a fundamental way, abstraction is also pretence, for it presents a decontextualised and therefore simplified version of reality. We accept that pretence because there is value in doing so. We suggest that storytelling shouldn't be summarily dismissed on the basis of pretence. \section{Steps toward realising the vision} \subsection*{Notes} \begin{enumerate} \item Focus here on research, but at least some applies to practice \begin{itemize} \item Steps for research \item Steps for practice \end{itemize} \item Develop frameworks or models cf. AXE \item Developing new skills \item Reading groups \item Writing groups cf. Faber Academy \item Storytelling platforms \& resources \begin{itemize} \item Network \item StoryCorps \item Agile Corps \item Story Collider \end{itemize} \item Dissemination of \textit{argued stories} and \textit{evidence--based storytelling} \begin{itemize} \item Conference workshop/s \item Sections in journals \end{itemize} \item Change to metrics of impact \item Standards for acceptable empirical research in SE \item Multidisciplinary collaboration \item Supplementary materials (papers as code, papers as story) \begin{itemize} \item cf. PROMISE \item papers with code \end{itemize} \item Methodology \begin{enumerate} \item War story interview \item Story Stem Technique \item Protocol/s for handling stories \begin{enumerate} \item Argument schemes \item Legal protocols \end{enumerate} \item Integrating story with other kinds of knowing \end{enumerate} \end{enumerate} \section{Discussion} \label{section:discussion} \subsection{Where next?} There are many aspects of storytelling in human--centric software engineering that we have not discussed in this paper, and further research is needed. Areas requiring further attention include: \begin{enumerate} \item A systematic review of storytelling that builds on the brief review presented in Section \ref{subsection:prior-research-on-story}. Such a review should consider both the humanities and the sciences, as well as creative writing and `story analysts', e.g., \cite{coyne2015story, storr2020science}. \item The development of conceptual frameworks for storytelling that build on the foundations introduced in Section \ref{section:foundations}. Included within this framework would be the development of appropriate terminology, e.g., \textit{argued storytelling} or \textit{evidence--based storytelling} are phrases that could communicate the intent of the kind of storytelling we consider here. \item The development of research methodology and methods that draw on work from other disciplines. Several examples have been discussed in this paper, e.g., \cite{shaffer2018usefulness,lutters2007revealing,sim2011getting,anderson2005analysis,rainer2017using,tang2016making}. \item Guidance on the use of methodology and on assessing the quality of storytelling research, cf. 
\cite{ralph2021empirical}. \item Developing ways in which stories in SE can be evaluated and assessed, but also how storytelling might be used to evaluate and assess software engineering. For example, Section \ref{subsection:storytelling-as-analysis} briefly discussed the use of storytelling to generate hypotheses, Section \ref{subsection:story-evidence-argument} discussed the integration of storytelling with argument and evidence, and Section \ref{subsection:story-intervention} discussed the use of storytelling for intervention. \item Investigating the use of storytelling as an approach to knowledge exchange with industry, and as a method of appropriate intervention in practice, cf. \cite{shaffer2018usefulness}. \item Investigating the benefits of a more sophisticated understanding of storytelling, beyond the user story, for software engineering practice. \item The use of both reading groups and writing groups for developing skills in appreciating stories in research and in practice. Section \ref{storytelling-as-advocacy} presented examples that reading groups might use. \item The development of platforms, resources and (social) networks to disseminate work, including supplementary materials. Examples to draw on include StoryCorps \cite{StoryCorps2021}, The Story Collider \cite{StoryCollider2021} and experience repositories \cite{schneider2003effective}. \end{enumerate} \subsection{Conclusion} In this paper, we position storytelling as a particular kind of narrative. We show that whilst narrative synthesis is recognised in software engineering, storytelling has not been properly considered. We explore types of story and the outcomes and effects of these different story types. We argue that storytelling has a valuable, even necessary, contribution to many areas of human--centric software engineering. We recognise that stories can be dangerous (e.g., because they can mislead) but also that these dangers might be assessed and mitigated. We suggest that storytelling may act as a potential counter--balance to abstraction, and a means to retain and honour meaning in software engineering.
\section{Introduction}\label{sec_intro} The origin of Phobos and Deimos has been intensively debated in recent years. Historically, the capture of a passing D-type asteroid, i.e., the capture hypothesis, has been motivated by the spectral similarities of such asteroids to the moons \citep{Bur92,Mur91}. Alternatively, a giant impact on Mars could form a debris disk around Mars \citep{Cra11,Hyo17a,Hyo17b,Hyo18a}, i.e., the giant impact hypothesis, from which Phobos and Deimos may accrete as rubble-pile objects \citep{Ros16,Can18}. It has recently been proposed that today's Phobos may not be the primordial moon produced directly by, e.g., either a capture or a giant impact. Instead, after the first generation of Phobos (Phobos's ancestor) formed, it may have tidally spiraled inward within the Martian Roche limit, been recycled into rings via tidal disruption, and then been resurrected as a smaller moon via the ring's spreading\footnote{In this hypothesis, Deimos is primordial because Deimos orbits outside the Martian synchronous orbit and thus does not tidally spiral inward.}. Today's Phobos may appear after several cycles of this ring-moon recycling evolution \citep{Hes17,Cuk20}, although this view is challenged by the fact that today's Mars does not possess the bright particulate rings that are expected to be left behind as a natural consequence of the ring-moon recycling hypothesis (Madeira et al. in prep). Alternatively, by performing tidal-evolution calculations integrated backward in time, \cite{Bag21} reported that Phobos and Deimos could once have had non-zero eccentricities, such that Phobos's apocenter and Deimos's pericenter could cross while their semi-major axes resided inside and outside the Martian synchronous orbit ($\sim 6 R_{\rm Mars}$, where $R_{\rm Mars}$ is the radius of Mars), respectively. From these findings, they envisioned that Phobos and Deimos were once a single large moon, which was later split into two $-$ as Phobos and Deimos $-$ presumably via a catastrophic impact. However, their view raises several challenging issues. First, the impact process itself was not studied, and thus the likelihood of such an impact, i.e., the impact probability, and the outcome of the impact $-$ whether it splits a single moon into only two objects with reasonable eccentricities and inclinations $-$ were not demonstrated. Second, even if an impact could indeed form two moons such as Phobos and Deimos, the successive orbital evolution, including mutual interactions (gravity and collisions) between the moons, was not investigated. The orbital evolution of \cite{Bag21}, integrated backward in time, was solved in terms of the orbital elements (e.g., semi-major axis, eccentricity, and inclination) and not with a direct $N$-body approach, neglecting the gravitational interactions and collisions during moon-moon close encounters. Because Phobos and Deimos initially have orbits that cross each other, the successive orbital evolution may not be as simple as envisioned and may result in a destructive collision between the two moons. In this study, we especially focus on the second question $-$ the successive orbital evolution after the hypothetical splitting of a single moon into Phobos and Deimos $-$ using a direct $N$-body approach for the numerical integration. We focus on the short-term evolution ($<10^4$ years), over which the tidal evolution of the moons can be ignored (see Sec.~\ref{sec_tides}). We then show that the two moons generally collide during the successive orbital evolution within $\sim 10^4$ years.
We argue that the impact results in a disruptive outcome and the formation of a debris ring. Such an evolutionary path is completely different from the one \cite{Bag21} envisioned. This paper is structured as follows. In Section \ref{sec_method}, we describe our methods of orbital integration. In Section \ref{sec_results}, we present the numerical results of the orbital integrations of the two moons that are hypothetically split from a single ancestral moon and show that the two moons likely collide. In Section \ref{sec_impact}, we perform additional impact simulations of the two moons and show that the outcome is disruptive, forming a debris ring. In Section \ref{sec_discussion}, we discuss the dynamical fate of the debris ring and envision the formation of multiple moons (more than three moons). Finally, Section \ref{sec_summary} summarizes our conclusions. \section{Numerical method}\label{sec_method} \begin{deluxetable*}{ccc} \tablenum{1} \tablecaption{Parameters used in this study \label{table}} \tablewidth{0pt} \tablehead{ \colhead{name} & \colhead{mass [kg]} & \colhead{mean radius [km]} } \startdata Mars & $6.39 \times 10^{23}$ & 3389.5 \\ Phobos & $1.06 \times 10^{16}$ & 11.3 \\ Deimos & $1.48 \times 10^{15}$ & 6.3 \\ \enddata \end{deluxetable*} \begin{figure*}[t!] \plotone{Fig_a_e_initial.eps} \caption{Initial distribution of $e_{\rm Dei}$ and $a_{\rm Dei}$. Each black point represents one initial condition of our numerical simulations (600 points). Left, middle, and right panels show cases of $a_{\rm Pho}=5.0$, $5.5$, and $6.0R_{\rm Mars}$, respectively. We set $e_{\rm Pho}=0.15-0.35$, $e_{\rm Dei}=0.0-0.2$, and $a_{\rm Dei}=6.5-7.5R_{\rm Mars}$, following \cite{Bag21}. $a_{\rm Dei}$ is obtained from Eq.~(\ref{eq_ini}) with $e_{\rm Pho}$ and $e_{\rm Dei}$ randomly distributed within these ranges. The blue, green, and red curves indicate $a_{\rm Dei}$ for $e_{\rm Pho}=0.15$, $0.25$, and $0.35$, respectively. \label{fig_initial}} \end{figure*} \subsection{Orbital calculation} We performed three-body (Mars-Phobos-Deimos) numerical simulations. Orbits of the bodies were integrated by using the fourth-order Hermite method \citep{Mak92,Kok04} and the numerical code was originally developed in previous studies \citep{Hyo16}. We included the second-order and fourth-order oblateness moments of Mars (i.e., $J_2$ and $J_4$)\footnote{The $J_3$ term could periodically change eccentricity and inclination but it is negligible for our chosen parameters \citep{Liu21}.}. The equations of motion in this study (the $xy$-plane is the Martian equatorial plane) are \begin{align} \ddot{x}_i &= -GM_{\rm Mars} \displaystyle \frac{x_i}{|r_i|^3} \left( 1 - J_2\Psi_{i2} -J_4 \Psi_{i4} \right) - \sum_{j \neq i} G m_j \displaystyle \frac{x_i - x_j}{r_{ij}^3} \\ \ddot{y}_i &= -GM_{\rm Mars} \displaystyle \frac{y_i}{|r_i|^3} \left( 1 - J_2\Psi_{i2} -J_4 \Psi_{i4} \right) - \sum_{j \neq i} G m_j \displaystyle \frac{y_i - y_j}{r_{ij}^3} \end{align} \begin{align} \ddot{z}_i = -GM_{\rm Mars} \displaystyle \frac{z_i}{|r_i|^3} \left( 1 - J_2\Psi_{i2} -J_4 \Psi_{i4} + J_2\Phi_{i2} + J_4\Phi_{i4} \right) \nonumber \\ - \sum_{j \neq i} G m_j \displaystyle \frac{z_i - z_j}{r_{ij}^3} , \end{align} where $G$ and $M_{\rm Mars}$ are the gravitational constant and the mass of Mars, respectively. The subscripts $i$ and $j$ indicate Phobos or Deimos. $r_{i}=\left( x_{i}, y_{i}, z_{i} \right)$ is the position vector and $r_{\rm ij}=|r_{\rm i} - r_{\rm j}|$. $m_{\rm j}$ is the mass of Phobos or Deimos.
$\Psi_{i2}$, $\Psi_{i4}$, $\Phi_{i2}$, and $\Phi_{i4}$ are \citep[][]{Sin85} \begin{eqnarray} \Psi_{i2} &=& \displaystyle \frac{R_{\rm Mars}^2}{r_i^2} P'_3 \left( \displaystyle \frac{z_i}{r_i} \right) \\ \Psi_{i4} &=& \displaystyle \frac{R_{\rm Mars}^4}{r_i^4} P'_5 \left( \displaystyle \frac{z_i}{r_i} \right) \\ \Phi_{i2} &=& 3\displaystyle \frac{R_{\rm Mars}^2}{r_i^2}, \hspace{2em} \\ \Phi_{i4} &=& \displaystyle \frac{R_{\rm Mars}^4}{r_i^4} Q_4 \left(\displaystyle \frac{z_i}{r_i} \right) , \end{eqnarray} where the $P_{\rm n}'(x)$ terms are the derivative of the Legendre polynomial, $P_{n}(x)$, and $P'_4(x)=xQ_4(x)$ given as \begin{eqnarray} P'_3(x) &=& \displaystyle \frac{15}{2}x^2 - \displaystyle \frac{3}{2} \\ P'_5(x) &=& \displaystyle \frac{315}{8}x^4 - \displaystyle \frac{105}{4}x^2 + \displaystyle \frac{15}{8} \\ Q_4(x) &=& \displaystyle \frac{35}{2}x^2 - \displaystyle \frac{15}{2} . \end{eqnarray} Here, $J_{2}= 1.96 \times 10^{-3}$ and $J_{4} = -1.54 \times 10^{-5}$ \citep{Yor95,Liu11}.\\ We note that the other external perturbation forces may slightly change the eccentricities of the moons. The most important perturbation could be evection \citep[][in analogy to Triton]{Gol89}. The amplitude of the periodic change in $e$ due to evection is of the order of $\sim (n_{\rm p}/n_{\rm s}) e$, where $n_{\rm p}$ and $n_{\rm s}$ are the mean motions of the planet and the satellite, respectively \citep[][see their Eq.~(19)]{Cuk04}. As $n_{\rm p}/n_{\rm s} \lesssim 10^{-3}$ for the cases of Phobos and Deimos, the amplitude of change of pericenter of the moons is estimated to be smaller than the size of the moons, making our calculations largely unaffected. Thus, we neglected evection in this study. \subsection{Initial conditions} \label{sec_initial} Following the view of \cite{Bag21} in that Phobos and Deimos are split from a single progenitor moon, we set Phobos's apocenter and Deimos's pericenter initially equal as \begin{equation} r_{\rm Pho,apo} \left( = a_{\rm Pho} \left( 1 + e_{\rm Pho} \right) \right) = r_{\rm Dei,peri} \left( = a_{\rm Dei} \left( 1 - e_{\rm Dei} \right) \right) , \label{eq_ini} \end{equation} where $a$ and $e$ are semi-major axis and eccentricity, respectively. In this paper, the subscripts of "Pho" and "Dei" indicate Phobos and Deimos, respectively. For the orbits of Phobos and Deimos to initially be in touch, we set the argument of periapsis, $\omega$, and the longitude of ascending node, $\Omega$, as $|\omega_{\rm Pho} - \omega_{\rm Dei}| = \pi$ and $\Omega_{\rm Pho}=\Omega_{\rm Dei}$, respectively. Inclinations, $i$, of Phobos and Deimos hardly change in billions of years of tidal evolution. We used $i_{\rm Pho}=0.021$ rad ($\sim 1.2$ deg) and $i_{\rm Dei}=0.015$ rad ($\sim 0.86$ deg) that would be the largest difference between those of Phobos and Deimos \citep[see][]{Bag21}. A smaller difference in their inclinations indicates that the two orbital planes are more coincident, leading to a more frequent close encounter. Even if we use today's values (i.e., the Laplace plane of Deimos is not the same as that of Phobos or the Martian equator), it should not significantly affect the collisional timescale reported in this study. This is because the inclination of Deimos to the Laplace plane ($\sim 2$ deg) is larger than the Laplace plane tilt ($< 1$ deg). We initially randomized the eccentric anomaly that defines the position of a body along a given elliptic Kepler orbit. Table \ref{table} lists other physical parameters used in this study. 
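For concreteness, the Mars-centric acceleration entering the equations of motion above (including the $J_2$ and $J_4$ oblateness terms, but omitting the mutual gravity between the moons) can be assembled as in the following Python sketch. This is only an illustrative reimplementation of the expressions above, with parameters taken from Table~\ref{table}; the function and variable names are ours, and it is not the fourth-order Hermite integrator actually used in this study.
\begin{verbatim}
import numpy as np

# Mars parameters (Table 1) and the J2, J4 values quoted above
GM_MARS = 6.674e-11 * 6.39e23   # [m^3 s^-2]
R_MARS  = 3389.5e3              # [m]
J2, J4  = 1.96e-3, -1.54e-5

def mars_acceleration(r):
    """Oblate-Mars acceleration on a moon at position r = (x, y, z),
    with the z-axis normal to the Martian equatorial plane."""
    x, y, z = r
    d = np.sqrt(x*x + y*y + z*z)
    u = z / d
    P3p = 7.5*u**2 - 1.5                              # P'_3(u)
    P5p = (315.0*u**4 - 210.0*u**2 + 15.0) / 8.0      # P'_5(u)
    Q4  = 17.5*u**2 - 7.5                             # Q_4(u)
    psi2, psi4 = (R_MARS/d)**2 * P3p, (R_MARS/d)**4 * P5p
    phi2, phi4 = 3.0*(R_MARS/d)**2, (R_MARS/d)**4 * Q4
    f_xy = 1.0 - J2*psi2 - J4*psi4
    f_z  = f_xy + J2*phi2 + J4*phi4
    return -GM_MARS/d**3 * np.array([x*f_xy, y*f_xy, z*f_z])
\end{verbatim}
In the full problem, the mutual term $-\sum_{j \neq i} G m_j (r_i - r_j)/r_{ij}^3$ is added to this acceleration for each moon.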
We note, importantly, that the two orbits could initially be ``crossed'' (i.e., $r_{\rm Pho,apo} > r_{\rm Dei,peri}$) at the hypothetical splitting, although here they were set to ``touch'' each other (i.e., $r_{\rm Pho,apo} = r_{\rm Dei,peri}$; Eq.~(\ref{eq_ini})). Such crossing initial orbits would be even more prone to collision. \cite{Bag21} reported that, at the hypothetical splitting of a single large moon into Phobos and Deimos (i.e., the initial condition of our orbital integrations), $a_{\rm Pho} \sim 5-6 R_{\rm Mars}$, $e_{\rm Pho} \sim 0.15-0.35$, and $e_{\rm Dei} \sim 0.0-0.2$. The minimum and maximum $a_{\rm Dei}$ are $\sim 6.5R_{\rm Mars}$ and $7.5R_{\rm Mars}$, respectively \citep{Bag21}. We fixed $a_{\rm Pho}=5.0$, $5.5$, and $6.0R_{\rm Mars}$ and randomly distributed $e_{\rm Pho}$ and $e_{\rm Dei}$ within the aforementioned ranges to create the initial conditions for our numerical simulations. Using these values, $a_{\rm Dei}$ is derived from Eq.~(\ref{eq_ini}). Figure \ref{fig_initial} shows the initial conditions of our numerical simulations (black points). We performed 600 simulations each for $a_{\rm Pho}=5.0$, $5.5$, and $6.0R_{\rm Mars}$. We terminated a simulation when a collision between the two moons was detected or when the simulation time exceeded $1 \times 10^4$ years. \section{Results}\label{sec_results} \subsection{General outcome after splitting} \label{sec_general} \begin{figure*}[t!] \centering \includegraphics[width=1.5\columnwidth]{Fig_Tcol_cumulative.eps} \caption{Cumulative distribution of the time taken to collide in years. Blue, green, and red lines represent cases of $a_{\rm Pho}=5.0$, $5.5$, and $6.0R_{\rm Mars}$, respectively. Two distinct timescales of collision are seen: $t_{\rm col} \sim 10^{-2} - 10^{-1}$ years and $t_{\rm col} \gtrsim 30$ years. In less than $10\%$ of our runs, the moons have still not collided after $1 \times 10^4$ years. \label{fig_time_to_collide}} \end{figure*} In short, after the hypothetical splitting of a single moon into two, presumably as Phobos and Deimos, these two moons most likely collide at around the apocenter of Phobos and the pericenter of Deimos. More than $90\%$ of our simulations result in a collision (no specific correlation exists between the outcome and the initial conditions, as the three-body problem is chaotic). Figure \ref{fig_time_to_collide} shows the cumulative distribution of the time of collision between the moons, $t_{\rm col}$, since the start of our numerical simulations for different initial semi-major axes ($a_{\rm Pho} =5.0$, $5.5$, and $6.0 R_{\rm Mars}$). Two distinct cases are observed: $t_{\rm col} \sim 10^{-2} - 10^{-1}$ years ($\sim 10-20$\% of runs) and $t_{\rm col} \gtrsim 30$ years ($\sim 70-80$\% of runs). Cases of $t_{\rm col} \sim 10^{-2} - 10^{-1}$ years can be explained by the following two timescales. First, because Phobos's apocenter and Deimos's pericenter are initially in touch (Eq.~(\ref{eq_ini})) but the moons have different semi-major axes, they can potentially collide on a timescale of their synodic period. The synodic period is given as \begin{eqnarray} T_{\rm syn} &=& \frac{2\pi a}{\frac{3}{2}\Delta a \Omega_{\rm K}} \nonumber \\ &\sim& 0.01 {\, \rm years} \left( \frac{a}{6R_{\rm Mars}} \right)^{5/2}\left( \frac{\Delta a}{R_{\rm Mars}} \right)^{-1} , \label{eq_tsyno} \end{eqnarray} where $\Delta a$ is the difference in semi-major axes and $\Omega_{\rm K}$ is the Keplerian orbital frequency.
For typical values of $a \sim 6R_{\rm Mars}$ and $\Delta a \sim 1R_{\rm Mars}$, $T_{\rm syn} \sim 0.01$ years. Second, as precession takes place, the argument of pericenter, $\omega$, and the longitude of ascending node, $\Omega$, of Phobos and Deimos change relative to each other. This leads to a misalignment of the initial pericenter-to-apocenter configuration. These precession rates, $\dot{\omega}$ and $\dot{\Omega}$, are dominated by the $J_2$ term (because $J_2 \gg J_4$) and are described as \citep{Kau66,Dan92} \begin{eqnarray} \dot{\omega} &=& \frac{3 n}{\left( 1-e^2 \right)^2} \left( \frac{R_{\rm Mars}}{a} \right)^2 \left( 1-\frac{5}{4}\sin^2(i) \right) J_2 \label{eq_pre_omega} \\ \dot{\Omega} &=& -\frac{3n \cos(i)}{2\left( 1-e^2 \right)^2} \left( \frac{R_{\rm Mars}}{a} \right)^2 J_2, \label{eq_pre_Omega} \end{eqnarray} where $n=\sqrt{GM_{\rm Mars}/a^3}$ is the orbital mean motion. For small eccentricities and inclinations, the synodic periods of the relative precession of the argument of pericenter, $T_{\rm syn,\omega}$, and of the longitude of ascending node, $T_{\rm syn,\Omega}$, between the two moons can be written as \begin{align} T_{\rm syn,\omega} &\equiv \frac{2\pi}{ \frac{d \dot{\omega}}{da} \Delta a} \sim 29.4 {\, \rm years}\left( \frac{a}{6R_{\rm Mars}} \right)^{9/2} \left( \frac{\Delta a}{R_{\rm Mars}} \right)^{-1} \label{eq_time_pre_omega}\\ T_{\rm syn,\Omega} &\equiv \frac{2\pi}{ \frac{d \dot{\Omega}}{da} \Delta a} \sim 58.8 {\, \rm years} \left( \frac{a}{6R_{\rm Mars}} \right)^{9/2} \left( \frac{\Delta a}{R_{\rm Mars}} \right)^{-1} . \label{eq_time_pre_Omega} \end{align} As $\omega$ precesses faster than $\Omega$ (Eqs.~(\ref{eq_pre_omega})-(\ref{eq_pre_Omega}) and Eqs.~(\ref{eq_time_pre_omega})-(\ref{eq_time_pre_Omega})), the relative precession of $\omega$ initially dominates the misalignment of the pericenter-to-apocenter configuration. When the change in the relative radial distance (i.e., the difference in the radial distances between Phobos and Deimos at the synodic period) through the relative precession of $\omega$ becomes larger than the sum of the moons' radii, $R_{\rm moon}$, the orbits of the two moons are no longer in touch and a collision no longer occurs. During their relative precession, the minimum distance between Phobos and Deimos at the synodic period changes from $0$ (i.e., the initial pericenter-to-apocenter alignment) to $a_{\rm Dei}\left( 1+ e_{\rm Dei} \right) - a_{\rm Pho}\left( 1+ e_{\rm Pho} \right) \sim a_{\rm Dei} - a_{\rm Pho}$ for nearly circular orbits (i.e., when they relatively precess by $\pi$ from the initial configuration). Thus, assuming a steady change, the critical time, $T_{\rm sep,ini}$, needed to radially separate the two moons from the initial pericenter-to-apocenter alignment via the relative precession of $\omega$ is given as \begin{align} T_{\rm sep,ini} \sim \frac{T_{\rm syn,\omega}}{2 \pi} \theta_{\rm cri} \sim \frac{T_{\rm syn,\omega}}{2 \pi} \frac{R_{\rm moon} \pi}{a_{\rm Dei} - a_{\rm Pho}} \sim 0.1 {\, \rm years} , \end{align} where $\theta_{\rm cri}$ is the critical angle (in radians) between the arguments of periapsis of Phobos and Deimos needed to physically separate the two moons ($R_{\rm moon} \sim \left( a_{\rm Dei} - a_{\rm Pho} \right) \frac{\theta_{\rm cri}}{\pi}$). Here, $R_{\rm moon} \sim 20$ km and $a_{\rm Dei} - a_{\rm Pho} \sim R_{\rm Mars}$ are used.
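As a sanity check, the three timescales above ($T_{\rm syn}$, $T_{\rm syn,\omega/\Omega}$, and $T_{\rm sep,ini}$) can be reproduced with a few lines of Python. This is only an order-of-magnitude check under the same assumptions as in the text ($a = 6R_{\rm Mars}$, $\Delta a = R_{\rm Mars}$, small $e$ and $i$, and $R_{\rm moon} \sim 20$ km); it is not part of our simulation code.
\begin{verbatim}
import numpy as np

GM = 6.674e-11 * 6.39e23        # GM of Mars [m^3 s^-2]
R  = 3389.5e3                   # Mars radius [m]
J2 = 1.96e-3
yr = 365.25 * 86400.0

a, da = 6.0*R, 1.0*R            # typical values used in the text
n = np.sqrt(GM / a**3)          # Keplerian mean motion

T_syn = 2*np.pi*a / (1.5*da*n) / yr            # ~0.01 yr

# dot(omega) ~ 3 n (R/a)^2 J2 for small e, i; it scales as a^(-7/2),
# so |d(dot_omega)/da| * da = 3.5 * dot(omega) * da / a
dot_w   = 3.0 * n * (R/a)**2 * J2
T_syn_w = 2*np.pi / (3.5*dot_w*da/a) / yr      # ~29 yr
T_syn_W = 2.0 * T_syn_w                        # dot(Omega) ~ dot(omega)/2, ~59 yr

R_moon = 20.0e3                                # sum of the moons' radii [m]
T_sep  = T_syn_w/(2*np.pi) * np.pi*R_moon/da   # ~0.1 yr

print(T_syn, T_syn_w, T_syn_W, T_sep)
\end{verbatim}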
Hence, $t_{\rm col} \sim 10^{-2} - 10^{-1}$ years indicates that Phobos and Deimos collide just after the start of the numerical simulations, before the Martian oblateness (mainly the $J_{2}$ term) precesses their orbits enough to radially separate them. Cases of $t_{\rm col} \gtrsim 30$ years can be explained as follows. When Phobos and Deimos avoid a collision during the first $\sim 10^{-2} - 10^{-1}$ years, the orbital precession due to the Martian oblateness (mainly the $J_{2}$ term) effectively changes the moons' relative orbital configuration so that their orbits no longer cross (after $\sim 10^{-1}$ years). Precession of $\omega$ changes the direction of the pericenter, while that of $\Omega$ changes the position where the orbits of the moons pass through the reference plane. Thus, assuming no significant change in the orbits occurs during close encounters, the orbits of the two moons do not cross again until (1) $\Omega_{\rm Pho}=\Omega_{\rm Dei}$ via the relative precession of $\Omega$ and (2) the apocenter of Phobos is pointed towards the pericenter of Deimos, i.e., $|\omega_{\rm Pho}-\omega_{\rm Dei}|=\pi$, via the relative precession of $\omega$. The synodic periods of the relative precession of $\omega$ and $\Omega$ are given in Eqs.~(\ref{eq_time_pre_omega}) and (\ref{eq_time_pre_Omega}). These two timescales indicate that Phobos and Deimos have a chance to collide every $\sim 30$ years, which is consistent with the results of our numerical simulations (i.e., $t_{\rm col} \gtrsim 30$ years). \subsection{Impact conditions} \label{sec_condition} \begin{figure*}[t!] \plotone{Fig_impact_conditions.eps} \caption{Cumulative distributions of impact velocity (left) and impact angle (right) obtained from our numerical simulations. Blue, green, and red colors indicate $a_{\rm Pho}=5.0$, $5.5$, and $6.0R_{\rm Mars}$, respectively. In the right panel, the black curve shows the cumulative distribution of $P(\theta_{\rm imp})=\sin(2\theta_{\rm imp})$, which has a peak at $\theta_{\rm imp} = 45$ deg.} \label{fig_impact_conditions} \end{figure*} As shown in Sec.~\ref{sec_general}, most of our numerical simulations (more than $90\%$ of our runs) indicate that the two moons split from the single moon envisioned by \cite{Bag21} eventually collide with each other within $\sim 10^4$ years. Here, by further analyzing the data of our numerical simulations, we derive the impact conditions at the collisions (i.e., the impact velocity, $v_{\rm imp}$, and the impact angle, $\theta_{\rm imp}$). Figure \ref{fig_impact_conditions} shows the cumulative distributions of the impact velocity (left) and the impact angle (right). Blue, green, and red colors indicate cases of $a_{\rm Pho}=5.0$, $5.5$, and $6.0R_{\rm Mars}$ ($a_{\rm Dei} > 6.5R_{\rm Mars}$; see Sec.~\ref{sec_initial}), respectively. As $a_{\rm Pho}$ becomes smaller, the impact velocity becomes larger. This is because the Keplerian velocity depends on $a^{-1/2}$ and because the relative velocity between Phobos and Deimos increases with the difference in their semi-major axes (see also Fig.~\ref{fig_initial}). For $a_{\rm Pho}=5-6R_{\rm Mars}$ and $a_{\rm Dei}=6.5-7.5R_{\rm Mars}$, $v_{\rm imp} \simeq 100-300$ m s$^{-1}$. This is reasonably understood by considering the random velocity, $v_{\rm ran} \simeq \sqrt{e^2 + i^2} v_{\rm K} \sim 100-400$ m s$^{-1}$, for typical values of the Keplerian velocity $v_{\rm K} \simeq 1450$ m s$^{-1}$ at $a=6R_{\rm Mars}$ and $e \sim 0.1-0.3$ with $i \sim 0$.
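This velocity scale is easily reproduced; the short check below assumes $a = 6R_{\rm Mars}$ and, purely for illustration, a small residual inclination of $i \simeq 0.02$ rad.
\begin{verbatim}
import numpy as np

GM, R = 6.674e-11*6.39e23, 3389.5e3
v_K = np.sqrt(GM / (6.0*R))            # Keplerian velocity, ~1450 m/s
for e in (0.1, 0.2, 0.3):
    v_ran = np.hypot(e, 0.02) * v_K    # random velocity ~ sqrt(e^2+i^2) v_K
    print(e, round(v_ran), "m/s")      # ~150-440 m/s
\end{verbatim}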
The impact angle, defined to be $\theta_{\rm imp} = 0$ deg for a head-on collision and $\theta_{\rm imp} = 90$ deg for a perfectly grazing impact, nearly follows the probability distribution $P(\theta_{\rm imp})=\sin(2\theta_{\rm imp})$ with a peak at $\theta_{\rm imp} = 45$ deg (the black line in the right panel of Figure \ref{fig_impact_conditions}). Thus, the impact direction is nearly isotropic. \begin{figure*}[t!] \plotone{Fig_impact_outcomes.eps} \caption{Outcomes of the collision between Phobos and Deimos. Masses of the largest remnant ($M_{\rm lr}$; red points), the second largest remnant ($M_{\rm slr}$; blue points), and the debris ring ($M_{\rm ring} = M_{\rm tot} - M_{\rm lr} - M_{\rm slr}$, where $M_{\rm tot}=m_{\rm Pho}+m_{\rm Dei}$; black points) as a function of impact velocity are shown. The red and blue horizontal dashed lines indicate the masses of Phobos and Deimos, respectively. The open circles are the results of $N$-body simulations \citep[][the cases of the impactor-to-target mass ratio of $\gamma=0.1$ and $\theta_{\rm imp}=45$ deg]{Lei12}. The open squares are the results obtained from our SPH impact simulations. In our impact simulations, the masses of the target and the impactor are $m_{\rm Pho}$ and $m_{\rm Dei}$, respectively. $\theta_{\rm imp}=45$ deg is used. \label{fig_impact}} \end{figure*} \section{Fate of impact between two moons} \label{sec_impact} In Section \ref{sec_results}, we showed that the two hypothetical moons, presumably Phobos and Deimos, that are split from a single ancestral moon collide during the successive orbital evolution. The collision velocity is $v_{\rm imp} \sim 100-300$ m s$^{-1}$, that is, $v_{\rm imp}\sim 10-30 v_{\rm esc}$ (the escape velocities of Phobos and Deimos are $v_{\rm esc} \sim 5-10$ m s$^{-1}$). Such a high-velocity collision may result in a disruptive outcome, while the small mass ratio between the two moons ($\gamma \simeq 0.1$) may lead to a less catastrophic outcome for the larger one (i.e., the target) compared to the case of an impact between comparable masses \citep{Lei12}. Here, we additionally performed 3D impact simulations, using the smoothed particle hydrodynamics (SPH) approach \citep{Mon92}, to examine the typical outcome of an impact. We employed the impact velocity $v_{\rm imp}=100-300$ m s$^{-1}$ and the impact angle $\theta_{\rm imp}=45$ deg. The masses of the target and the impactor were set to those of Phobos and Deimos, respectively. The numerical code is the same as that used in \cite{Hyo17c} and was originally developed in \cite{Gen12}. Regarding the equation of state (EOS), the Murchison EOS was used \citep{Nak22}. The total number of SPH particles was $N \simeq 1.1 \times 10^{5}$. Figure \ref{fig_impact} shows the results of our SPH impact simulations (open squares). The masses of the largest remnant ($M_{\rm lr}$; red points), the second largest remnant ($M_{\rm slr}$; blue points), and the debris ring ($M_{\rm ring} = M_{\rm tot} - M_{\rm lr} - M_{\rm slr}$; black points) are shown. We additionally included the results of independent simulations with $\gamma=0.1$, $v_{\rm imp}=100-300$ m s$^{-1}$, and $\theta_{\rm imp}=45$ deg from \cite{Lei12}, who performed $N$-body impact simulations of rubble-pile bodies (open circles)\footnote{We note that the exact total mass in \cite{Lei12} is $\sim 40$\% of the total mass of Phobos and Deimos.
However, because their impact conditions (i.e., $\gamma=0.10$, $v_{\rm imp}=100-300$ m s$^{-1}$, and $\theta_{\rm imp}=45$ deg) are very similar to ours (i.e., $\gamma \simeq 0.14$, $v_{\rm imp}=100-300$ m s$^{-1}$, and $\theta_{\rm imp}=45$ deg), we used their numerical results with the assumption of $M_{\rm tot}=m_{\rm Pho}+m_{\rm Dei}$ in Fig.~\ref{fig_impact} (i.e., the mass fractions of the largest and the second largest remnants relative to the total mass).}. Here, the $N$-body approach (open circles) may be more appropriate than the SPH approach (open squares) because Phobos and Deimos are considered to be rubble-pile objects and a prominent impact shock with a phase change would not be produced for the $v_{\rm imp}=100-300$ m s$^{-1}$ considered here. Both sets of simulations $-$ our SPH simulations and the $N$-body simulations of \cite{Lei12} $-$ show that the mass of the largest remnant (red points) decreases with increasing impact velocity, while the mass of the second-largest remnant (blue points) increases with increasing impact velocity. The remaining mass, defined as the mass of the debris ring (black points), also increases with increasing impact velocity, indicating that more impact debris is produced at higher impact velocities. These results indicate that (1) the impacts, in general, significantly reduce the masses of the moons (i.e., indicated by the points below the dashed lines, where the dashed lines represent their original masses), (2) $v_{\rm imp}=100$ m s$^{-1}$ leads to a catastrophic disruption of Deimos (the mass is reduced by more than one order of magnitude; see the blue points and the blue dashed line), and (3) $v_{\rm imp}=300$ m s$^{-1}$ significantly reduces the mass of Phobos (by nearly one order of magnitude) in addition to that of Deimos, indicating that most of the mass is distributed as a debris ring (black points). Typical impacts of $v_{\rm imp}=100-300$ m s$^{-1}$ with $\theta_{\rm imp}=45$ deg, therefore, are not in agreement with the view of \cite{Bag21} $-$ that two moons comparable to Phobos and Deimos, split from a single ancestral moon, would tidally evolve to the orbital configurations of Phobos and Deimos we see today $-$ and imply that the evolution after the hypothetical splitting is not as simple as it was envisioned. Subsequent gravitational and collisional interactions between the partially (and/or catastrophically) disrupted moons and the particles in the debris ring, although they are beyond the scope of this study, need to be carefully considered. Changing the impact angle, $\theta_{\rm imp}$, changes the degree of disruption. However, either Phobos (the target here) or Deimos (the impactor here) would be significantly disrupted, forming a debris ring, for $v_{\rm imp}=100-300$ m s$^{-1}$. This is because here $v_{\rm imp} \gtrsim 10v_{\rm esc}$ \citep[e.g., see the dependence on the impact angle in][]{Lei12}. For example, if the impact is grazing, it could significantly disrupt the impactor (the smaller one), while the target (the larger one) could be less disrupted compared to the case of a $45$-deg impact. Thus, changing the impact angle would not change the above conclusion $-$ neither Phobos nor Deimos can remain intact after the high-velocity impact. Assuming the progenitor is a rubble-pile object, the particle size distribution of the impact debris may not significantly change from that of the original constituent particles, although this is not directly extracted from the impact simulations.
This is because impacts with $v_{\rm imp} \simeq 100-300$ m s$^{-1}$ would not cause noticeable melting and vaporization of the impacted materials. Particles may be damaged and fragmentation may occur only around the impact point. Lastly, we note that our SPH simulations and the $N$-body simulations of \cite{Lei12} neglect, e.g., material strength and friction. Including these additional effects may quantitatively change the masses of the impact remnants, especially for bodies as small as a few kilometers or less \citep[e.g.,][]{Ben99,Jut10}. However, it is expected that the disruptive outcomes (here $v_{\rm imp} \gtrsim 10v_{\rm esc}$) and the dependence on the impact velocity $-$ i.e., a higher impact velocity results in a more disruptive outcome \citep[e.g.,][]{Lei12} $-$ do not qualitatively change, validating our conclusion above. \begin{figure*}[t!] \plotone{Fig_orbits_of_debris.eps} \caption{Orbital elements, $a$ and $e$, of the debris particles obtained by using the data of the SPH simulations (top panels) and the corresponding surface densities using the equivalent circular orbital radius, $a_{\rm eq}$ (bottom panels). From left to right, cases of $v_{\rm imp}=100$, $200$, and $300$ m s$^{-1}$ are shown. \label{fig_debris}} \end{figure*} \section{Discussion}\label{sec_discussion} \subsection{The successive evolution of the remnant fragments and the debris ring} \label{sec_successive} As demonstrated in Sec.~\ref{sec_impact}, a disruptive impact occurs between the two moons that are split from a single moon, forming a few large fragments and a debris ring. Using the data obtained from the 3D SPH simulations in free space (i.e., positions and velocities of the debris particles), we constructed the orbits of the debris particles around Mars \citep[a similar approach was used for the Moon-forming giant impact in][]{Jac12}. The top panels of Figure \ref{fig_debris} show the orbits of the debris particles around Mars ($a$ and $e$) for cases of $v_{\rm imp}=100$, $200$, and $300$ m s$^{-1}$. To produce the figure, we assumed that the center of mass of the two colliding moons orbits around Mars at the Martian synchronous radius ($a_{\rm sync} = 6R_{\rm Mars}$) with eccentricity $e=0$. We assumed that the impact happens in the Martian equatorial plane (i.e., $z=0$ and thus particles have $i \sim 0$), following the assumption of \cite{Bag21} that the putative Phobos and Deimos formed near the Martian equatorial plane. For the statistical arguments, the debris particles were isotropically distributed in the impact plane ($xy$-plane) to take into account the isotropic nature of the impact direction in the $xy$-plane \citep{Jac12,Hyo18b}. Figure \ref{fig_debris} indicates that most of the debris is concentrated around the synchronous orbit, suggesting that the debris indeed forms a ring-like structure. Such debris particles would experience successive dynamical evolution through collisions and gravitational interactions among particles. During the inelastic collisional evolution, the eccentricities are damped, while the angular momentum of the particles is conserved. The equivalent circular orbital radius, $a_{\rm eq}$, defined as the radius of the circular orbit that conserves the angular momentum of a Keplerian orbit with an initial non-zero eccentricity, is given as \begin{equation} a_{\rm eq} = a_{\rm ini} \left( 1 - e_{\rm ini}^2 \right) , \label{eq_aeq} \end{equation} where $a_{\rm ini}$ and $e_{\rm ini}$ are the initial semi-major axis and eccentricity, respectively.
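A schematic of this post-processing step is sketched below; the function names are ours and only illustrate the procedure (a Mars-centric two-body approximation for each debris particle), not the actual analysis pipeline.
\begin{verbatim}
import numpy as np

GM = 6.674e-11 * 6.39e23    # GM of Mars [m^3 s^-2]

def orbital_elements(r, v):
    """Semi-major axis and eccentricity of a debris particle from its
    Mars-centric position r [m] and velocity v [m/s] (two-body problem)."""
    r, v = np.asarray(r, float), np.asarray(v, float)
    rn = np.linalg.norm(r)
    a  = 1.0 / (2.0/rn - np.dot(v, v)/GM)            # vis-viva relation
    h  = np.cross(r, v)                              # specific angular momentum
    e  = np.sqrt(max(0.0, 1.0 - np.dot(h, h)/(GM*a)))
    return a, e

def a_equivalent(a, e):
    """Equivalent circular radius conserving angular momentum (equation above)."""
    return a * (1.0 - e**2)
\end{verbatim}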
Now, using $a_{\rm eq}$, we may estimate the surface density of the debris when the eccentricities are damped to zero. The bottom panels of Fig.~\ref{fig_debris} show the surface densities obtained using the data from the SPH simulations. Most of the mass is concentrated within $\sim 5-7R_{\rm Mars}$. The peaks seen at around $\sim 6R_{\rm Mars}$ correspond to the largest remnant, and their height depends on our chosen bin size\footnote{Here, we used $100$ equally spaced bins between $1-10R_{\rm Mars}$. Thus, the peak of the surface density becomes, for example, $\sim M_{\rm lr}/(2 \pi a_{\rm syn} \Delta a) \sim 100$ kg m$^{-2}$, where we used $M_{\rm lr}=5 \times 10^{15}$ kg, $a_{\rm syn}=6R_{\rm Mars}$, and $\Delta a = 9R_{\rm Mars}/100$.}. A small number of particles are further distributed over a wide range in the radial direction ($\sim 3-10R_{\rm Mars}$). The argument presented here, using $a_{\rm eq}$, represents an extreme case in which the collisional damping is most efficient. In reality, accretion would also take place while collisional damping occurs. To follow this, a full $N$-body simulation is required to understand the fate of the debris ring, which is beyond the scope of this paper. The key message from Fig.~\ref{fig_debris} is that the debris ring would be distributed with a radial width of $\gtrsim 1R_{\rm Mars}$. The total mass of the debris is only the sum of the masses of Phobos and Deimos, indicating that the Hill radius of the total debris mass around Mars ($\sim 38$ km at $a_{\rm syn}=6R_{\rm Mars}$, assuming it is a single object) is at least about two orders of magnitude smaller than the radial width of the debris ring ($\Delta a_{\rm ring} > R_{\rm Mars} \simeq 3390$ km). From this simple consideration, it is expected that more than three moons would accrete from the debris ring because the radial separation of bodies reaching the isolation mass is $\sim 5-10$ times the Hill radius \citep[e.g.,][]{Kok95}. This separation is still an order of magnitude smaller than the ring width. Furthermore, moons accreted in a ring tend to have small eccentricities, and the tidal evolution is not efficient, especially outside the Martian synchronous orbit, likely leaving the system of multiple moons in the same configuration as it was formed over billions of years. Such an outcome differs from the Martian moon system we see today, where only Deimos exists beyond the Martian synchronous orbit. This is the reason why the formation of a large ancient inner moon, accreted from an inner debris disk $-$ produced within the Martian Roche limit presumably by a giant impact \citep{Hyo17a,Hyo17b} $-$ was proposed for the formation of Phobos and Deimos: the mean motion resonances of a single large inner moon swept up an outer debris disk concentrated around the Martian synchronous radius, forming only two moons $-$ Phobos and Deimos $-$ at specific radial locations \citep{Ros16}. Alternatively, \cite{Can18} considered a less massive extended disk formed by a smaller impactor compared to that in \cite{Ros16}. This disk spawned multiple transient small inner moons (still massive compared to Phobos and Deimos) that rapidly tidally decayed and did not perturb Phobos and Deimos, which naturally accreted from the outer regions of the disk. Therefore, it seems challenging for only Phobos and Deimos to accrete from a debris ring without any external influence (e.g., resonances and/or tides). Instead, multiple small moons would form, i.e., a Martian moon system completely different from the one we observe today.
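The Hill-radius comparison quoted above follows directly from the parameters in Table~\ref{table}; the following lines are a simple check, assuming the combined mass of Phobos and Deimos placed at the synchronous radius.
\begin{verbatim}
M_MARS = 6.39e23                        # [kg]
R_MARS = 3389.5e3                       # [m]
M_tot  = 1.06e16 + 1.48e15              # Phobos + Deimos [kg]
a_syn  = 6.0 * R_MARS

r_Hill = a_syn * (M_tot / (3.0*M_MARS))**(1.0/3.0)
print(r_Hill / 1e3)                     # ~38 km
print(10.0 * r_Hill / R_MARS)           # 10 Hill radii ~ 0.1 R_Mars << ring width
\end{verbatim}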
\subsection{Tidal evolution of the moons} \label{sec_tides} In this study, we ignored the tidal evolution of the moons, which changes their semi-major axes, eccentricities, and inclinations. The tidal evolution of the inclinations over billions of years is not prominent, while the changes in the pericenter and apocenter distances (functions of the semi-major axis and eccentricity) are not negligible \citep[][their panel (a) in Figure 1]{Bag21}. A crude estimate can then be made for the rates of change of the apocenter distance of Phobos and the pericenter distance of Deimos ($\dot{a}_{\rm apo,Pho}$ and $\dot{a}_{\rm per,Dei}$, respectively) as $\dot{a}_{\rm apo,Pho} \sim 4R_{\rm Mars}/10^{9} \sim 1.4 \times 10^{-2}$ m year$^{-1}$ and $\dot{a}_{\rm per,Dei} \sim 1R_{\rm Mars}/10^{9} \sim 3.4 \times 10^{-3}$ m year$^{-1}$ \citep[see][]{Bag21}. When the tidal evolution is significant enough that the radial difference between the apocenter distance of Phobos and the pericenter distance of Deimos becomes comparable to the size of the larger of the two moons (in this case, $r_{\rm Pho} \simeq 11.3$ km for Phobos), the orbits of the two moons no longer cross. This occurs on a timescale longer than $\sim r_{\rm Pho}/(\dot{a}_{\rm apo,Pho} + \dot{a}_{\rm per,Dei}) \sim 6.5 \times 10^{5}$ years. Therefore, in this study, we neglected the effects of tides in our orbital integrations of $<10^{4}$ years. \subsection{Other challenges in the \cite{Bag21} scenario} In this study, we showed that two moons split from a hypothetical progenitor quickly re-collide and are disrupted into much smaller moons (Sec.~\ref{sec_results} and Sec.~\ref{sec_impact}; see also Fig.~\ref{fig_impact}). One may wonder whether the progenitor could have been a larger object, so that the two moons were also larger than \cite{Bag21} considered. Correspondingly, the two largest impact fragments from the disruptive collision (see Fig.~\ref{fig_impact}) could become Phobos and Deimos, although the complex interplay between the large fragments and the small debris needs to be carefully studied (see Sec.~\ref{sec_successive}). However, if this is the case, it already completely changes the picture that \cite{Bag21} envisioned. More importantly, the physical process of the putative splitting of the progenitor envisioned in \cite{Bag21} seems unlikely in the first place. For only two large fragments to be formed (here as Phobos and Deimos), their putative initial ejection velocities (at the time of splitting, i.e., just after the impact) should be comparable to their mutual escape velocity \citep[e.g.,][]{Ben99}. A higher ejection velocity indicates that the impact was more energetic and a larger number of smaller fragments were formed, and vice versa. \cite{Bag21}, however, envisioned that only two impact fragments existed (as Phobos and Deimos), while at the same time their putative ejection velocities (a few hundred meters per second; see Eq.~(\ref{eq_ini}) and Fig.~\ref{fig_initial}) were much larger than their mutual escape velocity (about ten meters per second). Thus, from the above consideration, this situation seems physically unlikely. Furthermore, \cite{Bag21} envisioned that the two moons orbit near the Martian equatorial plane. This implicitly assumes that the putative impact and the splitting occurred near the Martian equatorial plane. However, the direction from which an impactor approaches the progenitor should be isotropic.
From this statistical consideration, the probability that the orbit of the colliding object lies close to the equatorial plane is low. Although each of the above processes may need to be studied in detail, a number of challenges to the \cite{Bag21} scenario already exist. Together with our results $-$ that the two putative split moons (as Phobos and Deimos), initially on equatorial, eccentric, and crossing orbits, would likely quickly collide $-$ we conclude that the \cite{Bag21} scenario is unlikely.\\ \section{Summary}\label{sec_summary} \cite{Bag21} envisioned that Phobos and Deimos directly originate from the splitting of a single ancestral moon at around the Martian synchronous orbit ($\sim 6R_{\rm Mars}$) a few billion years ago. At the time of splitting, Phobos and Deimos were envisioned to have moderate eccentricities and to orbit near the Martian equatorial plane. Their semi-major axes were assumed to be located inside and outside the synchronous orbit, respectively, followed by a tidal evolution that led to the orbital configuration we see today. By performing orbital integrations of Phobos and Deimos that are hypothetically formed by the splitting, we found that the two moons likely collide with each other within the following $10^{4}$ years, and that the collision results in a disruptive outcome, forming a debris ring at around the Martian synchronous radius. This process occurs much faster than tidal forces can evolve the moons' orbits away from intersection. The width of the debris ring is $\gtrsim R_{\rm Mars}$, and thus multiple small moons are likely to accrete. This evolutionary path differs from that envisioned by \cite{Bag21} and would form a moon system different from the one we observe today. Therefore, we conclude that Phobos and Deimos are unlikely to have split directly from a single ancestral moon. In 2024, the Martian Moons eXploration (MMX) mission, developed by the Japan Aerospace Exploration Agency (JAXA), is expected to be launched. The MMX mission plans to collect a sample of $>10$ g from Phobos's surface and return it to Earth in 2029 with the aims of elucidating the origin of the Martian moons \citep{Fuj19,Usu20}, collecting geochemical information about the evolution of the Martian surface environment \citep{Hyo19}, and searching for traces of Martian life \citep{Hyo21}. Therefore, theoretical studies, including ours, will finally be tested by the MMX mission.\\ \noindent R.H. acknowledges the financial support of MEXT/JSPS KAKENHI (Grant Number JP22K14091). R.H. also acknowledges JAXA's International Top Young program. H.G. acknowledges the financial support of MEXT/JSPS KAKENHI (Grant Numbers 21H04514, 20KK0080).
\section{Introduction} Let $F\subset {\overline{\mathbb Q}}$ be a totally real number field, let $D$ be a quaternion algebra over $F$ and let $G/F$ be the algebraic group $D^*/F^*$. Let $d$ be the number of archimedean places of $F$ where $D$ is unramified and let ${\mathfrak p}$ be a fixed nonarchimedean place of $F$ that is unramified in $D$ and lies above a prime number $p$. Let $\pi =\bigotimes_v' \pi_v$ be a cuspidal automorphic representation of $G({\mathbb A})$ (where ${\mathbb A}$ denotes the adele ring of $F$) of parallel weight 2 \footnote{By that we mean that the Jacquet-Langlands transfer of $\pi$ to $\PGL_2({\mathbb A})$ corresponds to a cuspidal Hilbert modular eigenform of parallel weight 2.} such that the local component $\pi_{{\mathfrak p}}$ is the Steinberg representation of $G(F_{{\mathfrak p}}) \cong \PGL_2(F_{{\mathfrak p}})$. Moreover we fix a finite extension $E/{\mathbb Q}_p$ with uniformizer $\varpi$ so that $\pi^{\infty} =\bigotimes_{v\nmid \infty}' \pi_v$ is defined over $E$. Generalizing a construction of Darmon \cite{darmon}, the author (in the case $D=M_2(F)$) and Gehrmann (for arbitrary $D$) introduced certain $p$-adic numbers called {\it automorphic ${\mathscr L}$-invariants ${\mathscr L}_{{\epsilon}}(\pi, \psi)$} of $\pi$ (see \cite{spiess1}, \cite{gehrmann1}). They are defined in terms of the group cohomology of $G(F)$. Here ${\epsilon}$ is a character of the group of connected components $\pi_0(G(F\otimes {\mathbb R}))\cong ({\mathbb Z}/2{\mathbb Z})^d$ and $\psi$ is a continuous homomorphism from the multiplicative group $F_{{\mathfrak p}}^*$ of the local field to the additive group of $E$. On the other hand let $\rho_{\pi}: \Gal({\overline{\mathbb Q}}/F)\to \GL_E(V_{\pi})$ be the Galois representation associated to $\pi$. The fact that $\pi_{{\mathfrak p}}$ is the Steinberg representation implies that $\rho_{\pi}$ is a direct summand of the Tate module of an abelian variety defined over $F$ with split multiplicative reduction at ${\mathfrak p}$. Hence one can associate an {\it arithmetic} (i.e.\ Mazur-Tate-Teitelbaum type) ${\mathscr L}$-invariant ${\mathscr L}(V_{\pi}, \psi)$ to $V_{\pi}$ and $\psi$. Our main result (see Theorem \ref{theo:linvmttautom}) is the equality between automorphic and arithmetic ${\mathscr L}$-invariants \begin{equation} \label{lautarth} {\mathscr L}_{{\epsilon}}(\pi, \psi) \, =\, {\mathscr L}(V_{\pi}, \psi). \end{equation} In particular ${\mathscr L}_{{\epsilon}}(\pi, \psi)$ is independent of the sign character ${\epsilon}$. For the history of this problem we refer to Remark \ref{remark:mainthm}. Our proof consists of three steps. If $d=0$ (i.e.\ if $D$ is totally definite) the ${\mathscr L}$-invariant ${\mathscr L}_{{\epsilon}}(\pi, \psi)$ is essentially Teitelbaum's ${\mathscr L}$-invariant \cite{teitelbaum} and the equality \eqref{lautarth} can be deduced from the $p$-adic uniformization of Shimura curves due to \u{C}erednik (for $F={\mathbb Q}$) and Boutot-Zink (for arbitrary $F$). If $d=1$ our approach is novel and uses as an essential tool the cohomology theory of {\it ${\mathscr S}$-varieties} introduced in \cite{spiess2}. It allows us to construct directly\footnote{i.e.\ without deforming $V_{\pi}$ into a $p$-adic family first} infinitesimal deformations of the Galois representation $V_{\pi}$ involving the automorphic ${\mathscr L}$-invariant.
More precisely, since both sides of \eqref{lautarth} are linear in $\psi$ and since both sides are $=1$ if $\psi= v_{{\mathfrak p}}: F_{{\mathfrak p}}^* \to {\mathbb Z}\subseteq E$ is the normalized valuation at ${\mathfrak p}$, it suffices to see that the vanishing of the left hand side of \eqref{lautarth} implies the vanishing of the right hand side. We will see that the vanishing of the left hand side of \eqref{lautarth} is equivalent to the fact that \[ \widetilde{V}_{\pi} \, :=\, {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}^D; \widetilde{E}(\Theta_{\psi}), E)_{\pi} \] is a non-trivial infinitesimal Galois deformation of $V_{\pi}$. We recall that, given an admissible $E$-Banach space representation $W$ of the standard maximal torus $T_{{\mathfrak p}} \cong F_{{\mathfrak p}}^*$ of $\PGL_2(F_{{\mathfrak p}})$, we have constructed in \cite{spiess2} cohomology groups \[ {\mathbb H}_{\varpi-\ad}^n(X^D_{{\overline{\mathbb Q}}}; W, E), \qquad n\ge 0, \] associated to a certain ${\mathscr S}$-curve $X=X^D$ (i.e.\ a ${\mathfrak p}^{\infty}$-tower of Shimura resp.\ modular curves equipped with a $\PGL_2(F_{{\mathfrak p}})$-action). These are $E$-vector spaces equipped with a continuous $\Gal({\overline{\mathbb Q}}/F)$-action as well as a spherical Hecke action. So we can consider in particular the $E$-Banach representation $W= \widetilde{E}(\Theta_{\psi})$ where $\widetilde{E}=E[{\varepsilon}]$ denotes the ring of dual numbers and $\Theta_{\psi}: F_{{\mathfrak p}}^*\to \widetilde{E}^*$ the character $\Theta_{\psi}(x) = 1+ \psi(x) {\varepsilon}$. In (\cite{spiess2}, \S 5.6 and 5.7), we have shown that $\widetilde{V}_{\pi}$ admits a two-step filtration and that the action of $\Gal({\overline{\mathbb Q}}_p/F_{{\mathfrak p}})$ on the associated graded modules factors through the maximal abelian quotient $\Gal({\overline{\mathbb Q}}_p/F_{{\mathfrak p}})^{\ab}$ and can be described explicitly. This enables us to follow the argument of Greenberg and Stevens (\cite{grstevens}, proof of Thm.\ 3.14) to deduce ${\mathscr L}(V_{\pi}, \psi)=0$. In the case $d>1$ we reduce the proof of \eqref{lautarth} to the case $d\le 1$ by proving a certain Jacquet-Langlands functoriality for ${\mathscr L}_{{\epsilon}}(\pi, \psi)$. More precisely, if ${\overline{D}}$ denotes the quaternion algebra over $F$ which is ramified at the same nonarchimedean places as $D$ and at all infinite places except possibly one, and if $\pi'$ denotes the Jacquet-Langlands transfer of $\pi$ to ${\overline{G}}({\mathbb A})$ (with ${\overline{G}} = {\overline{D}}^*/F^*$), then we will show (see Theorem \ref{theo:linvjl}) \begin{equation} \label{ljaclang} {\mathscr L}_{{\epsilon}}(\pi, \psi) = {\mathscr L}(\pi', \psi). \end{equation} Let $\bar{d}\in \{0,1\}$ denote the number of archimedean places of $F$ where ${\overline{D}}$ is unramified. As a key tool in proving \eqref{ljaclang} we study the cohomology groups \[ {\mathbb H}_{\varpi-\ad}^d((X^D)^{\an}; \fOrd_{\varpi-\ad}^{\bar{d}}((X^{{\overline{D}}})^{\an}, {\mathcal O})_E, E). \] The cohomology group $\fOrd_{\varpi-\ad}^{\bar{d}}((X^{{\overline{D}}})^{\an}, {\mathcal O})_E$ has been introduced in (\cite{spiess2}, \S 4.1). It is an admissible $E$-Banach space representation of $T_{{\mathfrak p}}$. \section{Admissible Banach space representations and extensions of the Steinberg representation} \label{section:borelind} \paragraph{Notation and preliminary remarks} Let $R$ be a commutative noetherian ring and let ${\mathcal G}$ be a locally profinite group.
Recall that a left $R[{\mathcal G}]$-module $W$ is called discrete (or smooth) if $W$ is discrete as a ${\mathcal G}$-module, i.e.\ if the stabilizer $\Stab_{{\mathcal G}}(w)=\{g\in {\mathcal G}\mid gw=w\}$ is open in ${\mathcal G}$ for every $w\in W$. A discrete $R[{\mathcal G}]$-module $W$ is called admissible if $W^U$ is a finitely generated $R$-module for every open subgroup $U$ of ${\mathcal G}$. Throughout this section $F$ denotes a $p$-adic field, i.e.\ a finite extension of ${\mathbb Q}_p$. We let $v_F: F\to {\mathbb Z}\cup\{+\infty\}$ denote the normalized valuation of $F$, ${\mathcal O}_F$ its valuation ring, ${\mathfrak p}$ its valuation ideal and $U_F = {\mathcal O}_F^*$ the group of units. Let $G=\PGL_2(F) = \GL_2(F)/Z$ and let $\pr: \GL_2(F)\to G$ be the projection. Let $B$ be the standard Borel subgroup of $G$ and let $B = T N$ be the Levi decomposition of $B$, so $T$ consists of the diagonal matrices (modulo the center $Z$ of $\GL_2(F)$) and $N$ consists of the upper triangular matrices with $1$ as diagonal entries (mod $Z$). In the following we are often going to identify $T$ with the one-dimensional split torus $F^*$ via the isomorphism \begin{equation} \label{splittorus} \delta: F^*\longrightarrow T, \,\, x\mapsto \delta(x)=\left(\begin{matrix} x & \\ & 1\end{matrix}\right) \mod Z \end{equation} and $N$ with the additive group $F$ via the isomorphism \begin{equation} \label{unipotent} n: F\longrightarrow N, \,\, y\mapsto n(y)=\left(\begin{matrix} 1 & y\\ 0 & 1\end{matrix}\right) \mod Z. \end{equation} So any $b\in B$ can be written uniquely in the form $b=\delta(x)\cdot n(y)$ with $x\in F^*$, $y\in F$. We denote by $T_0$ the maximal compact open subgroup of $T$, so $T_0$ corresponds to $U_F$ under \eqref{splittorus}. We let $N_0$ be the subgroup of $N$ that corresponds to ${\mathcal O}_F$ under the isomorphism \eqref{unipotent}, i.e.\ $N_0=\{ n(y)\mid y\in {\mathcal O}_F\}$, and put $T^+= \{t\in T\mid N_0^t= t N_0 t^{-1} \subseteq N_0\}$. For an integer $n\ge 0$ we let $K({\mathfrak p}^n) = \Ker(\GL_2({\mathcal O}_F) \to \GL_2({\mathcal O}_F/{\mathfrak p}^n))$ and define $K(n)$ to be the image of $K({\mathfrak p}^n)$ under the projection $\pr:\GL_2(F)\to G$. For a closed subgroup $H$ of $T_0$ we put \begin{equation} \label{congruence3} K_H(n) \, =\, K(n) H N_0. \end{equation} We fix another finite field extension $E/{\mathbb Q}_p$ with valuation ring ${\mathcal O}$, uniformizer $\varpi\in {\mathcal O}$ and residue field $k={\mathcal O}/(\varpi)$. For $m\ge 1$ we put ${\mathcal O}_m ={\mathcal O}/(\varpi^m)$. More generally, for an ${\mathcal O}$-module $N$ we denote its torsion submodule by $N_{\tor}$, its maximal torsionfree quotient by $N_{\fl}$, and we put $N_E= N\otimes_{{\mathcal O}} E$. We also set $N_m= N\otimes_{{\mathcal O}}{\mathcal O}_m$ and $N[\varpi^m] = \Hom_{{\mathcal O}}({\mathcal O}_{m}, N)$ for $m\ge 1$. Multiplication by $\varpi$ and $\varpi^m$ induces an exact sequence \begin{equation} \label{kercoker} 0 \longrightarrow N[\varpi]\longrightarrow N[\varpi^{m+1}]\longrightarrow N[\varpi^m]\longrightarrow N_1\longrightarrow N_{m+1}\longrightarrow N_m \longrightarrow 0 \end{equation} for every $m\ge 1$. In this section we will consider certain representations of $G$ and $T$ on ${\mathcal O}$-modules or $E$-vector spaces.
We briefly review the notion of $\varpi$-adically admissible representations and admissible Banach space representations of $T$ and the relation to finitely generated augmented $T$-modules (for further details we refer to \cite{spiess2}, \S 3.4). Then we are going to recall the construction of certain canonical extensions of the Steinberg representation. \paragraph{$\varpi$-adically admissible representations of $T$} We fix a closed subgroup $H$ of $T_0$ and put ${\overline{T}} = T/H$ and ${\overline{T}}_0 = T_0/H$. We briefly review the notion of $\varpi$-adically continuous and $\varpi$-adically admissible ${\mathcal O}[{\overline{T}}]$-modules (\cite{emerton}, \S 2.4). An ${\mathcal O}[{\overline{T}}]$-module $W$ is called $\varpi$-adically continuous if (i) $W$ is $\varpi$-adically complete and separated, (ii) $W_{\tor}$ is of bounded exponent (i.e.\ $\varpi^m W_{\tor}=0$ for $m\ge 1$ sufficiently large) and (iii) $W_m$ is a discrete ${\mathcal O}_m[{\overline{T}}]$-module for every $m\ge 1$. A $\varpi$-adically admissible ${\mathcal O}[{\overline{T}}]$-module $W$ is a $\varpi$-adically continuous ${\mathcal O}[{\overline{T}}]$-module $W$ such that $W_1$ is an admissible $k[{\overline{T}}]$-module. The exactness of the sequence \eqref{kercoker} implies that $W_m$ is an admissible ${\mathcal O}_m[{\overline{T}}]$-module for every $m\ge 1$. The full subcategory of the category of ${\mathcal O}[{\overline{T}}]$-modules consisting of $\varpi$-adically admissible ${\mathcal O}[{\overline{T}}]$-modules will be denoted by $\Mod_{{\mathcal O}}^{\varpi-\adm}({\overline{T}})$. It is an abelian category. We also recall the notion of an augmented ${\mathcal O}[{\overline{T}}]$-module (see \cite{emerton}, \S 2). For an open subgroup $U$ of ${\overline{T}}$ we consider the ${\mathcal O}$-algebra \begin{equation*} \label{iwasawa1} \Lambda_{{\mathcal O}}(U) \, =\, \prolim_{U'} {\mathcal O}[U/U'] \end{equation*} where $U'$ runs over all open compact subgroups of $U$. If $U$ itself is compact then ${\Lambda}_{\mathcal O}(U)= {\mathcal O}[\![U]\!]$ is the usual completed ${\mathcal O}$-group algebra of $U$. Note that we have $\Lambda_{{\mathcal O}}({\overline{T}}) ={\mathcal O}[\![{\overline{T}}_0]\!][t_0^{\pm 1}]$ where $t_0\in {\overline{T}}$ denotes the image of an element of $T$ that corresponds to a uniformizer of $F$ under \eqref{splittorus}. In particular the ring $\Lambda_{{\mathcal O}}({\overline{T}})$ is noetherian. A $\Lambda_{{\mathcal O}}({\overline{T}})$-module $L$ is called an {\it augmented ${\mathcal O}[{\overline{T}}]$-module}. The category of augmented ${\mathcal O}[{\overline{T}}]$-modules will be denoted by $\Mod_{{\mathcal O}}^{\aug}({\overline{T}})$. An augmented ${\mathcal O}[{\overline{T}}]$-module $L$ is called finitely generated if $L$ is finitely generated as a $\Lambda_{{\mathcal O}}(U)$-module (with respect to the canonical embedding $\Lambda_{{\mathcal O}}(U)\hookrightarrow \Lambda_{{\mathcal O}}({\overline{T}})$) for some (equivalently, any) compact open subgroup $U$ of ${\overline{T}}$. The full subcategory of $\Mod_{{\mathcal O}}^{\aug}({\overline{T}})$ of finitely generated augmented ${{\mathcal O}}[{\overline{T}}]$-modules will be denoted by $\Mod_{{\mathcal O}}^{\fgaug}({\overline{T}})$. There exists a natural (profinite) topology on every object $L$ of $\Mod_{{\mathcal O}}^{\fgaug}({\overline{T}})$ (see \cite{emerton}, Prop.\ 2.1.3) such that the action $\Lambda_{{\mathcal O}}({\overline{T}}) \times L \to L$ is continuous. This topology is called the {\it canonical topology}.
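For orientation we spell out the formula $\Lambda_{{\mathcal O}}({\overline{T}}) ={\mathcal O}[\![{\overline{T}}_0]\!][t_0^{\pm 1}]$ in the two extreme cases (neither is needed in the sequel). For $H=1$ we have ${\overline{T}}=T$ and, using \eqref{splittorus},
\[
\Lambda_{{\mathcal O}}(T)\,\cong\,{\mathcal O}[\![U_F]\!][t_0^{\pm 1}],
\]
while for $H=T_0$ the group ${\overline{T}}=T/T_0\cong F^*/U_F\cong {\mathbb Z}$ is discrete, its only compact open subgroup is the trivial one, and
\[
\Lambda_{{\mathcal O}}({\overline{T}})\,=\,{\mathcal O}[{\overline{T}}]\,\cong\,{\mathcal O}[t_0^{\pm 1}]
\]
is simply the ring of Laurent polynomials over ${\mathcal O}$.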
\begin{remark} \label{remark:padicadm} \rm Let $\psi: F^*\to {\mathcal O}$ be a continuous homomorphism (i.e.\ we have $\psi(xy) = \psi(x) + \psi(y)$ for all $x,y\in F^*$) and let ${\widetilde{\mathcal O}} = {\mathcal O}[{\varepsilon}] = {\mathcal O}[X]/(X^2)$, ${\varepsilon} := X + (X^2)$ be the ${\mathcal O}$-algebra of dual numbers. The character \begin{equation} \label{extstein1} \Theta_{\psi}: F^*\longrightarrow {\widetilde{\mathcal O}}^*,\,\, \Theta_{\psi}(x)= 1+ \psi(x){\varepsilon} \end{equation} is again continuous. It induces an $F^*$-action, hence via \eqref{splittorus} also a $T$-action, on ${\widetilde{\mathcal O}}$. We denote the resulting ${\mathcal O}[T]$-module by ${\widetilde{\mathcal O}}(\Theta_{\psi})$. It is clearly $\varpi$-adically admissible. Note that ${\widetilde{\mathcal O}}(\Theta_{\psi})$ is an admissible ${\mathcal O}[T]$-module if and only if $\Ker(\psi)$ is open in $F^*$ (this holds if and only if $\psi$ is a multiple $c\cdot v_F$, $c\in {\mathcal O}$, of the normalized valuation $v_F$ of $F$). \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi \end{remark} \paragraph{Banach space representations} Recall that an $E$-Banach space representation of ${\overline{T}}$ is an $E$-Banach space $V=(V, \| \cdot \|)$ together with a continuous $E$-linear action ${\overline{T}}\times V \to V, (t,v)\mapsto t\cdot v$. An $E$-Banach space representation $V$ of ${\overline{T}}$ is called admissible if there exists an open and bounded ${\mathcal O}[{\overline{T}}]$-submodule $W\subseteq V$ such that the $U$-invariants $(V/W)^U$ of the quotient $V/W$ form an ${\mathcal O}$-module of cofinite type for every open subgroup $U$ of ${\overline{T}}$ (i.e.\ the Pontrjagin dual $\Hom((V/W)^U, E/{\mathcal O})$ is a finitely generated ${\mathcal O}$-module).\footnote{Note that this condition implies that we can choose the norm $\|\wcdot \|$ on $V$ so that $(V, \| \cdot \|)$ is a unitary Banach space representation of ${\overline{T}}$, i.e.\ we have $\|t\cdot v\|=\|v\|$ for every $t\in {\overline{T}}$ and $v\in V$.} The category of admissible $E$-Banach space representations of ${\overline{T}}$ will be denoted by $\Ban_E^{\adm}({\overline{T}})$. It is an abelian category. This can be easily deduced from the duality theorem below or from the fact that $\Ban_E^{\adm}({\overline{T}})$ is equivalent to the localized category $\Mod_{{\mathcal O}}^{\varpi-\adm}({\overline{T}})_E$. Recall that for an ${\mathcal O}$-linear additive category ${\mathscr A}$ the $E$-linear additive category ${\mathscr A}_E$ has the same objects as ${\mathscr A}$ whereas the morphisms are given by $\Hom_{{\mathscr A}_E}(A, B) = \Hom_{{\mathscr A}}(A, B)\otimes_{{\mathcal O}} E$. The functor \begin{equation} \label{locbanach2} \Mod_{{\mathcal O}}^{\varpi-\adm}(T)_E\longrightarrow \Ban_E^{\adm}(T), \quad W\mapsto (W_E, \| \wcdot\|) \end{equation} is an equivalence of categories. Here for $W\in \Mod_{{\mathcal O}}^{\varpi-\adm}(T)$ we let $\|\wcdot\|$ be the norm on $V=W_E$ for which $\,\Image(W\to V, w\mapsto w\otimes 1)$ is the unit ball $\{v\in V\mid \|v\|\le 1\}$ in $V$. Next we review the duality theorem. A $\Lambda_{{\mathcal O}}({\overline{T}})_E$-module $M$ will be called an augmented $E[{\overline{T}}]$-module. Again, $\Mod_E^{\aug}({\overline{T}})$ denotes the category of augmented $E[{\overline{T}}]$-modules. An augmented $E[{\overline{T}}]$-module $M$ is called finitely generated if there exists a ${\Lambda}_{{\mathcal O}}({\overline{T}})$-submodule $L$ with $L\in \Mod_{{\mathcal O}}^{\fgaug}({\overline{T}})$ and $L_E = M$.
We equip $M$ with the topology induced by the canonical topology on $L$, i.e.\ $M$ is a topological vector space and the inclusion $L\hookrightarrow M$ is open and continuous (hence $M$ is locally compact). As before $\Mod_E^{\fgaug}({\overline{T}})$ denotes the full subcategory of $\Mod_E^{\aug}({\overline{T}})$ of finitely generated augmented $E[{\overline{T}}]$-modules. For an $E$-Banach space $V=(V, \| \wcdot\|)$ we define ${\mathcal D}(V) = V'$ to be the dual space equipped with the weak topology. It follows immediately from (\cite{schneider-teitelbaum}, Thm.\ 3.5) that the functor \begin{equation} \label{pontrajagin4} {\mathcal D}: \Ban_E^{\adm}({\overline{T}})\longrightarrow \Mod_E^{\fgaug}({\overline{T}}), \,\, V\mapsto {\mathcal D}(V)= V' \end{equation} is an anti-equivalence of $E$-linear abelian categories. Its quasi-inverse is given by \begin{equation*} \label{pontrajagin5} \Mod_E^{\fgaug}({\overline{T}}) \longrightarrow \Ban_E^{\adm}({\overline{T}}), \, \,\, M\mapsto \Hom_{E, \cont}(M, E). \end{equation*} For $V\in \Ban_E^{\adm}({\overline{T}})$ one can show that the $E[{\overline{T}}]$-action on $V$ extends naturally to a ${\Lambda}_{{\mathcal O}}({\overline{T}})_E$-action and that the functor \eqref{pontrajagin4} is ${\Lambda}_{{\mathcal O}}({\overline{T}})_E$-linear. \begin{remarks} \label{remarks:locdualpply} \rm (a) Let $V_1$, $V_2\in \Ban_E^{\adm}({\overline{T}})$ and assume that $\dim_E(V_1) <\infty$. The duality theorem implies that the $E$-vector space $\Hom_{E[{\overline{T}}]}(V_1, V_2)$ is finite-dimensional as well. In fact any ${\overline{T}}$-equivariant homomorphism $V_1\to V_2$ is automatically continuous so we have \begin{equation} \label{pontrajagin5a} \Hom_{E[{\overline{T}}]}(V_1, V_2)\, \cong \, \Hom_{\Mod_E^{\fgaug}({\overline{T}})}({\mathcal D}(V_2), {\mathcal D}(V_1)). \end{equation} If $U$ is a compact open subgroup of ${\overline{T}}$ then the $\Lambda_{{\mathcal O}}(U)_E$-module \eqref{pontrajagin5a} is finitely generated, hence it has finite length since ${\mathcal D}(V_1)$ has finite length (because of $\dim_E({\mathcal D}(V_1))= \dim_E(V_1) <\infty$). It is therefore finite-dimensional as an $E$-vector space. \noindent (b) Let $\psi: F^*\to E$ be a continuous character and let $\widetilde{E} = E[{\varepsilon}]$ be the $E$-algebra of dual numbers. Since the image of $\psi$ is bounded in $E$, the image of the character \begin{equation} \label{extstein1a} \Theta_{\psi}: F^*\longrightarrow \widetilde{E}^*,\,\, \Theta_{\psi}(x)= 1+ \psi(x){\varepsilon} \end{equation} is bounded in $\widetilde{E}$. Therefore, similarly to Remark \ref{remark:padicadm}, we obtain an admissible $E$-Banach space representation $\widetilde{E}(\Theta_{\psi})$ of $T$. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi \end{remarks} \paragraph{Extensions of the Steinberg representation} For a $\varpi$-adically continuous ${\mathcal O}[T]$-module $W$ the $\varpi$-adically continuous parabolic induction of $W$ is defined by \begin{equation*} \label{parindcont} \Ind_{B, \cont}^G W \,= \, \{\Phi\in C_{\cont}(G, W)\mid \Phi(tng) = t\Phi(g) \,\,\forall t\in T, n\in N, g\in G\} \end{equation*} where $C_{\cont}(G, W)$ denotes the ${\mathcal O}$-module of maps $G \to W$ that are continuous with respect to the $\varpi$-adic topology on $W$. The $G$-action on $\Ind_{B, \cont}^G W$ is induced by right multiplication. By (\cite{emerton}, Lemma 4.1.3) we have \begin{equation*} \label{parindcont2} \Ind_{B, \cont}^G W\, = \, \prolimm \Ind_B^G W_m.
\end{equation*} Moreover, for a $\varpi$-adically admissible ${\mathcal O}[T]$-module $W$ the ${\mathcal O}[G]$-module $\Ind_{B, \cont}^G W$ is $\varpi$-adically admissible as well and the functor \begin{equation*} \label{parindcont3} \Ind_{B, \cont}^G : \Mod_{{\mathcal O}}^{\varpi-\adm}(T) \to \Mod_{{\mathcal O}}^{\varpi-\adm}(G) \end{equation*} is exact (see \cite{emerton}, Prop.\ 4.1.5 and Prop.\ 4.1.7; the category $\Mod_{{\mathcal O}}^{\varpi-\adm}(G)$ of $\varpi$-adically admissible ${\mathcal O}[G]$-modules is defined in the same way as $\Mod_{{\mathcal O}}^{\varpi-\adm}(T)$; see \cite{emerton}, Def.\ 2.4.7). Recall that the Steinberg representation $\St_{G}(R)$ for any ring $R$ is the admissible $R[G]$-module defined by the short exact sequence \begin{equation*} \label{steinberg} \begin{CD} 0@>>> R(0) @>(\star)>> \Ind_{B}^G R(0) @>\pr >> \St_{G}(R) @>>> 0. \end{CD} \end{equation*} Here $R(0)$ denotes $R$ with the trivial $G$-action and the map $(\star)$ is given by sending $x\in R$ to the constant map $G\to R, g\mapsto x$. We may also consider the continuous Steinberg representation $\St_{G, \cont}({\mathcal O})$ defined by the sequence \begin{equation*} \label{steinbergcont} \begin{CD} 0@>>> {\mathcal O}(0) @>(\star)>> \Ind_{B, \cont}^G {\mathcal O}(0) @>\pr >> \St_{G, \cont}({\mathcal O}) @>>> 0. \end{CD} \end{equation*} \begin{remark} \label{remark:steinbergcont} \rm The continuous Steinberg representation is a $\varpi$-adically admissible ${\mathcal O}[G]$-module. In fact we have $\St_{G, \cont}({\mathcal O})=\prolimm \St_{G, \cont}({\mathcal O})_m $ and $\St_{G, \cont}({\mathcal O})_m = \St_{G}({\mathcal O}_m)$ for all $m\ge 1$. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi \end{remark} We now recall the construction of a certain extension of $\St_{G, \cont}({\mathcal O})$ associated to a continuous homomorphism $\psi: F^*\to {\mathcal O}$ (see \cite{spiess1}, \S 3.7). Let ${\widetilde{\mathcal O}}(\Theta_{\psi})$ be the $\varpi$-adically admissible ${\mathcal O}[T]$-module defined in Remark \ref{remark:padicadm}. Multiplication by ${\varepsilon}$ induces a short exact sequence of $\varpi$-adically admissible ${\mathcal O}[T]$-modules \begin{equation*} \label{extstein6} 0\longrightarrow {\mathcal O}(0) \stackrel{\alpha}{\longrightarrow} {\widetilde{\mathcal O}}(\Theta_{\psi})\stackrel{\beta}{\longrightarrow} {\mathcal O}(0) \longrightarrow 0 \end{equation*} (i.e.\ $\alpha\circ \beta$ is multiplication with ${\varepsilon}$). By applying $\Ind_{B, \cont}^G$ we obtain a short exact sequence of $\varpi$-adically admissible ${\mathcal O}[G]$-modules \begin{equation} \label{extstein7} 0\longrightarrow \Ind_{B, \cont}^G {\mathcal O}(0) \longrightarrow \Ind_{B, \cont}^G {\widetilde{\mathcal O}}(\Theta_{\psi}) \longrightarrow \Ind_{B, \cont}^G {\mathcal O}(0) \longrightarrow 0. \end{equation} By taking the pull-back of \eqref{extstein7} with respect to the map $(\star)$ (for $R={\mathcal O}$) we obtain a short exact sequence \begin{equation} \label{extstein8} 0\longrightarrow \Ind_{B, \cont}^G {\mathcal O}(0) \longrightarrow {\widetilde{\mathscr E}}(\psi) \longrightarrow {\mathcal O}(0) \longrightarrow 0, \end{equation} i.e.\ ${\widetilde{\mathscr E}}(\psi)$ is the ${\mathcal O}[G]$-submodule of $\Ind_{B, \cont}^G {\widetilde{\mathcal O}}(\Theta_{\psi})$ consisting of those $\Phi\in \Ind_{B, \cont}^G {\widetilde{\mathcal O}}(\Theta_{\psi})$ such that $\Phi \!\!\mod {\varepsilon}$ is a constant map $G \to {\mathcal O}$.
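Note that the two outer terms of \eqref{extstein6} do indeed carry the trivial $T$-action; this is a direct consequence of ${\varepsilon}^2=0$. Explicitly, for $t=\delta(x)$ with $x\in F^*$ and $a, m\in {\mathcal O}$ we have
\[
t\cdot \alpha(a)\,=\,\Theta_{\psi}(x)\,{\varepsilon} a\,=\,(1+\psi(x){\varepsilon})\,{\varepsilon} a\,=\,{\varepsilon} a\,=\,\alpha(a)
\qquad\mbox{and}\qquad
\Theta_{\psi}(x)\, m\,\equiv\, m \mod {\varepsilon}{\widetilde{\mathcal O}},
\]
so the submodule $\alpha({\mathcal O})={\varepsilon}{\widetilde{\mathcal O}}$ and the quotient ${\widetilde{\mathcal O}}/{\varepsilon}{\widetilde{\mathcal O}}$ are both isomorphic to ${\mathcal O}(0)$ as ${\mathcal O}[T]$-modules.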
\begin{df} \label{df:extstein} The ($\varpi$-adically admissible) ${\mathcal O}[G]$-module ${\mathscr E}(\psi)$ is defined as the push-out of \eqref{extstein8} under the projection $\pr: \Ind_{B, \cont}^G {\mathcal O}(0) \to \St_{G,\cont}({\mathcal O})$ so that we have a short exact sequence of ${\mathcal O}[G]$-modules \begin{equation} \label{extstein9} 0\longrightarrow \St_{G, \cont}({\mathcal O}) \longrightarrow {\mathscr E}(\psi) \longrightarrow {\mathcal O}(0) \longrightarrow 0. \end{equation} \end{df} We need the following lemma. \begin{lemma} \label{lemma:extsteinfunc} The map \begin{equation} \label{extstein10} \Hom_{\cont}(F^*, {\mathcal O}) \longrightarrow \Ext_{\Mod_{{\mathcal O}}^{\varpi-\adm}(G)}({\mathcal O}(0), \St_{G, \cont}({\mathcal O})), \,\, \psi\mapsto [{\mathscr E}(\psi)]_{\sim} \end{equation} is a homomorphism of ${\mathcal O}$-modules. \end{lemma} For the proof of the additivity of \eqref{extstein10} see (\cite{gehrstein}, Lemma 1 (c)). \begin{remark} \label{remark:extsteindisc} \rm For $\psi= v_F$ the sequence \eqref{extstein6} consists of admissible ${\mathcal O}[T]$-modules. Therefore we may apply usual parabolic induction so we obtain the sequence \begin{equation} \label{extstein7a} 0\longrightarrow \Ind_{B}^G {\mathcal O}(0) \longrightarrow \Ind_{B}^G {\widetilde{\mathcal O}}(\Theta_{\psi}) \longrightarrow \Ind_{B}^G {\mathcal O}(0) \longrightarrow 0. \end{equation} By mimicking the construction of the extension \eqref{extstein9} we thus obtain a sequence of admissible ${\mathcal O}[G]$-modules \begin{equation} \label{extstein9a} 0\longrightarrow \St_{G}({\mathcal O}) \longrightarrow {\mathscr E}_0 \longrightarrow {\mathcal O}(0) \longrightarrow 0. \end{equation} We remark that the ${\mathcal O}[G]$-module ${\mathscr E}_0$ admits the following resolution \begin{equation} \label{extstein9b} \begin{CD} 0 @>>> \cInd^{G}_{K(0)} {\mathcal O}(0) @> T-(q+1)\id >> \cInd^{G}_{K(0)} {\mathcal O}(0)@>>> {\mathscr E}_0@>>> 0 \end{CD} \end{equation} where $q$ denotes the cardinality of the residue field ${\mathcal O}_F/{\mathfrak p}$ and $T$ the standard spherical Hecke operator at level $K(0)$. This fact will be used in the next section. Note that the sequence \eqref{extstein9} (for $\psi=v_F$) is the push-out of \eqref{extstein9a} with respect to the natural embedding $\St_G({\mathcal O})\hookrightarrow \St_{G, \cont}({\mathcal O})$. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi \end{remark} \paragraph{Universal extension of the Steinberg representation} Following \cite{gehrstein, berggehr} the definition of the extension \eqref{extstein9} can be refined as follows. Let $A$ be a locally profinite abelian group and let $\psi: F^* \to A$ be a continuous homomorphism. There exists a canonical extension \begin{equation} \label{extstein9c} 0\longrightarrow \St_{G, \cont}(A) \longrightarrow {\mathscr E}'(\psi) \longrightarrow {\mathbb Z}(0) \longrightarrow 0 \end{equation} associated to $\psi$ defined as follows. Let ${\mathcal R} = {\mathbb Z}+ {\varepsilon} A$ be the topological ring of formal sums $m + {\varepsilon} a$ with $m\in {\mathbb Z}$, $a\in A$. Addition and multiplication are given by $(m_1 + {\varepsilon} a_1) + (m_2 + {\varepsilon} a_2)= (m_1 + m_2) + {\varepsilon}(a_1 + a_2)$ and $(m_1 + {\varepsilon} a_1) \cdot (m_2 + {\varepsilon} a_2)= (m_1 \cdot m_2) + {\varepsilon}(m_2 a_1 + m_1 a_2)$ respectively (the topology is the product topology with ${\mathbb Z}$ being equipped with the discrete topology).
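Note that the multiplication of ${\mathcal R}$ has no ${\varepsilon}^2$-term; in particular $1+{\varepsilon} A$ is a subgroup of the unit group ${\mathcal R}^*$ and, for every continuous homomorphism $\psi: F^*\to A$, the map $x\mapsto 1+\psi(x){\varepsilon}$ is multiplicative:
\[
(1+\psi(x){\varepsilon})\cdot (1+\psi(y){\varepsilon})\,=\, 1 + {\varepsilon}\left(\psi(x)+\psi(y)\right)\,=\, 1+ \psi(xy){\varepsilon} \qquad \mbox{for all $x, y\in F^*$.}
\]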
Then $x\mapsto \Theta_{\psi}(x) = 1+ \psi(x){\varepsilon}$ can be viewed as a continuous homomorphism $\Theta_{\psi}: F^*\to {\mathcal R}^*$, so as before we can consider the continuous parabolic induction $\Ind_{B, \cont}^G {\mathcal R}(\Theta_{\psi})$ of the ${\mathcal R}[T]$-module ${\mathcal R}(\Theta_{\psi})$. By applying the functor $\Ind_{B, \cont}^G$ to the short exact sequence \begin{equation*} \label{extstein9d} \begin{CD} 0@>>> A(0) @> a \mapsto {\varepsilon} a >> {\mathcal R}(\Theta_{\psi})@> m + {\varepsilon} a \mapsto m >> {\mathbb Z}(0) @>>> 0 \end{CD} \end{equation*} we obtain a short exact sequence of $G$-modules \begin{equation*} \label{extstein9f} 0\longrightarrow \Ind_{B, \cont}^G A(0) \longrightarrow \Ind_{B, \cont}^G {\mathcal R}(\Theta_{\psi}) \longrightarrow \Ind_B^G {\mathbb Z}(0) \longrightarrow 0. \end{equation*} Again, pulling back this sequence with respect to the canonical inclusion ${\mathbb Z}(0) \hookrightarrow \Ind_B^G {\mathbb Z}(0)$ and then taking the push-out with respect to the projection $\Ind_{B, \cont}^G A(0) \to \St_{G, \cont}(A) := \left(\Ind_{B, \cont}^G A(0)\right)/A(0)$ yields \eqref{extstein9c}. In the special case where $\psi$ is the identity $\id: F^* \to F^*$ we obtain the universal extension of the Steinberg representation \begin{equation} \label{extsteinuniv} 0\longrightarrow \St_{G, \cont}(F^*) \longrightarrow {\mathscr E}_{\univ} \longrightarrow {\mathbb Z}(0) \longrightarrow 0 \end{equation} (where ${\mathscr E}_{\univ}:= {\mathscr E}'(\id)$). \section{Automorphic and arithmetic ${\mathscr L}$-Invariants} \label{section:linv} In this section we mostly recall the definition of automorphic and arithmetic ${\mathscr L}$-invariants and state our main result. To begin with we introduce some notation. For the remainder of this paper $F$ denotes a totally real number field with ring of algebraic integers ${\mathcal O}_F$. We fix a nonarchimedean place ${\mathfrak p}$ of $F$ lying above a prime number $p$. We denote by ${\mathbb A}$ (resp.\ ${\mathbb A}^{{\mathfrak p}}$, resp.\ ${\mathbb A}_f$, resp.\ ${\mathbb A}_f^{{\mathfrak p}}$) the ring of adeles of $F$ (resp.\ ring of prime-to-${\mathfrak p}$, resp.\ finite, resp.\ finite prime-to-${\mathfrak p}$ adeles). We let $S_{\infty}$ denote the set of archimedean places of $F$ and put $F_{\infty} = F\otimes_{{\mathbb Q}} {\mathbb R}$. For a nonarchimedean place ${\mathfrak q}$ of $F$ we let ${\mathcal O}_{{\mathfrak q}}$ denote the valuation ring in $F_{{\mathfrak q}}$ and we put $U_{{\mathfrak q}}^{(0)} = U_{{\mathfrak q}} = {\mathcal O}_{{\mathfrak q}}^*$ and $U_{{\mathfrak q}}^{(n)} =1 +{\mathfrak q}^n{\mathcal O}_{{\mathfrak q}}$ for $n\ge 1$. For a place $v$ of $F$ and an algebraic group ${\mathcal G}/F$ we often write ${\mathcal G}_v$ for ${\mathcal G}(F_v)$. Let $D$ be a quaternion algebra over $F$, let $\widetilde{G}=D^*$ (viewed as an algebraic group over $F$), let $Z\cong {\mathbb G}_m$ be the center of $\widetilde{G}$ and put $G = \widetilde{G}/Z$ (thus $G={\PGL_2}{_{/F}}$ if $D=M_2(F)$ is the algebra of $2\times 2$ matrices). We let $\Ram_D$ be the set of (archimedean or nonarchimedean) places of $F$ that are ramified in $D$. We assume that ${\mathfrak p}$ does not lie in $\Ram_D$ so that $G_{{\mathfrak p}} = \PGL_2(F_{\mathfrak p})$. We denote by $B_{{\mathfrak p}}$ the standard Borel subgroup of $G_{{\mathfrak p}}$ and by $T_{{\mathfrak p}}$ its maximal torus.
We denote by $\Sigma$ the set of archimedean places of $F$ that split $D$ (i.e.\ $\Sigma = S_{\infty}\setminus \Ram_D$) and we put $d=\sharp{\Sigma}$. If $D$ is not totally definite (i.e.\ if $\Sigma\ne \emptyset$) then we choose an ordering $\Sigma = \{\sigma_1, \ldots, \sigma_d\}$ of the places in $\Sigma$. We let ${\mathfrak d}$ be the ideal of ${\mathcal O}_F$ that is the product of the primes which are ramified in $D$, so that $\Ram_D = (S_{\infty}-\Sigma) \cup \{{\mathfrak q}: {\mathfrak q}\mid {\mathfrak d}\}$. For $v\in S_{\infty}$ we denote by $G_{v, +}$ the connected component of $1$ in $G_v$. Thus $G_{v, +}=(\widetilde{G}_v)_+/Z_v$ where $(\widetilde{G}_v)_+$ is the subgroup of elements $g\in \widetilde{G}_v$ with $\Nrd(g) >0$. We also put $G(F)_+ = G(F) \cap \prod_{v\mid \infty} G_{v, +}$ and define \begin{equation*} \label{pi0ginfty} \Delta := \, G(F)/G(F)_+ \, \cong \, \{\pm1\}^d. \end{equation*} A compact open subgroup $K_f^{{\mathfrak p}}$ of $G({\mathbb A}_f^{{\mathfrak p}})$ will be called a (prime-to-${\mathfrak p}$) level. We consider in particular levels of the form $K_0({\mathfrak n})^{{\mathfrak p}}$. We recall the definition. Let ${\mathfrak n}\ne (0)$ be an ideal of ${\mathcal O}_F$ that is relatively prime to $\{{\mathfrak p}\} \cup \Ram_D$. Let ${\mathcal O}_D$ be an Eichler order of level ${\mathfrak n}$ in $D$ (if $D= M_2(F)$ then we choose ${\mathcal O}_D$ to be the subalgebra $M_0({\mathfrak n})\subseteq M_2({\mathcal O}_F)$ of matrices that are upper triangular modulo ${\mathfrak n}$). For a nonarchimedean place ${\mathfrak q}$ of $F$ we put ${\mathcal O}_{D, {\mathfrak q}} = {\mathcal O}_D\otimes_{{\mathcal O}_F} {\mathcal O}_{{\mathfrak q}}$ and define $K_0({\mathfrak n})^{{\mathfrak p}}$ to be the image of \begin{equation} \label{level} \widetilde{K}_0({\mathfrak n})^{{\mathfrak p}} \, =\, \prod_{{\mathfrak q}\nmid {\mathfrak p} \infty} {\mathcal O}_{D, {\mathfrak q}}^* \end{equation} under the projection $\widetilde{G}({\mathbb A}_f^{{\mathfrak p}})\to G({\mathbb A}_f^{{\mathfrak p}})$. Throughout this section we consider a fixed cuspidal automorphic representation $\pi = \otimes_v \, \pi_v$ of $G({\mathbb A}_F)$ of parallel weight $2$, i.e.\ it has the following properties: \medskip \noindent $-$ $\pi_v$ is the discrete series representation of $G_v$ of weight $2$ for all $v\in S_{\infty}-\Ram_D$, \medskip \noindent $-$ $\pi_v$ is the trivial representation of $G_v$ for all $v\in S_{\infty}\cap \Ram_D$. \medskip \noindent For simplicity we also assume that $\pi_v$ is a onedimensional representation of $G_v$ for all $v\in \Ram_D-S_{\infty}$. \paragraph{Automorphic ${\mathscr L}$-invariants} We put $\pi_f= \pi^{\infty} = \otimes_{v\nmid \infty} \, \pi_v$ and $\pi_0= \pi^{{\mathfrak p}{\mathfrak d}, \infty}= \otimes_{v\nmid {\mathfrak p}{\mathfrak d}\infty} \, \pi_v$ and assume that its conductor is ${\mathfrak f}(\pi_0)= {\mathfrak n}$. In order to associate to $\pi$ automorphic and arithmetic ${\mathscr L}$-invariants we need to assume that $\pi$ is of split multiplicative type at ${\mathfrak p}$, i.e.\ we assume that the following holds \begin{equation} \label{steinbergaut} \pi_{\mathfrak p} \,\cong\, \St_{G_{{\mathfrak p}}}({\mathbb C}). \end{equation} This implies in particular that the conductor of $\pi$ is ${\mathfrak f}(\pi) = {\mathfrak p}{\mathfrak n}$. The $G({\mathbb A}_f)$-representation $\pi_f$ can be defined over an algebraic number field ${\mathbb Q}_{\pi}\subseteq {\overline{\mathbb Q}} \subseteq {\mathbb C}$, i.e.\ $\pi_f$ (and therefore also $\pi_0$) admits a ${\mathbb Q}_{\pi}$-structure.
The spherical (prime-to-$p$) Hecke algebra ${\mathbb T} = {\mathbb T}^{S_0}={\mathbb Z}[T_{{\mathfrak q}}\mid{\mathfrak q}\nmid p{\mathfrak d}{\mathfrak n}]$ (where $S_0 = \{{\mathfrak q}: {\mathfrak q} \mid p{\mathfrak d}{\mathfrak n}\}$) acts on the onedimensional vector space $\pi_0^{K_0({\mathfrak n})^{{\mathfrak p}}}$ via a ring homomorphism ${\lambda}_{\pi}: {\mathbb T}\longrightarrow {\mathbb C}$ (the Hecke eigenvalue homomorphism) whose image is contained in ${\mathcal O}_{{\mathbb Q}_{\pi}}$. We choose a place ${\mathfrak P}$ of ${\mathbb Q}_{\pi}$ above $p$, or equivalently, we choose an embedding ${\mathbb Q}_{\pi}\hookrightarrow {\mathbb C}_p$, and a subfield $E$ of ${\mathbb C}_p$ that is finite over ${\mathbb Q}_p$ and contains the image of ${\mathbb Q}_{\pi}$ (we may choose $E$ to be the completion of ${\mathbb Q}_{\pi}$ at ${\mathfrak P}$). Let ${\mathcal O}$ be the valuation ring of $E$ and let $\varpi\in {\mathcal O}$ be a prime element. Then ${\lambda}_{\pi}$ can be viewed as a ring homomorphism \begin{equation*} \label{eigenvalue} {\lambda}_{\pi}: {\mathbb T}_{{\mathcal O}}\longrightarrow {\mathcal O}. \end{equation*} For a ${\mathbb T}_{{\mathcal O}}$-module $N$ we denote by $N_{\pi}$ the localization of $N$ with respect to $\ker({\lambda}_{\pi})$. We will also consider ${\mathbb T}$-modules $N$ with an additional $\Delta$-action. If ${\epsilon}:\Delta\to \{\pm 1\}$ is a character then the pair $({\lambda}_{\pi}, {\epsilon})$ defines a homomorphism $({\lambda}_{\pi}, {\epsilon}): {\mathbb T}[\Delta]\to {\mathcal O}$ and we define $N_{\pi, {\epsilon}}$ to be the localization of $N$ with respect to its kernel. As in \cite{spiess1} for a ring $R$, a level $K_f^{{\mathfrak p}}$ and an $R[G_{{\mathfrak p}}]$-module $M$ we let ${\mathcal A}_R(M, K_f^{{\mathfrak p}}; R)$ denote the $R[G(F)]$-module of maps $\Phi: M\times G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}} \to R$ that are homomorphisms of $R$-modules in the first component. The $G(F)$-action is given by $(\gamma \Phi)(m, gK_f^{{\mathfrak p}}) \, =\, \Phi(\gamma^{-1} m, \gamma^{-1}gK_f^{{\mathfrak p}})$ for $\Phi\in {\mathcal A}_R(M, K_f^{{\mathfrak p}}; R)$, $\gamma\in G(F)$, $m\in M$ and $g\in G({\mathbb A}_f^{{\mathfrak p}})$. The cohomology groups \begin{equation} \label{derham} H^{{\bullet}}(G(F)_+, {\mathcal A}_R(M, K_f^{{\mathfrak p}}; R)) \end{equation} were considered in \cite{spiess1}. Since ${\mathcal A}_R(M, K_f^{{\mathfrak p}}; R)$ is an $R[G(F)]$-module the groups \eqref{derham} are naturally $R[\Delta]$-modules. There is also a canonical action of the Hecke algebra $R[G({\mathbb A}_f^{{\mathfrak p}})//K_f^{{\mathfrak p}}]$ on \eqref{derham} commuting with the $\Delta$-action. The automorphic ${\mathscr L}$-invariants associated to $\pi$ are defined in terms of the cohomology groups \begin{equation*} \label{derham1} H^n(G(F)_+, {\mathcal A}_{{\mathcal O}}(M, K_0({\mathfrak n})^{{\mathfrak p}}; {\mathcal O}))_E \end{equation*} for $n=d$, $M =\St_{G_{{\mathfrak p}}, \cont}({\mathcal O})$ and the extension classes \eqref{extstein9} (compare \cite{spiess1}, 6.1 and \cite{gehrmann1}, 2.5). They carry a canonical ${\mathbb T}_E[\Delta]$-module structure. In the following we will abbreviate \begin{equation*} \label{derham2} {\mathcal H}^n(M)\, :=\, H^n(G(F)_+, {\mathcal A}_{{\mathcal O}}(M, K_0({\mathfrak n})^{{\mathfrak p}}; {\mathcal O}))_E.
\end{equation*} We recall the connection to certain cohomology groups of ${\mathscr S}$-spaces and ${\mathscr S}$-varieties introduced in \cite{spiess2} (see Prop.\ 5.5 and Prop.\ 5.11 in loc.\ cit.). If $d\ge 0$ and if $W$ is an admissible ${\mathcal O}[T_{{\mathfrak p}}]$-module then we have \begin{equation} \label{derham3} {\mathcal H}^{{\bullet}}(\Ind_{B_{{\mathfrak p}}}^{G_{{\mathfrak p}}} W)\, \cong \, {\mathbb H}^{{\bullet}}(X^{\an}; W, {\mathcal O})_E. \end{equation} Furthermore, if $W$ is a $\varpi$-adically admissible ${\mathcal O}[T_{{\mathfrak p}}]$-module and if $V=W_E$ is the associated Banach space representation of $T_{{\mathfrak p}}$ then \begin{equation} \label{derham4} {\mathcal H}^{{\bullet}}(\Ind_{B_{{\mathfrak p}}, \cont}^{G_{{\mathfrak p}}} W)\, \cong \, {\mathbb H}_{\varpi-\ad}^{{\bullet}}(X^{\an}; V, E) \end{equation} where the cohomology groups on the right hand side of \eqref{derham3}, \eqref{derham4} have been introduced in (\cite{spiess2}, \S 4.1, 4.3). Here $X^{\an}$ denotes the (complex) quaternionic Hilbert modular ${\mathscr S}$-manifold $(X_0^D({\mathfrak n})^{{\mathfrak p}})^{\an}$ in the sense of (\cite{spiess2}, \S 5.1). Moreover if $d\ge 1$ then we also have \begin{equation} \label{derham5} {\mathcal H}^{{\bullet}}(\Ind_{B_{{\mathfrak p}}, \cont}^{G_{{\mathfrak p}}} W)\, \cong \, {\mathbb H}_{\varpi-\ad}^{{\bullet}}(X_{{\overline{\mathbb Q}}}; V, E) \end{equation} where now $X= X_0^D({\mathfrak n})^{{\mathfrak p}}$ is the quaternionic Hilbert modular ${\mathscr S}$-variety. The cohomology groups ${\mathbb H}^{{\bullet}}(X^{\an}; W, {\mathcal O})_E$ and ${\mathbb H}_{\varpi-\ad}^{{\bullet}}(X^{\an}; V, E)$ are equipped with a $\Delta$-action and the isomorphisms \eqref{derham3}, \eqref{derham4} are Hecke- and $\Delta$-equivariant. The groups ${\mathbb H}_{\varpi-\ad}^{{\bullet}}(X_{{\overline{\mathbb Q}}}; V, E)$ are equipped with a Galois action. If $W$ is an admissible ${\mathcal O}[T_{{\mathfrak p}}]$-module that is free and of finite rank as an ${\mathcal O}$-module then $V=W_E$ is an admissible Banach space representation of $T_{{\mathfrak p}}$ and we have (see \cite{spiess2}, \S 4.3) \begin{equation} \label{derham6} {\mathbb H}^{{\bullet}}(X^{\an}; W, {\mathcal O})_E\, \cong \, {\mathbb H}_{\varpi-\ad}^{{\bullet}}(X_{{\overline{\mathbb Q}}}; V, E). \end{equation} Note that if $0 \to M_1 \stackrel{\alpha}{ \longrightarrow} M_2 \stackrel{\beta}{ \longrightarrow} M_3 \to 0$ is a short exact sequence of ${\mathcal O}[G_{{\mathfrak p}}]$-modules that splits as a sequence of ${\mathcal O}$-modules then the sequence of $G(F)$-modules \[ 0\longrightarrow {\mathcal A}_{{\mathcal O}}(M_3, K_0({\mathfrak n})^{{\mathfrak p}}; {\mathcal O})\longrightarrow {\mathcal A}_{{\mathcal O}}(M_2, K_0({\mathfrak n})^{{\mathfrak p}}; {\mathcal O})\longrightarrow {\mathcal A}_{{\mathcal O}}(M_1, K_0({\mathfrak n})^{{\mathfrak p}}; {\mathcal O})\longrightarrow 0 \] is exact as well, so we obtain a long exact sequence \begin{equation} \label{extdelta} \ldots \longrightarrow {\mathcal H}^n(M_3) \stackrel{\beta^*}{ \longrightarrow} {\mathcal H}^n(M_2) \stackrel{\alpha^*}{ \longrightarrow} {\mathcal H}^n(M_1)\stackrel{\delta^n}{\longrightarrow} {\mathcal H}^{n+1}(M_3) \longrightarrow \ldots \end{equation} of ${\mathbb T}_E[\Delta]$-modules.
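Indeed, unwinding the definition of ${\mathcal A}_{{\mathcal O}}$ given above we have a natural isomorphism of $G(F)$-modules
\[
{\mathcal A}_{{\mathcal O}}(M, K_0({\mathfrak n})^{{\mathfrak p}}; {\mathcal O})\,\cong\,\Hom_{{\mathcal O}}\big(M, \Maps(G({\mathbb A}_f^{{\mathfrak p}})/K_0({\mathfrak n})^{{\mathfrak p}}, {\mathcal O})\big),
\]
and the functor $\Hom_{{\mathcal O}}(\wcdot\,, X)$ takes ${\mathcal O}$-split short exact sequences to short exact sequences for every ${\mathcal O}$-module $X$.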
For example the sequence \eqref{extstein9a} yields a long exact sequence \begin{equation} \label{extdeltaord} \ldots \longrightarrow {\mathcal H}^n({\mathcal O}(0)) \longrightarrow {\mathcal H}^n({\mathscr E}_0) \longrightarrow {\mathcal H}^n(\St_{G_{{\mathfrak p}}}({\mathcal O}))\stackrel{\delta_0^n}{\longrightarrow} {\mathcal H}^{n+1}({\mathcal O}(0)) \longrightarrow \ldots \end{equation} \begin{prop} \label{prop:harder} \noindent (a) We have \begin{equation} \label{cohomautom1} {\mathcal H}^n(\St_{G_{{\mathfrak p}}, \cont}({\mathcal O}))_{\pi}\, =\, {\mathcal H}^n(\St_{G_{{\mathfrak p}}}({\mathcal O}))_{\pi} \, =\, \left\{\begin{array}{cc} E[\Delta] & \mbox{if $n=d$,}\\ 0 &\mbox{otherwise.}\end{array}\right. \end{equation} for every $n\ge 0$. In particular we get \begin{equation*} \label{cohomautom2} {\mathcal H}^n(\St_{G_{{\mathfrak p}}, \cont}({\mathcal O}))_{\pi, {\epsilon}}\, =\, \left\{\begin{array}{cc} E & \mbox{if $n=d$,}\\ 0 &\mbox{otherwise.} \end{array}\right. \end{equation*} for every character ${\epsilon}: \Delta\to \{\pm 1\}$ and $n\ge 0$. \noindent (b) The connecting homomorphisms in \eqref{extdeltaord} induce isomorphisms $(\delta_0)_{\pi}: {\mathcal H}^n(\St_{G_{{\mathfrak p}}}({\mathcal O}))_{ \pi} \to {\mathcal H}^{n+1}({\mathcal O}(0))_{\pi}$. Consequently, we have \begin{equation*} \label{cohomautom3} {\mathcal H}^n({\mathcal O}(0))_{\pi, {\epsilon}}\, =\, \left\{\begin{array}{cc} E & \mbox{if $n=d+1$,}\\ 0 &\mbox{otherwise} \end{array}\right. \end{equation*} for every character ${\epsilon}: \Delta\to \{\pm 1\}$ and $n\ge 0$. \noindent (c) We have \begin{equation*} \label{cohomautom4} {\mathcal H}^n(\Ind_{B_{{\mathfrak p}}, \cont}^{G_{{\mathfrak p}}}{\mathcal O}(0))_{\pi, {\epsilon}}\, =\, {\mathcal H}^n(\Ind_{B_{{\mathfrak p}}}^{G_{{\mathfrak p}}}{\mathcal O}(0))_{\pi, {\epsilon}}\, =\,\left\{\begin{array}{cc} E & \mbox{if $n=d, d+1$,}\\ 0 &\mbox{otherwise} \end{array}\right. \end{equation*} for every character ${\epsilon}: \Delta\to \{\pm 1\}$ and $n\ge 0$. \end{prop} \begin{proof} (a) For the first equality in \eqref{cohomautom1} note that we have a commutative diagram \begin{equation} \label{picoho1} \small{\begin{CD} \ldots {\mathcal H}^n(\St_{G_{{\mathfrak p}}}({\mathcal O})) @>>> {\mathcal H}^n(\Ind_{B_{{\mathfrak p}}}^{G_{{\mathfrak p}}}{\mathcal O}(0)) @>>> {\mathcal H}^n({\mathcal O}(0))@>>> {\mathcal H}^{n+1}(\St_{G_{{\mathfrak p}}}({\mathcal O})) \ldots \\ @VVV @VV\eqref{derham6}V @VV \id V @VVV\\ \ldots {\mathcal H}^n(\St_{G_{{\mathfrak p}}, \cont}({\mathcal O})) @>>> {\mathcal H}^n(\Ind_{B_{{\mathfrak p}}, \cont}^{G_{{\mathfrak p}}}{\mathcal O}(0)) @>>> {\mathcal H}^n({\mathcal O}(0)) @>>> {\mathcal H}^{n+1}(\St_{G_{{\mathfrak p}}, \cont}({\mathcal O})) \ldots \\ \end{CD}} \end{equation} where the rows are induced by \eqref{steinberg} and \eqref{steinbergcont} respectively. By (\cite{spiess2}, Cor.\ 5.12) and the five lemma all vertical maps in \eqref{picoho1} are isomorphisms. For the second equality in \eqref{cohomautom1} see (\cite{spiess2}, Prop.\ 5.13). (b) We will show that ${\mathcal H}^n({\mathscr E}_0)_{\pi} = 0$ for every $n\ge 0$.
For that we use the long exact sequence induced by the resolution \eqref{extstein9b} of ${\mathscr E}_0$ localized at $\pi$ \begin{equation*} \label{extdeltase0} \ldots{\mathcal H}^{n-1}(\cInd^{G}_{K} {\mathcal O}(0))_{\pi}\longrightarrow {\mathcal H}^n({\mathscr E}_0)_{\pi} \longrightarrow {\mathcal H}^n(\cInd^{G}_{K} {\mathcal O}(0))_{\pi} \longrightarrow {\mathcal H}^n(\cInd^{G}_{K} {\mathcal O}(0))_{\pi}\ldots \end{equation*} where $K=G({\mathcal O}_{{\mathfrak p}})$. Therefore it suffices to see that ${\mathcal H}^n(\cInd^{G}_{K} {\mathcal O}(0))_{\pi}=0$ for every $n\ge 0$. By (\cite{spiess2}, Prop.\ 3.38) we have \[ {\mathcal H}^n(\cInd^{G}_{K} {\mathcal O}(0))\,\cong \, H^n(X_K^{\an}, {\mathcal O})_E\,\cong \, H^n(X_K^{\an}, E). \] Since the conductor of $\pi$ is ${\mathfrak p}{\mathfrak n}$ and the level of the quaternionic Hilbert modular variety $X_K^{\an}$ equals $K_0({\mathfrak n})\subseteq G({\mathbb A}_f)$ we have $H^n(X_K^{\an}, E)_{\pi} = 0$. (c) follows from (a) and (b) using diagram \eqref{picoho1}. \end{proof} \begin{remark} \label{remark:ramanunjan} \rm If $D$ is a totally definite division algebra then the connecting homomorphism \begin{eqnarray*} \label{cohomautom5} && H^0(G(F), {\mathscr A}_{{\mathbb Z}}(\St_{G_{{\mathfrak p}}}({\mathbb Z}), K_0({\mathfrak n})^{{\mathfrak p}}; {\mathbb Z}))_E \stackrel{\delta_0^0}{\longrightarrow} \\ && \hspace{2cm} \longrightarrow H^1(G(F), {\mathscr A}_{{\mathbb Z}}({\mathbb Z}, K_0({\mathfrak n})^{{\mathfrak p}}; {\mathbb Z}))_E = H^1(G(F), C(G({\mathbb A}_f^{{\mathfrak p}})/K_0({\mathfrak n})^{{\mathfrak p}}; {\mathbb Z}))_E\nonumber \end{eqnarray*} is an isomorphism for any field $E$ of characteristic 0 (i.e.\ without passing to the localization at $\pi$). In fact, in this case we have \[ H^n(X_K^{\an}, E) \, =\, \left\{\begin{array}{cc} C(G({\mathbb A}_f)/K_0({\mathfrak n}), E) & \mbox{if $n=0$,}\\ 0 &\mbox{if $n\ge 1$} \end{array}\right. \] and $T_{{\mathfrak p}}-(q+1)\id\in \End(C(G({\mathbb A}_f)/K_0({\mathfrak n}), E))$ is an isomorphism. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi \end{remark} Let $\psi: F_{{\mathfrak p}}^*\to {\mathcal O}$ be a continuous homomorphism and consider the long exact sequence \begin{equation} \label{extdeltapsi} \ldots \longrightarrow {\mathcal H}^n({\mathcal O}(0)) \longrightarrow {\mathcal H}^n({\mathscr E}(\psi)) \longrightarrow {\mathcal H}^n(\St_{G_{{\mathfrak p}}, \cont}({\mathcal O}))\stackrel{\delta_{\psi}}{\longrightarrow} {\mathcal H}^{n+1}({\mathcal O}(0)) \longrightarrow \ldots \end{equation} induced by the short exact sequence \eqref{extstein9}. Recall (\cite{spiess1}, Def.\ 6.3), (\cite{gehrmann1}, 2.1) that the {\it automorphic ${\mathscr L}$-invariant} ${\mathscr L}_{{\epsilon}}(\pi, \psi)\in E$ is the uniquely determined scalar satisfying \begin{equation*} \label{linv} {\mathscr L}_{{\epsilon}}(\pi, \psi)\cdot (\delta_0)_{\pi, {\epsilon}} \, =\, (\delta_{\psi})_{\pi, {\epsilon}}: {\mathcal H}^d(\St_{G_{{\mathfrak p}}, \cont}({\mathcal O}))_{\pi, {\epsilon}} \to {\mathcal H}^{d+1}({\mathcal O}(0))_{\pi, {\epsilon}}. \end{equation*} We will show in the next section that ${\mathscr L}_{{\epsilon}}(\pi, \psi)$ is independent of the choice of the character ${\epsilon}: \Delta \to \{\pm 1\}$. \begin{lemma} \label{lemma:linvlin} Let ${\epsilon}:\Delta\to \{\pm 1\}$ be a character. We have \noindent (a) For the normalized valuation $v_{{\mathfrak p}}: F_{{\mathfrak p}}\to {\mathbb Z}\cup\{+ \infty\}$ we have ${\mathscr L}_{{\epsilon}}(\pi, v_{{\mathfrak p}}) =1$.
\noindent (b) The map \begin{equation} \label{linv2} {\mathscr L}_{{\epsilon}}(\pi, \wcdot): \Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O}) \longrightarrow E, \,\, \psi \mapsto {\mathscr L}_{{\epsilon}}(\pi, \psi) \end{equation} is a homomorphism of ${\mathcal O}$-modules. \end{lemma} \begin{proof} (a) follows from $\delta_{v_{\mathfrak p}} = \delta_0$. For (b) note that by Lemma \ref{lemma:extsteinfunc} and standard properties of $\delta$-functors the map \begin{equation*} \Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O}) \longrightarrow \Hom_{{\mathcal O}}({\mathcal H}^d(\St_{G_{{\mathfrak p}}, \cont}({\mathcal O}))_{\pi, {\epsilon}}, {\mathcal H}^{d+1}({\mathcal O}(0))_{\pi, {\epsilon}}), \, \psi\mapsto \delta_{\psi} \end{equation*} is a homomorphism of ${\mathcal O}$-modules. \end{proof} Since $\Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O})\otimes_{{\mathcal O}} E = \Hom_{\cont}(F_{{\mathfrak p}}^*, E)$, Lemma \ref{lemma:linvlin} implies that \eqref{linv2} extends to an $E$-linear functional \begin{equation} \label{linv3} {\mathscr L}_{{\epsilon}}(\pi, \wcdot): \Hom_{\cont}(F_{{\mathfrak p}}^*, E) \longrightarrow E, \,\, \psi \mapsto {\mathscr L}_{{\epsilon}}(\pi, \psi). \end{equation} We need the following criterion characterizing the kernel of \eqref{linv2}. \begin{prop} \label{prop:linvinlin2} Let $\psi\in \Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O})$ and let ${\epsilon}: \Delta\to \{\pm 1\}$ be a character. The following conditions are equivalent \medskip \noindent (i) ${\mathscr L}_{{\epsilon}}(\pi, \psi) = 0$. \medskip \noindent (ii) For either $n=d$ or $n=d+1$ the $\widetilde{E}$-module ${\mathbb H}_{\varpi-\ad}^n(X^{\an}; \widetilde{E}(\Theta_{\psi}), E)_{\pi, {\epsilon}}$ is free of rank $1$. \end{prop} Here the $\widetilde{E}$-module structure on ${\mathbb H}_{\varpi-\ad}^n(X^{\an}; \widetilde{E}(\Theta_{\psi}), E)$ is induced by the $\widetilde{E}$-module structure on $\widetilde{E}(\Theta_{\psi})$. \begin{proof} Clearly, (i) is equivalent to the vanishing of the homomorphism \begin{equation} \label{linv4} (\delta_{\psi})_{\pi, {\epsilon}}: {\mathcal H}^d(\St_{G_{{\mathfrak p}}, \cont}({\mathcal O}))_{\pi, {\epsilon}}\longrightarrow {\mathcal H}^{d+1}({\mathcal O}(0))_{\pi, {\epsilon}}. \end{equation} Consider the long exact sequence \eqref{extdelta} associated to \eqref{extstein7} \begin{eqnarray} \label{limEmSS4} & \ldots \longrightarrow {\mathcal H}^n(\Ind_{B, \cont}^G {\mathcal O}(0))\stackrel{\beta^*}{\longrightarrow}{\mathcal H}^n(\Ind_{B, \cont}^G {\widetilde{\mathcal O}}(\Theta_{\psi})) \stackrel{\alpha^*}{\longrightarrow} {\mathcal H}^n(\Ind_{B, \cont}^G {\mathcal O}(0)) \\ & \stackrel{\delta_{\Theta_{\psi}}}{\longrightarrow}{\mathcal H}^{n+1}(\Ind_{B, \cont}^G {\mathcal O}(0)) \longrightarrow\ldots \nonumber \end{eqnarray} Prop.\ \ref{prop:harder} (c) together with the fact that $\beta^* \circ \alpha^*$ is multiplication with ${\varepsilon}$ implies \begin{equation} \label{limEmSS5} \dim_E \left( {\mathcal H}^n(\Ind_{B, \cont}^G {\widetilde{\mathcal O}}(\Theta_{\psi}))_{\pi, {\epsilon}}/ {\varepsilon} {\mathcal H}^n(\Ind_{B, \cont}^G {\widetilde{\mathcal O}}(\Theta_{\psi}))_{\pi, {\epsilon}} \right) \, =\, 1, \end{equation} i.e.\ the $\widetilde{E}$-module ${\mathcal H}^n(\Ind_{B, \cont}^G {\widetilde{\mathcal O}}(\Theta_{\psi}))_{\pi, {\epsilon}}$ is generated by one element if $n=d,d+1$. 
It follows from standard functorial properties of connecting homomorphisms that the diagram \[ \begin{CD} {\mathcal H}^n(\Ind_{B, \cont}^G {\mathcal O}(0)) @>\delta_{\Theta_{\psi}} >> {\mathcal H}^{n+1}(\Ind_{B, \cont}^G {\mathcal O}(0))\\ @AAA @VVV\\ {\mathcal H}^n(\St_{G_{{\mathfrak p}}, \cont}({\mathcal O}))@> \delta_{\psi} >> {\mathcal H}^{n+1}({\mathcal O}(0)) \end{CD} \] commutes for each $n\in {\mathbb Z}$. Here the vertical maps are induced by the homomorphisms appearing in the sequence \eqref{steinbergcont}. If $n=d$ the vertical maps are -- after localization with respect to $(\pi, {\epsilon})$ -- isomorphisms of onedimensional $E$-vector spaces by Prop.\ \ref{prop:harder} (a), (b). Hence the vanishing of the map \eqref{linv4} is equivalent to the vanishing of \begin{equation} \label{connecan} (\delta_{\Theta_{\psi}})_{\pi, {\epsilon}}: {\mathcal H}^d(\Ind_{B, \cont}^G {\mathcal O}(0))_{\pi, {\epsilon}} \longrightarrow {\mathcal H}^{d+1}(\Ind_{B, \cont}^G {\mathcal O}(0))_{\pi, {\epsilon}}. \end{equation} Localizing \eqref{limEmSS4} at $(\pi, {\epsilon})$ and using Prop.\ \ref{prop:harder} (c) we deduce that \[ {\mathscr L}_{{\epsilon}}(\pi, \psi) = 0 \Leftrightarrow \dim_E {\mathcal H}^d(\Ind_{B, \cont}^G {\widetilde{\mathcal O}}(\Theta_{\psi}))_{\pi, {\epsilon}} =2 \Leftrightarrow \dim_E {\mathcal H}^{d+1}(\Ind_{B, \cont}^G {\widetilde{\mathcal O}}(\Theta_{\psi}))_{\pi, {\epsilon}} =2. \] Together with \eqref{limEmSS5} this yields the equivalence of the conditions (i) and (ii). \end{proof} \paragraph{Arithmetic ${\mathscr L}$-invariants} We recall the definition of the arithmetic (i.e.\ Mazur-Tate-Teitelbaum) ${\mathscr L}$-invariant ${\mathscr L}(V_{\pi}, \psi)$ (see \cite{mtt}, \cite{grstevens}). It is defined in terms of the $\varpi$-adic Galois representation $V_{\pi}$ attached to $\pi$. Recall (\cite{carayol}, \cite{wiles}, \cite{taylor}) that there exists a twodimensional $E$-vector space $V = V_{\pi}$ together with a continuous homomorphism $\rho= \rho_{\pi}: {\mathfrak G}:=\Gal(\overline{F}/F)\to \GL(V_{\pi})$ with the following properties: \medskip \noindent (i) $\rho$ is unramified outside the set of primes dividing $p{\mathfrak d}{\mathfrak n}$. \medskip \noindent (ii) If $\Frob_{{\mathfrak q}} \in \Gal(\overline{F}/F)$ is a Frobenius for a prime ${\mathfrak q}\nmid p{\mathfrak n}{\mathfrak d}$ then we have \begin{equation} \label{galrephmf} \Tr(\rho(\Frob_{{\mathfrak q}})) = {\lambda}_{\pi}(T_{{\mathfrak q}}), \qquad \det( \rho(\Frob_{{\mathfrak q}})) \, =\, \Norm({\mathfrak q}). \end{equation} \medskip \noindent (iii) Let ${\mathfrak G}_{{\mathfrak p}} \subseteq \Gal({\overline{\mathbb Q}}/F)$ be the decomposition group of a prime above ${\mathfrak p}$. Then there exists a short exact sequence of $E[{\mathfrak G}_{{\mathfrak p}}]$-modules \begin{equation} \label{tateperiod} 0 \longrightarrow E(1) \longrightarrow V \longrightarrow E(0) \longrightarrow 0 \end{equation} where $E(m):= E \otimes_{{\mathbb Z}_p} {\mathbb Z}_p(m)$, $m \in {\mathbb Z}$, i.e.\ $E(m)=E$ equipped with ${\mathfrak G}_{{\mathfrak p}}$-action via the $m$-th power of the cyclotomic character $\chi_{\cycl}: {\mathfrak G}_{{\mathfrak p}}\to \Gal(F_{{\mathfrak p}}(\mu_{p^{\infty}})/F_{{\mathfrak p}})\subseteq {\mathbb Z}_p^*$. The ${\mathfrak G}_{{\mathfrak p}}$-representation $V$ is semistable but not crystalline\footnote{Since $V$ can be realized as a direct summand of the $p$-adic Tate module $T_p(A)_E$ of an abelian variety $A/F$ with split multiplicative reduction at ${\mathfrak p}$.}.
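For orientation we recall the prototypical example from \cite{mtt} and \cite{grstevens} (it is not needed in the sequel): if $A/F$ is an elliptic curve with split multiplicative reduction at ${\mathfrak p}$ such that $V_{\pi}\cong T_p(A)_E$, and if $q\in F_{{\mathfrak p}}^*$ denotes the Tate parameter of $A$ at ${\mathfrak p}$, then the class of the extension \eqref{tateperiod} is the image of $q$ under the Kummer map $F_{{\mathfrak p}}^*\to H^1(F_{{\mathfrak p}}, E(1))$ and the ${\mathscr L}$-invariant introduced in Definition \ref{df:arithmlinv} below works out to
\[
{\mathscr L}(V_{\pi}, \psi)\,=\,\frac{\psi(q)}{v_{{\mathfrak p}}(q)}.
\]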
By \cite{ribet} (see also \cite{taylor2}, Prop.\ 3.1) the Galois representation $\rho$ is simple and uniquely determined by the properties (i) and (ii). We denote by \begin{equation*} \label{tateperiod2} \xi= \xi(\pi) \in \Ext_{E[{\mathfrak G}_{{\mathfrak p}}]}^1(E(0), E(1)) \,= \,H^1(F_{{\mathfrak p}}, E(1)) \end{equation*} the class of the extension \eqref{tateperiod}. Here $H^1(F_{{\mathfrak p}}, \wcdot )= H_{\cont}^1({\mathfrak G}_{{\mathfrak p}}, \wcdot)$ denotes continuous Galois cohomology. Also the local reciprocity map $\rec: F_{{\mathfrak p}}^*\to \Gal(\overline{F_{{\mathfrak p}}}/F_{{\mathfrak p}})^{\ab} \cong {\mathfrak G}_{{\mathfrak p}}^{\ab}$ induces an isomorphism \begin{equation*} \label{reciproc} H^1(F_{{\mathfrak p}} , E(0)) \, =\, \Hom_{\cont}({\mathfrak G}_{{\mathfrak p}}, E) \, \longrightarrow \, \Hom_{\cont}(F_{{\mathfrak p}}^*, E), \, \varphi\mapsto \varphi\circ \rec. \end{equation*} We denote the inverse map by \begin{equation*} \label{recproc2} \partial: \Hom_{\cont}(F_{{\mathfrak p}}^*, E)\, \longrightarrow \,H^1(F_{{\mathfrak p}}, E(0)), \,\, \psi \mapsto \partial(\psi). \end{equation*} Following (\cite{grstevens}, Def.\ 3.9) we define \begin{df} \label{df:arithmlinv} The ${\mathscr L}$-invariant of $V_{\pi}$ associated to $\psi\in \Hom_{\cont}(F_{{\mathfrak p}}^*, E)$ is the scalar ${\mathscr L}(V_{\pi}, \psi)\in E$ characterized by \[ {\mathscr L}(V_{\pi}, \psi)\, \partial(v_{{\mathfrak p}}) \cup \xi(\pi) \, =\, \partial(\psi) \cup \xi(\pi). \] \end{df} Here the cup-product is Tate's local duality pairing \[ H^1(F_{{\mathfrak p}}, E(0))\times H^1(F_{{\mathfrak p}}, E(1))\longrightarrow H^2(F_{{\mathfrak p}}, E(1))\cong E. \] The fact that $V$ is not crystalline implies $\partial(v_{{\mathfrak p}}) \cup \xi(\pi) \ne 0$. Indeed, since the image of $\xi$ under the projection $H^1(F_{{\mathfrak p}}, E(1))\to H_{/f}^1(F_{{\mathfrak p}}, E(1)) := H^1(F_{{\mathfrak p}}, E(1))/H_f^1(F_{{\mathfrak p}}, E(1))$ is non-trivial, its cup-product with $\partial(v_{{\mathfrak p}})\in H_{f}^1(F_{{\mathfrak p}}, E(0))$ is non-trivial as well. \begin{remark} \label{remark:extsteinfunc} \rm The map \[ {\mathscr L}(V_{\pi}, \wcdot): \Hom_{\cont}(F_{{\mathfrak p}}^*, E)\longrightarrow E, \,\, \psi \mapsto {\mathscr L}(V_{\pi}, \psi) \] is an $E$-linear functional with ${\mathscr L}(V_{\pi}, v_{{\mathfrak p}})=1$. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi \end{remark} Our main result is \begin{theo} \label{theo:linvmttautom} We have ${\mathscr L}_{{\epsilon}}(\pi, \psi)\, =\, {\mathscr L}(V_{\pi}, \psi)$ for every $\psi\in \Hom_{\cont}(F_{{\mathfrak p}}^*, E)$ and every character ${\epsilon}:\Delta\to \{\pm 1\}$.\footnote{After this work was completed the author was informed that Gehrmann and Rosso have obtained the same result; see \cite{gehrrosso}.} \end{theo} \begin{remark} \label{remark:mainthm} \rm The equality of ${\mathscr L}$-invariants appeared as a conjecture in \cite{greenberg} (in the case $D=M_2(F)$ and $F$ has class number 1) and in \cite{gms} (for $D$ a division algebra). Theorem \ref{theo:linvmttautom} has been known previously in the following cases: $F={\mathbb Q}$, $D$ a definite division algebra (see \cite{teitelbaum}), $F={\mathbb Q}$, $D=M_2({\mathbb Q})$ (by \cite{grstevens}, \cite{darmon} and \cite{dasgupta}), $F={\mathbb Q}$, $D$ a definite division algebra (by \cite{dasgreen} and \cite{lrv}) and for $F$ arbitrary, $D=M_2(F)$, $\psi=\log_p\circ \Norm_{F_{{\mathfrak p}}/{\mathbb Q}_p}$ and ${\epsilon}=1$ (by \cite{spiess1}).
Note though that even in the classical case $F={\mathbb Q}$ and $D=M_2({\mathbb Q})$ our proof is new as it does not use big Galois representations associated to Hida families. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi \end{remark} \section{Proof of the comparison theorem for $d=0$ and $d=1$} \label{section:shimuracurve} \paragraph{The case $d=0$} The proof of Thm.\ \ref{theo:linvmttautom} in this case can be deduced from $p$-adic uniformization of Shimura curves. If $F={\mathbb Q}$ and $\pi$ is defined over ${\mathbb Q}$ (i.e.\ if $V_{\pi}$ is the Galois representation associated to an elliptic curve over ${\mathbb Q}$) then this has been pointed out already in \cite{teitelbaum}. The proof in the general case does not require any new idea. Note that $D$ is a totally definite quaternion algebra, $X$ is a 0-dimensional ${\mathscr S}$-space and $\Delta =1$. Therefore there exists only one automorphic ${\mathscr L}$-invariant ${\mathscr L}(\pi, \psi)$ associated with $\pi$ (and $\psi$ and ${\mathfrak p}$) and we can (and will) drop ${\epsilon}$ from the notation. We fix an archimedean place $v$ of $F$. Let ${\overline{D}}$ be the quaternion algebra with $\Ram_{{\overline{D}}} =(\Ram_D \setminus \{v\})\cup \{{\mathfrak p}\}$. Let ${\overline{G}}$ denote the algebraic group corresponding to ${\overline{D}}^*/F^*$ and fix an Eichler order ${\mathcal O}_{{\overline{D}}}$ of level ${\mathfrak n}$ in ${\overline{D}}$. We choose isomorphisms ${\overline{D}}_{\mathfrak q}\cong D_{\mathfrak q}$ for every nonarchimedean place ${\mathfrak q}\ne {\mathfrak p}$ that respect the local Eichler orders (i.e.\ ${\mathcal O}_{{\overline{D}}, {\mathfrak q}}$ is mapped onto ${\mathcal O}_{D,{\mathfrak q}}$), so that we can (and will) identify the groups $G({\mathbb A}_f^{{\mathfrak p}})$ and ${\overline{G}}({\mathbb A}_f^{{\mathfrak p}})$. Let ${\overline{K}}_v$ be a maximal compact open subgroup of ${\overline{G}}_v \cong \PGL_2({\mathbb R})$. For a level $K_f^{{\mathfrak p}} \subseteq G({\mathbb A}_f^{{\mathfrak p}})$ we let $Y = Y(K_f^{{\mathfrak p}})/F$ denote the Shimura curve of level ${\overline{G}}_{{\mathfrak p}} \times K_f^{{\mathfrak p}}$ associated to ${\overline{D}}$, i.e.\ the associated compact Riemann surface $Y^{\an}=Y({\mathbb C})$ is given by $Y^{\an}= {\overline{G}}(F) \backslash \left({\overline{G}}_v/{\overline{K}}_v\times {\overline{G}}({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}\right)$. We recall the ${\mathfrak p}$-adic uniformization of $Y$ due to \u{C}erednik \cite{cerednik} for $F={\mathbb Q}$ and to Boutot-Zink \cite{boutot-zink} for arbitrary $F$. Let ${\mathbb C}_{{\mathfrak p}} = \widehat{{\overline{F}}}_{{\mathfrak p}} = {\mathbb C}_p$ be the completion of the algebraic closure of $F_{{\mathfrak p}}$ and let $Y(K_f^{{\mathfrak p}})^{\rig}$ be the rigid analytic $F_{{\mathfrak p}}$-variety associated to $Y(K_f^{{\mathfrak p}})\otimes F_{{\mathfrak p}}$. For $K_f^{{\mathfrak p}}$ sufficiently small there exists a $\Gal({\mathbb C}_{{\mathfrak p}}/F_{{\mathfrak p}})$-equivariant isomorphism of rigid analytic $F_{{\mathfrak p}}$-varieties \begin{equation} \label{cerednik} Y(K_f^{{\mathfrak p}})^{\rig}\, =\, G(F) \backslash \left({\mathcal H}_{{\mathfrak p}} \times G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}\right) \end{equation} where ${\mathcal H}_{{\mathfrak p}} = {\mathbb P}^1_{/F_{{\mathfrak p}}}\setminus {\mathbb P}^1(F_{{\mathfrak p}})$ is the ${\mathfrak p}$-adic upper half plane.
Thus the curve $Y(K_f^{{\mathfrak p}})^{\rig}$ is a disjoint union of Mumford curves. This implies that the Jacobian $J= \Jac(Y(K_f^{{\mathfrak p}}))$ of $Y(K_f^{{\mathfrak p}})$ has split multiplicative reduction at ${\mathfrak p}$. Using the work of Manin and Drinfeld \cite{mandrin} we can deduce from the isomorphism \eqref{cerednik} an explicit description of the rigid analytic $F_{{\mathfrak p}}$-torus $J^{\rig}$ associated to $J\otimes F_{{\mathfrak p}}$ hence also of the Tate module $T_p(J) = H^1(Y(K_f^{{\mathfrak p}})_{{\overline{\mathbb Q}}}, {\mathbb Z}_p(1))$. We use here the reformulation in \cite{berggehr} and \cite{dasgupta} of the theorem of Manin-Drinfeld. Firstly, note that if $K_f^{{\mathfrak p}}$ is sufficiently small the ${\mathbb Z}$-module $H_1(G(F), C_c(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, {\mathbb Z}))$ is free of finite rank. Indeed, let $g_1, \ldots, g_h\in G({\mathbb A}_f^{{\mathfrak p}})$ be a system of representatives of $G(F)\backslash G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}$ and put $\Gamma_i = G(F) \cap g_i K_f^{{\mathfrak p}} g_i^{-1}$. If $K_f^{{\mathfrak p}}$ is small enough then the groups $\Gamma_1, \ldots, \Gamma_h$ are finitely generated and free hence \[ H_1(G(F), C_c(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, {\mathbb Z})) \, \cong \, \bigoplus_{i=1}^h H_1(\Gamma_i, {\mathbb Z})\, \cong\, \bigoplus_{i=1}^h \Gamma_i^{\ab} \] is free abelian of finite rank. Let ${\mathcal T}/F_{{\mathfrak p}}$ be the split algebraic torus with character group $H_1(G(F), C_c(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, {\mathbb Z}))$. Since the map \begin{equation*} \label{caph1} H^1(G(F), C(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*)) \longrightarrow \Hom(H_1(G(F), C_c(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, {\mathbb Z})), F_{{\mathfrak p}}^*) \end{equation*} induced by the $\cap$-product is an isomorphism, we can identify $H^1(G(F), C(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*))$ with the set of $F_{{\mathfrak p}}$-points of ${\mathcal T}$. The universal extension of the Steinberg representation \eqref{extsteinuniv} induces a homomorphism \begin{equation} \label{deltauniv} j: H^0(G(F), {\mathcal A}_{{\mathbb Z}}(\St_{G_{{\mathfrak p}}}({\mathbb Z}), K_f^{{\mathfrak p}}; {\mathbb Z})) \longrightarrow H^1(G(F), C(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*))\, =\, {\mathcal T}(F_{{\mathfrak p}}) \end{equation} as follows.
Firstly, we note that there exists a canonical map \begin{eqnarray} \label{deltauniv2} && {\mathcal A}_{{\mathbb Z}}(\St_{G_{{\mathfrak p}}}({\mathbb Z}), K_f^{{\mathfrak p}}; {\mathbb Z}) \, =\, \Hom(\St_{G_{{\mathfrak p}}}({\mathbb Z}), \Maps(G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}, {\mathbb Z}))\\ && \longrightarrow \, \prolimn \Hom(\St_{G_{{\mathfrak p}}}({\mathbb Z})\otimes F_{{\mathfrak p}}^*/U_{{\mathfrak p}}^{(n)}, \Maps(G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}, {\mathbb Z})\otimes F_{{\mathfrak p}}^*/U_{{\mathfrak p}}^{(n)}) \nonumber \\ && \longrightarrow \, \prolimn \Hom(\St_{G_{{\mathfrak p}}}(F_{{\mathfrak p}}^*/U_{{\mathfrak p}}^{(n)}), \Maps(G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*/U_{{\mathfrak p}}^{(n)})) \nonumber \\ && \longrightarrow \, \Hom(\prolimn \St_{G_{{\mathfrak p}}}(F_{{\mathfrak p}}^*/U_{{\mathfrak p}}^{(n)}), \prolimn \Maps(G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*/U_{{\mathfrak p}}^{(n)})) \nonumber \\ && = \, \Hom(\St_{G_{{\mathfrak p}}, \cont}(F_{{\mathfrak p}}^*), C(G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*))\nonumber \end{eqnarray} (for the second map note that $\St_{G_{{\mathfrak p}}}({\mathbb Z})\otimes A= \St_{G_{{\mathfrak p}}}(A)$ for any discrete abelian group $A$). The extension \eqref{extsteinuniv} yields a short exact sequence \begin{eqnarray*} && 0 \longrightarrow C(G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*) \longrightarrow \Hom({\mathscr E}_{\univ}, C(G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*))\\ && \hspace{2cm} \longrightarrow \Hom(\St_{G_{{\mathfrak p}}, \cont}(F_{{\mathfrak p}}^*), C(G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*))\longrightarrow 0 \end{eqnarray*} and \eqref{deltauniv} is defined as the composite \begin{eqnarray} \label{deltauniv3} && H^0(G(F), {\mathcal A}_{{\mathbb Z}}(\St_{G_{{\mathfrak p}}}({\mathbb Z}), K_f^{{\mathfrak p}}; {\mathbb Z})) \longrightarrow H^0(G(F), \Hom(\St_{G_{{\mathfrak p}}, \cont}(F_{{\mathfrak p}}^*), C(G({\mathbb A}_f^{{\mathfrak p}})/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*)))\\ && \hspace{2cm} \longrightarrow H^1(G(F), C(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*))\nonumber \end{eqnarray} where the first map is induced by \eqref{deltauniv2} and the second is the connecting homomorphism associated to the above short exact sequence. By (\cite{berggehr}, Thm.\ 4.9) and (\cite{dasgupta}, Thm.\ 2.5) we have an isomorphism of rigid analytic $F_{{\mathfrak p}}$-tori \begin{equation} \label{mandrin1} J^{\rig}\, =\, \Jac(Y(K_f^{{\mathfrak p}}))^{\rig}\, \cong \, {\mathcal T}^{\rig}/j(\Lambda) \end{equation} where $\Lambda = H^0(G(F), {\mathcal A}_{{\mathbb Z}}(\St_{G_{{\mathfrak p}}}({\mathbb Z}), K_f^{{\mathfrak p}}; {\mathbb Z}))$. We fix $\psi\in \Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O})$. As in (\cite{grstevens}, \S 3) we can associate an ${\mathscr L}$-invariant ${\mathscr L}(J^{\rig}, \psi)$ to $J^{\rig}$ and $\psi$ as follows. The homomorphisms $v_{{\mathfrak p}}, \psi$ induce homomorphisms \[ (v_{{\mathfrak p}})_*, \psi_* : H^1(G(F), C(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, F_{{\mathfrak p}}^*))_E \to H^1(G(F), C(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, {\mathcal O}))_E.
\] The ${\mathscr L}$-invariant ${\mathscr L}(J^{\rig}, \psi)$ is defined as the endomorphism \begin{equation} \label{linvabvar} {\mathscr L}(J^{\rig}, \psi): H^0(G(F), {\mathcal A}_{{\mathbb Z}}(\St_{G_{{\mathfrak p}}}({\mathbb Z}), K_f^{{\mathfrak p}}, {\mathbb Z}))_E \longrightarrow H^0(G(F), {\mathcal A}_{{\mathbb Z}}(\St_{G_{{\mathfrak p}}}({\mathbb Z}), K_f^{{\mathfrak p}}, {\mathbb Z}))_E \end{equation} such that $\psi_* \circ j_E \circ {\mathscr L}^{\rig}(\psi) = (v_{{\mathfrak p}})_* \circ j_E$, where we abbreviate ${\mathscr L}^{\rig}(\psi) := {\mathscr L}(J^{\rig}, \psi)$. Note that for every $\psi\in \Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O})$ the map $\psi_* \circ j_E$ can be identified with the connecting homomorphism $\delta_{\psi}^0$ in \eqref{extdeltapsi}. So by Remark \ref{remark:ramanunjan} the ${\mathscr L}$-invariant \eqref{linvabvar} can be defined even if $K_f^{{\mathfrak p}}$ is not small enough. In particular for $K_f^{{\mathfrak p}} = K_0({\mathfrak n})^{{\mathfrak p}}$ the endomorphism \eqref{linvabvar} is Hecke equivariant and localizing at $\pi$ yields \[ {\mathscr L}(J^{\rig}, \psi)_{\pi} \, =\, {\mathscr L}(\pi, \psi). \] On the other hand \eqref{mandrin1} implies that there exists a short exact sequence of ${\mathfrak G}_{{\mathfrak p}}$-modules \begin{equation} \label{cerednikext} 0 \longrightarrow T_p({\mathcal T}) = {\Lambda}'_E(1) \longrightarrow H^1(Y(K_f^{{\mathfrak p}})_{{\overline{\mathbb Q}}}, {\mathbb Z}_p(1))_E \longrightarrow {\Lambda}_E(0)\longrightarrow 0 \end{equation} where ${\Lambda}' = \Hom({\mathbb G}_m, {\mathcal T}) = H^1(G(F), C(G({\mathbb A}_f)/K_f^{{\mathfrak p}}, {\mathbb Z}))$. Let $\xi\in \Ext_{E[{\mathfrak G}_{{\mathfrak p}}]}^1({\Lambda}_E(0), {\Lambda}'_E(1))$ denote the class of \eqref{cerednikext}. By applying the Yoneda pairing we obtain a map \[ \wcdot \cdot \xi: \Lambda \otimes H^1(F_{{\mathfrak p}}, E(0)) = \Ext_{E[{\mathfrak G}_{{\mathfrak p}}]}^1(E(0), {\Lambda}_E(0)) \longrightarrow \Ext_{E[{\mathfrak G}_{{\mathfrak p}}]}^2(E(0), {\Lambda}'_E(1)) \cong {\Lambda}'_E. \] By (\cite{grstevens}, Thm.\ 3.11) the diagram \begin{equation} \label{lriglgal} {\xymatrix@+0.5pc{\Lambda_E\ar[dd]_{{\mathscr L}^{\rig}(\psi)}\ar[drr]^{{\lambda}\mapsto ({\lambda} \otimes \partial(v_{{\mathfrak p}}))\cdot \xi}\\ && {\Lambda}'_E\\ {\Lambda}_E\ar[urr]^{{\lambda}\mapsto ({\lambda} \otimes \partial(\psi))\cdot \xi} }} \end{equation} commutes. If $K_0({\mathfrak n})^{{\mathfrak p}}$ is sufficiently small the sequence \eqref{cerednikext} localized at $\pi$ can be identified with \eqref{tateperiod}. Hence the commutativity of \eqref{lriglgal} implies \[ {\mathscr L}(\pi, \psi)\, =\, {\mathscr L}(J^{\rig}, \psi)_{\pi} \, =\, {\mathscr L}(V_{\pi}, \psi). \] In the general case this equality still holds as can be seen by choosing an open normal subgroup $K_f^{{\mathfrak p}}$ of $K_0({\mathfrak n})^{{\mathfrak p}}$ that is sufficiently small and passing in \eqref{cerednikext} and \eqref{lriglgal} to $K_0({\mathfrak n})^{{\mathfrak p}}/K_f^{{\mathfrak p}}$-invariants. \paragraph{The case $d=1$} We first assume that $D$ is a division algebra. The initial step is to show that the $E$-linear functional \eqref{linv3} does not depend on the character ${\epsilon}: \Delta\to \{\pm 1\}$. By Lemma \ref{lemma:linvlin} (a) it suffices to see that for $\psi\in \Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O})$ the vanishing of ${\mathscr L}_{{\epsilon}}(\pi, \psi)$ does not depend on ${\epsilon}$.
We fix $\psi$ and consider the long exact sequence \begin{eqnarray*} \label{banreppsi} && \ldots \longrightarrow {\mathbb H}_{\varpi-\ad}^n(X_{{\overline{\mathbb Q}}}; E(0), E) \longrightarrow {\mathbb H}_{\varpi-\ad}^n(X_{{\overline{\mathbb Q}}}; \widetilde{E}(\Theta_{\psi}), E) \longrightarrow {\mathbb H}_{\varpi-\ad}^n(X_{{\overline{\mathbb Q}}}; E(0), E)\\ && \hspace{3cm} \stackrel{\delta_{\Theta_{\psi}}^n}{\longrightarrow}{\mathbb H}_{\varpi-\ad}^{n+1}(X_{{\overline{\mathbb Q}}}; E(0), E)\longrightarrow \ldots \nonumber \end{eqnarray*} associated to the short exact sequence $0\to E(0)\to \widetilde{E}(\Theta_{\psi}) \to E(0)\to 0$ of admissible $E$-Banach space representations of $T_{{\mathfrak p}}$ (i.e.\ the sequence \eqref{extstein7} tensored with $\otimes_{{\mathcal O}} E$). By \eqref{derham5} it can be identified with the sequence \eqref{limEmSS4} as a sequence of Hecke modules. The localization ${\mathbb H}_{\varpi-\ad}^n(X_{{\overline{\mathbb Q}}}; E(0), E)_{\pi}$ for $n=1,2$ can be identified with the ${\mathfrak G}=\Gal({\overline{\mathbb Q}}/F)$-representation $V:=V_{\pi}$. Indeed, by Prop.\ \ref{prop:harder} and (\cite{spiess2}, Thm.\ 5.26) ${\mathbb H}_{\varpi-\ad}^n(X_{{\overline{\mathbb Q}}}; E(0), E)_{\pi}$ is a two-dimensional $E$-vector space equipped with a continuous $E$-linear ${\mathfrak G}$-action unramified outside the set of primes dividing $p{\mathfrak d}{\mathfrak n}$ such that the Eichler-Shimura relations \eqref{galrephmf} hold. The uniqueness of $V$ thus implies \[ {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; E(0), E)_{\pi}\, \cong \, V\, \cong \,{\mathbb H}_{\varpi-\ad}^2(X_{{\overline{\mathbb Q}}}; E(0), E)_{\pi}. \] By using the fact that the $E[{\mathfrak G}]$-module $V$ is simple we see that \begin{equation} \label{connecet} (\delta_{\Theta_{\psi}})_{\pi}: {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; E(0), E)_{\pi} \longrightarrow {\mathbb H}_{\varpi-\ad}^2(X_{{\overline{\mathbb Q}}}; E(0), E)_{\pi} \end{equation} is either injective or $=0$. Therefore (in the case $d=1$) the vanishing of \eqref{connecet}, hence also of \eqref{connecan} and of ${\mathscr L}_{{\epsilon}}(\pi, \psi)$, is independent of the character ${\epsilon}: \Delta\to \{\pm 1\}$, so we will drop ${\epsilon}$ from the notation. Since ${\mathscr L}(\pi, v_{{\mathfrak p}}) = 1= {\mathscr L}(V, v_{{\mathfrak p}})$ and ${\mathscr L}(\pi, \wcdot)$, ${\mathscr L}(V, \wcdot)$ are $E$-linear functionals, to finish the proof it suffices to show that for $\psi\in\Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O})$ with ${\mathscr L}(\pi, \psi)=0$ we also have ${\mathscr L}(V, \psi)=0$, i.e.\ $\partial(\psi) \cup \xi(V) =0$. Put $\widetilde{V} = {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; \widetilde{E}(\Theta_{\psi}), E)_{\pi}$, $\widetilde{V}^0 = {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; \widetilde{E}(\Theta_{\psi}), E)^0_{\pi}$, $\widetilde{V}^{\et}= {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; \widetilde{E}(\Theta_{\psi}), E)^{\et}_{\pi}$, $V^0= {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; E(0), E)^0_{\pi}$, $V^{\et}:= {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; E(0), E)^{\et}_{\pi}$ so that we obtain a diagram \begin{equation} \label{deformgalrep} \begin{CD} @. 0 @. 0 @. 0 @.\\ @. @VVV @VVV @VVV @.\\ 0@>>> V^0 @>>> V @>>> V^{\et} @>>> 0\\ @. @VVV @VVV @VVV @.\\ 0@>>> \widetilde{V}^0 @>>> \widetilde{V} @>>> \widetilde{V}^{\et} @>>> 0\\ @. @VVV @VVV @VVV @.\\ 0@>>> V^0 @>>> V @>>> V^{\et} @>>> 0\\ @. @VVV @VVV @VVV @.\\ @. 0 @.
0 @. 0 @.\\ \end{CD} \end{equation} By (\cite{spiess2}, Thm.\ 5.26) the rows are exact and we have $V^0=E(1)$, $V^{\et} = E(0)$ (i.e.\ the first and third rows can be identified with the sequence \eqref{tateperiod}). Moreover ${\mathfrak G}_{{\mathfrak p}}$ acts on the $\widetilde{E}$-modules $\widetilde{V}^0$ and $\widetilde{V}^{\et}$ via the characters $\widetilde{\chi}\cdot \chi_{\cycl}$ and $\widetilde{\chi}^{-1}$ respectively. Here $\widetilde{\chi}: {\mathfrak G}_{{\mathfrak p}} \to {\widetilde{\mathcal O}}^*$ denotes the character of ${\mathfrak G}_{{\mathfrak p}}$ that corresponds to $\Theta_{\psi}$ under the reciprocity map, i.e.\ we have $\widetilde{\chi}(\sigma)= 1+ (\partial\psi)(\sigma){\epsilon}$ for every $\sigma\in {\mathfrak G}_{{\mathfrak p}}$. By Prop.\ \ref{prop:linvinlin2} the $\widetilde{E}$-module $\widetilde{V}$ is free of rank $2$ and the middle column of \eqref{deformgalrep} is exact. We claim that the first and third columns are exact as well. If we view them as complexes of $\widetilde{E}[{\mathfrak G}_{{\mathfrak p}}]$-modules denoted by $C^1_{{\bullet}}$ and $C^3_{{\bullet}}$ then the exactness of the middle column implies $H_n(C^3_{{\bullet}})\cong H_{n-1}(C^1_{{\bullet}})$ for every $n$. However the ${\mathfrak G}_{{\mathfrak p}}$-actions on $H_{n-1}(C^1_{{\bullet}})$ and $H_n(C^3_{{\bullet}})$ are not compatible: on the first group ${\mathfrak G}_{{\mathfrak p}}$ acts via the character $\widetilde{\chi}\cdot \chi_{\cycl}$ and on the second via $\widetilde{\chi}^{-1}$. It follows $H_n(C^3_{{\bullet}})=0= H_{n-1}(C^1_{{\bullet}})$ for every $n$, i.e.\ the columns of \eqref{deformgalrep} are exact. In particular we see that $\widetilde{V}^0$ and $\widetilde{V}^{\et}$ are free of rank 1 as $\widetilde{E}$-modules hence $\widetilde{V}^0= \widetilde{E}(\widetilde{\chi}\cdot \chi_{\cycl})$ and $\widetilde{V}^{\et}= \widetilde{E}(\widetilde{\chi}^{-1})$. By twisting the ${\mathfrak G}_{{\mathfrak p}}$-action on each $\widetilde{E}[{\mathfrak G}_{{\mathfrak p}}]$-module in \eqref{deformgalrep} with $\widetilde{\chi}^{-1}$ we see that the first and third rows remain the same whereas the middle row becomes \[ 0 \longrightarrow \widetilde{E}(1)\longrightarrow \widetilde{V}' := \widetilde{V}(\widetilde{\chi}^{-1}) \longrightarrow \widetilde{E}(\widetilde{\chi}^{-2})\longrightarrow 0. \] Now we follow the argument in (\cite{grstevens}, proof of Thm.\ 3.14). Consider the diagram of Galois cohomology groups \[ \begin{CD} @.@. H^0(F_{{\mathfrak p}}, E(0))@.\\ @.@.@VV\delta V@. \\ H^1(F_{{\mathfrak p}}, E(1)) @>>> H^1(F_{{\mathfrak p}}, V) @>> > H^1(F_{{\mathfrak p}}, E(0)) @>> \delta' > H^2(F_{{\mathfrak p}}, E(1)) \\ @VVV@VVV@VV{\iota}_2 V@VV{\iota}_1 V\\ H^1(F_{{\mathfrak p}}, \widetilde{E}(1)) @>>> H^1(F_{{\mathfrak p}}, \widetilde{V}') @>>> H^1(F_{{\mathfrak p}}, \widetilde{E}(0)) @>>> H^2(F_{{\mathfrak p}}, \widetilde{E}(1)) \\ \end{CD} \] induced by diagram \eqref{deformgalrep} (twisted by $\widetilde{\chi}^{-1}$). A simple computation shows that the image of $1\in E=H^0(F_{{\mathfrak p}}, E(0))$ under $\delta$ is equal to $-2 \partial(\psi)$. Since $H^2(F_{{\mathfrak p}}, {\mathbb Q}_p(1))={\mathbb Q}_p$ the map ${\iota}_1$ can be identified with $E \to \widetilde{E}, x\mapsto x{\epsilon}$ hence it is injective. Since ${\iota}_2\circ \delta=0$ we also have ${\iota}_1\circ \delta'\circ \delta =0$ hence $\delta'\circ \delta=0$.
It follows \[ 0\, =\, \delta'\circ \delta(1) \, =\, -2 \,\delta'(\partial(\psi)) \, =\, \pm 2 \,\xi\cup \partial(\psi) \] and therefore $\xi\cup \partial(\psi)=0$. Finally, it remains to consider the case $F={\mathbb Q}$ and $D= M_2(F)$. The above proof can be easily adapted to this case using (\cite{spiess2}, Thm.\ 5.34) instead of (\cite{spiess2}, Thm.\ 5.26) and by noting that \begin{eqnarray} \label{h1modcurve6} && {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; E(0), E)_{\pi}\, =\, {\mathbb H}_{\varpi-\ad, !}^1(X_{{\overline{\mathbb Q}}}; E(0), E)_{\pi},\\ && {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; \widetilde{E}(\Theta_{\psi}), E)_{\pi}\, =\, {\mathbb H}_{\varpi-\ad,!}^1(X_{{\overline{\mathbb Q}}}; \widetilde{E}(\Theta_{\psi}), E)_{\pi}.\nonumber \end{eqnarray} Indeed, by (\cite{spiess2}, Prop.\ 5.30) all terms of the sequence \[ 0 \longrightarrow \Hom_{E[T_{{\mathfrak p}}]}(E(0) ,C^1_E)_{\pi} \longrightarrow \Hom_{E[T_{{\mathfrak p}}]}(\widetilde{E}(\Theta_{\psi}) ,C^1_E)_{\pi} \longrightarrow \Hom_{E[T_{{\mathfrak p}}]}(E(0) ,C^1_E)_{\pi} \] vanish. Therefore by localizing the sequence (see \cite{spiess2}, \S 5.7) \begin{equation*} \label{h1modcurve3} 0\longrightarrow {\mathbb H}_{\varpi-\ad, !}^1(X_{{\overline{\mathbb Q}}}; A(\chi), E) \longrightarrow {\mathbb H}_{\varpi-\ad}^1(X_{{\overline{\mathbb Q}}}; A(\chi), E) \longrightarrow \Hom_{E[T_{{\mathfrak p}}]}(A(\chi^{-1}),C^1_E). \end{equation*} at $\pi$ for $A(\chi) = E(0)$ and $A(\chi) = \widetilde{E}(\Theta_{\psi})$ yields \eqref{h1modcurve6}. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi \section{Jacquet-Langlands functoriality for automorphic ${\mathscr L}$-invariants} \label{section:mainthm} In this section we assume that $d\ge 2$. Up to isomorphism there exists a unique quaternion algebra ${\overline{D}}$ with \begin{equation} \label{prinunits} \Ram_{{\overline{D}}} \, = \, \left\{\begin{array}{cc} \Ram_D\cup \,\Sigma & \mbox{if $d$ is even;}\\ \Ram_D\cup \{\sigma_2, \ldots, \sigma_d\} & \mbox{if $d$ is odd.} \end{array}\right. \end{equation} Let $\bar{d}$ be the number of archimedean places $v$ that split ${\overline{D}}$ (thus $\bar{d}\in \{0,1\}$ and $\bar{d}\equiv d \!\!\!\mod 2$). We denote the algebraic group corresponding to ${\overline{D}}^*/F^*$ by ${\overline{G}}$ and put $\barDelta = {\overline{G}}(F)/{\overline{G}}(F)_+ \cong \{\pm 1\}^{\bar{d}}$. Let ${\mathcal O}_{{\overline{D}}}$ be an Eichler order of level ${\mathfrak n}$ in ${\overline{D}}$. We choose an isomorphism ${\overline{D}}_v\cong D_v$ for every nonarchimedean place $v$ that respects the local Eichler orders (i.e.\ ${\mathcal O}_{{\overline{D}}, v}$ is mapped onto ${\mathcal O}_{D,v}$). This allows us to view the level $K_0({\mathfrak n})^{{\mathfrak p}}\subseteq G({\mathbb A}_f^{{\mathfrak p}})$ as a level in ${\overline{G}}({\mathbb A}_f^{{\mathfrak p}})$ and also to view ${\mathscr K}$ as a subset of the set of compact open subgroups of ${\overline{G}}_{{\mathfrak p}}$. The associated ${\mathscr S}$-space (resp.\ ${\mathscr S}$-scheme if $\bar{d}= 1$) will be abbreviated by ${\overline{X}}=X^{{\overline{D}}}_0({\mathfrak n})^{{\mathfrak p}}$. 
By the Jacquet-Langlands correspondence there exists an automorphic representation $\JL(\pi) = \pi' = \otimes_v \, \pi'_v$ such that $\pi'_v\cong \pi_v$ for all places $v$ of $F$ where ${\overline{D}}_v\cong D_v$ and so that $\pi'_v$ is the trivial representation of ${\overline{G}}_v$ for the places $v$ where ${\overline{D}}_v\not\cong D_v$ (i.e.\ for the places in $\Sigma$ resp.\ in $\{\sigma_2, \ldots, \sigma_d\}$ if $\bar{d}=0$ resp.\ if $\bar{d} =1$). The main result in this section is \begin{theo} \label{theo:linvjl} We have ${\mathscr L}_{{\epsilon}}(\pi, \psi) = {\mathscr L}(\pi', \psi)$ for every $\psi\in \Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O})$ and every character ${\epsilon}: \Delta\to \{\pm 1\}$. \end{theo} In particular we see that the automorphic ${\mathscr L}$-invariant ${\mathscr L}_{{\epsilon}}(\pi, \psi)$ is independent of the choice of the character ${\epsilon}$. In the previous section we have shown that Theorem \ref{theo:linvmttautom} holds for $\pi'$ (since $\bar{d} \le 1$). Since the $\varpi$-adic Galois representations attached to $\pi$ and $\pi'$ are the same this implies ${\mathscr L}_{{\epsilon}}(\pi, \psi)= {\mathscr L}(V_{\pi}, \psi)$ thus completing the proof of Thm.\ \ref{theo:linvmttautom}. Theorem \ref{theo:linvjl} follows from the following seemingly weaker assertion \begin{lemma} \label{lemma:linvkey} Let $\psi\in \Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O})$ and let $V= \ker(\psi)\cap U_{{\mathfrak p}}$. Assume that ${\mathscr L}(\pi', \psi) =0$ and $U_{{\mathfrak p}}/V\cong {\mathbb Z}_p$. Then we have ${\mathscr L}_{{\epsilon}}(\pi, \psi)=0$ for every character ${\epsilon}: \Delta\to \{\pm 1\}$. \end{lemma} \begin{proof}[Proof of Thm.\ \ref{theo:linvjl} assuming Lemma \ref{lemma:linvkey}] We fix a character ${\epsilon}: \Delta\to \{\pm 1\}$. Note that the $E$-vector space $\Hom_{\cont}(F_{{\mathfrak p}}^*, E)$ has dimension $r+1$ where $r:= [F_{{\mathfrak p}}:{\mathbb Q}_p] = \dim(U_{{\mathfrak p}}^{(1)}\otimes_{{\mathbb Z}_p}{\mathbb Q}_p)$. We choose a specific basis of $\Hom_{\cont}(F_{{\mathfrak p}}^*, E)$ as follows. Let $V_1, \ldots, V_r$ be closed subgroups of $U_{{\mathfrak p}}$ with $U_{{\mathfrak p}}/V_i\cong {\mathbb Z}_p$ and so that $V_1\cap\ldots \cap V_r$ is the torsion subgroup of $U_{{\mathfrak p}}$. We can find $\psi_1', \ldots, \psi_r' \in \Hom_{\cont}(F_{{\mathfrak p}}^*, {\mathcal O})$ so that $\Ker(\psi_i') \cap U_{{\mathfrak p}}= V_i$ for $i=1, \ldots, r$. Then $\psi_1', \ldots, \psi_r', v_{{\mathfrak p}}$ form a basis of $\Hom_{\cont}(F_{{\mathfrak p}}^*, E)$. If we put $\psi_i = \psi_i' - {\mathscr L}(\pi', \psi_i')\cdot v_{{\mathfrak p}}$ then we have $\Ker(\psi_i) \cap U_{{\mathfrak p}}= \Ker(\psi_i') \cap U_{{\mathfrak p}}= V_i$ and \[ {\mathscr L}(\pi', \psi_i) \, =\, {\mathscr L}(\pi', \psi_i') - {\mathscr L}(\pi', \psi_i') \cdot {\mathscr L}(\pi', v_{{\mathfrak p}}) \, =\, 0 \] for $i=1, \ldots, r$ by Lemma \ref{lemma:linvlin}. Therefore $\psi_1, \ldots, \psi_r, v_{{\mathfrak p}}$ form a basis of $\Hom_{\cont}(F_{{\mathfrak p}}^*, E)$ so that ${\mathscr L}_{{\epsilon}}(\pi, \psi_i) =0 = {\mathscr L}(\pi', \psi_i)$ for $i=1, \ldots, r$ and ${\mathscr L}_{{\epsilon}}(\pi, v_{{\mathfrak p}}) = {\mathscr L}(\pi', v_{{\mathfrak p}}) = 1$ by Lemmas \ref{lemma:linvlin} (a) and \ref{lemma:linvkey}. Using the fact that both types of ${\mathscr L}$-invariants are linear as functions of $\psi$ we obtain ${\mathscr L}_{{\epsilon}}(\pi, \psi) = {\mathscr L}(\pi', \psi)$ for every $\psi\in\Hom_{\cont}(F_{{\mathfrak p}}^*, E)$.
\end{proof} The proof of Lemma \ref{lemma:linvkey} requires some preparation. We put ${\overline{T}}_{{\mathfrak p}} = T_{{\mathfrak p}}/V$ and consider the Iwasawa algebra \begin{equation} \label{iwasawa} {\Lambda} \, =\, {\mathcal O}[\![U_{{\mathfrak p}}/V]\!] \, =\, \prolim_n {\mathcal O}[U_{{\mathfrak p}}/V U_{{\mathfrak p}}^{(n)}]. \end{equation} The augmentation ideal of ${\Lambda}_E$, i.e.\ the kernel of the canonical projection $\aug: {\Lambda}_E\to E$, will be denoted by ${\mathfrak a}$. Since $U_{{\mathfrak p}}/V\cong {\mathbb Z}_p$ we see that ${\Lambda}$ is non-canonically isomorphic to the power series ring ${\mathcal O}[\![T]\!]$ in one variable over ${\mathcal O}$. In particular we note that ${\Lambda}_E$ is a principal ideal domain. We fix a character $\barep: \barDelta\to \{\pm 1\}$ and consider the following $E[{\overline{T}}_{{\mathfrak p}}]$-module \begin{equation} \label{modforms1} {\mathcal V}\, =\, \fOrd_{\varpi-\ad}^{V, \bar{d}}({\overline{X}}^{\an}, {\mathcal O})_{E, \barep}. \end{equation} By (\cite{spiess2}, Prop.\ 5.19 (b) and Prop.\ 5.20) the vector space ${\mathcal V}$ is an admissible $E$-Banach space representation of ${\overline{T}}_{{\mathfrak p}}$ and its dual ${\mathcal M} ={\mathcal D}({\mathcal V})\in \Mod_E^{\fgaug}({\overline{T}}_{{\mathfrak p}})$ is free and of finite rank as a ${\Lambda}_E$-module. The ${\Lambda}_{{\mathcal O}}({\overline{T}}_{{\mathfrak p}})_E$-module ${\mathcal V}$ is equipped additionally with an action of the Hecke algebra ${\mathbb T}_E$ commuting with the ${\Lambda}_{{\mathcal O}}({\overline{T}}_{{\mathfrak p}})_E$-action. Let \begin{equation} \label{heckeimage} R\, =\, \Image({\mathbb T}_E\otimes_E {\Lambda}_{{\mathcal O}}({\overline{T}}_{{\mathfrak p}})_E \longrightarrow \End_{\Ban_E^{\adm}({\overline{T}}_{{\mathfrak p}})}({\mathcal V})). \end{equation} If $r_0=\rank_{{\Lambda}_E} {\mathcal M}$ then using the duality functor \eqref{pontrajagin4} yields \[ R\, \subseteq \,\End_{\Ban_E^{\adm}({\overline{T}}_{{\mathfrak p}})}({\mathcal V}) \, \cong \, \End_{\Mod_{E}^{\fgaug}({\overline{T}}_{{\mathfrak p}})}({\mathcal M})\subseteq \End_{{\Lambda}_E}({\mathcal M}) \, \cong \, {\Lambda}_E^{r_0^2}. \] Hence $R$ is a finite and torsion-free, hence flat ${\Lambda}_E$-algebra. We put ${\mathfrak X} = \Spec R$ and let $q: {\mathfrak X}\to \Spec {\Lambda}_E$ be the induced finite flat morphism. We denote by \begin{equation} \label{heckeimage2} {\lambda}: {\mathbb T}_E \longrightarrow R \end{equation} the canonical $E$-algebra homomorphism and by \begin{equation} \label{heckeimage3} \chi: F_{{\mathfrak p}}^* \, \cong \,T_{{\mathfrak p}} \longrightarrow {\overline{T}}_{{\mathfrak p}} \longrightarrow R^* \end{equation} the canonical (continuous) character. Let $|{\mathfrak X}|$ be the set of closed points of ${\mathfrak X}$. For $x\in |{\mathfrak X}|$ with residue field $k(x)$ we put \[ \begin{CD} {\lambda}_x : =\, {\lambda} \!\! \mod x: {\mathbb T}_E@>{\lambda} >> R @> \!\mod x >> k(x) \end{CD} \] and \[ \begin{CD} \chi_x : =\, \chi \!\! \mod x: F_{{\mathfrak p}}^*@>\chi >> R^* @> \! \mod x >> k(x)^*. \end{CD} \] \begin{lemma} \label{lemma:piinfx} (a) There exists a point $x_0={\mathfrak m}_0\in |{\mathfrak X}|$ that is associated to $\pi$, i.e.\ we have $\chi_{x_0} = 1$, ${\lambda}_{x_0}={\lambda}_{\pi}$ and $k(x_0) = E$. The point $x_0$ lies above ${\mathfrak a}$ and ${\mathfrak X}\to \Spec {\Lambda}_E$ is {\'e}tale in $x_0$. Moreover the $R_{{\mathfrak m}_0}$-module ${\mathcal M}_{{\mathfrak m}_0}$ is free of rank 1.
\noindent (b) We have ${\mathcal V}[{\mathfrak m}_0^2]\, \cong \, \widetilde{E}(\Theta_{\psi})$ as $E$-Banach space representations of ${\overline{T}}_{{\mathfrak p}}$. \end{lemma} \begin{proof} (a) Since the anti-equivalence of categories ${\mathcal D}: \Ban_E^{\adm}({\overline{T}}_{{\mathfrak p}})\to \Mod_{E}^{\fgaug}({\overline{T}}_{{\mathfrak p}})$ is ${\Lambda}_{{\mathcal O}}({\overline{T}}_{{\mathfrak p}})_E$-linear we have \[ {\mathcal M}/{\mathfrak a} {\mathcal M} \, =\, {\mathcal M}\otimes_{{\Lambda}_E, \aug} E \, \cong \, {\mathcal D}({\mathcal V}[{\mathfrak a}]). \] By (\cite{spiess2}, Prop.\ 5.19 (e)) we have \begin{eqnarray*} \fOrd_{\varpi-\ad}^{V, \bar{d}}({\overline{X}}^{\an}, {\mathcal O})_E[{\mathfrak a}] & = & \Hom_{E[{\overline{T}}_{{\mathfrak p}}]}(\cInd_{{\overline{T}}_{{\mathfrak p}}^0}^{{\overline{T}}_{{\mathfrak p}}} E, \fOrd_{\varpi-\ad}^{V, \bar{d}}({\overline{X}}^{\an}, {\mathcal O})_E)\\ &= &{\mathbb H}_{\varpi-\ad}^{\bar{d}}({\overline{X}}^{\an}; \cInd_{{\overline{T}}_{{\mathfrak p}}^0}^{{\overline{T}}_{{\mathfrak p}}} E, E)\\ &=& \Hom_{E[{\overline{T}}_{{\mathfrak p}}]}(\cInd_{{\overline{T}}_{{\mathfrak p}}^0}^{{\overline{T}}_{{\mathfrak p}}} E, \fOrd_{\varpi-\ad}^{T_{{\mathfrak p}}^0, \bar{d}}({\overline{X}}^{\an}, {\mathcal O})_E) = H^{\bar{d}}({\overline{X}}_{K_0(1)}^{\an}, {\mathcal O})^{\ord}_E. \end{eqnarray*} Let ${\overline{\bD}}$ be equal to ${\overline{D}}$ if $\bar{d}=0$ and let ${\overline{\bD}}$ be the totally ramified incoherent quaternion algebra over ${\mathbb A}$ (in the sense of \cite{yuanzhangzhang}) with ramification set $\Ram_{{\overline{\bD}}} = \Ram_{{\overline{D}}} \cup \{\sigma_1\} = S_{\infty}\cup \Ram_D$ if $\bar{d}=1$. Following (\cite{yuanzhangzhang}, bottom of page 70) for a subfield $E$ of ${\mathbb C}$ we define the set ${\mathcal A}({\overline{\bD}}^*/{\mathbb A}^*, E)$ of automorphic representations of ${\overline{\bD}}^*/{\mathbb A}^*$ over $E$ as the set of isomorphism classes of irreducible representations of ${\overline{\bD}}^*/{\mathbb A}^*$ such that $\pi\otimes_E {\mathbb C}$ is a sum of automorphic representations of ${\overline{\bD}}^*/{\mathbb A}^*$ of weight $0$. As in the proof of (\cite{spiess2}, Lemma 3.25) we have a decomposition \begin{equation} \label{jacshimuraord} {\mathcal V}[{\mathfrak a}]\,=\,H^{\bar{d}}({\overline{X}}_{K_0(1)}^{\an}, {\mathcal O})^{\ord}_{E, \barep}\, =\, \bigoplus_{\widetilde{\pi}\in {\mathcal R}} \, M_{\widetilde{\pi}, {\mathfrak p}}^{\ord} \otimes_{{\mathcal O}_{\widetilde{\pi}}} (\widetilde{\pi}^{{\mathfrak p}})^{K_0({\mathfrak n})^{{\mathfrak p}}} \end{equation} where ${\mathcal R}$ denotes the set of $\widetilde{\pi}\in {\mathcal A}({\overline{\bD}}^*/{\mathbb A}^*, E)$ such that $\widetilde{\pi}_{{\mathfrak p}}^{K(0)}\ne 0 \ne (\widetilde{\pi}^{{\mathfrak p}})^{K_0({\mathfrak n})^{{\mathfrak p}}}$ and $M_{\widetilde{\pi}, {\mathfrak p}}^{\ord}\ne 0$. Here -- as in the proof of (\cite{spiess2}, Lemma 3.25) -- $M_{\widetilde{\pi}, {\mathfrak p}}$ denotes a $T_{{\mathfrak p}}^+$-stable ${\mathcal O}_{\widetilde{\pi}}$-lattice in $\widetilde{\pi}_{{\mathfrak p}}^{K_1(n)}$. Moreover, by (\cite{spiess2}, Prop.\ 2.22) there exists an unramified quasicharacter $\chi_{\widetilde{\pi}}: F_{{\mathfrak p}}^* \to {\mathcal O}^*$ such that $M_{\widetilde{\pi}, {\mathfrak p}}^{\ord} \cong {\mathcal O}_{\widetilde{\pi}}(\chi_{\widetilde{\pi}})$ for every $\widetilde{\pi}\in {\mathcal R}$.
Thus we get \begin{equation} \label{jacshimuraord2} {\mathcal V}[{\mathfrak a}]\,=\, \bigoplus_{\widetilde{\pi}\in {\mathcal R}} \, E_{\widetilde{\pi}}(\chi_{\widetilde{\pi}}) \otimes_{E_{\widetilde{\pi}}} (\widetilde{\pi}^{{\mathfrak p}})^{K_0({\mathfrak n})^{{\mathfrak p}}}. \end{equation} In particular the action of ${\mathbb T}_E\otimes_E {\Lambda}_{{\mathcal O}}({\overline{T}}_{{\mathfrak p}})_E$, hence also of $R$, on ${\mathcal V}[{\mathfrak a}]$ is semisimple. Since $R/{\mathfrak a} R$ is a finite $\Lambda_E/{\mathfrak a}=E$-algebra we conclude \[ {\mathcal V}[{\mathfrak a}] \, \cong \, \bigoplus_{{\mathfrak m}\in \Spec R/{\mathfrak a} R} {\mathcal V}[{\mathfrak m}] \] (here and in the following we view $\Spec R/{\mathfrak a} R$ as a subset of $|{\mathfrak X}|$). In particular the localization ${\mathcal V}[{\mathfrak a}]_{{\mathfrak m}}$ is equal to ${\mathcal V}[{\mathfrak m}]$ for every ${\mathfrak m}\in \Spec R/{\mathfrak a} R$. Applying again the duality functor ${\mathcal D}$ yields \[ {\mathcal M}_{{\mathfrak m}}/{\mathfrak a} {\mathcal M}_{{\mathfrak m}} \,= \, {\mathcal D}({\mathcal V}[{\mathfrak a}])_{{\mathfrak m}} \, =\, {\mathcal D}({\mathcal V}[{\mathfrak m}])\, =\, {\mathcal M}_{{\mathfrak m}}/{\mathfrak m} {\mathcal M}_{{\mathfrak m}} \, = \, {\mathcal M}\otimes_R R/{\mathfrak m} \] hence \begin{equation} \label{semisimplepi} {\mathfrak a} {\mathcal M}_{{\mathfrak m}} \,=\, {\mathfrak m} {\mathcal M}_{{\mathfrak m}} \end{equation} for every ${\mathfrak m} \in \Spec R/{\mathfrak a} R\subset |{\mathfrak X}|$. Since ${\mathfrak f}(\pi') ={\mathfrak f}(\pi) = {\mathfrak p}{\mathfrak n}$ and $\pi'_{{\mathfrak p}} = \pi_{{\mathfrak p}} = \St_{G_{{\mathfrak p}}}(E)$ the automorphic representation $\pi$ corresponds to an element of ${\mathcal R}$ that (by abuse of notation) will also be denoted by $\pi$. Since $\dim_E((\pi^{{\mathfrak p}})^{K_0({\mathfrak n})^{{\mathfrak p}}})=1$ the $R$-submodule \[ \{ v\in {\mathcal V}[{\mathfrak a}]\mid h\cdot v= {\lambda}_{\pi}(h)v, \, t \cdot v = v \,\forall \, h\in {\mathbb T}_E,t\in {\overline{T}}_{{\mathfrak p}}\} \] of ${\mathcal V}[{\mathfrak a}]$ is one-dimensional, so it is of the form ${\mathcal V}[{\mathfrak m}_0]$ for a unique ideal ${\mathfrak m}_0 \in \Spec R/{\mathfrak a} R$, i.e.\ a unique point $x_0\in {\mathfrak X}$ lying above ${\mathfrak a}$. It follows $k(x_0) = E$ and $\dim_E({\mathcal M}\otimes_R R/{\mathfrak m}_0) = 1$. By Nakayama's Lemma we obtain \[ {\mathcal M}_{{\mathfrak m}_0} \cong \, R_{{\mathfrak m}_0}/\Ann_{R_{{\mathfrak m}_0}}({\mathcal M}_{{\mathfrak m}_0})\, \cong \, (R/\Ann_R({\mathcal M}))_{{\mathfrak m}_0}. \] However since ${\mathcal M}$ is a faithful $R$-module we have $\Ann_R({\mathcal M})=0$, hence ${\mathcal M}_{{\mathfrak m}_0}$ is a free $R_{{\mathfrak m}_0}$-module of rank $1$. The equality \eqref{semisimplepi} for ${\mathfrak m} ={\mathfrak m}_0$ now yields ${\mathfrak a} R_{{\mathfrak m}_0} = {\mathfrak m}_0 R_{{\mathfrak m}_0}$, i.e.\ ${\mathfrak X} \to \Spec {\Lambda}_E$ is unramified, hence {\'e}tale in ${\mathfrak m}_0=x_0$. (b) Firstly, we remark that \[ \Hom_{E[{\overline{T}}_{{\mathfrak p}}]}( \widetilde{E}(\Theta_{\psi}), \fOrd_{\varpi-\ad}^{V, \bar{d}}({\overline{X}}^{\an}, {\mathcal O})_E)\, \cong\, {\mathbb H}_{\varpi-\ad}^{\bar{d}}({\overline{X}}^{\an}; \widetilde{E}(\Theta_{\psi}), E) \] by (\cite{spiess2}, Prop.\ 3.19 (e)).
Together with Prop.\ \ref{prop:linvinlin2} it follows that $\Hom_{E[{\overline{T}}_{{\mathfrak p}}]}( \widetilde{E}(\Theta_{\psi}), {\mathcal V}_{{\mathfrak m}_0})$ is a free $\widetilde{E}$-module of rank 1, so there exists a monomorphism of $E[{\overline{T}}_{{\mathfrak p}}]$-modules \begin{equation} \label{linvdeform} \widetilde{E}(\Theta_{\psi}) \longrightarrow {\mathcal V}_{{\mathfrak m}_0}. \end{equation} The source carries a canonical ${\Lambda}_E$-module structure and it is easy to see that \eqref{linvdeform} is ${\Lambda}_E$-linear and that ${\mathfrak a}^2 \widetilde{E}(\Theta_{\psi})=0$. It follows that the image of \eqref{linvdeform} is contained in ${\mathcal V}[{\mathfrak a}^2]_{{\mathfrak m}_0}$. By (a) we have \[ {\mathcal M}_{{\mathfrak m}_0}/{\mathfrak a}^2{\mathcal M}_{{\mathfrak m}_0} = {\mathcal M}_{{\mathfrak m}_0}/{\mathfrak m}_0^2{\mathcal M}_{{\mathfrak m}_0}= {\mathcal M}/{\mathfrak m}_0^2\cong R/{\mathfrak m}_0^2 \cong {\Lambda}_E/{\mathfrak a}^2 \] hence ${\mathcal V}_{{\mathfrak m}_0}[{\mathfrak a}^2]\, =\, {\mathcal V}[{\mathfrak m}_0^2]$ and $\dim_E({\mathcal V}[{\mathfrak m}_0^2]) =2$. Therefore \eqref{linvdeform} can be viewed as an isomorphism $\widetilde{E}(\Theta_{\psi}) \longrightarrow {\mathcal V}[{\mathfrak m}_0^2]$. \end{proof} A point $x\in |{\mathfrak X}|$ will be called {\it classical} if $\chi_x\ne 1$ is a quasicharacter and if there exists a $\pi(x)\in {\mathcal A}({\mathbb D}^*/{\mathbb A}^*, E)$ with the following properties: \medskip \noindent (i) $\pi(x)$ is defined over $k(x)$, i.e.\ there exists $\iota: E_{\pi(x)}:=\End_E(\pi(x))\cong k(x)$; \medskip \noindent (ii) The conductor of $\pi(x)^{{\mathfrak d}{\mathfrak p}, \infty}$ divides ${\mathfrak n}$; \medskip \noindent (iii) $\pi(x)_{{\mathfrak p}}$ is the principal series representation $\pi(x)_{{\mathfrak p}} = \pi(\chi_x^{-1}| \wcdot |_{{\mathfrak p}}^{-1/2}, \chi_x| \wcdot |_{{\mathfrak p}}^{1/2})$; \medskip \noindent (iv) The Hecke eigenvalue homomorphism ${\lambda}_{\pi(x)}: {\mathbb T} \to E_{\pi(x)}\stackrel{\iota}{\cong} k(x)$ is equal to ${\lambda}_x$. The set of classical points of ${\mathfrak X}$ will be denoted by ${\mathfrak X}_{\cl}$. \begin{lemma} \label{lemma:classicalpoints} ${\mathfrak X}_{\cl}$ is dense in ${\mathfrak X}$. \end{lemma} \begin{proof} For $n\ge 1$ let $U_n$ be the open subgroup of $T_{{\mathfrak p}}^0$ of index $p^n$ containing $V$. Put ${\Lambda}_n = {\mathcal O}[T_{{\mathfrak p}}^0/U_n]$ and ${\mathfrak a}_n :=\Ker({\Lambda}_E\to ({\Lambda}_n)_E)$. Since $\bigcap_{n\ge 1} {\mathfrak a}_n = 0$, the set $(\bigcup_{n \ge 1}\Spec R/{\mathfrak a}_n R)\setminus \Spec R/{\mathfrak a} R$ is dense in ${\mathfrak X}$, so it suffices to see that it consists of classical points. Similarly to \eqref{jacshimuraord} and \eqref{jacshimuraord2} one shows that for fixed $n\ge 1$ we have \begin{equation*} \label{jacshimuraord3} {\mathcal V}[{\mathfrak a}_n]\,=\, H^{\bar{d}}({\overline{X}}_{K_{U_n}(n)}^{\an}, {\mathcal O})^{\ord}_{E, \barep}\, \cong \, \bigoplus_{\widetilde{\pi}\in {\mathcal R}} \, E_{\widetilde{\pi}}(\chi_{\widetilde{\pi}}) \otimes_{E_{\widetilde{\pi}}} (\widetilde{\pi}^{{\mathfrak p}})^{K_0({\mathfrak n})^{{\mathfrak p}}} \end{equation*} where ${\mathcal R}$ denotes the set of $\widetilde{\pi}\in {\mathcal A}({\mathbb D}^*/{\mathbb A}^*, E)$ such that $\widetilde{\pi}_{{\mathfrak p}}^{K_{U_n}(n)}\ne 0 \ne (\widetilde{\pi}^{{\mathfrak p}})^{K_0({\mathfrak n})^{{\mathfrak p}}}$ and $M_{\widetilde{\pi}, {\mathfrak p}}^{\ord}\ne 0$.
As in the proof of Lemma \ref{lemma:piinfx} (a) this implies that the action of $R$ on ${\mathcal V}[{\mathfrak a}_n]$ is semisimple and that \[ {\mathcal V}[{\mathfrak a}_n] \, \cong \, \bigoplus_{{\mathfrak m}\in \Spec R/{\mathfrak a}_n R} {\mathcal V}[{\mathfrak m}]. \] Let ${\mathfrak m} \in \Spec R/{\mathfrak a}_n R\setminus \Spec R/{\mathfrak a} R$. We have ${\mathcal V}[{\mathfrak m}]\ne 0$ (since $\Ann_R({\mathcal M})=0$ hence ${\mathcal D}({\mathcal V}[{\mathfrak m}])={\mathcal M}/{\mathfrak m}{\mathcal M}\ne 0$), so there exists a unique\footnote{Since the Hecke eigenvalue homomorphism ${\lambda}_{\widetilde{\pi}}$ determines $\widetilde{\pi}$ uniquely.} $\widetilde{\pi} \in {\mathcal R}$ such that \[ {\mathcal V}[{\mathfrak m}]= E_{\widetilde{\pi}}(\chi_{\widetilde{\pi}}) \otimes_{E_{\widetilde{\pi}}} (\widetilde{\pi}^{{\mathfrak p}})^{K_0({\mathfrak n})^{{\mathfrak p}}}=\{ v\in {\mathcal V}[{\mathfrak a}_n]\mid h\cdot v= {\lambda}_{\widetilde{\pi}}(h)v, \, t \cdot v = \chi_{\widetilde{\pi}}(t)v \,\forall \, h\in {\mathbb T}_E,\, t\in {\overline{T}}_{{\mathfrak p}}\}. \] The condition ${\mathfrak m} \not \in \Spec R/{\mathfrak a} R$ implies that the quasicharacter $\chi_{\widetilde{\pi}}$ is ramified hence $\widetilde{\pi}_{{\mathfrak p}}=\Ind_{B_{{\mathfrak p}}}^{G_{{\mathfrak p}}} \chi_{\widetilde{\pi}}^{-1} = \pi(\chi_{\widetilde{\pi}}^{-1}| \wcdot |_{{\mathfrak p}}^{-1/2}, \chi_{\widetilde{\pi}}| \wcdot |_{{\mathfrak p}}^{1/2})$ by (\cite{spiess2}, Prop. 2.22). It follows that ${\mathfrak m}$ is classical. \end{proof} We fix a character ${\epsilon}:\Delta\to\{\pm 1\}$ and consider the cohomology group ${\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)_{{\epsilon}}$. It is a finitely generated augmented $E[{\overline{T}}_{{\mathfrak p}}]$-module equipped with a ${\mathbb T}_R= {\mathbb T}_E\otimes_E R$-action (the $R$-action is induced by the $R$-action on ${\mathcal V}$). For a ${\mathbb T}_R$-module $N$ we let $N_{{\lambda}}$ be the maximal quotient of $N$ where ${\mathbb T}_E$ acts via ${\lambda}:{\mathbb T}_E\to R$, i.e.\ $N_{{\lambda}} = N\otimes_{{\mathbb T}_R, {\lambda}\otimes \id_R} R$. \begin{lemma} \label{lemma:jacquetlanglandsfam} Let ${\mathcal N} \, :=\, ({\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)_{{\epsilon}})_{{\lambda}}$. \noindent (a) The $R$-module ${\mathcal N}$ is finitely generated. \noindent (b) There exists an open neighbourhood ${\mathfrak U}$ of $x_0$ in ${\mathfrak X}$ such that ${\mathfrak U} \hookrightarrow {\mathfrak X} \to \Spec {\Lambda}_E$ is {\'e}tale and ${\widetilde{\mathcal N}}|_{{\mathfrak U}}$ is an invertible ${\mathcal O}_{{\mathfrak U}}$-module.\footnote{Recall that $\widetilde{M}$ denotes the quasicoherent ${\mathcal O}_{{\mathfrak X}}$-module associated to an $R$-module $M$.} \noindent (c) The canonical map $({\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)\otimes_R k(x_0))_{\pi, {\epsilon}}\to {\mathcal N}\otimes_R k(x_0)$ is an isomorphism of one-dimensional $E$-vector spaces. Moreover we have ${\mathbb H}_{\varpi-\ad}^{d+1}(X^{\an}; {\mathcal V}, E)[{\mathfrak m}_0]_{\pi, {\epsilon}}=0$. \end{lemma} \begin{proof} (a) follows immediately from the fact that ${\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)$ is finitely generated as a ${\Lambda}_E$- hence also as an $R$-module.
(b) By Lemma \ref{lemma:piinfx} (a) there exists an open neighbourhood ${\mathfrak U}\subseteq {\mathfrak X}$ of $x_0$ such that ${\mathfrak U} \hookrightarrow {\mathfrak X} \to \Spec {\Lambda}_E$ is {\'e}tale and ${\widetilde{\mathcal M}}|_{{\mathfrak U}}$ is an invertible ${\mathcal O}_{{\mathfrak U}}$-module. Note that for $x\in |{\mathfrak U}|$ we have $\dim_{k(x)} {\mathcal V}[x]=\dim_{k(x)} {\mathcal D}({\mathcal V}[x]) = \dim_{k(x)}({\mathcal M}\otimes_R k(x)) =1$, hence ${\mathcal V}[x] = k(x)(\chi_x)$ as a ${\overline{T}}_{{\mathfrak p}}$-representation. Since ${\mathfrak U}$ is regular of dimension 1, we have an exact sequence \begin{equation} \label{jlf1} 0\to {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)\otimes_R k(x) \to {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}[x], E) \to {\mathbb H}_{\varpi-\ad}^{d+1}(X^{\an}; {\mathcal V}, E)[x]\to 0 \end{equation} for every $x\in |{\mathfrak U}|$ (as shown in appendix (A3) of \cite{spiess2}). Since the $R$-module ${\mathbb H}_{\varpi-\ad}^{d+1}(X^{\an}; {\mathcal V}, E)$ is finitely generated we may assume -- after shrinking ${\mathfrak U}$ if necessary -- that ${\mathbb H}_{\varpi-\ad}^{d+1}(X^{\an}; {\mathcal V}, E)[x]=0$ for all $x\in {\mathfrak U}-\{x_0\}$. So by applying the functor $N\mapsto N_{{\lambda}}$ to the sequence \eqref{jlf1} we get \begin{equation*} \label{jlf2} {\mathcal N}\otimes_R k(x) \, \cong \, {\mathbb H}_{\varpi-\ad}^d(X^{\an}; k(x)(\chi_x), E)\otimes_{{\mathbb T}_E, {\lambda}_x} k(x) \end{equation*} for all $x\in |{\mathfrak U}|-\{x_0\}$. Let $x\in {\mathfrak U}_{\cl} = {\mathfrak U} \cap {\mathfrak X}_{\cl}$, put $E'=k(x)$ and let ${\mathcal O}'$ be the valuation ring of $E'$. We claim that \begin{equation} \label{jlf3} \dim({\mathcal N}\otimes_R k(x)) \, \ge \, 1. \end{equation} Indeed, by \eqref{derham6} and (\cite{spiess2}, Lemma 3.45) we have \begin{equation*} {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}[x], E)_{{\epsilon}} \, = \, {\mathbb H}_{{\mathcal O}}^d(X^{\an}; {\mathcal O}'(\chi_x), {\mathcal O})_{E, {\epsilon}}\, =\, {\mathbb H}_{{\mathcal O}'}^d(X^{\an}; {\mathcal O}'(\chi_x), {\mathcal O}')_{E', {\epsilon}}, \end{equation*} so by applying $\wcdot \otimes_{{\mathbb T}_{E'}, {\lambda}_{\pi(x)}} E'$ we obtain \eqref{jlf3} by (\cite{spiess2}, Prop.\ 5.13). For the point $x_0$ we have \begin{equation} \label{jlf4} \dim_{k(x_0)}({\mathcal N}\otimes_R k(x_0)) \, \le \, 1. \end{equation} For that note that since ${\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}[x_0], E)\cong H_{{\mathcal O}}^d(X^{\an}; {\mathcal O}(0), {\mathcal O})_E$, the $E$-vector space \begin{equation*} \label{jlf5} {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}[x_0], E)_{\pi, {\epsilon}} \, =\, ({\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}[x_0], E)_{{\epsilon}})\otimes_{{\mathbb T}_E, {\lambda}_{\pi}} E \end{equation*} is one-dimensional. Since the sequence \eqref{jlf1} for $x=x_0$ remains exact after localizing with respect to ${\mathfrak m}_{\pi} = \Ker({\lambda}_{\pi})$ it follows that \begin{equation} \label{jlf6} \left({\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)\otimes_R k(x_0)\right)_{\pi, {\epsilon}} \end{equation} is an $E$-vector space of dimension $\le 1$. This implies \eqref{jlf4} since ${\mathcal N}\otimes_R k(x_0)$ is the maximal semistable quotient of \eqref{jlf6}.
The two inequalities \eqref{jlf3} and \eqref{jlf4} combined with the facts that $x\mapsto \dim_{k(x)} {\mathcal N}\otimes_R k(x)$ is upper semi-continuous and ${\mathfrak U}_{\cl}$ is dense in ${\mathfrak U}$ yield -- after shrinking ${\mathfrak U}$ if necessary -- that $\dim_{k(x)} ({\mathcal N}\otimes_R k(x))=1$ for every $x\in {\mathfrak U}$. Hence ${\widetilde{\mathcal N}}|_{{\mathfrak U}}$ is an invertible ${\mathcal O}_{{\mathfrak U}}$-module. For (c) consider the diagram \[ \begin{CD} \left( {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)\otimes_R k(x_0)\right)_{\pi, {\epsilon}} @>>> {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}[x_0], E)_{\pi, {\epsilon}}\\ @VVV@VVV\\ {\mathcal N}\otimes_R k(x_0) @>>> {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}[x_0], E)\otimes_{{\mathbb T}_E, {\lambda}_{\pi}} E. \end{CD} \] The upper horizontal map is injective with cokernel $\cong {\mathbb H}_{\varpi-\ad}^{d+1}(X^{\an}; {\mathcal V}, E)[{\mathfrak m}_0]_{\pi, {\epsilon}}$. As already seen in the proof of (b) the first vertical map is surjective and the second is an isomorphism. Hence the first vertical map is injective as well. The fact that $\dim_{k(x_0)}({\mathcal N}\otimes_R k(x_0))=1=\dim_E({\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}[x_0], E)_{\pi, {\epsilon}})$ implies that all maps are isomorphisms and ${\mathbb H}_{\varpi-\ad}^{d+1}(X^{\an}; {\mathcal V}, E)[{\mathfrak m}_0]_{\pi, {\epsilon}} =0$. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:linvkey}] There exists an exact sequence \begin{eqnarray*} && 0\longrightarrow \left({\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)\otimes_R R/{\mathfrak m}_0^2\right)_{\pi, {\epsilon}} \longrightarrow {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}[{\mathfrak m}_0^2], E)_{\pi, {\epsilon}}\hspace{2cm}\\ && \hspace{4cm} \longrightarrow {\mathbb H}_{\varpi-\ad}^{d+1}(X^{\an}; {\mathcal V}, E)[{\mathfrak m}_0^2]_{\pi, {\epsilon}}\longrightarrow 0. \end{eqnarray*} (see \cite{spiess2}, appendix (A3)). The third group vanishes by Lemma \ref{lemma:jacquetlanglandsfam} (c). Together with Lemma \ref{lemma:piinfx} (b) we obtain \[ {\mathbb H}_{\varpi-\ad}^d(X^{\an}; \widetilde{E}(\Theta_{\psi}), E)_{\pi, {\epsilon}}\, \cong\, \left( {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)\otimes_R R/{\mathfrak m}_0^2\right)_{\pi, {\epsilon}}. \] Thus by Prop.\ \ref{prop:linvinlin2} it suffices to see that the group on the right is a free $\widetilde{E}$-module of rank 1. For that let $a\in {\Lambda}_E$ be a generator of the prime ideal ${\mathfrak a}$, put ${\mathcal N}'= {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)$ and consider the diagram \[ \begin{CD} @. ({\mathcal N}'\otimes_R R/{\mathfrak m}_0)_{\pi, {\epsilon}} @> a\cdot >> ({\mathcal N}'\otimes_R R/{\mathfrak m}_0^2)_{\pi, {\epsilon}} @> \pr >> ({\mathcal N}'\otimes_R R/{\mathfrak m}_0)_{\pi, {\epsilon}} @>>> 0\\ @. @VVV @VVV @VVV @.\\ 0@>>> {\mathcal N}\otimes_R R/{\mathfrak m}_0 @> a\cdot >> {\mathcal N}\otimes_R R/{\mathfrak m}_0^2 @> \pr >> {\mathcal N}\otimes_R R/{\mathfrak m}_0 @>>> 0 \end{CD} \] where the vertical maps are induced by the canonical projection ${\mathcal N}' \to {\mathcal N}'_{{\lambda}} ={\mathcal N}$. By Lemma \ref{lemma:piinfx} (a) and Lemma \ref{lemma:jacquetlanglandsfam} (b), (c) the rows are exact and the first and third vertical maps are isomorphisms.
Hence the middle vertical map is an isomorphism and we get \[ \left( {\mathbb H}_{\varpi-\ad}^d(X^{\an}; {\mathcal V}, E)\otimes_R R/{\mathfrak m}_0^2\right)_{\pi, {\epsilon}}\, \cong\, {\mathcal N}\otimes_R R/{\mathfrak m}_0^2\, \cong \, R/{\mathfrak m}_0^2\, \cong \, {\Lambda}_E/{\mathfrak a}^2\,\cong\, \widetilde{E}. \] \end{proof}
\section{Introduction} For a long time, discussion of the role of black holes in globular clusters has been dominated by the theme of intermediate-mass black holes. While no-one would doubt that stellar-mass black holes once existed in these objects, one reason for their neglect is a long-standing theoretical prediction \citep{KHM1993,SH1993} that there should be virtually none at the present day. As we shall see, this view is in need of revision, but in the meantime interest in the population of stellar-mass black holes in globular clusters was sustained by two ideas. The first is the realisation that they may be a prolific source of black-hole binaries and thus of sources of gravitational radiation \citep{PZM2000,B2006,BSRB2006,MS2009,BBK2010,DBGS2011,Ta2013}. The second is the role that stellar-mass black holes play in the evolution of cluster cores \citep{MPPZH2004,Hu2007,MWDG2008}. This paper focuses on the black hole population itself, and, in particular, how many are to be expected at the present day in one particular old globular cluster. Thanks to software advances over many years, it is now quite straightforward to perform simulations of star clusters and to study the evolution of the black hole population directly. What is still hard, however, is to do this with models which resemble globular star clusters. The most sophisticated direct $N$-body techniques have been applied to this problem \citep[and several previous references by other authors]{Aa2012}, but the restriction to systems which initially possessed only of order $10^5$ stars vitiates their direct application to all except the least populous globular clusters. There are two solutions, one being to {\sl scale} the results of $N$-body models, provided that this can be done in a way which preserves the time-scales of the main evolutionary processes at work; the paper by \citet{SH2013} is an example, very relevant to the scientific aims of the present paper, and we return to it in our discussion (Sec.\ref{sec:bh-and-evolution}). The second solution is the use of Monte Carlo codes, which are not restricted to small values of $N$, though they are less free of assumptions and approximations, and require cross-validation with $N$-body results in the range of $N$ where the two techniques overlap \citep{GHH2008, GHHH2013}. A Monte Carlo code is the main tool adopted in the present paper, but an independent code has also been applied to a similar problem by \citet{MUFR2013}. Even with a Monte Carlo code, however, different sets of initial conditions will give different answers for the number of stellar-mass black holes expected to survive to the present day. In previous papers \citep{HG2008,GH2009,GH2011} Monte Carlo evolutionary models for three clusters are described: M4, NGC6397 and 47 Tuc. The number of black holes throughout the evolution, which was not discussed much in these papers, is plotted in Fig.\ref{fig:nbh}. In our model of M4, no natal kicks were applied to black holes, but the population had decreased from about 1000 to one by an age of about 9Gyr (see \citet[Fig.15]{HG2008}), and the last black hole escaped before 12Gyr. The last black holes were expelled from our model of NGC6397 even earlier. The contrast with our model of 47 Tuc is striking. Though natal kicks (with a 1-dimensional dispersion of 190km/s) were applied, and the number of retained black holes decreased abruptly to 34 (near the left margin of Fig.\ref{fig:nbh}), more than half were still present at 12Gyr. 
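To give a rough feeling for how strongly natal kicks of this kind can deplete the retained population, the following short Python sketch draws isotropic kicks with a one-dimensional dispersion of 190km/s (so that the kick speeds follow a Maxwellian distribution) and counts the fraction falling below an assumed central escape speed. It is purely illustrative: the escape speed of 50km/s used below is a hypothetical value rather than a property of any of the clusters modelled here, and the sketch ignores everything else that the Monte Carlo models include (the time dependence of the potential, any possible reduction of black-hole kicks by fallback, and so on).

\begin{verbatim}
# Illustrative sketch only: retained fraction for isotropic Gaussian natal
# kicks.  The escape speed is a hypothetical value, not a fitted property
# of any cluster model discussed in the text.
import numpy as np

def retention_fraction(sigma_1d_kms, v_esc_kms, n_samples=100000, seed=1):
    rng = np.random.default_rng(seed)
    # Three independent Gaussian components give a Maxwellian speed
    # distribution with one-dimensional dispersion sigma_1d_kms.
    kicks = rng.normal(0.0, sigma_1d_kms, size=(n_samples, 3))
    speeds = np.linalg.norm(kicks, axis=1)
    return np.mean(speeds < v_esc_kms)

# With sigma_1d = 190 km/s and an assumed escape speed of 50 km/s the
# retained fraction is about half a per cent.
print(retention_fraction(190.0, 50.0))
\end{verbatim}

The retained numbers quoted in this paper depend, of course, on the escape speed of each model (which evolves with time) and on the details of the kick prescription, which is why they are reported as outcomes of the simulations rather than being imposed.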
\begin{figure} {\includegraphics[height=12cm,angle=0,width=9cm]{nbh.eps}} \caption{Number of stellar-mass black holes in published Monte Carlo models of three globular clusters. For the model of 47 Tuc, where natal kicks were applied, the results drop abruptly from the number initially created (about 1600).} \label{fig:nbh} \end{figure} \red{The three examples in Fig.\ref{fig:nbh} already raise an interesting question: how is it that our model of NGC6397, which initially retains far more stellar-mass black holes than 47 Tuc, ends up at the present day with none, while our model of 47 Tuc still has an appreciable population? The answer is that it depends on the phase of dynamical evolution in which the cluster is found. Our model of NGC6397 undergoes a phase of core collapse which ends at about the time when the last stellar-mass black hole escapes \citep{GH2009}, but for our model of 47 Tuc this phase of core collapse lies far in the future \citep{GH2011}. We return to this issue in Section \ref{sec:bh-and-evolution}, remarking here only that we refer to this episode as {\sl second core collapse}, to distinguish it from a very early phase in which the mass segregation of the system of black holes comes to an end.} \red{With reference to Fig.\ref{fig:nbh},} our aim in this paper is to provide similar theoretically-based expectations for the globular cluster M22 (NGC6656), motivated by the recent discovery in that cluster of two stellar-mass black holes \citep{St2012}, which may even represent only a sample of a considerably larger population. From what has been said, our first step will be to construct a Monte Carlo evolutionary model which, like those of the other three clusters we have studied, resembles the star cluster at the present day. This we do in the following section by iterating on the initial conditions so that, after 12 Gyr of evolution, the model provides an approximate fit to the observed surface brightness and velocity dispersion profiles of the cluster, and to its local stellar mass function (or, strictly, luminosity function). We repeat the exercise for different assumptions about the natal kicks of stellar-mass black holes, in each case reporting the number which survive to 12 Gyr (Sec.\ref{sec:BHpop}). Our final section summarises our conclusions, and discusses them in the context of other recent research. \section{A Monte Carlo model of M22} \subsection{Observational data}\label{sec:observations} Our source for the surface brightness profile of M22 is the compilation of \citet{Tr1995}. As will be seen in Fig.\ref{fig:sbp}, the data are quite scattered (by about half a magnitude) within the core, and (surprisingly) a little fainter inside the core than at the edge of the core. For the velocity dispersion profile we have adopted results from \citet[their Fig.3]{La2009}. The cluster rotates, with a maximum projected rotation velocity of about 3km/s \citep[\red{their} Fig.2]{La2009}. It is not clear whether this has been removed from their velocity dispersion profile. In any event the rotation is one dynamical property of the cluster which cannot be modelled with the existing Monte Carlo technique; this is confined to non-rotating and spherically symmetric systems. \begin{center} \begin{figure} \includegraphics[width=9cm]{m22.eps} \caption{\red{The field where the luminosity function of \citet{PZ1999} was obtained, overlaid on an optical image of M22. 
The luminosity function was obtained from only part of the WFPC2 field illustrated.}} \label{fig:hst} \end{figure} \end{center} The stellar luminosity function we have used is the $V$ luminosity function at a projected radius of about 4.5 arcmin given by \citet{PZ1999}. This paper gives the data for each magnitude bin as the number of stars per `HST area'; since the luminosity functions have been obtained from a single WFPC2 chip \red{(see Fig.\ref{fig:hst})}, we take this to be an area of 1.78 arcmin$^2$ {\red{(based on data in the online {\sl HST Data Handbook for WFPC2})}}. \red{As the core radius is about 1.33 arcmin \citep{Ha1996}, it is possible that appreciable mass segregation is present within the observed field, but only one luminosity function is given. In principle, the Monte Carlo model should be mapped to the observed field, but we have simply compared the observed luminosity function with that in the Monte Carlo model at the radius of the centre of the field.} Most other data on M22 (distance, metallicity, extinction) have been taken from the on-line revision of December 2010 of the Harris catalogue \citep{Ha1996}. The only exception is the binary abundance, where we have adopted a value of around 0.05 from the detailed results of \citet{Mi2012}. Those results cover the region within the half-mass radius, and the\red{re is no statistically significant difference in the} fraction inside the core. In fact changing the binary fraction within reasonable limits makes no \red{appreciable} difference to the overall evolution. \subsection{Model assumptions}\label{sec:model} The initial conditions we have adopted are similar (except of course for several numerical values, given in Secs.\ref{sec:ics} \red{and \ref{sec:full-size}}) to those used in our previous papers (see especially \citet[Table 1]{GH2011}). Briefly these are King models, with no initial mass segregation. The single stars have a two-part power law initial mass function in the range from 0.1 to 100$M_\odot$, while the initial properties of the binaries are taken from \citet{Kr1995}. The Galactic tide is implemented as described in \citet{GHHH2013}, but in any implementation it has to be treated as static in the Monte Carlo model. Note that \citet{Di1999} give for M22 an orbit with apo- and peri-galactic distances of about $R_a = 9.3$ and $R_p =2.9$kpc, respectively, corresponding to an eccentricity (defined as $e = (R_a-R_p)/(R_a+R_p)$) of $e \simeq 0.52$. There is, however, substantial evidence from $N$-body simulations that a cluster on such an orbit retains a remarkably steady profile throughout each orbit \citep{Ku2010}, and loses mass at a rate like a cluster on a circular orbit at a fixed intermediate radius \citep{BM2003}. For the age of the cluster we have adopted 12Gyr, though it could be even older \citep{MF2009}; and for the metallicity we have taken $Z = 0.0004$ \citep{Ha1996}. \red{Newly born neutron stars are given a kick using a Gaussian distribution with a one-dimensional dispersion of 190km/s, except for one model (Model C) reported in Sec.\ref{sec:full-size} and subsequent sections of the paper, for which the dispersion was 253km/s. 
Natal kicks for black holes are more uncertain, and different choices are discussed (in connection with three models, called A,B,C), in Sec.\ref{sec:BH}.} \subsection{Initial parameter values}\label{sec:ics} The procedure now is to choose parameters specifying the initial conditions (see Table \ref{tab:models}) so as to optimise the fit of an evolved Monte Carlo model (Sec.\ref{sec:model}) to the observational data (Sec.\ref{sec:observations}). In our previous papers, this has been a prolonged and laborious search, a process of trial and error guided by intuition. Since then it has been substantially automated. Our procedure now is to use the results of scaled models, i.e. Monte Carlo models with a number of stars $N_\ast$ much smaller than the number of stars in the star cluster, but adjusted so that the relaxation time of the scaled model is the same as that of the actual cluster (see \citet[Sec.2.4]{HG2008}). Then a measure of goodness of fit is constructed along $\chi^2$ lines; for each kind of data (surface brightness, velocity dispersion and luminosity function) we adopt a single measure of the dispersion of the error, one for each kind of data, and based on information in the sources quoted in Sec.\ref{sec:observations}. These are normalised by the number of data points in each kind of data, and simply summed, giving a measure $Z$ of goodness of fit. {(The Monte Carlo data are also subject to sampling error, but this has been ignored in the construction of $Z$.)} Given values of the seven adjustable parameters in Table \ref{tab:models} (i.e. $N,r_t,r_h,W_0,\alpha_1,\alpha_2,m_b$), we run the Monte Carlo code and compute $Z$. To optimise over the parameter space we employ the Downhill Simplex algorithm, coded as {\sl amoeba} in \citet{PTVF1992}. For purposes of brevity in this paper we refer to this procedure as {\sl ICFind}. It is remarkable that the method is successful, as the algorithm is designed for optimisation of a smooth function, whereas the results of the Monte Carlo code are stochastic. Nevertheless it appears to converge, from a wide variety of starting points, after computation of order 50-100 models. For $N_\ast = 10^5$ this takes a few days. By ``convergence'' here we mean that the code finds a best model which cannot be improved on in the number of iterations stated; from other starting points, the best model may well be different. Unfortunately, even with this automatic method, we do not have any quantitative way of deciding the range of acceptable models, which would require improvement of our procedure for defining and calculating $Z$, and much greater computational effort. Furthermore, while computing the last 50 or so models, our experience is that the code evolves models with very similar initial conditions and chooses the best; it is, in effect, sampling the distribution of models which all result from these initial conditions by different choices of random numbers. Results for 100\% expulsion of black holes by natal kicks, and for 100\% retention, are given in columns 2 and 3 of Table \ref{tab:ics}, referred to as Models a and c, respectively. From what has been said, it is not known whether the differences between these initial conditions are significant. Nevertheless it can be argued that the cluster will expand less with 100\% expulsion \citep{MPPZH2004,MWDG2008}, and therefore the larger value for the initial half-mass radius $r_h$ in this case is what would be expected. Figs. 
\ref{fig:convergence} and \ref{fig:convergence-of-Z} give an impression of the convergence of the run with 100\% retention. We defer to the following subsection a discussion of how well the models agree with the observational data, as the full-sized models discussed in the next section are the focus of subsequent examination of the black hole population and its evolution. \red{The status of the listed retention factors requires some comment. We do not directly control this number, i.e. by ensuring that a given fraction of black holes are expelled. In our simulations, on the other hand, it is an indirect outcome of other choices, such as the dispersion of kicks (see Sec.\ref{sec:BH}), as well as of the evolution of the model (which affects the escape velocity, for instance).} % \begin{center} \begin{figure} \includegraphics[width=9cm]{convergence.eps} \caption{Convergence of the initial values of $N$ and $r_h$, where $N$ is the number of objects (single stars and binary stars) and $r_h$ is the half-mass radius, for the determination of initial conditions in the case of 100\% retention of black holes. The plot gives an impression of the range of values sampled by the code. \red{Large symbols give the first 20 iterates, medium-sized symbols give iterates 21--60, and small symbols give the remainder (101 iterations altogether).}} \label{fig:convergence} \end{figure} \end{center} \begin{center} \begin{figure} \includegraphics[width=9cm]{convergence-of-Z.eps} \caption{Marginal dependence of $Z$ (a measure of goodness of fit) on the initial value of $N$ (the initial number of objects), for the determination of initial conditions in the case of 100\% retention of black holes. Default large values of $Z$ may occur if the model failed to reach the required age of 12Gyr. Large values may also occur close to the best-fitting value of $N$ (about $7.8\times10^5$) if other parameters are far from optimal. \red{The meaning of the symbols is given in the caption to Fig.\ref{fig:convergence}. The inset gives the evolution of the goodness-of-fit parameter with iteration number.}} \label{fig:convergence-of-Z} \end{figure} \end{center} \begin{table} \caption{Initial conditions for M22, and the resulting black hole population} \begin{tabular}{ccc|ccc} Model&a&c&A&B&C\\ \hline $N_\ast/10^5$&$1$&$1$&$9.21$&$8.32$&$7.57$\\ $N/10^5$&$8.0$&$7.8$&$8$&$8.32$&$7.57$\\ $r_t$ (pc)&93&100&77&89&102\\ $r_h$ (pc)&3.1&2.5&2.67&2.72&2.43\\ $W_0$&3.7&2.9&6.0&7.4&2.93\\ $\alpha_1$&1.1&0.90&1.12&1.21&0.90\\ $\alpha_2$&2.7&2.7&2.43&2.72&2.8\\ $m_b$&0.73&0.67&0.84&0.96&0.67\\ \hline $N_{BH0}$&--&--&1799&675&\red{450}\\ Retention factor&0\%&100\%&0.1\%&62\%&100\%\\ $N_{BH12}$&--&--&2&14&\red{43}\\ $N_{SBH12}$&--&--&1&7&\red{39}\\ $N_{BHBH12}$&--&--&0&2&\red{1}\\ $N_{BHNS12}$&--&--&0&0&\red{0}\\ $N_{BHWD12}$&--&--&0&1&\red{0}\\ $N_{BHMS12}$&--&--&1&2&\red{2}\\ \hline \end{tabular} \label{tab:ics} Explanation: \begin{enumerate} \item $N_\ast =$ the actual number of objects (single stars plus binary stars) in the model \item $N =$ the number of objects when the model is scaled to M22 \item $r_t =$ initial tidal radius in parsecs \item $r_h =$ initial half-mass radius in parsecs \item $W_0 =$ initial value of the scaled central potential of a King model \item $\alpha_1,\alpha_2,m_b$: parameters of the initial mass function, which is a two-part power law with powers $m^{-\alpha}$, where $\alpha = \alpha_1$ for mass $m < m_b$, and $\alpha = \alpha_2$ above $m_b$. 
\item $N_{BH0} =$ number of stellar-mass black holes formed in the normal course of stellar evolution. \item Retention factor = fraction of black holes remaining after the escape of those ejected by natal kicks. \item $N_{BH12} =$ number of stellar-mass black holes remaining at 12Gyr. \item $N_{SBH12},N_{BHBH12},N_{BHNS12},N_{BHWD12},N_{BHMS12} = $ number of single black holes, black hole-black hole binaries, black hole-neutron star binaries, black hole-white dwarf binaries, and black hole-main sequence star binaries, respectively, at 12Gyr. \item[] { Note: some other data on model B at 12Gyr are given in Sec.\ref{sec:de}.} \end{enumerate}\label{tab:models} \end{table} \subsection{Full-sized models}\label{sec:full-size} The initial conditions in \red{the second and third columns of} Table \ref{tab:ics} were determined with small-scale models, but results like these from {\sl ICFind} have also been used as the basis of a number of full-scale models, i.e. models in which $N=N_\ast$. In some of these full-sized models the values given by {\sl ICFind} were adjusted manually when it was judged that this might improve the fit with observations. For example the slope of the lower mass function might be altered when it was judged that this would further improve the fit with the luminosity function. Apart from $N_\ast$, the other main difference between the full-sized models and those from {\sl ICFind} is that the runs have been carried out with a more advanced version of the Monte Carlo code, which is called MOCCA and is described in \citet{GHHH2013}. The essential differences for the present purpose are (i) that escape of particles is modelled more closely on our current understanding of escape in tidal fields (rather than through the use of a tidal cut-off), and (ii) that three- and four-body interactions are calculated with a few-body code {\citep[{\sl Fewbody}, see][]{Fr2004}} instead of with cross sections. Since the mass-dependence of the cross sections used in the older code is based on theory, and the masses of black holes represent an extreme situation, it might be expected (because of the second of these changes) that the results could differ significantly from those of the older code. So far we have computed almost 40 full-sized models, each of which takes a few days, though a few failed for technical reasons before reaching 12 Gyr (our assumed age for M22). The best of these, as judged by comparison with the observational data, use the initial conditions in the last three columns of Table \ref{tab:ics}, i.e. Models A, B and C. The basis of the initial conditions was a set of earlier runs of {\sl ICFind} than those which produced Models a and c. Note that the values of $N_\ast$ (the initial number of objects in the model) and $N$ (the assumed initial number of objects in M22) are equal in Models B and C; in these full-scale runs we have generally not optimised over the choice of $N$, though Model A is an exception. For Model B, the quality of the fit to the observational data is displayed in Figs.\ref{fig:sbp} -- \ref{fig:lf} and is discussed in detail in the following paragraphs, along with briefer comments about Models A and C. \subsubsection{Surface brightness profile}\label{sec:sbp} The surface brightness profile of the model is compared in Fig.\ref{fig:sbp} with the observational data from \citet{Tr1995} and a Chebyshev polynomial fit which they provide. The model is generally somewhat fainter than the observations, by about 0.3 mag.
It looks particularly faint at the edge of the core, especially when compared with the actual observational data rather than with the smooth fit to them. The mismatch looks considerably smaller in the halo, but the profile there is much steeper. \begin{center} \begin{figure} \includegraphics[width=9cm]{sbp.eps} \caption{The surface brightness distribution of the Monte Carlo model (made with the new version of the code called MOCCA), with initial conditions in Column 5 of Table \ref{tab:ics}, compared with data from \citet{Tr1995}. The solid line is their Chebyshev fit to the observational data. \red{The result for the Monte Carlo model is obtained by treating each star as a spherical shell, and projecting the model on the sky. When a line of sight passes just inside the shell of a bright star the inferred surface brightness shows a spike, as in this model at a projected radius of about 15 arcsec.} } \label{fig:sbp} \end{figure} \end{center} For Model A the surface brightness (not shown) is close to 17.5 up to a projected radius of 20 arcsec, too faint (by up to 0.5 mag in places) from there up to about 100 arcsec, and quite satisfactory thereafter. \red{The surface brightness of Model C matches the observational data rather well (within the size of the symbols in Fig.\ref{fig:sbp}).} \red{While this discussion has been expressed in terms of a comparison of surface brightness at a given radius, other interpretations are possible. For example, a model which exhibits an underluminous halo may simply be one that is too small (in radius).} \subsubsection{The projected velocity dispersion profile} It is hard to characterise the fit of the velocity dispersion profile of Model B (Fig.\ref{fig:vdp}) with confidence, because of the large scatter in the observational data, but it may be best summarised by saying that it would be hard to improve. Perhaps the subjective impression is that the velocity dispersion of the model is a little too small, but a number of factors should be borne in mind. First, the outermost point includes stars close to the tidal radius (about 32 arcmin, according to \citet{Ha1996}), where velocity dispersion profiles are elevated by the effects of the tidal field \citep{Ku2010}\footnote{ This is especially true around perigalacticon. Note that the current Galactocentric distance of M22 is 4.9kpc \citep{Ha1996}, i.e. about one third of the way from peri- to apogalacticon.}, and these effects are not included in the Monte Carlo models. Second, even with a binary fraction of 5\%, the velocity dispersion may be elevated by the internal motion of binaries. Third, membership was determined on the basis of two spectral line indices, the radial velocity, and projected distance from the cluster centre, which led to the inclusion of only 345 stars out of the total of 3407 spectra, and so interlopers may still exist. Fourth, the typical uncertainty in the radial velocity of an individual star is about 3kms$^{-1}$. On the modelling side, it is also important to recall that the Monte Carlo model ignores the rotation of the cluster. \begin{center} \begin{figure} \includegraphics[width=9cm]{vdp.eps} \caption{The line-of-sight velocity dispersion profile of the same model as in Fig.\ref{fig:sbp}, compared with data from \citet[\red{their} Fig.3]{La2009}. The model data are plotted only up to the outermost radius of the observational surface brightness profile shown in Fig.\ref{fig:sbp}.
} \label{fig:vdp} \end{figure} \end{center} The projected velocity dispersion profile for Model A is very similar to that for Model B (just described), but for Model C the result is \red{noticeably poorer. Though the central line-of-sight velocity dispersion is satisfactory at about 7 kms$^{-1}$, at larger radii it falls more steeply than the result for Model B shown in Fig.\ref{fig:vdp}, closely following the lower envelope of the observational data outside about 5 arcmin.} \subsubsection{The local luminosity function}\label{sec:lf} Compared with the observational data, the luminosity function of Model B (Fig.\ref{fig:lf}) shows a deficit of stars brighter than turn-off ($m_V\simeq 18.4$) and in a section of the main sequence. At the brightest magnitudes the deficit may reach 0.5 dex or more, though the absence of error estimates in the observational data here makes this uncertain. In the rest of the main sequence the agreement seems satisfactory, especially in the absence of any estimate of the uncertainty in the Monte Carlo prediction. These results may go some way to explaining the fact that the surface brightness of the model is generally a bit too low (Sec.\ref{sec:sbp}). \begin{center} \begin{figure} \includegraphics[width=10cm]{lf.eps} \caption{The luminosity function of the same model as in Fig.\ref{fig:sbp}, compared with data from \citet{PZ1999}. Though some of these data are ground-based, all are scaled to their HST field. The error bars, where available, have been read from their Figs.9 and 11.} \label{fig:lf} \end{figure} \end{center} Since the agreement near turn-off seems satisfactory, it is difficult to understand how the deficit can rise so much within the small mass-range of stars brighter than turn-off, unless there is some flaw in the stellar evolution package in post-main sequence evolution. Similarly, it is difficult to know how the agreement along the main sequence could be improved by varying the mass function index $\alpha_1$ (see Table \ref{tab:ics}). The mismatch of the luminosity function of Model A to the observational data is of a similar magnitude to that for Model B, but is qualitatively different. The model has an excess of stars brighter than about magnitude 25, and a deficit at fainter magnitudes. Model C \red{is qualitatively similar to Model B, except that the fit to the observational data is slightly worse around magnitude $m_V \simeq 18$ and around $m_V = 25$, but fits better between these limits.} \subsubsection{Dynamical evolution}\label{sec:de} { As we shall see in Sec.\ref{sec:CD}, the dynamical evolutionary phase of a cluster is one of the main factors in assessing its likely population of stellar-mass black holes, and so we discuss the dynamical evolution of Model B here. The initial mass of the model is about $5.70\times10^5M_\odot$, and shrinks to about $3.20\times10^5M_\odot$ at 12 Gyr. The resulting modest decrease in the tidal radius is shown in Fig.\ref{fig:radii}. The value at the present day, about 73.6pc, greatly exceeds the observational value \citep{Ha1996} of about 29.7pc, but these can mean very different things in the case of a model which underfills its tidal radius, as here: the initial edge radius of the King model is 25.0pc, compared with the initial tidal radius of 89pc (Table \ref{tab:ics}). The value of the tidal radius of Model B is consistent with the Galactic potential.
As mentioned in Sec.\ref{sec:model}, the Galactic orbit of the cluster takes it between 3 and 9 kpc from the Galactic Centre \citep{Di1999}, and in a Galactic potential with a flat rotation curve at 220km/s the range of tidal radii is from about 50 to 105pc, which includes the value in the model, but not the observational one. Fig.\ref{fig:radii} also shows two versions each of the core and half-mass radii, i.e. a theorist's version and an observer's one. The observer's values compare well with those given in \citet{Ha1996}, which are about 1.24pc and 3.12pc for the core and half-light radii, respectively. } \begin{center} \begin{figure} \includegraphics[width=10cm]{radii.eps} \caption{The evolution of the tidal radius, two versions of the half-mass radius, and two versions of the core radius, for Model B.} \label{fig:radii} \end{figure} \end{center} \section{The population of stellar-mass black holes}\label{sec:BHpop} \subsection{Evolution of total numbers}\label{sec:BH} Finally we turn to the main motivation of our study, which is the number of stellar-mass black holes at 12Gyr. The number of stellar-mass black holes formed ranges between about 200 and 900 as the slope of the upper mass function, $\alpha_2$, is decreased from 3.0 to 2.6. The numbers for the three best full-sized models are given in Table \ref{tab:ics} and are labelled as $N_{BH0}$, although of course this is not the initial number. Though the numbers vary widely, this is almost entirely explained by the variation in $\alpha_2$. { The subsequent evolution of the number of black holes depends crucially on the primordial kicks given to all new black holes, as these three models illustrate. If all new black holes are given a kick with a 1-dimensional dispersion of 190kms$^{-1}$ (as is commonly considered for neutron stars), almost all escape promptly, and very few are still present at 12 Gyr. This is illustrated in Fig.\ref{fig:nbhABC} by the data for Model A, and the final number is listed as $N_{BH12}$ in Table \ref{tab:ics}. For this particular quantity the result has not been scaled from $N_\ast$ to $N$, since the scaling factor is nearly unity. } \begin{center} \begin{figure} \includegraphics[width=10cm]{nbhABC.eps} \caption{The evolution of the number of stellar-mass black holes in three models. For two models the initial number formed lies outside the plotted range, but is given in Table \ref{tab:ics}. } \label{fig:nbhABC} \end{figure} \end{center} { In the absence of natal kicks, exemplified by Model C in Table \ref{tab:ics} and Fig.\ref{fig:nbhABC}, the fraction of all stellar-mass black holes surviving at 12 Gyr is about \red{10}\%. Similar values were obtained in the small-scale models produced by {\sl ICFind}. The other recipe for natal kicks of black holes that we tried is the fall-back procedure of \citet{BKB2002}, which applies no kick if a large amount of mass from the supernova envelope falls back onto the degenerate remnant. Model B is the best of the full-sized models in which this procedure was adopted, and the fraction of black holes remaining at 12Gyr was about 2\%. } \subsection{The black hole population at 12Gyr} { As may be expected from the effects of mass segregation, the spatial distribution of the black holes at 12Gyr is very centrally concentrated (Fig.\ref{fig:bhdist}, \red{which shows data for Model C}). \red{The spatial distribution of the few} binaries in which at least one component is a black hole \red{is statistically indistinguishable}.
In addition to two-body interactions leading to mass segregation, such binaries are subject to energetic dynamical interactions which can send the binary into the halo of the cluster, but there is no evidence from Fig.\ref{fig:bhdist} that this is noticeable in their spatial distribution. In Model B the outermost black hole binary is at 0.35pc from the centre. The projected distances from the centre of M22 of the two black holes found by \citet{St2012} are 0.25pc and 0.4pc. } \begin{center} \begin{figure} \includegraphics[width=10cm]{bhdist.eps} \caption{The spatial distribution of the black holes and black hole binaries \red{in Model C} at 12Gyr, compared with the spatial distribution of all stellar mass (including black holes). No axis labels or ticks are given for the number of black holes binaries (i.e. binaries in which at least one component is a black hole); the curve represents the cumulative fraction of the number, scaled to the vertical size of the frame. \red{For comparison, the two black holes discovered by \citet{St2012} lie at {\sl projected} radii of 0.25 and 0.4pc, respectively.}} \label{fig:bhdist} \end{figure} \end{center} Table \ref{tab:ics} gives more detail on the numbers of black hole binaries, broken down according to the type of the companion. Again as expected (from the effect of exchange interactions and their mass-dependence) the companions \red{tend to be} drawn from the relatively high-mass populations in the model. Because of the interest in finding a source for the emission of observed black holes in M22, details are given in { Table 2} of the companions and orbital parameters. The binary identifier gives the model (Table \ref{tab:ics}) and the order in increasing radial distance within the model. Unfortunately not one of these binaries is close to Roche-lobe overflow. { Though the data correspond to conditions at a time close to 12Gyr, we have also checked that none of the black hole binaries are accreting at any time in the period 10--12Gyr, in any of the three full-sized models A--C. \blue{From the total of 59 black holes in these models, it follows that the probability that a black hole is accreting from a binary companion is at most a few percent. Therefore, if it is assumed that the two black holes in M22 have Roche-lobe filling companions, model A can be ruled out, i.e. the model in which all black holes experience natal kicks similar to those of neutron stars. This conclusion is model-dependent, however, for reasons given in the next paragraph and in Sec.\ref{sec:limitations}.}} Despite the evidence of models A--C, we also checked three other models with rather similar initial parameters, and found altogether four examples of black holes accreting from evolving stellar companions. According to the models, accretion continued for at least 0.5Gyr, and so the probability \blue{that a black hole is accreting from a binary companion} \blue{could indeed be a few percent}. It is, however, difficult to estimate this probability, especially as it is likely to depend on the choice of parameters for the primordial binary population, and we have not attempted to explore this. 
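As a simple consistency check on the statement above that none of these binaries is close to Roche-lobe overflow, one can compare each companion's radius with an estimate of its Roche-lobe radius at periastron. The short sketch below is purely illustrative and is not part of our modelling pipeline; it uses the parameters of binary B1 from Table 2, the standard Eggleton approximation for the Roche-lobe radius, and the assumption that the periastron separation $a(1-e)$ is the relevant scale.
\begin{verbatim}
# Illustrative check (not part of the modelling pipeline): is binary B1
# from Table 2 anywhere near Roche-lobe overflow?  Values in solar units.
import math

m_bh, m_comp = 12.6, 0.78      # black hole and companion masses (Msun)
r_comp       = 1.02            # companion radius (Rsun)
a, e         = 177.0, 0.62     # semi-major axis (Rsun) and eccentricity

q = m_comp / m_bh              # mass ratio of the (potential) donor to the BH
# Eggleton-type approximation for the Roche-lobe radius in units of the
# separation, here evaluated at periastron, a*(1-e).
rl_over_sep = 0.49 * q**(2.0/3) / (0.6 * q**(2.0/3) + math.log(1 + q**(1.0/3)))
r_lobe = rl_over_sep * a * (1 - e)

print(f"Roche-lobe radius ~ {r_lobe:.1f} Rsun vs companion radius {r_comp} Rsun")
# -> the companion underfills its Roche lobe by roughly an order of
#    magnitude, consistent with the absence of accretion noted above.
\end{verbatim}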
\begin{table*} \begin{minipage}{166mm} \begin{center} \caption{Black hole binaries in models of M22} \end{center} \begin{tabular}{rccccccccc} Binary&A1 &B1&B2&B3&B4&B5&C1&C2&C3\\ Primary mass ($M_\odot$)&10.0&12.6&10.0&11.0&10.0&8.8&\red{14.6}&\red{9.0}&\red{3.5}\\ Companion mass ($M_\odot$)&0.23&0.78&9.9&0.69&10.0&0.64&\red{10.0}&\red{0.33}&\red{0.61}\\ Companion type&MS&MS&BH&MS&BH&WD&\red{BH}&\red{MS}&\red{MS}\\ Companion radius ($R_\odot$)&0.24&1.02&--&0.70&--&0.012&\red{--}&\red{0.31}&\red{0.57}\\ Semi-major axis ($R_\odot$)&312&177&106&132&399&449&\red{112}&\red{157}&\red{161}\\ Eccentricity&0.52&0.62&0.28&0.44&0.99&0.38&\red{0.84}&\red{0.46}&\red{0.80}\\ \end{tabular} \end{minipage} \end{table*} \section{Conclusions and discussion}\label{sec:CD} \subsection{Conclusions} \red{Motivated by the recent discovery of two stellar-mass black holes in the Galactic globular cluster M22, we have constructed dynamic evolutionary models of this object in order to assess the survival of its population of black holes to the present day. We find that the result depends heavily on the assumptions made about natal kicks applied to new stellar-mass black holes. For kicks with a one-dimensional dispersion of 190km/s, the number of stellar-mass black holes at the present day is no more than one or two (Model A in Table \ref{tab:ics}). If no kicks are applied, then the fraction remaining at the present day is of order 0.1, resulting in a number of order 40 (Model C). Model B represents an intermediate, but physically motivated assumption about natal kicks, and results in a present-day population numbering 14.} \red{ We computed the dynamical evolution of our models with a Monte Carlo method. This code, of which we used two versions, includes two-body relaxation, binaries and their dynamical interactions, escape in the Galactic tide, and procedures for the internal evolution of both single and binary stars. Using a new procedure, we have explored hundreds of sets of initial conditions so as to produce models which, after 12Gyr of simulated evolution, resemble M22 in their surface brightness profile, velocity dispersion profile and stellar luminosity function. Possible initial conditions obtained by this procedure are summarised in Table \ref{tab:ics}, and Figs.\ref{fig:sbp}--\ref{fig:lf} compare one of the evolved models with the observational data.} \subsection{Black holes and cluster evolution}\label{sec:bh-and-evolution} It is useful to try to draw some general lessons about the surviving black hole populations in old globular clusters from the modelling of M22 described in this paper, and from similar models of a few other objects, summarised here in Fig.\ref{fig:nbh}. Some, like M4 and NGC6397, lose all, or almost all, of their black holes well before the present day, while others (47 Tuc and M22) retain a\red{n appreciable} fraction (assuming\red{, in the case of M22,} that natal kicks are moderated in some way). These facts are related to the evolution of the core. As we have seen (Fig.\ref{fig:radii}) the core of M22 shows no sign of collapsing yet. Even the very concentrated cluster 47 Tuc is no more than half-way to core collapse \citep{GH2011}. Of the four clusters which are under discussion, these are also the two with \red{appreciable} residual populations of black holes (provided that kicks do not eject almost all new black holes). 
The link between black hole populations and the evolution of the core has been noticed before \citep{MPPZH2004}, and is underpinned by a recent theoretical treatment by \citet{BH2013}. These results show that expansion of the core (and indeed of the half-mass radius) can be driven by dynamical interactions among the black holes, which inevitably lead to their escape. Eventually the population of black holes is insufficient to sustain the flow of energy by relaxation in the outer parts of the cluster, and then the core begins to \red{contract}. \red{As a result of this, energy is increasingly generated by interactions between the remaining black holes and the stars of lower mass, and the rate of escape of black holes declines. This change can be seen in Fig.\ref{fig:nbhABC} at about the time when the core radius reaches its largest values (Fig.\ref{fig:radii}). } This phase of core \red{contraction} ends at what \citet{BH2013} call ``second core collapse'' (the first being the original collapse of the black hole subsystem), when some other mechanism of generating energy (e.g. primordial binaries) becomes efficient enough. \red{The evolution of our model of M22 is more complicated than that of the idealised models considered by \citet{BH2013}, but does not differ qualitatively. Indeed, though} stellar evolution also contributes to the early expansion of the half-mass radius, \citet{GH2011} showed that primordial binaries make little difference at the early stages \red{(in their Monte Carlo model of 47 Tuc)}. The upshot of th{\red{ese discussions} is that \red{appreciable} populations of stellar-mass black holes are only to be expected in clusters which have not yet passed (second) core collapse. Other things being equal, this means clusters which have a sufficiently long evolutionary time scale, and we note that the half-mass relaxation times of NGC6397 and M4 are under 1Gyr ($\log t_{rh}=8.60,8.93$, respectively, according to \citet{Ha1996}), while those of M22 and 47 Tuc exceed 1Gyr ($\log t_{rh}=9.23,9.55$, respectively). These considerations allow us to synthesise not only our modelling of the four globular clusters that we have discussed, but also two other recent studies. \begin{enumerate} \item \citet{MUFR2013} also used a Monte Carlo code to study the problem, though the model was not specifically geared to M22. The retention factor was high, about 86\%, and more than half of the retained black holes still survived in the cluster at 12Gyr. We estimate the half-mass relaxation time at 12Gyr to be about $6\times10^9$yr, though this is based on the half-mass radius, whereas the estimates above are based on the half-light radii. Estimating these radii from Model B (Fig.\ref{fig:radii}), we find that the comparable value of the half-mass relaxation time is about $2\times10^9$yr, a little larger than the value for M22. \item The other model which we mention here is a direct $N$-body model \citep{SH2013} with $N = 2.5\times10^5$ initially, and a similar binary fraction to our Monte Carlo models. Though smaller than M22 in mass, the larger initial radius of the $N$-body model gives it a value for the relaxation time at 12Gyr of about 2.1Gyr. The initial retention fraction was 10\%, but even so 16 remained at 12Gyr. Naively, this would scale to about 50 for an initial model comparable in size to our suggested initial conditions for M22. 
\end{enumerate} } { Despite this tidy picture, mention must be made of M62, which has a recently announced black hole candidate \citep{Ch2013}, despite an uncomfortably low relaxation time: $\log t_{rh} = 8.98$.} \subsection{Limitations of the modelling}\label{sec:limitations} Now we consider some aspects of the models which could have a bearing on these conclusions. In the first place we can make no claim for the uniqueness of the initial conditions we have derived. In particular, if more compact initial conditions exist (i.e. with a smaller half-mass radius), then the central escape velocity would be higher than in the existing models (for example, about 57km/s at the start of Model B), and the retention fraction of black holes would be greater, under any reasonable hypothesis on the magnitude \red{of natal kicks}. More problematic are aspects of the evolutionary history of globular clusters which are not modelled at present in the Monte Carlo code. It has recently been suggested \citep{LBMP2013} that accretion of interstellar gas (slow ejecta from stellar evolution) will act as a resistive force on the motions of black holes. This takes us to perhaps the most popular scenario for the formation of second generations in Galactic globular clusters (see, for example, \citealt{DE2008}), in which the ejected gas sinks to the centre of the original cluster of first-generation stars, and forms a second generation, while much of the first generation escapes. No evolutionary simulation yet includes these complex processes, and one can only argue qualitatively about how this may affect our conclusions. In this scenario it is often argued that the first generation may be more massive than the second. Therefore the cluster in which the first generation of stellar-mass black holes formed would have had a much higher escape velocity than we have envisaged, making retention of a large fraction of these black holes much more likely. They would be centrally concentrated (by mass segregation), and would not be expected to escape, unlike much of the rest of the first generation. Equally, it is hard to see how the survival of black holes in the second generation would be adversely affected by being immersed in the potential well of the remaining first generation. \blue{Finally, these considerations suggest that sufficient numbers of black holes might well survive to the present day in this scenario, even if they were subject to natal kicks as in our model A.} \section*{Acknowledgements} We thank Jay Strader for guidance on the choice of observational data on M22, \red{and the referee for his comments, which have markedly improved our efforts}. This work was partly supported by the Polish Ministry of Science and Higher Education through the grant N N203 38036, \red{and by the National Science Centre through the grant DEC-2012/07/B/ST9/04412}.
\section{Introduction} The Advanced Wakefield (AWAKE) experiment is a proof-of-principle plasma wakefield accelerator with demonstrated energy gains for \SI{\sim1}{\pico\coulomb} electron bunches of up to \SI{2}{GeV} over \SI{10}{\metre} of rubidium plasma \cite{Adli2018}, using proton bunches from the SPS at CERN as a driver. The charge and energy gain are measured using a spectrometer at the end of the beamline \cite{Bauche2019} comprising a quadrupole doublet, dipole and scintillating screen. An electron beam derived from the stripping of \ensuremath{^{208}\textnormal{Pb}^{81+}}\ ions was delivered to this device in order to study the charge response of the screen and the electron optics. The possibility to strip the ions at different locations, and the imaging capabilities of the spectrometer and stripping foil also allowed the electron beam properties to be studied, with a view to assessing its suitability for future AWAKE experiments. The experimental set-up is illustrated in Figure \ref{fig:layout}.\par As part of the Gamma-Factory project \cite{Krasny2015} machine development (MD) runs, partially stripped Pb ions (PSI) were accelerated in the SPS. In order to study the stability of high energy atomic beams, Pb$^{81+}$ and Xe$^{39+}$ were accelerated up to rigidity-equivalent energies to \SI{400}{GeV} protons, that is, the total relativistic energy $E_{ion}$: \begin{align} E_{ion}^2 = Z_{ion}^2\left(E_p^2 - E_{0(p)}^2\right) + E_{0(ion)}^2 \end{align} where $Z$ is the ion charge, $E_p$ the proton energy (\SI{400}{GeV} in this case), and $E_{0(p), (ion)}$ the rest mass energy of the proton or ion. For the AWAKE PSI run, only $^{208}$Pb$^{81+}$---hydrogen-like Pb---was used, meaning the ions were accelerated to \SI{32.40}{TeV}, or \SI{155.7}{GeV/n}. The remaining electron can be stripped by passing the beams through a thin foil or screen, to produce electron beams with well defined energies and narrow energy spreads. The energy of the resultant electron beam can be calculated from simple kinematic arguments; the binding energy of the electron being ignored, the ions and ionized electrons have the same Lorentz factor $\gamma$, so \begin{align} E_{e} = \frac{E_{ion}}{E_{0(ion)}}E_{0(e)}\label{eqn:eenergy} \end{align} or \SI{85.46}{MeV} for H-like Pb ($E_{0(e)} = \textnormal{\SI{0.511}{MeV}}$). \begin{figure} \centering \includegraphics[width=\columnwidth]{figure1.pdf} \caption{Schematic layout of the partially-stripped ion experiment at AWAKE. Diagram not to scale.} \label{fig:layout} \end{figure} \section{Cross-section measurement method} \subsection{Aluminium} Partially-stripped ions delivered to AWAKE first pass through a \SI{200}{\micro\metre} Al vacuum window separating the SPS vacuum system from that of AWAKE. This is followed by a dipole, whose function is ordinarily to allow merging of the proton and laser beams for the AWAKE experiment. In this case, it provides horizontal separation of the \ensuremath{^{208}\textnormal{Pb}^{81+}}\ and \ensuremath{^{208}\textnormal{Pb}^{82+}}\ beams produced when part of the \ensuremath{^{208}\textnormal{Pb}^{81+}}\ beam is stripped by passage through the vacuum window. Approximately \SI{25}{\metre} downstream of this bend, the beam is imaged on a \SI{300}{\micro\meter} Si BTV screen. The experiment layout is shown in Figure \ref{fig:layout}. 
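As a quick numerical cross-check of the rigidity-equivalence relation and Eq.~\ref{eqn:eenergy}, the short sketch below reproduces the quoted beam energies. It is illustrative only; the rest energy of \ensuremath{^{208}\textnormal{Pb}^{81+}}\ is an assumed input (the atomic mass of $^{208}$Pb minus 81 electron rest masses), and small differences in the adopted rest masses move the last quoted digit.
\begin{verbatim}
# Minimal numerical check of the ion and electron energies (units of GeV).
# The ion rest energy is an assumed input: atomic mass of 208Pb
# (207.9766521 u) minus 81 electron masses.
import math

u_GeV = 0.9314941             # GeV per atomic mass unit
m_e   = 0.000510999           # electron rest energy
m_p   = 0.938272              # proton rest energy
E_p   = 400.0                 # SPS proton energy
Z_ion = 81
m_ion = 207.9766521 * u_GeV - 81 * m_e    # ~193.7 GeV

E_ion = math.sqrt(Z_ion**2 * (E_p**2 - m_p**2) + m_ion**2)
gamma = E_ion / m_ion
E_e   = gamma * m_e

print(f"E_ion = {E_ion/1e3:.2f} TeV = {E_ion/208:.1f} GeV/n, "
      f"gamma = {gamma:.0f}, E_e = {E_e*1e3:.2f} MeV")
# -> approximately 32.40 TeV, 156 GeV/n, gamma ~ 167 and E_e ~ 85.5 MeV,
#    consistent with the values quoted above to within rounding.
\end{verbatim}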
The relative intensities of the two beamspots provide the stripping fractions, from which the stripping cross-section can be calculated using the Beer--Lambert law: \begin{align} \ensuremath{\sigma_{s\left(Al\right)}} = \frac{-\log P}{n_{Al}l_{Al}} \end{align} where $P$ is the proportion of ions that remain in the $81+$ state, $n_{Al}$ is the number density of the Al target and $l_{Al}$ the target thickness.\par \subsection{Silicon} \begin{figure} \centering \includegraphics[width=\columnwidth]{figure2.pdf} \caption{Results of BDSIM simulation showing variation of transport efficiency through the AWAKE spectrometer electron optics with initial angular divergency. Simulations were performed with 1000 particles per data point.} \label{fig:sxp1} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figure3.pdf} \caption{GEANT4 simulation results for variation of electron bunch angular divergency with stripping cross-section in Si.} \label{fig:sxp2} \end{figure} The Si BTV screen acts as a second stripping foil for the remaining \ensuremath{^{208}\textnormal{Pb}^{81+}}\ population, and the electrons which are stripped at this position can be transported to the AWAKE spectrometer. The spectrometer consists of a quadrupole doublet followed by a single dipole and a Lanex scintillating screen \SI{1}{\metre} in length. The screen charge-to-light calibration was determined independently using the CERN Linear Electron Accelerator for Research (CLEAR) facility \cite{Bauche2019}, meaning the electron bunch charge incident on the screen is known. As the original ion bunch charge $Q_{ion}$ is measured using a beam charge transformer in the CERN SPS ring, the stripping cross-section for \ensuremath{^{208}\textnormal{Pb}^{81+}}\ in Si (\ensuremath{\sigma_{s\left(Si\right)}}) can also be determined using the Beer--Lambert law, using the bunch population of the unstripped ion beam ($P$) reaching the BTV screen, and the electron bunch charge ($Q_e$), corrected for transport losses ($\epsilon_t$) from BTV to spectrometer: \begin{align} \ensuremath{\sigma_{s\left(Si\right)}} = \frac{-\log \left(1 - \frac{81Q_e}{\epsilon_tPQ_{ion}}\right)}{n_{Si}l_{Si}}\label{eqn:sixs} \end{align} where $n_{Si}$ and $l_{Si}$ are the target number density and thickness. Determination of $\epsilon_t$ was achieved using Beam Delivery Simulation (BDSIM) \cite{Nevay2020} tracking simulations and measurement of the electron optical properties of the spectrometer with the generated electron beam, and GEANT4 \cite{Agostinelli2003,Allison2006,Allison2016} simulations to derive the cross-section--angular-divergence relation. Since $\epsilon_t$ is a function of the angular divergence of the beam, which itself is a function of \ensuremath{\sigma_{s\left(Si\right)}}, it can be eliminated from Equation \ref{eqn:sixs}. The resulting equation depends on the choice of fitting functions for $\epsilon_t\left(\sigma_{x'}\right)$ and $\sigma_{x'}\left(\ensuremath{\sigma_{s\left(Si\right)}}\right)$, but can be solved numerically for \ensuremath{\sigma_{s\left(Si\right)}} (though with larger uncertainty than \ensuremath{\sigma_{s\left(Al\right)}}). Here, the model is (see Figures \ref{fig:sxp1} and \ref{fig:sxp2}): \begin{align} \epsilon_t\left(\sigma_{x'}\right) &= \frac{a_0}{a_0 + \sigma_{x'}^{a_1}}\\ \sigma_{x'}\left(\ensuremath{\sigma_{s\left(Si\right)}}\right) &= b_0\left(1 - \exp\left(\frac{-\ensuremath{\sigma_{s\left(Si\right)}}}{b_1}\right)\right) + b_2 \end{align} where $a, b$ are fitted parameters.
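To make the elimination of $\epsilon_t$ concrete, the sketch below solves Equation \ref{eqn:sixs} self-consistently with the two fitted models above, rearranged into a forward model for the electron charge so that the root search never takes the logarithm of a negative argument. All numerical values (bunch charges, surviving $81+$ fraction and the parameters $a_i$, $b_i$) are placeholders chosen for illustration and are not the values used in the analysis.
\begin{verbatim}
# Illustrative self-consistent solution of the Si cross-section equation:
# sigma_s enters both directly and through eps_t(sigma_x'(sigma_s)).
# ALL numbers below are placeholders, not the fitted or measured values.
import math
from scipy.optimize import brentq

n_Si, l_Si = 4.99e28, 300e-6       # Si number density (m^-3) and thickness (m)
Q_ion = 81 * 3e8 * 1.602e-19       # illustrative ion bunch charge (C)
P     = 0.25                       # illustrative surviving 81+ fraction at the BTV
Q_e   = 7e-12                      # illustrative electron charge at the screen (C)
a0, a1     = 20.0, 1.0             # placeholder fit of eps_t vs divergence (mrad)
b0, b1, b2 = 5.0, 1.0e-25, 0.5     # placeholder fit of divergence (mrad) vs sigma_s

def eps_t(sx):                     # transport efficiency vs divergence
    return a0 / (a0 + sx**a1)

def sigma_xp(s):                   # divergence vs stripping cross-section
    return b0 * (1.0 - math.exp(-s / b1)) + b2

def residual(s):
    # predicted electron charge for cross-section s, minus the measured one
    q_pred = eps_t(sigma_xp(s)) * P * Q_ion * (1.0 - math.exp(-n_Si * l_Si * s)) / 81.0
    return q_pred - Q_e

sigma_s_Si = brentq(residual, 1e-27, 1e-23)   # of order 1e-25 m^2 for these inputs
\end{verbatim}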
\subsection{Cross-section calculation method} The stripping cross-section was calculated using the plane-wave Born approximation, following the method of References \cite{Anholt1987,Anholt1979,Khandelwal1969}, with modifications following \cite{Sorensen1998}. This defines the cross-section $\sigma_s$ as the sum of two components, corresponding to a Coulomb interaction ($\sigma_{Coul}$) and a transverse interaction ($\sigma_{trans}$), with: \begin{align} \sigma_{Coul} = f\left(\eta_k\right)\frac{4\pi a_0^2 Z_t^2 \alpha}{Z_p^2}\label{eqn:coul} \end{align} and \begin{align} \sigma_{trans} = 5.23\times10^3\left(\frac{Z_t}{Z_p}\right)^2\left(\frac{\log\gamma^2 - \beta^2}{\beta^2}\right)\label{eqn:trans} \end{align} defined in barns, where $Z_t$, $Z_p$ are target and projectile atomic number, $\eta_k = \left(\frac{\beta}{Z_p \alpha}\right)^2$, $a_0$ the Bohr radius, $\alpha$ the fine structure constant, $f$ is a slowly varying factor precalculated and tabulated for interpolation in \cite{Khandelwal1969}, and $\beta$ and $\gamma$ the usual relativistic factors. It can be seen that the transverse interaction will eventually come to dominate this calculated cross-section as $\sigma_{Coul}$ approaches a constant and $\sigma_{trans} \propto \log\gamma^2$, an observation not borne out by experiment \cite{Krause1998,Krause2001}; a correction \cite{Sorensen1998} to this calculation, which defines a critical value for $\gamma$, \begin{align} \gamma_c \sim \frac{60\left(\alpha Z_p\right)^2}{Z_t^{1/3}}\label{eqn:saturate} \end{align} is used to compensate for this. $\gamma$ is then replaced with a value which saturates at $\gamma_c$; at the energy considered in this paper, this amounts to using $\gamma_c$ in the calculation instead. \section{Cross-section and spectrometer calibration results} Figure \ref{fig:btvspot} shows the BTV image of the two charge states in the beam at the second stripping position. From a fit to this with the sum of two rotated 2-D Gaussian functions offset from one another, the relative bunch populations can be determined; such a fit is shown in the Figure. Note that the two beamspot sizes are free parameters, yet the major and minor axis lengths agree, providing confidence that although the weaker spot is quite faint, the fitting procedure is behaving correctly. This leads to a value for \ensuremath{\sigma_{s\left(Al\right)}}\ of \SI{1.24\pm0.11e-25}{\metre\squared}, compared to a calculated value of \SI{1.09\pm0.22e-25}{\metre\squared}, which is in good agreement, lending further weight to the correctness of the adjustment of \cite{Sorensen1998} to the calculated value. This also agrees well with the previous measurement of \SI{1.3\pm0.1e-25}{\metre\squared} by \cite{Krause1998}. The uncertainty on the measurement is dominated by the shot-to-shot scatter of the beamspot areas, while the calculation uncertainty is taken to be 20\%, arising from the dependence on the choice of atomic photoabsorption cross-section used, as well as the basic method used by \cite{Sorensen1998}, which follows \cite{Williams1935,Jackson1975} by separating contributions into Coulombic and transverse. For \ensuremath{\sigma_{s\left(Si\right)}}, a value of \SI{1\pm0.5e-25}{\metre\squared} was determined, which, given the large uncertainty, is in agreement with the calculated value of \SI{1.26\pm0.25e-25}{\metre\squared}.
For \ensuremath{\sigma_{s\left(Si\right)}}, the uncertainty is dominated by the fact that the stripping probability is very high for \SI{300}{\micro\metre} Si, so uncertainties in the transport efficiency, ion beam charge etc., propagated through Equation \ref{eqn:sixs}, become relatively large. The cross-sections for Al and Si are expected to be similar as the target atomic numbers are close to each other. This is borne out by the measured and calculated values. \begin{figure} \centering \includegraphics[width=\columnwidth]{figure4.pdf} \caption{Double ion beamspot at downstream stripping position, showing the contours of the fitted double Gaussian.} \label{fig:btvspot} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figure5.pdf} \caption{Fits to beam size at the spectrometer screen as a function of quadrupole current. The best fit values for the beam divergence width are $\sigma_{xp} = \SI{4.89}{mrad}$ and $\sigma_{yp} = \SI{2.63}{mrad}$, consistent with the expected divergence distribution width after losses from high divergence tails. Error bars are 1 standard deviation.} \label{fig:beamsize} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figure6.pdf} \caption{Comparison of measured central energy and energy distribution from GEANT4 simulation of material effects on the generated electron beam.} \label{fig:eneres} \end{figure} Figure \ref{fig:beamsize} is a study of the electron optics of the spectrometer. The fitted horizontal and vertical divergence widths are much lower than those predicted by GEANT4. However, losses (by collisions with the beampipe) from high-divergence areas of phase space are observed in the BDSIM transport simulation using the GEANT4 divergence widths, which lowers the width of the distributions observed at the screen accordingly. In addition, this lowers the predicted transport efficiency (see Figure \ref{fig:sxp1}). Using the calculated value for \ensuremath{\sigma_{s\left(Si\right)}}, these measurements and simulation can also provide an in situ calibration of the spectrometer screen charge-to-light response, which is found to be consistent with that determined at CLEAR. This is a useful cross-check, as the CLEAR calibration is performed with a different experimental geometry and therefore requires a number of corrections to map back to AWAKE. Finally, Figure \ref{fig:eneres} shows a verification of the spectrometer energy scale, which illustrates that within the resolution of the spectrometer, the energy scale is correct. The resolution limit in this case originates from the optical line between spectrometer screen and viewing camera. \section{PSI beam diagnostics and electron beam quality} Study of the electron beam generated by this method can provide information on the ion beam parameters themselves, which could lead to the use of the technique as a PSI beam diagnostic. Specifically, information about divergence and energy spread of the ion bunch could in principle be recovered from spectrometry of the stripped electrons. To extract this information, it would be necessary to unfold the divergence of the electron beam introduced by post-stripping scattering within the stripping foil. This effect can be well predicted by simulation, but a future instrument based around this method could optimize for minimal scattering (while still producing an appreciable electron beam).
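If the intrinsic ion-beam divergence and the foil scattering are treated as independent and approximately Gaussian contributions, the unfolding amounts to a subtraction in quadrature, as in the minimal sketch below; the numbers are invented for illustration and are not measured values.
\begin{verbatim}
# Illustrative unfolding, assuming the measured electron divergence is the
# quadrature sum of the intrinsic ion-beam divergence and the scattering
# contribution from the foil (both roughly Gaussian).  Numbers are made up.
import math

sigma_measured = 5.0   # electron divergence width after the foil (mrad)
sigma_scatter  = 4.9   # scattering contribution predicted by simulation (mrad)

sigma_ion = math.sqrt(sigma_measured**2 - sigma_scatter**2)
print(f"inferred ion-beam divergence ~ {sigma_ion:.2f} mrad")
# When sigma_scatter is close to sigma_measured the subtraction is poorly
# conditioned, which is why a dedicated instrument would favour a thinner
# foil that keeps the scattering term small.
\end{verbatim}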
Figure \ref{fig:effdiv1} shows the stripping efficiency and electron beam divergence determined by simulation for a totally collimated initial ion beam, against foil thickness (for three different materials). This indicates that even regular kitchen aluminium foil (approximately \SI{16}{\micro\metre} thick) would introduce only $\sim$ \SI{1}{\milli\radian} errors into divergency measurements of the ion beam, while producing an electron beam signal of nearly 10\% of the ion beam particle count.\par \begin{figure} \centering \includegraphics[width=\columnwidth]{figure7.pdf} \caption{Stripping yield (unfilled points) and angular divergence width (filled points) for beryllium, aluminium and iron foils, extracted from GEANT4 simulations.} \label{fig:effdiv1} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figure8.pdf} \caption{Yield vs. divergence width, showing an approximately common curve for beryllium, aluminium and iron.} \label{fig:effdiv2} \end{figure} Figure \ref{fig:effdiv1} also allows one to determine optimal operating conditions in the use-case where the electron beam is not only a diagnostic of the ion beam, but of utility in and of itself. Such applications, in addition to the specific calibration task considered here, might include injection of a PSI-derived electron beam into the plasma wakefield driven by the ion beam in an AWAKE-like acceleration experiment. Ion bunch population in the present work was $\sim 3\times10^8$, which leads to a maximum electron bunch charge of \SI{48}{\pico\coulomb} with, however, a beam divergency of $\sim$ \SI{5}{\milli\radian}. The simulation results for different foil thicknesses indicate that a power law emittance scaling might be observed with foil thickness, favouring very thin foils. However, the electron yield falls exponentially with foil thickness, and moreover, yield as a function of divergence (as shown in Figure \ref{fig:effdiv2}) appears to be a common curve, so no choice of material is better than any other in this regard.\par For certain ion species and charge states, stripping via laser photoionization might be considered as an alternative. This is only possible for ions where the Doppler shifted ionization energy falls in the range accessible by lasers. This does not include \ensuremath{^{208}\textnormal{Pb}^{81+}}\ at $\gamma = 167$, but, for instance, Ca$^{17+}$ at $\gamma = 205$ would have a photoionization threshold corresponding to \SI{439}{\nano\metre} light from a counter-propagating laser. Ionization at the threshold, where the cross-section is large, with laser light could in principle produce electron beams with low divergences, where in the worst case the excess energy over threshold is added perpendicular to the beam direction and so directly contributes to electron beam divergence. Emittance growth in this case then arises from the transfer of the ion beam energy spread into electron beam divergence because the high energy tail sees laser photons Doppler shifted above the ionization threshold. \section{Conclusion} The stripping cross-sections for ultrarelativistic lead ions in two different materials have been measured, with both measurements being in broad agreement with theory and previous measurements. Consistency between two methods of calibrating the charge response of the spectrometer screen was also achieved, using the electron beam generated by the stripping process. 
This technique could be useful for future calibration exercises, but also potentially in other situations requiring correlated ion and electron beams, for instance, particle-driven wakefield experiments---provided that the required beam parameters can be generated. \section{Acknowledgements} Special thanks are given to the CERN SPS operators for their hard work in setting up partially-stripped ion delivery to AWAKE. This work was supported in part by a Leverhulme Trust Research Project Grant RPG-2017-143 and by STFC (AWAKE-UK and UCL consolidated grants), United Kingdom. M. Wing acknowledges the support of DESY, Hamburg.
\section{Introduction}\label{sec:1} \indent\indent The spectral properties of quasars in the ultraviolet-optical range have been the subject of a number of studies. The smooth part (continuum) of the UV bump, called the ``Big Blue Bump'', is traditionally considered to be thermal emission from an accretion disk, while the emission lines are attributed to this emission reprocessed in surrounding clumped gas in the form of broad and narrow line regions (BLR and NLR) as well as in a dust obscured torus, but many details of this paradigm are still poorly understood, as is the nature of broad absorption lines (BAL), possessed by about 10\% of quasars. Thus, on the one hand, the detailed study of quasar spectra plays an important role in understanding the nature of quasars. On the other hand, an accurate quasar spectrum shape, which does not vary significantly from one object to another, is needed as a template for quasar redshift measurements in large redshift surveys. Finally, precise knowledge of the intrinsic quasar spectrum shape, prior to absorption in the intergalactic medium, is required for studying the intergalactic gas, manifested as absorption features in the quasar spectra, first of all as the hydrogen Ly$\alpha$-forest. In all of the fields mentioned above, composite (mean) spectra are widely used. The possibility of their utilisation is related to the fact that quasar spectra are remarkably similar from one object to another. Due to their high signal-to-noise ratio, composite spectra reveal weak features that are rarely detectable in individual quasar spectra. Composite spectra have been compiled for a wide set of quasar samples, e.\,g. the Large Bright Quasar Survey (LBQS) \citep{francis+91}, the First Bright Quasar Survey (FBQS) \citep{brotherton+01}, the Sloan Digital Sky Survey (SDSS) \citep{vandenberk+01,pieri+10}, quasar spectra from the Hubble Space Telescope (HST) \citep{zheng+97,telfer+02}, and the Far Ultraviolet Spectroscopic Explorer (FUSE) \citep{scott2+04}. The part of the UV-optical quasar spectrum redward of the Ly$\alpha$ (1215.67\,\AA) emission line (free of hydrogen Lyman-series forests of absorption lines) has been studied quite well. As was shown, e.\,g., by \citet{vandenberk+01}, this region up to $\approx5000$\,\AA\ is described by a smooth continuum, well approximated with a power law $\sim\lambda^{\alpha_{\lambda}}$ over some limited wavelength range, broad emission lines and, in some cases, absorption lines. \citet{vandenberk+01} reported $\alpha_{\lambda}$ to be $-1.54$ within 1350--4230\,\AA, which agrees well with the values $-1.54$ and $-1.68$ obtained by \citet{brotherton+01} within 1450--5050\,\AA\ for spectra from the LBQS and FBQS respectively, and also with the results of \citet{carballo+99}, who obtained the values $-1.34\pm0.15$ and $-2.11\pm0.16$ within the 1300--3000\,\AA\ and 3000--4500\,\AA\ ranges respectively for a sample of radio-loud quasars. Throughout this paper the indicated wavelengths are rest-frame ones, unless otherwise specified. The most complete to-date list of emission lines redward of the Ly$\alpha$ emission line found in quasar composites is given by \citet{vandenberk+01}. Some statistical studies have also been conducted with samples of individual spectra, e.\,g. \citet{tang+12} studied the strong UV and optical emission line properties using a sample of 85 bright quasars.
The part of the spectrum blueward of the Ly$\alpha$ emission line, used for the Ly$\alpha$-forest studies, has been investigated much less thoroughly due to the presence of the Lyman series forests. Direct reconstruction of the intrinsic quasar spectral shape in this region is possible only for high resolution spectra, the number of which is not large, which does not allow statistical studies of subsamples with different characteristics, e.\,g. luminosity. Multi-parameter modelling of the continuum level within the Ly$\alpha$ forest region suffers from the degeneracy between the mean transmission and the parameters of lines and continuum, which is why extrapolation of the continuum from the red part of the spectrum seems to be more reliable in some cases (see e.\,g., \citealt{press+93,bernardi+03,desjacques_07,greg+10,songaila-04}). But in practice it is only an approximation of the real continuum, because the centre of the ``Big Blue Bump'' is located at $\approx1000-1300$\,\AA. This is confirmed by the values of $\alpha_{\lambda}$ obtained from HST and FUSE spectra, $-1.01\pm0.05$ within 1050--2020\,\AA\ \citep{zheng+97}, $-0.24\pm0.12$ within 500--1200\,\AA\ \citep{telfer+02}, $-1.44^{+0.38}_{-0.28}$ within 630--1155\,\AA\ \citep{scott2+04}, when compared with the values for longer wavelengths mentioned above. Due to the lower signal-to-noise ratio in the Lyman series forest region, detection of emission lines is more complicated here. For example, \citet{vandenberk+01} listed only three emission features between Ly$\alpha$ and Ly$\beta$ (1025.72\,\AA) with large uncertainties in the central wavelength. Other studies of composites and individual spectra have not significantly enlarged their number. In Table~3 we have tried to summarise the known emission features found in optical \citep{francis+91,brotherton+01,vandenberk+01,tytler+04} and UV \citep{zheng+97,telfer+02,scott2+04} composites, and in some individual spectra \citep{brotherton+94,Laor+94,Laor+1995,laor+97,Vestergaard+01,leighly+07,binette+08}, within the range of $\sim1025-1450$\,\AA\ (between the Ly$\beta$ and C{\sc iv} lines) used for studies in the present work. The spectral properties of quasars described above can be considered only as generalised ones, because despite the general similarity in spectral shapes, quasars differ in spectral index (also called spectral slope), intensity of the continuum and emission lines, line equivalent widths, etc. Some of these parameters are found to depend on each other, e.\,g., the inverse correlation of the equivalent width of some emission lines with the monochromatic flux in the UV region, known as the Baldwin effect \citep{baldwin}. Therefore, compilation of composite spectra from subsamples with similar properties (e.\,g., with similar luminosity) is of interest for studying these properties and their relation to other parameters. For example, \citet{richards+11} studied the Baldwin effect by constructing composite spectra of quasar subsamples chosen on the basis of C{\sc iv} (1549\,\AA) emission-line properties (equivalent width and blueshift). They also found that the mean blueshift of the C{\sc iv} line is approximately twice as large for radio-quiet quasars as for radio-loud ones.
\citet{reichard+03} studied the spectral properties of Broad Absorption Line (BAL) quasars, compiling composites from subsamples with different ionization levels of BALs, and found that there are substantial differences in the emission-line and continuum properties of high- and low-ionization BAL quasars, which, in their opinion, can be related to intrinsic quasar properties such as the continuum spectral index. Also using composite spectra, \citet{yip+04,vandenberk+04} studied the spectral properties of quasars as a function of luminosity and redshift, and concluded that aside from the Baldwin effect the average spectral properties are similar, e.\,g. quasars with different luminosities have similar spectral indices. In the present paper we use composite spectra compiled for subsamples of spectra with similar spectral index and study the possible effects related to neglecting the difference in this parameter. The data used in this work, the sample selection and the composite spectra compilation processes are described in Section~\ref{sec:2}. The difference between redshift measurements made with the obtained composite spectra as templates and those made with the SDSS quasar template is studied in Section~\ref{sec:3}. Section~\ref{sec:4} gives an estimation of the errors introduced into the Ly$\alpha$ forest mean transmission measurement by neglecting the difference in $\alpha_{\lambda}$ when using the composite spectra for such studies. In Section~\ref{sec:5} we present the results of our search for emission lines in our composites. Conclusions are presented in Section~\ref{sec:6}. \section[]{Data}\label{sec:2} \subsection{The SDSS DR7 quasar sample}\label{sec:2.1} \indent\indent Our sample is taken from the publicly available release of the sky-residual subtracted spectra for the Sloan Digital Sky Survey (SDSS) Legacy Release 7 \citep{Abazadjian_2009}. This release (`WH sample' hereafter), which is described in \citet{wild_10}, contains a total of 106\,006 spectra, generated using the \citet{Wild_05} scheme from the spectra of the objects from the \citet{Schneider_2010} quasar catalogue. This scheme includes a significantly improved technique for OH sky line subtraction. The strong OH sky emission lines extend over almost half of the wavelength range ($>6700$\,\AA) of the SDSS spectra and are not subtracted optimally by the previous SDSS pipelines. The redshifts in SDSS are measured mainly with the help of two techniques --- emission line measurements and cross-correlation, or one of them. The redshift errors (due to intrinsic line shifts etc.) could affect the results of our study, hence we tried to minimize their influence by using the improved redshifts of the objects from the \citet{Schneider_2010} quasar catalogue, which were generated using the \citet{hewett_10} scheme, instead of the redshifts in the headers of each fits-file. According to \citet{hewett_10}, their redshifts possess systematic biases which are a factor of 20 smaller than those of the SDSS redshift values.
\subsection{Spectra selection}\label{sec:2.2} \begin{figure} \centering \epsfig{figure=zdistr.eps,width=.98\linewidth} \caption{Redshift distribution of all objects from the WH sample with $2.3<z<4.6$ and redshift determination confidence level $>0.9$ (\textit{dash-dot}), all visually selected (\textit{dotted}) for the present study and those with the rms of the normalization constant $A$ less than 15\% (\textit{solid}).} \label{fig:zdistr} \end{figure} \indent\indent All the quasar spectra (15\,154~objects) from the WH sample with redshifts within the range 2.3$-$4.6 and redshift determination confidence level $>0.9$ were selected first. Because the SDSS is an automatic survey, there is a possibility of contamination of the sample by `wrong' objects whose general photometric or spectroscopic properties, as required by the automatic selection pipelines, are similar to those of `real' objects. Thus a visual examination was carried out. Apart from non-quasar spectra, the spectra of quasars with low signal-to-noise ratio, spectra of BAL quasars and those with Damped Ly$\alpha$ (DLA) systems were also excluded during this examination. The presence of BAL quasars in the sample could introduce additional errors into the estimation of the quasar mean continuum level. The DLA systems are a class of quasar absorbers selected for the presence of H~{\sc i} column densities $>2\cdot10^{20}$~cm$^{-2}$ (see e.\,g.~\citet{wolfe_05} for a review). They are identified as absorption features with the rest equivalent width exceeding 5\,\AA. The nature of these systems is still not understood; however, due to their high density they are usually related to galaxy formation and therefore cannot be used as representatives of the linear perturbations in the neutral intergalactic medium. After the visual examination the sample contains 4\,779~spectra. Redshift distributions of all the objects initially selected from the WH sample and of these visually selected objects are presented in Figure~\ref{fig:zdistr} (\textit{dash-dot} and \textit{dotted} respectively). \subsection{Subsamples and composites}\label{sec:2.3} \indent\indent Generating composite spectra of quasars includes three steps: (i) normalization of each spectrum, (ii) setting each spectrum to the rest frame and (iii) calculation of the mean spectrum. Before that, we smoothed all spectra with a simple moving average over 3 points. Following \citet{mcdonald+06} we removed the following wavelength regions from our analysis because of calibration problems due to strong sky lines: $5575\mbox{\AA}<\lambda<5583\mbox{\AA}$, $5888\mbox{\AA}<\lambda<5895\mbox{\AA}$, $6296\mbox{\AA}<\lambda<6308\mbox{\AA}$, and $6862\mbox{\AA}<\lambda<6871\mbox{\AA}$, where the second region is 1\AA\ wider than that presented in \citet{mcdonald+06}, because we also added the Na{\sc i} (5894.6\,\AA) interstellar line to it. Normalization of each spectrum is needed because of the different apparent flux densities. Taking into account the similarity of quasar spectra and following \citet{press+93} and \citet{zheng+97}, we normalize each spectrum to the (arithmetic) mean flux in all pixels within the rest wavelength range 1450--1470\,\AA. This range lies blueward of the C{\sc iv} emission line and is usually considered to be free of obvious emission and absorption. The last claim appears not to be precise because of weak emission lines, which can be seen in composite spectra, but we neglect this fact because the intensity of these lines is comparable to the noise level in individual spectra.
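The normalization and stacking procedure lends itself to a compact implementation. The sketch below is a schematic illustration only, not the code actually used: it assumes that each spectrum is available as observed-frame wavelength and flux arrays together with its redshift, and it combines the normalization described above with the rest-frame shift, the 2\,\AA\ rebinning and the averaging described in the remainder of this subsection.
\begin{verbatim}
# Schematic illustration of the composite-spectrum construction
# (not the actual code used): normalise each spectrum to its mean flux
# in the rest-frame 1450-1470 A window, shift to the rest frame,
# rebin onto a common 2 A grid and average.
import numpy as np

lam_grid = np.arange(800.0, 3000.0, 2.0)       # rest-frame grid, step 2 A

def add_to_stack(lam_obs, flux, z, stack, counts):
    lam_rest = lam_obs / (1.0 + z)              # shift to the rest frame
    norm_win = (lam_rest > 1450.0) & (lam_rest < 1470.0)
    A = flux[norm_win].mean()                   # normalisation constant
    flux_n = flux / A
    # rebin by averaging all pixels falling into each 2 A bin
    idx = np.digitize(lam_rest, lam_grid)
    for i, f in zip(idx, flux_n):
        if 0 < i < len(lam_grid):
            stack[i]  += f
            counts[i] += 1

stack  = np.zeros_like(lam_grid)
counts = np.zeros_like(lam_grid)
# for lam_obs, flux, z in subsample: add_to_stack(lam_obs, flux, z, stack, counts)
composite = np.where(counts > 0, stack / np.maximum(counts, 1), np.nan)
\end{verbatim}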
For further study, to reduce possible uncertainties, we used only the spectra with the rms of the normalization constant $A$ less than 15\%; there are 3\,493 of them, and their redshift distribution is shown in Figure~\ref{fig:zdistr} (\textit{solid} line). Considering the continuum redward of the Ly$\alpha$ emission line to be a power law $\sim\lambda^{\alpha_{\lambda}}$, we calculated its index $\alpha_{\lambda}$ for each individual spectrum within the range between the Ly$\alpha$ and C{\sc iv} emission lines. For this purpose we selected the following wavelength ranges from the composite spectrum compiled from all the spectra: 1278$-$1286\,\AA, 1320$-$1326\,\AA, 1345$-$1360\,\AA\ and 1440$-$1480\,\AA. The distribution of the obtained indices is shown in Figure~\ref{fig:alpha-distr}. \begin{figure} \centering \epsfig{figure=alphadistr.eps,width=.98\linewidth} \caption{The distribution of spectral indices.} \label{fig:alpha-distr} \end{figure} \begin{figure*} \centering \begin{minipage}{.75\linewidth} \epsfig{figure=spec-all.eps,width=.99\linewidth} \caption{Composite spectra of 16 subsamples within 1025--1650\,\AA. The highest spectrum has the steepest continuum redward of the Ly$\alpha$ emission line.} \label{fig:all-spec} \end{minipage} \end{figure*} Using the obtained values of $\alpha_{\lambda}$, we selected 16 subsamples, each of 200 spectra, with $\alpha_{\lambda}$ closest to $-0.7-k\cdot0.1$, $k=0,\ldots,15$. Then, dividing each $j$-th selected spectrum by its normalization constant $A^{j}$, rebinning the spectra with $\Delta\lambda_{rest}=2$\,\AA\ and stacking them in the rest frame, we obtained the arithmetic mean composite spectra. The dispersion $\sigma^{2}$ of each pixel of the composite spectrum was calculated from the noises $\sigma_{i}$ of the pixels of the individual spectra constituting it. These spectra within 1025--1650\,\AA\ are shown in Figure~\ref{fig:all-spec}. The dashed lines indicate the laboratory wavelengths of the lines identified by \citet{vandenberk+01} in this range. \subsection{Properties of subsamples}\label{sec:2.4} \indent\indent The second column of Table~\ref{tab:samples} contains the spectral indices of the obtained composite spectra. In this case they were calculated in the way described above, but the parts of the spectra most free from emission lines were selected manually in each spectrum. The parts of the spectra used for the $\alpha_{\lambda}$ determination are shown in Figure~\ref{fig:lena}. The regions selected for continuum fitting vary slightly from spectrum to spectrum, and not all four of the parts used in the calculation of the indices of the individual spectra were taken into account in the case of the composite spectra. This is clearly seen in the example of the region with the shortest wavelengths: as the spectrum steepens, its centre moves from $\sim1280$\,\AA\ to $\sim1290$\,\AA. This means that determination of the spectral index using the same regions as for the individual spectra is only an approximation, but it is sufficient for a rough separation of the spectra for the compilation of composites. \begin{figure} \centering \epsfig{figure=spec-lena.eps,width=.98\linewidth} \caption{The parts of the composite spectra used for continuum approximation (\textit{short-dashed} line). The highest spectra are the steepest ones. 
Spectra with indices $-0.91$, $-1.62$ and $-2.14$ are shown with the \textit{solid} lines along with the fitted power-law continua (\textit{dashed} lines).} \label{fig:lena} \end{figure} The third column of Table~\ref{tab:samples} contains the mean redshift of each subsample. One can see that the mean redshift slightly increases with $\alpha_{\lambda}$, which might be evidence for redshift evolution of the quasar spectral shape. On the other hand, the absence of such evolution in Fig.~\ref{fig:z-slo} for the whole sample of 3\,493 quasars means that the increase of the mean redshift for the subsamples is probably the result of some selection effect. \begin{table} \begin{minipage}{80mm} \centering \caption{The mean redshift $\bar{z}$ and the mean monochromatic luminosity $\langle\log{l_{1450}}\rangle$ of the subsamples and the spectral indices $\alpha_{\lambda}$ of their composites. The quoted error bars of $\langle\log{l_{1450}}\rangle$ and $\bar{z}$ are the root-mean-square values of the corresponding distributions.}\label{tab:samples} \begin{tabular}{c|c|c|c} \hline n & $\alpha_{\lambda}$ & $\bar{z}$ & $\langle\log{l_{1450}}\rangle$ \\ \hline 1 & $-0.91\pm0.03$ & $2.90\pm0.54$ & $42.70\pm0.21$ \\ 2 & $-0.97\pm0.02$ & $2.89\pm0.53$ & $42.72\pm0.21$ \\ 3 & $-1.02\pm0.01$ & $2.87\pm0.51$ & $42.72\pm0.21$ \\ 4 & $-1.04\pm0.05$ & $2.85\pm0.49$ & $42.72\pm0.24$ \\ 5 & $-1.19\pm0.07$ & $2.82\pm0.43$ & $42.73\pm0.24$ \\ 6 & $-1.35\pm0.13$ & $2.84\pm0.49$ & $42.74\pm0.21$ \\ 7 & $-1.42\pm0.05$ & $2.82\pm0.44$ & $42.76\pm0.23$ \\ 8 & $-1.42\pm0.02$ & $2.76\pm0.40$ & $42.78\pm0.21$ \\ 9 & $-1.62\pm0.14$ & $2.77\pm0.44$ & $42.78\pm0.21$ \\ 10 & $-1.64\pm0.15$ & $2.80\pm0.46$ & $42.79\pm0.27$ \\ 11 & $-1.73\pm0.09$ & $2.75\pm0.45$ & $42.78\pm0.25$ \\ 12 & $-1.88\pm0.07$ & $2.75\pm0.43$ & $42.78\pm0.22$ \\ 13 & $-1.92\pm0.11$ & $2.74\pm0.43$ & $42.77\pm0.24$ \\ 14 & $-2.02\pm0.05$ & $2.72\pm0.42$ & $42.74\pm0.23$ \\ 15 & $-2.07\pm0.07$ & $2.74\pm0.42$ & $42.74\pm0.23$ \\ 16 & $-2.14\pm0.05$ & $2.74\pm0.43$ & $42.75\pm0.24$ \\ \hline \end{tabular} \end{minipage} \end{table} \begin{figure*} \centering \epsfig{figure=slopes-Mu.eps,width=.98\linewidth} \caption{Spectral index ($\alpha_{\lambda}$) -- absolute magnitude ($M$) diagrams for the \textit{u,\,g,\,r,\,i} bands.} \label{fig:slo-mu} \end{figure*} To study any optical-UV luminosity dependence of the spectral index we plotted $\alpha_{\lambda}$\,--\,$M$ diagrams for the absolute magnitudes $M$ in the $u$ (3551\,\AA), $g$ (4686\,\AA), $r$ (6165\,\AA) and $i$ ($7481$\,\AA) bands (Fig.~\ref{fig:slo-mu}). These magnitudes were calculated within the framework of the flat $\Lambda$CDM cosmological model with $H_0=70$\,km/s/Mpc, $\Omega_M=0.3$, using the apparent psf magnitudes and reddening in the corresponding bands from the SDSS DR7, K-correction values from \citet{richards_06} and the correction for Galactic extinction from \citet{schlegel+98}. As one can see from Fig.~\ref{fig:slo-mu}, there is no dependence between the spectral index and the absolute magnitudes in these bands; however, even the $u$-band central wavelength is about 2000\,\AA\ away from the bump peak, thus none of these magnitudes is a good characteristic of the UV-bump luminosity. Hence we also plotted the $\alpha_{\lambda}$\,--\,$\log{l_{1450}}$ diagram, where $l_{1450}$ is the monochromatic luminosity at 1450\,\AA, calculated from the mean flux within the wavelength range $1449-1451$\,\AA. 
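For reference, a minimal sketch of such an $l_{1450}$ estimate, under the cosmology quoted above, is given below (using \texttt{astropy}; the cgs input units and the $(1+z)$ convention for converting the observed flux density into the rest-frame monochromatic luminosity are our assumptions for illustration):
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70., Om0=0.3)   # flat LCDM model as quoted in the text

def log_l1450(wave_rest, flux_obs, z):
    """log10 of the monochromatic luminosity at 1450 A (erg/s/A), from the mean
    observed flux density within the rest-frame 1449-1451 A window."""
    sel = (wave_rest > 1449.) & (wave_rest < 1451.)
    f_lam = flux_obs[sel].mean() * u.erg / u.s / u.cm**2 / u.AA  # assumed cgs units
    d_l = cosmo.luminosity_distance(z).to(u.cm)
    l_lam = 4. * np.pi * d_l**2 * f_lam * (1. + z)               # rest-frame L_lambda
    return np.log10(l_lam.to(u.erg / u.s / u.AA).value)
\end{verbatim}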
Generally speaking, the $1449-1451$\,\AA\ range is also not the best choice, but it is the most free of obvious emission and absorption features, and thus it should be the most appropriate characteristic of the quasar continuum luminosity within the bump. No dependence of $\alpha_{\lambda}$ on the monochromatic luminosity at 1450\,\AA\ can be seen either from Fig.~\ref{fig:nor-slo} or from its mean values listed in the fourth column of Table~\ref{tab:samples}. \begin{figure} \centering \epsfig{figure=z-alpha.eps,width=.98\linewidth} \caption{Redshift distribution of spectral indices.}\label{fig:z-slo} \end{figure} \begin{figure} \centering \epsfig{figure=norma-slope-abs.eps,width=.98\linewidth} \caption{Spectral index ($\alpha_{\lambda}$) -- logarithm of monochromatic luminosity ($\log{l_{1450}}$) diagram.} \label{fig:nor-slo} \end{figure} \section{Templates for redshift measurement}\label{sec:3} \subsection{General notes}\label{sec:3.1} \indent\indent Precise redshift measurement for quasars is a challenging problem because, unlike normal galaxies, they do not have narrow emission lines, except the well-known [O{\sc iii}] (5007\,\AA) line, which is used for redshift calibration of quasars with $z\approx0.05-0.85$ (e.\,g. \citealt{vandenberk+01}). All other emission lines are usually broad and blended, and the rest wavelengths of high-ionization lines are known to be systematically blueshifted compared to low-ionization lines (see e.\,g. \citealt{Gaskell+1982,Tytler+1992,Richards+2002,Shen+2007}). Thus the redshift measured with the help of a given emission line can be lower or higher than that measured with another line. The discrepancy is of the order of several hundred km/s, which, for example, is of the same order as estimates of quasar pairwise velocities measured via redshift-space distortions (\citealt{Outram+2001,Croom+2005,daAngela+2005,daAngela+2008,Mountrichas+2009,Ivashchenko+2010}). Actually, the value estimated from redshift-space distortions, namely from the `Finger of God' effect, is a superposition of the pairwise velocity and the redshift errors. Hence, the more accurate the redshift measurements are, the more precise the pairwise velocity estimate can be. But in addition to the blueshifting of high-ionization lines (an effect which is well studied and can be taken into account), other systematic effects can contribute to the redshift errors, e.\,g. the fact that the single template used for redshift calculation via the cross-correlation technique is usually a composite spectrum stacked from a large number of individual spectra without any separation by luminosity, spectral index, line equivalent width, etc. But, as mentioned in Sec.\,\ref{sec:1}, these differences can be significant, and hence the use of only one `averaged' template for all types of quasar spectra could introduce additional uncertainties into the redshift measurements. Therefore, we tried to estimate whether one can reduce the redshift uncertainties by using our composite spectra, generated with separation by spectral index, as templates. 
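As an illustration of such a template-based fit (the procedure actually used is described in Sec.~\ref{sec:3.3}), a minimal sketch of a brute-force $\chi^{2}$ scan over the redshift, the normalization and the template number could look as follows (the array names, the spline interpolation and the grid resolution, e.\,g. $\Delta z=10^{-4}$, are assumptions for illustration):
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def fit_redshift(wave_obs, flux, noise, templates, z_grid):
    """Brute-force chi^2 scan: returns (z, A, n) minimizing chi^2 over a redshift
    grid and a list of rest-frame templates given as (wave_rest, flux) pairs."""
    splines = [CubicSpline(w, f) for w, f in templates]
    best = (np.inf, None, None, None)
    for n, spline in enumerate(splines):
        for z in z_grid:
            model = spline(wave_obs / (1. + z))   # template shifted to observed frame
            # analytic best-fitting normalization A at fixed (n, z)
            A = np.sum(model * flux / noise**2) / np.sum(model**2 / noise**2)
            chi2 = np.sum((flux - A * model)**2 / noise**2)
            if chi2 < best[0]:
                best = (chi2, z, A, n)
    return best[1], best[2], best[3]
\end{verbatim}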
\begin{figure*} \centering \epsfig{figure=z_sigma_all.eps,width=.98\linewidth} \caption{Left to right: the difference between redshifts measured with our composites ($z_{1}$) and the SDSS template ($z_{2}$) as a function of $z$; the ratio of the lower 1$\sigma$ marginalized uncertainties of $z_{2}$ to those of $z_{1}$; the same ratio for the upper uncertainties.} \label{fig:z-z} \end{figure*} \subsection{The test sample and SDSS template}\label{sec:3.2} \indent\indent We selected by eye a test sample of spectra from our full sample of 3\,493 objects. These spectra belong to the most luminous objects; they have the highest signal-to-noise ratios and contain the smallest possible number of unwanted peculiarities which could influence the redshift measurement with the rough procedure described below (Sec.~\ref{sec:3.3}), such as strong telluric lines not subtracted properly by the SDSS pipelines. The redshifts of these objects, as listed in the \citet{hewett_10} catalogue, lie within the range $2.3-3.5$; the arithmetic mean of their redshift errors from that catalogue is 0.002, which is half that of all objects within the same redshift range in the catalogue. We selected only the spectra with index values within the range from $-2.2$ to $-0.8$ to match the index range of our composites. The final number of spectra in the test sample is 208. As a comparison template we have chosen the \citet{vandenberk+01} composite spectrum, which is used as the cross-correlation template for redshift measurements in the SDSS\footnote{http://www.sdss.org/dr7/algorithms/redshift\_type.html}. It is compiled from a sample of 2204 quasar spectra from the Early Data Release of the SDSS. The sample covers a redshift range of $0.044-4.789$ and has a median redshift of $1.253$. The spectral index of this composite (within the range of $1350-4230$\,\AA) is $\alpha_{\lambda}=-1.54$. It is worth noting that our composites differ from that of \citet{vandenberk+01} not only in the spectral index discretization, but also in the mean redshifts of the samples, which is about 2.8 for ours. The reason for this difference is our intention to use the selected sample for Ly$\alpha$-forest studies, while \citet{vandenberk+01} utilised the whole redshift range of SDSS quasars. Here the question of the redshift evolution of the quasar spectral shape and of its influence on the redshift measurement accuracy arises. This problem is hard to study properly because of the limitations imposed by the atmosphere on the observable spectral range. In fact, each observed quasar spectrum covers only a narrow part of the Big Blue Bump, and hence each piece of the composite spectrum is effectively compiled from spectra of quasars within some limited redshift range. Therefore, we can neglect the mean redshift difference between our samples and that of \citet{vandenberk+01}. \vspace*{-2ex} \subsection{Algorithm}\label{sec:3.3} \indent\indent The redshift measurement algorithm described below does not compete with the specialised SDSS pipelines. It was developed only to test the difference between templates on `good' spectra, like those from our test sample, and cannot properly treat spectra with low signal-to-noise ratios, BALs, etc. Before use, we interpolated our composite spectra with spline polynomials, because their wavelength bins (2\,\AA) are about two to five times wider than those of individual SDSS spectra, depending on the wavelength (note that the \citet{vandenberk+01} template has the same nonlinear dispersion as individual SDSS spectra). 
To calculate the redshift with the help of our composites we searched for the maximum of the likelihood function $\EuScript{L}\sim\exp\left(-\chi^{2}/2\right)$, where $$ \chi^{2}=\sum\left[f_{obs}(\lambda_{obs})-Af_{comp}(n,\lambda_{obs}/(1+z))\right]^{2}\sigma_{f}^{-2}. $$ Here $f_{obs}$ and $f_{comp}$ are the fluxes in the test and template spectra, $\sigma_{f}$ is the noise of the test spectrum, and the free parameters are the flux normalization factor $A$, the composite number $n$ and the quasar redshift $z$. After finding the best-fitting values of the parameters, $A$ and $n$ were fixed and the 1$\sigma$ marginalized errors of $z$ were calculated. In the case of the SDSS template we used the same technique, but with only two free parameters: the flux normalization factor and the redshift. \subsection{Results and discussion}\label{sec:3.4} \indent\indent In Fig.~\ref{fig:z-z} (left panel) the difference $z_{1}-z_{2}$ is shown as a function of quasar redshift, where we denote the redshift measured with our templates as $z_{1}$ and that measured with the SDSS template as $z_{2}$. The middle and right panels of Fig.~\ref{fig:z-z} present the ratio $\sigma_{z,2}/\sigma_{z,1}$ of their lower and upper 1$\sigma$ uncertainties. One can see that the redshifts measured with our templates are systematically higher than those measured with the SDSS template, with a mean difference of 0.004. Meanwhile, both the lower and upper 1$\sigma$ uncertainties of the redshifts measured with our templates are systematically smaller. The mean differences of the lower and upper uncertainties are 0.022 and 0.008, respectively. Certainly, the absolute values of the error differences cannot be compared directly to the redshift errors obtained with the SDSS pipeline, but their mean ratios (1.6 and 1.4 for the lower and upper uncertainties, respectively) indicate the possibility of reducing the redshift errors by up to a factor of 1.5 when using a set of templates with different spectral indices instead of one mean template. \section[]{Application for $\bar{F}(z)$ measurement}\label{sec:4} \subsection{General notes}\label{sec:4.1} \indent\indent Composite spectra of quasars are also used in Ly$\alpha$-forest studies to determine the intrinsic shape of the quasar spectrum within the Ly$\alpha$-forest region prior to its absorption by intergalactic neutral hydrogen. In these studies the Ly$\alpha$-forest region usually means the wavelength range between the Ly$\beta$ and Ly$\alpha$ emission lines ($\sim1025-1216$\,\AA\ or narrower), where the Ly$\alpha$ forest is not `contaminated' by the forests of the other Lyman-series lines. In high-resolution spectra, where it is easy to find unabsorbed parts of the spectrum within the Ly$\alpha$ forest, this `continuum' fitting is done directly, with manual selection of such regions in each spectrum. Here the term `continuum' is used for the whole intrinsic quasar spectrum, including the quasar emission lines. But for medium-resolution spectra, like those in the SDSS, it is difficult to select unabsorbed regions due to the blending of absorption lines. In this case other techniques are applied. They are based on the similarity of quasar spectra, which allows composite spectra to be utilised for the estimation of the mean transmission at different redshifts. This was done, e.\,g., by \citet{bernardi+03,desjacques_07,greg+10}. A more complicated, but similar, method was proposed by \citet{mcdonald+06}. 
Recently, \citet{busca+12} used the idea of utilising composite spectra for continuum fitting (to first approximation) in a study of the Ly$\alpha$ forest with the first Baryon Oscillation Spectroscopic Survey (BOSS) data. The main idea of utilising composite spectra for Ly$\alpha$-forest studies is the following. If the data (spectrum) are given in the form of pixels with observed wavelengths $\lambda_{i}$, flux density values $f_{i}$ and noise $n_{i}$, one can write the observed flux density $f_{i}^{j}$ in the $i$-th pixel within the Ly$\alpha$-forest region of the $j$-th quasar as \begin{equation}\label{eq:flux_dens} f_{i}^{j} = A^{j}\bar{C}(\lambda_{rest})(1+\delta_{C,i}^{j})\bar{F}(z)(1+\delta_{F,i}^{j})+n_{i}^{j}, \end{equation} where $A^{j}$ is the flux normalization constant, the wavelength $\lambda_{i}$ of the Ly$\alpha$ absorption feature produced by a `cloud' of intergalactic H{\sc i} is related to its redshift $z_{i}$ as $\lambda_{i} = 1215.67(1+z_{i})$, $\bar{C}(\lambda)$ is the mean `continuum' level (i.\,e. the mean intrinsic quasar spectrum), $\delta_{C}$ are the deviations of the individual continuum from the mean one, $n$ is the noise, $\bar{F}$ is the mean transmission of the intergalactic medium in the Ly$\alpha$ line at a given redshift, and $\delta_{F}$ are the fluctuations of the transmitted flux. Different terms of expression~\eqref{eq:flux_dens} depend on different variables: namely, the `continuum' $\bar{C}(1+\delta_{C})$ is a function of the rest wavelength $\lambda_{rest}$; the mean transmission $\bar{F}$ is a function of redshift; $\delta_{F}$ depends on the line of sight (represented by a given quasar); and $n$ depends on the instrument properties and the conditions of observation (thus it can also be considered a function of the observed wavelength $\lambda_{obs}$). Hence the arithmetic mean composite spectrum for a given subsample with mean redshift $\bar{z}$ can be presented as the product of the mean `continuum' and the mean transmission: \begin{equation}\label{eq:bar_f} f(\lambda_{rest})=\bar{C}(\lambda_{rest})\bar{F}(\bar{z}), \end{equation} because the values $\langle\delta_{F,i}^{j}\rangle$, $\langle\delta_{C,i}^{j}\rangle$ and $\langle n_{i}^{j}\rangle$ are equal to zero when averaging over a large number of quasar spectra within a given redshift bin (see e.\,g.~\citet{bernardi+03,mcdonald+06} for details). Therefore, having several composite spectra stacked from subsamples of quasar spectra with different $z$ and assuming that the mean quasar spectrum does not evolve with time, one can determine the redshift dependence of $\bar{F}$. \subsection{Estimation of errors}\label{sec:4.2} \indent\indent To estimate the possible errors introduced into estimates of $\bar{F}(z)$ by neglecting the differences in the spectral shape, namely in the spectral index, we calculated the ratio \begin{equation*} \eta=\frac{\langle{f_{i}}\rangle_{q}}{\langle{f_{i}}\rangle_{1}}, \end{equation*} where $\langle{f_{i}}\rangle_{q}$ and $\langle{f_{i}}\rangle_{1}$ are the mean fluxes within a given wavelength range in the $q$-th and the first composite spectra, respectively. We chose the following wavelength ranges: $1450-1470$, $1050-1100$ and $1100-1150$\,\AA. The first range is the one over which we normalized the spectra (see Sec.~\ref{sec:2.3}), which is why the ratio $\eta$ for it is equal to unity for all spectra. The latter two ranges are two parts of the Ly$\alpha$-forest region. In Fig.~\ref{fig:norma-q} the obtained values of $\eta$ are shown as a function of the spectrum number. 
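A minimal sketch of this $\eta$ calculation (assuming each composite is stored as rest-frame wavelength and flux arrays) is given below:
\begin{verbatim}
import numpy as np

RANGES = [(1450., 1470.), (1050., 1100.), (1100., 1150.)]  # Angstrom, as in the text

def eta(composites, q, ref=0):
    """Ratios of the mean flux in composite q to that in the reference composite
    within each wavelength range; unity for 1450-1470 A by construction."""
    w_q, f_q = composites[q]
    w_r, f_r = composites[ref]
    values = []
    for lo, hi in RANGES:
        mean_q = f_q[(w_q >= lo) & (w_q <= hi)].mean()
        mean_r = f_r[(w_r >= lo) & (w_r <= hi)].mean()
        values.append(mean_q / mean_r)
    return values
\end{verbatim}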
As one can see from Fig.~\ref{fig:norma-q}, the deviation of $\eta$ from unity for spectra with different indices, which causes additional uncertainty in the estimation of $\bar{F}$, can reach 20\,\% if the sample used for stacking the composite contains spectra with indices within the range between $-2.14$ and $-0.91$. Note that the real distribution of spectral indices is even wider (see Fig.~\ref{fig:alpha-distr}). \begin{figure} \centering \epsfig{figure=norma.eps,width=.99\linewidth} \caption{Ratio of the mean flux in each composite spectrum to that in the composite spectrum with $\alpha_{\lambda}=-0.91$ within the wavelength ranges $1450-1470$, $1050-1100$ and $1100-1150$\,\AA.} \label{fig:norma-q} \end{figure} \section{Searching for emission lines}\label{sec:5} \subsection{General notes}\label{sec:5.1} \indent\indent One more opportunity provided by composite spectra is the detection of weak lines which are not resolved in single spectra due to the low signal-to-noise ratio. We searched for emission lines within some parts of our composite spectra and compared the results with previously known ones, first of all with the most detailed study of a quasar composite spectrum by \citet{vandenberk+01}. For this purpose we chose two wavelength ranges: $\approx1050-1200$\,\AA\ and $\approx1210-1450$\,\AA. The former is the so-called Ly$\alpha$-forest region, containing three broad emission features identified by \citet{vandenberk+01} as Ar{\sc i}, Fe{\sc iii} and C{\sc iii}*; the latter lies redward of the Ly$\alpha$ emission line, is free of intergalactic Ly$\alpha$ absorption and contains five broad emission features associated by \citet{vandenberk+01} mainly with N{\sc v}, Si{\sc ii}, O{\sc i}+Si{\sc ii}, C{\sc ii} and Si{\sc iv}+O{\sc iv}]. Due to blending, the C{\sc iii}*, N{\sc v} and Si{\sc ii} features merge into one $\approx130$\,\AA-wide emission feature with the strongest line, Ly$\alpha$, between them. To search for emission lines in these eight spectral features we modelled each one with the sum of a continuum and the smallest possible number of lines in the form of Gaussian profiles. Due to line blending we modelled some features together: N{\sc v} with Si{\sc ii} and O{\sc i}+Si{\sc ii}, and O{\sc iv}+Ca{\sc ii} with Si{\sc iv}+O{\sc iv}]; hence the total number of separate spectral parts analysed is five. The Ly$\alpha$ emission line has an intensity several times higher than that of the other features, and its profile is asymmetric (even if there were no blending by other lines) because its blue part is absorbed by intergalactic H~{\sc i}. Therefore we did not try to analyse the region around its peak, but included it separately in the C{\sc iii}* and N{\sc v}+Si{\sc ii} features in the form of one additional line, despite the fact that it can also have a complex structure. As discussed above (see Sec.~\ref{sec:1}), the quasar continuum redward of the Ly$\alpha$ emission line is well fitted by a power law. The wavelength range blueward of it is also considered to be a power law, but much steeper, or even close to unity \citep{zheng+97}. One should also keep in mind that in both cases the power law is just an approximation of some smooth curve over limited ranges (in fact, a tangent to the real curve describing the UV continuum of quasars). That is why treating the continuum within such narrow wavelength bins ($\approx$50--100\,\AA\ wide) as a constant, wherever possible, seems to be precise enough when the main goal is the detection of lines. 
The main difference in the modelling of the features blueward and redward of 1216\,\AA\ is the presence of absorption, described by the $\bar{F}$ parameter according to \eqref{eq:bar_f}. Taking into account the similarity of the mean redshifts of all our subsamples, we can assume $\bar{F}(z)=\bar{F}'$ to be the same for each subsample and renormalize \eqref{eq:bar_f} as $f(\lambda_{rest})=\bar{C}'(\lambda_{rest})$, where $\bar{C}'=\bar{C}\bar{F}'$. \subsection{Spectra modelling}\label{sec:5.2} \indent\indent We fitted each of the five wavelength regions described above with the following model: \begin{equation}\label{eq:compos} f(\lambda)=C+\sum\limits_{k}a_{k}\exp\left[-\frac{(\lambda-\lambda^{0}_{k})^{2}}{2w_{k}^{2}}\right]. \end{equation} Here $\lambda$ is the rest-frame wavelength; $a_{k}$, $\lambda^{0}_{k}$ and $w_{k}$ are the amplitude, the central wavelength and the width (proportional to the FWHM) of the $k$-th emission feature; and $C$ describes the continuum in the form of (i) a constant, $C=b$, or (ii) a power law, $C=d\lambda^{\alpha_{\lambda}}$, with $\alpha_{\lambda}$ fixed to the values from Table~\ref{tab:samples}. The wavelength ranges, along with the type of continuum shape used for the modelling, are presented in Table~\ref{tab-ranges}. For the first three ranges the wavelength limits were selected manually for each spectrum, and thus their wavelength ranges slightly differ. \begin{table} \centering \caption{The spectral ranges considered in the present study and the forms of continuum used for their modelling.}\label{tab-ranges} \vspace*{1ex} \begin{tabular}{ccc} \hline n & range, \AA & continuum \\ \hline 1 & $\approx$1050--1095 & (i) \\ 2 & $\approx$1095--1150 & (i) \\ 3 & $\approx$1150--1185 & (i) \\ 4 & 1215--1321 & (i) \\ 5 & 1323--1447 & (ii) \\ \hline \end{tabular} \end{table} The fitting procedure was conducted in two steps: (a) using the \texttt{IDL lmfit} subroutine, the best-fitting model of the form \eqref{eq:compos} with the smallest possible number of Gaussians was found for each range from Table~\ref{tab-ranges}; (b) the central wavelengths $\lambda^{0}$ obtained at the first step were fixed, and the best-fitting values of the other parameters $\{b/d,a_{k},w_{k}\}$, with their 1$\sigma$ marginalized errors, were calculated by the Markov Chain Monte Carlo (MCMC) method using the \texttt{CosmoMC} package as a generic sampler (with the values of $\{b/d,a_{k},w_{k}\}$ obtained previously as starting values). The MCMC technique was chosen as a fast and accurate method for exploring high-dimensional parameter spaces. In each case we generated 8 chains, which converged to $R-1<0.0015$. Because the amplitudes of all the lines are small compared to that of the Ly$\alpha$ emission line, and because of the larger uncertainty in the Ly$\alpha$-forest range, the central wavelength of Ly$\alpha$ was determined only while fitting the 4th range, and the same value was kept fixed while fitting the 3rd range. Note that we include the Ly$\alpha$ feature in both wavelength ranges only to take into account its wings, the influence of which cannot be neglected in either case. Thus the parameters of this feature obtained from the fits of both regions cannot be considered as the parameters of the Ly$\alpha$ emission line. 
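For illustration, step (a) of this procedure can be sketched as follows (with \texttt{scipy} used here instead of the \texttt{IDL lmfit} routine actually employed; the constant-continuum case and the starting values are assumptions for a single range):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def model(lam, b, *params):
    """Constant continuum b plus a sum of Gaussians; params = (a1, lam1, w1, a2, ...)."""
    f = np.full_like(lam, b, dtype=float)
    for a, lam0, w in zip(params[0::3], params[1::3], params[2::3]):
        f += a * np.exp(-(lam - lam0)**2 / (2. * w**2))
    return f

def fit_range(lam, flux, sigma, p0):
    """Least-squares fit of one wavelength range with eq. (3); p0 = [b, a1, lam1, w1, ...]."""
    popt, pcov = curve_fit(model, lam, flux, p0=p0, sigma=sigma, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
\end{verbatim}
Step (b) of the procedure described above would then explore the posterior of $\{b/d,a_{k},w_{k}\}$ with MCMC, starting from these best-fitting values.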
Because the flux dispersion $\sigma_{f}^{2}$ in the composite spectra, determined from the covariance matrix, is small, we introduced an additional intrinsic error $\sigma_{int}$ in order to estimate the errors correctly; it was chosen such that the total dispersion $\sigma^{2}=\sigma_{f}^{2}+\sigma_{int}^{2}$ yields a minimal $\chi^{2}/$d.o.f. of 1.0. \subsection{Line identification}\label{sec:5.3} \indent\indent The central wavelengths of all the lines found in the composite spectra are presented in Tables~4 and 5 for ranges 1--3 and 4--5, respectively. Only the lines for which the parameters were calculated with the MCMC method are presented. The arrows stand for regions for which the fit failed; `tent.' means a tentative identification by eye (which was not included in the fit). Despite the variance in the central wavelengths and in the number of lines in different spectra, we tried to systematize them. In most cases, except for the 3rd and 5th ranges, the lines seem to be the same in all spectra, but due to differences in their FWHM two given lines are resolved in one spectrum, while in another they are blended and appear to be better fitted with one Gaussian. The values in brackets in Tables~4 and 5 mean that such doublets are fitted by one Gaussian. For the line identification we used information from Table~3, where we list the emission lines found previously in composite UV spectra of quasars from the HST and FUSE missions by \citet{zheng+97,telfer+02,scott2+04}, in high-resolution spectra by \citet{brotherton+94,Laor+94,Laor+1995,laor+97,Vestergaard+01,leighly+07,binette+08}, and also in composite spectra of quasars from the optical Kast \citep{tytler+04} and SDSS \citep{vandenberk+01} surveys. The lines which were not identified with known lines from these papers are labelled with X$_{k}$. \subsection{Parameters of lines and continuum}\label{sec:5.4} \indent\indent The obtained values of the line parameters, along with their 1$\sigma$ marginalized errors, are presented in Tables~\ref{tab-spec-1}--\ref{tab-spec-16}; the values of the constant $b$ for ranges 1--4 and of $d$ for range 5 are presented in Table~\ref{tab:b-param}. The spectra, along with the best fits for each range, the separate Gaussians and the continuum level, are shown in Figures~\ref{fig:spec-1-4-a}--\ref{fig:spec-13-16-b}. In the last column of Tables~\ref{tab-spec-1}--\ref{tab-spec-16} the lower 3$\sigma$ marginalized limits a-3$\sigma$ on the amplitude of each line are presented. All these limits are $>0$, which allows us to claim that all the `lines' are really detected in the composite spectra at least at the 3$\sigma$ confidence level. \subsection{Discussion}\label{sec:5.5} \indent\indent As one can see from Table~\ref{tab:b-param}, in most cases the values of the continuum parameter $b$ for ranges 1--3 within one spectrum agree with each other within the errors. Therefore, the continuum level within the range between the Ly$\beta$ and Ly$\alpha$ emission lines, utilized for studies of the Ly$\alpha$ forest, can be considered a constant rather than having the same power-law form as that redward of the Ly$\alpha$ emission line. This result agrees well with the results from composite UV spectra of quasars \citep{telfer+02,zheng+97}, which show a much steeper continuum in the Ly$\alpha$-forest region than that in the $\lambda>1216$\,\AA\ range. The number of emission lines `detected' in both the blue and red parts of the UV bump is larger than previously known. 
This means that differentiating spectra according to their spectral index makes sense and indeed helps to reveal new emission features which are not seen in composite spectra compiled neglecting this difference. It is clearly seen from Figs~\ref{fig:spec-1-4-b}, \ref{fig:spec-5-8-b}, \ref{fig:spec-9-12-b} and \ref{fig:spec-13-16-b} that in some cases the wings of the emission feature in the 5th wavelength region are fitted with one or two very broad but low-amplitude Gaussians. Most probably these Gaussians are `artificial' and fit the superposition of a number of weaker lines. On the other hand, in the case of the first three wavelength regions the continuum level varies from one range to another, thus the values of the width parameter $w$, which is proportional to the FWHM, cannot be considered as `true' values and used for further analysis, e.\,g. for studies of FWHM variations with the spectral index. The values of the Ly$\alpha$-line parameters from both the red and blue parts are presented only for information about the fits in general and cannot be compared or considered as the true Ly$\alpha$-line parameters. \section{Conclusions}\label{sec:6} \indent\indent We compiled 16 composite spectra from subsamples of individual SDSS DR7 quasar spectra with different spectral indices $\alpha_{\lambda}$ within the wavelength range 1270--1480\,\AA\ and studied the possible effects caused by neglecting this differentiation when using composite spectra of quasars in different fields. The main results of the present work are the following:\\ \indent(i) the redshifts measured for a test sample of high signal-to-noise ratio quasar spectra using these composites as templates appear to be systematically higher than those calculated with a traditional template compiled from spectra with different $\alpha_{\lambda}$, with errors up to 1.5 times smaller in the former case; \\ \indent(ii) the difference in $\alpha_{\lambda}$ between the individual spectra used for the compilation of composites can yield a mean transmission uncertainty of up to 20\%; \\ \indent(iii) a number of emission lines which are indistinguishable in ordinary composites, but seen in individual high-resolution spectra, can be detected in such composites; \\ \indent(iv) it is confirmed that the continuum level within the range between the Ly$\beta$ and Ly$\alpha$ emission lines, utilized for studies of the Ly$\alpha$ forest, can be considered a constant rather than having the same power-law form as that redward of the Ly$\alpha$ emission line, which agrees well with the steeper continuum index obtained from composite UV quasar spectra \citep{telfer+02,zheng+97} and discussed previously \citep{desjacques_07};\\ \indent(v) it is also shown that there is no dependence of $\alpha_{\lambda}$ on the quasar luminosity in the SDSS $u$, $g$, $r$ and $i$ bands, or on the monochromatic luminosity at $1450$\,\AA. This confirms the results of \citet{yip+04} and \citet{vandenberk+04}, who found no relation between luminosity and spectral index when analysing composite spectra with different luminosities. A detailed analysis in our previous study of the range redward of the Ly$\alpha$ emission line, which is free from the Ly$\alpha$ forest, showed that there is also no evidence for a spectral-index dependence of the equivalent widths of the emission lines \citep{torbaniuk+12}. 
The absence of a dependence of the UV-bump shape on the luminosity in the bands mentioned above requires further study of this region in order to understand the physics behind the difference in the UV-bump shape in the spectra of different quasars. It is worth noting that the absolute magnitudes in the given bands, used as a characteristic of luminosity, include the K-correction, which itself is determined within the framework of some model of the general spectral shape (e.\,g. with a spectral index of $\alpha_{\lambda}=-1.5$). The proposed approach can be applied to the generation of new templates for more precise quasar redshift measurements with the common cross-correlation technique used in redshift surveys, to a more precise theoretical determination of K-corrections and colour indices, as well as to the determination of the continuum and the mean transmission in Ly$\alpha$-forest studies. \section*{Acknowledgements} \indent\indent The authors are thankful to Mariangela Bernardi, Ravi K. Sheth, Oleg Ruchayskiy, Alexey Boyarsky and Ievgen Vovk for fruitful discussions. The authors also acknowledge the use of the \texttt{CosmoMC} package. This work has been supported by the Swiss National Science Foundation (SCOPES grant No 128040). The authors are also thankful to the Sloan Digital Sky Survey team. Funding for the SDSS and SDSS-II has been provided by the Alfred P.\,Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. \input{ivashchenko.sergijenko.torbaniuk.bbl} \clearpage \begin{sidewaystable*} \vspace*{130ex} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \multicolumn{13}{l}{}\\ \multicolumn{13}{l}{{\bf Table 3.} Emission lines with their measured rest-frame wavelengths from individual and composite quasar spectra within the range $\sim1050-1340$\,\AA\ from}\\ \multicolumn{13}{l}{previous studies. 
Asterisk indicates more than one allowed value of the total angular momentum J for that specific term and transition, bracket means}\\ \multicolumn{13}{l}{intercombination transitions.}\\ \multicolumn{13}{l}{}\\ \hline line & S{\sc iv} & Ar{\sc i}+N{\sc ii} & Si{\sc iii}* & Fe{\sc iii} & C{\sc iii}* & Si{\sc ii} & Si{\sc iii} & Ly $\alpha$ & O~{\sc v} & N~{\sc v} & Si~{\sc ii}$^{*}$ & Si~{\sc ii} \\ \hline $\lambda_{lab}$,\,\AA & & 1066.66+ & 1111.59 & & 1175.67 & 1194.12 & & 1215.67 & 1218.3 & 1238.8+ & 1248.4+ & 1260.4+\\ & & +1085.12 & & & & & & & & +1242 & +1251.1 & +1264.7+ \\ & & & & & & & & & & & & +1265.0 \\ \hline \citet{francis+91} & & & & & & & & 1216 & & 1240 & & \\ \citet{brotherton+94} & & & & & & & & 1217.2 & & 1241.8 & & \\ \citet{Laor+94} & & & & & & & & 1215.67 & & 1238.82+ & & \\ & & & & & & & & & & +1242.8 & & \\ \citet{Laor+1995} & & & & & & & & 1215.67 & & 1240.15 & & 1263.31 \\ \citet{laor+97} & & & & & & & & 1214.2 & & 1236.7+ & 1249.5 & 1260.5+ \\ & & & & & & & & 1214.2 & & +1239.1 & 1249.5 & +1264.5 \\ \citet{zheng+97} & & tent. & & & & & & & & & & \\ \citet{vandenberk+01} & & 1065.10 & & 1117.26 & 1175.35 & & & 1216.25 & & 1239.85 & & 1265.22 \\ \citet{brotherton+01} & & & & & & & & 1216 & & 1240 & & \\ \citet{Vestergaard+01} & & & & & & & & 1210.8+ & 1219.0 & 1238.8 & 1249.7 & 1257.2+ \\ & & & & & & & & +1214.4+& & 1242.8 & 1250.7 & +1260.7+ \\ & & & & & & & & +1216.9 & & & & +1265.2 \\ \citet{telfer+02} & & 1065 & & 1123 & 1176 & 1195 & & & & & & \\ \citet{scott2+04} & 1062+ & 1084 & & & & & & & & & & \\ & +1073 & & & & & & & & & & & \\ \citet{tytler+04} & & 1070.95 & & 1123.3 & 1175.88 & & & & & & & \\ \citet{leighly+07} & & 1084.2 & 1110.5 & & 1175.4 & 1193.6 & & & & & & \\ \citet{binette+08} & 1067 & 1084 & & 1123 & 1176 & 1194 & 1207 & & & & & \\ \hline \end{tabular} \vspace*{-1ex} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline line & Si~{\sc iii}$^{*}$ & O~{\sc i}+Si~{\sc ii} & C~{\sc ii} & Ca~{\sc ii} & O~{\sc iv} & Fe~{\sc v} & Fe~{\sc iii} & Fe~{\sc v} & O~{\sc i} & Si~{\sc iv}+O~{\sc iv}] \\ \hline $\lambda_{lab}$,\,\AA & 1295.5+1298.9 & 1302.2+1304.6+ & 1335.3 & 1342.3 & 1343.5 & 1343.1 & 1343.2 & 1345.6 & 1355.6 & 1395.5+1399.8+ \\ & & +1306.0+1309.3 & & & & & & & & +1401.8+1407.4 \\ \hline \citet{francis+91} & & 1302 & 1335 & & & & & & & 1400 \\ \citet{brotherton+94} & & 1305.5+1306.3 & 1338.5+1339.8 & & & & & & & 1399.2+1401.8 \\ \citet{Laor+94} & & 1303.49 & 1335.3 & & & & & & & 1399.61 \\ \citet{Laor+1995} & & & & & & & & & & 1396.75+1402.46 \\ \citet{laor+97} & & 1302.6+1305.5+ & 1334.3 & & & & & & & 1392.1+1400.6 \\ & & +1309.2 & & & & & & & & \\ \citet{zheng+97} & & & & & & & & & & \\ \citet{vandenberk+01} & & 1305.42 & 1336.6 & & & & & & & 1348.33 \\ \citet{brotherton+01} & & 1302 & 1335 & & & & & & & 1400 \\ \citet{Vestergaard+01} & 1293.9+1296.5 & 1299.0+1302.1+ & 1330.8+1334.0 & 1343.9 & 1343.9 & 1343.9 & 1343.9 & 1343.9 & 1343.9 & 1387.4+1392.2+ \\ & & +1305.5+1309.3 & & & & & & & & +1397.5+1401.2 \\ \hline \end{tabular} \end{sidewaystable*} \begin{sidewaystable*} \vspace*{-125ex} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} \multicolumn{8}{l}{{\bf Table 4.} Emission lines detected in spectra within ranges 1-3. Lines in brackets are dublets. Arrows stand for failed fit.}\\ \multicolumn{8}{l}{}\\ \hline \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline & X$_1$ & X$_2$+Ar{\sc i}+X$_3$ & N{\sc ii}+X$_4$ & X$_5$+X$_6$ & FeIII-multiplet & X$_7$ & X$_8$+C{\sc iii}$^*$+X$_9$ & X$_{10}$+Ly$\alpha$ \\ \hline \hline 1 & tent. 
& 1058.4+[1071.8] & 1084.9 & 1096.3 & 1111.8+1125.2 & & 1161.6+1173.9+1182.1 & 1214.4\\ \hline 2 & tent. & 1059.6+[1072.4] & 1084.6 & 1096.4 & 1111.5+1123.3 & tent. & 1161.8+1174.8+1182.4 & 1214.3\\ \hline 3 & tent. & 1059.5+[1073.9] & 1086.1 & 1096.9 & 1118.3+1125.7+1134.4 & tent. & 1161.4+[1176.5] & 1214.3\\ \hline 4 & tent. & 1059.5+[1071.0] & 1089.3 & tent. & 1117.0+1126.2+1136.5 & tent. & tent.+1175.2+1185.3 & 1214.8\\ \hline 5 & tent. & tent.+ [1070.7] & 1087.6 & & 1115.2+1124.4+1136.2 & 1152.3 & 1171.9 & 1214.3\\ \hline 6 & $\rightarrow$ & $\rightarrow$ & $\rightarrow$ & 1100.9 & 1116.9+1125.0+1132.4 & tent. & tent.+[1174.2] & 1214.2\\ \hline 7 & $\rightarrow$ & 1061.6+[1071.1] & 1079.3+1091.0 & & 1108.4+1116.6+1127.0+1143.9 & & tent.+[1172.1] & 1213.2\\ \hline 8 & $\rightarrow$ & 1060.3+[1071.3] & 1080.8 & 1098.5 & 1111.5+1125.4+1141.0 & tent. & 1172.5 & 1213.8\\ \hline 9 & $\rightarrow$ & tent.+[1070.2] & 1086.0 & 1099.6 & 1122.6 & tent. & 1172.6 & 1213.4 \\ \hline 10 & $\rightarrow$ & 1056.3+[1071.7] & 1086.8 & 1099.6 & 1117.3+1127.4+1141.7 & tent. & 1163.7+[1174.2] & 1213.8\\ \hline 11 & $\rightarrow$ & 1058.1+[1070.4] & 1079.9 & & 1106.7+1115.7+ 1125.6 & tent. & [1168.5]+1174.9 & 1213.7\\ \hline 12 & $\rightarrow$ & [1063.0]+1074.1 & 1084.0 & & 1115.5+1123.8+1129.0 & & tent.+1172.9+1178.5 & 1214.5\\ \hline 13 & $\rightarrow$ & [1063.8]+1072.6 & 1082.0+1087.5 & 1103.3 & 1111.9+1124.3+1135.9 & 1153.9 & 1164.1+[1172.7] & 1214.5\\ \hline 14 & $\rightarrow$ & $\rightarrow$ & $\rightarrow$ & 1101.6 & 1116.2+1126.2+1135.5 & tent. & 1172.0 & 1214.4\\ \hline 15 & 1055.8 & [1066.6]+1072.5 & 1083.2 & tent & 1108.3+1117.3+1125.5+tent. & $\leftarrow$ & $\leftarrow$ & $\leftarrow$\\ \hline 16 & 1052.4 & 1060.6+[1071.4] & 1083.1 & tent. & 1111.3+1123.7+1140.1 & 1155.5 & 1165.7+[1174.3] & 1214.1\\ \hline \end{tabular} \end{sidewaystable*} \begin{sidewaystable*} \centering \vspace*{125ex} \begin{tabular}{c|c|c|c|c|c|c|} \multicolumn{7}{l}{{\bf Table 5.} Emission lines detected in spectra within ranges 4-5. 
Arrows stand for failed fit.}\\ \multicolumn{7}{l}{}\\ \hline \hline & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline & Ly$\alpha+$O{\sc iv} & N{\sc v} & Si{\sc ii}$^\ast+$Si{\sc ii} & Si{\sc iii}$^\ast+$O{\sc i}$+$Si{\sc ii} & C{\sc ii}$+$O{\sc iv}$+$Ca{\sc ii} & Si{\sc iv}$+$O{\sc iv}$]$ \\ \hline \hline 1 & 1214.4$+$1225.6 & 1236.9 & 1256.7 & 1290.4$+$1304.4 & 1334.5$+$1346.7 & 1368.2$+$1382.5$+$1390.4$+$1399.8$+$1418.7$+$1436.3 \\ \hline 2 & 1214.3$+$1225.4 & 1236.5 & 1256.7 & 1290.4$+$1304.5 & 1335.0$+$1344.8 & 1364.9$+$1382.3$+$1390.5$+$1399.6$+$1417.7$+$1430.8 \\ \hline 3 & 1214.3$+$1225.0 & 1236.1 & 1256.7 & 1289.7$+$1304.1 & 1334.9$+$1343.2 & 1364.4$+$1380.1$+$1390.3$+$1399.3$+$1418.7$+$1431.4 \\ \hline 4 & 1214.8$+$1226.4 & 1236.4 & 1257.0 & 1290.6$+$1304.2 & 1334.7$+$1341.4 & 1364.8$+$1376.9$+$1386.5$+$1392.0$+$1399.0$+$1412.3$+$1419.6$+$1439.5 \\ \hline 5 & 1214.3$+$1225.3 & 1236.0 & 1256.7 & 1291.4$+$1304.5 & 1335.8 & 1370.5$+$1396.5$+$1401.7$+$1410.9$+$1422.4$+$1436.4 \\ \hline 6 & 1214.2$+$1224.3 & 1234.8 & 1256.5 & 1290.1$+$1304.4 & 1334.6$+$1348.5 & 1361.4$+$1382.1$+$1389.8$+$1399.3$+$1407.4$+$1435.4 \\ \hline 7 & 1213.2$+$1221.6 & 1236.1 & 1253.4 & 1300.4$+$1305.2 & 1334.9 & 1362.6$+$1394.5$+$1396.5$+$1418.9 \\ \hline 8 & 1213.8$+$1223.7 & 1235.9 & 1255.7 & 1285.7$+$1303.6 & 1334.2$+$1344.1 & 1363.0$+$1382.0$+$1390.3$+$1399.6$+$1413.3$+$1430.7$+$1438.6 \\ \hline 9 & 1213.4$+$1223.4 & 1236.3 & 1255.6 & 1286.6$+$1303.6 & $\leftarrow$ & $\leftarrow$ \\ \hline 10 & 1213.8$+$1223.9 & 1235.5 & 1252.7 & 1288.6$+$1304.6 & $\leftarrow$ & $\leftarrow$ \\ \hline 11 & 1213.7$+$1223.4 & 1236.2 & 1252.3 & 1288.4$+$1304.0 & 1334.0$+$1342.3 & 1376.2$+$1395.6$+$1401.5$+$1408.7$+$1421.0$+$1436.6 \\ \hline 12 & 1214.5$+$1225.7 & 1235.1 & 1256.0 & 1281.9$+$1301.6$+$1304.9 & 1334.8$+$1345.1 & 1385.1$+$1393.9$+$1401.0$+$1407.8$+$1423.3$+$1438.5 \\ \hline 13 & 1214.5$+$1225.8 & 1237.4 & 1250.8$+$1258.4 & 1276.7$+$1304.5 & 1333.9$+$1336.0 & 1376.0$+$1397.9$+$1419.3$+$1429.8 \\ \hline 14 & 1214.4$+$1225.0 & 1235.1 & 1255.7 & 1286.2$+$1304.0 & 1335.3 & 1388.8$+$1398.5$+$1422.1 \\ \hline 15 & 1213.7$+$1223.2 & 1235.5 & 1256.1 & 1283.7$+$1303.2 & 1335.8 & 1396.8$+$1397.3 \\ \hline 16 & 1214.1$+$1224.1 & 1235.6 & 1252.4 & 1285.6$+$1302.2 & 1333.7$+$1343.7 & 1371.8$+$1397.4$+$1416.8$+$1425.7 \\ \hline \end{tabular} \end{sidewaystable*} \clearpage \setcounter{table}{5} \begin{table*} \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 1 with $\alpha_{\lambda}=-0.91$.}\label{tab-spec-1} \vspace*{-2ex} \fontsize{7}{7}\selectfont \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1058.4 & $0.139^{+0.015}_{-0.027}$ & $5.81^{+0.40}_{-0.74}$ & $0.105$\\ 1071.8 & $0.191^{+0.009}_{-0.017}$ & $6.76^{+0.41}_{-0.73}$ & $0.169$\\ 1084.9 & $0.094^{+0.014}_{-0.025}$ & $3.92^{+0.50}_{-0.88}$ & $0.063$\\ 1096.3 & $0.025^{+0.005}_{-0.010}$ & $1.40^{+0.43}_{-0.72}$ & $0.012$\\ 1111.8 & $0.047^{+0.003}_{-0.006}$ & $5.41^{+0.40}_{-0.72}$ & $0.038$\\ 1125.2 & $0.091^{+0.003}_{-0.005}$ & $7.82^{+0.37}_{-0.67}$ & $0.084$\\ 1161.6 & $0.046^{+0.007}_{-0.013}$ & $3.39^{+0.67}_{-1.13}$ & $0.029$\\ 1173.9 & $0.088^{+0.006}_{-0.010}$ & $5.21^{+0.44}_{-0.81}$ & $0.075$\\ 1182.1 & $0.031^{+0.008}_{-0.016}$ & $1.78^{+0.56}_{-1.13}$ & $0.012$\\ 1214.1 & $1.111^{+0.025}_{-0.047}$ & $17.33^{+0.27}_{-0.53}$ & $1.051$\\ \hline 1214.1 & $2.831^{+0.007}_{-0.014}$ & $5.61^{+0.02}_{-0.04}$ & $2.813$\\ 1225.6 & $0.809^{+0.010}_{-0.019}$ & 
$5.18^{+0.05}_{-0.09}$ & $0.784$\\ 1236.9 & $1.172^{+0.005}_{-0.011}$ & $7.12^{+0.04}_{-0.07}$ & $1.158$\\ 1256.7 & $0.404^{+0.006}_{-0.011}$ & $11.21^{+0.18}_{-0.35}$ & $0.390$\\ 1290.4 & $0.048^{+0.005}_{-0.009}$ & $4.07^{+0.39}_{-0.72}$ & $0.036$\\ 1304.4 & $0.194^{+0.005}_{-0.010}$ & $6.78^{+0.23}_{-0.44}$ & $0.181$\\ 1334.5 & $0.118^{+0.003}_{-0.005}$ & $6.45^{+0.14}_{-0.27}$ & $0.112$\\ 1346.7 & $0.035^{+0.001}_{-0.002}$ & $5.04^{+0.23}_{-0.44}$ & $0.032$\\ 1368.2 & $0.077^{+0.003}_{-0.005}$ & $9.136^{+0.34}_{-0.64}$ & $0.071$\\ 1382.5 & $0.112^{+0.002}_{-0.005}$ & $5.96^{+0.09}_{-0.18}$ & $0.106$\\ 1390.4 & $0.115^{+0.002}_{-0.004}$ & $4.77^{+0.10}_{-0.19}$ & $0.110$\\ 1399.8 & $0.326^{+0.003}_{-0.006}$ & $7.49^{+0.06}_{-0.11}$ & $0.318$\\ 1418.7 & $0.080^{+0.002}_{-0.004}$ & $7.67^{+0.16}_{-0.30}$ & $0.075$\\ 1436.3 & $0.043^{+0.003}_{-0.005}$ & $7.11^{+0.34}_{-0.65}$ & $0.036$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 2 with $\alpha_{\lambda}=-0.97$.}\label{tab-spec-2} \vspace*{-2ex} \fontsize{7}{7}\selectfont \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1059.6 & $0.154^{+0.013}_{-0.024}$ & $5.68^{+0.40}_{-0.71}$ & $0.125$\\ 1072.4 & $0.197^{+0.007}_{-0.013}$ & $5.53^{+0.20}_{-0.36}$ & $0.180$\\ 1084.6 & $0.106^{+0.013}_{-0.022}$ & $4.45^{+0.49}_{-0.85}$ & $0.078$\\ 1096.4 & $0.017^{+0.004}_{-0.008}$ & $1.99^{+0.56}_{-1.25}$ & $0.006$\\ 1111.5 & $0.030^{+0.003}_{-0.006}$ & $3.10^{+0.30}_{-0.56}$ & $0.022$\\ 1123.3 & $0.096^{+0.003}_{-0.006}$ & $9.38^{+0.47}_{-0.84}$ & $0.089$\\ 1161.8 & $0.057^{+0.014}_{-0.020}$ & $4.14^{+1.09}_{-1.60}$ & $0.033$\\ 1174.8 & $0.104^{+0.008}_{-0.014}$ & $4.90^{+0.36}_{-0.70}$ & $0.088$\\ 1182.4 & $0.035^{+0.006}_{-0.014}$ & $2.19^{+0.38}_{-0.73}$ & $0.018$\\ 1214.3 & $1.112^{+0.024}_{-0.046}$ & $17.08^{+0.37}_{-0.64}$ & $1.053$\\ \hline 1214.3& $2.715^{+0.007}_{-0.014}$ & $5.58^{+0.02}_{-0.05}$ & $2.696$\\ 1225.4& $0.768^{+0.010}_{-0.019}$ & $5.06^{+0.05}_{-0.09}$ & $0.742$\\ 1236.5& $1.186^{+0.005}_{-0.010}$ & $7.33^{+0.04}_{-0.07}$ & $1.172$\\ 1256.7& $0.403^{+0.006}_{-0.011}$ & $11.18^{+0.18}_{-0.36}$ & $0.389$\\ 1290.4& $0.043^{+0.005}_{-0.009}$ & $4.29^{+0.47}_{-0.87}$ & $0.031$\\ 1304.5& $0.185^{+0.005}_{-0.011}$ & $6.79^{+0.24}_{-0.45}$ & $0.171$\\ 1335.0& $0.100^{+0.002}_{-0.003}$ & $5.46^{+0.13}_{-0.23}$ & $0.096$\\ 1344.8& $0.026^{+0.001}_{-0.002}$ & $4.75^{+0.34}_{-0.58}$ & $0.023$\\ 1364.9& $0.048^{+0.002}_{-0.004}$ & $6.76^{+0.26}_{-0.49}$ & $0.043$\\ 1382.3& $0.127^{+0.002}_{-0.003}$ & $7.57^{+0.10}_{-0.19}$ & $0.123$\\ 1390.5& $0.067^{+0.002}_{-0.004}$ & $4.13^{+0.10}_{-0.19}$ & $0.062$\\ 1399.6& $0.304^{+0.002}_{-0.004}$ & $7.89^{+0.06}_{-0.11}$ & $0.300$\\ 1417.7& $0.0516^{+0.001}_{-0.002}$ & $5.64^{+0.17}_{-0.32}$ & $0.049$\\ 1430.8& $0.0326^{+0.002}_{-0.004}$ & $8.18^{+0.43}_{-0.81}$ & $0.028$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 3 with $\alpha_{\lambda}=-1.02$.}\label{tab-spec-3} \vspace*{-2ex} \fontsize{7}{7}\selectfont \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1059.5& $0.166^{+0.007}_{-0.013}$ & $6.94^{+0.31}_{-0.56}$ & $0.149$\\ 1073.9& $0.191^{+0.008}_{-0.015}$ & $5.79^{+0.15}_{-0.29}$ & $0.171$\\ 1086.1& $0.084^{+0.007}_{-0.013}$ & $3.37^{+0.21}_{-0.41}$ & $0.067$\\ 1096.9& 
$0.028^{+0.007}_{-0.013}$ & $3.05^{+0.88}_{-1.43}$ & $0.012$\\ 1118.3& $0.068^{+0.003}_{-0.006}$ & $8.23^{+0.32}_{-0.61}$ & $0.060$\\ 1125.7& $0.029^{+0.003}_{-0.008}$ & $2.59^{+0.27}_{-0.56}$ & $0.017$\\ 1134.4& $0.031^{+0.002}_{-0.004}$ & $4.12^{+0.48}_{-0.89}$ & $0.025$\\ 1161.4& $0.030^{+0.008}_{-0.017}$ & $2.45^{+0.67}_{-1.26}$ & $0.007$\\ 1176.5& $0.101^{+0.011}_{-0.020}$ & $7.60^{+0.62}_{-1.19}$ & $0.074$\\ 1214.3& $1.142^{+0.040}_{-0.078}$ & $16.08^{+0.29}_{-0.55}$ & $1.040$\\ \hline 1214.3& $2.544^{+0.008}_{-0.015}$ & $5.62^{+0.03}_{-0.06}$ & $2.524$\\ 1225.0& $0.713^{+0.011}_{-0.021}$ & $4.96^{+0.05}_{-0.10}$ & $0.685$\\ 1236.1& $1.177^{+0.005}_{-0.010}$ & $7.55^{+0.04}_{-0.07}$ & $1.164$\\ 1256.7& $0.404^{+0.006}_{-0.012}$ & $11.63^{+0.20}_{-0.39}$ & $0.388$\\ 1289.7& $0.044^{+0.005}_{-0.009}$ & $4.13^{+0.43}_{-0.78}$ & $0.032$\\ 1304.1& $0.181^{+0.006}_{-0.012}$ & $7.24^{+0.29}_{-0.54}$ & $0.166$\\ 1334.9& $0.083^{+0.002}_{-0.004}$ & $5.75^{+0.24}_{-0.47}$ & $0.078$\\ 1343.2& $0.028^{+0.004}_{-0.005}$ & $8.83^{+1.72}_{-3.02}$ & $0.022$\\ 1364.4& $0.038^{+0.003}_{-0.007}$ & $6.18^{+0.51}_{-0.88}$ & $0.029$\\ 1380.1& $0.104^{+0.006}_{-0.011}$ & $7.91^{+0.37}_{-0.64}$ & $0.088$\\ 1390.3& $0.066^{+0.005}_{-0.010}$ & $5.37^{+0.32}_{-0.55}$ & $0.052$\\ 1399.3& $0.290^{+0.006}_{-0.011}$ & $8.57^{+0.12}_{-0.24}$ & $0.275$\\ 1418.7& $0.050^{+0.002}_{-0.004}$ & $6.06^{+0.31}_{-0.55}$ & $0.045$\\ 1431.4& $0.030^{+0.005}_{-0.008}$ & $7.82^{+1.35}_{-2.15}$ & $0.020$\\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 4 with $\alpha_{\lambda}=-1.04$.}\label{tab-spec-4} \vspace*{-2ex} \fontsize{7}{7}\selectfont \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1059.5 & $0.099^{+0.008}_{-0.014}$ & $2.43^{+0.16}_{-0.31}$ & $0.081$\\ 1071.0 & $0.239^{+0.014}_{-0.027}$ & $9.54^{+0.33}_{-0.64}$ & $0.204$\\ 1089.3 & $0.054^{+0.011}_{-0.021}$ & $4.06^{+0.60}_{-1.10}$ & $0.027$\\ 1117.0 & $0.124^{+0.005}_{-0.010}$ & $8.32^{+0.26}_{-0.52}$ & $0.111$\\ 1126.2 & $0.055^{+0.003}_{-0.005}$ & $3.54^{+0.16}_{-0.31}$ & $0.048$\\ 1136.5 & $0.085^{+0.004}_{-0.008}$ & $4.69^{+0.20}_{-0.39}$ & $0.074$\\ 1175.2 & $0.093^{+0.007}_{-0.013}$ & $6.04^{+0.58}_{-0.99}$ & $0.076$\\ 1185.3 & $0.026^{+0.007}_{-0.015}$ & $1.58^{+0.78}_{-1.36}$ & $0.007$\\ 1214.8 & $1.236^{+0.042}_{-0.076}$ & $16.07^{+0.29}_{-0.72}$ & $1.140$\\ \hline 1214.8 & $2.465^{+0.008}_{-0.014}$ & $6.14^{+0.03}_{-0.06}$ & $2.446$\\ 1226.4 & $0.554^{+0.010}_{-0.020}$ & $4.74^{+0.06}_{-0.11}$ & $0.528$\\ 1236.4 & $1.142^{+0.005}_{-0.010}$ & $7.62^{+0.04}_{-0.07}$ & $1.129$\\ 1257.0 & $0.405^{+0.006}_{-0.012}$ & $11.54^{+0.21}_{-0.39}$ & $0.389$\\ 1290.6 & $0.038^{+0.005}_{-0.009}$ & $3.94^{+0.44}_{-0.83}$ & $0.026$\\ 1304.2 & $0.173^{+0.006}_{-0.011}$ & $7.28^{+0.32}_{-0.58}$ & $0.159$\\ 1334.7 & $0.077^{+0.001}_{-0.003}$ & $5.50^{+0.11}_{-0.21}$ & $0.074$\\ 1341.4 & $0.026^{+0.001}_{-0.002}$ & $4.28^{+0.23}_{-0.43}$ & $0.024$\\ 1364.8 & $0.041^{+0.002}_{-0.003}$ & $8.91^{+0.35}_{-0.63}$ & $0.037$\\ 1376.9 & $0.065^{+0.002}_{-0.003}$ & $5.77^{+0.17}_{-0.31}$ & $0.061$\\ 1386.5 & $0.114^{+0.003}_{-0.006}$ & $4.95^{+0.09}_{-0.16}$ & $0.106$\\ 1392.0 & $0.028^{+0.002}_{-0.004}$ & $2.82^{+0.16}_{-0.24}$ & $0.024$\\ 1399.0 & $0.297^{+0.001}_{-0.002}$ & $7.31^{+0.08}_{-0.17}$ & $0.294$\\ 1412.3 & $0.024^{+0.002}_{-0.004}$ & $3.71^{+0.26}_{-0.51}$ & $0.019$\\ 1419.6 & 
$0.054^{+0.001}_{-0.003}$ & $8.73^{+0.20}_{-0.37}$ & $0.050$\\ 1439.5 & $0.006^{+0.001}_{-0.002}$ & $2.37^{+0.53}_{-0.91}$ & $0.004$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 5 with $\alpha_{\lambda}=-1.19$.}\label{tab-spec-5} \vspace*{-2ex} \fontsize{7}{7}\selectfont \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1070.7 & $0.221^{+0.004}_{-0.007}$ & $8.60^{+0.22}_{-0.41}$ & $0.213$\\ 1087.6 & $0.055^{+0.004}_{-0.009}$ & $3.91^{+0.32}_{-0.62}$ & $0.043$\\ 1115.2 & $0.043^{+0.004}_{-0.007}$ & $2.10^{+0.18}_{-0.35}$ & $0.033$\\ 1124.4 & $0.101^{+0.004}_{-0.006}$ & $8.28^{+0.60}_{-1.08}$ & $0.094$\\ 1136.2 & $0.028^{+0.004}_{-0.007}$ & $2.17^{+0.25}_{-0.48}$ & $0.018$\\ 1152.3 & $0.039^{+0.006}_{-0.012}$ & $3.25^{+0.54}_{-1.01}$ & $0.024$\\ 1171.9 & $0.108^{+0.007}_{-0.013}$ & $7.94^{+0.39}_{-0.76}$ & $0.091$\\ 1214.3 & $1.123^{+0.023}_{-0.045}$ & $17.52^{+0.19}_{-0.37}$ & $1.063$\\ \hline 1214.3 & $2.819^{+0.007}_{-0.013}$ & $5.67^{+0.02}_{-0.04}$ & $2.802$\\ 1225.3 & $0.676^{+0.009}_{-0.018}$ & $4.78^{+0.04}_{-0.08}$ & $0.652$\\ 1236.0 & $1.203^{+0.005}_{-0.009}$ & $7.80^{+0.03}_{-0.07}$ & $1.191$\\ 1256.7 & $0.405^{+0.006}_{-0.011}$ & $11.49^{+0.17}_{-0.33}$ & $0.391$\\ 1291.4 & $0.041^{+0.004}_{-0.008}$ & $5.51^{+0.71}_{-1.20}$ & $0.031$\\ 1304.5 & $0.171^{+0.005}_{-0.009}$ & $6.77^{+0.28}_{-0.51}$ & $0.159$\\ 1335.8 & $0.091^{+0.001}_{-0.003}$ & $6.54^{+0.10}_{-0.20}$ & $0.088$\\ 1370.5 & $0.061^{+0.001}_{-0.003}$ & $11.17^{+0.19}_{-0.37}$ & $0.057$\\ 1396.5 & $0.310^{+0.001}_{-0.003}$ & $10.45^{+0.04}_{-0.07}$ & $0.307$\\ 1401.7 & $0.023^{+0.001}_{-0.002}$ & $2.76^{+0.15}_{-0.30}$ & $0.020$\\ 1410.9 & $0.019^{+0.002}_{-0.003}$ & $4.11^{+0.30}_{-0.59}$ & $0.015$\\ 1422.4 & $0.043^{+0.001}_{-0.003}$ & $5.33^{+0.13}_{-0.24}$ & $0.040$\\ 1436.2 & $0.021^{+0.002}_{-0.003}$ & $7.23^{+0.57}_{-0.99}$ & $0.017$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 6 with $\alpha_{\lambda}=-1.35$.}\label{tab-spec-6} \vspace*{-2ex} \fontsize{7}{7}\selectfont \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1100.9 & $0.023^{+0.003}_{-0.005}$ & $1.66^{+0.22}_{-0.38}$ & $0.017$\\ 1116.9 & $0.118^{+0.002}_{-0.004}$ & $5.08^{+0.13}_{-0.24}$ & $0.113$\\ 1125.0 & $0.040^{+0.003}_{-0.007}$ & $2.57^{+0.13}_{-0.25}$ & $0.030$\\ 1132.4 & $0.074^{+0.002}_{-0.005}$ & $4.69^{+0.23}_{-0.43}$ & $0.068$\\ 1174.2 & $0.076^{+0.008}_{-0.014}$ & $4.71^{+0.56}_{-0.99}$ & $0.057$\\ 1214.2 & $1.122^{+0.032}_{-0.059}$ & $17.05^{+0.25}_{-0.49}$ & $1.047$\\ \hline 1214.2& $2.920^{+0.007}_{-0.014}$ & $5.54^{+0.03}_{-0.05}$ & $2.902$\\ 1224.3& $0.586^{+0.011}_{-0.022}$ & $4.55^{+0.05}_{-0.10}$ & $0.557$\\ 1234.8& $1.298^{+0.005}_{-0.010}$ & $8.16^{+0.03}_{-0.066}$ & $1.284$\\ 1256.5& $0.404^{+0.005}_{-0.011}$ & $11.42^{+0.18}_{-0.37}$ & $0.390$\\ 1290.1& $0.046^{+0.004}_{-0.008}$ & $7.03^{+0.86}_{-1.52}$ & $0.036$\\ 1304.4& $0.181^{+0.006}_{-0.012}$ & $6.25^{+0.21}_{-0.40}$ & $0.164$\\ 1334.6& $0.106^{+0.004}_{-0.007}$ & $6.20^{+0.24}_{-0.42}$ & $0.097$\\ 1348.5& $0.033^{+0.002}_{-0.004}$ & $4.86^{+0.35}_{-0.65}$ & $0.028$\\ 1361.4& $0.041^{+0.004}_{-0.008}$ & $5.01^{+0.47}_{-0.87}$ & $0.032$\\ 1382.1& $0.131^{+0.005}_{-0.008}$ & $9.95^{+0.26}_{-0.50}$ & $0.120$\\ 1389.8& $0.081^{+0.004}_{-0.009}$ & 
$4.37^{+0.13}_{-0.24}$ & $0.069$\\ 1399.3& $0.196^{+0.004}_{-0.008}$ & $6.13^{+0.14}_{-0.27}$ & $0.186$\\ 1407.4& $0.125^{+0.005}_{-0.010}$ & $12.27^{+0.33}_{-0.60}$ & $0.111$\\ 1435.4& $0.030^{+0.004}_{-0.007}$ & $10.77^{+1.19}_{-2.08}$ & $0.021$\\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 7 with $\alpha_{\lambda}=-1.42$.}\label{tab-spec-7} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1061.6 & $0.211^{+0.008}_{-0.014}$ & $7.71^{+0.41}_{-0.75}$ & $0.194$\\ 1071.1 & $0.056^{+0.010}_{-0.020}$ & $2.36^{+0.28}_{-0.59}$ & $0.029$\\ 1079.3 & $0.137^{+0.007}_{-0.013}$ & $6.00^{+0.45}_{-0.80}$ & $0.120$\\ 1091.0 & $0.052^{+0.009}_{-0.017}$ & $3.33^{+0.60}_{-1.07}$ & $0.030$\\ 1108.4 & $0.055^{+0.014}_{-0.025}$ & $3.63^{+0.98}_{-1.49}$ & $0.024$\\ 1116.6 & $0.093^{+0.007}_{-0.014}$ & $3.81^{+0.35}_{-0.61}$ & $0.075$\\ 1127.0 & $0.126^{+0.013}_{-0.022}$ & $4.06^{+0.43}_{-0.72}$ & $0.098$\\ 1143.9 & $0.051^{+0.013}_{-0.021}$ & $4.07^{+1.08}_{-1.83}$ & $0.025$\\ 1172.1 & $0.086^{+0.009}_{-0.017}$ & $5.52^{+0.54}_{-0.99}$ & $0.064$\\ 1213.2 & $1.166^{+0.026}_{-0.049}$ & $16.18^{+0.18}_{-0.36}$ & $1.106$\\ \hline 1213.2& $2.723^{+0.015}_{-0.028}$ & $4.95^{+0.05}_{-0.09}$ & $2.687$\\ 1221.6& $1.319^{+0.020}_{-0.038}$ & $6.40^{+0.08}_{-0.14}$ & $1.270$\\ 1236.1& $1.115^{+0.007}_{-0.014}$ & $7.11^{+0.05}_{-0.10}$ & $1.096$\\ 1253.4& $0.498^{+0.025}_{-0.042}$ & $14.37^{+0.34}_{-0.71}$ & $0.447$\\ 1300.4& $0.136^{+0.044}_{-0.028}$ & $12.37^{+1.76}_{-3.03}$ & $0.098$\\ 1305.2& $0.092^{+0.012}_{-0.022}$ & $5.80^{+0.64}_{-1.19}$ & $0.062$\\ 1334.9& $0.094^{+0.004}_{-0.007}$ & $7.24^{+0.27}_{-0.51}$ & $0.085$\\ 1362.6& $0.008^{+0.002}_{-0.004}$ & $3.26^{+1.02}_{-1.71}$ & $0.003$\\ 1394.5& $0.118^{+0.006}_{-0.013}$ & $27.91^{+1.06}_{-2.08}$ & $0.102$\\ 1396.5& $0.227^{+0.004}_{-0.009}$ & $9.12^{+0.12}_{-0.24}$ & $0.216$\\ 1418.9& $0.006^{+0.002}_{-0.003}$ & $12.28^{+8.38}_{-8.97}$ & $0.002$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 8 with $\alpha_{\lambda}=-1.42$.}\label{tab-spec-8} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1060.3 & $0.201^{+0.017}_{-0.032}$ & $5.44^{+0.48}_{-0.87}$ & $0.161$\\ 1071.3 & $0.126^{+0.013}_{-0.030}$ & $3.38^{+0.29}_{-0.62}$ & $0.077$\\ 1080.8 & $0.091^{+0.022}_{-0.038}$ & $7.69^{+1.31}_{-2.66}$ & $0.045$\\ 1098.5 & $0.087^{+0.011}_{-0.019}$ & $3.90^{+0.39}_{-0.71}$ & $0.063$\\ 1111.5 & $0.109^{+0.010}_{-0.017}$ & $4.98^{+0.19}_{-0.35}$ & $0.088$\\ 1125.4 & $0.152^{+0.012}_{-0.021}$ & $6.61^{+0.24}_{-0.44}$ & $0.126$\\ 1141.0 & $0.085^{+0.011}_{-0.018}$ & $3.90^{+0.33}_{-0.60}$ & $0.062$\\ 1172.5 & $0.112^{+0.006}_{-0.012}$ & $7.71^{+0.51}_{-0.93}$ & $0.097$\\ 1213.8 & $1.249^{+0.028}_{-0.055}$ & $16.42^{+0.18}_{-0.36}$ & $1.178$\\ \hline 1213.8 & $3.121^{+0.008}_{-0.016}$ & $5.07^{+0.02}_{-0.04}$ & $3.099$\\ 1223.8 & $0.980^{+0.012}_{-0.023}$ & $5.00^{+0.04}_{-0.08}$ & $0.949$\\ 1235.9 & $1.317^{+0.006}_{-0.012}$ & $7.42^{+0.04}_{-0.07}$ & $1.301$\\ 1255.7 & $0.438^{+0.007}_{-0.013}$ & $12.02^{+0.25}_{-0.52}$ & $0.421$\\ 1285.7 & $0.039^{+0.004}_{-0.009}$ & $7.23^{+1.16}_{-1.87}$ & $0.028$\\ 1303.6 & $0.173^{+0.007}_{-0.015}$ & 
$7.30^{+0.31}_{-0.59}$ & $0.153$\\ 1334.2 & $0.073^{+0.002}_{-0.004}$ & $4.94^{+0.11}_{-0.20}$ & $0.068$\\ 1344.1 & $0.023^{+0.001}_{-0.002}$ & $7.01^{+0.90}_{-1.51}$ & $0.021$\\ 1363.0 & $0.028^{+0.002}_{-0.003}$ & $6.20^{+0.56}_{-1.00}$ & $0.024$\\ 1382.0 & $0.110^{+0.002}_{-0.003}$ & $8.35^{+0.21}_{-0.37}$ & $0.105$\\ 1390.3 & $0.125^{+0.002}_{-0.004}$ & $4.67^{+0.07}_{-0.13}$ & $0.116$\\ 1399.6 & $0.261^{+0.002}_{-0.004}$ & $5.89^{+0.04}_{-0.07}$ & $0.256$\\ 1413.3 & $0.082^{+0.001}_{-0.003}$ & $8.03^{+0.16}_{-0.30}$ & $0.078$\\ 1430.7 & $0.010^{+0.001}_{-0.003}$ & $2.43^{+0.27}_{-0.55}$ & $0.008$\\ 1438.6 & $0.008^{+0.001}_{-0.002}$ & $3.23^{+0.72}_{-1.26}$ & $0.005$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 9 with $\alpha_{\lambda}=-1.62$.}\label{tab-spec-9} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1070.2 & $0.226^{+0.003}_{-0.005}$ & $9.92^{+0.24}_{-0.44}$ & $0.219$\\ 1086.0 & $0.034^{+0.003}_{-0.007}$ & $1.92^{+0.25}_{-0.46}$ & $0.025$\\ 1099.6 & $0.077^{+0.007}_{-0.015}$ & $9.44^{+0.68}_{-1.19}$ & $0.058$\\ 1122.6 & $0.169^{+0.011}_{-0.022}$ & $10.38^{+0.39}_{-0.76}$ & $0.139$\\ 1172.6 & $0.095^{+0.009}_{-0.017}$ & $4.95^{+0.43}_{-0.80}$ & $0.073$\\ 1213.4 & $1.084^{+0.024}_{-0.045}$ & $17.44^{+0.23}_{-0.44}$ & $1.026$\\ \hline 1213.4 & $2.702^{+0.009}_{-0.017}$ & $5.42^{+0.03}_{-0.06}$ & $2.680$\\ 1223.4 & $1.045^{+0.012}_{-0.024}$ & $6.05^{+0.06}_{-0.11}$ & $1.013$\\ 1236.3 & $1.177^{+0.006}_{-0.013}$ & $7.49^{+0.04}_{-0.07}$ & $1.161$\\ 1255.6 & $0.444^{+0.007}_{-0.014}$ & $12.46^{+0.23}_{-0.45}$ & $0.426$\\ 1286.6 & $0.044^{+0.004}_{-0.007}$ & $6.36^{+0.64}_{-1.15}$ & $0.034$\\ 1303.6 & $0.175^{+0.007}_{-0.014}$ & $7.75^{+0.32}_{-0.60}$ & $0.158$\\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 10 with $\alpha_{\lambda}=-1.64$.}\label{tab-spec-10} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1056.3 & $0.180^{+0.009}_{-0.016}$ & $3.37^{+0.17}_{-0.30}$ & $0.161$\\ 1071.7 & $0.222^{+0.009}_{-0.016}$ & $8.10^{+0.29}_{-0.56}$ & $0.203$\\ 1086.8 & $0.054^{+0.008}_{-0.015}$ & $3.41^{+0.53}_{-0.94}$ & $0.035$\\ 1099.6 & $0.024^{+0.005}_{-0.009}$ & $2.30^{+0.59}_{-1.19}$ & $0.012$\\ 1117.3 & $0.083^{+0.004}_{-0.008}$ & $8.20^{+0.50}_{-0.94}$ & $0.073$\\ 1127.4 & $0.036^{+0.004}_{-0.008}$ & $2.91^{+0.34}_{-0.61}$ & $0.026$\\ 1141.7 & $0.025^{+0.005}_{-0.010}$ & $2.49^{+0.65}_{-1.11}$ & $0.012$\\ 1163.7 & $0.044^{+0.006}_{-0.013}$ & $2.70^{+0.44}_{-0.80}$ & $0.028$\\ 1174.3 & $0.090^{+0.005}_{-0.009}$ & $5.84^{+0.46}_{-0.81}$ & $0.078$\\ 1213.8 & $1.114^{+0.020}_{-0.037}$ & $17.88^{+0.28}_{-0.56}$ & $1.068$\\ \hline 1213.8 & $2.899^{+0.008}_{-0.016}$ & $5.32^{+0.02}_{-0.05}$ & $2.878$\\ 1223.9 & $0.903^{+0.011}_{-0.022}$ & $5.13^{+0.04}_{-0.08}$ & $0.874$\\ 1235.5 & $1.128^{+0.005}_{-0.010}$ & $7.44^{+0.05}_{-0.09}$ & $1.115$\\ 1252.7 & $0.483^{+0.007}_{-0.014}$ & $14.66^{+0.27}_{-0.51}$ & $0.465$\\ 1288.6 & $0.050^{+0.004}_{-0.008}$ & $6.39^{+0.58}_{-1.05}$ & $0.040$\\ 1304.6 & $0.171^{+0.007}_{-0.014}$ & $7.50^{+0.33}_{-0.62}$ & $0.153$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of 
emission lines for spectrum 11 with $\alpha_{\lambda}=-1.73$.}\label{tab-spec-11} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1058.1 & $0.222^{+0.004}_{-0.007}$ & $7.34^{+0.28}_{-0.50}$ & $0.213$\\ 1070.4 & $0.072^{+0.007}_{-0.014}$ & $4.70^{+0.22}_{-0.42}$ & $0.051$\\ 1079.9 & $0.145^{+0.004}_{-0.009}$ & $8.59^{+0.25}_{-0.49}$ & $0.133$\\ 1106.7 & $0.035^{+0.006}_{-0.011}$ & $2.60^{+0.50}_{-0.84}$ & $0.021$\\ 1115.7 & $0.076^{+0.004}_{-0.008}$ & $3.62^{+0.22}_{-0.42}$ & $0.065$\\ 1125.6 & $0.128^{+0.007}_{-0.011}$ & $7.00^{+0.34}_{-0.62}$ & $0.114$\\ 1168.5 & $0.069^{+0.007}_{-0.014}$ & $4.83^{+0.76}_{-1.38}$ & $0.052$\\ 1174.9 & $0.064^{+0.009}_{-0.017}$ & $2.60^{+0.31}_{-0.57}$ & $0.040$\\ 1213.7 & $1.212^{+0.024}_{-0.046}$ & $17.38^{+0.23}_{-0.45}$ & $1.151$\\ \hline 1213.7 & $2.984^{+0.007}_{-0.015}$ & $5.08^{+0.03}_{-0.05}$ & $2.964$\\ 1223.4 & $1.110^{+0.012}_{-0.024}$ & $5.50^{+0.04}_{-0.09}$ & $1.078$\\ 1236.2 & $1.103^{+0.005}_{-0.010}$ & $7.06^{+0.05}_{-0.09}$ & $1.089$\\ 1252.3 & $0.495^{+0.005}_{-0.010}$ & $14.55^{+0.19}_{-0.39}$ & $0.482$\\ 1288.4 & $0.037^{+0.004}_{-0.008}$ & $6.26^{+0.80}_{-1.42}$ & $0.027$\\ 1304.0 & $0.164^{+0.005}_{-0.010}$ & $6.90^{+0.25}_{-0.48}$ & $0.151$\\ 1334.0 & $0.065^{+0.001}_{-0.002}$ & $4.34^{+0.11}_{-0.21}$ & $0.062$\\ 1342.3 & $0.024^{+0.001}_{-0.002}$ & $3.56^{+0.17}_{-0.31}$ & $0.021$\\ 1376.2 & $0.053^{+0.001}_{-0.002}$ & $13.64^{+0.37}_{-0.71}$ & $0.050$\\ 1395.6 & $0.271^{+0.001}_{-0.002}$ & $9.18^{+0.05}_{-0.09}$ & $0.269$\\ 1401.5 & $0.037^{+0.001}_{-0.002}$ & $3.01^{+0.09}_{-0.18}$ & $0.034$\\ 1408.7 & $0.055^{+0.002}_{-0.003}$ & $4.89^{+0.12}_{-0.22}$ & $0.051$\\ 1421.0 & $0.059^{+0.001}_{-0.002}$ & $7.08^{+0.16}_{-0.29}$ & $0.056$\\ 1436.6 & $0.017^{+0.001}_{-0.003}$ & $5.45^{+0.41}_{-0.78}$ & $0.013$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 12 with $\alpha_{\lambda}=-1.88$.}\label{tab-spec-12} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1063.0 & $0.249^{+0.014}_{-0.025}$ & $7.30^{+0.52}_{-0.98}$ & $0.218$\\ 1074.1 & $0.082^{+0.014}_{-0.025}$ & $2.76^{+0.29}_{-0.57}$ & $0.051$\\ 1084.0 & $0.157^{+0.014}_{-0.023}$ & $6.56^{+0.57}_{-1.04}$ & $0.128$\\ 1115.5 & $0.108^{+0.005}_{-0.009}$ & $6.05^{+0.42}_{-0.73}$ & $0.097$\\ 1123.8 & $0.036^{+0.008}_{-0.019}$ & $2.55^{+0.33}_{-0.74}$ & $0.012$\\ 1129.0 & $0.038^{+0.005}_{-0.009}$ & $4.53^{+0.83}_{-1.43}$ & $0.027$\\ 1172.9 & $0.079^{+0.009}_{-0.014}$ & $4.36^{+0.67}_{-1.13}$ & $0.061$\\ 1178.5 & $0.032^{+0.008}_{-0.015}$ & $1.89^{+0.38}_{-0.81}$ & $0.012$\\ 1214.5 & $1.185^{+0.019}_{-0.036}$ & $17.34^{+0.26}_{-0.49}$ & $1.139$\\ \hline 1214.5 & $3.251^{+0.015}_{-0.025}$ & $5.85^{+0.03}_{-0.05}$ & $3.219$\\ 1225.7 & $0.478^{+0.012}_{-0.024}$ & $3.90^{+0.06}_{-0.12}$ & $0.445$\\ 1235.1 & $1.363^{+0.008}_{-0.015}$ & $8.21^{+0.04}_{-0.08}$ & $1.344$\\ 1256.0 & $0.460^{+0.014}_{-0.025}$ & $12.54^{+0.30}_{-0.67}$ & $0.429$\\ 1281.9 & $0.036^{+0.006}_{-0.012}$ & $4.55^{+1.00}_{-1.77}$ & $0.019$\\ 1301.6 & $0.133^{+0.014}_{-0.032}$ & $10.64^{+1.08}_{-1.88}$ & $0.088$\\ 1304.9 & $0.063^{+0.009}_{-0.017}$ & $4.43^{+0.61}_{-1.11}$ & $0.041$\\ 1334.8 & $0.064^{+0.001}_{-0.002}$ & $4.74^{+0.08}_{-0.15}$ & $0.062$\\ 1345.1 & $0.015^{+0.001}_{-0.001}$ & $3.43^{+0.20}_{-0.37}$ & 
$0.014$\\ 1385.1 & $0.087^{+0.001}_{-0.002}$ & $16.27^{+0.26}_{-0.49}$ & $0.083$\\ 1393.9 & $0.200^{+0.001}_{-0.002}$ & $7.60^{+0.06}_{-0.12}$ & $0.197$\\ 1401.0 & $0.065^{+0.001}_{-0.002}$ & $3.33^{+0.06}_{-0.11}$ & $0.062$\\ 1407.8 & $0.101^{+0.002}_{-0.003}$ & $6.04^{+0.06}_{-0.12}$ & $0.096$\\ 1423.3 & $0.048^{+0.001}_{-0.001}$ & $7.55^{+0.18}_{-0.34}$ & $0.047$\\ 1438.5 & $0.012^{+0.001}_{-0.002}$ & $4.81^{+0.36}_{-0.71}$ & $0.010$\\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 13 with $\alpha_{\lambda}=-1.92$.}\label{tab-spec-13} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1063.8 & $0.144^{+0.003}_{-0.006}$ & $8.71^{+0.52}_{-1.07}$ & $0.136$\\ 1072.6 & $0.093^{+0.005}_{-0.010}$ & $2.65^{+0.13}_{-0.26}$ & $0.080$\\ 1082.0 & $0.044^{+0.005}_{-0.010}$ & $2.07^{+0.24}_{-0.48}$ & $0.030$\\ 1087.5 & $0.068^{+0.003}_{-0.006}$ & $3.11^{+0.22}_{-0.40}$ & $0.059$\\ 1103.3 & $0.037^{+0.003}_{-0.006}$ & $2.78^{+0.25}_{-0.46}$ & $0.029$\\ 1111.9 & $0.070^{+0.003}_{-0.005}$ & $4.08^{+0.17}_{-0.32}$ & $0.063$\\ 1124.3 & $0.095^{+0.003}_{-0.005}$ & $4.80^{+0.13}_{-0.25}$ & $0.089$\\ 1135.9 & $0.023^{+0.003}_{-0.005}$ & $2.71^{+0.35}_{-0.63}$ & $0.016$\\ 1153.9 & $0.049^{+0.004}_{-0.008}$ & $3.87^{+0.37}_{-0.67}$ & $0.039$\\ 1164.1 & $0.047^{+0.003}_{-0.007}$ & $4.36^{+0.28}_{-0.53}$ & $0.038$\\ 1172.7 & $0.125^{+0.005}_{-0.009}$ & $5.89^{+0.24}_{-0.45}$ & $0.113$\\ 1214.5 & $1.471^{+0.020}_{-0.038}$ & $16.84^{+0.13}_{-0.27}$ & $1.422$\\ \hline 1214.5 & $3.211^{+0.004}_{-0.008}$ & $5.63^{+0.01}_{-0.02}$ & $3.200$\\ 1225.8 & $0.865^{+0.009}_{-0.018}$ & $4.85^{+0.02}_{-0.04}$ & $0.842$\\ 1237.4 & $1.413^{+0.005}_{-0.010}$ & $7.39^{+0.05}_{-0.10}$ & $1.399$\\ 1250.8 & $0.104^{+0.006}_{-0.011}$ & $3.67^{+0.16}_{-0.31}$ & $0.090$\\ 1258.4 & $0.386^{+0.005}_{-0.009}$ & $8.54^{+0.10}_{-0.19}$ & $0.374$\\ 1276.7 & $0.093^{+0.005}_{-0.009}$ & $14.14^{+0.78}_{-1.37}$ & $0.082$\\ 1304.5 & $0.145^{+0.005}_{-0.010}$ & $6.90^{+0.21}_{-0.42}$ & $0.132$\\ 1333.9 & $0.029^{+0.004}_{-0.007}$ & $3.92^{+0.47}_{-0.84}$ & $0.019$\\ 1336.0 & $0.055^{+0.004}_{-0.008}$ & $9.45^{+0.80}_{-1.36}$ & $0.044$\\ 1376.0 & $0.069^{+0.007}_{-0.011}$ & $18.68^{+0.71}_{-1.34}$ & $0.055$\\ 1397.9 & $0.302^{+0.002}_{-0.004}$ & $10.62^{+0.09}_{-0.19}$ & $0.296$\\ 1419.3 & $0.030^{+0.002}_{-0.004}$ & $6.16^{+0.47}_{-0.84}$ & $0.025$\\ 1429.8 & $0.046^{+0.005}_{-0.008}$ & $10.20^{+1.14}_{-2.0}$ & $0.036$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 14 with $\alpha_{\lambda}=-2.02$.}\label{tab-spec-14} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1101.6 & $0.021^{+0.004}_{-0.008}$ & $2.84^{+0.67}_{-1.11}$ & $0.011$\\ 1116.2 & $0.074^{+0.004}_{-0.009}$ & $4.61^{+0.24}_{-0.46}$ & $0.063$\\ 1126.2 & $0.060^{+0.004}_{-0.008}$ & $4.05^{+0.32}_{-0.57}$ & $0.050$\\ 1135.5 & $0.033^{+0.004}_{-0.009}$ & $2.44^{+0.39}_{-0.71}$ & $0.021$\\ 1164.2 & $0.031^{+0.005}_{-0.009}$ & $2.40^{+0.30}_{-0.58}$ & $0.019$\\ 1172.0 & $0.125^{+0.003}_{-0.005}$ & $5.42^{+0.25}_{-0.45}$ & $0.118$\\ 1214.4 & $1.364^{+0.016}_{-0.031}$ & $17.41^{+0.14}_{-0.27}$ & $1.325$\\ \hline 1214.4 & $3.299^{+0.007}_{-0.014}$ & $5.43^{+0.02}_{-0.04}$ & 
$3.280$\\ 1225.0 & $0.640^{+0.010}_{-0.020}$ & $4.20^{+0.04}_{-0.08}$ & $0.614$\\ 1235.1 & $1.410^{+0.006}_{-0.012}$ & $8.18^{+0.04}_{-0.07}$ & $1.394$\\ 1255.7 & $0.470^{+0.007}_{-0.013}$ & $12.36^{+0.25}_{-0.59}$ & $0.452$\\ 1286.2 & $0.057^{+0.004}_{-0.008}$ & $7.66^{+0.86}_{-1.61}$ & $0.046$\\ 1304.0 & $0.167^{+0.007}_{-0.018}$ & $6.59^{+0.29}_{-0.57}$ & $0.139$\\ 1335.3 & $0.045^{+0.001}_{-0.003}$ & $4.01^{+0.15}_{-0.29}$ & $0.042$\\ 1388.8 & $0.086^{+0.002}_{-0.004}$ & $13.12^{+0.17}_{-0.35}$ & $0.081$\\ 1398.5 & $0.227^{+0.001}_{-0.003}$ & $9.85^{+0.07}_{-0.14}$ & $0.224$\\ 1422.1 & $0.038^{+0.001}_{-0.002}$ & $6.73^{+0.23}_{-0.45}$ & $0.035$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 15 with $\alpha_{\lambda}=-2.07$.}\label{tab-spec-15} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1055.8 & $0.165^{+0.014}_{-0.022}$ & $17.41^{+3.38}_{-5.31}$ & $0.138$\\ 1066.6 & $0.078^{+0.006}_{-0.011}$ & $3.88^{+0.36}_{-0.65}$ & $0.065$\\ 1072.5 & $0.077^{+0.012}_{-0.021}$ & $2.38^{+0.29}_{-0.52}$ & $0.051$\\ 1083.2 & $0.066^{+0.012}_{-0.021}$ & $3.36^{+0.52}_{-0.96}$ & $0.040$\\ 1108.3 & $0.024^{+0.004}_{-0.008}$ & $1.88^{+0.50}_{-0.90}$ & $0.013$\\ 1117.3 & $0.063^{+0.003}_{-0.006}$ & $3.33^{+0.20}_{-0.38}$ & $0.055$\\ 1125.5 & $0.063^{+0.004}_{-0.007}$ & $4.07^{+0.32}_{-0.56}$ & $0.054$\\ \hline 1213.7 & $3.154^{+0.007}_{-0.015}$ & $5.06^{+0.02}_{-0.041}$ & $3.135$\\ 1223.2 & $1.010^{+0.011}_{-0.021}$ & $5.08^{+0.04}_{-0.07}$ & $0.982$\\ 1235.5 & $1.364^{+0.005}_{-0.011}$ & $8.15^{+0.03}_{-0.07}$ & $1.350$\\ 1256.1 & $0.480^{+0.007}_{-0.012}$ & $12.34^{+0.20}_{-0.38}$ & $0.464$\\ 1283.7 & $0.068^{+0.004}_{-0.008}$ & $6.25^{+0.44}_{-0.83}$ & $0.058$\\ 1303.2 & $0.171^{+0.006}_{-0.011}$ & $7.78^{+0.32}_{-0.60}$ & $0.156$\\ 1335.8 & $0.044^{+0.003}_{-0.005}$ & $4.98^{+0.44}_{-0.80}$ & $0.037$\\ 1396.8 & $0.138^{+0.012}_{-0.022}$ & $17.97^{+1.07}_{-1.84}$ & $0.110$\\ 1397.3 & $0.157^{+0.014}_{-0.027}$ & $8.27^{+0.33}_{-0.68}$ & $0.122$\\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \begin{minipage}[t]{0.3\linewidth}\centering \caption{ Parameters of emission lines for spectrum 16 with $\alpha_{\lambda}=-2.14$.}\label{tab-spec-16} \fontsize{7}{7}\selectfont \vspace*{-2ex} \begin{tabular}{p{0.7cm}p{1.3cm}p{1.2cm}p{0.6cm}} \hline $\lambda_{0}$,\,\AA & a & w,\,\AA & a-3$\sigma$\\ \hline 1052.4 & $0.136^{+0.003}_{-0.007}$ & $3.61^{+0.15}_{-0.28}$ & $0.128$\\ 1060.6 & $0.137^{+0.005}_{-0.010}$ & $4.15^{+0.12}_{-0.23}$ & $0.124$\\ 1071.4 & $0.158^{+0.003}_{-0.005}$ & $4.50^{+0.10}_{-0.19}$ & $0.151$\\ 1083.1 & $0.073^{+0.003}_{-0.006}$ & $4.29^{+0.22}_{-0.42}$ & $0.065$\\ 1111.3 & $0.057^{+0.003}_{-0.006}$ & $4.42^{+0.22}_{-0.41}$ & $0.049$\\ 1123.7 & $0.093^{+0.003}_{-0.006}$ & $6.17^{+0.20}_{-0.38}$ & $0.084$\\ 1140.1 & $0.027^{+0.003}_{-0.007}$ & $2.72^{+0.41}_{-0.74}$ & $0.018$\\ 1155.5 & $0.064^{+0.007}_{-0.011}$ & $5.71^{+0.69}_{-1.13}$ & $0.050$\\ 1165.7 & $0.033^{+0.005}_{-0.010}$ & $2.19^{+0.26}_{-0.51}$ & $0.021$\\ 1174.3 & $0.114^{+0.005}_{-0.010}$ & $6.10^{+0.31}_{-0.60}$ & $0.102$\\ 1214.1 & $1.351^{+0.019}_{-0.037}$ & $17.12^{+0.20}_{-0.39}$ & $1.303$\\ \hline 1214.1 & $3.092^{+0.009}_{-0.017}$ & $5.21^{+0.03}_{-0.05}$ & $3.070$\\ 1224.1 & $0.917^{+0.014}_{-0.028}$ & $4.83^{+0.05}_{-0.09}$ & $0.879$\\ 1235.6 & $1.176^{+0.006}_{-0.012}$ & $7.74^{+0.06}_{-0.12}$ & 
$1.160$\\ 1252.4 & $0.516^{+0.007}_{-0.014}$ & $15.01^{+0.33}_{-0.67}$ & $0.497$\\ 1285.6 & $0.037^{+0.005}_{-0.010}$ & $6.83^{+1.27}_{-2.13}$ & $0.024$\\ 1302.2 & $0.155^{+0.007}_{-0.015}$ & $7.15^{+0.39}_{-0.75}$ & $0.134$\\ 1333.7 & $0.046^{+0.002}_{-0.004}$ & $5.10^{+0.31}_{-0.57}$ & $0.041$\\ 1343.7 & $0.021^{+0.003}_{-0.005}$ & $8.66^{+1.55}_{-2.56}$ & $0.014$\\ 1371.8 & $0.041^{+0.004}_{-0.006}$ & $9.95^{+0.46}_{-0.88}$ & $0.033$\\ 1397.4 & $0.300^{+0.004}_{-0.006}$ & $10.71^{+0.07}_{-0.14}$ & $0.293$\\ 1416.8 & $0.016^{+0.002}_{-0.004}$ & $3.99^{+0.45}_{-0.84}$ & $0.010$\\ 1425.7 & $0.045^{+0.004}_{-0.006}$ & $7.35^{+0.57}_{-1.01}$ & $0.038$\\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}[t]{0.65\linewidth}\centering \caption{The values of $b$ (or $d$ in case of range 5) characterizing continuum for each range. Here n is the number of spectrum.}\label{tab:b-param} \fontsize{8}{8}\selectfont \vspace*{-2ex} \begin{tabular}{c|c|c|c|c|c} \hline n & range 1 & range 2 & range 3 & range 4 & range 5\\ \hline 1 & $0.940^{+0.011}_{-0.032}$ & $0.953^{+0.003}_{-0.007}$ & $0.973^{+0.007}_{-0.018}$ & $1.052^{+0.005}_{-0.011}$ & $717^{+2}_{-4}$ \\ 2 & $0.963^{+0.012}_{-0.034}$ & $0.977^{+0.003}_{-0.007}$ & $0.989^{+0.015}_{-0.044}$ & $1.067^{+0.005}_{-0.011}$ & $1136^{+2}_{-4}$ \\ 3 & $0.971^{+0.007}_{-0.016}$ & $0.995^{+0.003}_{-0.008}$ & $1.014^{+0.010}_{-0.027}$ & $1.068^{+0.006}_{-0.013}$ & $1634^{+8}_{-21}$ \\ 4 & $0.973^{+0.014}_{-0.028}$ & $0.957^{+0.005}_{-0.009}$ & $1.047^{+0.007}_{-0.017}$ & $1.084^{+0.006}_{-0.013}$ & $1920^{+2}_{-5}$ \\ 5 & $1.011^{+0.003}_{-0.005}$ & $1.001^{+0.004}_{-0.010}$ & $1.010^{+0.006}_{-0.014}$ & $1.102^{+0.005}_{-0.012}$ & $5660^{+8}_{-16}$ \\ 6 &--- & $1.000^{+0.002}_{-0.006}$ & $1.072^{+0.008}_{-0.018}$ & $1.113^{+0.005}_{-0.010}$ & $17836^{+65}_{-140}$ \\ 7 & $1.041^{+0.007}_{-0.020}$ & $1.038^{+0.013}_{-0.043}$ & $1.083^{+0.011}_{-0.024}$ & $1.113^{+0.005}_{-0.010}$ & $29541^{+148}_{-304}$ \\ 8 & $1.077^{+0.020}_{-0.042}$ & $1.007^{+0.011}_{-0.025}$ & $1.099^{+0.005}_{-0.011}$ & $1.123^{+0.006}_{-0.013}$ & $30490^{+37}_{-88}$ \\ 9 & $1.066^{+0.003}_{-0.006}$ & $1.066^{+0.003}_{-0.006}$ & $1.099^{+0.012}_{-0.025}$ & $1.128^{+0.007}_{-0.015}$ & ---\\ 10 & $1.071^{+0.008}_{-0.018}$ & $1.087^{+0.005}_{-0.013}$ & $1.089^{+0.006}_{-0.013}$ & $1.126^{+0.008}_{-0.016}$ & --- \\ 11 & $1.066^{+0.004}_{-0.008}$ & $1.075^{+0.006}_{-0.014}$ & $1.123^{+0.008}_{-0.017}$ & $1.160^{+0.004}_{-0.009}$ & $292221^{+327}_{-699}$ \\ 12 & $1.103^{+0.015}_{-0.030}$ & $1.124^{+0.006}_{-0.016}$ & $1.165^{+0.014}_{-0.031}$ & $1.149^{+0.013}_{-0.032}$ & $865732^{+587}_{-1183}$ \\ 13 & $1.142^{+0.003}_{-0.006}$ & $1.121^{+0.002}_{-0.005}$ & $1.119^{+0.004}_{-0.008}$ & $1.178^{+0.003}_{-0.007}$ & $1136890^{+6492}_{-17148}$ \\ 14 & --- & $1.136^{+0.003}_{-0.007}$ & $1.149^{+0.003}_{-0.006}$ & $1.193^{+0.006}_{-0.012}$ & $2452040^{+1525}_{-3088}$ \\ 15 & $1.143^{+0.015}_{-0.052}$ & $1.155^{+0.004}_{-0.009}$ & --- & $1.187^{+0.006}_{-0.012}$ & $3499960^{+8783}_{-20747}$ \\ 16 & $1.147^{+0.003}_{-0.005}$ & $1.124^{+0.003}_{-0.007}$ & $1.123^{+0.008}_{-0.021}$ & $1.208^{+0.007}_{-0.014}$ & $5788270^{+21075}_{-47788}$ \\ \hline \end{tabular} \end{minipage} \end{table*} \clearpage \begin{figure} \centering \epsfig{figure=spec-bin1-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin2-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin3-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin4-fit.eps,width=.99\linewidth} \caption{Composite spectra 1$-$4 with $\alpha_{\lambda}$ 
(from top to bottom): $-0.91$, $-0.97$, $-1.02$, $-1.04$ (blue part).} \label{fig:spec-1-4-a} \end{figure} \begin{figure} \centering \epsfig{figure=spec-bin1-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin2-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin3-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin4-fit-lena.eps,width=.99\linewidth} \caption{Composite spectra 1$-$4 with $\alpha_{\lambda}$ (from top to bottom): $-0.91$, $-0.97$, $-1.02$, $-1.04$ (red part).} \label{fig:spec-1-4-b} \end{figure} \clearpage \begin{figure} \centering \epsfig{figure=spec-bin5-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin6-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin7-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin8-fit.eps,width=.99\linewidth} \caption{Composite spectra 5$-$8 with $\alpha_{\lambda}$ (from top to bottom): $-1.19$, $-1.35$, $-1.42$, $-1.42$ (blue part).} \label{fig:spec-5-8-a} \end{figure} \begin{figure} \centering \epsfig{figure=spec-bin5-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin6-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin7-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin8-fit-lena.eps,width=.99\linewidth} \caption{Composite spectra 5$-$8 with $\alpha_{\lambda}$ (from top to bottom): $-1.19$, $-1.35$, $-1.42$, $-1.42$ (red part).} \label{fig:spec-5-8-b} \end{figure} \clearpage \begin{figure} \centering \epsfig{figure=spec-bin9-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin10-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin11-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin12-fit.eps,width=.99\linewidth} \caption{Composite spectra 9$-$12 with $\alpha_{\lambda}$ (from top to bottom): $-1.62$, $-1.64$, $-1.73$, $-1.88$ (blue part).} \label{fig:spec-9-12-a} \end{figure} \begin{figure} \centering \epsfig{figure=spec-bin9-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin10-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin11-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin12-fit-lena.eps,width=.99\linewidth} \caption{Composite spectra 9$-$12 with $\alpha_{\lambda}$ (from top to bottom): $-1.62$, $-1.64$, $-1.73$, $-1.88$ (red part).} \label{fig:spec-9-12-b} \end{figure} \clearpage \vspace*{1ex} \label{lastpage} \begin{figure} \centering \epsfig{figure=spec-bin13-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin14-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin15-fit.eps,width=.99\linewidth} \epsfig{figure=spec-bin16-fit.eps,width=.99\linewidth} \caption{Composite spectra 13$-$16 with $\alpha_{\lambda}$ (from top to bottom): $-1.92$, $-2.02$, $-2.07$, $-2.14$ (blue part).} \label{fig:spec-13-16-a} \end{figure} \begin{figure} \centering \epsfig{figure=spec-bin13-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin14-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin15-fit-lena.eps,width=.99\linewidth} \epsfig{figure=spec-bin16-fit-lena.eps,width=.99\linewidth} \caption{Composite spectra 13$-$16 with $\alpha_{\lambda}$ (from top to bottom): $-1.92$, $-2.02$, $-2.07$, $-2.14$ (red part).} \label{fig:spec-13-16-b} \end{figure} \end{document}
1,108,101,562,631
arxiv
\section{Introduction} For many years, scientists have used machine learning (ML) to decode and understand human brain activity in response to visual stimuli. The great progress of deep neural networks (DNNs) in the last decade has provided researchers with powerful tools and a large number of unexplored opportunities to achieve better brain decoding and visual reconstructions from functional magnetic resonance imaging (fMRI) data. A variety of approaches have been taken to address image reconstruction from brain data. Before the deep learning era, researchers achieved reconstructions of simple binary stimuli directly from fMRI data~\cite{miyawaki2008visual}. Even though the reconstruction of complex natural images was hardly possible in those days, there were attempts to identify the image within a dataset, instead of reconstructing it: for example, quantitative receptive field models were used to identify the presented image~\cite{kay2008identifying}; in another work~\cite{naselaris2009bayesian}, the authors made use of Bayesian methods to find the image with the highest likelihood. In recent years, deep networks have brought significant improvements in this field, with the reconstruction of handwritten digits using deep belief networks~\cite{van2010neural}, of face stimuli with variational auto-encoders (VAEs)~\cite{vanrullen2019reconstructing}, and of natural scenes with feed-forward networks~\cite{shen2019deep,beliy2019voxels}, generative adversarial networks (GANs)~\cite{st2018generative, seeliger2018generative}, and dual-VAE/GAN~\cite{ren2019reconstructing}. Most reconstruction methods for natural images, however, tend to emphasize pixel-level similarity with the original images, and rarely produce recognizable objects, or visually plausible or semantically meaningful scenes. Inspired by~\cite{vanrullen2019reconstructing}, we propose a method to reconstruct natural scenes from fMRI data using a recently proposed large-scale bi-directional generative adversarial network, called BigBiGAN~\cite{donahue2019large}. This network is the current state-of-the-art for unconditional image generation on ImageNet in terms of image quality and visual plausibility. In our proposed method, the brain data is mapped to the latent space of the BigBiGAN (pre-trained on ImageNet), whose generator is then used to reconstruct the image. Fig.~\ref{fig:method} demonstrates an overview of the proposed method. Specifically, a training set of natural images that is shown to the human subjects is also fed into BigBiGAN's encoder to get ``original'' latent vectors. Then, a linear mapping is computed between brain responses to the training images and their corresponding original latent vectors. Applying this mapping to the brain data for novel test images, a set of ``predicted'' latent vectors is then generated. Finally, these predicted latent vectors are passed on to the BigBiGAN's generator for image reconstruction. We demonstrate that the proposed method is able to outperform others by generating high-resolution naturalistic reconstructions thanks to the BigBiGAN generator. We justify our claims by quantitative comparisons of reconstructions to the original images in the high-level representational space of a state-of-the-art deep neural network. 
\begin{figure} \begin{subfigure}{\columnwidth} \centering \includegraphics[width=\columnwidth]{training.pdf} \caption{} \label{fig:method_a} \end{subfigure} \begin{subfigure}{\columnwidth} \centering \includegraphics[width=\columnwidth]{testing_rev1.pdf} \caption{} \label{fig:method_b} \end{subfigure} \caption{The proposed method. (a) Training phase. We compute a linear mapping (computing the linear transform matrix $W$) from $120$-D latent vectors (derived from the BigBiGAN encoder or from PCA decomposition) to $nv$-D fMRI patterns. $nv$ is the number of voxels inside the brain region of interest. (b) Test phase. The obtained mapping is inversely used to transform fMRI patterns of test images into the latent vectors. The image is then reconstructed using BigBiGAN's generator (or a PCA inverse transform).} \label{fig:method} \end{figure} \section{Previous Works} We begin this section by describing our earlier work from which the method was adapted. In~\cite{vanrullen2019reconstructing}, we took advantage of the latent space of a VAE trained with a GAN procedure on a large set of faces. By learning a linear mapping between fMRI patterns and $1024$-dimensional VAE latent vectors, and using the GAN generator to reconstruct input images, we established a new state-of-the-art for fMRI-based face reconstruction. Moreover, the method even allowed for decoding face gender or face mental imagery. Despite these promising results on faces, dealing with natural images remains a hard challenge. In another study~\cite{han2018variational}, authors used a VAE for reconstructing naturalistic movie stimuli. They first trained a VAE, with five layers for encoding and five layers for decoding, on the ImageNet dataset. Then, similar to~\cite{vanrullen2019reconstructing}, they converted the fMRI patterns to the VAE's latent space through linear mapping. Although they reported an appreciable level of success, the reconstructions were still blurry and difficult to recognize. Studies in this field are not limited to the latent space of VAEs. In~\cite{shen2019deep}, the feature space of deep convolutional networks (DCNs) was used for fMRI decoding and image reconstruction. To do so, a decoder was first trained to transform fMRI patterns into the DCN's image representations. Then, for each fMRI pattern, an initial image was proposed and passed through iterative optimization steps. In each iteration, the image was given to the DCN and the difference between its feature representation and the one from the actual image was computed as a loss value. Finally, pixel values were optimized to decrease this loss. The authors also examined optimization in the space of deep generative networks instead of in pixel space. According to the obtained reconstructions, their method was able to capture input attributes such as object color, position, and a coarse estimation of the shape. However, images remained blurry and the objects difficultly recognizable. Other studies have proposed original network architectures instead of using pre-existing ones. In~\cite{beliy2019voxels} an encoder/decoder structure was proposed, in which the encoder maps images to fMRI data, while the decoder does the reverse. In the first step, the encoder and decoder were separately trained on (image, fMRI) data pairs. Since the number of data pairs was insufficient for proper generalization, the authors applied a second round of training in an unsupervised fashion. 
In yet another study~\cite{ren2019reconstructing}, the authors proposed a dual-VAE, trained with a GAN procedure. This method involved three stages of training. In Stage 1, the encoder, generator, and discriminator were trained on original images vs. generated ones. In Stage 2, the generator was fixed, the encoder was trained on fMRI data, and the discriminator was trained with reconstructed images from the fMRI data and reconstructed images from Stage 1. Finally, in Stage 3, the encoder was fixed, and the generator and discriminator networks were fine-tuned using the original images and reconstructed images from the fMRI data. This three-stage method not only outperformed previous studies in image decoding, but also generated more crisp and visually plausible reconstructions. However, object identity was not always evident in the reconstructed images. In this paper, we reconstruct images from human brain activity patterns using the state-of-the-art in natural image generation, a large-scale bi-directional GAN coined ``BigBiGAN''~\cite{donahue2019large}. Notably, the high-level image attributes captured in the latent space of the BigBiGAN allow us to go beyond pixel-wise similarity between the original and reconstructed images, and to reconstruct realistic and visually plausible scenes that express high-level semantic and category-level information from brain activity patterns. \section{Materials and Methods} \subsection{fMRI Data} In this paper, we used open-source fMRI data provided by~\cite{horikawa2017generic}. Images in the stimulus set were selected from ImageNet, and included $1200$ training samples (1 presentation each) from $150$ categories ($150\times 8$), and $50$ test samples (35 presentations each) from $50$ categories. Training and test categories were independent of each other. Five healthy subjects viewed these training and test images in an fMRI scanner in separate sessions. Each fMRI run consisted of a fixation point ($33$s), $50$ image presentations ($9$s per image, flashing at $2$Hz), and a final fixation point ($6$s). Moreover, $5$ images were randomly repeated during a run and subjects performed a one-back task on these images (i.e., they pressed a button when the same image was presented on two consecutive trials). We downloaded the raw data\footnote{\url{https://openneuro.org/datasets/ds001246/}} and applied a standard preprocessing pipeline: slice-time correction, realignment, and coregistration to the T$1$w anatomical image using SPM12 software\footnote{\url{https://www.fil.ion.ucl.ac.uk/spm/software/spm12/}}. Details of the parameters used for preprocessing can be found in~\cite{vanrullen2019reconstructing}. The downloaded fMRI dataset also provided pre-defined regions of interest (ROIs) that covered visual cortex. The onset and duration of each image were entered into a general linear model (GLM) as regressors (a separate GLM was used for the training and test sessions). \subsection{BigBiGAN} BigBiGAN is a state-of-the-art large-scale bi-directional generative network for natural images~\cite{donahue2019large}. It is a successor of the BiGAN bi-directional GAN~\cite{donahue2016adversarial}, but adopts the generator and discriminator architectures from the more recent BigGAN~\cite{brock2018large}. Similar to BiGAN, the encoder and generator are trained indirectly via a joint discriminator that has to discriminate real from fake [latent vector, data] pairs.
The encoder maps data into the latent vectors (real pairs), while the generator reconstructs data from latent vectors (fake pairs). Unlike BigGAN, a conditional GAN which requires a separate ``conditioning'' vector for object category, BigBiGAN's generator has a unified 120-dimensional latent space which captures all properties of objects, including category and pose. In other words, each image can be expressed as a $120$-dimensional vector in the network's latent space, and any latent vector can be mapped back to the corresponding image. The low-dimensionality of the BigBiGAN model makes it particularly appealing for fMRI-based decoding, given the relatively small amount of brain data available for training our system (see~\ref{subsec:decoding}). In this study, we used the largest pre-trained BigBiGAN model, revnet50x4, with $256\times 256$ image resolution. The model is publicly available on TensorFlow Hub\footnote{\url{https://tfhub.dev/deepmind/bigbigan-revnet50x4/1}}. \subsection{PCA Model} As a baseline image decomposition and reconstruction model for our comparisons, we applied principal component analysis (PCA) on a set of $15000$ images that were randomly selected from the $150$ training categories ($100$ each). We made sure that the $1200$ training images were included. Using the first 120 principal components (PCs), all of the image stimuli were transformed into a set of 120-D vectors. These vectors were then treated similarly to BigBiGAN's latent vectors for brain decoding and reconstructions. This method (known as ``eigen-face'' or ``eigen-image'') has previously been applied to fMRI-based face reconstruction~\cite{cowen2014neural, vanrullen2019reconstructing} and natural image reconstruction~\cite{han2018variational}. \subsection{Decoding and Reconstruction}\label{subsec:decoding} Using linear regression, we computed a linear encoder that maps the $120$-dimensional BigBiGAN latent representations (or the $120$-dimensional PCA projections) associated with the training images onto the corresponding brain representations, recorded when the human subjects viewed the same images in the scanner (see Fig.~\ref{fig:method}a). For each subject, this mapping is computed by a general linear regression model (GLM) where the design matrix included the following regressors of interest: fixation (during the fixation point), stimulus (whenever an image was presented), and one-back (when the image was a target for the one-back task). In order to obtain mapping parameters, the $120$-dimensional latent vectors (or PCs) for the training images were added as parametric modulators for the ``stimulus'' regressor. This step takes into account the covariance matrix of the latent dimensions (across images), and produces a linear transform matrix ($W$) which will be used for the inverse transformation in the test phase. In other words, for the training set of $1200$ images, if there are $nv$ voxels in the desired ROI, the GLM finds an optimal transformation matrix $W$ between their $121$-dimensional latent vectors (including an additional constant bias term) and the corresponding $nv$-dimensional brain activation vectors: \begin{equation} Y_{1200\times nv} = X_{1200\times 121} \cdot W_{121\times nv}, \end{equation} where $X$ and $Y$ denote the latent and brain activation vectors, respectively. Please note that all of the GLMs were solved by SPM12 over the entire visual cortex (union of all pre-defined functional ROIs). 
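As an illustration of this training-phase fit, the short sketch below estimates $W$ by ordinary least squares on synthetic arrays of the same sizes; the random data, the plain least-squares solver (standing in for SPM's parametric-modulator GLM), and all variable names are assumptions made for the example only.
\begin{verbatim}
import numpy as np

# Illustrative sizes: 1200 training images, 120 latent dimensions
# plus a constant bias term, and nv voxels in the chosen ROI.
n_train, n_latent, nv = 1200, 120, 4500
rng = np.random.default_rng(0)

Z_train = rng.standard_normal((n_train, n_latent))  # stand-in for BigBiGAN latents (or PCs)
Y_train = rng.standard_normal((n_train, nv))        # stand-in for voxel responses

# X is 1200 x 121 after appending the bias column, as in the equation above.
X_train = np.hstack([Z_train, np.ones((n_train, 1))])

# Least-squares estimate of W (121 x nv) such that Y ~ X @ W.
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)
print(W.shape)  # (121, 4500)
\end{verbatim}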
For the test images, brain representations were derived from another GLM in which (in addition to ``fixation'' and ``one-back'' regressors, as previously) the presentation of each test image was considered as a separate regressor. The previously-computed mapping ($W$) was then inverted (again, taking into account the covariance matrix of the latent dimensions, this time across brain voxels), and used to predict the latent vectors (or PCA projections) from the brain representations (see Fig.~\ref{fig:method}b). This corresponds to the ``brain decoding'' step. Precisely, we retrieved the latent vectors $X_{50\times 121}$ from the brain activation vectors of the $50$ test images $Y_{50\times nv}$ using the previously-computed $W$ and the (pseudo-)inverse of its covariance matrix, $(WW^T)^{-1}$: \begin{equation}\label{eq:test_phase} \begin{split} Y &= X \cdot W \\ YW^T &= X \cdot WW^T \\ X &= YW^T \cdot (WW^T)^{-1}. \end{split} \end{equation} Before solving equation~\ref{eq:test_phase}, the brain activation vectors were zero-meaned by subtracting from each the average activation vector across all test images. Finally, we discarded the bias term from the predicted latent vectors (PCA projections), and fed them into BigBiGAN's generator (PCA's inverse transform) to generate image reconstructions. Since BigBiGAN's generator is sensitive to the distribution of latent variables, we re-scaled the predicted latent variables using the mean and standard deviation of latent variables from the training set, before feeding them to the generator. \begin{figure*} \centering \includegraphics[width=0.65\textwidth]{subject_cmp_hq_rev1.pdf} \caption{fMRI reconstructions by the proposed method across all subjects. The first and second columns show the input image and BigBiGAN's original reconstruction (reconstruction from the original latent vector), respectively. The next five columns illustrate BigBiGAN's fMRI reconstructions (reconstruction from predicted latent vectors) for each of the five subjects. Although fMRI reconstructions are not a perfect match to the input images, there are many attributes that are consistently captured by all subjects. These attributes can be semantic, such as being an animal or the body pose, and/or visually driven, such as roundness or tallness, to mention a few.} \label{fig:recon_consistency} \end{figure*} \subsection{Computational Efficiency} The whole computation pipeline from raw fMRI data to image reconstructions consists of the following steps: \begin{enumerate} \item fMRI pre-processing \item Extracting brain representations for test images (GLM) \item Extracting latent representations for training images (using BigBiGAN's encoder) \item Computing the linear mapping (GLM) \item Predicting latent vectors for test images (using the inverse mapping) \item Reconstructing images (using BigBiGAN's generator) \end{enumerate} Apart from the first two steps, which are common to almost all fMRI image reconstruction methods, the major computational cost of the proposed method is computing the linear mapping. This is not only considerably less expensive than training large complex encoder/decoder networks (we use pre-trained networks instead), but also easily adaptable to the latent space of any other pre-trained network. In other words, as soon as a better natural scene generator emerges, we can substitute the new network for the old one and run the pipeline again (from step 2).
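The brain-decoding and reconstruction steps (steps 5 and 6 above) can be sketched in the same spirit. The snippet below applies the pseudo-inverse of $W$ as in equation~\ref{eq:test_phase}, re-scales the predicted latents to the training statistics (one plausible reading of the re-scaling described above), and hands them to the generator; the BigBiGAN generator is abstracted as a placeholder callable (in practice the public TF-Hub module cited above would be used), and all data are random stand-ins.
\begin{verbatim}
import numpy as np

n_test, n_latent, nv = 50, 120, 4500
rng = np.random.default_rng(1)

# W (121 x nv) is the mapping fitted on the training set (previous sketch);
# a random stand-in is used here so that the example runs on its own.
W = rng.standard_normal((n_latent + 1, nv))

# Test-phase activation vectors, zero-meaned across the 50 test images.
Y_test = rng.standard_normal((n_test, nv))
Y_test -= Y_test.mean(axis=0, keepdims=True)

# Brain decoding: X = Y W^T (W W^T)^{-1}, equivalently Y_test @ np.linalg.pinv(W).
X_pred = Y_test @ W.T @ np.linalg.inv(W @ W.T)

# Drop the bias term and re-scale each latent dimension to the mean/std of the
# training latents (assumed re-scaling; zeros/ones are stand-ins for those stats).
Z_pred = X_pred[:, :n_latent]
train_mean, train_std = np.zeros(n_latent), np.ones(n_latent)
Z_pred = (Z_pred - Z_pred.mean(axis=0)) / (Z_pred.std(axis=0) + 1e-8)
Z_pred = Z_pred * train_std + train_mean

def generate(z):
    # Placeholder for BigBiGAN's generator (the public TF-Hub module would be
    # called here); returns dummy images of the right shape.
    return np.zeros((z.shape[0], 256, 256, 3))

reconstructions = generate(Z_pred)
print(X_pred.shape, reconstructions.shape)  # (50, 121) (50, 256, 256, 3)
\end{verbatim}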
For our experiments, we ran this pipeline on a machine running Ubuntu 18.04 with 128 GB of memory, 40 CPU cores (2.20 GHz), and an NVIDIA TITAN V GPU. The Nipype Python package was also used to parallelize the pre-processing and GLM steps over the five subjects. It took around 16 hours to compute the linear mapping (GLM on the training data) for all subjects, while the encoding and image reconstructions with BigBiGAN took only a few seconds. \subsection{Decoding Accuracy} We used a pairwise strategy to evaluate the accuracy of our brain decoder. Assume that there is a set of $n$ (original) vectors $v_1, v_2, ..., v_n$ and their respective predictions $p_1, p_2, ..., p_n$. Then the pairwise decoding accuracy is computed as the fraction of pairs for which the correct assignment outscores the swapped one: \begin{equation} \frac{\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} K\left(c(v_i,p_i)+c(v_j,p_j),c(v_i,p_j)+c(v_j,p_i)\right)}{n(n-1)/2}, \end{equation} where $c(.,.)$ is the Pearson correlation and \begin{equation} K(a,b) = \begin{cases} 1 & a > b\\ 0 & \textit{otherwise}. \end{cases} \end{equation} \begin{figure*}[h] \centering \includegraphics[width=0.7\textwidth]{method_comp_hq_rev1.pdf} \caption{Comparison of fMRI reconstructions by different methods. The first and second columns show the input image and BigBiGAN's original reconstruction (reconstruction from the original latent vector), respectively. Columns three to seven illustrate fMRI reconstructions for BigBiGAN (our method, reconstruction from the predicted latent vector), Eigen-Image (PCA, baseline model), Ren et al.~\cite{ren2019reconstructing}, Beliy et al.~\cite{beliy2019voxels}, and Shen et al.~\cite{shen2019deep}, respectively. Clearly, reconstructions by the proposed method are the most naturalistic, with the highest resolution, in contrast to the more blurry or semantically ambiguous results of the other methods.} \label{fig:recon_comp} \end{figure*} \subsection{High-Level Similarity Measure}\label{subsec:high-level} Unlike human judgement, classic similarity metrics such as mean squared error (MSE), pix-comp~\cite{ren2019reconstructing}, or the structural similarity index (SSIM) are computed in pixel space and cannot capture high-level perceptual similarities, e.g. in terms of object attributes and identity, or semantic category. One good solution for this problem is to make use of DCN representational spaces, as there are several pieces of evidence supporting their correlation with the human brain~\cite{khaligh2014deep,cichy2016comparison,horikawa2017generic}. In this paper, Inception-V3~\cite{szegedy2016rethinking} was the DCN of our choice, with the outputs of its last inception block (after concatenation of its branches) defining our high-level representational space. In this space, as a measure of high-level perceptual similarity, we computed the average Pearson correlation distance between representations of the original images and their associated fMRI reconstructions. In addition to this high-level measure, we also report pix-comp values~\cite{ren2019reconstructing} as a measure of low-level similarity. \section{Results} \subsection{Image Reconstructions} Using BigBiGAN's generator (or the PCA inverse transform), we could reconstruct an estimate of the test images from the latent vectors obtained by the brain decoder ($W^T$). Since BigBiGAN's generator is not perfect (see first and second columns in Fig.~\ref{fig:recon_consistency} and Fig.~\ref{fig:recon_comp}), we cannot expect the fMRI reconstructions to be identical to the input images (even if our decoding procedure was $100\%$ accurate).
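For reference, both evaluation measures defined above can be written in a few lines. The sketch below computes the pairwise decoding accuracy (normalized by the number of pairs, so that it is a fraction) and the average Pearson correlation distance between two sets of feature vectors; the Inception-V3 activations are assumed to be precomputed arrays, and the toy data are random stand-ins.
\begin{verbatim}
import numpy as np

def pearson(u, v):
    u = u - u.mean()
    v = v - v.mean()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def pairwise_decoding_accuracy(originals, predictions):
    """Fraction of pairs (i, j) whose matched original/prediction
    correlations beat the swapped assignment."""
    n = len(originals)
    hits = pairs = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            matched = pearson(originals[i], predictions[i]) + pearson(originals[j], predictions[j])
            swapped = pearson(originals[i], predictions[j]) + pearson(originals[j], predictions[i])
            hits += matched > swapped
            pairs += 1
    return hits / pairs

def mean_correlation_distance(feats_a, feats_b):
    """Average Pearson correlation distance (1 - r), row by row,
    e.g. between Inception-V3 features of originals and reconstructions."""
    return float(np.mean([1.0 - pearson(a, b) for a, b in zip(feats_a, feats_b)]))

rng = np.random.default_rng(2)
v = rng.standard_normal((50, 120))
p = v + 0.5 * rng.standard_normal((50, 120))  # noisy "predictions"
print(pairwise_decoding_accuracy(v, p), mean_correlation_distance(v, p))
\end{verbatim}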
However, we found that the brain decoder not only captured several high-level attributes of the images, but that there were robust consistencies in image reconstruction across subjects. Fig.~\ref{fig:recon_consistency} shows a series of reconstructions across all of the five subjects. For example, when the input image contained an animal (rows 1, 2, 5, 7, 10) or a human (row 9), it was preserved in the reconstructions with comparable location, body shape and pose across subjects. It is worth mentioning that objects or attributes that occur with a higher frequency in the ImageNet dataset are more likely to be preserved in the original BigBiGAN and fMRI reconstructions. For instance, images in the third and eighth rows are not common in Imagenet, yet their roundness attribute is more frequently observed. Thus, all the reconstructions agreed with a round object, even though they could not exactly reconstruct what the object was. Other examples are the images of the tower (fourth row) for the narrowness and tallness attributes, or the insect (seventh row) whose reconstructions mostly captured the long rope-like object behind it and rendered it with insect-related attributes. \begin{table}[t!] \centering \caption{Quantitative comparison of image reconstructions. For each measure, the best value is highlighted in bold. (For Pix-Comp, higher is better; for Inception-V3, lower is better)} \label{tab:method_cmp} \begin{tabular}{|l|c|c|} \hline \multirow{3}{*}{Method} & \multicolumn{2}{c|}{Similarity Measure}\\ \cline{2-3} ~ & Low-Level & High-Level\\ ~ & (Pix-Comp) $\uparrow$ & (Inception-V3) $\downarrow$\\ \hline \hline Shen et al.~\cite{shen2019deep} & $79.7\%$ & $0.829$ \\ Beliy et al.~\cite{beliy2019voxels} & $85.3\%$ & $0.865$ \\ Ren et al.~\cite{ren2019reconstructing} & \boldmath{$87.8\%$} & $0.847$ \\ Eigen-Image (PCA) & $73.4\%$ & $0.884$ \\ BigBiGAN (ours) & $54.3\%$ & \boldmath{$0.818$} \\ \hline \end{tabular} \end{table} fMRI-based natural image reconstruction has been addressed by a variety of methods recently, however only a few of them have been evaluated on the dataset we used. Here, we compare our reconstructions to three recent works by Shen et al.~\cite{shen2019deep}, Beliy et al.~\cite{beliy2019voxels}, and Ren et al.~\cite{ren2019reconstructing}. Fig.~\ref{fig:recon_comp} shows reconstructions of seven images obtained by each method. Note that we could not compare other images since their reconstructions were not available for all methods. Although our reconstructions are not a perfect match to the input image, they show the clearest resolution, details, and naturalness, and display high-level similarity to the input image. Clearly, PCA (eigen-image) reconstructions rank worst in clarity. The other three methods suffered to varying degrees from ambiguous reconstructions (notably, without any clearly discernible object), although they did much better in estimating low-level attributes of the images, with the best performance obtained by Ren et al. Moreover, unlike the other methods, no image ``halo'' is present in our reconstructions. These halos can result from various factors such as the learning capacity of the encoder/decoder networks, the training approach, and most importantly, pixel-level or low-level similarity optimization, to mention a few. For a quantitative comparison, we quantified low- and high-level similarities between reconstructions and original images. 
The former was computed as the pairwise decoding performance in pixel space (pix-comp) for all of the test images, while the latter was the correlation distance between representations of the last inception block in Inception-V3 (see subsection~\ref{subsec:high-level}) over the common set of seven reconstructions shown in Fig.~\ref{fig:recon_comp}. These results (see Table~\ref{tab:method_cmp}) justify our claim that high-level aspects of the input images were better preserved by our method, while the other methods had an advantage for low-level aspects. \subsection{Decoding Accuracy Across Brain Regions} As mentioned above, the fMRI dataset includes several pre-defined brain regions of interest (ROIs) in visual cortex, including V1 to V4, LOC, FFA, PPA, and HVC as the union of the last three. We also defined the whole visual cortex (VC) as the union of all these ROIs. By limiting voxels to those that were inside each ROI, we evaluated the pairwise decoding accuracy across different regions in visual cortex. Fig.~\ref{fig:decoding_acc} illustrates the average decoding accuracy over all subjects in each brain region. PCA outperformed BigBiGAN in the two earliest visual areas (V1 and V2). However, in higher areas, BigBiGAN gradually improved while PCA worsened. Peak performance for our method was reached in V3, V4, and HVC, where PCA performed poorly. We hypothesize that the superiority of PCA in lower areas is due to the fact that the PCs were computed in pixel space, and thus correspond mostly to low-level features. On the other hand, BigBiGAN's latent vectors can better represent high-level features, since they are obtained via a large hierarchy of processing layers. For both BigBiGAN and PCA, the best accuracy was achieved when we used brain responses from the whole VC. Peak accuracy was $84.1\%$ and $78.1\%$ for BigBiGAN and PCA, respectively. It is worth mentioning that, while the whole VC improved BigBiGAN's performance significantly compared to each individual region, PCA could only do marginally better than when using voxels in V1d alone (its best single-region performance). This again suggests that PCA mostly depends on low-level features, whereas the BigBiGAN brain decoder can benefit from low-level information as well as high-level image attributes. \begin{figure} \centering \includegraphics[width=\columnwidth]{perf_explicit_roi_dou.pdf} \caption{Pairwise decoding accuracy across different brain regions of interest (ROIs). While voxels in high-level areas of the visual cortex are best decoded using BigBiGAN (our method), PCA performs better in low-level regions (V1, V2). Although the best performance is achieved when all the voxels (the whole visual cortex) are included, PCA could only do marginally better than when only V1d voxels were used.} \label{fig:decoding_acc} \end{figure} \section{Discussion} In this paper, we have proposed a new method for realistic reconstruction of natural scenes from fMRI patterns. Thanks to the high-level, low-dimensional latent space of BigBiGAN, we could establish a linear mapping that associates image latent vectors with their corresponding fMRI patterns. This linear mapping was then inverted to transform novel fMRI patterns into BigBiGAN latent vectors. Finally, by feeding the obtained latent vectors into the BigBiGAN generator, the associated images were reconstructed. Many recent approaches have taken advantage of deep generative neural networks to reconstruct natural scenes~\cite{shen2019deep, beliy2019voxels, ren2019reconstructing}.
However, due to the complexity of natural images, a huge amount of computational resources and capacity is required to achieve high-resolution realistic image generation~\cite{brock2018large}. Here, we used the pre-trained BigBiGAN as a state-of-the-art large-scale bi-directional GAN for natural images. We showed that the proposed method is able to generate the most realistic reconstructions, at the highest resolution ($256\times 256$), among the compared methods. Moreover, comparing results across subjects revealed a robust consistency in capturing high-level attributes of different objects through the reconstructions. We acknowledge that our reconstructions are still far from perfect and can often lag behind the others in terms of low-level similarity measures. In contrast, the superiority of the proposed method is with respect to high-level evaluations of perceptual similarity. While we can surpass other methods in this area, we believe that there is still room for methodological improvements. In particular, failures to retrieve the proper semantic category or visual attribute can of course be caused by imperfect brain-decoding of the latent vectors, but also sometimes by inadequate image generation from the BigBiGAN generator (e.g., compare the first 2 columns in Fig.~\ref{fig:recon_consistency}). We believe that one promising avenue of improvement for our work lies in the capability of the image generation model itself. In this regard, whenever new bidirectional GANs (or other bidirectional architectures) improve on the current state-of-the-art, our method can easily be adapted to deploy them and take advantage of their image generation prowess for more accurate brain-based reconstructions. Another current limitation of the proposed method is our use of pre-defined brain regions of interest (or potentially, of the entire visual cortex). It is likely that not all voxels are informative or relevant to the target task; including uninformative or irrelevant voxels can only degrade the outcome. Additionally, there might well be informative voxels in other brain areas, such as pre-frontal cortex, signaling high-level perceptual or semantic aspects of the visual stimulus, that we are currently not considering. For these reasons, extending the analysis to the entire brain, while using a proper voxel selection stage to discard irrelevant voxels, is bound to further improve the results.
1,108,101,562,632
arxiv
\section{Introduction} Magnetic shape-memory alloys have drawn much attention in recent years owing to their unique magnetomechanical properties such as magnetic shape-memory \cite{OHandley98} and the magnetic superelasticity \cite{Krenke07}. These properties are a consequence of a strong coupling between magnetic and structural degrees of freedom. The prototypical and first discovered magnetic shape-memory material is the Heusler Ni$_2$MnGa \cite{Ullakko96}. This alloy undergoes a complex multi-stage transformation process from a high temperature paramagnetic cubic phase to a ferromagnetic martensitic phase. At intermediate temperatures it shows precursor tweed textures which may lock (via a first-order phase transition) into a modulated premartensitic structure due to the freezing of a specific phonon with a given wave vector. This behavior appears to be related to low resistance against distortions of the $\{110\}$ planes along the $\langle 1 \bar 1 0 \rangle$ directions and is evidenced by the features of the low energy TA$_2$ acoustic phonon branch \cite{Zheludev95,Zheludev96,Stuhr97,Manosa01} and the low value of the elastic constant $C'$ \cite{Worgull96,Manosa97,Stipcich04}. While these features are essentially inherent to the high-temperature cubic structure, additional softening has been shown to arise from the coupling between structural and magnetic degrees of freedom \cite{Stuhr97,Manosa01}. Thus, it has been suggested that the magnetostructural coupling is responsible for the phonon condensation yielding the intermediate modulated structure \cite{Planes97}. Nevertheless, the occurrence of a premartensitic phase is not yet a well understood phenomenon, as it only has been observed for a restricted number of magnetic shape memory alloys within limited composition ranges. Actually, the study of the structural (martensitic and premartensitic transformations) and magnetic properties of Ni-Mn-Ga alloys is a current topic of intense research \cite{Barman05,Bhobe06,Ranjan06,Banik06,PerezLandazabal07,Ahuja07,Banik07}. The effect of doping elements on the martensitic and magnetic transformations in Ni-Mn-Ga alloys has received considerable attention \cite{Liu02,Khovailo03b,Kikuchi04,Koho04,Guo05,Glavatskyy06,Ohtsuka06}. However, the lack of a systematic study makes it difficult to compare directly the properties of different compounds. In the present paper, we investigate the dependence of transition temperatures (martensitic, intermediate and Curie) on the electron concentration by analyzing the effect of substituting Ni, Mn and Ga by Fe. In all cases, the reference system is the stoichiometric Ni$_2$MnGa, which has a high temperature L2$_1$ structure ($Fm3m$). This structure can be viewed as four interpenetrating fcc sublattices [in Wickoff notation, (4a)-1 is occupied by Mn-atoms, (4b)-2 by Ga-atoms, and (8c) by Ni-atoms]. The total magnetic moment is $\sim 4.1 \mu_B$ per formula unit and is largely confined to the Mn-sites contributing with $3.5 \mu_B$. \section{Experimental} \begin{table} \begin{ruledtabular}\caption{\label{Tab1} Compositions of the Ni-Mn-Ga-Fe samples determined by EDX. Different specimens are grouped into three distinct families, depending on the element that is substituted by Fe (elements within parenthesis, first column). The estimated error in the compositions is less than $\pm 0.3$ \%. Values of valence electron concentration per atom, $e/a$, are also given.} \begin{tabular}{lccccc} Family & Ni & Mn & Ga & Fe & $e/a$\\ & (at. \%) & (at. \%) & (at. \%) & (at. 
\%) & \\ \hline\\ (Ni,Fe) & 52.6 & 23.1 & 24.3 & 0 \footnote{Data extracted from reference \cite{Hu01}.} & 7.606\\ & 51.3 & 22.8 & 24.5 & 1.4 & 7.573\\ & 50.1 & 23.1 & 24.6 & 2.2 & 7.541\\ & 49.3 & 23.1 & 24.5 & 3.1 & 7.530\\ & 48.1 & 23.0 & 24.5 & 4.4 & 7.507\\ & 47.0 & 23.1 & 24.6 & 5.3 & 7.479\\ (Mn,Fe) & 51.4 & 24.8 & 23.8 & 0 \footnote{Data extracted from reference \cite{Wu03}. Note that this composition slightly deviates (more than the experimental error, $\pm 0.3 \%$) from the fitted compositional line.} & 7.589\\ & 51.5 & 24.2 & 23.5 & 0.8 & 7.613\\ & 51.1 & 24.6 & 23.4 & 0.9 & 7.606\\ & 51.7 & 23.1 & 23.4 & 1.8 & 7.633\\ (Ga,Fe) & 51.3 & 24.0 & 24.7 & 0 \footnote{Data extracted from reference \cite{Tickle99}. Note that this composition slightly deviates (more than the experimental error $\pm 0.3 \%$) from the fitted compositional line.}& 7.551\\ & 51.2 & 24.2 & 23.8 & 0.8 & 7.592\\ & 51.8 & 24.8 & 21.7 & 1.7 & 7.703\\ & 51.3 & 24.5 & 22.2 & 2.0 & 7.671\\ \end{tabular} \end{ruledtabular} \end{table} Polycrystalline Ni-Mn-Ga-Fe ingots were prepared by arc melting pure metals under argon atmosphere in a water cooled Cu crucible. The ingots were melted several times for homogeneity and encapsulated under vacuum in quartz glass. They were then annealed at 1073 K for 72 hours to achieve a high degree of atomic order. Finally, the samples were quenched in ice-water. The compositions of the alloys were determined by energy dispersive x-ray photoluminescence analysis (EDX) with an estimated error less than $\pm 0.3$\% (Table \ref{Tab1}). The alloys are grouped according to their compositions into the families Ni$_{52.5-x}$Mn$_{23}$Ga$_{24.5}$Fe$_{x}$ (1.2 $\leq x\leq$ 5.5) for which Ni is replaced by Fe; Ni$_{51.4}$Mn$_{25.2-x}$Ga$_{23.4}$Fe$_{x}$ (0.8 $\leq x\leq$ 1.8) for which Mn is replaced by Fe; and Ni$_{51.4}$Mn$_{24.5}$Ga$_{24.1-x}$Fe$_{x}$ (0.7 $\leq x\leq$ 2.0) where Fe replaces Ga. The compositions are given in at\%. \begin{figure} \includegraphics[width=8cm]{Fig_01.eps} \caption{\label{fig01} Ni$_{52.5-x}$Mn$_{23}$Ga$_{24.5}$Fe$_{x}$ family represented by the sample with $x=4.4$. (a) Magnetic susceptibility versus temperature. The vertical arrow indicates the premartensitic transition temperature, $T_{I}$. The inset shows high temperature calorimetric curves. The Curie point $T_{C}$ is indicated by vertical arrows. (b) Transformed fraction as a function of temperature obtained by integration of the calorimetric curves (inset in b). The arrows indicate the direction of temperature change.} \end{figure} Specimens cut from the ingots using a low speed diamond saw (typical size $5 \times 1 \times 1$ mm$^{3}$) were used as samples for susceptibility and calorimetric studies. Structural transition temperatures were obtained from AC susceptibility and calorimetric measurements. Magnetic susceptibility measurements were carried out in an AC susceptometer (LakeShore 7120A) in the temperature range 80 K $\leq T\leq$ 320 K. The working parameters were 500 A m$^{-1}$ (6.28 Oe) applied field and 389 Hz frequency. For differential scanning calorimetry (DSC) measurements, one side of the samples was ground with SiC abrasive to ensure optimal thermal contact. Calorimetric measurements were carried out by means of a high sensitivity calorimeter in the temperature range 100 K $\leq T\leq$ 350 K. Typical heating and cooling rates were 0.5 K min$^{-1}$. Magnetic transition temperatures were determined by means of a DSC calorimeter suitable for higher temperatures. 
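As a side note on the last column of Table~\ref{Tab1}, the valence electron concentration per atom follows directly from the EDX compositions. The short sketch below reproduces the tabulated values using the usual electron counts (Ni: 10, Fe: 8, Mn: 7, Ga: 3); these counts are standard values assumed here rather than quoted from the text, but they recover the listed $e/a$.
\begin{verbatim}
# e/a from EDX compositions given in at.%. The electron counts per element
# (Ni 10, Fe 8, Mn 7, Ga 3) are the usual valence values assumed here; they
# reproduce the e/a column of the composition table.
ELECTRONS = {"Ni": 10.0, "Fe": 8.0, "Mn": 7.0, "Ga": 3.0}

def e_per_atom(at_percent):
    total = sum(at_percent.values())
    return sum(ELECTRONS[el] * c for el, c in at_percent.items()) / total

# First (Ni,Fe) alloy of the table: Ni 52.6, Mn 23.1, Ga 24.3, Fe 0 -> 7.606
print(round(e_per_atom({"Ni": 52.6, "Mn": 23.1, "Ga": 24.3, "Fe": 0.0}), 3))
\end{verbatim}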
All transition temperatures are affected by an error of $\pm 1$ K. The errors in entropy change are based on reproducibility and shown as errors bars in the figures. \section{Experimental Results} \begin{figure} \includegraphics[width=8cm]{Fig_02.eps} \caption{\label{fig02} (a) Evolution of the transition temperatures of Ni$_{52.5-x}$Mn$_{23}$Ga$_{24.5}$Fe$_{x}$ as a function of Fe concentration. Open square and triangle symbols stand for data extracted from ref. \cite{Hu01}. (b) Entropy change at the martensitic transformation as a function of Fe concentration. Solid lines are linear fits to the experimental data.} \end{figure} Eleven different alloys were studied in the present work. In this section, we present selected results of susceptibility and calorimetric measurements which are representative of each family. In the following the given Fe content is taken as the value corresponding to the fitted compositional line. From the complete set of data, we determine a phase diagram for each family and the transition entropy change at the martensitic transformation. \subsection{Substitution of Ni by Fe} \begin{figure} \includegraphics[width=8cm]{Fig_03.eps} \caption{\label{fig03} Ni$_{51.4}$Mn$_{25.2-x}$Ga$_{23.4}$Fe$_{x}$ family represented by the sample with $x=1.8$. (a) Magnetic susceptibility versus temperature and (b) transformed fraction as a function of temperature, obtained by integration of the calorimetric curves (shown in the inset). Arrows in panel (b) and inset indicate direction of temperature change.} \end{figure} Figure \ref{fig01} shows the AC susceptibility and calorimetric curves for the sample with $x=4.4$. The inset in figure \ref{fig01}(b) shows the calorimetric curves recorded on cooling and heating. The multiple peaks (noticeable in the thermograms corresponding to the forward transition on cooling) are a consequence of the well-known jerky character of martensitic transformations. On the other hand, the extra noise observed at the lowest temperatures in the thermograms on cooling is an artifact arising from the very low cooling rate in the low temperature regime (notice that $dQ/dT$ is obtained by dividing the calorimetric signal $\dot{Q}$ by $\dot{T}$). Figure \ref{fig01}(b) shows the austenitic transformed fraction, $y$ versus $T$, obtained from the calorimetric data shown in the inset. The austenitic transformed fraction is computed as $y=1-\Delta S(T)/\Delta S$ for the forward transition on cooling, and $y=\Delta S(T)/\Delta S$ for the reverse transition on heating, with $\Delta S(T)=\int_{T_{i}}^{T} (dQ/dT)/T \, dT$ ($T<T_{i}$ on cooling and $T>T_{i}$ on heating) and $\Delta S$, the entropy change at the martensitic transformation. This plot is illustrative for the typical results obtained for the Ni$_{52.5-x}$Mn$_{23}$Ga$_{24.5}$Fe$_{x}$ family. Both susceptibility and calorimetric measurements reveal the presence of a martensitic transformation. The corresponding transition temperatures are: martensite start temperature $M_{s}=133$ K, martensite finish temperature $M_{f}=119$ K, austenite start temperature $A_{s}=132$ K and austenite finish temperature $A_{f}=146$ K. The Curie point was determined from complementary DSC measurements as $T_{C}=400$ K [shown in the inset of Fig. \ref{fig01}(a)]. Moreover, an additional feature is observed in the susceptibility curve at temperatures above the martensitic transition which is associated with the formation of the intermediate or premartensitic phase \cite{Manosa97}. The transition temperature is $T_{I}= 186$ K. 
No significant thermal hysteresis and no appreciable calorimetric features are detected at the premartensitic transition. This behaviour agrees with that observed in the related system Ni-Mn-Ga, where thermal anomalies are barely detected with differential scanning calorimetric techniques \cite{Kokorin96}. By contrast, AC susceptibility measurements are well suited to the observation of the intermediate phase transition \cite{Manosa97}. Figure \ref{fig02}(a) summarizes the results for the Ni$_{52.5-x}$Mn$_{23}$Ga$_{24.5}$Fe$_{x}$ family. To complete the picture, we have also included data for an $x=0$ sample from reference \cite{Hu01}. Transition temperatures are plotted as a function of the Fe concentration. All transition temperatures associated with the martensitic transformation ($M_{s}$, $M_{f}$, $A_{s}$ and $A_{f}$) follow the same $x$ dependence. Thus, for the sake of clarity, only $M_{s}$ temperatures are included. As can be seen from this figure, the martensitic transformation temperature decreases as the amount of Fe increases. In ternary Ni-Mn-$X$ ($X$: Ga, Al, Sn, In and Sb) systems it is well established that martensitic transformation temperatures decrease as the valence electron concentration $e/a$ decreases \cite{Chernenk99,Acet02,Krenke05,Krenke06}. When replacing Ni by Fe, $e/a$ decreases and a drop in $M_s$ is expected. This behavior is indeed seen in Fig. \ref{fig02}(a). \begin{figure} \includegraphics[width=8cm]{Fig_04.eps} \caption{\label{fig04} Transition temperatures for Ni$_{51.4}$Mn$_{25.2-x}$Ga$_{23.4}$Fe$_{x}$ as a function of Fe concentration. The open square symbol stands for data extracted from ref. \cite{Wu03}; for this sample $T_{C}$ was not reported. The inset shows the entropy change at the martensitic transformation as a function of Fe concentration. Solid lines are fits to the experimental data.} \end{figure} Premartensitic transformation temperatures also decrease as the Fe concentration increases, but at a lower rate than $M_s$. In addition, $T_C$ increases with increasing $x$. Figure \ref{fig02}(b) shows the entropy change at the martensitic transformation as a function of Fe concentration. The concentration dependence of $\Delta S$ is similar to the behaviour of $M_{s}$, i.e., the entropy change decreases as the amount of Fe increases. Such a dependence reflects the stabilization of the cubic phase. \subsection{Substitution of Mn by Fe} Figure \ref{fig03} illustrates typical results obtained when replacing Mn by Fe (Ni$_{51.4}$Mn$_{25.2-x}$Ga$_{23.4}$Fe$_{x}$ family). For the sample with $x=1.8$ ($T_{C}=374$ K) a martensitic transition is observed on cooling at $M_{s}=275$ K and $M_{f}=267$ K. On heating, the reverse transformation takes place at $A_{s}=274$ K and $A_{f}=281$ K. No signatures of a premartensitic transformation are observed. \begin{figure} \includegraphics[width=8cm]{Fig_05.eps} \caption{\label{fig05} Ni$_{51.4}$Mn$_{24.5}$Ga$_{24.1-x}$Fe$_{x}$ family represented by the sample with $x=0.7$. (a) Magnetic susceptibility versus temperature and (b) transformed fraction as a function of temperature, obtained by integration of the calorimetric curves (shown in the inset). Arrows in panel (b) and inset indicate direction of temperature change.} \end{figure} The variation of transition temperatures with Fe concentration for this family is collected in Fig. \ref{fig04}. No significant changes in transition temperatures are observed over the compositional range studied.
This is because $e/a$ varies little when Mn is replaced by Fe in small amounts. Consistently, Fe addition does not substantially modify the values of the entropy change at the martensitic transition, as can be seen in the inset of figure \ref{fig04}. \subsection{Substitution of Ga by Fe} \begin{figure} \includegraphics[width=8cm]{Fig_06.eps} \caption{\label{fig06} Transition temperatures for Ni$_{51.4}$Mn$_{24.5}$Ga$_{24.1-x}$Fe$_{x}$ as a function of Fe concentration. Open square and triangle symbols stand for data extracted from ref. \cite{Tickle99}. The inset shows the entropy change at the martensitic transformation as a function of Fe concentration. Solid lines are linear fits to the experimental data.} \end{figure} Figure \ref{fig05} illustrates typical results obtained for the Ni$_{51.4}$Mn$_{24.5}$Ga$_{24.1-x}$Fe$_{x}$ family. Data for the sample with $x=0.7$ ($T_{C}=363$ K) are shown. The presence of a martensitic transformation near room temperature is evidenced by both susceptibility and calorimetric measurements. The corresponding transition temperatures are $M_{s}=290$ K, $M_{f}=281$ K, $A_{s}=287$ K and $A_{f}=297$ K. Again, no signature of the premartensitic transition is observed. The phase diagram is shown in figure \ref{fig06}, where it is seen that $M_s$ increases with increasing Fe content. This is consistent with the rapid increase of $e/a$ when Fe is substituted for Ga. $T_C$ is essentially unaffected. The entropy change at the martensitic transition as a function of Fe concentration is collected in the inset of figure \ref{fig06}. As can be seen from this figure, $\Delta S$ parallels the behaviour of the martensitic transformation temperatures and increases as the amount of Fe increases, pointing to the stabilization of the low-temperature phase due to Fe substitution. \begin{figure} \includegraphics[width=8cm]{Fig_07.eps} \caption{\label{fig07} (Color online) (a) Phase diagram of the Ni-Mn-Ga-Fe system as a function of electron concentration per atom $e/a$. Filled symbols stand for the Ni$_{52.5-x}$Mn$_{23}$Ga$_{24.5}$Fe$_{x}$ family; half-filled symbols stand for the Ni$_{51.4}$Mn$_{25.2-x}$Ga$_{23.4}$Fe$_{x}$ family; open symbols stand for the Ni$_{51.4}$Mn$_{24.5}$Ga$_{24.1-x}$Fe$_{x}$ family; crossed symbols stand for data extracted from reference \cite{Hu01}. Red dashed lines depict the (fitted) transition lines of the related Ni-Mn-Ga ternary system. (b) Phase diagram of the Ni-Mn-Ga system as a function of electron per atom concentration $e/a$ (data compiled from reference \cite{Marcos04}). Solid lines are fits to the experimental data.} \end{figure} \section{Discussion} The complete set of results for the different transition temperatures is collected in Fig. \ref{fig07}. Here, the magnetic and structural transition temperatures of the quaternary Ni-Mn-Ga-Fe system are plotted as a function of $e/a$. As can be seen from this plot, data from different families scale with the electron concentration parameter. It was established for Ni-Mn-Ga that the phase stability is controlled by $e/a$ \cite{Chernenk99,Jin02}. In the case of the quaternary system, the reasonable scaling of the transition temperatures indicates that the phase stability is mostly governed by the electron concentration as well. However, the scatter in the data points is higher than that observed in the phase diagram as a function of composition (see Figs. \ref{fig02}, \ref{fig04} and \ref{fig06}), thus suggesting that additional parameters other than electron concentration could affect phase stability.
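With the valence electron counts used for Table \ref{Tab1} (Ni: 10, Mn: 7, Fe: 8, Ga: 3), the nominal rate of change of $e/a$ with Fe content can be read off directly for each family: substituting Fe for Ni changes $e/a$ by $(8-10)/100=-0.02$ per at.\% Fe, Fe for Mn by $(8-7)/100=+0.01$ per at.\%, and Fe for Ga by $(8-3)/100=+0.05$ per at.\%. These simple estimates are consistent with the strong decrease of $M_s$ in the (Ni,Fe) family, the weak compositional dependence in the (Mn,Fe) family, and the rapid increase of $e/a$ and $M_s$ in the (Ga,Fe) family.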
For comparison, figure \ref{fig07}(b) shows the phase diagram for the Ni-Mn-Ga system (data extracted from reference \cite{Marcos04} and references therein). The behavior is similar for both alloy systems. $M_{s}$ and $T_{I}$ increase as $e/a$ increases, whereas $T_{C}$ decreases. At constant $e/a$, we find that the addition of Fe to Ni-Mn-Ga shifts $M_s$ and $T_I$ to lower values, whereas $T_C$ shifts to higher temperatures [as illustrated by dashed lines in Fig. \ref{fig07}(a)]. The relationship between $e/a$ and lattice instability in cubic Heusler alloys has recently been investigated by first-principles calculations \cite{Zayak06}. It has been reported that $e/a$ plays a central role in the occurrence of anomalies in the phonon dispersion curves along [110] directions. These control the stability of the cubic structure. In particular, it has been found that adding and removing electrons has the same effect as replacing the $sp$ ($X$) element. In the present study, we have experimentally investigated the effect of substituting different elements. The general trends in the phase stability are given by the change in $e/a$. This is consistent with a change in the position of the Fermi energy as in a rigid band model. Nevertheless, the larger scatter of the data when plotted as a function of $e/a$, compared with that in the plots as a function of composition, suggests that the effect of alloying is not just a change in the Fermi level, but that the addition of Fe could also modify to some extent the orbital hybridization and bonding. Actually, changes in hybridization were reported for Ni$_2$MnGa with several substitutional elements \cite{MacLaren02}. This could be related to volume effects which have been reported for In-doped Ni-Mn-Ga alloys \cite{Khan04}. \begin{figure} \includegraphics[width=8cm]{Fig_08.eps} \caption{\label{fig08} (Color online) (a) Entropy change at the martensitic transformation of the Ni-Mn-Ga-Fe system as a function of electron concentration per atom $e/a$. Filled symbols stand for the Ni$_{52.5-x}$Mn$_{23}$Ga$_{24.5}$Fe$_{x}$ family; half-filled symbols stand for the Ni$_{51.4}$Mn$_{25.2-x}$Ga$_{23.4}$Fe$_{x}$ family; open symbols stand for the Ni$_{51.4}$Mn$_{24.5}$Ga$_{24.1-x}$Fe$_{x}$ family. The red dashed line depicts the (fitted) entropy change of the related Ni-Mn-Ga ternary system. (b) Entropy change at the martensitic transformation of the Ni-Mn-Ga system as a function of electron per atom concentration $e/a$ [data compiled from reference \cite{Chernenko95} ($\square$) and \cite{Khovailo03} ($\blacksquare$)]. Solid lines are linear fits to the experimental data.} \end{figure} As can be seen in Fig. \ref{fig07}, the premartensitic phase exists when the martensitic and magnetic transitions are well separated. In the Ni-Mn-Ga system, it has been shown that magnetoelastic coupling between structural and magnetic degrees of freedom gives rise to the premartensitic transition \cite{Planes97,Castan99}. The strength of such an interaction depends on the magnetization. Therefore, in order for the premartensitic phase to develop, the sample must remain in the cubic phase at temperatures well below the Curie point. This requires that the martensitic transition temperature be well below $T_{C}$. Moreover, the temperature that corresponds to the point where martensitic and premartensitic transformation temperatures meet is slightly displaced to higher $e/a$ values in the case of the Ni-Mn-Ga-Fe system with respect to the ternary system.
Such a shift is in agreement with the decrease of $M_s$ and the increase of $T_C$ due to Fe addition. As $M_s$ shifts to lower temperatures and $T_C$ to higher temperatures, the separation between both temperatures increases compared to the ternary system for equal $e/a$ values. Thus, the crossing point between $M_{s}$ and $T_{I}$ is displaced to higher electron concentration values. The features in the [110] TA$_2$ phonon branch giving rise to the intermediate phase are associated with a nesting in the Fermi surface. It has been found that such a Fermi-surface nesting is strongly dependent on the magnetization of the cubic phase \cite{Lee02}. This scenario is consistent with the experimental finding that the premartensitic phase only develops for ferromagnetically ordered samples for which the martensitic instability is well below $T_C$. Finally, figure \ref{fig08} shows the entropy change at the martensitic transformation as a function of electron concentration per atom $e/a$ for (a) Ni-Mn-Ga-Fe and (b) Ni-Mn-Ga systems. As can be seen from panel (a), in the quaternary system $\Delta S$ increases as the electron per atom concentration increases, similar to the behaviour exhibited by the martensitic transformation temperatures and to the behaviour of the ternary system. Moreover, the entropy change values in the Fe-substituted alloys are lower than those in the ternary Ni-Mn-Ga system, as illustrated by the red dashed line. This drop could be accounted for by the strengthening of magnetic exchange interactions when adding Fe, as reflected by the increase of $T_{C}$ in the quaternary system compared to the ternary one. When magnetic order occurs in the parent phase, the Gibbs free energy decreases compared to the nonmagnetic state. Thus, the difference in the free energy between parent and martensite phases is smaller and the parent phase becomes more stable. Actually, the magnetic contribution is also responsible for the strong concentration dependence of the entropy change, as was pointed out by Khovailo \textit{et al.} in the ternary Ni-Mn-Ga system \cite{Khovailo03}. \section{Conclusion} We have studied the effect of Fe addition on the structural and magnetic transformation properties in the magnetic shape memory alloy Ni-Mn-Ga for compositions close to stoichiometry. We find that $M_s$ and $T_I$ shift to lower values when Fe is substituted into Ni-Mn-Ga, while $T_C$ shifts to higher values. Despite the similarities between the ternary Ni-Mn-Ga and quaternary Ni-Mn-Ga-Fe systems, which indicate that phase stability is qualitatively governed by $e/a$, the shift in $M_s$ evidences that parameters other than $e/a$ affect phase stability (essentially volume effects associated with atom sizes, as suggested in \cite{Khan04}). Hence, a simple choice of $e/a$ can only be considered to be a guideline for examining systematic changes within a single-alloy system. Actually, the lack of universal character of the $e/a$ parameterization has been previously pointed out for the Heusler alloys Ni-Mn-$X$ \cite{Krenke06,KrenkeJMMM07} and has been recently confirmed by the manipulation of structural and magnetic transition temperatures in isoelectronic Ni-Mn-Ga and Ni-Mn-Ga-In compounds \cite{Khan04,Aksoy07}. \begin{acknowledgments} This work received financial support from the CICyT (Spain), Project No. MAT2007--61200, DURSI (Catalonia), Project No. 2005SGR00969, from the Deutsche Forschungsgemeinschaft (GK277), from Marie-Curie RTN Multimat (Contract No.
MRTN-CT-2004-505226), and from CONACYT (44786-SEP-CONACYT 2003). XM acknowledges support from DGICyT (Spain). We thank Peter Hinkel for technical support. \end{acknowledgments}
\tableofcontents \section{Introduction} \label{intro} The interest in exotic theories beyond the Standard Model (SM) has been increasing over the past few decades. Among the motivations is the exploration of phenomena not explainable by either General Relativity or Quantum Field Theory, as these would be a direct probe into the ultimate quantum theory of gravity. One of the leading theoretical candidates for such phenomena is the violation of Lorentz and CPT symmetries. Indeed, most of the current approaches to quantum gravity naturally allow the violation of these symmetries, among which string theories \cite{String_Theory}, loop quantum gravity \cite{Loop QG,Gambini}, noncommutative field theories \cite{Non-commutative G}, and many others \cite{LV Theories} can be named. The breaking of Lorentz symmetry may be either explicit or spontaneous, although it was shown that the usual Riemann geometry cannot be maintained in the gravity sector when the breaking is explicit \cite{Gravity}\footnote{Explicit Lorentz symmetry breaking might suggest alternative geometries like Riemann-Finsler \cite{Kostelecky:2011qz}.}. As for spontaneous breaking, various approaches to inserting Lorentz violation into the model have been pursued in the literature, among which modifications of transformation laws \cite{Kinematical LV} and field theoretical approaches \cite{Field T. LV,Gambini} can be named, although such different approaches can be shown to be contained in a systematic field theoretical framework \cite{Fermion,Kostelecky2009}. The systematic framework for the exploration of Lorentz and CPT violations was constructed over 15 years ago. This framework, the so-called \emph{Standard Model Extension} (SME) \cite{SME,Colladay.Kostelecky,Gravity}, is an action-level \emph{effective field theory} (EFT) approach in which Lorentz violation is inserted into the model via background fields named \emph{Lorentz Violating Terms} (LVT); it has been analyzed and investigated on both theoretical and experimental fronts, see \cite{Kostelecky:2008ts} and references therein. Basically, it is assumed that the effective low energy description of the high energy fundamental theory can be expanded in energy over a mass scale, which is possibly related to the Planck scale. In this expansion, the lowest order term becomes the Standard Model. With the EFT approach, the next terms in this expansion can be examined with the field theoretical machinery built within the SM. The task of examining the next-to-leading term, called the \emph{minimal extension of the Standard Model} ({\it m}SME) \cite{Colladay.Kostelecky}, has been undertaken in all sectors. While the {\it m}SME comprises all renormalizable operators, as gravity itself is nonrenormalizable, it is reasonable for the next terms in the expansion to consist of nonrenormalizable operators of arbitrarily high mass dimensions, called the \emph{non-minimal Standard Model Extension} ({\it nm}SME). The photon, neutrino and fermion sectors of the {\it nm}SME were introduced in 2009, 2011 and 2013, respectively \cite{Kostelecky2009,Neutrino,Fermion}. There are data tables \cite{Kostelecky:2008ts} listing all the available bounds on the sectors of the {\it m}SME and {\it nm}SME. Updates of the tables are given in \cite{Kostelecky:2008tsupdates}. In the past, nonrenormalizable theories were not considered very popular. This attitude changed as the EFT approach to nonrenormalizable theories proved to be quite useful \cite{EFT}.
The reasoning behind the EFT approach lies in the assumption of small deviations, which actually determines a validity range and hence justifies the name ``effective". In the case of the {\it nm}SME, the current bounds on the LVT directly indicate the necessity of quite small deviations in the low energy regimes of interest, thus suggesting the use of nonrenormalizable LVT within EFT. The available LVT in each sector of the SME split into two parts: those which violate CPT invariance, called \emph{CPT-odd}; and those which do not, called \emph{CPT-even}. Among these sectors, the CPT-even and CPT-odd photons have been studied in the {\it m}SME \cite{Kostelecky:2001mb}. The photon sector of the {\it nm}SME has been discussed in \refs{\cite{Kostelecky2009}} and specifically the CPT-even part of it has been studied in \refs{\cite{Shreck2013,Schreck:2013gma}}. A similar discussion for the CPT-odd photon part of the {\it nm}SME is missing in the literature. Hence, the aim of this study is to fill this gap by carrying out the analysis of the CPT-odd photon from a quantum field theoretical point of view. In the photon sector of the {\it nm}SME, the CPT-odd and CPT-even contributions are denoted by the coefficients $\hat{k}_{AF}$ and $\hat{k}_{F}$, respectively. The hat symbol ``$\hat{\;\;}$" is used to indicate that all higher order terms are contained. As it provides a natural classification with direct relevance to observations and experiments, the decomposition of these coefficients into spin-weighted spherical harmonics, called spherical decomposition, was introduced in \refs{\cite{Kostelecky2009}}. Then, the LVT $\hat{k}_{F}$ and $\hat{k}_{AF}$ decompose as \begin{equation} \begin{aligned} \hat{k}_{AF}&\longrightarrow\left\{(k_{AF}^{(d)})_{njm}^{(0B)}, (k_{AF}^{(d)})_{njm}^{(1B)}, (\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})_{njm}^{(1E)}\right\}\,,\\ \hat{k}_F&\longrightarrow\left\{(c_F^{(d)})_{njm}^{(0E)}, (k_F^{(d)})_{njm}^{(0E)}, (\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_F^{(d)})_{njm}^{(1E)}, (\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_F^{(d)})_{njm}^{(2E)}, (k_F^{(d)})_{njm}^{(1B)}, (\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_F^{(d)})_{njm}^{(2B)}\right\}\,, \end{aligned}\label{Eq:decompose of k} \end{equation} where $c$ denotes nonbirefringence, and the negation diacritic denotes vacuum orthogonality (no leading order effect on vacuum propagation). The symbols $n, j, m$ denote the frequency dependence, the total angular momentum, and the $z$-component of the angular momentum, respectively; whereas $E$ and $B$ refer to the parity of the operator, and the preceding number gives the spin weight of the operator. These coefficients can be regrouped according to their effects on the leading order vacuum propagation. That splits the overall coefficient space into two distinct parts, as listed in \reft{\ref{Table-sdc}}.
\begin{table* \caption{\label{Table-sdc}Spherically decomposed coefficients according to their vacuum properties.} \begin{ruledtabular} \begin{tabular}{l@{\hskip 0.3in}c@{\hskip 0.3in}c} & $\bm{\hat{k}_{AF}}$ & $\bm{\hat{k}_{F}}$\\\hline Vacuum Models &$k_{(V)jm}^{(d)}$&$c_{(I)jm}^{(d)}$, $k_{(E)jm}^{(d)}$, $k_{(B)jm}^{(d)}$\\\hline Vacuum-Orthogonal\\ \hskip 1cm Models &$(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})_{njm}^{(0B)}$, $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})_{njm}^{(1B)}$, $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})_{njm}^{(1E)}$&$(\voc_{F}^{(d)})_{njm}^{(0E)}$, $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{F}^{(d)})_{njm}^{(0E)}$, $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{F}^{(d)})_{njm}^{(1E)}$,\\&& $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{F}^{(d)})_{njm}^{(2E)}$, $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{F}^{(d)})_{njm}^{(1B)}$, $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{F}^{(d)})_{njm}^{(2B)}$ \end{tabular} \end{ruledtabular} \end{table*} The outline of the paper is as follows. In \refp{\ref{sec:COPS}}, the modified dispersion relations, solutions for the photon field $A^\mu$ and the modified propagator are investigated. It is shown that only a particular subset of the coefficient space can produce physical results, and is also shown that the modified propagator can be brought to the diagonal form for this particular subspace. In \refp{\ref{sec:VOMFCOP}}, we further restrict the coefficient space to vacuum-orthogonal LVT only, and prove that vacuum orthogonal model remains vacuum orthogonal at all orders. We demonstrate that the dispersion relations for this models split into two sets, non-conventional and conventional; and non-conventional dispersion relations are shown to be spurious, whereas conventional dispersion relations are shown to accept conventional polarization vectors. In \refp{\ref{sec:SMA}}, we analyze some special cases and show that there exists a nontrivial coefficient subspace satisfying above results. \section{The CPT-odd Photon Sector} \label{sec:COPS} The general form of {\it nm}SME Lagrangian for the photon sector can be read off from \refe{8} of \refs{\cite{Kostelecky2009}}. For the CPT-odd model ($\hat{k}_{F}=0$), the Lagrangian becomes \begin{equation} \mathcal{L}_{\text{CPT-odd}}={}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}\epsilon^{\kappa\lambda\mu\nu}A_\lambda(\hat{k}_{AF})_\kappa F_{\mu\nu}\, . \label{general cpt-odd lagrangian} \end{equation} The corresponding CPT-odd action can be written as \begin{equation} \mathcal{S}_{\text{CPT-odd}}={}-\frac{1}{4}\int d^4x\left(F_{\mu\nu}F^{\mu\nu}-2\epsilon^{\kappa\lambda\mu\nu}A_\lambda(\hat{k}_{AF})_\kappa F_{\mu\nu}+2(\partial_\mu A^\mu)^2\right)\label{general action} \end{equation} where $\zeta=1$ Feynman 't Hooft gauge fixing term is used. After the surface terms are eliminated, the action is brought to the form $\mathcal{S}={}\frac{1}{2}\int d^4x A_\mu(\hat{G}^{-1})^{\mu\nu}A_\nu$. Then, the inverse propagator takes the form \begin{equation} (\hat{G}^{-1})^{\mu\nu}={}-\eta^{\mu\nu}(p_\sigma p^\sigma)+2i\epsilon^{\mu\kappa\lambda\nu}(\hat{k}_{AF})_\kappa p_\lambda \label{Momentum space inverse propagator}\, . 
\end{equation} From the action ($\ref{general action}$) with the adoption of the plane wave ansatz $A_\mu(x)=A_\mu(p)e^{-ix.p}$, equations of motion take the form $M^{\mu\nu}A_\nu=0$ for \begin{equation} M^{\mu\nu}={}\eta^{\mu\nu}p_\alpha p^\alpha-p^\mu p^\nu-2i\epsilon^{\mu\nu\alpha\beta}(\hat{k}_{AF})_\alpha p_\beta \label{general form of M} \end{equation} from \refe{23} of \refs{\cite{Kostelecky2009}}. \subsection{The Dispersion Relation} \label{s:DR} The dispersion relation for the Lagrangian ($\ref{general cpt-odd lagrangian}$) can be obtained via the usual way, first handling the gauge fixing and then calculating the determinant of reduced linear equations. Alternatively, rank-nullity can be used to find the covariant form of dispersion relations without sacrificing the gauge invariance, as is done in \refs{\cite{Kostelecky2009}}. We use this alternative method, and obtain\footnote{See the details in \refa{\ref{App:HB}}.} from the general result \refe{1} of \refs{\cite{Kostelecky2009}} \begin{equation} 0={}\left(p_\mu p^\mu\right)^2+4p_\alpha p^\alpha(\hat{k}_{AF})_\mu(\hat{k}_{AF})^\mu-4\left( p_\mu(\hat{k}_{AF})^\mu\right)^2 \label{General CPT-odd Dispersion Relation}\,. \end{equation} Special models such as vacuum, general vacuum-orthogonal and camouflage models can be most transparently applied if the spherical decomposition method is employed. To do that, we first set the helicity basis as the space part of the coordinate system. In this basis, \refe{\ref{General CPT-odd Dispersion Relation}} becomes \begin{equation} 0={}\left(p_\mu p^\mu\right)^2-4\left(p(\hat{k}_{AF})_0-\omega(\hat{k}_{AF})_r\right)^2-8p_\mu p^\mu(\hat{k}_{AF})_+(\hat{k}_{AF})_-\label{General CPT-odd Spherical Dispersion Relation}\,, \end{equation} where $\omega$ is the usual frequency and $p$ denotes the magnitude of the space part of $p^\mu$. Here, $(\hat{k}_{AF})_i$ can be expanded over spin-weighted spherical harmonics. The prescription for such an expansion is given in \refe{47-51} in \refs{\cite{Kostelecky2009}}, for which \refe{\ref{General CPT-odd Spherical Dispersion Relation}} becomes \begin{widetext} \begin{equation} \begin{aligned} 0 ={}&\left(p_\mu p^\mu\right)^2-4\left(\sum\limits_{dnjm}\omega^{d-3-n}p^n\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})\left(\frac{dp}{n+3}(k_{AF}^{(d)})^{(0B)}_{njm}+\frac{\omega}{n+2}(k_{AF}^{(d)})^{(1B)}_{njm}\right)\right)^2-8p_\mu p^\mu\\{}&\times\sum\limits_{d_1d_2n_1n_2j_1j_2m_1m_2}\omega^{d_1+d_2-6-n_1-n_2}p^{n_1+n_2}\prescript{}{+1}{Y}_{j_1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{j_2m_2}(\mathbf{\hat{p}})\frac{1}{\sqrt{4j_1j_2(j_1+1)(j_2+1)}}\\ {}&\times\left((k_{AF}^{(d_1)})^{(1B)}_{n_1j_1m_1}+i(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_1)})^{(1E)}_{n_1j_1m_1}\right)\left(-(k_{AF}^{(d_2)})^{(1B)}_{n_2j_2m_2}+i(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_2)})^{(1E)}_{n_2j_2m_2}\right)\,. \label{General CPT-odd Dispersion Relation2} \end{aligned} \end{equation} \end{widetext} This is the most general dispersion relation for CPT-odd photon sector of {\it nm}SME. As it stands, it is quite complicated; however, we will show in the next section that the last term will drop so as to have a corresponding physical polarization vector. \subsection{Polarization Vectors} \label{s:PV} In order to determine the photon field $A_\mu$, one needs to solve the equations of motion $M^{\mu\nu}A_\nu=0$. The necessary condition for non-trivial solution is $det(M)=0$, through which one finds the dispersion relations. 
The standard method is to apply these conditions on $M$ and find the corresponding polarization vectors. As extracting the generic explicit forms of the dispersion relation out of the implicit formula (\ref{General CPT-odd Dispersion Relation2}) is quite formidable, we will pursue an alternative way here. We will calculate the rank of $M$ using a generic frequency $\omega$, and obtain the constraints from the requirement $M$ having at most rank 2.\footnote{A rank-3 $M$ gives gauge solution only, and a rank-4 $M$ is trivial.} Then, these constraints will be applied to the dispersion relations, which we already worked out, in order to determine whether there exists a nontrivial coefficient subspace with a physical polarization vector obeying the general dispersion relation \refe{\ref{General CPT-odd Dispersion Relation2}}. In helicity basis, \refe{\ref{general form of M}} reduces to the form \begin{equation*} \begin{aligned} M_\mu^{\;\nu}={}&\delta_\mu^{\;\nu}(\omega^2-p^2)-\eta_{ \mu 0}\delta^\nu_0\omega^2-\left(\eta_{\mu 0}\delta^\nu_r+\eta_{\mu r}\delta^\nu_0\right)p\omega-\eta_{\mu r}\delta^\nu_rp^2+2\left(\eta_{\mu 0}\delta^\nu_+-\eta_{\mu +}\delta^\nu_0\right)(\hat{k}_{AF})_-p\\{}&+2\left(\eta_{\mu -}\delta^\nu_0-\eta_{\mu 0}\delta^\nu_-\right)(\hat{k}_{AF})_+p+2\left(\eta_{\mu +}\delta^\nu_--\eta_{\mu -}\delta^\nu_+\right)\left((\hat{k}_{AF})_0p-(\hat{k}_{AF})_r \omega\right)\\{}&+2\left(\eta_{\mu +}\delta^\nu_r-\eta_{\mu r}\delta^\nu_+\right)(\hat{k}_{AF})_-\omega+2\left(\eta_{\mu r}\delta^\nu_--\eta_{\mu -}\delta^\nu_r\right)(\hat{k}_{AF})_+\omega \,.\end{aligned} \end{equation*} Then, by the matrix representation convention $M_\rho^{\;\;\nu}$ in $(0,+,r,-)$ basis, \begin{widetext} \begin{equation} M\doteq \begin{pmatrix} -p^2 & 2(\hat{k}_{AF})_-p & -p\omega & -2(\hat{k}_{AF})_+p\\ -2(\hat{k}_{AF})_+p & \omega^2-p^2+2\left((\hat{k}_{AF})_0p-(\hat{k}_{AF})_r\omega\right) & 2(\hat{k}_{AF})_+\omega & 0\\ p\omega & 2(\hat{k}_{AF})_-\omega & \omega^2 & -2(\hat{k}_{AF})_+\omega \\ 2(\hat{k}_{AF})_-p & 0 & -2(\hat{k}_{AF})_-\omega & \omega^2-p^2-2\left((\hat{k}_{AF})_0p-(\hat{k}_{AF})_r\omega\right) \end{pmatrix}\label{conventional rooted M in helicity basis}\,. \end{equation} \end{widetext} This alternative method can be tested in no LV limit. For this case, \refe{\ref{conventional rooted M in helicity basis}} becomes \begin{equation} M\doteq \begin{pmatrix} -p^2 & 0 & -p\omega & 0\\ 0 & \omega^2-p^2 & 0 & 0\\ p\omega & 0 & \omega^2 & 0 \\ 0 & 0 & 0 & \omega^2-p^2 \end{pmatrix}\,. \label{No LV M without omega=p} \end{equation} This matrix is of rank 3 for $\omega\ne p$, which means the requirement of at most rank 2 $M$ enforces the condition $\omega=p$. Therefore, $M$ reduces to the form \begin{equation} M\doteq \begin{pmatrix} -p^2 & 0 & -p^2 & 0\\ 0 & 0 & 0 & 0\\ p^2 & 0 & p^2 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}\,, \label{No LV M} \end{equation} for which $MA=0$ yields the following solutions \begin{equation} A_\mu\in\left\{\begin{pmatrix}1\\0\\-1\\0 \end{pmatrix}, \begin{pmatrix}0\\1\\0\\0 \end{pmatrix}, \begin{pmatrix}0\\0\\0\\1 \end{pmatrix}\right\}\,.\label{conventional polarization vectors} \end{equation} This is the expected result in Lorenz gauge. When we further apply Coulomb gauge, first component will be set to 0, which in turn kills the first solution with the radial component. Hence, there remain two transverse solutions with the same dispersion relation $\omega=p$. The same procedure can be utilized for the general case with the Lorentz violation. 
From \refe{\ref{conventional rooted M in helicity basis}}, the equations of motion become \begin{widetext} \begin{equation} \begin{pmatrix} -p^2 & 2(\hat{k}_{AF})_-p & -p\omega & -2(\hat{k}_{AF})_+p\\ -2(\hat{k}_{AF})_+p & \omega^2-p^2+2p(\hat{k}_{AF})_s & 2(\hat{k}_{AF})_+\omega & 0\\ p\omega & 2(\hat{k}_{AF})_-\omega & \omega^2 & -2(\hat{k}_{AF})_+\omega \\ 2(\hat{k}_{AF})_-p & 0 & -2(\hat{k}_{AF})_-\omega & \omega^2-p^2-2p(\hat{k}_{AF})_s \end{pmatrix}\begin{pmatrix} A_0 \\ -A_+ \\ -A_r \\ -A_- \end{pmatrix}=\begin{pmatrix} 0\\0\\0\\0 \end{pmatrix}\,, \label{Equations of Motion for photon field} \end{equation} \end{widetext} where the minus signs in $A$ arise because $A$ should be in covariant form, since $MA=0$ reads $M_\mu^{\;\nu}A_\nu=0$ in the matrix convention used. We also define \begin{equation} (\hat{k}_{AF})_s:={}(\hat{k}_{AF})_0-\frac{\omega}{p}(\hat{k}_{AF})_r \label{definition of k_s} \end{equation} for brevity. As stated earlier, the rank of $M$ should be at most 2 for physical solutions to emerge. For mathematical convenience, we can calculate the row-equivalent matrices of $M$ for the different possibilities, namely \mbox{$\{(\hat{k}_{AF})_+=0, (\hat{k}_{AF})_-=0\}$}, \mbox{$\{(\hat{k}_{AF})_+=0, (\hat{k}_{AF})_-\ne0\}$}, \mbox{$\{(\hat{k}_{AF})_+\ne0, (\hat{k}_{AF})_-=0\}$} \& \mbox{$\{(\hat{k}_{AF})_+\ne0, (\hat{k}_{AF})_-\ne0\}$}, and then calculate the corresponding ranks. This calculation shows that the rank of $M$ is at most 2 only if \begin{equation} \begin{aligned} (\hat{k}_{AF})_+={}& 0\,,\\ (\hat{k}_{AF})_-={}& 0\,. \end{aligned} \end{equation} are satisfied; the other cases result in either rank 3 or rank 4 depending on $(\hat{k}_{AF})_s$. Therefore, these are the restrictions on the coefficient subspace, and hence on the dispersion relation. We see that these restrictions kill the last term in \refe{\ref{General CPT-odd Dispersion Relation2}}. The remaining coefficient subspace further splits into two parts depending on whether $(\hat{k}_{AF})_0=(\hat{k}_{AF})_r$ or not. It turns out that the polarization vectors and the dispersion relation which they obey become conventional if $(\hat{k}_{AF})_0=(\hat{k}_{AF})_r$, whereas there arise two transverse polarization vectors with two different dispersion relations if $(\hat{k}_{AF})_0\ne(\hat{k}_{AF})_r$. Concisely, the coefficient space for the CPT-odd photon sector of the {\it nm}SME can be one of the following subspaces: one with conventional solutions, one with birefringent solutions and one with no physical solutions. For later convenience, we denote these coefficient subspaces by $\hat{k}_{AF}^{(cn)}$, $\hat{k}_{AF}^{(bf)}$, and $\hat{k}_{AF}^{(np)}$, where $cn$, $bf$, and $np$ refer to the nature of the resultant polarization vectors: conventional, birefringent, and nonphysical, respectively. The fact that physical solutions of the photon field do exist rules out the possibility $\hat{k}_{AF}=\hat{k}_{AF}^{(np)}$; hence, it is considered merely for completeness. The results are summarized in \reft{\ref{Table-cscops}}.
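As a simple orientation for the birefringent subspace (an illustration, not needed for the general analysis), consider a constant, purely timelike coefficient $(\hat{k}_{AF})_\mu=(k,0,0,0)$, corresponding to the minimal ($d=3$) isotropic limit. Then $(\hat{k}_{AF})_\pm=0$ and $(\hat{k}_{AF})_s=(\hat{k}_{AF})_0=k$, so this coefficient lies in $\hat{k}_{AF}^{(bf)}$, and the two branches of the dispersion relation read
\begin{equation*}
\omega^2=p^2\pm 2pk\,,\qquad \omega\simeq p\pm k \quad\text{for } k\ll p\,,
\end{equation*}
which are the familiar birefringent dispersion relations of the minimal CPT-odd (Carroll--Field--Jackiw) photon sector. The same result follows from the covariant relation (\ref{General CPT-odd Dispersion Relation}), since for this choice $4p_\alpha p^\alpha(\hat{k}_{AF})_\mu(\hat{k}_{AF})^\mu-4\left(p_\mu(\hat{k}_{AF})^\mu\right)^2=-4p^2k^2$.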
\begin{table*} \caption{\label{Table-cscops}Coefficient subspaces of the CPT-odd photon sector and the corresponding solutions.} \begin{ruledtabular} \begin{tabularx}{\textwidth}{@{}c c c c@{}} Coefficient Subspace & Conditions & Dispersion Relation & Polarization Vectors \\ \hline\hline $\hat{k}_{AF}^{(bf)}$ &$\begin{array}{r l} (\hat{k}_{AF})_+={}& 0\\(\hat{k}_{AF})_-={}& 0\\(\hat{k}_{AF})_0-(\hat{k}_{AF})_r \ne{}& 0 \end{array}$&$\begin{array}{l} \omega^2-p^2-2p(\hat{k}_{AF})_s={}0\\\omega^2-p^2+2p(\hat{k}_{AF})_s={}0 \end{array}$& $\begin{array}{c} \left\{\begin{pmatrix} \omega\\0\\p\\0 \end{pmatrix} , \begin{pmatrix} 0\\0\\0\\1 \end{pmatrix}\right\}\\\left\{\begin{pmatrix} \omega\\0\\p\\0 \end{pmatrix} , \begin{pmatrix} 0\\1\\0\\0 \end{pmatrix}\right\} \end{array}$\\\hline $\hat{k}_{AF}^{(cn)}$ &$\begin{array}{r l} (\hat{k}_{AF})_-={}& 0\\(\hat{k}_{AF})_+={}& 0\\(\hat{k}_{AF})_0-(\hat{k}_{AF})_r={}& 0 \end{array}$&$\omega=p$&$\left\{\begin{pmatrix} 1\\0\\1\\0 \end{pmatrix}, \begin{pmatrix} 0\\1\\0\\0 \end{pmatrix}, \begin{pmatrix} 0\\0\\0\\1 \end{pmatrix}\right\}$\\\hline $\hat{k}_{AF}^{(np)}$ &$\{(\hat{k}_{AF})_+\ne0\} \lor \{(\hat{k}_{AF})_-\ne0\}$ & \refe{\ref{General CPT-odd Dispersion Relation2}} & \text{Gauge Solution Only} \end{tabularx} \end{ruledtabular} \end{table*} \subsection{The Propagator} \label{s:P} The task of analytically inverting the inverse propagator ($\ref{Momentum space inverse propagator}$) while remaining covariant is formidable, if possible at all. Since we seek to remain as generic as possible in this paper, the options are either working to leading order or sacrificing covariance. We will examine the first case via both \emph{ansatz} and \emph{perturbation expansion} methods, and the second case by choosing the helicity basis explicitly. \subsubsection{Propagator Ansatz} From the form of the inverse propagator ($\ref{Momentum space inverse propagator}$), an ansatz for the leading order propagator can be proposed in the form \begin{equation} \hat{G}_{\rho\mu}={}\tilde{a}\,\eta_{\rho\mu}+\tilde{b}\,p_\rho p_\mu+\tilde{c}\,(\hat{k}_{AF})_{\{\rho} p_{\mu\}}+\tilde{d}\,(\hat{k}_{AF})_{[\rho} p_{\mu]}+\tilde{e}\,(\hat{k}_{AF})_\rho(\hat{k}_{AF})_\mu\,, \label{Propagator ansatz} \end{equation} where $(\hat{k}_{AF})_{\{\rho} p_{\mu\}}$ stands for the symmetric combination of $(\hat{k}_{AF})_\rho$ and $p_\mu$ while the other form $(\hat{k}_{AF})_{[\rho} p_{\mu]}$ stands for the antisymmetric combination. By the defining equation $\hat{G}_{\rho\mu}(\hat{G}^{-1})^{\mu\nu}={}\delta_\rho^{\;\nu}$, the coefficients can be determined as \begin{equation*} \begin{aligned} \tilde{a}={}&-\frac{1}{p_\sigma p^\sigma}\,,\\ \tilde{b}={}& 0\,,\\ \tilde{c}={}& 0\,,\\ \tilde{d}\,(p_\sigma p^\sigma)(\hat{k}_{AF})^{[\rho} p^{\nu]}={}& 2i\tilde{a}\,\delta^\rho_\mu\epsilon^{\mu\kappa\lambda\nu}(\hat{k}_{AF})_\kappa p_\lambda\,,\\ \tilde{e}={}& 0\,. \end{aligned} \end{equation*} With these coefficients put back into the ansatz, the leading order propagator takes the form \begin{align} \hat{G}_{\rho\mu}\simeq{}&-\frac{1}{p_\alpha p^\alpha}\eta_{\rho\mu}+2i\frac{1}{(p_\alpha p^\alpha)^2}\epsilon_{\rho\mu\kappa\lambda}(\hat{k}_{AF})^\kappa p^\lambda\,. \label{LO Propagator} \end{align} \subsubsection{Perturbation Expansion} Instead of using an ansatz for finding the leading order propagator, a full series expansion can be assumed.
This series can be written as \begin{equation} \hat{G}_{\rho\mu}={}\sum\limits_{n=0}^{\infty}G^{(n)}_{\rho\mu}\,, \label{propagator expansion} \end{equation} where $G^{(n)}_{\rho\nu}$ is the term in the propagator with $n^\text{th}$ order Lorentz violation only. Then, via the inverse propagator ($\ref{Momentum space inverse propagator}$), the equality \mbox{$\delta^{\;\nu}_\rho=G_{\rho\mu}(G^{-1})^{\mu\nu}$} results in \begin{equation*} \delta^\nu_\rho={}-(p_\sigma p^\sigma)(G^{(0)})_{\rho}^\nu-(p_\sigma p^\sigma)\sum\limits_{n=1}^{\infty}(G^{(n)})_{\rho}^\nu+2i\sum\limits_{n=0}^{\infty}G^{(n)}_{\rho\mu}\epsilon^{\mu\kappa\lambda\nu}(\hat{k}_{AF})_\kappa p_\lambda\,. \end{equation*} The perturbation expansion inherently assumes a smooth transition to the conventional case. We can invoke this by turning off the Lorentz violation. Then, both summations die out; hence, \begin{equation*} (G^{(0)})_{\rho}^\nu={}-\frac{1}{(p_\sigma p^\sigma)}\delta^\nu_\rho\,, \end{equation*} which dictates \begin{equation*} 0={}\sum\limits_{n=0}^{\infty}\left((G^{(n+1)})_{\rho}^\nu-\frac{2i}{(p_\sigma p^\sigma)}G^{(n)}_{\rho\mu}\epsilon^{\mu\kappa\lambda\nu}(\hat{k}_{AF})_\kappa p_\lambda\right)\,. \end{equation*} From these two equations, we can expand the propagator upto any order we want. Particularly, up to first order, the propagator takes the form \begin{equation} \hat{G}_{\rho\mu}\simeq{}-\frac{1}{p_\alpha p^\alpha}\eta_{\rho\mu}+2i\frac{1}{(p_\alpha p^\alpha)^2}\epsilon_{\rho\mu\kappa\lambda}(\hat{k}_{AF})^\kappa p^\lambda\,, \tag{\ref{LO Propagator}} \end{equation} which is exactly the same with that of ansatz method. \subsubsection{Helicity Basis Propagator} In this option, instead of sacrificing the exactness, the covariant form of the propagator is abandoned by choosing an explicit basis. The analytic form of the propagator can be written in any particular basis, as one can always go to the matrix representation once an explicit basis is chosen, and the non-singular inverse propagator matrix can always be analytically inverted. 
For convenience, we choose the helicity basis, in which the inverse propagator ($\ref{Momentum space inverse propagator}$) can further be decomposed\footnote{See the details in \refa{\ref{App:HB}}.} and be brought to the form \begin{equation} \begin{aligned} (\hat{G}^{-1})^{\mu\nu}={}&-\eta^{\mu\nu}(p_\sigma p^\sigma)-2\delta^\mu_0\delta^\nu_+p(\hat{k}_{AF})_-+2\delta^\mu_+\delta^\nu_0p(\hat{k}_{AF})_-+2\delta^\mu_0\delta^\nu_-p(\hat{k}_{AF})_+-2\delta^\mu_-\delta^\nu_0p(\hat{k}_{AF})_+\\{} &-2\delta^\mu_-\delta^\nu_+(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0)+2\delta^\mu_+\delta^\nu_-(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0)-2\delta^\mu_r\delta^\nu_-\omega(\hat{k}_{AF})_+\\{} &+2\delta^\mu_-\delta^\nu_r\omega(\hat{k}_{AF})_+-2\delta^\mu_+\delta^\nu_r\omega(\hat{k}_{AF})_-+2\delta^\mu_r\delta^\nu_+\omega(\hat{k}_{AF})_- \end{aligned}\label{Equation: Helicity-basis inverse propagator} \end{equation} which can be represented in the matrix representation as \begin{widetext} \begin{equation} \begin{aligned} (\hat{G}^{-1})\doteq{} \begin{pmatrix} -(p_\sigma p^\sigma)&-2p(\hat{k}_{AF})_-& 0& 2p(\hat{k}_{AF})_+\\ 2p(\hat{k}_{AF})_+&-(p_\sigma p^\sigma)+2(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0)& -2\omega(\hat{k}_{AF})_+& 0\\ 0& -2\omega(\hat{k}_{AF})_-&-(p_\sigma p^\sigma)& 2\omega(\hat{k}_{AF})_+\\ -2p(\hat{k}_{AF})_-& 0& 2\omega(\hat{k}_{AF})_-&-(p_\sigma p^\sigma)-2(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0) \end{pmatrix}\,.\label{inverse propagator in helicity} \end{aligned} \end{equation} \end{widetext} The propagator is simply the inverse of this matrix, which can be calculated analytically; yet, we do not provide it here for two reasons: It does not give any particular insight, and it can be further simplified. In the section \ref{s:PV}, it was demonstrated that coefficient space of {\it nm}SME CPT-odd photon sector can be one of the followings: $\hat{k}_{AF}^{(bf)}$, $\hat{k}_{AF}^{(cn)}$ and $\hat{k}_{AF}^{(np)}$. As $\hat{k}_{AF}^{(np)}$ does not produce physical solutions, we can restrict our attention to the possibilities $\{\hat{k}_{AF}^{(bf)}, \hat{k}_{AF}^{(cn)}\}$, which dictates $(\hat{k}_{AF})_\pm=0$\footnote{This is the main superiority of explicit helicity basis over other approaches as the possibility $\hat{k}_{AF}^{(np)}$ can not be trivially eliminated in them.}. Then, from \refe{\ref{inverse propagator in helicity}} we have \begin{widetext} \begin{equation} \begin{aligned} (\hat{G}^{-1})\doteq{} \begin{pmatrix} -(p_\sigma p^\sigma)& 0& 0& 0\\ 0&-(p_\sigma p^\sigma)+2(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0)& 0& 0\\ 0& 0&-(p_\sigma p^\sigma)& 0\\ 0& 0& 0&-(p_\sigma p^\sigma)-2(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0)\label{inverse propagator in helicity2} \end{pmatrix} \end{aligned} \end{equation} \end{widetext} which gives \begin{equation} \begin{aligned} \hat{G}\doteq{}\text{diagonal}\bigg({}&-\frac{1}{(p_\sigma p^\sigma)},-\frac{1}{(p_\sigma p^\sigma)+2(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0)},\nonumber\\{}&-\frac{1}{(p_\sigma p^\sigma)}, -\frac{1}{(p_\sigma p^\sigma)-2(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0)}\bigg)\,. \end{aligned} \end{equation} For notational reasons, we can rewrite this propagator as \begin{equation} \hat{G}_\mu^{\;\nu}={}-\frac{\delta_\mu^{\;\nu}}{(p_\sigma p^\sigma)}+(\hat{G}_{AF})_\mu^{\;\nu} \label{general form of propagator} \end{equation} in which the total propagator becomes the conventional propagator having a \emph{Lorentz violating propagator contribution}. 
This contribution is \begin{equation} \begin{aligned} (\hat{G}_{AF})_\mu^{\;\nu}={}&\delta_\mu^{\;+}\delta_+^{\;\nu}\left(\frac{1}{(p_\sigma p^\sigma)}-\frac{1}{(p_\sigma p^\sigma)+2(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0)}\right)\\{}&+\delta_\mu^{\;-}\delta_-^{\;\nu}\left(\frac{1}{(p_\sigma p^\sigma)}-\frac{1}{(p_\sigma p^\sigma)-2(\omega(\hat{k}_{AF})_r-p(\hat{k}_{AF})_0)}\right) \label{LV contribution in the propagator}\,. \end{aligned} \end{equation} Here, $(\hat{G}_{AF})$ denotes the contribution of the {\it nm}SME CPT-odd photon sector to the conventional photon propagator at all orders. \section{The Vacuum-Orthogonal Model for CPT-odd Photon} \label{sec:VOMFCOP} When the {\it nm}SME photon sector and the decomposition of the related LVT over spin-weighted spherical harmonics were introduced in the paper \refs{\cite{Kostelecky2009}}, the possibility of specialized models and their construction was also presented and discussed. As stressed in \refp{\ref{intro}}, the main advantage of the helicity basis is its relevance to direct observation, which makes a decomposition in this basis decouple the LVT according to their observable effects. A directly relevant observable effect of the LVT is that on the vacuum propagation. If one restricts the attention to only those LVT which generate leading order dispersive or birefringent effects on the vacuum, then the associated model is named the \emph{vacuum model}. On the contrary, if one restricts the attention to the coefficient subspace that is the complement of that of the vacuum model, then the associated model is called the \emph{vacuum orthogonal model}. In \refp{\ref{sec:COPS}}, we analyzed the whole coefficient space with the simple coefficient set of \refe{\ref{Eq:decompose of k}}, which does not distinguish vacuum properties. However, one needs to consider only vacuum orthogonal LVT for a vacuum orthogonal model; hence, the conversion from this simple set to those in \reft{\ref{Table-sdc}}, followed by imposing $k_{(V)jm}^{(d)}=0$, is required. Fortunately, there is a simple prescription for this conversion, given by \refe{97-98} of \refs{\cite{Kostelecky2009}}. \subsection{Dispersion Relation and Polarization Vectors} \label{s:VODR} Being a special case of the CPT-odd sector, the vacuum orthogonal model obeys the general CPT-odd dispersion relation (\ref{General CPT-odd Dispersion Relation2}).
The mere modification is the application of prescription mentioned above, for which the dispersion relation reduces to the form \begin{widetext} \begin{equation} \begin{aligned} 0={}&\left(p_\mu p^\mu\right)^2-4\Bigg\{\sum\limits_{dnjm}\omega^{d-3-n}p^n\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})\Bigg[(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{njm}\frac{p}{\omega^2}\left(\frac{(d-2-n)\omega^2}{d-2-n+j}-\frac{(d-4-n)p^2}{d-4-n+j}\right)\nonumber\\{}&-\frac{dp^2}{(n+4)(n+2)\omega}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1B)}_{njm}+(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{njm}\frac{j}{p}\left(\frac{\omega^2}{d-2-n+j}-\frac{p^2}{d-4-n+j}\right)\nonumber\\{}&+\frac{d\omega}{(n+2)(n+4)}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1B)}_{njm}\Bigg]\Bigg\}^2+8p_\mu p^\mu\sum\limits_{d_1d_2n_1n_2j_1j_2m_1m_2}\omega^{d_1+d_2-6-n_1-n_2}p^{n_1+n_2}\prescript{}{+1}{Y}_{j_1m_1}(\mathbf{\hat{p}})\nonumber\\{} &\times\prescript{}{-1}{Y}_{j_2m_2}(\mathbf{\hat{p}})\frac{1}{\sqrt{4j_1j_2(j_1+1)(j_2+1)}}\Bigg\{\Bigg[\left(\frac{\omega j_1(n_1+1)}{p(d_1-2-n_1+j_1)}-\frac{pj_1(n_1+3)}{\omega(d_1-4-n_1+j_1)}\right)(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_1)})_{n_1j_1m_1}^{(0B)}\nonumber\\{}&+\frac{d_1}{n_1+4}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_1)})^{(1B)}_{n_1j_1m_1}\Bigg]\Bigg[\left(\frac{\omega j_2(n_2+1)}{p(d_2-2-n_2+j_2)}-\frac{pj_2(n_2+3)}{\omega(d_2-4-n_2+j_2)}\right)(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_2)})_{n_2j_2m_2}^{(0B)}\nonumber\\{}&+\frac{d_2}{n_2+4}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_2)})^{(1B)}_{n_2j_2m_2}\Bigg]+(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_1)})^{(1E)}_{n_1j_1m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_2)})^{(1E)}_{n_2j_2m_2}\Bigg\}\,.\label{General Vacuum-orthogonal dispersion relation} \end{aligned} \end{equation} \end{widetext} As it stands, this dispersion relation does not give much insight. 
However, it can be cast into the form\footnote{See the details in \refa{\ref{App:Covodr}}.} \begin{equation} 0={}(p_\mu p^\mu)\times\Big((p_\mu p^\mu)\mathcal{P}(\omega,p)+\mathcal{Q}(\omega,p)\Big)\,,\label{General Vacuum-orthogonal dispersion relation2} \end{equation} where $\mathcal{P}$ and $\mathcal{Q}$ are defined as \begin{widetext} \begin{subequations} \begin{align} \mathcal{P}(\omega,p):={}& 1-4\Bigg\{\sum\limits_{dnjm}\omega^{d-4-n}p^n\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})\left((\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1B)}_{njm}\frac{d}{(n+2)(n+4)}\right)\nonumber\\{}&+\omega^{d-5-n}p^{n-1}\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{njm}\left(\omega^2\frac{j}{d-2-n+j}+p^2\frac{d-4-n}{d-4-n+j}\right)\Bigg\}^2\,,\\ \mathcal{Q}(\omega,p):={}& 8\sum\limits_{d_1d_2n_1n_2j_1j_2m_1m_2}\omega^{d_1+d_2-6-n_1-n_2}p^{n_1+n_2}\prescript{}{+1}{Y}_{j_1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{j_2m_2}(\mathbf{\hat{p}})\frac{1}{\sqrt{4j_1j_2(j_1+1)(j_2+1)}}\nonumber\\ {}&\times\Bigg\{\Bigg(\left(\frac{\omega j_1(n_1+1)}{p(d_1-2-n_1+j_1)}-\frac{pj_1(n_1+3)}{\omega(d_1-4-n_1+j_1)}\right)(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_1)})_{n_1j_1m_1}^{(0B)}+\frac{d_1}{n_1+4}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_1)})^{(1B)}_{n_1j_1m_1}\Bigg)\nonumber\\{}&\times\Bigg(\left(\frac{\omega j_2(n_2+1)}{p(d_2-2-n_2+j_2)}-\frac{pj_2(n_2+3)}{\omega(d_2-4-n_2+j_2)}\right)(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_2)})_{n_2j_2m_2}^{(0B)}+\frac{d_2}{n_2+4}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_2)})^{(1B)}_{n_2j_2m_2}\Bigg)\nonumber\\{}&+(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_1)})^{(1E)}_{n_1j_1m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d_2)})^{(1E)}_{n_2j_2m_2}\Bigg\}\,. \end{align}\label{P and Q of dispersion relation} \end{subequations} \end{widetext} As we are interested in the physical solutions only, we can discard the possibility \mbox{$\hat{k}_{AF}=\hat{k}_{AF}^{(np)}$}. This brings the restriction $(\hat{k}_{AF})_\pm=0$, which turns into the constraint $\mathcal{Q}(\omega,p)=0$ for the vacuum orthogonal subspace. Then \refe{\ref{General Vacuum-orthogonal dispersion relation2}} becomes \begin{equation} 0={}\left(p_\mu p^\mu\right)^2\left(1+\mathcal{R}(\omega,p)\right)\left(1-\mathcal{R}(\omega,p)\right)\,,\label{General Vacuum-orthogonal dispersion relation3} \end{equation} where $\mathcal{R}$ is defined as \begin{equation} \begin{aligned} \mathcal{R}(\omega,p):={}& 2\sum\limits_{dnjm}\Bigg\{\omega^{d-4-n}p^n\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})\left((\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1B)}_{njm}\frac{d}{(n+2)(n+4)}\right)\\{}&+\omega^{d-5-n}p^{n-1}\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{njm}\left(\omega^2\frac{j}{d-2-n+j}+p^2\frac{d-4-n}{d-4-n+j}\right)\Bigg\}\,. \label{definition of R} \end{aligned} \end{equation} The dispersion relation in \refe{\ref{General Vacuum-orthogonal dispersion relation3}} has three roots: $\omega=p$ and $\mathcal{R}(\omega,p)\pm1=0$. As can be clearly deduced from \reft{\ref{Table-cscops}}, the first root is the conventional dispersion relation that $\hat{k}_{AF}^{(cn)}$ gives rise, and the other two are the dispersion relations that the birefringent solutions, which $\hat{k}_{AF}^{(bf)}$ gives rise, obey. 
However, to prevent any ambiguity, the dispersion relations for the specific cases will be explicitly calculated below. In the vacuum orthogonal coefficient subspace, the term \mbox{$p(\hat{k}_{AF})_s=p(\hat{k}_{AF})_0-\omega(\hat{k}_{AF})_r$} for $\hat{k}_{AF}^{(bf)}$ becomes \begin{widetext} \begin{equation*} \begin{aligned} p(\hat{k}_{AF})_0-\omega(\hat{k}_{AF})_r={}& (p_\mu p^\mu)\Bigg\{\sum\limits_{dnjm}\omega^{d-4-n}p^n\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})\Bigg((\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1B)}_{njm}\frac{d}{(n+2)(n+4)}\Bigg)\nonumber\\{}&+\omega^{d-5-n}p^{n-1}\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{njm}\left(\omega^2\frac{j}{d-2-n+j}+p^2\frac{d-4-n}{d-4-n+j}\right)\Bigg\}\,; \end{aligned} \end{equation*} \end{widetext} hence, \begin{equation*} p(\hat{k}_{AF})_s={}\frac{1}{2}(p_\mu p^\mu)\mathcal{R}(\omega,p)\,. \end{equation*} Once this is inserted into the dispersion relations in \reft{\ref{Table-cscops}}, they become \begin{equation*} \omega^2-p^2\pm 2p(\hat{k}_{AF})_s=0\quad \longrightarrow\quad (p_\mu p^\mu)\left(1\pm\mathcal{R}(\omega,p)\right)=0\,. \end{equation*} For the $\omega=p$ root, $(\hat{k}_{AF})_s$ is forced to be zero, which dictates $(\hat{k}_{AF})_r-(\hat{k}_{AF})_0=0$. However, this contradicts the requirement $(\hat{k}_{AF})_r\ne(\hat{k}_{AF})_0$ of $\hat{k}_{AF}^{(bf)}$, for which these dispersion relations are valid. Therefore, the only roots of the dispersion relation for $\hat{k}_{AF}^{(bf)}$ are \mbox{$\mathcal{R}(\omega,p)\pm1=0$}, which is exactly our earlier deduction. At first glance, there seems to be a contradiction in the results. The vacuum orthogonal coefficient subspace should not have produced birefringent results; after all, the name vacuum orthogonal asserts no leading order vacuum birefringence. The results are consistent though, as the birefringent dispersion relations $\mathcal{R}(\omega,p)\pm1=0$ are not actually so-called \emph{perturbative solutions}, which smoothly reduce to the conventional dispersion relation as Lorentz violation is switched off. They are so-called \emph{spurious solutions} \cite{Shreck2013}, which blow up as the LVT go to zero. According to \refs{\cite{Kostelecky2009}}, these solutions are Planck scale effects and should be neglected. We will explicitly show that these solutions blow up in \refp{\ref{sec:SMA}}. The resulting situation is worth restating. In \refp{\ref{s:PV}}, we showed that the coefficient space of the {\it nm}SME CPT-odd photon sector can be one of the possibilities $\{\hat{k}_{AF}^{(bf)}, \hat{k}_{AF}^{(cn)}, \hat{k}_{AF}^{(np)}\}$, where $\hat{k}_{AF}^{(np)}$ is irrelevant as it produces no physical solutions. For the general vacuum orthogonal model, $\hat{k}_{AF}^{(bf)}$ also becomes irrelevant as it produces spurious solutions only; hence, the general vacuum orthogonal model has conventional solutions only. Since the model has no birefringent solutions at any order\footnote{By construction, the vacuum orthogonal model should not have leading order birefringence, but that by no means prevents it from having higher order birefringence effects.}, we say that \emph{the vacuum orthogonal model is vacuum orthogonal at all orders}, and \emph{all polarization vectors and their dispersion relations remain conventional in the vacuum orthogonal model}.
\subsection{The Coefficient Subspace $\hat{k}_{AF}^{(cn)}$ in Vacuum Orthogonal Model} \label{s:VOPV} The general vacuum orthogonal model has physical and relevant solutions only in the coefficient subspace $\hat{k}_{AF}^{(cn)}$ as shown in \refp{\ref{s:VODR}}. Because $\hat{k}_{AF}^{(cn)}$ is defined as the subspace for which $(\hat{k}_{AF})_\pm=0$ and $(\hat{k}_{AF})_0=(\hat{k}_{AF})_r$ hold, the relevant coefficient subspace of vacuum orthogonal model is the vacuum orthogonal version of these constraints. One can show that these equations translate into the following conditions in the vacuum orthogonal coefficient subspace. \begin{widetext} \textbf{$\bm{(\hat{k}_{AF})_0=(\hat{k}_{AF})_r}$ Condition:} \begin{equation} \begin{aligned} 0={}&\sum\limits_{n}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{njm}\left(-\frac{4}{d}+\frac{4j(d+1+j)}{d(d-2-n+j)(d-4-n+j)}\right)\\ {}&+\sum\limits_{n}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1B)}_{njm}\left(\frac{1}{n+2}-\frac{d}{(n+4)(n+2)}-\frac{(d-3-n)(n+4)}{d(d-3-n+j)(n+2)}\right)\label{first condition for vacuum polarization} \end{aligned} \end{equation} \textbf{$\bm{(\hat{k}_{AF})_\pm=0}$ Condition:} \begin{equation} \begin{aligned} \sum\limits_{n}\left(-\frac{2j(d-1+j)}{(d-2-n+j)(d-4-n+j)}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{njm}+\frac{d}{n+4}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1B)}_{njm}\right)={}& 0\,, \\ \sum\limits_{n}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1E)}_{njm}={}& 0 \end{aligned}\label{second condition for vacuum polarization} \end{equation} \end{widetext} \begin{turnpage} \begin{table* \caption{\label{Table-vos} Vacuum Orthogonal Solutions.} \begin{ruledtabular} \begin{tabular}{c c c c} \textbf{Coefficient Subspace} & Conditions & \textbf{Dispersion Relation} & \textbf{Polarization Vectors} $A^\mu$\\\hline $\hat{k}_{AF}^{(cn)}$ & $\begin{aligned} 0={}&\sum\limits_{n}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{njm}\left(-\frac{4}{d}+\frac{4j(d+1+j)}{d(d-2-n+j)(d-4-n+j)}\right)\\ {}&+\sum\limits_{n}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1B)}_{njm}\bigg(\frac{1}{n+2}-\frac{d}{(n+4)(n+2)}\\ {}&-\frac{(d-3-n)(n+4)}{d(d-3-n+j)(n+2)}\bigg)\\ 0={}&\sum\limits_{n}\bigg(-\frac{2j(d-1+j)}{(d-2-n+j)(d-4-n+j)}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{njm}\\ &{}+\frac{d}{n+4}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1B)}_{njm}\bigg)\\ 0={}&\sum\limits_{n}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(1E)}_{njm} \end{aligned}$ & $\omega=p$ & $\left\{\begin{pmatrix} 1\\0\\1\\0 \end{pmatrix},\begin{pmatrix} 0\\1\\0\\0 \end{pmatrix},\begin{pmatrix} 0\\0\\0\\1 \end{pmatrix}\right\}$\\\hline $\{\hat{k}_{AF}^{(bf)}, \hat{k}_{AF}^{(np)}\}$ & Given by compliment LVT combinations & Spurious or Nonphysical & Physically Irrelevant \end{tabular} \end{ruledtabular} \end{table*} \end{turnpage} \subsection{The Propagator} In the vacuum orthogonal model, the general {\it nm}SME CPT-odd photon propagator can be further simplified by noting that the LV contribution to the propagator, \refe{\ref{LV contribution in the propagator}}, can be cast into the form \begin{equation*} (\hat{G}_{AF})_\mu^{\;\nu}={}\delta_\mu^{\;+}\delta_+^{\;\nu}\left(\frac{1}{(p_\sigma p^\sigma)}-\frac{1}{(p_\sigma p^\sigma)\left(1+\mathcal{R}(\omega,p)\right)}\right)+\delta_\mu^{\;-}\delta_-^{\;\nu}\left(\frac{1}{(p_\sigma p^\sigma)}-\frac{1}{(p_\sigma 
p^\sigma)\left(1-\mathcal{R}(\omega,p)\right)}\right) \end{equation*} from the equivalence $ 2p(\hat{k}_{AF})_s=(p_\mu p^\mu)\mathcal{R}(\omega,p)$ in the vacuum orthogonal coefficient subspace, which is showed in \refp{\ref{s:VOPV}}. For notational convenience, we can combine these terms and rewrite the general form of the propagator (\ref{general form of propagator}) as \begin{equation} \hat{G}_\mu^{\;\nu}={}-\frac{\delta_\mu^{\;\nu}}{(p_\sigma p^\sigma)}+\frac{1}{(p_\sigma p^\sigma)}\left(\delta_\mu^{\;+}\delta_+^{\;\nu}\frac{\mathcal{R}(\omega,p)}{1+\mathcal{R}(\omega,p)}-\delta_\mu^{\;-}\delta_-^{\;\nu}\frac{\mathcal{R}(\omega,p)}{1-\mathcal{R}(\omega,p)}\right)\,.\label{eq: General vacuum orthogonal propagator} \end{equation} This is the general form of the propagator for the vacuum orthogonal model. However, it contains redundant generality as the only physical solutions emerge from $\hat{k}_{AF}^{(cn)}$. We can restrict $\mathcal{R}(\omega,p)$ to this case by taking $(\hat{k}_{AF})_r$ to $(\hat{k}_{AF})_0$: \begin{equation} \begin{aligned} \mathcal{R}(\omega,p)&=\frac{2p(\hat{k}_{AF})_s}{(p_\mu p^\mu)}=2\frac{p(\hat{k}_{AF})_0-\omega(\hat{k}_{AF})_r}{\omega^2-p^2}\,,\\ \lim\limits_{(\hat{k}_{AF})_r\rightarrow(\hat{k}_{AF})_0}\mathcal{R}(\omega,p)&=-\frac{2(\hat{k}_{AF})_0}{\omega+p}\,. \end{aligned}\nonumber \end{equation} Then, \refe{\ref{eq: General vacuum orthogonal propagator}} becomes \begin{equation} \hat{G}_\mu^{\;\nu}={}-\frac{\delta_\mu^{\;\nu}}{(p_\sigma p^\sigma)}-\frac{1}{(p_\sigma p^\sigma)}\left(\delta_\mu^{\;+}\delta_+^{\;\nu}\frac{2(\hat{k}_{AF})_0}{\omega+p-2(\hat{k}_{AF})_0}-\delta_\mu^{\;-}\delta_-^{\;\nu}\frac{2(\hat{k}_{AF})_0}{\omega+p+2(\hat{k}_{AF})_0}\right)\,.\label{eq: Physical vacuum orthogonal propagator} \end{equation} \section{Special Model Analysis} \label{sec:SMA} \subsection{Vacuum Orthogonal and Isotropic Models at All Orders} \label{ss:VOI} The examination of a Lorentz violating model with the full LVT set is theoretically quite cumbersome and experimentally not practical. This makes working with special models inevitable, among whom vacuum and vacuum orthogonal models are introduced in \refp{\ref{sec:VOMFCOP}}. Another special model that can be considered is so called \emph{isotropic model}, which is also referred as ``fried-chicken" model. In such a model, all LVT except the isotropic ones are accepted to vanish in a preferred reference frame. Here, the selection of the reference frame is crucial as the vanishing terms are not necessarily zero in other reference frames. From the theoretical point of view one natural choice is the frame of \emph{Cosmic Microwave Background} (CMB) as indicated in \refs{\cite{Kostelecky2009}}. Another possible choice is the canonical Sun-centered frame, which exploits the isotropic features of the theory better for an experimental point of view. Although isotropic models are special models in their own rights, what is examined here is only a hybrid model of vacuum orthogonal and isotropic models, where isotropic model is considered as a limiting case of general vacuum orthogonal model. That is a useful limiting case because isotropic models are somewhat popular, and moreover the result of \refp{\ref{s:VODR}} that there is no nonconventional root of dispersion relation in vacuum orthogonal models can be better seen in this limit. 
With the spherical decomposition, the condition of being isotropic translates into $j=0$, leading all LVT except $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{n00}$ to vanish in the {\it nm}SME CPT-odd photon sector as can be seen in \reft{\ref{Table-irfvoc}}. If we apply this to the general CPT-odd vacuum orthogonal dispersion relation \refe{\ref{General Vacuum-orthogonal dispersion relation2}}, the dispersion relation takes the form \begin{equation} 0={}\left(1-\frac{p^2}{\pi}\left(\sum\limits_{d=\text{odd}>3}\sum\limits_{n=\text{even}\ge 0}^{d-5}\omega^{d-5-n}p^n\xi_{dn}\right)^2\right)(p_\mu p^\mu)^2\,, \label{vacuum orthogonal & isotropic dispersion relation} \end{equation} where \begin{equation*} \xi_{dn}:={}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(d)})^{(0B)}_{n00} \end{equation*} is defined for brevity. In the leading order, the dispersion relation reduces to the form \begin{equation} 0={}\left(1-\frac{(\xi_{50} p)^2}{\pi}\right)(p_\mu p^\mu)^2 \label{vod in 5} \end{equation} where the Lorentz violation is purely multiplicative, and there are only two conventional roots. The multiplicative term possesses no roots for $\omega$ and is practically irrelevant since the framework is EFT which is expected to hold only for $\lvert\xi_{dn}p\rvert\ll1$. In the next-to-leading order, the dispersion relation becomes \begin{equation*} 0=\left(p_\mu p^\mu\right)^2\left(1+\frac{p}{\sqrt{\pi}}\left(\omega^2\xi_{70}+p^2\xi_{72}\right)\right)\left(1-\frac{p}{\sqrt{\pi}}\left(\omega^2\xi_{70}+p^2\xi_{72}\right)\right) \end{equation*} which can be directly compared with the general case \refe{\ref{General Vacuum-orthogonal dispersion relation3}}. The deduction there that nonconventional roots are spurious is explicitly proven here in this limit, as these roots are \begin{equation*} \omega^2=\pm\frac{\sqrt{\pi}}{\xi_{70}p}-\frac{\xi_{72}}{\xi_{70}}p^2\,, \end{equation*} which blow up as Lorentz violation is turned off. As higher orders are considered, there will be extra perturbative terms added with higher orders of $p$; however, the first term, which causes the spurious nature, will remain. In isotropic model analysis, the so-called \emph{ring coefficients} are preferred over the general coefficients employed above. We did the calculations in the usual coefficients as they are more transparent; however, a similar treatment can be done via the ring coefficients. In these experimentally more convenient coefficients, \refe{\ref{vacuum orthogonal & isotropic dispersion relation}} reduces to the form\footnote{See the details in \refa{\ref{App:COVOIDRIRC}}.} \begin{equation} 0={}\Bigg\{1-\frac{p^2}{\pi}\bigg(\sum\limits_{d=\text{odd}}\sum\limits_{n=\text{even}\ge 0}^{d-5}\sum\limits_{i=\text{even}\ge 0}^{(d-5-n)}\frac{d}{n+3}\omega^{d-5-n-i}p^{i}\left((\ring{k}_{AF}^{(d)})_np^{n}\right)\bigg)^2\Bigg\}(p_\mu p^\mu)^2\,. \label{disperion relation in ring coefficients} \end{equation} \subsection{The Leading Order Vacuum Orthogonal Model} \label{ss:LOVOM} In \refp{\ref{ss:VOI}}, the isotropic limit of the general vacuum orthogonal model is considered. In this section, leading order limit $d=5$ of general vacuum orthogonal model will be examined for the similar purposes: explicit analysis of spurious roots and relevant coefficients' determination. The dispersion relation for this leading order model is readily given by Equation (\ref{General Vacuum-orthogonal dispersion relation3}). 
One, then, needs to restrict \refe{\ref{definition of R}} to $d=5$ and expand it explicitly in terms of the relevant LV coefficients. However, in order to derive the more generic form of the dispersion relation which also applies to the case $\hat{k}_{AF}=\hat{k}_{AF}^{(np)}$\footnote{We argued that these should vanish for physical polarization vectors to emerge; nonetheless, we will count them for now for the sake of completeness.}, the most general dispersion relation \refe{\ref{General Vacuum-orthogonal dispersion relation}} is the starting point to account for additional terms $(\hat{k}_{AF})_\pm$. Once $d=5$ is set, \refe{\ref{General Vacuum-orthogonal dispersion relation}} becomes \begin{widetext} \begin{equation} \begin{aligned} 0={}(p_\mu p^\mu)\times\Bigg\{{}&\left(p_\mu p^\mu\right)-4(p_\mu p^\mu)\Bigg[\sum_{jm}\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})\bigg((\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{0jm}^{(0B)}p+\frac{\omega}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{1jm}^{(0B)}+\frac{5\omega}{8}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{0jm}^{(1B)}\\{}&+\frac{p}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{1jm}^{(1B)}\bigg)\Bigg]^2+8\sum\limits_{n_1n_2j_1j_2m_1m_2}\omega^{4-n_1-n_2}p^{n_1+n_2}\prescript{}{+1}{Y}_{j_1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{j_2m_2}(\mathbf{\hat{p}})\\ {}&\times\frac{1}{\sqrt{4j_1j_2(j_1+1)(j_2+1)}}\Bigg[\Bigg(\left(\frac{\omega j_1(n_1+1)}{p(3-n_1+j_1)}-\frac{pj_1(n_1+3)}{\omega(1-n_1+j_1)}\right)(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{n_1j_1m_1}^{(0B)}\\{}&+\frac{5}{n_1+4}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{n_1j_1m_1}\Bigg)\Bigg(\left(\frac{\omega j_2(n_2+1)}{p(3-n_2+j_2)}-\frac{pj_2(n_2+3)}{\omega(1-n_2+j_2)}\right)(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{n_2j_2m_2}^{(0B)}\\{}&+\frac{5}{n_2+4}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{n_2j_2m_2}\Bigg)+(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1E)}_{n_1j_1m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1E)}_{n_2j_2m_2}\Bigg]\Bigg\}\,. 
\end{aligned} \end{equation} \end{widetext} Now that $d$ is fixed, $n$ and $j$ are bounded; hence, the expansion over them can be carried out explicitly: \begin{widetext} \begin{equation} \begin{aligned} 0={}&\left(p_\mu p^\mu\right)^2-4(p_\mu p^\mu)^2\Bigg(\sum_{m}\bigg(\prescript{}{0}{Y}_{0m}(\mathbf{\hat{p}})(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{00m}^{(0B)}p+\prescript{}{0}{Y}_{1m}(\mathbf{\hat{p}})\left(\frac{\omega}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{11m}^{(0B)}+\frac{5\omega}{8}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{01m}^{(1B)}\right)\\ {}&+\prescript{}{0}{Y}_{2m}(\mathbf{\hat{p}})\frac{p}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{12m}^{(1B)}\bigg)\Bigg)^2\\ {}&+8p_\mu p^\mu\sum\limits_{m_1m_2}\left(\begin{array}{r l} (\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m_2} {}&\times\frac{1}{4}\left(\frac{2}{3}\omega^2-4p^2\right)^2\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{1m_2}(\mathbf{\hat{p}})\\{} +(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{01m_2} {}&\times\frac{5}{16}p^2\left(\frac{2}{3}\omega^2-4p^2\right)\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{1m_2}(\mathbf{\hat{p}})\\{} +(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m_2}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{01m_1} {}&\times\frac{5}{16}p^2\left(\frac{2}{3}\omega^2-4p^2\right)\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{1m_2}(\mathbf{\hat{p}})\\{} +(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{12m_2} {}&\times\frac{1}{4\sqrt{3}}\omega p\left(\frac{2}{3}\omega^2-4p^2\right)\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{2m_2}(\mathbf{\hat{p}})\\{} +(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m_2}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{12m_1} {}&\times\frac{1}{4\sqrt{3}}\omega p\left(\frac{2}{3}\omega^2-4p^2\right)\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{2m_2}(\mathbf{\hat{p}})\\{} +(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{01m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{01m_2} {}&\times\frac{25}{64}p^4\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{1m_2}(\mathbf{\hat{p}})\\{} +(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{01m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{12m_2} {}&\times\frac{5}{16\sqrt{3}}p^3\omega\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{2m_2}(\mathbf{\hat{p}})\\{} +(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{01m_2}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{12m_1} {}&\times\frac{5}{16\sqrt{3}}p^3\omega\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{2m_2}(\mathbf{\hat{p}})\\{} +(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{12m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{12m_2} {}&\times\frac{1}{12}\omega^2p^2\prescript{}{+1}{Y}_{2m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{2m_2}(\mathbf{\hat{p}})\\{} 
+(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1E)}_{11m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1E)}_{11m_2} {}&\times\frac{1}{4}\omega^2p^2\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{1m_2}(\mathbf{\hat{p}})\\{} +(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1E)}_{11m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1E)}_{22m_2} {}&\times\frac{1}{4\sqrt{3}}\omega p^3\left(\prescript{}{+1}{Y}_{1m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{2m_2}(\mathbf{\hat{p}})+\prescript{}{+1}{Y}_{2m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{1m_2}(\mathbf{\hat{p}})\right)\\{}+(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1E)}_{22m_1}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1E)}_{22m_2} {}&\times\frac{1}{12} p^4\prescript{}{+1}{Y}_{2m_1}(\mathbf{\hat{p}})\prescript{}{-1}{Y}_{2m_2}(\mathbf{\hat{p}}) \end{array}\right)\,. \label{vo dispersion relation in component form for d5} \end{aligned} \end{equation} \end{widetext} This is the most generic dispersion relation for the \emph{leading order vacuum orthogonal model of {\it nm}SME photon sector}. In the isotropic limit, all coefficients except $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{00m}$ dies, which turns \refe{\ref{vo dispersion relation in component form for d5}} into \refe{\ref{vod in 5}}: a trivial consistency check. Now that the most generic dispersion relation is obtained, the attention can be restricted to the physically relevant set $\{\hat{k}_{AF}^{(cn)}, \hat{k}_{AF}^{(bf)}\}$. Then, the last term in the \refe{\ref{vo dispersion relation in component form for d5}} dies out, simplifying the dispersion relation to the form \begin{equation} \begin{aligned} 0={}\left(p_\mu p^\mu\right)^2-4(p_\mu p^\mu)^2\Bigg(\sum_{m}\bigg({}&\prescript{}{0}{Y}_{0m}(\mathbf{\hat{p}})(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{00m}^{(0B)}p+\prescript{}{0}{Y}_{1m}(\mathbf{\hat{p}})\left(\frac{\omega}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{11m}^{(0B)}+\frac{5\omega}{8}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{01m}^{(1B)}\right)\\{}& +\prescript{}{0}{Y}_{2m}(\mathbf{\hat{p}})\frac{p}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{12m}^{(1B)}\bigg)\Bigg)^2\,.\label{simplified vo dispersion relation in component form for d5} \end{aligned} \end{equation} The equation can be reorganized as \begin{widetext} \begin{equation*} \begin{aligned} 0={}&(p_\mu p^\mu)^2\times\Bigg\{1-2\sum_{m}\Bigg[\prescript{}{0}{Y}_{0m}(\mathbf{\hat{p}})(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{00m}^{(0B)}p+\prescript{}{0}{Y}_{1m}(\mathbf{\hat{p}})\left(\frac{\omega}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{11m}^{(0B)}+\frac{5\omega}{8}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{01m}^{(1B)}\right)\\{}&+\prescript{}{0}{Y}_{2m}(\mathbf{\hat{p}})\frac{p}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{12m}^{(1B)}\Bigg]\Bigg\}\Bigg\{1+2\sum_{m}\Bigg[\prescript{}{0}{Y}_{0m}(\mathbf{\hat{p}})(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{00m}^{(0B)}p\\ 
{}&+\prescript{}{0}{Y}_{1m}(\mathbf{\hat{p}})\left(\frac{\omega}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{11m}^{(0B)}+\frac{5\omega}{8}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{01m}^{(1B)}\right)+\prescript{}{0}{Y}_{2m}(\mathbf{\hat{p}})\frac{p}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{12m}^{(1B)}\Bigg]\Bigg\}\,. \end{aligned} \end{equation*} \end{widetext} At last, it is obvious now to extract the roots: \begin{equation*} \begin{aligned} \omega={}& p\,,\\ \omega={}&\pm\frac{1}{2a}-\frac{b}{a}p \end{aligned} \end{equation*} where \begin{equation*} \begin{aligned} a:={}&\sum\limits_m\prescript{}{0}{Y}_{1m}(\mathbf{\hat{p}})\left(\frac{1}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{11m}^{(0B)}+\frac{5}{8}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{01m}^{(1B)}\right)\,,\\{} b:={}&\sum\limits_m\bigg(\prescript{}{0}{Y}_{0m}(\mathbf{\hat{p}})(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{00m}^{(0B)}+\prescript{}{0}{Y}_{2m}(\mathbf{\hat{p}})\frac{1}{3}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})_{12m}^{(1B)}\bigg)\,. \end{aligned} \end{equation*} As promised, the nonconventional roots are explicitly spurious. Again, like the result in the isotropic limit, the spurious nature is given by the first term which is expected to remain at all orders, where consideration of higher orders will simply bring higher order perturbative terms into the equation. This is analogous to \refe{3.4} of \refs{\cite{Shreck2013}}, which is the result of a similar calculation within CPT-even sector of {\it nm}SME. The possibility $\hat{k}_{AF}^{(bf)}$ being explicitly shown to be spurious, as well as $\hat{k}_{AF}^{(np)}$ being physically irrelevant, the only option for $\hat{k}_{AF}$ remains to be $\hat{k}_{AF}^{(cn)}$, which was deduced at the end of \refp{\ref{s:VODR}} and expressed as vacuum orthogonal model being vacuum orthogonal at all orders. However, the question of whether there indeed exists a nontrivial coefficient subspace\footnote{The trivial coefficient subspace would be the null set, which indicates no Lorentz violation whatsoever in the vacuum orthogonal model of {\it nm}SME photon sector.} which satisfies the necessary conditions in \reft{\ref{Table-vos}} is not addressed. We explicitly showed that\footnote{See the details in \refa{\ref{App:COD5VOCS}}.} there indeed exists such a nontrivial coefficient subspace, which we shall denote as $\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5cn)}$. In this notation, $AF$ indicates that the coefficient space is CPT-odd, negation diacritic stands for vacuum-orthogonal model, 5 is the operator dimension and $cn$ denotes that the resultant dispersion relation is conventional. Consequently, $\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5cn)}$ reads as \emph{the coefficient subspace of leading order vacuum orthogonal model of {\it nm}SME CPT-odd photon}\footnote{We need not to refer to the conventional nature of the resultant solutions of this coefficient subspace as there are no other physical solutions for this model; hence, there will be no terms like $\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5bf)}$ or $\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5np)}$. Yet, $cn$ should be there to make a distinction from already-existing LVT $\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)}$.}. 
In this subspace, with \refe{47} of \refs{\cite{Kostelecky2009}}, $(\hat{k}_{AF})_0$ can be shown to take the form \begin{equation} (\hat{k}_{AF})_0=p^2\sum\limits_{m}\prescript{}{0}{Y}_{jm}(\mathbf{\hat{p}})(k_{AF}^{(5)})^{(0B)}_{11m}\,,\nonumber \end{equation} where contributions of $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{20m}$ and $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{00m}$ cancel one another. As $\prescript{}{0}{Y}_{jm}$ are ordinary spherical harmonics, we can rewrite this as \begin{equation} (\hat{k}_{AF})_0=p^2\sum\limits_{m}{Y}_{j}^{m}(\mathbf{\hat{p}})(k_{AF}^{(5)})^{(0B)}_{11m}\,,\nonumber \end{equation} which then can be inserted into \refe{\ref{eq: Physical vacuum orthogonal propagator}} to yield the corresponding propagator. The important consequence of this is that in addition to the dispersion relation and the polarization vectors, the propagator also remains conventional if the LVT are restricted to $\{(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{000}, (\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{200}\}$. The results are summarized in \reft{\ref{Table-cslovom}}. \begin{table* \caption{\label{Table-cslovom}The coefficient subspace of leading order vacuum orthogonal model $\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5c)}$} \begin{ruledtabular} \begin{tabular}{l l} Free Coefficients: & $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{00m}$ \& $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m}$\\\hline Nonzero Coefficients:& $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{00m}$, $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m}$, $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{20m}$, $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{01m}$ \& $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{11m}$\\\hline Constraint Relations:& $\begin{aligned} (\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{20m}={}&-(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{00m}\,,\\ (\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{01m}={}&\frac{296}{109}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m}\,,\\ (\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(1B)}_{21m}={}&-\frac{8}{109}(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m} \end{aligned}$\\\hline Field Theoretical Properties& \begin{tabular}{l} Conventional Dispersion Relation\\ Conventional Polarization Vectors\\ Conventional Propagator if $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{11m}=0$ \end{tabular} \end{tabular} \end{ruledtabular} $^*$Parameter $m$ runs from $-j$ to $j$ as integers. \end{table*} \section{Conclusions} \label{sec:C} In this study, the CPT-odd photon sector of {\it nm}SME is analyzed generically. The dispersion relation is calculated for this model out of the general photon sector dispersion relation \cite{Kostelecky2009} and is explicitly expressed in the helicity basis. In this explicit expansion, it is shown that the general dispersion relation can be highly simplified via removal of the redundant terms $(\hat{k}_{AF})_\pm$ whose contribution vetoes the emergence of physical polarization vectors. 
This last result is shown via a rank-nullity approach in the equations of motion, which also induces three possibilities for the polarization vectors with respect to the coefficient subspace at hand. These coefficient subspaces are named $\hat{k}_{AF}^{(bf)}$, $\hat{k}_{AF}^{(cn)}$, and $\hat{k}_{AF}^{(np)}$, where the distinguishing letters stand for the resultant solutions of the relevant coefficient subspace: birefringent, conventional, and nonphysical, respectively. One direct consequence of this result is that there is a possible LVT combination which modifies neither the dispersion relation nor the polarization vectors for the whole CPT-odd model. Another important consequence is that there is no coefficient subspace in the {\it nm}SME CPT-odd photon sector that yields nonconventional nonbirefringent solutions. The second consequence becomes particularly important in the general vacuum orthogonal models. These models are characterized by the fact that they induce no leading order vacuum propagation effect; hence, the initial anticipation would be that $\hat{k}_{AF}^{(bf)}$ has no projection on the vacuum orthogonal model. A puzzle arises at first glance, as it was demonstrated that $\hat{k}_{AF}^{(bf)}$ indeed generates solutions in the vacuum orthogonal model; however, the resultant solutions are shown to be spurious, that is, solutions that diverge as LV is turned off. It is stated in \refs{\cite{Kostelecky2009,Shreck2013}} that these solutions are Planck scale effects and should be neglected; therefore, the only possibility with physical solutions for vacuum orthogonal models is $\hat{k}_{AF}^{(cn)}$. Vacuum orthogonal models are constructed with no leading order vacuum effect; however, our result indicates that vacuum orthogonal models in the CPT-odd sector are vacuum orthogonal at all orders; in other words, they do not accept any solution other than the two standard transverse polarizations with the conventional dispersion relation $\omega=p$ at any order. The only missing part, namely whether the coefficient subspace $\hat{k}_{AF}^{(cn)}$ is nontrivial, is investigated for the leading order model and is explicitly shown to be nontrivial: there arises a two-parameter coefficient subspace, denoted by $\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5cn)}$, which, within the leading order model, leaves both the polarization vectors and the dispersion relations conventional at all orders. In addition to the dispersion relations and the polarization vectors, the general propagator is also addressed for the {\it nm}SME CPT-odd photon sector. As the generic form of the inverse propagator is an infinite series, it was argued that it would be formidable, if possible at all, to invert it covariantly and analytically; hence, three different methods were proposed. While the first two methods, the ansatz and the perturbation expansion, successfully resulted in covariant forms of the propagator up to some order and agreed on the result in the leading order, the last method, the explicit helicity basis, gives an analytically exact form, albeit at the cost of manifest covariance. The superiority of the explicit helicity-basis propagator arises when attention is restricted to the possibilities with physical solutions, $\{\hat{k}_{AF}^{(bf)}, \hat{k}_{AF}^{(cn)}\}$, as the redundant LVT can be eliminated immediately in this method, unlike in the covariant propagators.
This simplified propagator, which is diagonal in the helicity basis, is written as the conventional propagator receiving a Lorentz violating contribution denoted by $\hat{G}_{AF}$. This contribution is also calculated for the vacuum orthogonal special case and shown to be non-vanishing unless all $(\hat{k}_{AF})_i$ are explicitly zero. Additionally, it is demonstrated that all $(\hat{k}_{AF})_i$ vanish if all Lorentz violation is provided by the two non-vanishing terms $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{000}$ and $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{200}$ with $(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{200}=-(\mathrel{\rlap{\lower0pt\hbox{\hskip1pt{$k$}}_{AF}^{(5)})^{(0B)}_{000}$.
\section{Introduction} In order to describe the automorphisms of a finite group we often begin by locating a characteristic series from which the automorphisms can be determined by a recursive process. This thinking is also used in group-isomorphism tests and classification, as seen in early work by Fitting and Hall \citelist{\cite{Fitting:const}\cite{Hall:const}*{p. 208}}, and in successive improvements, e.g. \citelist{\cite{Higman:chic}*{Section III--IV}\cite{Newman}\cite{Robinson:aut}\cite{ELGO}*{Section 7}\cite{CH:iso}\cite{Babai:iso}}. A barrier is that many groups have few known characteristic subgroups. Obviously products of isomorphic simple groups have no proper nontrivial characteristic subgroups. Taunt and Glasby-P{\'a}lfy-Schneider characterized groups with a unique proper nontrivial characteristic subgroup \citelist{\cite{Taunt}\cite{GPS}}. Yet those situations seem rare when compared to the complexity of general finite groups. Evidence suggests that $p$-groups have many characteristic subgroups beyond those typically known. Martin and Helleloid \citelist{\cite{Martin}\cite{HM:auto-generic}} show that for `most' finite $p$-groups $G$, $\Aut(G)$ is also a $p$-group.\footnote{`Most' in those works is the conditional probability after fixing natural properties of $G$ \cite{HM:auto-generic}.} So the action of $\Aut(G)$ on the factors of the exponent-$p$ central series of $G$ stabilizes a maximal flag of each factor. Remarkably, $G$ has a characteristic composition series (the preimages of the flags). This article introduces characteristic refinements of nilpotent series that can be located by solving systems of linear equations. These refinements retain a correspondence with Lie rings graded by commutative monoids. Such monoids capture complicated subgroup containment. Repeating the methods creates characteristic series substantially longer than traditional verbal and marginal subgroups chains. \section{Notation} Here $\mathbb{N}$ is the non-negative integers and $\mathbb{Z}^+=\mathbb{N}-\{0\}$. For a set ${\tt X}$, $2^{\tt X}$ is its power set. Our use of groups and rings follows \citelist{\cite{Khukhro}\cite{Jac:basicII}*{Chapter 4}}. For $x,y\in G$, $[x,y]=x^{-1} x^y=x^{-1}y^{-1}xy$ and for $X,Y\subseteq G$, $[X,Y]=\langle [x,y] :x\in X, y\in Y\rangle$. For subgroups $A_i$, $[A_1]=A_1$ and $[A_1,\dots, A_{m+1}]=[[A_1,\dots,A_m], A_{m+1}]$. A left $\mathbb{Z}$-module $V$ is a right $\End(V)$-module and a left $\End(V)^{\rm op}$-module (where $\End(V)$ is the endomorphism ring of $V$ and $\End(V)^{\rm op}$ its opposite ring). Put $\mathbb{Z}_p=\mathbb{Z}/p\mathbb{Z}$ and $\mathfrak{gl}(V)=\End(V)$ with product $[X,Y]=XY-YX$. \section{Filters} Fix a group $G$ and a commutative monoid $M$. A {\em filter} on $G$ is a function $\phi:M\to 2^G$ where for every $s\in M$, $\phi_s$ is a subgroup of $G$, $G=\phi_0$, and \begin{align}\label{eq:def-filter} (\forall & s,t\in M) & [\phi_s,\phi_t] & \leq \phi_{s+t}\leq \phi_s\cap \phi_t. \end{align} The assumption $G=\phi_0$ is not necessary but convenient since all observations about filters occur between the groups $\cap_{s\in M} \phi_s$ and $\phi_0$. Note that $\phi_s\normaleq \phi_0$. Associated to a filter $\phi:M\to 2^G$ are the following normal subgroups: for $s\in M$, \begin{align*} \phi_{s}^+ & = \prod_{t\in M-\{0\}} \phi_{s+t}. \end{align*} For all $s\in M$, $\phi_s^+\leq \phi_s$ and if $M=\langle {\tt X}\rangle$ then $\phi_s^+=\langle \phi_{s+x}: x\in {\tt X}-\{0\}\rangle$. 
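For instance, if $M=\mathbb{N}=\langle 1\rangle$, then for every $s\in M$ we simply have $\phi_s^+=\phi_{s+1}$, because $\phi_{s+t}\leq \phi_{s+1}$ for all $t\geq 1$.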
Notice \begin{align*} (\forall & s,t\in M) & [\phi_s^+,\phi_t] & = \prod_{u\in M-\{0\}}[\phi_{s+u},\phi_{t}] \leq \prod_{u\in M-\{0\}} \phi_{s+u+t} = \phi_{s+t}^+. \end{align*} Likewise, $[\phi_s,\phi_t^+]\leq \phi_{s+t}^+$. Now, if $s\in M-\{0\}$ then $[\phi_s,\phi_s]\leq \phi_{s+s}\leq \phi_s^+$ and so $L_s=\phi_s/\phi_{s}^+$ is abelian. As $[\phi_0^+,\phi_s]\leq \phi_s^+$, $L_s$ is a right $\mathbb{Z}[\phi_0/\phi_0^+]$-module where $\phi_0/\phi_0^+$ acts by conjugation. Associate to $\phi$ the abelian group: \begin{align}\label{eq:Lie-filter} L(\phi) & = \bigoplus_{s\in M} L_s, & L_0=0. \end{align} Also define an $M$-graded product on the homogeneous components by \begin{align*} (\forall & x\in \phi_s,\forall y\in \phi_t) & [x\phi_s^+,y\phi_t^+]_{st} & = [x,y]\phi_{s+t}^+ = x^{-1}x^y\phi_{s+t}. \end{align*} \begin{thm}\label{thm:Lie} $L(\phi)$ is an $M$-graded Lie ring and a $\mathbb{Z}[\phi_0/\phi_0^+]$-module. \end{thm} \begin{proof} Compare \cite{Lazard}*{Chapter I}. \end{proof} (\thmref{thm:Lie} holds letting $L_0$ be a Lie ring of derivations on $\bigoplus_{s\in M-\{0\}} L_s$.) Filters with $M\cong \mathbb{N}$ are essentially the filters described by Lazard \cite{Lazard}*{p. 106} but with explicit operators. For example, the lower central series $\gamma$ of a group $N$ is \begin{align} (\forall & i\in \mathbb{Z}^+) & \gamma_i & = \overbrace{[N,\dots,N]}^i. \end{align} To extend $\gamma$ to a filter on $\mathbb{N}$ we have several options, e.g. let $\gamma_0=N$. A more informative choice is to let $\gamma_0=\Hol(N)=\Aut(N)\ltimes N$ be the {\em holomorph} of $N$. This captures the property that for $i>0$, $\gamma_i$ is characteristic in $N$. If $G$ is nilpotent of class $c$ and $|G|\neq 2$ then $\gamma$ factors through the injective filter on $\{0,1,\dots, c=c+1\}$. More generally, given a group $G$ and normal subgroup $N$, the map $\gamma:\mathbb{N}\to 2^G$ with $\gamma_0=G$ and $\gamma_i=\gamma_i(N)$ for $i>0$ is a filter where the subgroups $\gamma_1\geq \gamma_2\geq\cdots$ are a nilpotent series of $G$-invariant subgroups of $N$. This same treatment applies to Higman's exponent-$p$ central series $\eta$, and the Jennings series $\kappa$, i.e.: for $N\normaleq G$, set $G=\eta_0=\kappa_0$, $N=\eta_1=\kappa_1$ and recursively define for each $i>0$, \begin{align} \eta_{i+1}(N) & = [N,\eta_i(N)]\eta_i^p(N)\qquad\& & \kappa_i(N) = [N,\kappa_{i-1}(N)]\kappa_{\lceil i/p\rceil}(N)^p. \end{align} In these cases it makes sense to use $\mathbb{Z}_p\otimes L(\eta)$ and $\mathbb{Z}_p\otimes L(\kappa)$ to obtain graded Lie $\mathbb{Z}_p$-algebras. Indeed, $\mathbb{Z}_p\otimes L(\kappa)$ is $p$-restricted. See \citelist{\cite{Khukhro}*{Chapter 3}\cite{Shalev:p-groups}} for surveys of filters over $\mathbb{N}$, their properties, and their uses. \subsection{Filters over ordered monoids}\label{sec:order} In a commutative monoid $M$ there is a natural reflexive and transitive relation $\prec$ (a {\em pre-order}) defined as $s\prec u$ if there is a $t$ where $s+t=u$. Notice that filters $\phi:M\to 2^G$ are order-reversing maps from $\langle M,\prec\rangle$ to $\langle 2^G, \subseteq\rangle$. Hence, filters translate some of the often complicated subgroup inclusions in a group into the language of commutative monoids. Notice if $s\prec t$ and $t\prec s$ then $\phi_{s}=\phi_t$. So we can improve our understanding when $M$ is an ordered monoid, that is, there is a partial order $\leq$ on $M$ such that whenever $s\leq t$ and $u\leq v$ then also $s+u\leq t+v$.
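(For example, $\mathbb{N}^d$ is an ordered monoid under the coordinatewise partial order, and also under the lexicographic total order used below.)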
Say a filter $\phi:M\to 2^G$ is {\em ordered} if $M$ is ordered and $s\leq t$ implies $\phi_s \geq \phi_t$. We will sometimes call filters {\em pre-ordered filters} for added clarity. Of particular interest to us are ordered filters over totally ordered commutative monoids $M$. In such a filter for every $s,t\in M$, either $\phi_s\geq \phi_t$ or $\phi_s\leq \phi_t$, i.e. $\{\phi_s : s\in M\}$ is a series. Indeed, for every $s\in M$, there is an $s^+\in M$ with $\phi_s^+=\phi_{s^+}$ ($s^+$ may not be unique). If $M$ is well-ordered, then we may take $s^+=s+e$, $e=\min M-\{0\}$. We call an ordered filter on a well-ordered set a {\em $\nu$ series}. \subsection{Generating filters}\label{sec:gen-filter} It will be convenient to specify filters by describing a few members which ``generate'' the remaining terms. At issue is what generation should mean. The monoid is an obvious resource. Given generators ${\tt X}$ of a commutative monoid $M$ it would seem that a function $\pi:{\tt X}\to 2^G$ would be enough information to specify a corresponding filter $\bar{\pi}:M\to 2^G$. The complication is that \eqref{eq:def-filter} asks for $\bar{\pi}$ to satisfy both a lower and upper bound. This is possible with some assumptions on $(M,{\tt X}, \pi)$ but we are not aware of a general meaning of generating a filter from an arbitrary function $\pi:{\tt X}\to 2^G$. Fix a monoid $M$ and a set ${\tt X}$ that generates $M$. The Cayley graph $\mathcal{G}=\mathcal{G}(M;{\tt X})$ has vertex set $M$ and directed labeled edge set $\left\{s\overset{x}{\longrightarrow} s+x : s\in M, x\in {\tt X}\right\}$. A finite directed path in the Cayley graph from $0$ to a vertex $s$ is specified by a sequence $s_1,\dots,s_d$ in ${\tt X}$ where $s=s_1+\cdots+s_d$. We let $\mathcal{G}_0^s$ denote the set of all finite directed paths from $0$ to $s$. Note that we regard an element $u$ of $\mathcal{G}_0^s$ to be the sequence of edge labels, i.e. $u=(s_1,\dots,s_d)$ where $s=s_1+\cdots +s_d$. For notation we write $[\pi_u]=[\pi_{s_1},\dots,\pi_{s_d}]$. For a function $\pi:{\tt X}\to 2^G$, define $\bar{\pi}:M\to 2^G$ as follows: for each $s\in S$, \begin{align}\label{eq:bar} \bar{\pi}_s & = \prod_{u\in \mathcal{G}_0^s} [\pi_{u}]. \end{align} Notice by \eqref{eq:def-filter}, if ${\tt X}=M$ and $\pi$ is a filter then $\bar{\pi}=\pi$. By applying the $3$-subgroups lemma we show $\bar{\pi}$ already satisfies the first inequality in \eqref{eq:def-filter}. \begin{lemma}\label{lem:lower} If $\pi:{\tt X}\to 2^G$ maps into the normal subgroups of $G$ then for every $s,t\in M$, $[\bar{\pi}_s, \bar{\pi}_t] \leq \bar{\pi}_{s+t}$. \end{lemma} \begin{proof} We begin by proving that for all $i,j\in\mathbb{Z}^+$ and all $s_1,\dots,s_{i+j}\in M$, \begin{align}\label{eq:collect} \left[[\bar{\pi}_{s_1}, \dots,\bar{\pi}_{s_i}], [\bar{\pi}_{s_{i+1}},\dots,\bar{\pi}_{s_{i+j}}]\right] & \leq \prod_{\sigma \in S_{i+j}} [\bar{\pi}_{s_{1\sigma}},\dots, \bar{\pi}_{s_{(i+j)\sigma}}]. \end{align} We induct on $(i,j)$ where $\mathbb{Z}^{+}\times\mathbb{Z}^+$ is well-ordered by $(i,j)\leq (i',j')$ if $j<j'$, or $j=j'$ and $i\leq i'$. For every $i\geq 1$, if $j=1$ then: \begin{align*} \left[[\bar{\pi}_{s_1},\dots,\bar{\pi}_{s_i}], [\bar{\pi}_{s_{i+1}}]\right] & = [\bar{\pi}_{s_1},\dots,\bar{\pi}_{s_i},\bar{\pi}_{s_{i+1}}] & \leq \prod_{\sigma \in S_{i+1}} [\bar{\pi}_{s_{1\sigma}},\dots, \bar{\pi}_{s_{(i+1)\sigma}}]. \end{align*} Now suppose $j>1$. 
Let $X=[\bar{\pi}_{s_1}, \dots,\bar{\pi}_{s_i}]$, $Y=[\bar{\pi}_{s_{i+1}},\dots,\bar{\pi}_{s_{i+j-1}}]$, and $Z=\bar{\pi}_{s_{i+j}}$. It follows that: \begin{align*} \left[[\bar{\pi}_{s_1}, \dots,\bar{\pi}_{s_i}], [\bar{\pi}_{s_{i+1}},\dots,\bar{\pi}_{s_{i+j}}]\right] & = [X,[Y,Z]]= [Y,Z,X]\leq [Z,X,Y][X,Y,Z]. \end{align*} As $(i+1,j-1)<(i,j)$ we may induct to find \begin{align*} [Z,X,Y] & = [[\bar{\pi}_{s_{i+j}},\bar{\pi}_{s_1},\dots,\bar{\pi}_{s_i}], [\bar{\pi}_{s_{i+1}},\dots,\bar{\pi}_{s_{i+j-1}}]] \leq \prod_{\sigma \in S_{i+j}} [\bar{\pi}_{s_{1\sigma}},\dots, \bar{\pi}_{s_{(i+j)\sigma}}]. \end{align*} Since $(i,j-1)<(i,j)$ we appeal once more to induction to show \begin{align*} [X,Y,Z] & = \left[\left[[\bar{\pi}_{s_1},\dots,\bar{\pi}_{s_i}], [\bar{\pi}_{s_{i+1}},\dots,\bar{\pi}_{s_{i+j-1}}]\right],\bar{\pi}_{s_{i+j}}\right]\\ & \leq \left[\prod_{\sigma \in S_{i+j-1}} [\bar{\pi}_{s_{1\sigma}},\dots, \bar{\pi}_{s_{(i+j-1)\sigma}}], \bar{\pi}_{i+j}\right] \leq \prod_{\sigma \in S_{i+j}} [\bar{\pi}_{s_{1\sigma}},\dots, \bar{\pi}_{s_{(i+j)\sigma}}]. \end{align*} Combining these three inclusions we observe that the formula holds for $(i,j)$ and so by induction \eqref{eq:collect} holds. Fix $s,t\in M$. For path $(s_1,\dots,s_i)$ from $0$ to $s$ and $(s_{i+1},\dots,s_{i+j})$ from $0$ to $t$, it follows that $s+t=s_1+\cdots+s_{i+j}$ and so $(s_1,\dots,s_{i+j})$ is a path from $0$ to $s+t$. So now it follows from \eqref{eq:collect} that: \begin{align*} [\bar{\pi}_s,\bar{\pi}_t] & = \prod_{ u\in \mathcal{G}_0^s v\in \mathcal{G}_0^t} [ [\pi_u],[\pi_v]] \leq \prod_{w\in \mathcal{G}_0^{s+t}} [\pi_{w}]=\bar{\pi}_{s+t}. \end{align*} \end{proof} The formula \eqref{eq:bar} is not sufficient to generate a filter as we must also guarantee that for every $s\in M$, and $(s_1,\dots,s_d)\in \mathcal{G}_0^s$, $\bar{\pi}_s\leq \bar{\pi}_{s_1}\cap\cdots \cap \bar{\pi}_{s_d}$. For that we have needed further assumptions on $\pi$ and on $M$. We will rely on commutative (pre-)ordered monoids $\langle M,+,\prec\rangle$ with minimal element $0$ and satisfying: \begin{equation}\label{def:star} \textnormal{If $s\prec u$ and $u=\sum_{i=1}^d u_i$, then there exists $s_i\prec u_i$ with $s=\sum_{i=1}^d s_i$.} \end{equation} To show that a commutative monoid satisfies \eqref{def:star} it suffices to prove it for $d=2$. \begin{ex} \begin{enumerate}[(i)] \item Every cyclic monoid satisfies \eqref{def:star}. \item For a family of commutative monoids satisfying \eqref{def:star}, the direct product also satisfies \eqref{def:star}. In particular direct products of cyclic monoids satisfy \eqref{def:star}. \item $\mathbb{N}^d$ with the lexicographic well-ordering satisfies \eqref{def:star}. \end{enumerate} \end{ex} \begin{proof} For (i), note first that every cyclic monoid is isomorphic $C_{k,m}=\{c^i: 0\leq i<k+m\}$ for $k\in \mathbb{N}\cup\{\infty\}$ and $m\in \mathbb{Z}^+$, and where the product is $c^ic^j=c^{i+j}$ if $i+j<k$; else, $c^i c^j=c^{k+r}$ where $i+j-k=qm+r$, $0\leq r<m$. (Note $C_{\infty,m}\cong \mathbb{N}$.) Suppose that $s=c^i\prec u=c^j$ and $u=u_1 u_2$ with $u_i=c^{e_i}$, for $0\leq i,j,e_1,e_2<k+m$. If $s\prec u_1$ (i.e.: $i\leq e_1$), let $s_1=s$ and $s_2=1$. Otherwise, $u_1\prec s$ so $i\geq e_1$ and we let $s_1=u_1$, $s_2=c^{i-e_1}$ thus $s_1 s_2=c^{i}=s$, $s_1\prec u_1$, $s_2\prec u_2$. Now we prove (ii). Let $\mathcal{F}$ be a family of commutative monoids satisfying \eqref{def:star}. Let $s=(s^F: F\in \mathcal{F}),u=(u^F: F\in\mathcal{F})\in \prod\mathcal{F}$ with $s\prec u$. 
It follows that for every $F\in\mathcal{F}$, $s^F\prec u^F$. So if $u=u_1+u_2$ then $u^F=u_1^F+u_2^F$ so there are $s_1^F, s_2^F\in F$ such that $s_i^F\prec u_i^F$ and $s=(s_1^F+s_2^F: F\in\mathcal{F})$. Finally we prove (iii). Write $s=\sum_{i=1}^d s_i e_i$, $t=\sum_{i=1}^d t_i e_i$ and $u_j=\sum_{i=1}^d u_{ij} e_i$ with $s_i,t_i,u_{ij}\in \mathbb{N}$. The $i$-th row of the matrix $U=[u_{ij}]$ sums to $t_i$. Assume $s\neq t$ so $s<t$ and there is a $c$ where $s_1=t_1,\dots, s_c=t_c$ and $s_{c+1}<t_{c+1}$. For $1\leq i\leq c$, set $v_{ij}=u_{ij}$. Next, since $s_{c+1}<t_{c+1}$, there are $v_{(c+1)j}\leq u_{(c+1)j}$ such that $s_{c+1}=\sum_j v_{(c+1)j}$ and at least one $j_0$ exists such that $v_{(c+1)j_0}<u_{(c+1)j_0}$. Finally, for each $c+1<i\leq d$, if $s_i\leq t_i$ then choose $s_i=\sum_{j} v_{ij}$ with $v_{ij}\leq u_{ij}$ for each $i$; otherwise, $s_i>t_i$. So, set $v_{ij}=u_{ij}$ for all $j\neq j_0$ and $v_{ij_0}=u_{ij_0}+(s_i-t_i)$. We claim the matrix $V=[v_{ij}]$ has the following properties: (i) for each $i$, $s_i=\sum_j v_{ij}$, and (ii) for each $j$, $v_j=\sum_i v_{ij} e_i\leq u_j$ in the lexicographic order. For (i), the first $c+1$ rows are elected in this manner as are any subsequent rows where $s_i\leq t_i$. If in a row $i$, $s_i>t_i$ then $\sum_{j} v_{ij}=u_{ij_0}+(s_i-t_i)+\sum_{j\neq j_0} u_{ij}=s_i-t_i+t_i=s_i$. For (ii), the first $c$ coefficients of $v_j$ and $u_j$ agree so the first place $v_j$ can differ from $u_j$ is if $v_{(c+1)j}<u_{(c+1)j}$. If this occurs then $v_j\leq u_j$. Otherwise, for all $i$, $v_{ij}\leq u_{ij}$ and so $v_j\leq u_j$. \end{proof} \begin{thm}\label{thm:generate} Fix a commutative (pre-)ordered monoid $\langle M, +, \prec \rangle$ with minimal element $0$ and satisfying \eqref{def:star}. Fix a set ${\tt X}$ of generators for $M$ such that for every $x\in {\tt X}$ and $y\in M$ if $y\prec x$ then $y\in {\tt X}$. If $\pi:{\tt X}\to 2^G$ maps into the normal subgroups of $G$ and for every $s,u\in {\tt X}$, if $s\prec u$ then $\pi_{u}\leq \pi_s$, then $\bar{\pi}$ is a (pre-)ordered filter. \end{thm} \begin{proof} Fix $s,u\in M$ with $s\prec u$ for the (pre-)-order on $M$. By \eqref{def:star} for every $w=(u_1,\dots,u_d)\in \mathcal{G}_0^u$, as $s\prec u$, there exists a decomposition $s=\sum_{i=1}^d s_i$ with $s_i\prec u_i$. By our assumption on ${\tt X}$, $x=(s_1,\dots,s_d)\in \mathcal{G}_0^s$. By our assumptions on $\pi$, $\pi_{u_i}\leq \pi_{s_i}$ and $[\pi_{w}]\leq [\pi_{x}]$. Thus: \begin{align*} \bar{\pi}_{u} & = \prod_{w\in \mathcal{G}_0^{s+t}} [\pi_{w}] \leq \prod_{x\in\mathcal{G}_0^s} [\pi_{x}] = \bar{\pi}_s. \end{align*} So $\pi$ is order-reversing. Now for all $s,t\in M$, $s\prec s$ and $0\prec t$ so $s\prec s+t$. Thus, $\pi_{s+t}\leq \pi_s$. Likewise, $\pi_{s+t}\leq \pi_t$. Together with \lemref{lem:lower} we see $\bar{\pi}$ is a (pre-)ordered filter. \end{proof} \section{Adjoint, centroid, and derivation filter refinements} Here we introduce new filters by considering refinements of known filters $\phi:M\to 2^G$. Fix $s,t\in M$ and assume $M$ has property \eqref{def:star}. The graded product of the Lie algebra $L=L(\phi)$ (see \eqref{eq:Lie-filter}) restricts to a bimap (biadditive map) $[,]=[,]_{st}:L_{s}\times L_{t}\to L_{s+t}$. We introduce three nonassociative rings $\Adj([,])$, $\Der([,])$, and $\Cent([,])$ that are new sources of $G$-invariant subgroups and capture various properties of commutation in $G$. 
The rings are: \begin{align*} \Adj([,]) & = \{ (X,Y)\in \End(L_s)\times \End(L_t)^{{\rm op}}:\\ & \qquad \forall u\in L_s, \forall v\in L_t,\;[uX,v]= [u,Yv]\},\\ \Cent([,] ) & = \{ (X,Y;Z) \in \End(L_s)\times \End(L_t)\times \End(L_{s+t}) :\\ & \qquad \forall u\in L_s,\forall v\in L_t,\; [uX,v]+[u,vY] = [u,v]Z\},\quad \&\\ \Der([,] ) & = \{ (X,Y;Z) \in \gl(L_s)\times \gl(L_t)\times \gl(L_{s+t}) :\\ & \qquad \forall u\in L_s,\forall v\in L_t\; [uX,v]+[u,vY] = [u,v]Z\}. \end{align*} The {\em adjoint} ring $\Adj([,])$ is unital and associative. The {\em centroid} ring $\Cent([,])$ is unital, associative, and essentially commutative.\footnote{ Technically, $[,]_{st}$ factors through $L_s/L_t^{\bot}\times L_t/L_s^{\bot}\to [L_s,L_t]$, where $L_t^{\perp}=\{u\in L_s: [u,L_t]=0\}$ etc.. The centroid on that induced bimap is commutative \cite{Wilson:direct-I}*{Lemma 6.8(iii)}.} The {\em derivation} ring $\Der([,])$ is a Lie ring. The motivation to consider these rings is discussed in Section~\ref{sec:algebras}. Now let $J$ be the Jacobson radical of $\Adj([,])$. Set $J^0=\Adj([,])$ and $J^{i+1}=J^i J$ for $i\geq 0$. For each $i\in \mathbb{N}$, define $H_i$ so that $\phi_{s}^+\leq H_i\leq \phi_s$ and \begin{align}\label{def:alpha} H_i/\phi_{s}^+ & = L_s J^i. \end{align} Next, for $(u,i)\in {\tt X}=M\times \{0\}\cup \{u: u\prec s\}\times \mathbb{N}$, define \begin{align*} \alpha_u^i & = \left\{\begin{array}{cc} \phi_u & i=0 \textnormal{ or } u\neq s,\\ H_i & u=s. \end{array}\right. \end{align*} \begin{lemma}\label{lem:hypo} $(M\times\mathbb{N}, {\tt X}, \alpha)$ satisfies the hypotheses of \thmref{thm:generate}. \end{lemma} \begin{proof} As $M$ and $\mathbb{N}$ satisfy \eqref{def:star}, so does $M\times \mathbb{N}$. Evidently ${\tt X}$ generates $M \times \mathbb{N}$ as a monoid. For $(x,i)\in {\tt X}$ and $(y,j)\in M$, if $(y,j)\prec (x,i)$ then $y\prec x$ and $j\leq i$. In particular, either $i=j=0$ so that $(y,j)\in {\tt X}$ or $i>0$, $y\prec x\prec s$, and so $(y,j)\in {\tt X}$. Also $\alpha:{\tt X}\to 2^G$ maps into the normal subgroups of $G$. Finally, if $(y,j)\prec (x,i)$ in ${\tt X}$ then $y\prec x$ and $j\leq i$. If $x=y=s$ then $\alpha_x^i=H_i\leq H_j=\alpha_y^j$. Suppose $x\neq s$. If $y\neq s$ then $\alpha_y^j=\phi_y\geq \phi_x=\alpha_x^i$. If $y=s$ then $\alpha_y^j=H_j\geq \phi_s^+\geq \phi_x=\alpha_x^i$. \end{proof} In light of \thmref{thm:generate} and \lemref{lem:hypo}, we may now speak of the filter $\bar{\alpha}:M\times \mathbb{N}\to 2^G$ generated by $\alpha$. We call this an {\em adjoint} refinement. Our construction of adjoint refinements of filters depends on the choice of filter and the selected homogeneous components of the Lie ring. A natural choice is to begin with a $\nu$ series $\nu:\mathbb{N}^d\to 2^G$. Since $L_0$ represents a fixed set of operators it is not informative to refine $L_0$ so instead we refine the first $L_s\neq 0$ where $L_s\neq L_0$. Of course $\nu$ series are well-ordered and so a small modification is necessary. Assume $M=\mathbb{N}^d$ with the right-to-left lexicographic order (i.e. $e_{d}=(\dots,0,1)>e_{d-1}=(\dots,1,0)>\cdots$). Define $\alpha$ as above only now restricted to the generating set ${\tt X}=\{u\in \mathbb{N}^{d+1} : u< (s+e_{d},0)\}$. (Note in $M$, $s^+=s+e_{d}$ which is why ${\tt X}$ is the appropriate choice.) The resulting well-ordered filter $\bar{\alpha}$ is again a $\nu$-series, but now on $\mathbb{N}^{d+1}$. We call this a {\em lex-least} adjoint refinement of $\nu$. 
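Said differently, between $\phi_s^+$ and $\phi_s$ the refinement inserts the chain of $G$-invariant subgroups \begin{align*} \phi_s = H_0 & \geq H_1\geq H_2\geq \cdots \geq \phi_s^+, & H_i/\phi_s^+ & = L_s J^i, \end{align*} and the remaining terms of $\bar{\alpha}$ are generated from these and the original $\phi_u$ by \eqref{eq:bar}. \exref{ex:Hei} below computes this chain explicitly for Heisenberg groups over a commutative unital ring.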
In general we can begin with the lower central series $\gamma$ on $\mathbb{N}$, or other standard $\mathbb{N}$-filters. By recursive lex-least adjoint refinements we construct longer and longer characteristic series indefinitely or until the series stabilizes. In that case we speak of the {\em stable lex-least} refinement. Note that the analogous constructions based on the radicals of $\Cent([,])$ and $\Der([,])$ (equivalently the radical of the associative enveloping algebra of $\Der([,])$) determine functions $\sigma:{\tt X}\to 2^G$ and $\delta:{\tt X}\to 2^G$ and corresponding filters and $\nu$ series. \subsection{Concrete examples}\label{sec:concrete} We introduce some families of examples of lex-least refinements. Usually, spotting proper adjoint, centroid, or derivation refinements requires that we set up and solve specific systems of linear equations and discern the structure of the appropriate rings. There are algorithms for that task \citelist{\cite{Wilson:find-cent}*{Sections 4 \& 5}\cite{BW:isom}\cite{deGraaf}\cite{GMT}}. That is not always necessary, as the examples below show. \begin{ex} The $\gamma$ series of a finitely generated nonabelian free group has no proper lex-least adjoint, derivation, or centroid refinement. \end{ex} \begin{proof} In a free group of rank $r$, $\gamma_1/\gamma_2\cong \mathbb{Z}^r$, $\gamma_2/\gamma_3\cong \mathbb{Z}^{\binom{r}{2}}$ and $[,]:\gamma_1/\gamma_2\times \gamma_1/\gamma_2\to \gamma_2/\gamma_3$ is the exterior square $\mathbb{Z}^r\times \mathbb{Z}^r\to \mathbb{Z}^r\wedge \mathbb{Z}^r$. The adjoint ring of $[,]$ is isomorphic to $\mathbb{Z}$ if $r\neq 2$ and $M_2(\mathbb{Z})$ if $r=2$; cf. \cite{Wilson:unique-cent}*{Sections 7.1 \& 7.6}. In these rings the Jacobson radical is trivial. Likewise, the centroid (which embeds in the adjoint ring) is $\mathbb{Z}$. Finally, the derivation algebra of the exterior square is $\mathfrak{gl}_r(\mathbb{Z})$ so its enveloping algebra has a trivial radical. \end{proof} \begin{ex}\label{ex:longerseries} Every finite $p$-group with $\eta_1/\eta_2$ of odd dimension and $\eta_2/\eta_3$ of dimension $2$ has a proper lex-least adjoint refinement of the $\eta$ series. \end{ex} \exref{ex:longerseries} explains the proper refinement later in \eqref{eq:4by4} and it generalizes in several ways. We assert the following fact in the proof of \exref{ex:longerseries}. \begin{lemma}\label{lem:adj-ex} Let $Z\in M_m(\mathbb{Z}_p)$ be the matrix with $Z_{i(m-i+1)}=1$ and $0$'s elsewhere. Define a bimap $\circ:(\mathbb{Z}_p^m\oplus \mathbb{Z}_p^{m+1})\times (\mathbb{Z}_p^m\oplus \mathbb{Z}_p^{m+1})\to \mathbb{Z}_p^2$ by \begin{align*} (u,v)\circ (x,y) & =(uFy^t-vF^t x^t, uGy^t-vG^t x^t) \end{align*} where $F=[Z,0], G=[0,Z]\in M_{m\times (m+1)}(\mathbb{Z}_p)$. It follows that \begin{align*} \Adj(\circ) & = \left\{\left(\begin{bmatrix} aI_m & T \\ 0 & bI_{m+1}\end{bmatrix}, \begin{bmatrix} b I_m & -T\\ 0 & a I_{m+1} \end{bmatrix}\right) : T \textnormal{ a Toeplitz matrix}\right\}. \end{align*} In particular, $J(\Adj(\circ))\cong \mathbb{Z}_p^{2m}>0$. \end{lemma} \begin{proof}[Proof of \exref{ex:longerseries}] We argue along the lines of Bond \cite{Bond}*{pp. 608--611} and Vi{\v{s}}nevecki{\u\i} \cite{Vish}. Let $V=\eta_1/\eta_2\cong \mathbb{Z}_p^{2m+1}$, and $W=\eta_2/\eta_3\cong\mathbb{Z}_p^{2}$. Commutation produces an alternating $\mathbb{Z}_p$-bimap $[,]:\mathbb{Z}_p^{2m+1}\times \mathbb{Z}_p^{2m+1}\to \mathbb{Z}_p^{2}$, equivalently, a pair of alternating forms on an odd-dimensional vector space.
Using a classic result of Kronecker (generalized by others, compare \cite{Scharlau}) we know pairs of alternating forms are perpendicularly decomposable into indecomposable pairs of subspace $V=E_1\oplus \cdots \oplus E_s$. As our vector space has odd dimension, at least one $E_i$ has odd dimension. Furthermore, there is one type (up to equivalence) of odd-dimensional indecomposable pair of alternating forms. Specifically this is the bimap described in \lemref{lem:adj-ex}. Notice $\Adj([,])$ restricted to $E_i$ is $e_i \Adj([,])e_i$, for $e_i$ the projection idempotent of $V$ onto $E_i$ with kernel $\bigoplus_{i\neq j} E_j$. By \lemref{lem:adj-ex}, $e_i \Adj([,]) e_i\cong \Adj(\circ)$ has a nontrivial radical. Thus, $\Adj([,])$ has a nontrivial radical and the lex-least adjoint refinement of $\eta$ is proper.\footnote{Each $E_j$ of even dimension has $e_j\Adj([,])e_j\cong M_2(\mathbb{Z}_p[x]/(a_j(x)^{c_j}))$ for $a_j(x)$ an irreducible polynomial. So the length of an adjoint refinement is at least $\underset{j}{\max}\{1,c_j\}$.} \end{proof} \begin{ex} Fix a finite nonabelian $p$-group $G$ that is not $2$-generated. If $H\leq G$ where $[G:H]=p$ and $[H,H]\leq \eta_3(G)$, then $G$ has proper lex-least adjoint and derivation refinement of its $\eta$ series. \end{ex} \begin{proof} Let $V=\eta_1/\eta_2$ and $W=\eta_2/\eta_3$. So $[H/\eta_2,H/\eta_2]\equiv 0\in W$ and $H/\eta_2$ is a hyperplane in $V$ which is totally isotropic with respect to the commutation bimap $[,]:V\times V\to W$. Factoring out the radical of $[,]$, we arrive at the bimap described in \cite{Wilson:unique-cent}*{Lemma 7.13}. That lemma shows $J(\Adj([,]))>0$ when $\dim V>2$. The argument for derivations is analogous. Hence the adjoint and derivation refinements are proper. \end{proof} Our final family of examples is parametrized by commutative unital rings. \begin{ex}\label{ex:Hei} Let $R$ be an associative commutative unital ring with Jacobson radical $J$. Consider the Heisenberg group \begin{equation} H(R) = \begin{bmatrix} 1 & R & R\\ . & 1 & R\\ . & . & 1 \end{bmatrix}. \end{equation} The lex-least adjoint refinement $\alpha:\mathbb{N}^2\to 2^{H(R)}$ of the $\gamma$ series has $\alpha_c^i(H)=1$ for $c>2$ and for $i\in\mathbb{N}$, \begin{align}\label{eq:ex-alpha-1} \alpha_{1}^i(H) & = \begin{bmatrix} 1 & J^i & R\\ . & 1 & J^i\\ . & . & 1 \end{bmatrix}, & \alpha_2^i(H) & = \begin{bmatrix} 1 & . & J^i \\ . & 1 & . \\ . & . & 1 \end{bmatrix}. \end{align} In particular, if $J^c>J^{c+1}=0$, then the length of $\alpha$ is $2c+2$, whereas the $\gamma$ series has length $2$.\footnote{The length of a $\nu$ series counts the number of non-redundant terms $L_s\neq 0$, for $s\neq 0$.} \end{ex} \begin{proof} To see this is correct we notice that commutation on the first allowable graded component of $L(\gamma)$ amounts to $R^{\oplus 2}\times R^{\oplus 2}\to R$ where: \begin{align*} (\forall & (a,b),(c,d)\in R^{\oplus 2}) & [(a,b),(c,d)] & = ad-bc \end{align*} Hence, $\Adj([,])$ is the ring \begin{align*} \left\{\left(\begin{bmatrix} a & b\\ c & d\end{bmatrix}, \begin{bmatrix} d & -b\\ -c & a \end{bmatrix}\right): a,b,c,d\in R\right\} \cong M_2(R). \end{align*} For $i\in\mathbb{N}$, $(R^{\oplus 2}) J^i(\Adj([,]))=(R^{\oplus 2})M_2(J^i(R))=(J^i)^{\oplus 2}$. This gives us the $\alpha_1^i(H)$ described in \eqref{eq:ex-alpha-1} and the rest follows from \thmref{thm:generate}: \begin{align*} \alpha_2^i & = \prod_{i=i_1+i_2} [\alpha_1^{i_1},\alpha_1^{i_2}] = \left\langle \begin{bmatrix} 1 & . & J^{i_1} J^{i_2} \\ . & 1 & . \\ . & . 
& 1 \end{bmatrix} : i=i_1+i_2\right\rangle =\begin{bmatrix} 1 & . & J^i \\ . & 1 & . \\ . & . & 1 \end{bmatrix}. \end{align*} \end{proof} It might be speculated that the lex-least adjoint refinement of upper unitriangular $(d\times d)$-matrix groups proceeds along similar lines to \exref{ex:Hei} and so it will be uninteresting for matrices over fields. To the contrary, the lex-least adjoint refinement of the $\gamma$ series of the upper unitriangular $(4\times 4)$-matrices over a field is the following proper refinement (for proof see \exref{ex:longerseries}): \begin{align}\label{eq:4by4} \begin{bmatrix} 1 & * & * & *\\ . & 1 & * & * \\ . & . & 1 & * \\ . & . & . & 1 \end{bmatrix} & > \begin{bmatrix} 1 & * & * & *\\ . & 1 & . & * \\ . & . & 1 & * \\ . & . & . & 1 \end{bmatrix} > \begin{bmatrix} 1 & . & * & *\\ . & 1 & . & * \\ . & . & 1 & . \\ . & . & . & 1 \end{bmatrix} > \begin{bmatrix} 1 & . & . & *\\ . & 1 & . & . \\ . & . & 1 & . \\ . & . & . & 1 \end{bmatrix} > 1. \end{align} \begin{remark}\label{rem:unipotent} J. Maglione of Colorado State University has computed the stable lex-least adjoint refinement of upper unitriangular $(d\times d)$-matrices. The result is that the $\gamma$ series (which has length $d-1$) is refined to characteristic $\nu$ series of length $d^2/4+\Theta(d)$ with each factor in the series of dimension at most $2$ over the field. Further work and a Magma \cite{Magma} implementation is underway. \end{remark} \subsection{Positive logarithmic proportions of proper refinements} We now give some attention to the proportion of finite $p$-groups that exhibit a proper lex-least refinement. The number $f(p^n)$ of pairwise nonisomorphic groups of size $p^n$ is known for small values of $n$, but asymptotically we only know logarithmic estimates, e.g.: $\frac{2}{27}n^3+Cn^2\leq \log_p f(p^n)\leq \frac{2}{27}n^3+C'n^{2.5}$ for constants $C$ and $C'$ \cite{BNV:enum}*{Chapter 1}. Similarly granularity occurs when counting pairwise nonisomorphic rings \cite{Neretin}. This means that proportions of finite groups are not generally quantifiable except on a logarithmic scale. This is the context of our main result in this section. \begin{thm}\label{thm:count} For each prime $p$ and $n\gg 0$, there are at least $p^{2n^3/729+\Omega(n^2)}$ pairwise nonisomorphic groups of order $p^n$ that have $\eta$ series of length $2$ and a lex-least adjoint refinement of length at least $6$. \end{thm} In other words, \thmref{thm:count} says that as $n\to \infty$, a positive logarithmic proportion of finite groups of size $p^n$ have proper adjoint refinements. Specifically: $\frac{2n^3/729+C''n^2}{2n^3/27+C'n^{2.5}}\to \frac{1}{27}$ in our result but the quantity could be larger. Of course this count is logarithmic and so the actual proportion may well tend to zero. \begin{lemma}\label{lem:ring-count} There are at least $p^{2n^3/27+\Omega(n^2)}$ pairwise nonisomorphic local commutative associative unital rings $R$ of order $p^n$ with $J(R)>J^2(R)>0$. \end{lemma} \begin{proof} Let $\circ:V\times V\to W$ be a nontrivial symmetric bimap of elementary abelian $p$-groups $V$ and $W$. Set $R=R(\circ)=\mathbb{Z}_p\oplus V\oplus W$ as an additive group and equip $R$ with the following distributive product: \begin{align*} (s,v,w)\cdot (s',v',w') & = (ss', sv'+vs', sw'+v\circ v'+ws'). \end{align*} This product is associative, commutative, and $(1,0,0)$ is the identity.\footnote{Writing $(s,v,w)$ as a formal matrix $\left[\begin{smallmatrix} s & v & w\\ . & s & v \\ . & . 
& s\end{smallmatrix}\right]$, the operations mimic matrix operations.} Furthermore, $J=0\oplus V\oplus W$ is a nilpotent ideal so $J$ is contained in the Jacobson radical $J(R)$ of $R$. Furthermore, $R/J\cong \mathbb{Z}_p$ is semisimple proving $J=J(R)$. Finally, $J^2=0\oplus 0\oplus (V\circ V)>0$ as $\circ$ is nontrivial. To every pair $\circ,\diamond$ of bimaps $V\times V \to W$, with $W=V\circ V=V\diamond V$, an isomorphism $f:R(\circ)\to R(\diamond)$ induces isomorphisms $(f_V:V\to V, f_W:W\to W)$ by identifying $V=J/J^2$ and $W=J^2$. Furthermore, for all $v,v'\in V$, $vf_V\circ v'f_V=(v\circ v')f_W$. Therefore, the number $g(V,W)$ of pairwise nonisomorphic rings of the form $R(\circ)$ has the following lower bound. Let $S^2V=V\otimes V/\langle u\otimes v-v\otimes u: u,v\in V\rangle$, \begin{align*} g(V,W) & \geq \frac{|\hom(S^2V, W)|}{|\GL(V)\times \GL(W)|} \geq p^{\frac{1}{2}(\dim V)^2\dim W -(\dim V)^2-(\dim W)^2}. \end{align*} Let $d=\dim V$ so that $\dim W=n-1-d$. As $n\to \infty$, the maximum of $p^{1/2\cdot d^2(n-1-d)-d^2-(n-1-d)^2}$ occurs for $d$ near $2n/3$ which produces the lower bound of $p^{2n^3/27+\Omega(n^2)}$. (This method of counting is owed to Higman \cite{BNV:enum}*{Chapter 2}.) \end{proof} \begin{proof}[Proof of \thmref{thm:count}] Consider Heisenberg groups. If $H(R)\cong H(S)$ for local commutative rings $R$ and $S$ then $M_2(R)\cong \Adj([,]_R)\cong \Adj([,]_S)\cong M_2(S)$. Fix an isomorphism $f:M_2(R)\to M_2(S)$. Since $R$ is local $E=\left[\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\right]$ is a primitive idempotent of $M_2(R)$ and therefore so is $Ef=F$. Thus $f$ induces an isomorphism $R\cong EM_2(R)E\to FM_2(S)F\cong S$. Consequently, for every isomorphism type of commutative associative local ring $R$ of size $p^{m}$ we obtain a distinct isomorphism type of group $H(R)$ of size $p^{3m}$. By \lemref{lem:ring-count} there are at least $p^{\frac{2}{27}m^3+\Omega(m^2)}$ commutative associative local rings $R$ of size $p^m$, for $m\gg 0$. So for $n\gg 0$ and $n=3m+r$, $0\leq r<3$, there are $p^{2n^3/(27)^2+\Omega(n^2)}$ pairwise nonisomorphic groups $G$ of order $p^{3m}$ that have $\eta$ series of length $2$ but an adjoint refinement of length at least $6$ (\exref{ex:Hei}). For any such group $G$, the length of the $\eta$ series for $G\times \mathbb{Z}_p^r$ is unchanged and the adjoint refinement can only increase in length. Thus, the claim holds for all $n\gg 0$. \end{proof} \section{Implications to group-isomorphism testing}\label{sec:iso} The ``nilpotent quotient'' algorithm is arguably the leading method for group-isomorphism testing and for finding generators of automorphism groups of $p$-groups. The idea is evident in lectures of G. Higman \cite{Higman:chic}*{pp. 10--12}, but was rediscovered in greater generality by M. F. Newman \cite{Newman} and refined extensively by E.A. O'Brien \cite{OBrien:first} and others. We summarize the framework to present our own adaptations. A {\em characteristic} function $G\mapsto \tau(G)$ on groups $G$ sends $G$ to a subgroup $\tau(G)$ such that every isomorphism $f:G\to H$ satisfies $\tau(G)f=\tau(H)$. The first step of a nilpotent quotient algorithm is to specify for each $n\in \mathbb{N}$, characteristic functions $\tau_n$ where $\tau_n(G)/\tau_{n+1}(G)$ is elementary abelian. Usually this is the series $\eta$ of $G$ but our intention is to use lex-least refinements of $\eta$. 
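In practice, computing the refinements we have in mind reduces to the linear algebra mentioned in Section~\ref{sec:concrete}: if a bimap $\circ$ on $\mathbb{Z}_p^d$ is recorded by Gram matrices $B_1,\dots,B_e$ and we use the convention $uf\circ v=u\circ vg$, then $\Adj(\circ)$ is the solution space of the linear system $fB_k=B_kg^{t}$, $1\leq k\leq e$. The following sketch is illustrative only: it is plain Python rather than the Magma implementation mentioned in \remref{rem:unipotent}, the function names are ours, and it returns only $\dim\Adj(\circ)$ rather than the ring structure. As a sanity check it recovers $\dim\Adj([,])=4=\dim M_2(\mathbb{Z}_p)$ for the commutation bimap of $H(\mathbb{Z}_p)$ from \exref{ex:Hei}.
\begin{verbatim}
# Illustrative sketch only: with the convention  u f o v = u o (v g),  the
# adjoint ring of a bimap recorded by Gram matrices B_1,...,B_e over GF(p)
# is the nullspace of the linear system  f B_k - B_k g^t = 0  in the
# 2*d^2 entries of (f, g).
import itertools

def nullity_mod_p(rows, ncols, p):
    # ncols minus the rank of `rows`, by Gaussian elimination mod a prime p.
    rows = [[x % p for x in r] for r in rows]
    rank = 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                c = rows[i][col]
                rows[i] = [(a - c * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return ncols - rank

def adj_dimension(grams, d, p):
    # dim over GF(p) of Adj of the bimap on GF(p)^d with Gram matrices `grams`.
    rows = []
    for B in grams:
        for i, j in itertools.product(range(d), repeat=2):
            row = [0] * (2 * d * d)
            for k in range(d):
                row[i * d + k] += B[k][j]          # coefficient of f[i][k] from (f B)_{ij}
                row[d * d + j * d + k] -= B[i][k]  # coefficient of g[j][k] from (B g^t)_{ij}
            rows.append(row)
    return nullity_mod_p(rows, 2 * d * d, p)

# Commutation bimap of H(GF(p)):  [(a,b),(c,d)] = ad - bc.
print(adj_dimension([[[0, 1], [-1, 0]]], 2, 5))    # 4 = dim M_2(GF(5))
\end{verbatim}
Producing the refinement itself of course requires the radical $J(\Adj(\circ))$ and its action on the homogeneous components, not merely the dimension; for that the algorithms cited in Section~\ref{sec:concrete} apply.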
To detect isomorphisms between finite $p$-groups $G$ and $H$ the algorithm recursively builds isomorphisms $f_n:G/\tau_n(G)\to H/\tau_n(H)$, or proves that at some stage there is no isomorphism. The base case is isomorphism testing of elementary abelian groups which is straight-forward. Within the recursion, the isomorphism $f_n$ is used to create {\em covering groups} $C_n$ that satisfy the following. \begin{enumerate}[(i)] \item $G/\tau_n(G)\cong C_n/\tau_n(C_n)\cong H/\tau_n(H)$. \item There are $M,N \normaleq \tau_n(C_n)$ with $C_n/M\cong G/\tau_{n+1}(G)$ and $C_n/N\cong H/\tau_{n+1}(H)$. \item For all isomorphisms $f:G/\tau_{n+1}(G)\to H/\tau_{n+1}(H)$, $\exists g\in \Aut(C_n)$, $Mg=N$ inducing $f$. \end{enumerate} The algorithm's work is to find $g\in \Aut(C_{n})$ where $N=Mg$. Such a $g$ induces an isomorphism $f_{n+1}:G/\tau_{n+1}(G)\to H/\tau_{n+1}(H)$ that allows the algorithm to increase $n$. If this fails, by (iii) we know $G\not\cong H$. (For computing automorphisms the task is instead to find generators of the stabilizer in $\Aut(C_{n})$ of $M$.) There are many heuristics to find $g\in \Aut(C_n)$ with $Mg=N$, but most settings lead to exhaustive searching in sets as large as $p^{O(d_n^2)}$, $d_n=[\tau_{n-1}(G),\tau_n(G)]$. So the running time is at worst $|G|^{O(\log |G|)}$. To improve performance of this exhaustive search it is enough to have a series $\tau$ with small $\Aut(C_n)$-orbits on the subgroups of $\tau_n(C_{n})/\tau_{n+1}(C_n)$. We propose $\nu$ series, such as the adjoint, centroid, or derivation refinements of $\eta$. These series can decrease the size of the $\Aut(C_n)$-orbits in two ways. First the size of sections of longer series are correspondingly smaller. Secondly, the automorphism groups must respect ring-theoretic properties of adjoints, centroids, and derivations. Indeed, in some extreme cases it has been shown that the $\Aut(C_n)$-orbits are smaller than $p^{d_n}$. That lead to isomorphism tests for large families of $p$-groups that run in $O(\log^6 |G|)$ steps (compared to the $|G|^{O(\log |G|)}$ steps without considering adjoints) \cite{LW:iso}*{Theorem 2}. We show how nilpotent quotient algorithms adapt to use the $\nu$ series we have described. Let $F[x_1,\dots,x_d]$ denote a free group on $d$ generators. \begin{thm} Let $G$ and $H$ be finite $p$-groups with $\nu$ a fixed lex-least adjoint, centroid, or derivation refinement of their $\eta$ series. Suppose that for some $(i,s)\in \mathbb{N}^2$, $i>1$, $G/\nu_i^s(G)\cong F[x_1,\dots,x_d]/R_i\cong H/\nu_i^s(H)$. Let $\nu_j^{t}(G)^{-1}$ be the preimage of $\nu_1^t(G)$ in $F=F[x_1,\dots,x_d]$. Set \begin{align*} U_i=[R_i,F]R_i^p \prod_{s+1=t_1+\cdots+t_i} [(\nu_1^{t_1}(G))^{-1},\dots,(\nu_1^{t_i}(G))^{-1}]. \end{align*} It follows that $C_i=F/U_i$ is a covering group for the pair $(G/\nu^{s+1}_i(G),H/\nu^{s+1}_i(H))$. \end{thm} \begin{proof} First, $G/\eta_{i+1}(G)$ is a quotient of $F/[R_i,F]R_i^p$ \cite{OBrien:first}*{Theorem 2.2}. Since $\eta_{i+1}(G)=\nu_{i+1}^0(G)\leq \nu^{s+1}_i(G)\leq \nu_i^s(G)$, it follows that $G/\nu_i^{s+1}(G)$ is a quotient $C_i/M$ with $M\leq R_i$. Likewise $H/\nu_i^{s+1}(H)$ is a quotient $C_i/N$ with $N\leq R_i$. This resolves properties (i) and (ii) of a covering group. Let $g:C_i\to G/\nu_i^{s+1}(G)$ be an epimorphism with kernel $M$ and $h:C_i\to H/\nu_i^{s+1}(H)$ an epimorphism with kernel $N$. Suppose there is an isomorphism $f:G/\nu_{i}^{s+1}(G)\to H/\nu_i^{s+1}(H)$. Thus $f$ induces a function $j\mapsto w_j\in F$ such that $(U_i x_j)g f=(U_i w_j)h$. 
As $F$ is free, we obtain a homomorphism $f':F\to F$ defined by $x_j\mapsto w_j$. Now $f$ is an isomorphism and $G\mapsto \nu_1^t(G)$ is characteristic, so it follows that $U_if'= U_i$ and so $f'$ factors through the homomorphism $\hat{f}:C_i\to C_i$ given by $(U_i x_j)\hat{f}=U_i w_j$. Finally, $\hat{f}$ is an isomorphism sending $M$ to $N$, as $C_i=\langle U_i x_1,\dots,U_i x_d\rangle=\langle U_i w_1,\dots, U_i w_d\rangle$. For an expanded argument of the same nature consider \cite{OBrien:first}*{Theorem 2.5}. \end{proof} \section{Closing remarks} \subsection{Why use adjoints, centroids, and derivations?}\label{sec:algebras} We have resisted discussing the reasons to consider adjoints, centroids, and derivations. We close with a few hints of their importance. The adjoint ring in our generality was introduced in \cite{Wilson:unique-cent} to explain the nature of central products of $p$-groups. Notice $[]=[,]_{st}:L_s\times L_t\to L_{s+t}$ factors through $L_s \otimes_{\Adj([,])} L_t$ uniquely. Thus, ring-theoretic properties of $\Adj([,])$, e.g. radicals, idempotents, and nilpotent elements, have strong implications for the commutation in $G$. At times this is sufficient to distinguish groups up to isomorphism \cite{LW:iso}. A detailed treatment of adjoints in general is found in \cite{Wilson:division}. One useful feature of the derivation algebra is that we can exponentiate nilpotent elements of $\Der([,])$ to produce pseudo-isometries of $[,]$. If a Lazard-Mal'cev type correspondence is available then these also lift to automorphisms of $G$. Unfortunately one still deals with the issues of exponentiation in positive characteristic. The fundamental property of the centroid is that $[,]$ is bilinear with respect to $\Cent([,])$. For example, the centroid of the group $H(R)$ is $R$. Centroids in this form arose to describe direct product decompositions of $p$-groups, cf. \cite{Wilson:direct-I}*{Section 6.4}. The associated filter for the centroid ring informs us of direct factors. \subsection{The role of characteristic abelian subgroups} There is a theme connecting many examples of proper adjoint refinements which is poorly understood. If a finite nilpotent group $G$ has a proper lex-least adjoint refinement of its $\gamma$ series then there exists a proper characteristic subgroup $\gamma_2<H< \gamma_1$ such that $[H,H]\leq \gamma_3$. This is because the adjoint ring $A=\Adj([,]_1:L_1\times L_1\to L_2)$ ($L_i=\gamma_i/\gamma_{i+1}$) has a proper Jacobson radical $J$. Hence for some power $J^i>0$, $J^{2i}=0$. Thus, $[L_1J^i,L_1J^i]=[L_1 J^{2i}, L_1]=[0,L_1]=0$. So take $H=\alpha_1^i$. The examples in Section~\ref{sec:concrete} were discovered from this observation. It is not known what additional properties are needed on $H$ to guarantee that $H$ is part of an adjoint refinement. Ultimately, since computing adjoints is linear algebra, it is more likely in practice that we will discover $H$ by computing $A$ rather than have foreknowledge of $H$ to imply properties of $A$. \subsection{Many more examples} The lex-least refinements of adjoint, centroid, and derivation type we have considered here are only of $\nu$ series $\nu:\mathbb{N}^d\to 2^G$. Also, they depended only on the bimap of the leading nontrivial homogeneous component. That bimap is unchanged by central extensions $N.G$ in which $N$ is contained in the Frattini subgroup. Thus \thmref{thm:count} should imply the existence of many more examples.
We also emphasize again that our refinements can be repeated indefinitely and perhaps in different ways. Since the increase in length can be substantial (consider \remref{rem:unipotent}) we are tempted to consider a notion of ``class'' that counts the longest possible series by some canonical refinement process. \subsection{Why stop at filters?} In our introduction we remarked that characteristic series are desired so that we can constrain the properties of $\Aut(G)$. In our constructions we have computed various nonassociative rings $\Adj([,])$, $\Der([,])$, and $\Cent([,])$ on which $\Aut(G)$ is represented. The action of $\Aut(G)$ on these nonassociative rings is much more informative than the longer series that they induce. However in practice the automorphisms of nonassociative rings are extremely difficult groups to produce whereas the refinements we describe here are reasonable to compute but remain informative. A related work of the author and Brooksbank attempts to constrain automorphism groups further and shows the difficulties \cite{BW:autotopism}. \section*{Acknowledgments} I am grateful to T. Doresey, J. Maglione, and C. R. B. Wright for many helpful remarks and discussions, and to the referee for candid advice. \begin{bibdiv} \begin{biblist} \bib{Babai:iso}{article}{ author={Babai, L.}, author={Codenotti, P.}, author={Grochow, J. A.}, author={Qiao, Y.}, title={Code equivalence and group isomorphism}, conference={ title={Proc. 22nd ACM-SIAM Symposium on Discrete Algorithms}, }, book={ publisher={SIAM}, place={Philadelphia, PA}, }, date={2011}, pages={1395--1408}, review={\MR{2858409 (2012j:94191)}}, } \bib{BNV:enum}{book}{ author={Blackburn, S. R.}, author={Neumann, P. M.}, author={Venkataraman, G.}, title={Enumeration of finite groups}, series={Cambridge Tracts in Mathematics}, volume={173}, publisher={Cambridge University Press}, place={Cambridge}, date={2007}, pages={xii+281}, isbn={978-0-521-88217-0}, review={\MR{2382539 (2009c:20041)}}, } \bib{Bond}{article}{ author={Bond, J.}, title={Lie algebras of genus one and genus two}, journal={Pacific J. Math.}, volume={37}, date={1971}, pages={591--616}, issn={0030-8730}, review={\MR{0308221 (46 \#7336)}}, } \bib{BW:autotopism}{article}{ author={Brooskbank, P. A.}, author={Wilson, J. B.}, title={Groups acting on tensors (submitted)}, note={arXiv:1210.0827}, } \bib{BW:isom}{article}{ author={Brooksbank, P. A.}, author={Wilson, J. B.}, title={Computing isometry groups of Hermitian maps}, journal={Trans. Amer. Math. Soc.}, volume={364}, date={2012}, number={4}, pages={1975--1996}, issn={0002-9947}, review={\MR{2869196}}, } \bib{Magma}{article}{ author={Bosma, W.}, author={Cannon, J.}, author={Playoust, C.}, title={The Magma algebra system. I. The user language}, note={Computational algebra and number theory (London, 1993)}, journal={J. Symbolic Comput.}, volume={24}, date={1997}, number={3-4}, pages={235--265}, issn={0747-7171}, review={\MR{1484478}}, } \bib{CH:iso}{article}{ author={Cannon, J. J.}, author={Holt, D. F.}, title={Automorphism group computation and isomorphism testing in finite groups}, journal={J. Symbolic Comput.}, volume={35}, date={2003}, number={3}, pages={241--267}, issn={0747-7171}, review={\MR{1962794 (2004c:20035)}}, } \bib{deGraaf}{book}{ author={de Graaf, W. 
A.}, title={Lie algebras: theory and algorithms}, series={North-Holland Mathematical Library}, volume={56}, publisher={North-Holland Publishing Co.}, place={Amsterdam}, date={2000}, pages={xii+393}, isbn={0-444-50116-9}, review={\MR{1743970 (2001j:17011)}}, } \bib{ELGO}{article}{ author={Eick, B.}, author={Leedham-Green, C. R.}, author={O'Brien, E. A.}, title={Constructing automorphism groups of $p$-groups}, journal={Comm. Algebra}, volume={30}, date={2002}, number={5}, pages={2271--2295}, issn={0092-7872}, review={\MR{1904637 (2003d:20027)}}, } \bib{Fitting:const}{article}{ author={Fitting, H.}, title={Beitr\"age zur Theorie der Gruppen endlicher Ordnung.}, date={1938}, journal={Jber DMV}, volume={48}, pages={77--141}, } \bib{GMT}{article}{ author={Gianni, P.}, author={Miller, V.}, author={Trager, B.}, title={Decomposition of algebras}, conference={ title={Symbolic and algebraic computation}, address={Rome}, date={1988}, }, book={ series={Lecture Notes in Comput. Sci.}, volume={358}, publisher={Springer}, place={Berlin}, }, date={1989}, pages={300--308}, review={\MR{1034741 (91e:12009)}}, } \bib{GPS}{article}{ author={Glasby, S. P.}, author={P{\'a}lfy, P. P.}, author={Schneider, C.}, title={$p$-groups with a unique proper non-trivial characteristic subgroup}, journal={J. Algebra}, volume={348}, date={2011}, pages={85--109}, issn={0021-8693}, review={\MR{2852233}}, } \bib{Hall:const}{article}{ author={Hall, P.}, title={The construction of soluble groups}, journal={J. Reine Angew. Math.}, volume={182}, date={1940}, pages={206--214}, issn={0075-4102}, review={\MR{0002877 (2,125j)}}, } \bib{HM:auto-generic}{article}{ author={Helleloid, G. T.}, author={Martin, U.}, title={The automorphism group of a finite $p$-group is almost always a $p$-group}, journal={J. Algebra}, volume={312}, date={2007}, number={1}, pages={294--329}, issn={0021-8693}, review={\MR{2320459 (2008h:20035)}}, } \bib{Higman:chic}{misc}{ author={Higman, G.}, title={Enumerating $p$-Groups, I -- IV} series={Group Theory Seminar Lectures, Department of Mathematics University of Chicago}, year={1960-61}, pages={6--12}, } \bib{Jac:basicII}{book}{ author={Jacobson, N.}, title={Basic algebra. II}, publisher={W. H. Freeman and Co.}, place={San Francisco, Calif.}, date={1980}, pages={xix+666}, isbn={0-7167-1079-X}, review={\MR{571884 (81g:00001)}}, } \bib{Khukhro}{book}{ author={Khukhro, E. I.}, title={Nilpotent groups and their automorphisms}, series={de Gruyter Expositions in Mathematics}, volume={8}, publisher={Walter de Gruyter \& Co.}, place={Berlin}, date={1993}, pages={xiv+252}, isbn={3-11-013672-4}, review={\MR{1224233 (94g:20046)}}, } \bib{Lazard}{article}{ author={Lazard, M.}, title={Sur les groupes nilpotents et les anneaux de Lie}, journal={Ann. Sci. Ecole Norm. Sup. (3)}, volume={71}, date={1954}, pages={101--190}, issn={0012-9593}, review={\MR{0088496 (19,529b)}}, } \bib{LW:iso}{article}{ author={Lewis, L.}, author={Wilson, J. B.}, title={Isomorphism in expanding families of indistinguishable groups}, journal={Groups - Complexity - Cryptology} volume={4}, date={2012}, pages={73--110}, } \bib{Martin}{article}{ author={Martin, U.}, title={Almost all $p$-groups have automorphism group a $p$-group}, journal={Bull. Amer. Math. Soc. (N.S.)}, volume={15}, date={1986}, number={1}, pages={78--82}, issn={0273-0979}, review={\MR{838793 (87j:20057)}}, } \bib{Neretin}{article}{ author={Neretin, Yu. A.}, title={An estimate for the number of parameters defining an $n$-dimensional algebra}, journal={Izv. Akad. Nauk SSSR Ser. 
Mat.}, volume={51}, date={1987}, number={2}, pages={306--318, 447}, issn={0373-2436}, review={\MR{896999 (88i:17001)}}, } \bib{Newman}{article}{ author={Newman, M. F.}, title={Determination of groups of prime-power order}, conference={ title={Group theory (Proc. Miniconf., Australian Nat. Univ., Canberra, 1975)}, }, book={ publisher={Springer}, place={Berlin}, }, date={1977}, pages={73--84. Lecture Notes in Math., Vol. 573}, review={\MR{0453862 (56 \#12115)}}, } \bib{OBrien:first}{article}{ author={O'Brien, E. A.}, title={The $p$-group generation algorithm}, note={Computational group theory, Part 1}, journal={J. Symbolic Comput.}, volume={9}, date={1990}, number={5-6}, pages={677--698}, issn={0747-7171}, review={\MR{1075431 (91j:20050)}}, } \bib{Robinson:aut}{article}{ author={Robinson, D. J. S.}, title={Automorphisms of group extensions}, conference={ title={Algebra and its applications}, address={New Delhi}, date={1981}, }, book={ series={Lecture Notes in Pure and Appl. Math.}, volume={91}, publisher={Dekker}, place={New York}, }, date={1984}, pages={163--167}, review={\MR{750857}}, } \bib{Shalev:p-groups}{article}{ author={Shalev, A.}, title={Finite $p$-groups}, conference={ title={Finite and locally finite groups}, address={Istanbul}, date={1994}, }, book={ series={NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci.}, volume={471}, publisher={Kluwer Acad. Publ.}, place={Dordrecht}, }, date={1995}, pages={401--450}, review={\MR{1362818 (97h:20023)}}, } \bib{Scharlau}{article}{ author={Scharlau, R.}, title={Paare alternierender Formen}, journal={Math. Z.}, volume={147}, date={1976}, number={1}, pages={13--19}, issn={0025-5874}, review={\MR{0419484 (54 \#7505)}}, } \bib{Taunt}{article}{ author={Taunt, D. R.}, title={Finite groups having unique proper characteristic subgroups. I}, journal={Proc. Cambridge Philos. Soc.}, volume={51}, date={1955}, pages={25--36}, review={\MR{0067886 (16,792f)}}, } \bib{Vish}{article}{ author={Vi{\v{s}}nevecki{\u\i}, A. L.}, title={Groups of class $2$ and exponent $p$ with commutant of order $p^{2}$}, journal={Dokl. Akad. Nauk Ukrain. SSR Ser. A}, date={1980}, number={9}, pages={9--11, 103}, issn={0201-8446}, review={\MR{593560 (82d:20026)}}, } \bib{Wilson:unique-cent}{article}{ author={Wilson, J. B.}, title={Decomposing $p$-groups via Jordan algebras}, journal={J. Algebra}, volume={322}, date={2009}, number={8}, pages={2642--2679}, issn={0021-8693}, review={\MR{2559855 (2010i:20016)}}, } \bib{Wilson:find-cent}{article}{ author={Wilson, J. B.}, title={Finding central decompositions of $p$-groups}, journal={J. Group Theory}, volume={12}, date={2009}, number={6}, pages={813--830}, issn={1433-5883}, review={\MR{2582050 (2011a:20044)}}, } \bib{Wilson:division}{article}{ author={Wilson, J. B.}, title={Division, adjoints, and dualities of bilinear maps (in press)}, journal={Communications in Algebra}, volume={41}, date={2013}, note={arXiv:1007.4329}, } \bib{Wilson:direct-I}{article}{ author={Wilson, J. B.}, title={Existence, algorithms, and asymptotics of direct product decompositions, I}, journal={Groups - Complexity - Cryptology}, volume={4}, date={2012}, pages={33--72}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} The purpose of the present note is to clarify an issue arising in the study of compactifications of type I and heterotic string theories. Such compactifications are specified by the choice of an internal manifold, $X$, and of a gauge bundle over $X$ suitably embedded in $Spin(32)/{\mathbb Z}_2$. Because the gauge group is $Spin(32)/{\mathbb Z}_2$, rather than $SO(32)$, certain choices which would be forbidden in the latter case are in fact allowed. These are the gauge bundles ``without vector structure''. They play an important role in the discussion of various string dualities, as pointed out some time ago in \cite{Berkooz:1996iz, Bianchi:1997rf, Witten:1997bs} and further analyzed in \cite{Aspinwall:1996vc, Sen:1997pm, Lerche:1997rr, Kakushadze:1998bw, Angelantonj:1999xf, Angelantonj:1999jh, Angelantonj:2000xf, Kakushadze:2000hm, Keurentjes:2000bs, deBoer:2001px, Keurentjes:2001cp}. \vskip 1mm A crucial ingredient of the discussion on the type I side is the option of turning on a non-zero but quantized background of the internal NS-NS 2-form $B_{ij}$, which is odd under the worldsheet parity $\Omega$. This was recognized early on, based on intuition gained from rational models \cite{Bianchi:1989du, Bianchi:1990yu, Bianchi:1990tb}, in the first systematic study of toroidal compactifications of the type I theory \cite{Bianchi:1991eu}. The key observation is that since the flux of $B$ through any 2-cycle $\gamma$ of $X$ is defined (in appropriate units) only up to $2\pi$ shifts, both $\int_\gamma B = 0$ and $\int_\gamma B= \pi$ can be compatible with the $\Omega$ projection. These discrete closed-string moduli are thus described by an element of a ${\rm mod}\,2$ cohomology, $b \in H^2(X, {\mathbb Z}_2)$. A worldsheet argument \cite{Sen:1997pm} then shows that the gauge bundle, $E$, supported on the D9-branes of the type I theory must obey the consistency condition \begin{equation}\label{b=w} b= \tilde w_2(E) \ , \end{equation} where $\tilde w_2$ is a generalized Stiefel-Whitney class which measures the obstruction to endowing $E$ with vector structure \cite{Berkooz:1996iz}. In the special case of toroidal models and flat $E$, a non-zero torsion class $b$ leads to unbroken gauge groups with reduced rank \cite{Bianchi:1991eu}. This statement acquires a more intuitive, geometric meaning when translated into the T-dual language of type IIA orientifolds, discussed in \cite{Angelantonj:1999xf, Angelantonj:1999jh, Angelantonj:2000xf}. Here we will clarify the precise meaning of the consistency condition (\ref{b=w}), and further elucidate the T-duality transformation and the reduction of the rank. We will perform this analysis for the simplest case of compactifications on a single two-torus in section \ref{GBonT2}. This admits a straightforward generalization to four-dimensional models on $T^2 \times T^2 \times T^2$, as will be summarized in section \ref{Torus4D}. \vskip 1mm In a different development, one of us (CB) noted that type I theory on magnetized tori presented many interesting phenomenological features, which were illustrated with a simple (non-supersymmetric but only marginally unstable) grand-unified 3-generation model \cite{Bachas:1995ik}.
The systematic analysis of the model-building possibilities of type I magnetic fields, and of their T-dual intersecting D-branes, started with the work in \cite{Blumenhagen:2000wh, Angelantonj:2000hi,Blumenhagen:2000ea} and has been very actively pursued thereafter (for reviews and more references see \cite{Angelantonj:2002ct, Uranga:2003pz, Kiritsis:2003mc, Blumenhagen:2005mu, Blumenhagen:2006ci}). The 3-generation model of reference \cite{Bachas:1995ik} (hereafter called for short ``model C'') was actually discarded in \cite{Blumenhagen:2000wh}, as being T-dual to a type IIA orientifold with half-integer D6-brane wrapping numbers. Closer inspection, however, reveals that the magnetic fields of model C describe precisely a $Spin(32)/{\mathbb Z}_2$ bundle without vector structure. As we will explain in section \ref{Torus4D}, turning on the discrete $B$-field background required by condition (\ref{b=w}) makes model C consistent, and restores the integrality of the D6-brane wrapping numbers in the IIA picture. Furthermore, contrary to the case of flat bundles, the rank of the unbroken gauge group is not reduced. \vskip 1mm Part of our motivation in writing this note was the wish to clarify this subtle point, and to amend/rectify the relevant statements in \cite{Bachas:1995ik, Blumenhagen:2000wh}. Moreover, gauge bundles without vector structure have not been much used in (semi)-realistic model building so far. The simplicity of the gauge bundle of model C, which yields quite readily a 3-generation grand-unified model, is an encouragement to further explore this direction. \vskip 1mm As one step towards further applications we generalize the framework to type I compactifications without vector structure on genuine Calabi-Yau manifolds in section \ref{CYmodels}. Their mirror dual type IIA orientifolds are distinguished by allowing for a non-vanishing real part of some of the complex structure moduli. While this freedom is usually not much appreciated in the literature, it does amplify considerably the possibilities for model building. We conclude by illustrating these observations for the example of the quintic. \section{Gauge bundles and orientifolds on $T^2$} \label{GBonT2} We will first discuss gauge bundles without vector structure in the simplest case of a toroidal type I compactification \cite{Bianchi:1991eu}. Although for such backgrounds most of the work on model-building has already been done, our discussion will hopefully shed some more light on a few subtle points\footnote{{See also \cite{Pesando:2008xt} for recent work on the subject from a different vantage point.}}. It also serves as preparation for the discussion of orientifolds without vector structure on general Calabi-Yau spaces in section \ref{CYmodels}. \subsection{Long strings and 't Hooft flux} Consider compactification of type II string theory on a 2-torus parameterized by $(x^8, x^9)\in [0,1]\times [0,1]$. The torus is wrapped by a stack of $n$ D($2+k$)-branes which carry on their world-volume a $U(n)$ gauge field $(A_8, A_9)$. In this section the extra $k$ dimensions of the D-branes will be inert, so we may as well set $k=0$. We are interested in configurations \begin{equation}\label{mag1} A_8=0\ ; \ \ \ A_9 = {\rm diag} [ f^{1}, \cdots , f^{n}]\, x^8 + {\rm diag} [ \alpha^{1}, \cdots , \alpha^{n}]\ , \end{equation} corresponding to a constant diagonal magnetic field $F_{89}= {\rm diag} [ f^{1}, \cdots f^{n}]$.
This background field defines the field strength of a gauge bundle, and the observable gauge symmetry on the D-branes is the commutant of the structure group of this bundle in $U(n)$ (modulo the issue of massive $U(1)$ factors.) If there are no other D-branes in the problem, the first Chern class, which counts the number of D-particles, must be integer: \begin{equation}\label{Chern} m \equiv {1\over 2\pi} \int_{T^2}\, {\rm tr}\, F_{89}\, = \, \sum_{I=1}^n {f^I\over 2\pi} \ \in\ {\mathbb Z}\ . \end{equation} We would like to understand the quantization conditions for the individual $f^I$. The argument is well-known, but we summarize it here for completeness. \vskip 1mm If the structure group of the gauge bundle were $U(1)^n$, then standard Dirac quantization condition would impose that \begin{equation}\label{quant} {f^{I}\over 2\pi} = m^{I} \in {\mathbb Z}\ \ \ \ \ \forall I. \end{equation} But if we choose the structure group to fill the full $U(n)$, these conditions are in fact too restrictive. One example of an allowed gauge bundle that violates them is \begin{equation}\label{longD} f^{I} = {2\pi \over n} \ \ \ {\rm and} \ \ \ \alpha^{I} = {2\pi I \over n}\ . \end{equation} As one can easily verify, this is a consistent configuration because \begin{equation}\label{longDD} A_\mu (x^8+1) = {\cal U}^{-1} \left[ A_\mu (x^8) + i \partial_\mu\right] {\cal U} \ , \ \ \ \ {\rm with}\ \ \ {\cal U} = e^{-2\pi i \times {\rm diag} [1 \cdots 0] x^9 }\ {\mathcal P} \ . \end{equation} Here ${\mathcal P}$ is the cyclic-shift permutation that sends $I\rightarrow I+1$, and the gauge transformation ${\cal U}$ is periodic when $x^9\rightarrow x^9+1$, as it should be. Notice that the transition functions ${\cal U}$ cannot be chosen in $U(1)^n$, except when the fluxes $m^I$ are integer. Other consistent non-abelian gauge bundles, with fractional fluxes $f^I \in 2\pi {\mathbb Z}/n$, can be constructed in a similar way. \vskip 1mm These gauge-theory statements acquire a simple geometric meaning after a T-duality in the $x^9$ direction. The duality transforms the D2-branes to D-strings and sends $A_9 \rightarrow 2\pi Y^9$, where $Y^9$ is the position of the D-strings in the transverse dualized dimension. As shown in figure \ref{figthooft}, the abelian bundles (\ref{quant}) get mapped to configurations of $N$ independent D-strings with integer winding numbers $(1, m^I)$. Furthermore, their transverse positions $\alpha^{I}/2\pi$ are unconstrained, and the unbroken gauge symmetry has rank $n$. The configuration T-dual to (\ref{longD}), on the other hand, has $n$ pieces of D-string with fractional winding number $1/n$ on the dual torus. These combine suitably so as to form a single ``long D-string" winding $(n, 1)$ times in the $(x^8, \tilde x^9)$ directions. Such long-string configurations are familiar from the counting of black-hole microstates and from the Matrix-model proposal for M theory, see for example \cite{Ganor:1996zk,Rabadan:2001mt}. Notice that the above long D-string is the minimal-energy configuration in the $(n, 1)$ topological sector. \vskip 1mm \begin{figure}[ht] \centering \hspace{40pt} \includegraphics[width=0.8\textwidth]{thooft.eps} \vspace{-10pt} \caption{The left figure shows the T-dual of an abelian bundle $(n,m)=(1,7)$ and the right image the T-dual of a 't Hooft bundle $(n,m)=(5,1)$. \label{figthooft}} \end{figure} It is instructive for our purposes here to separate the transition function (\ref{longDD}) into a $U(1)$ phase and an $SU(n)$ part, i.e. 
to write \begin{equation}\label{spl} {\cal U} = e^{-2\pi i x^9/n }\ e^{-i\pi/n} \, \widehat {\cal U}\ \ \ \ \ {\rm with}\ \ \ \ \ \widehat {\cal U}(x^9) \in SU(n)\ . \end{equation} Neither of the two factors is periodic when $x^9\rightarrow x^9+1$, but the phases $e^{\mp 2\pi i /n}$ that they acquire cancel in the product. As a result (\ref{longD}) cannot be split into separately consistent $U(1)$ and $SU(n)$ bundles, but it could be separated into consistent $U(1)/{\mathbb Z}_n$ and $SU(n)/{\mathbb Z}_n$ bundles if there were no particles transforming under the center ${\mathbb Z}_n$ of $SU(n)$. In physicist's language the latter bundle, although flat, carries a non-zero 't Hooft flux \cite{Hooft:1979uj}, which is responsible for the breaking of the observable gauge symmetry and the reduction of its rank from $n$ to $1$. The 't Hooft flux is an obstruction to ``$n$-ality", or to ``fundamental structure" of the $SU(n)/{\mathbb Z}_n$ bundle, much like the obstruction to vector structure which is the subject of the present note. \vskip 1mm Generalizing the above example, we can define an obstruction to ``n-ality" for any $SU(n)/{\mathbb Z}_n$ bundle $\widehat{V}$, whether flat or not. If this is part of a consistent $U(n)$ bundle, then the obstruction can be related to the $U(1)$ flux, encoded in the Wilson loop \begin{equation}\label{tH} {\cal W}_n(\widehat{V}, T^2) \equiv \widehat{\cal U}(x^9+1)\, \widehat{\cal U}^\dag(x^9) = e^{ i \int_{T^2} {\rm tr} F /n} = e^{2\pi i m/n}\ \in\ {\mathbb Z}_n\ . \end{equation} The obstruction is thus determined by the number of D-particles modulo the number of D2-branes. Note that bundles with abelian transition functions and integer $m^I$ are also obstructed whenever $m \not= 0 \, ({\rm mod} \, n)$. Note also that when $n$ is not prime the obstruction may concern only a subgroup of the center ${\mathbb Z}_n$. \vskip 1mm In slightly more mathematical terms, the internal gauge fields we are considering correspond to stable $U(n)$ bundles on the torus. Stability here guarantees that the field strength of the associated connection is constant, i.e. a solution of the hermitian Yang-Mills equation.\footnote{On the T-dual IIA side the corresponding D-strings are linear and thus special Lagrangian.} Given such a stable $U(n)$ bundle $V$, then if $c_1(V)=m \in n \, {\mathbb Z}$ we can split off a line bundle ${\cal L}$ as $V = \widehat{V}\otimes {\cal L}$, where the structure group of $\widehat{V}$ is now $SU(n)$. It is known that stable $SU(n)$ bundles on a torus split into the direct sum of $n$ line bundles, $\widehat{V}= \bigoplus_i {\cal L}_i$ (see e.g. \cite{Friedman:1997yq}). As a result of this splitting, the rank of the visible gauge group is not reduced, and we are left with a $U(1)^n$ gauge theory. The above splitting does not, however, occur for stable $U(n)$ bundles with $c_1(V)$ not a multiple of $n$, or for stable $SU(n)/{\mathbb Z}_n$ bundles. Such bundles can, however, be always obtained by deforming the direct sum of line bundles into a non-trivial extension. In the type IIA language, a piecewise-linear D-string passing at each step through a node of the compactification lattice can be deformed, after enlarging the structure group, to a linear D-string of minimal length. This is the meaning of switching on non-trivial 't Hooft flux. \vskip 1mm In what follows we will be interested in the particular case of $n=32$ D-branes, with first Chern class $m = -16$. 
From equation (\ref{tH}) we conclude that the D-branes carry an $SU(32)/{\mathbb Z}_2$ bundle whose lift to a full $SU(32)$ bundle is obstructed. There exist two simple choices for such an obstructed bundle: (i) a flat bundle with ${\mathbb Z}_2$ 't Hooft fluxes which corresponds to joining the dual D-strings in pairs, leading to a reduction of the rank from 32 to 16; or (ii) a bundle with half-integer magnetic fields which, when combined with the $U(1)$ flux, make all the $f^I/2\pi$ integer. The rank in this case is not reduced. An example that illustrates this second option is the abelian $U(1)^{32}$ bundle \begin{eqnarray} {F_{89}\over 2\pi} &=& {\rm diag}[ 0, \cdots 0, -1, \cdots -1] \cr &&\hskip -0.7cm =\ -{1\over 2}\, {\rm diag}[ 1, \cdots 1, 1, \cdots 1] + {1\over 2}\, {\rm diag}[ 1, \cdots 1, -1, \cdots -1] \ , \end{eqnarray} where in the first line there are 16 zeros and 16 minus ones. Note that extracting the diagonal $U(1)$ left us with ``half-integer" magnetic fields in the remaining $SU(32)/{\mathbb Z}_2$ bundle. Mixed configurations, with both half-integer magnetic fields and ${\mathbb Z}_2$ 't Hooft fluxes, are also possible as we discuss later. \subsection{Relation to the $B$-flux} The above considerations made no assumptions about the closed-string moduli. The NS-NS background $B_{89}$ does, however, affect the dynamics of the magnetized D-branes, as is evident for instance from the fact that the $U(1)$ magnetic field appears in the Dirac-Born-Infeld action only through the invariant combination ${\cal F} = B {\bf 1} + F $. This is invariant under the NS-NS gauge transformations \begin{equation}\label{NSgauge} B_{\mu\nu} \rightarrow B_{\mu\nu} + \partial_\mu \Lambda_\nu - \partial_\nu \Lambda_\mu\ \ \ \ \ \ {\rm and}\ \ \ \ \ A_\mu \rightarrow A_\mu - \Lambda_\mu\, {\bf 1}\ , \end{equation} where the one-form $\Lambda$ defines a $U(1)$ bundle over $T^2$ (we use here the convention $2\pi\alpha^\prime = 1$). Large gauge transformations change, as is well-known, the number of D0-branes. Choosing, for instance, $ \Lambda = 2\pi\, x^8 dx^9$ transforms $f^I \rightarrow f^I -2\pi$ and hence $m \rightarrow m - n$. The first Chern class $c_1(F)$ defines therefore a quantized but not gauge-invariant charge. \vskip 1mm A ``physical" D0-brane charge, which is gauge-invariant but not quantized,\footnote{The different notions of charge are even subtler in the general case where the NS-NS 3-form $H=dB$ does not vanish. For a discussion see references \cite{Bianchi:1997gt, Bachas:2000ik, Taylor:2000za, Alekseev:2000ch, Marolf:2000cb, FigueroaO'Farrill:2000kz}.} can be defined as the first Chern class of the bundle ${\cal V}$ with field strength ${\cal F}$, \begin{equation}\label{physQ} q \, \equiv\, {1 \over 2\pi} \int_{T^2}\, {\rm tr}\, {\cal F}_{89}\, \, =\, m + {n \over 2\pi} \int_{T^2}\, {B}_{89} \ . \end{equation} Notice that the background $B$ field induces (fractional) D0 charge on the D2-branes, in the same way as the Yang-Mills $\theta$-angle induces electric charge on magnetic monopoles \cite{Witten:1979ey}. Suppose now that we insist that the {physical} D0-brane charge vanish. From equations (\ref{physQ}) and (\ref{tH}) we then conclude that ${\cal V}$ is identified with the $SU(n)/{\mathbb Z}_n$ bundle ${\widehat V}$ (put differently the $B$ field cancels the diagonal-$U(1)$ part of $F$), and that \begin{equation}\label{Bfl} {\cal W}_n(\widehat{V}, T^2) = e^{ - i \int_{T^2} B} \ . 
\end{equation} Thus the obstruction to ``$n$-ality" of the $SU(n)/{\mathbb Z}_n$ bundle is determined by the flux of $B$, if one insists that $q = 0$. As we will argue momentarily, this latter condition is automatic in the type I theory where the D2-branes are replaced by D9-branes and there is no R-R 8-form to which D7-brane charge can couple. This reasoning establishes the formula (\ref{b=w}) of \cite{Sen:1997pm}. \vskip 1mm Before including orientifolds, let us translate these statements into the more intuitive T-dual language. Let the K\"ahler and complex structure moduli of the original 2-torus, which we take for simplicity orthogonal, be {} \begin{equation}\label{moduli} T = {1\over 2\pi} \, (- {B_{89}} + i \ell_8 \ell_9) \ \ \ \ {\rm and}\ \ \ \ U = i {\ell_8\over \ell_9}\ , \end{equation} where $\ell_j$ are the circumferences of the two circles. A T-duality along $x^9$ exchanges $T$ with $U$, so that for non-zero $B$-field the dual torus is a tilted torus. The large gauge transformations (\ref{NSgauge}) correspond to the complex structure transformations {} $\tilde U\rightarrow \tilde U - 1$, which shift the D-string winding numbers appropriately, $ (n, m) \rightarrow (n, m - n )\, . $ The obstruction to $n$-ality, determined by $m$ (mod $n$), is not affected by this shift. The physical D0-brane charge, $q$, measures the (net oriented) projection of the D-string on the imaginary axis of the complex plane with coordinate {} \begin{equation} z \equiv \, i ( \tilde x^9 - \tilde U x^8) \ . \end{equation} One can easily check that the D-string with winding numbers $(n, m)$ is parallel to the real-$z$ axis when $q$ vanishes. Roughly speaking, the rotation of the D-string undoes the torus tilt in this case. This (minimal-length) D-string is dual to a stack of D2-branes carrying a flat $SU(n)/{\mathbb Z}_n$ bundle. \subsection{Including the orientifold} We are now ready to consider the modding out by $\Omega {\cal R}$, where $\Omega$ is the reflection of the worldsheet coordinate $\sigma$, and ${\cal R}$ is a ${\mathbb Z}_2$ transformation of the (generalized) target spacetime. In the type IIA theory ${\cal R}$ flips the orientation of the 2-torus, so $\tilde T^2/{\cal R}$ is one of the three open and/or unoriented genus-1 surfaces: the annulus, the Klein bottle or the M\"obius strip.\footnote{Because of the action of $\Omega$ these surfaces should not be literally thought of as the compactification space.} The first two have a purely-imaginary complex structure, whereas the third has, in our conventions, {} ${\rm Re}\, \tilde U = - 1/2$. From eq. (\ref{moduli}) we see that its T-dual configuration has $B_{89} = \pi$, so this is the case of interest to us here. The action of ${\cal R}$ in this case is \begin{equation}\label{orie} {\cal R} z = \bar z \ \ \ \Longleftrightarrow \ \ \ {\cal R} (x^8, \tilde x^9) = (x^8, -\tilde x^9 - x^8 ) \ . \end{equation} The fixed-point surface, $x^8 = -2\tilde x^9$, is an orientifold 8-plane along the connected boundary of the M\"obius strip. It has winding numbers $(2,-1)$ on the doubling torus, as illustrated in figure 2. To cancel its R-R 9-form charge we need, therefore, to introduce D8-branes with total winding numbers $(32,-16)$. Allowed configurations must be invariant under the action of $\Omega {\cal R}$ which is modded out. \vskip 1mm Figure \ref{figrrcancel} shows two simple configurations that do the job. 
\begin{figure}[ht] \centering \hspace{40pt} \includegraphics[width=0.8\textwidth]{rrcancel.eps} \vspace{-10pt} \caption{Two configurations of D8-branes canceling the tadpole, with the branes indicated by the blue arrows. \label{figrrcancel}} \end{figure} The first configuration has 16 D8-branes along the boundary of the M\"obius strip, i.e. with winding numbers $(2, -1)$ for each D8-brane. Because they sit on top of the orientifold, these D8-branes and their ${\cal R}$-images coincide. This is the supersymmetric vacuum, discussed in refs. \cite{Bianchi:1991eu, Bianchi:1997rf, Witten:1997bs}, which is dual to the heterotic CHL models \cite{Chaudhuri:1995fk}. It corresponds to a flat $Spin(32)/{\mathbb Z}_2$ bundle with non-commuting Wilson lines \cite{Bianchi:1997rf, Witten:1997bs} \begin{equation} W_1 W_2 = e^{-i\int_{T^2} B}\; W_2 W_1. \end{equation} The second configuration in figure \ref{figrrcancel}, on the other hand, has 16 D8-branes plus their mirror images under ${\cal R}$, with winding numbers respectively $(1, 0)$ and $(1, -1)$. This is the configuration on the first of the three tori of model C \cite{Bachas:1995ik}. Standing on its own, configuration (ii) is actually unstable, because the $(1,0)$ and $(1, -1)$ mirror pairs can recombine to form $(2, -1)$ branes.\footnote{But it is interesting to observe that this is not allowed for a single mirror pair of D-branes.} Model C ``cures'' this instability by exploiting the existence of the other compactified dimensions. \vskip 1mm The T-duals to the configurations of figure \ref{figrrcancel} are precisely the gauge bundles described at the end of subsection 2.1. We can make this identification more explicit by looking at the action of $\Omega {\cal R}$ on the matrix-valued field $Y^9(x^8)$, which describes (in static gauge) the transverse position of the D8-branes. Consistently with the geometric action (\ref{orie}) this reads \begin{equation}\label{omega} \Omega {\cal R} (Y^9) \ = \ - \gamma_\Omega\, (Y^9)^t \gamma_\Omega^{-1} - x^8 {\bf 1} \ , \end{equation} where we choose (without loss of generality) the Chan-Paton basis so that \begin{equation} \gamma_\Omega = \left(\hskip -1mm \begin{array}{cc} 0 & {\bf 1}_{16\times 16} \\ {\bf 1}_{\rm\tiny{16\times 16}} & 0 \end{array} \hskip -1mm\right)\ . \end{equation} The general solution to the above condition is of the form \begin{equation} Y^9 = - {1\over 2} x^8 {\bf 1} + \widehat{Y}^9\ , \end{equation} where $\widehat{Y}^9$ takes values in the Lie algebra of $SO(32)$. Using the T-duality dictionary, $A_9 = 2\pi Y^9$, one can now easily check that the gauge bundles of subsection 2.1 are indeed T-dual to those of figure \ref{figrrcancel}. \vskip 1mm Note that although the $U(1)\subset U(32)$ gauge field is projected out of the spectrum of the orientifold theory, a discrete background for it actually survives. Its role is to cancel the discrete $B$ flux so that ${\rm tr} {\cal F} = 0$. Some of the confusion in the literature is due to a lack of appreciation of this subtle point. The obstruction to vector structure of the split-off bundle is related to this discrete $U(1)$ flux and hence, by the previous argument, to the discrete $B$ modulus \cite{Bianchi:1991eu}. Furthermore, the physical D7-brane charge is automatically zero, consistently with the fact that the type I theory has no R-R 8-form to which this charge could couple. \vskip 1mm Now consider a general configuration of D-branes, which is easier to describe in the type IIA language.
One accounts for both 't Hooft fluxes and magnetic fields by considering stacks of D8-branes with arbitrary integer winding numbers. Let the $a$-th stack have $N_a$ D8-branes with (relatively-prime) winding numbers $(n_a, m_a)$. For every stack we must also include the mirror stack with winding numbers $(n_a, -m_a - n_a)$. Stacks with $n_a = -2m_a$ can, a priori, be their own image. The cancellation of R-R charge requires that $n\equiv \sum_a N_a n_a = 32$. In the T-dual language the $a$-th stack carries a $U(n_aN_a)$ gauge bundle which has 't Hooft flux that breaks the symmetry to $U(N_a)$, and a $U(1)$ magnetic field equal to \begin{equation}\label{magf} F_{89}^a \ =\ {2\pi m_a\over n_a}\, {\bf 1}_{N_a\times N_a}\otimes {\bf 1}_{n_a\times n_a}\ . \end{equation} Our normalization is such that fundamental-string endpoints have charge $\pm 1$. As one can easily check, reflection symmetry fixes automatically the first Chern class of the complete $U(32)$ bundle, as advertised \begin{equation} m \equiv {1\over 2\pi}\, \sum_{a} {\rm tr}\, F^a_{89} \, = \, \sum_{a} N_a m_a = -16 \ . \end{equation} \vskip 1mm Since ${m/ n} = -1/2$, separating the diagonal $U(1)$ gives an $SU(32)/{\mathbb Z}_2$ bundle $\widehat{V}$, whose lift to an $SU(32)$ bundle is obstructed. Now in accordance with the type I symmetry, the structure group of the bundle should actually be $SO(32)/{\mathbb Z}_2$.\footnote{We will discuss the requirement of being spin-liftable to $Spin(32)/{\mathbb Z}_2$ in section \ref{Torus1}.} The reduction is automatic if none of the D8-branes is its own image. In this case the full transition matrices have the block-diagonal form \begin{equation}\label{Osplit} \widehat{\cal O} = \left(\hskip -1mm \begin{array}{cc} {\cal U} & 0 \\ 0 & {\cal U}^* \end{array} \hskip -1mm\right)\ , \end{equation} and take values in $SO(32)$ defined as the subgroup of matrices that obey the reality condition $\widehat{\cal O}^* = \gamma_\Omega\widehat{\cal O} \gamma_\Omega^{-1}$ and have determinant 1. Such bundles can be thus written as the sum of two conjugate $U(16)/{\mathbb Z}_2$ bundles, $\widehat{V} = \widehat W \oplus \widehat W^{\vee}$. The story is subtler for D8-branes which are their own image, and which are hence stuck to the orientifold plane. The elementary ``stuck" D8-brane has winding numbers $(2, -1)$ and $\widehat{Y}^9 = 0$. From eqs. (\ref{longDD}) and (\ref{spl}) we see that the corresponding $U(2)$ transition function for this D-brane reads: \begin{equation} {\cal U} \, \equiv\, e^{-i\pi x^9} \, {\cal O} \ \ \ \ {\rm with}\ \ \ \ \, {\cal O} = \, e^{-i\pi x^9\sigma_3}\, \sigma_1\ , \end{equation} where $\sigma_i$ are the usual Pauli matrices. The global phase in the split-off bundle was here fixed by imposing the reality condition $ {\cal O}^* = \sigma_1 {\cal O} \sigma_1$, where we think of $ {\cal O}$ as occupying the $2\times 2$ block in the center of the full $O(32)$ matrix. Since ${\rm det}\, {\cal O} = -1$, if one insists that the full structure group be $SO(32)/{\mathbb Z}_2$ then stuck D8-branes are not permitted. An even number of $(2, -1)$ D8-branes can, on the other hand, be always combined in mirror pairs. \vskip 1mm This obstruction to the existence of a good $SO(32)/{\mathbb Z}_2$ bundle is described by an element of a ${\rm mod}\,2$ cohomology, the first Stiefel-Whitney class $w_1\in H^1(X, {\mathbb Z}_2)$. It is the same obstruction that prevents a pair of $(1,0)$ and $(1,-1)$ D8-branes to merge into a $(2,-1)$ brane, as we previously noted. 
The gauge group of \emph{perturbative} type I theory is, as a matter of fact, $O(32)/{\mathbb Z}_2$. It is non-perturbative consistency (see the following section) which requires that $w_1=0$, and forbids\footnote{$SO(2k+1)$ factors in the gauge group are possible in lower dimension in the presence of `exotic' $\widetilde\Omega$-planes with (quantized) R-R fluxes \cite{Keurentjes:2001cp}.} the existence of ``stuck" D8-branes and of $SO(2k+1)$ gauge groups in $T^2$ compactifications \cite{deBoer:2001px}. In the heterotic theory the vanishing of $w_1$ is a perturbative requirement, which follows from multiloop modular invariance \cite{ABK}. \section{Toroidal 4d orientifolds and the model C} \label{Torus4D} Now we move on to more realistic backgrounds, obtained by compactification of type I theory on a six-torus with a non-flat $SO(32)/{\mathbb Z}_2$ bundle \cite{Bachas:1995ik}. These backgrounds are T-dual to (non-supersymmetric) intersecting D6-brane models \cite{Blumenhagen:2000wh,Blumenhagen:2000ea}. We restrict attention to factorizable tori, $T^6 = T^2\times T^2\times T^2$. For a discussion of the non-factorizable case see the recent papers \cite{Blumenhagen:2004di,Forste:2007zb}. \subsection{Consistency conditions for $T^6/\Omega{\cal R}$ orientifolds } \label{Torus1} Let $(n^i_a, m^i_a)$ be the integer wrapping numbers of the $a$th stack of D6-branes on the $i$th torus, and $(n^i_a, -m^i_a - 2 b^i n^i_a)$ the wrapping numbers of the mirror stack. We have defined here $b^i = 1/2$ or $0$, according to whether the $i$th torus is a tilted torus or not. In the type-I language this corresponds to a $B$-flux equal, respectively, to $\pi$ or to $0$. Following reference \cite{Blumenhagen:2000ea} it is also convenient to introduce the shifted or ``effective" wrapping numbers \begin{equation}\label{dfn1} \hat m^i_a \equiv m^i_a + b^i\, {n^i_a}\ , \ \ \ \ {\rm so\ that} \ \ \ \ {\cal R}(n^i_a, \hat m^i_a) = (n^i_a, -\hat m^i_a)\ \ \ \ \forall \ b^i\ . \end{equation} The $\hat m^i_a$ can be considered as wrapping numbers along the T-dualized directions of the three rectangular tori, or as magnetic fields from which the diagonal $U(1)$ was stripped-off. Note that the $B$-fluxes enter through the quantization conditions, \begin{eqnarray}\label{hatquant} \hat m^i_a = m^i_a + b^i\, n^i_a, \ \ \ \ {\rm where}\ \ \ n_a^i, m^i_a \in {\mathbb Z}\ . \end{eqnarray} For tori with $b^i = \frac{1}{2}$ the $\hat m^i_a$ must be integer if $n^i_a$ is even, and half-integer if $n^i_a$ is odd, while when $b^i=0$ the $\hat m^i_a$ are always integer. The definition (\ref{dfn1}) makes it possible to treat both untilted and tilted tori in a unified way. \vskip 1mm Tadpole cancellation for the R-R 7-forms gives one condition for each independent 3-cycle. On $T^6$ there are a priori $20$ 3-cycles, but only 4 of them are even under the ${\cal R}$ reflection, $z^i \rightarrow \bar z^i$ for all $i$. One of them is the orientifold 3-cycle, and the other three share with the orientifold one dimension. The corresponding tadpole conditions (counting branes and their images separately) read \cite{Blumenhagen:2000wh,Blumenhagen:2000ea} \begin{eqnarray} \label{Tadtoria} & &\sum_{a=1}^{2K} N_a\, n_a^1\, n_a^2\, n_a^3\ =\ 32 \ ,\nonumber\\ \sum_{a=1}^{2K}N_a\, n_a^1\, \hat m_a^2\, \hat m_a^3 &=& \sum_{a=1}^{2K}N_a\, n_a^2\, \hat m_a^1\, \hat m_a^3 \ =\ \sum_{a=1}^{2K}N_a\, n_a^3\, \hat m_a^1\, \hat m_a^2 \ =\ 0\ . \end{eqnarray} Thanks to reflection symmetry, tadpole cancellation for the remaining odd cycles is automatic. 
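As a quick numerical illustration of the conditions (\ref{Tadtoria}) and of the quantization rule (\ref{hatquant}), the following sketch evaluates the four tadpole sums for a list of stacks specified by $(N_a; n^i_a, \hat m^i_a)$. The sketch is illustrative only: it is plain Python with function names of our choosing, and it adds the mirror stacks mechanically, so it assumes that no stack is its own image. The sample data are the wrapping numbers of the 3-generation model reviewed in the next subsection (table 1), with $b^i=1/2$ on all three tori.
\begin{verbatim}
# Illustrative sketch only: evaluate the four R-R tadpole sums for a list
# of D6-brane stacks (N_a, [(n^i_a, mhat^i_a) for i = 1, 2, 3]).  Mirror
# stacks (n, mhat) -> (n, -mhat) are added automatically, which assumes
# that no stack is its own image.  Sample data: the wrapping numbers of
# model C (table 1 below), with b^i = 1/2 on all three tori.
from fractions import Fraction as F

stacks = [                                              # (N_a, wrappings)
    (5, [(1, F(3, 2)),  (1, F(1, 2)),  (1, F(1, 2))]),  # U(5)
    (3, [(1, F(-5, 2)), (1, F(1, 2)),  (1, F(1, 2))]),  # U(3)
    (4, [(1, F(1, 2)),  (1, F(-1, 2)), (1, F(1, 2))]),  # U(4)
    (4, [(1, F(-1, 2)), (1, F(-1, 2)), (1, F(1, 2))]),  # tilde U(4)
]

def with_mirrors(stacks):
    out = []
    for N, w in stacks:
        out.append((N, w))
        out.append((N, [(n, -mh) for n, mh in w]))      # image under R
    return out

def tadpole_sums(stacks):
    full = with_mirrors(stacks)
    t0 = sum(N * w[0][0] * w[1][0] * w[2][0] for N, w in full)
    mixed = [sum(N * w[i][0] * w[j][1] * w[k][1] for N, w in full)
             for (i, j, k) in ((0, 1, 2), (1, 0, 2), (2, 0, 1))]
    return (t0, *mixed)

print(tadpole_sums(stacks) == (32, 0, 0, 0))            # True
\end{verbatim}
The same bookkeeping extends immediately to the constraint (\ref{sSW}) and to the intersection numbers introduced below.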
Note that if the $n^i_a$ are positive, i.e. if there are no anti-D6-branes, then maximal rank is achieved when $n^i_a = 1$ for all stacks and for all tori. Any $n^i_a >1$ implies a corresponding reduction of the rank. \vskip 1mm Using the dictionary of the previous section, it is easy to translate the above statements into the language of magnetized D9-branes. The first of the conditions (\ref{Tadtoria}) fixes the total number of D9-branes, while the other three ensure that the second Chern class of the $SO(32)/{\mathbb Z}_2$ bundle on them vanishes: \begin{equation} \int_{T^2_i \times T^2_j} {\rm tr} ( F^i \wedge F^j) \, =\, 0\ \ \ \ \ \forall i,j=1,2,3\ . \end{equation} These are precisely the conditions for cancellation of D5-brane charge. The total gauge bundle has structure group $ \otimes_{a=1}^K \, SO(2n_a)/{\mathbb Z}_2$, where $n_a \equiv n_a^1n_a^2n_a^3$ and we have put here stacks and image stacks in a single factor. The wrapping numbers $(n^i_a, \hat m^i_a)$ describe the 't Hooft flux and magnetic fields of each separate $SO(2n_a)/{\mathbb Z}_2$ bundle. Each of these bundles has an obstruction to vector structure on the $i$th torus whenever $b^i\not= 0$. \vskip 1mm Apart from R-R tadpole cancellation, additional conditions come from the by now recurrent observation that, while the perturbative gauge group is $O(32)/{\mathbb Z}_2$, the full non-perturbative symmetry \cite{Polchinski:1995df} of type I theory is $Spin(32)/{\mathbb Z}_2$. Recall that $Spin(32)$ has four conjugacy classes: $O,V,S$, and $C$, corresponding respectively to the adjoint, the vector, the positive-chirality and the negative-chirality spinors. In terms of 16-dimensional root/weight vectors these are described as follows: \begin{table}[h!] \centering \begin{tabular}{ccc} $O$: & $(0, \ldots,0, \pm1, 0,\ldots, 0,\pm1,0,\ldots,0)$ &\\ $V$: & $(0, \ldots, 0,\pm1, 0,\ldots,0)$ &\\ $S$: & $(\pm \frac{1}{2}, \pm \frac{1}{2},\ldots, \pm \frac{1}{2})$\ : &even number of $+$ \\ $C$: & $(\pm \frac{1}{2}, \pm \frac{1}{2},\ldots, \pm \frac{1}{2})$\ : & odd number of $+$\ . \\ \label{conj} \end{tabular} \end{table} \hfil \break \vskip -1.1cm \noindent To obtain $SO(32)$ from $Spin(32)$ one projects out the spinor representations $S$ and $C$, while keeping the adjoint and the vector. Keeping only the adjoint gives the symmetry $O(32)/{\mathbb Z}_2$. By contrast, $Spin(32)/{\mathbb Z}_2$ keeps the adjoint $O$ and the positive chirality spinor representation $S$, while projecting out the vector representation and the other spinor. \vskip 1mm The spinor representation $S$ arises in type I theory via D-particle states which are non-BPS yet stable in $D=10$ and become BPS upon toroidal compactification \cite{Sen:1998tt}. These are dual to massive states of the heterotic string. Because states with the `wrong' chirality $C$ do not exist, the parity transformation is not defined. Alternatively, the reduction of $O(32)$ to $SO(32)$ can be traced to the existence of non-BPS D-instantons \cite{Witten:1998cd}. One immediate consequence, encountered already in subsection 2.3, is that the first Stiefel-Whitney class, $w_1$, must vanish. As explained there, the complete $SO(32)/{\mathbb Z}_2$ bundle can then be written as the sum of two conjugate $U(16)/{\mathbb Z}_2$ bundles, $\widehat{V} = \widehat W \oplus \widehat W^{\vee}$. \vskip 1mm Such bundles are spin-liftable if the standard Dirac-quantization condition for charges in $S$ is satisfied, i.e. if the first Chern class of $\widehat W$ is even. 
Explicitly, \begin{equation}\label{sSW} \int_{T^2_{(3)}} c_1(\widehat W)\ = \ \sum_{a=1}^{K} N_a \, n_a^1\, n_a^2 \, \hat m_a^3 \ \in\ 2 {\mathbb Z} \ , \end{equation} and similarly for the other two tori. Note that the sum here runs over all D-brane stacks, but not over their mirrors. Condition (\ref{sSW}) is known as the vanishing of the second Stiefel-Whitney class which, like the obstruction to vector structure, is an element of a ${\rm mod}\,2$ cohomology, $w_2(\widehat V) = c_1(\widehat W) \, {\rm mod} \, 2 \in H^2(X, {\mathbb Z}_2)$, which obstructs the existence of spin structure. It can be formulated as the requirement of cancellation of K-theory charge \cite{Minasian:1997mm, Witten:1998cd} which is stronger than the mere cancellation of R-R tadpoles. Violation of (\ref{sSW}) manifests itself also in the form of global SU(2) anomalies \cite{Witten:1982fp} in the world-volume theory of probe D5-branes \cite{Uranga:2000xp}. \subsection{Three generations and the model C} \label{ModelC} For any solution of the consistency conditions (\ref{Tadtoria}), subject to the quantization rules (\ref{hatquant}) and (\ref{sSW}), some of the most interesting observables are the intersection numbers of stacks of D6-branes: \begin{eqnarray} I_{ab} = \prod_{i=1}^3 (n^i_a \hat m^i_b - n^i_b \hat m^i_a)\ . \end{eqnarray} These determine the chiral spectrum in the effective four-dimensional theory, and in particular the number of Standard-Model or GUT generations in (semi)realistic models of this kind. Note that the intersection numbers $I_{ab}$ are not affected by the shift (\ref{dfn1}), which is why the integer winding numbers $m^i_a$ could be replaced by $\hat m^i_a$ in the above expression. Recall also that in deriving the chiral spectrum $[ab]$ and $[a b^\prime]$, where $b^\prime$ is the mirror of the stack $b$, should be considered separately. \vskip 1mm Let us review now (and sharpen a little) the argument \cite{Blumenhagen:2000ea} which shows that 3-generation toroidal models can only exist when one or more of the tori are tilted. We focus on the left-handed quarks, which correspond to open strings stretching between the color and weak-isospin stacks of D-branes (denoted here by the labels $c$ and $w$). To get 3 generations we need that $I_{cw} + I_{c w^\prime}=3$. Generic models have $I_{cw} = 3$ and $I_{c w^\prime} = 0$, but because the $2$ and ${\bar 2}$ representations are equivalent we only require the above weaker condition. Now the mirror to the weak-isospin stack is obtained by flipping the sign of the $\hat m^i_w$, so that \begin{equation} I_{cw} + I_{c w^\prime} = -2 \prod_{i=1}^3 (n^i_w \hat m^i_c) -2 ( n^1_c\hat m^1_w n^2_c\hat m^2_w n^3_w\hat m^3_c + {\rm cyclic})\ . \end{equation} This can be odd only if some of the effective wrapping numbers are half-integers, which implies in turn that at least one of the tori must be tilted. Bundles without vector structure are thus unavoidable in all realistic toroidal-orientifold models. \vskip 1mm One of the nice features of the 3-family Grand-Unified model C is that it is obtained with a very simple choice for the $SO(32)/{\mathbb Z}_2$ bundle \cite{Bachas:1995ik}. The choice is exhibited in table 1. \begin{table}[h!b!p!] 
\caption{The wrapping numbers of model C.} \begin{center} \begin{tabular*} {0.75\textwidth}{@{\extracolsep{\fill}} | c | | c | c | c | c | } \hline & & & & \\ stack & $U(5)$ & $U(3)$ & $U(4)$ & $ \tilde U(4)$ \\ [2ex] \hline\hline & & & & \\ $ (n_1, \hat m_1)$ & $(1, {3\over 2})$ & $(1, - {5\over 2})$ & $(1, {1\over 2})$ & $(1, -{1\over 2})$ \\ [2ex] \hline & & & & \\ $(n_2, \hat m_2)$ & $(1, {1\over 2})$ & $(1, {1\over 2})$ & $(1, -{1\over 2})$ & $(1, -{1\over 2})$ \\ [2ex] \hline & & & & \\ $(n_3, \hat m_3)$ & $(1, {1\over 2})$ & $(1, {1\over 2})$ & $(1, {1\over 2})$ & $(1, {1\over 2})$ \\ [2ex] \hline \end{tabular*} \end{center} \label{table1} \end{table} From our previous discussion it should be clear that the bundle admits no vector structure, and requires $b^i =1/2$ on all three tori. If instead the $b^i$ were zero, we would need to multiply the wrapping numbers $\hat m^i_a$ by a factor 2, thereby increasing to 24 the number of families \cite{Blumenhagen:2000wh}. The construction of this model predated the discovery of the non-perturbative structure of type I theory \cite{Polchinski:1995df}, so the vanishing of the second Stiefel-Whitney class, eqs. (\ref{sSW}), was not checked at that time. One can, however, verify that not only the tadpole conditions (\ref{Tadtoria}), but also eqs. (\ref{sSW}) are satisfied, so that the bundle C can be lifted to a fully-consistent $Spin(32)/{\mathbb Z}_2$ bundle. \vskip 1mm The Standard-Model gauge group in model C is unified in the $SU(5)$ group on the first stack of D6-branes. There is also a horizontal $U(3)$ symmetry and a $U(4)\times \tilde U(4)$ hidden sector. One can easily check that $I_{5 3} = - 1$ and $I_{5 5^\prime} = 3$, giving three generations in the $10$ and $\bar 5$ representations of $SU(5)$. There is no chiral matter from strings between the hidden and observable stacks of D6-branes. The model is non-supersymmetric but free of tachyons, in appropriate regions of parameter space, and it has the necessary scalar fields for GUT, electroweak and horizontal-symmetry breaking \cite{Bachas:1995ik}. The pattern of supersymmetry breaking is also rather interesting: the gauge sector is maximally-supersymmetric at tree level, and the breaking in the chiral-matter sector is tunable. The split-supersymmetry scenario \cite{ArkaniHamed:2004fb,Giudice:2004tc} can be thus implemented in this model naturally (bearing in mind the usual problems of vacuum stability). A simple variant of model C has been, in fact, analyzed in this spirit in ref. \cite{Antoniadis:2006eb}. For other unified intersecting D-brane models see also \cite{Ellis:2002ci,Kokorelis:2002ns,Cvetic:2002pj, Axenides:2003hs,Leontaris:2005ax,Floratos:2006hs}. \vskip 1mm \subsection{Euclidean D1-brane instantons} \label{D5_T^2} In subsection 2.3 we have discussed the origin of the obstruction to vector structure for the $Spin(32)/{\mathbb Z}_2$ bundle on the type I D9-branes. It is easy to extend this argument to Euclidean trajectories of D-strings (or instantonic E1-branes) that wrap an orientifolded two-torus with $B$-flux. The logic is the same as before: given $n$ branes of the above kind, we should look for $U(n)$ gauge bundles, $A^9(x^8)$, that survive the twisted orientifold projection \begin{equation}\label{omega} \Omega {\cal R} (A^9) \ = \ \pm \gamma_\Omega\, (A^9)^t \, \gamma_\Omega^{-1} - 2\pi\, x^8\, {\bf 1} \ . \end{equation} The sign here is $+$ for the D5-branes and $-$ for the E1-instantons, for reasons explained clearly in references \cite{small, Gimon:1996rq}. 
As in subsection 2.3, the general solution of the above condition is a ``half-integer'' magnetic field in the overall $U(1)\subset U(n)$ factor, and an $O(n)/{\mathbb Z}_2$ or $Sp(n)/{\mathbb Z}_2$ bundle without vector structure on the E1-brane, respectively on the D5-brane world-volume. \vskip 1mm The D5-branes in toroidal orientifolds are special limits of gauge bundles on the D9-branes, so we will not discuss them here further. Let us consider instead in more detail E1-instantons wrapping the $i$th $T^2$ factor, for which $b^i= \frac{1}{2}$. In the type-IIA language, the corresponding instantonic trajectories must wrap invariant 1-cycles of $T^2/{\cal R}$, and all such cycles have even winding number, $n=2k$, in the $x^8$ direction. This means, when translated into the type I language, that only an even number of E1-branes, with a non-trivial $O(2k)/{\mathbb Z}_2$ bundle on their worldvolume, can wrap the obstructed 2-cycle. Note that, in contrast to the D9-branes, the structure group for the E1-branes need not be reducible to $SO(2k)/{\mathbb Z}_2$. Note also that the complete gauge group for the combined system of D9-branes and E1-branes is $[Spin(32)\times O(2k)]/\mathbb Z_2$, where the ``invisible'' $\mathbb Z_2$ flips the sign of the vector representations of the two factor groups, thus leaving the bi-fundamental representation $(32, 2k)$ unchanged. \vskip 1mm The fact that a single E1-instanton cannot wrap an obstructed 2-cycle has been observed previously in \cite{Witten:1999eg}. This does not, however, mean that the multiply-wrapped E1-branes make no contributions to supersymmetry-protected quantities. Flatness of the Chan-Paton bundle is, of course, required for the instanton to be supersymmetric and thus have a chance of contributing to F-terms. Consider for example $n=2$: a flat $O(2)/{\mathbb Z}_2$ bundle with 't Hooft flux is consistent with spacetime supersymmetry, and lifts half of the fermionic zero modes, as is evident in the T-dual ``long-string'' picture. It should therefore contribute to the same quantities as the single ($n=1$) E1-brane in compactifications without $B$-flux. In principle, with only the zero modes corresponding to Wilson lines along $T^2$, such an instanton could contribute to the gauge kinetic function on D9- or D5-branes. However, in the pure toroidal case considered here, there exist extra zero modes related to the transverse translations of the E1-instanton, so that such objects rather contribute to higher-derivative F-terms. \vskip 1mm E1-instantons have recently attracted much attention, because they can generate phenomenologically desirable terms in the effective superpotential of type I models \cite{Blumenhagen:2006xt, Ibanez:2006da, Bianchi:2007fx, Cvetic:2007qj}. Corrections to higher-derivative F-terms for $N=1$ vacua were pioneered, in the context of heterotic worldsheet instantons, in \cite{Beasley:2005iu} and discussed in the language of D-brane instantons in \cite{Blumenhagen:2007bn}. A nice guide for elucidating the type-I D-instanton calculus, in a simpler though less realistic setting, is provided by the $F^4$ threshold corrections of maximally-supersymmetric, $N=4$ vacua \cite{Bachas:1997xn, Bachas:1997mc}. The one-loop computation of these corrections on the heterotic side is exact \cite{Lerche:1988zy}, so the contributions of D-instantons are known.
By comparing threshold corrections in Type I models with non-zero $B$ flux and in heterotic CHL models it is possible to verify that D-instantons with even and odd $n$ correspond to different sectors of the freely acting orbifold \cite{Bianchi:1998vq}. Precise agreement between heterotic-worldsheet and E1-instanton corrections to 4-hyperini Fermi couplings on $T^4/{\mathbb Z}_2$ has recently been demonstrated in \cite{Bianchi:2007rb}. \section{Calabi-Yau compactifications of $Spin(32)/{\mathbb Z}_2$ bundles with or without vector structure} \label{CYmodels} So far we have analyzed the simplest case of toroidal compactification. Much of the analysis carries over, however, to genuine Calabi-Yau spaces with full $SU(3)$ holonomy, as we will discuss in this section. \subsection{$Spin(32)/{\mathbb Z}_2$ gauge bundles} \label{TypeIa} Let us begin with type I compactifications on a general Calabi-Yau manifold $X$, which for conceptual simplicity we assume to be smooth. Through every 2-cycle $\gamma \in H_2(X, {\mathbb Z})$ we may turn on integer or half-integer $B$ flux, consistently with the $\Omega$ projection. These fluxes, and the corresponding discrete K\"ahler moduli ${\rm Re} (T_i)$, are described by an element ${\cal B} \in H^2(X,{\mathbb Z}/2)$, normalized so that ${\cal B}(\gamma) \equiv \int_\gamma B/ 2\pi$. Of course, only the ${\rm mod}\,2$ cohomology $b\equiv [{\cal B}] \in H^2(X,{\mathbb Z}_2)$ describes physically distinct vacua, so the number of inequivalent choices of discrete moduli is $2^{h_{11}(X)}$. We are interested in $Spin(32)/{\mathbb Z}_2$ bundles on $X$. For background material on Type I compactifications with non-abelian vector bundles (but vanishing $B$-flux) we refer the reader to \cite{Blumenhagen:2005pm,Blumenhagen:2005zg}. \vskip 1mm As in the toroidal case, the $Spin(32)/{\mathbb Z}_2$ bundle defining the type I model can be constructed by first considering a $U(32)$ bundle $V$, \begin{eqnarray} {V} = \bigoplus_a { V}_a^{\oplus N_a} \oplus \, \bigoplus_a ({ V}^*_a) ^{\oplus N_a}. \end{eqnarray} Here ${V}_a$ denotes a $U(n_a)$ bundle with $c_1({V}_a) \in H^2(X,{\mathbb Z})$, while the $*$ operation is defined by dualizing the bundle and then twisting it with a line bundle ${\cal N}$, \begin{eqnarray} { V}_a^* = { V}_a^{\vee} \otimes {\cal N} \quad\quad {\rm where}\quad\quad c_1({\cal N}) = - 2 {\cal B} \in H^2(X,{\mathbb Z}). \ \end{eqnarray} The twist bundle ${\cal N}$ accounts for the shift under the action of $\Omega$ which, in the toroidal case, was encoded in the transformation of wrapping numbers $(n,m)\rightarrow (n, -m -2b\, n)$. The structure group $U(n_a)$ of each ${ V}_a$ is embedded diagonally into $U(n_a N_a) \subset Spin(32)/{\mathbb Z}_2$, subject to the constraint $\sum n_a N_a =16$. The resulting four-dimensional gauge group is given by $\prod_a U(N_a)$ (modulo massive $U(1)$ factors) along the lines of \cite{Blumenhagen:2005pm, Blumenhagen:2005zg}. It follows immediately from the above definitions that the total $U(1)$ flux associated with $V$ equals $-16$, \begin{eqnarray} \int_{\gamma} c_1({ V}) = \frac{1}{2\pi} \int_{\gamma} {\rm tr}\, F= -16\, \end{eqnarray} for each two-cycle $\gamma \in H_2(X, {\mathbb Z})$ with half-integer $B$-flux. Exactly as in section \ref{GBonT2}, this guarantees vanishing D7-brane charge on the D9-branes. \vskip 1mm The advantage of working with the ${ V}_a$ is that they are conventional bundles with integer first Chern class.
This comes at the cost of introducing the unusual orientifold-action operator $*$, which is twisted by the appearance of the $B$-flux. Alternatively, we can use the conventional twist but work with bundles whose first Chern class can be half-integer. To this end we write \begin{eqnarray} \label{cal V} { V}= \widehat V \otimes {\cal L} \, , \end{eqnarray} where the line bundle ${\cal L}$ is such that $c_1({\cal L}) =- {\cal B} \in H^2(X, {\mathbb Z}/2)$. After splitting off the diagonal $U(1)$ in this way, $\widehat V$ represents a $Spin(32)/{\mathbb Z}_2$ bundle given by the direct sum of a $U(16)/{\mathbb Z}_2$ bundle and its dual, \begin{eqnarray} \label{V} \widehat V= \widehat W \oplus \widehat W^{\vee} \quad\quad\quad {\rm with}\quad\quad\quad \widehat W = \bigoplus_a \widehat V_a^{\oplus N_a}. \end{eqnarray} The generalization of the quantization condition eq. (\ref{hatquant}) reads \begin{eqnarray} \label{quant_gen} c_1(\widehat V_a) + n_a \, b \in H^2(X, {\mathbb Z})\ . \end{eqnarray} For non-zero $b$ and odd rank $n_a$, in particular, the first Chern class $c_1(\widehat V_a)$ takes half-integer values. This violates the Dirac quantization condition for the vector representation of $SO(32)$, so that the bundle $\widehat V$ is an $SO(32)/{\mathbb Z}_2$ bundle without vector structure. \vskip 1mm For $\widehat V$ to be liftable to $Spin(32)/{\mathbb Z}_2$, the bundle $\widehat W$ must furthermore satisfy the Dirac quantization condition with respect to the spin conjugacy class, \begin{eqnarray} \label{SW1} \int_{\gamma} c_1(\widehat W) = \sum_a N_a \int_{\gamma} c_1(\widehat V_a) \in 2 \mathbb Z \quad\quad\quad \forall\ \gamma \in H_2(X,\mathbb Z). \end{eqnarray} This generalizes eq. (\ref{sSW}). Finally, the tadpole cancellation condition for such Type I compactifications with D9-branes only is given by \cite{Blumenhagen:2005pm, Blumenhagen:2005zg} \begin{eqnarray} {\rm ch}_2(\widehat W) + c_2(T_X) = 0, \end{eqnarray} where $T_X$ denotes the tangent bundle of $X$. \vskip 1mm In general, one can add D5-branes wrapping holomorphic curves in $X$ provided the total fivebrane class \begin{eqnarray} W_5 = {\rm ch}_2(\widehat W) + c_2(T_X) \end{eqnarray} is effective. Recall that a 2-cycle $\gamma$ with ${\cal B}(\gamma)=0$ can only be wrapped by $2k$ five-branes on $X$ (in the upstairs geometry).\footnote{The class $W_5$ in the equation above is the one after modding out by the orientifold action, i.e. it describes the set of $n$ five-branes.} This yields gauge group $Sp(k)$ in conventions where $Sp(1)=SU(2)$. For ${\cal B}(\gamma)=1/2$ we must invoke non-trivial 't Hooft flux which further breaks the gauge group on the five-branes. The minimal configuration on a smooth manifold now corresponds to $2 \times 2$ fivebranes along $\gamma$ in the upstairs picture, where each of the two pairs carries a non-trivial $SU(2)/{\mathbb Z}_2$ bundle. This again yields gauge group $Sp(1)$ after modding out by the orientifold. \vskip 1mm For E1-instantons, by contrast, if ${\cal B}({\gamma})=0$ the Chan-Paton group is $O(k)$ and no restrictions on $k$ arise. However, absence of vector structure along a 2-cycle due to ${\cal B}({\gamma})=\frac{1}{2}$ is an obstruction to the appearance of a single E1-instanton along $\gamma$ \cite{Witten:1999eg}. Here Dirac quantization would be violated for the charged zero modes between the E1-instanton and the magnetized D9-branes, which are discussed in Type I language in \cite{Blumenhagen:2006xt, Bianchi:2007fx, Cvetic:2007qj}.
\vskip 1mm For E1-instantons to contribute to holomorphic quantities like the superpotential or the gauge kinetic functions, they must be of type $O(1)$, i.e. carry bundles satisfying $c_1(V_a)=0$. As a result, the quantization condition \eqref{quant_gen} can only be satisfied for even rank of the bundle. This argument suggests that the structure established in section \ref{D5_T^2} for E1's wrapping genus-one curves can in fact be generalized, for instance to degree $k$ covers of isolated rational curves. Namely, for $k$ even the quantization condition is satisfied and a contribution to the superpotential seems to be possible.\footnote{Note that the degree $k$ cover can be thought of as the image of the map $z\rightarrow z^k$ which has two $k$-fold branch cuts at $z=0,\infty$.} \subsection{Smooth Calabi-Yau Type IIA orientifolds} \label{Subsec_SmoothIIA} Let us now discuss the mirror-dual side of type-IIA orientifolds on general Calabi-Yau manifolds. We will identify, in particular, the discrete freedom in the choice of complex structure moduli which is dual to the choice of orientifolds with and without vector structure in type I. \vskip 1mm Under mirror symmetry the pure world-sheet parity transformation $\Omega$ on a manifold $X$ is mapped to $\Omega{\cal R} (-1)^{F_L}$, where ${\cal R}$ denotes an anti-holomorphic involution on the mirror dual Calabi-Yau manifold ${\cal W}$. It acts on the holomorphic $(3,0)$ form $\Omega_3$ and the K\"ahler two-form $J$ as \begin{eqnarray} {\cal R} : \Omega_3 \rightarrow e^{2i\theta} \, \overline \Omega_3, \quad {\cal R} : J \rightarrow -J\; . \end{eqnarray} Without loss of generality we will set $\theta=0$ in what follows. The fixed point locus of ${\cal R}$ gives rise to an orientifold $O6$-plane, whose tadpole is canceled by the introduction of D6-branes wrapping special Lagrangian 3-cycles on the Calabi-Yau manifold \cite{Blumenhagen:2002wn}. These D6-branes are wrapped around homology 3-cycles $\pi_a$. \vskip 1mm Let us first review the mirror duals of Type I compactifications with zero $B$-field. The homology group $H_3({\cal W},{\mathbb Z})$ splits into an $\Omega{\cal R}$ even and odd part, $H_3({\cal W},{\mathbb Z})=H^+_3({\cal W},{\mathbb Z})\oplus H^-_3({\cal W},{\mathbb Z})$ \cite{Grimm:2004ua}. The even part contains real 3-cycles and the odd part completely imaginary ones. Moreover, $\Omega{\cal R}$ exchanges the holomorphic and the anti-holomorphic 3-forms, so that the volume form \begin{equation} \label{volume}{ {\rm vol}({\cal W})={i\over 8} \Omega_3\wedge \overline\Omega_3 } \end{equation} is anti-invariant, i.e. $\Omega{\cal R}:{\rm vol}({\cal W})\rightarrow -{\rm vol}({\cal W})$. Therefore, the only non-vanishing intersections are between 3-cycles from $H^+_3({\cal W},{\mathbb Z})$ and $H^-_3({\cal W},{\mathbb Z})$. \vskip 1mm One can always find a symplectic unimodular basis $(A_I,B_I)$ of $H_3({\cal W},{\mathbb Z})$, $I=0,\ldots,h_{2,1}$, where we take $A_0, B_i\in H^-_3({\cal W})$ and $B_0, A_i\in H^+_3({\cal W})$ for $i=1,\ldots, h_{2,1}$. The intersection matrix for this choice of basis has the simple form $A_I \cap B_J=\delta_{IJ}$ with all other intersection numbers vanishing. Note that this defines a Poincar\'e dual basis $(\alpha_0, \beta_i)$ of $H_-^3({\cal W},\mathbb Z)$ and $(\beta_0, \alpha_i)$ of $H_+^3({\cal W},\mathbb Z)$ such that \begin{eqnarray} \label{uni-basis} \int_{A^I} \alpha_J = \delta_{IJ}, \quad \quad \int_{B^I} \beta_J = -\delta_{IJ}\, ,\quad (I,J=0,\dots , h_{2,1}).
\end{eqnarray} In this basis, the holomorphic three-form $\Omega_3$ is expanded as \begin{eqnarray} \Omega_3 = \sum_I X_I \alpha_I- \sum_J F_J \beta_J \end{eqnarray} in terms of the periods \begin{eqnarray} X_I = \int_{A_I}\Omega_3, \quad\quad\quad F_J = \int_{B_J}\Omega_3. \end{eqnarray} Special geometry of the complex structure moduli space implies that the periods along $B_I$ can be expressed as derivatives of the prepotential ${\cal F}(U_i)$, where one defines the quotient of two periods \begin{eqnarray} U_i={X_i\over X_0}={\int_{A_i} \Omega_3\over \int_{A_0} \Omega_3}\; . \end{eqnarray} In terms of these one has \begin{eqnarray} {\partial {\cal F}\over \partial U_i}={\int_{B_i} \Omega_3\over \int_{A_0} \Omega_3}\; , \quad\quad {\cal F}_0 \equiv 2{\cal F} - \sum_i U_i {\partial {\cal F}\over \partial U_i} ={\int_{B_0} \Omega_3\over \int_{A_0} \Omega_3}\;. \end{eqnarray} The $U_i$ indeed transform under the orientifold action as $\Omega{\cal R}: U_i\rightarrow -\overline U_i$, and a consistent choice is $Re (U_i)=0$. \vskip 1mm As in \cite{Blumenhagen:2004xx} we expand the 3-cycles of the branes and the orientifold planes as \begin{eqnarray} \label{def_wrap} \pi_a &=&\sum_{I=0}^{h_{2,1}} ( q_{a,I}\, A_I -p_{a,I}\, B_I), \quad\quad \pi_{{\rm O}6} = {1\over 2} \left( L_0 B_0 + \sum_{i=1}^{h_{2,1}} L_i\, A_i \right). \end{eqnarray} The image brane has the expansion $\pi'_a = -q_{a,0} A_0 -p_{a,0} B_0 + \sum_{i=1}^{h_{2,1}} ( q_{a,i}\, A_i + p_{a,i}\, B_i)$. For a supersymmetric brane configuration the NS-NS tadpole cancellation condition takes the simple form \begin{eqnarray} \label{tadpole} -\sum_a N_a\, p_{a,0} {\cal F}_0 + \sum_{a,i} N_a\, q_{a,i}\,U_i \, = \left( L_0 {\cal F}_0 + \sum_i L_i\, U_i \right),\,\, \end{eqnarray} where the terms of zero and second order in the $U_i$ cancel due to the image branes. Equation \eqref{tadpole} encodes $h_{2,1}+1$ independent conditions on the wrapping numbers of the D6-branes. \vskip 1mm Mirror duality maps this type IIA orientifold with intersecting D6-branes and O6-plane to type I= type IIB/$\Omega$ compactifications with magnetized D9-branes. The type IIB K\"ahler moduli {} $T_i=-b_i+iJ_i$ are defined by expanding {} $ T= - {\cal B} + iJ$ as {} $\sum_i ( - b_i + i J_i)\, \omega_i = T^i\, \omega_i$, where $\omega_i$ denotes a basis of $H^2(X, {\mathbb Z})$. The mirror map exchanges the IIB moduli $T_i$ with the IIA complex structure moduli $U_i$ \cite{Candelas:1990rm,Candelas:1990qd}. The above choice $Re(U_i) =0$ is obviously dual to $b_i = 0$ on the type I side. Recall that in the type I case the possibility of half-integer NS-NS flux results from the periodic identification $ {\cal B}(\gamma) \simeq {\cal B}(\gamma) + 1$, which, together with ${\cal B} \rightarrow -{\cal B}$ under $\Omega$ allows for $ {\cal B}(\gamma) = 0$ or $\frac{1}{2}$ \cite{Bianchi:1991eu}. By mirror symmetry also the complex structure moduli $U_i$ enjoy a shift symmetry {} $U_i\simeq U_i-1$, so that the two discrete values {} $Re (U_i)=0,-{1\over 2}$ are allowed. The value {} $Re (U_i)=-{1\over 2}$ is the mirror dual of the type I orientifold without vector structure. \vskip 1mm Note that the value $Re(U_i)$ is still measured with respect to the old unimodular basis (\ref{uni-basis}). In general, with the tilt in the complex structure, this basis ceases to take values in $H^3({\cal W}, {\mathbb Z})$, but rather is defined only in $H^3({\cal W}, {\mathbb Q})$. 
Of course one can now define a new basis of $H^3({\cal W}, {\mathbb Z})$, \footnote{For the toroidal case, this basis would be the one constructed from the fundamental cycles $e'_1$ and $e'_2$ in figure \ref{figrrcancel}.} but this basis does not split into even and odd parts under $\Omega {\cal R}$. As on the torus one can choose to keep the nice transformation properties of the basis (\ref{uni-basis}) and formally expand the three-cycles wrapped by the D-branes as in (\ref{def_wrap}). The so-defined wrapping numbers are subject to certain constraints which ensure that the object $\pi$ is a bona fide cycle. \vskip 1mm To find the correct description we use the fact that mirror symmetry exchanges the central charges of a B-type brane carrying a holomorphic bundle $V_a$ on a Calabi-Yau $X$ and the dual A-type brane wrapping the sLag $\pi_a$ on the mirror manifold ${\cal W}$. Recall that the central charges are defined as \cite{Aspinwall:2004jr} \begin{eqnarray} \label{centralZ} Z_B = \int_X e^{{\cal B} - i J} \, {\rm ch}(V_a) \, \sqrt {Td(T_X)}, \quad\quad Z_A = \int_{\pi_a} \Omega_3. \end{eqnarray} The expression for $Z_B$ depends on the gauge field and the $B$-field only via the gauge invariant combination ${\cal F} = F+B$. \footnote{To comply with the convention used in the discussion of toroidal models we have chosen ${\cal F} = F+B$ (as opposed to $F-B$) to be the gauge invariant combination. Consequently we have defined $Z_B$ in terms of $e^{{\cal B} - i J}$, rather than $e^{-({\cal B} + i J)}$ as in \cite{Aspinwall:2004jr}.} One now expands $Z_B$ and $Z_A$ along $H^2(X, {\mathbb Z})$ and $H^3({\cal W})$ and uses the mirror map between $T_i$ and $U_i$ to express the 'wrapping numbers' $p_I$ and $q_I$ of the A-brane in terms of the topological data of the mirror dual bundle and the Todd class of $X$. To work this out explicitly requires the form of the prepotential ${\cal F}(U_i)$. As already anticipated one has here two choices: either work with the bundle $V_a$ as in eq. (\ref{centralZ}) with $c_1(V_a) \in H^2(X,{\mathbb Z})$. The corresponding wrapping numbers for the mirror dual A-brane will then be the ones with respect to the tilted basis taking values in $H_3({\cal W}, {\mathbb Z})$. Alternatively one absorbs the $B$-field into the gauge bundle by writing \begin{eqnarray} Z_B= \int_X e^{- i J} \, {\rm ch}(\widehat V_a) \, \sqrt {Td(T_X)}. \end{eqnarray} This will give us the effective fractional wrapping numbers along the unimodular basis valued in $H^3({\cal W},{\mathbb Q})$. Let us treat the two cases in turn. First, one can expand $Z_B$ as \cite{Douglas:2006jp} \begin{eqnarray} Z_B = Q^6 - T\, Q^4 + \frac{1}{2} T^2 \, Q^2 - \frac{1}{6} \, T^3 \, Q^0 \end{eqnarray} with \begin{eqnarray} && Q^0 = {\rm rk} ({\cal E}), \quad \quad Q^2= c_1({\cal E}) , \quad\quad Q^4 = {\rm ch}_2({\cal E})+ \frac{rk ({\cal E})}{24}\, c_2(T_X), \nonumber\\ && Q^6 = {\rm ch}_3({\cal E}) + \frac{1}{24}\, c_1({\cal E})\, c_2(T_X). \end{eqnarray} where ${\cal E}$ collectively denotes the bundles $V_a$ or $\widehat V_a$, depending on whether we absorb the $B$-flux in the field $T=-b+iJ$ or the gauge bundle. The analogous expansion for $Z_A$ reads \begin{eqnarray} Z_A = \int_{\pi} \Omega_3 = X_0 ( q_0 + \sum_i q_i U_i - \sum_i p_i \frac{\partial {\cal F}}{\partial U_i} - p_0 {\cal F}_0). \end{eqnarray} Now one uses the mirror map to identify $T_i$ with $U_i$. 
In the large volume limit the prepotential ${\cal F}(T)$ takes the form \begin{eqnarray} {\cal F}(T) = - \frac{1}{6} T^3 + \frac{1}{2} A \, T^2 - \frac{1}{24}\, c_2(T_X) \, T. \end{eqnarray} This classical expression receives worldsheet instanton corrections away from the large volume limit. For a discussion of the terms linear and quadratic in $T$, which do not enter the tri-linear couplings, we refer e.g. to \cite{Hosono:1994av}. Using this result and comparing the expansions of $Z_B$ and $Z_A$ leads to \cite{Douglas:2006jp} \begin{eqnarray} \label{wrapNum_gen} && (p_a)_0 = {\rm rk}({\cal E}), \quad\quad \sum_i (p_a)_i \, \omega_i = c_1({\cal E}), \\ && q_0 = {\rm ch}_3({\cal E}), \quad\quad \sum_i (q_a)_i \, \widetilde \omega_i = - \left({\rm ch}_2({\cal E}) + \frac{{\rm rk}({\cal E})}{12} c_2(T_X) \right) + c_1({\cal E}) A \nonumber. \end{eqnarray} Here $\widetilde \omega_i$ are the elements of $H^4(X, {\mathbb Z})$ dual to $\omega_i$. Again, the wrapping numbers with respect to the tilted geometry with $Re(U_i)=\frac{1}{2}$ correspond to ${\cal E}=V_a$. Note that even in this case, with the overall normalization chosen, the quantities $q_I$ need not be integer-valued even though they are integer on $T^2 \times T^2 \times T^2$. By contrast, if we stick to the unimodular basis (\ref{uni-basis}), we insert ${\cal E}=\widehat V_a$, and obviously even the corresponding $p_i$ can be half-integer. This generalizes the effective wrapping numbers constructed from the elementary winding numbers $(n_i,\hat m_i)$ for $T^2 \times T^2 \times T^2$ as described in the appendix. \vskip 1mm The structure presented in this section is rather formal. For a concrete Calabi-Yau manifold and a specified anti-holomorphic involution, finding the nice symplectic basis used in this section is not an easy task. To really see that these two discrete choices in the complex structure moduli space are indeed possible, we will now discuss one non-trivial example in some more detail. \subsection{Example: The Quintic} While appendix A provides some details on the straightforward example of a toroidal orientifold, here we would like to discuss how the framework summarized in the last section applies to the simplest genuine Calabi-Yau, i.e. the quintic. \vskip 1mm We consider the type I string compactified on the quintic, i.e. ${X}=\mathbb{P}_4[5]$, which has Hodge numbers $(h_{21},h_{11})=(101,1)$ and whose complexified K\"ahler modulus we denote as {} $T=-{\cal B}+iJ$. On the dual side we get a type IIA orientifold on the mirror manifold ${\cal W}=\mathbb{P}_4[5]/\mathbb Z_5^3$. The sole complex structure modulus $\psi$ is visible in the general form of the hypersurface constraint surviving the $\mathbb Z_5^3$ orbifold \begin{eqnarray} Z_1^5+Z_2^5+Z_3^5+Z_4^5+Z_5^5 - (5\psi)\, Z_1\, Z_2\, Z_3\, Z_4\, Z_5 =0\; . \end{eqnarray} By a coordinate transformation like $Z_1\rightarrow \alpha Z_1$ with $\alpha=\exp(2\pi i/5)$ one sees that $\psi$ and $\alpha\psi$ define equivalent manifolds, so that only the cone $0\le \arg (\psi) < 2\pi /5$, or equivalently the variable $z=(5\psi)^{-5}$, provides good coordinates on the complex structure moduli space. The fundamental region for $\psi$ gets further reduced by dividing by more general coordinate transformations \cite{Candelas:1990rm}.
We assume that the type IIA anti-holomorphic involution acts just by complex conjugation ${\cal R}: Z_i\rightarrow \overline Z_i$, so that the two half-lines $\arg (\psi)=0, \pi/5$ are the two real one-dimensional components of the complex structure moduli space of the orientifold model. \vskip 1mm In order to see how this is related to the discrete choices of the $B$-field in the mirror dual type I description, we need to know the mirror map. Luckily, for the quintic this map is explicitly known and we just need to copy and interpret the results \cite{Candelas:1990rm,Candelas:1990qd}. In the region $|\psi|>1$, $T$ is mapped to a quotient of periods \begin{eqnarray} U={\Phi_1\over \Phi_0} \end{eqnarray} with the periods solving the Picard-Fuchs equation given by \begin{eqnarray} \Phi_0=\sum_{n=0}^\infty { (5n)!\over n!^5} {1\over (5\psi)^{5n}}, \quad \quad \Phi_k=-{5\over (2\pi i)^k} \left[\log (5\psi)\right]^k\, \Phi_0 + \tilde\Phi_k ( \psi )\quad\quad {\rm for}\ k=1,2,3\; ,\nonumber \end{eqnarray} where, like $\Phi_0$, $\tilde\Phi_k ( \psi )$ is an infinite series in the variable $\psi^{-5}$. The complex structure modulus $U$ can eventually be expressed in terms of $\psi$ as \begin{eqnarray} U=-{5\over 2\pi i}\left[ \log (5\psi) - {1\over \Phi_0}\sum_{m=0}^\infty {(5m)!\over (m!)^5\, (5\psi)^{5m}} \left( \Psi(1+5m)-\Psi(1+m) \right)\right], \end{eqnarray} where $\Psi(x)$ denotes the digamma function. Now it is clear that $\psi\simeq \psi\, e^{2\pi i N/5}$ is mapped to the periodicity {} $U\simeq U-N$ and that ${\cal R}: U\rightarrow -\overline U$. In addition, the half line $\arg (\psi)=0$ is mapped to $T=U=i J$ with $J\ge J_0\simeq 1.21$. The other half-line $\arg (\psi)=\pi/5$ is mapped to {} $T=U=-1/2+i J$. Note that $\psi=1$ resp. $U=iJ_0$ is a singular point in the complex structure moduli space, where the Calabi-Yau manifold develops a conifold singularity. To describe the other side of the singular point, i.e. in the region $|\psi|<1$, one is analytically continuing the periods to this region. Note that in the mirror dual type I model this region corresponds to the Landau-Ginzburg phase of the linear sigma model. In the region around the Gepner point $\psi=0$ the mirror map has the following expansion {} \begin{eqnarray} U=-{1\over 2} +{i\over 2}\left[ \cot\left({\pi\over 5}\right)+ {\Gamma^4\left( {4\over 5}\right)\Gamma\left( {2\over 5}\right)\over \Gamma\left( {1\over 5}\right)\Gamma^4\left( {3\over 5}\right)} \left(\cot\left({\pi\over 5}\right)-\cot\left({2\pi\over 5}\right) \right)e^{\pi i\over 5}\psi + O(\psi^2) \right]\!. \end{eqnarray} Suppressing a discussion of branch cuts and of the fundamental region of $\psi$, which can be found in the literature \cite{Candelas:1990rm}, we realize that the Gepner point $\psi=0$ corresponds to {} $T=U=-{1\over 2} +i \cot {\pi\over 5} $. Therefore, the Gepner point lies on the ${\cal B}=1/2$ branch, i.e. in the Type I model it is on the same branch in K\"ahler moduli space as the orientifolds without vector structure. The structure of the moduli space is shown in figure \ref{figquint} (essentially taken from \cite{Aspinwall:1994ay}, see also \cite{Brunner:2004zd}). 
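As a small numerical cross-check, relying only on the truncated series written above, the value $J_0\simeq 1.21$ quoted for the conifold point can be reproduced directly: at $\psi=1$ the bracket in the expression for $U$ is real, so that $U=iJ_0$ with $J_0=\frac{5}{2\pi}\,\big[\log 5 - \frac{1}{\Phi_0}\sum_{m} \frac{(5m)!}{(m!)^5\, 5^{5m}}\,\big(\Psi(1+5m)-\Psi(1+m)\big)\big]$. A minimal sketch of this evaluation (using $\Psi(1+5m)-\Psi(1+m)=H_{5m}-H_m$ and a plain truncation of the sums) reads:

\begin{verbatim}
# Evaluate J_0 = Im U at the conifold point psi = 1 by truncating the series
# for Phi_0 and for the digamma sum.  The coefficients (5m)!/(m!^5 5^{5m})
# fall off like 1/m^2, so a few thousand terms are sufficient here.
import math

def J0(nmax=4000):
    a, Phi0, S = 1.0, 1.0, 0.0   # a = (5m)!/(m!^5 5^(5m)), built iteratively
    H, H5 = 0.0, 0.0             # harmonic numbers H_m and H_{5m}
    for m in range(1, nmax + 1):
        ratio = 1.0
        for k in range(5 * m - 4, 5 * m + 1):
            ratio *= k
            H5 += 1.0 / k
        a *= ratio / (m ** 5 * 5 ** 5)
        H += 1.0 / m
        Phi0 += a
        S += a * (H5 - H)
    return (5.0 / (2.0 * math.pi)) * (math.log(5.0) - S / Phi0)

print(round(J0(), 2))   # -> 1.21, the value J_0 quoted in the text
\end{verbatim}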
\begin{figure}[ht] \centering \hspace{40pt} \includegraphics[width=0.8\textwidth]{figquintic.eps} \begin{picture}(100,1) \put(-90,160){$\psi$} \put(4,89){$_{\psi=1 \ {\rm conifold}}$} \put(-44,89){$_{\rm Gepner}$} \put(69,96){$_{{\cal B}=0}$} \put(53,145){$_{{\cal B}=1/2}$} \put(129,160){$U$} \put(240,95){$J_0$} \put(204,102){$_{\rm conifold}$} \put(181,69){$_{\rm Gepner}$} \put(230,172){$_{{\cal B}=0}$} \put(185,172){$_{{\cal B}=1/2}$} \end{picture} \vspace{-10pt} \caption{Complex structure moduli space for the mirror quintic ${\cal W}$ in the $\psi$- and the $U$-plane. The blue lines indicate the two discrete branches after the orientifold projection, related to ${\cal B}=0,1/2$ in the mirror dual Type I model. \label{figquint}} \end{figure} For the model discussed here, i.e. the Type I string on the quintic resp. the Type IIA orientifold on the mirror quintic, the Gepner model orientifold was first discussed in \cite{Blumenhagen:1998tj} and featured a maximal-rank tadpole-canceling solution with gauge group $SO(20)\times SO(12)$. \section{Outlook} In this paper we have reconsidered Type I compactifications without vector structure. We have offered several equivalent descriptions that clarify some longstanding puzzles. In particular we have shown the consistency of a 3-generation, non-supersymmetric but tachyon-free GUT model proposed by one of us (C.B.) \cite{Bachas:1995ik} a long time ago. The possibility of relating ``half-integer'' wrapping numbers in the Type IIA orientifold description to a quantized NS-NS B-field opens new possibilities for model building and suggests a re-analysis of toroidal compactifications with oblique fluxes \cite{Antoniadis:2004pp, Bianchi:2005yz, Antoniadis:2005nu, Bianchi:2005sa} with a view to stabilizing off-diagonal moduli. Their mirror Type IIA description would require ``co-isotropic'' D-branes, {\it i.e.} wrapped rotated D-branes supporting non-trivial magnetic fields associated with bundles with(out) vector structure \cite{Font:2006na,Anastasopoulos:2006hn}. We have not explicitly considered models with different kinds of oppositely charged but mutually supersymmetric orientifold planes \cite{Sugimoto:1999tx, Hanany:2000fq, Bergman:2001rp, Dudas:2001wd} that lead to models without D-branes, dual to Type II models with massive R-R sector \cite{Vafa:1995gm, Angelantonj:1996mw}. Though an interesting playground in string dualities \cite{Bianchi:1998vq}, at first sight this kind of model is less appealing because of the very low rank of the gauge group and the related difficulty in accommodating chiral fermions. Although model C is non-supersymmetric, it can be made non-tachyonic by displacing the mutually non-supersymmetric stacks along the directions where they are parallel. Moreover, one can still envisage the possibility of introducing stacks of magnetized branes mutually supersymmetric in pairs but not sharing any common global susy as a whole, see e.g. \cite{Kokorelis:2002ns, Axenides:2003hs, Floratos:2006hs, Emparan:2006it}. The presence of non globally supersymmetric magnetic fields mimics the presence of lower-dimensional D-branes with opposite R-R charges and may greatly help relax the stringent tadpole conditions on the rank of the Chan-Paton group\footnote{As a `caricature' consider an (alas tachyonic) Type I model in $D=10$ with $N+16$ D9-branes and $N$ $\overline{\rm D9}$-branes with chiral fermions and gauge symmetry `enhancement'.} and allow for further interesting lines of investigation.
\vskip 1cm {\noindent {\Large \bf Acknowledgements}} \vskip 0.5cm \noindent We thank C.~Angelantonj, M.~Axenidis, V.~Braun, T.~Brelidze, M.~Cveti{\v c}, J.~Evslin, E.~Floratos, A.~Klemm, C.~Kokorelis, R.~Minasian, R.~Richter, A.~Sagnotti and C.~Timirgaziu for useful conversations. This work has been supported in part by the European Community Human Potential Program under contracts MRTN-CT-2004-005104 and MRTN-CT-2004-512194, by the INTAS grant 03-516346, by MIUR-COFIN 2003-023852, by NATO PST.CLG.978785, by DOE grant EY-76-02-3071 and by the Excellence Cluster ``The Origin and the Structure of the Universe'' in Munich. C.B. thanks the Arnold-Sommerfeld-Center in Munich, M.B. thanks the Ecole Normale Sup\'erieure, R.B. thanks the University of Bonn, D.L. thanks the University of Pennsylvania and T.W. thanks the University of Wisconsin, Madison, for hospitality during part of this work. \begin{appendix} \section{Toroidal Example} In this appendix we demonstrate the observations of section \ref{Subsec_SmoothIIA} for the simple example of compactifications on $T^2\times T^2\times T^2$. Here we have 8 homology 3-cycles \begin{eqnarray} \label{untiltbasis} & & A_0=(0,1)\otimes (0,1)\otimes (0,1), \quad\quad\quad \,\, \, B_0= (-1,0)\otimes (-1,0)\otimes (-1,0), \quad \nonumber\\ & & A_1= (-1,0)\otimes (0,-1)\otimes (0,-1), \quad B_1=(0,-1)\otimes (-1,0)\otimes (-1,0), \quad \\ & & A_2= (0,-1)\otimes (-1,0)\otimes (0,-1), \quad B_2=(-1,0)\otimes (0,-1)\otimes (-1,0), \quad \nonumber\\ & & A_3= (0,-1)\otimes (0,-1)\otimes (-1,0), \quad B_3=(-1,0)\otimes (-1,0)\otimes (0,-1). \quad \nonumber \end{eqnarray} They satisfy $A_I \cap B_J = \delta_{IJ}$. We also introduce the dual basis $(\alpha_I, \beta_J)$ with $\int \alpha_I \wedge \beta_J = \delta _{IJ}$, \begin{eqnarray} \label{Basis^3} & &\alpha_0 = dy^1 \wedge dy^2 \wedge dy^3, \quad\quad\,\, \, \, \, \beta_0 = dx^1 \wedge dx^2 \wedge dx^3, \nonumber \\ && \alpha_1 = - dx^1 \wedge dy^2 \wedge dy^3, \quad\quad \beta_1 = dy^1 \wedge dx^2 \wedge dx^3, \\ && \alpha_2 = -dy^1 \wedge dx^2 \wedge dy^3, \quad\quad \beta_2 = dx^1 \wedge dy^2 \wedge dx^3, \nonumber \\ && \alpha_3 = -dy^1 \wedge dy^2 \wedge dx^3, \quad\quad \beta_3 = dx^1 \wedge dx^2 \wedge dy^3 \nonumber. \end{eqnarray} The orientifold plane is chosen along the $x$-direction in each $T^2$ so that indeed $A_0, B_i \in H^-_3(T^6, {\mathbb Z})$ and $B_0, A_i \in H^+_3(T^6, {\mathbb Z})$. The holomorphic coordinates \begin{eqnarray} dz^i =- U_i dx^i + dy^i, \quad\quad d\overline z^i = - \overline U_i dx^i + dy^i \end{eqnarray} are determined by the complex structure moduli $U_i$. We take $U_i = (- b_i + i u_i)$ with $u_i=\frac{R_x^i}{R_y^i}$ in terms of the radii of the elementary 1-cycles. In the symplectic basis (\ref{Basis^3}), the holomorphic three-form $\Omega_3 = dz^1 \wedge dz^2 \wedge dz^3$ enjoys the expansion \begin{eqnarray} \Omega_3 = \alpha_0 + \sum_{i=1}^3 (U_i) \alpha_i + \frac{1}{2} \sum_{i\neq k \neq j} \, (U_i U_j) \beta_k - U_1 U_2 U_3 \beta_0. \end{eqnarray} Note that indeed the ratio of periods $\frac{\int_{A_i}\Omega_3}{\int_{A_0}\Omega_3} = U_i$. The orientifold rule $\Omega {\cal R}: U_i \rightarrow -\overline U_i$ together with the identification $U_i \simeq U_i -1$ translates into \begin{eqnarray} U_i= - \overline U_i - n. \end{eqnarray} Indeed, the values $U_i=i$ and $U_i=i- \frac{1}{2}$ of the untilted and tilted case satisfy this with $n=0$ and $n=1$, respectively.
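As a quick symbolic check of the expansion of $\Omega_3$ given above (a small illustrative sketch, assuming only the basis (\ref{Basis^3}) and $dz^i=-U_i dx^i + dy^i$), one can expand the wedge product monomial by monomial and compare with the coefficients of the $\alpha_I$ and $\beta_I$:

\begin{verbatim}
# Expand Omega_3 = dz^1 ^ dz^2 ^ dz^3 with dz^i = -U_i dx^i + dy^i and compare
# with the expansion quoted above.  Each monomial dw^1 ^ dw^2 ^ dw^3, with
# dw^i in {dx^i, dy^i}, is already ordered torus by torus, so no reordering
# signs occur in the expansion.
import itertools, sympy as sp

U1, U2, U3 = sp.symbols('U1 U2 U3')
dz = [{'x': -U, 'y': sp.Integer(1)} for U in (U1, U2, U3)]

coeff = {w: sp.expand(dz[0][w[0]] * dz[1][w[1]] * dz[2][w[2]])
         for w in itertools.product('xy', repeat=3)}

# Coefficients predicted by the quoted expansion of Omega_3, rewritten in
# dx/dy monomials using the signs of the basis (alpha_I, beta_I) above.
expected = {('y','y','y'): sp.Integer(1),                      # alpha_0
            ('x','y','y'): -U1, ('y','x','y'): -U2, ('y','y','x'): -U3,
            ('y','x','x'): U2*U3, ('x','y','x'): U1*U3, ('x','x','y'): U1*U2,
            ('x','x','x'): -U1*U2*U3}                          # beta_0 term
print(all(sp.simplify(coeff[w] - expected[w]) == 0 for w in coeff))  # True
\end{verbatim}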
One way to describe consistent 3-cycles on the torus is by introducing effective wrapping numbers $q_I, p_I$ as in equ. (\ref{def_wrap}) with respect to the \emph{untilted} basis \ref{untiltbasis}, which, for $b_i =1/2$, takes values only in $H^2(T^6, {\mathbb Q})$. For factorizable branes these are given in terms of the wrapping numbers along the horizontal and vertical axes, $n_i$ and $\tilde m_i = m_i + b_i n_i$, by \begin{eqnarray} \label{wrap_T} && p_0 = n^1 n^2 n^3, \quad p_1 = \hat m^1 n^2 n^3, \quad p_2 = n^1 \hat m^2 n^3, \quad p_3 = n^1 n^2 \hat m^3, \\ && q_0 = \hat m^1 \hat m^2\hat m ^3, \quad q_1 = - n^1 \hat m^2 \hat m^3, \quad q_2 = -\hat m^1 n^2 \hat m^3, \quad q_3 =- \hat m^1 \hat m^2 n^3.\nonumber \end{eqnarray} This is in agreement with the general expression (\ref{wrapNum_gen}). \end{appendix} \clearpage
\section{Introduction} In recent years, compelling dynamical evidence has indicated that supermassive black holes (SMBHs) are ubiquitous in galactic nuclei (e.g., Ferrarese \& Ford 2005). According to the standard modern theory of cosmological structure formation, the Cold Dark Matter (CDM) paradigm (e.g., Blumenthal et al. 1984), galaxies in the Universe grow through a complex process of continuous mergers and agglomeration of smaller systems. Thus, if more than one of the protogalactic fragments contained a SMBH, the formation of SMBH binaries during galaxy assembly will be almost inevitable (e.g., Begelman et al. 1980). In a purely stellar background, as the binary separation decays, the effectiveness of dynamical friction slowly declines, and the pair can become tightly bound via three-body interactions, namely by capturing stars that pass close to the black holes and ejecting them at much higher velocities (e.g., Milosavljevi{\' c} \& Merritt 2001). If the hardening continues to sufficiently small relative distances, gravitational wave emission becomes the dominant source of orbital energy loss and the two SMBHs may coalesce in less than a Hubble time. However, the binary orbit may stop shrinking before gravitational radiation becomes relevant as there is a finite supply of stars on intersecting orbits (e.g., Berczik et al. 2005). During the assembly of galaxies, especially at high $z$, their SMBHs likely evolve within gas-rich environments. Merging systems such as the Ultraluminous Infrared Galaxies (ULIRGs) NGC 6240 and Arp 220 harbor large concentrations of gas, in excess of $10^9 {\rm M_\odot}$, at their center, in the form of either a turbulent irregular structure or of a kinematically coherent, rotating disk (e.g., Downes \& Solomon 1998). Massive rotating nuclear disks of molecular gas are also ubiquitous in galaxies that appear to have just undergone a major merger, such as Markarian 231 (Davies et al. 2004). Gas dynamics may thus profoundly affect the pairing of SMBHs both during and after their host galaxies merge (e.g., Escala et al. 2004; Kazantzidis et al. 2005). Recent simulations of the orbital evolution of SMBHs within an equilibrium, rotationally-supported, gaseous disk have shown that dynamical friction against the gaseous background leads to the formation of a tightly bound SMBH binary with a final separation of $<1$~pc in about $10^7$~yr (Escala et al. 2005; Dotti et al. 2006; Dotti et al., these proceedings). Here we review the results of high-resolution $N$-body + smoothed particle hydrodynamics (SPH) simulations of mergers between galaxies with central SMBHs having enough dynamic range to follow the black holes from hundreds of kiloparsecs down to sub-parsec scales, bridging more than ten orders of magnitude in density. \section{Methods} The aim of this study is to investigate the orbital evolution and pairing of SMBHs in multi-scale galaxy mergers in the hydrodynamical regime. A thorough description of our methods is presented in Kazantzidis et al. (2005) and Mayer et al. (2007, hereafter M07) and we summarize them here. First, we started with two identical spiral galaxies, comprising a disk of stars and gas with an exponential surface density distribution, a spherical, non-rotating Hernquist bulge, and a spherical and isotropic NFW dark matter halo. We adopted parameters from the Milky Way model A1 of Klypin et al. (2002) to initialize the galaxy models. 
Specifically, the dark matter halo had a virial mass of $M_{\rm vir}=10^{12}{\rm M_\odot}$, a concentration parameter of $c=12$, and a dimensionless spin parameter of $\lambda=0.031$. The mass, thickness and resulting scale length of the disk were $M_d=0.04 M_{\rm vir}$, $z_{0}=0.1 R_d$, and $R_d=3.5$~kpc, respectively. The bulge mass and scale radius were $M_b=0.008 M_{\rm vir}$ and $a=0.2 R_d$, respectively. The halo was adiabatically contracted to respond to the growth of the disk and bulge resulting in a model with a central total density slope close to isothermal. The galaxy models were consistent with the stellar mass Tully-Fisher and size-mass relations. A softened particle of mass $2.6 \times 10^6 {\rm M_\odot}$ was placed at the center of the bulge to represent a SMBH. This choice satisfies the $M_{\rm BH}-\sigma$ relation (Kazantzidis et al. 2005). Lastly, the gas fraction, $f_{\rm g}$, was chosen to be $10\%$ of the total disk mass. We used a standard cooling function for a primordial mixture of atomic hydrogen and helium. We also shut off radiative cooling at temperatures below $2 \times 10^{4}$~K that is a factor of $\sim 2$ higher than the temperature at which atomic radiative cooling would drop sharply due to the adopted cooling function. With this choice we effectively take into account non-thermal, turbulent pressure to model the warm ISM of a real galaxy. The galaxies were placed on parabolic orbits with pericentric distances that were 20\% of the halo virial radius ($r_{\rm peri} \sim 50$~kpc), typical of cosmological mergers (e.g., Khochfar \& Burkert 2006). The initial separation of the halo centers was twice their virial radii and their initial relative velocity was determined from the corresponding Keplerian orbit of two point masses. Each galaxy consisted of $10^5$ stellar disk particles, $10^5$ bulge particles, and $10^6$ dark matter particles. The gas component was represented by $10^5$ particles. We employed a gravitational softening of $\epsilon = 100$~pc for both the dark matter and baryonic particles of the galaxy, and $\epsilon=30$~pc for the particle representing the SMBH. During the interaction between the two galaxies, the relative separation of the black holes followed that of the galactic cores in which they were embedded. The merging galaxies approached each other several times as they sank into one another via dynamical friction. After $\sim 5$~Gyr, the dark matter halos had nearly merged and the two baryonic cores, separated by about $6$~kpc, continued to spiral down. As much as 60\% of the gas originally present in the galaxies was funneled to the inner few hundred parsecs of each core by tidal torques and shocks occurring in the repeated fly-bys between the two galaxies (e.g., Barnes \& Hernquist 1996). Each SMBH was embedded in a rotating gaseous disk of mass $\sim 4 \times 10^8 {\rm M_\odot}$ and size of a few hundred parsecs which was produced by the gas inflow. Second, just before the last pericentric passage of the two merging galaxies, we adopted the technique of particle splitting to increase the gas mass resolution in the central region of the computational volume. By selecting a large enough volume for the fine grained region one can avoid dealing with spurious effects at the coarse/fine boundary, such as two-body heating due to scattering by massive particles of the low-resolution region. 
We selected the volume of the fine-grained region to be large enough to guarantee that the dynamical timescales of the entire coarse-grained region were much longer than those corresponding to the refined region. Specifically, we performed the splitting in a volume of $30$~kpc in radius at the point where the two galaxy cores were separated by only $6$~kpc. The new particles were randomly distributed according to the SPH smoothing kernel within a volume of size $\sim h_p^3$, where $h_p$ is the smoothing length of the parent particle. The velocities of the child particles were equal to those of their parent particle (ensuring momentum conservation) and so was their temperature, while each child particle was assigned a mass equal to $1/N_{\rm split}$ of the mass of the parent particle, where $N_{\rm split}$ is the number of child particles per parent particle. The mass resolution in the gas component was originally $2 \times 10^4 {\rm M_\odot}$ and became $\sim 3000 {\rm M_\odot}$ after splitting, for a total of $\sim 1.5$ million SPH particles. For the standard calculations, the softening of the gas particles was set to $2$~pc. We note that the local Jeans length was always resolved by $10$ or more SPH smoothing kernels (e.g., Bate \& Burkert 1997) in the highest density regions of the refined simulations. The softening of the black holes was also reduced from $30$~pc to $2$~pc, while the softening of dark matter and stellar particles remained $100$~pc as they were not split in order to limit the computational burden. Therefore, stellar and dark matter particles essentially provide a smooth background potential, while the computational effort focuses on the gas component, which dominates the nuclear region by mass. All simulations were performed with GASOLINE, a multi-stepping, parallel Tree-SPH $N$-body code (Wadsley et al. 2004). The radiation physics in the refined simulations was modeled via an ``effective'' equation of state that accounts for the net balance of radiative heating and cooling. The value of the adiabatic index, $\gamma$, namely the ratio of specific heats, is the parameter that controls the degree of dissipation in the gas. While ideally the various cooling and heating mechanisms should be followed directly, this simple scheme allows us to investigate the effect of thermodynamics on the structure of the merger remnant and on the orbital decay of the black holes. Lastly, we tested that the transition between the two thermodynamic schemes used in the different parts of the simulation did not introduce spurious fluctuations in the hydrodynamical variables (M07). \section{Effects of thermodynamics on the orbital decay of SMBHs} Calculations that include radiative transfer show that the thermodynamic state of a solar metallicity gas heated by a starburst can be well approximated by an ideal gas with adiabatic index $\gamma=1.3-1.4$ over a wide range of densities (Spaans \& Silk 2000). For the standard refined simulation discussed in the present work, we adopted $\gamma=7/5$. \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{ierapetra1.eps}} \caption{\footnotesize Relative separation of the two SMBHs as a function of time during the last stage of the standard, multi-scale merger simulation with $\gamma=7/5$. This value of $\gamma$ approximates well the balance between radiative heating and cooling in a starburst galaxy.
The two peaks at scales of tens of parsecs at around $t=5.1213$~Gyr correspond to the end of the phase during which each black hole is still embedded in a distinct gaseous core. The inset shows the details of the last part of the orbital evolution, which takes place inside the nuclear disk arising from the merger of the two galactic cores. A SMBH binary forms rapidly, less than a million years after the coalescence of the two galactic nuclei, owing to the drag exerted by the surrounding dense gaseous nuclear disk. \label{fig1}} \end{figure} \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{ierapetra2.eps}} \caption{\footnotesize Relative separation of the two SMBHs as a function of time in two multi-scale, merger simulations with different prescriptions for the gas thermodynamics. The stiffer equation of state ($\gamma=5/3$) corresponds to a situation where radiative cooling is completely suppressed by a strong heating source (e.g., AGN feedback) and causes the hardening process to significantly slow down. The orbital decay and pairing of SMBHs depends sensitively on the details of gas thermodynamics. } \label{fig2} \end{figure} The gaseous cores finally merge at $t \sim 5.12$~Gyr, forming a single nuclear disk with a mass of $3\times 10^9 {\rm M_\odot}$ and a size of $\sim 75$~pc. The two SMBHs are embedded in this nuclear disk. The disk is surrounded by several rings and by a more diffuse, rotationally-supported envelope extending out to more than a kiloparsec. A background of dark matter and stars distributed in a spheroid is also present but the gas component is dominant in mass within a few hundred pc from the center. From here on the orbital decay of the black holes is dominated by dynamical friction against this fairly dense gaseous disk. The black holes are on eccentric orbits and move with a speed of $v_{\rm BH} \sim 200-300\>{\rm km}\,{\rm s}^{-1}$ relative to the disk's center of mass. The typical ambient sound speed is $v_s \sim 50\>{\rm km}\,{\rm s}^{-1}$. The relative orbit of the SMBH pair decays from about $40$~pc to a few parsecs, our resolution limit, in less than a million years after the merger of the two galaxies (Figure~\ref{fig1}). At this point the two black holes are gravitationally bound to each other, as the gas mass enclosed within their separation is less than the mass of the binary. Dynamical friction against the stellar background would bring the two black holes this close only on a much longer timescale, $\sim 3 \times 10^7$~yr (Section 4). Such a short sinking timescale due to the gas is expected because of the high densities in the nuclear disk and because the decay occurs in the supersonic regime with $v_{\rm BH} > v_s$ (Ostriker 1999). The subsequent hardening of the binary will depend on the details of gasdynamics and other processes at scales below the adopted resolution (Sections 6 \& 7). It is interesting to investigate the effect of the adopted equation of state on the orbital decay of the black holes. In particular, we considered a smaller degree of dissipation in the gas and increased $\gamma$ to $5/3$. This value of $\gamma$ would correspond to a purely adiabatic gas, or equivalently to a situation where radiative cooling is completely suppressed. The radiative feedback from an active galactic nucleus (AGN) is a good candidate for such a strong heating source. In this case, we find that a turbulent, pressure supported cloud of a few hundred parsecs arises from the merger rather than a disk. 
The nuclear region is still gas dominated, but the gas mass is lower within $100$~pc relative to the $\gamma=7/5$ case. This is because of the adiabatic expansion of the gas following the final shock when the two cores merged. Figure~\ref{fig2} demonstrates that the hardening process is significantly suppressed when a stiffer equation of state with $\gamma=5/3$ is adopted. In this case, the black holes do not form a binary and maintain a relative separation of $\sim 100-150$~pc well after the binary forms in the simulation with $\gamma=7/5$. The density of the gas in the nuclear region surrounding the SMBHs is a factor of $\sim 5$ lower compared to that in the $\gamma=7/5$ case, the sound speed is $v_s \sim 100\>{\rm km}\,{\rm s}^{-1}$, and the black hole velocity is $v_{\rm BH} \lesssim 100\>{\rm km}\,{\rm s}^{-1}$. The lower density and, to a lesser extent, the fact that the two black holes move subsonically ($v_{\rm BH} \lesssim v_s$) rather than supersonically greatly reduce the drag due to the gas distribution when $\gamma=5/3$ (Ostriker 1999). In Section~5, we briefly discuss how the structure and kinematics of the nuclear regions of merger remnants in the simulations with different values of $\gamma$ compare to those of observed systems. Given the sensitivity of the black hole pairing process to the value of $\gamma$, a scenario in which the black holes rapidly form a binary owing to dynamical friction against the gas would require that AGN feedback has negligible thermodynamical effects on small scales. In fact, this may be a more general requirement if the ubiquitous nuclear disk-like structures seen in many merger remnants are to be preserved. Interestingly, previous studies that included a prescription for AGN feedback in similar galaxy merger simulations (e.g., Springel et al. 2005) find that feedback strongly affects the thermodynamics of the gas in the nuclear region only $>10^8$~yr after the galaxy merger is completed. \section{Dynamical friction timescales} It is important to examine whether the black holes could still form a binary as a result of the interaction with the collisionless stellar background. Since the resolution of the collisionless components is likely inadequate to assess directly the effect of dynamical friction (Section 2), we opt to calculate the dynamical friction timescale in the collisionless background analytically (Colpi et al. 1999) \begin{equation} {\tau_{\rm DF}=1.2 {V_{\rm cir}r_{\rm cir}^2 \over GM_{\rm BH}\ln(M_{\rm sd}/M_{\rm BH})}\,\varepsilon^{0.4}} \ . \label{dyn.friction} \end{equation} Here $V_{\rm cir}$ and $r_{\rm cir}$ are, respectively, the initial orbital velocity and the radius of the circular orbit with the same energy as the actual orbit of the black holes in the simulation, $\varepsilon$ is the circularity of the orbit, and $M_{\rm sd}$ is the sum of the dark matter and stellar mass within $r_{\rm cir}$. We calculate the decay time when the two black holes are separated by $100$~pc, that is at the periphery of the nuclear disk just after the galaxy merger. Drawing the numbers from the simulations, we have $r_{\rm cir} = 100$~pc, $V_{\rm cir}= 200\>{\rm km}\,{\rm s}^{-1}$, $\varepsilon =0.5$, $M_{\rm BH} = 2.6 \times 10^6 {\rm M_\odot}$ and $M_{\rm sd} = 5 \times 10^8 {\rm M_\odot}$.
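For orientation, eq.~(\ref{dyn.friction}) can be evaluated directly with these numbers; a minimal sketch in galactic units ($G\simeq 4.3\times 10^{-3}~{\rm pc\,(km/s)^2\,M_\odot^{-1}}$, $1~{\rm pc/(km/s)}\simeq 9.8\times 10^5$~yr) reads:

\begin{verbatim}
# Evaluate the dynamical-friction timescale defined above in galactic units:
# lengths in pc, velocities in km/s, masses in Msun.
import math

G = 4.301e-3               # pc (km/s)^2 / Msun
PC_PER_KMS_IN_YR = 9.78e5  # 1 pc / (1 km/s) expressed in years

def tau_df(V_cir, r_cir, M_BH, M_sd, eps):
    t = 1.2 * V_cir * r_cir**2 / (G * M_BH * math.log(M_sd / M_BH)) * eps**0.4
    return t * PC_PER_KMS_IN_YR   # years

print('%.1e yr' % tau_df(V_cir=200., r_cir=100., M_BH=2.6e6, M_sd=5e8, eps=0.5))
# -> ~3e7 yr with the parameter values listed above
\end{verbatim}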
We find that the dynamical friction timescales in the collisionless background are equal to $5 \times 10^7$~yr and $3 \times 10^7$~yr in the $\gamma=5/3$ and $\gamma=7/5$ simulations, respectively (the shorter timescale in the $\gamma=7/5$ case is due to the fact that the stars and halo contract adiabatically more in response to the higher gas mass concentration in this case, and hence $M_{\rm sd}$ is higher). In comparison, the binary formation timescale in the simulation with $\gamma=7/5$ was only $5 \times 10^5$~yr (Figure~\ref{fig1}). We stress that eq.~(\ref{dyn.friction}) was derived for an isothermal sphere. The stellar and dark matter distributions are indeed only mildly triaxial within a few hundred parsecs from the center of the remnant and the total density profile is fairly close to $\rho(r) \propto r^{-2}$, as expected from previous work (e.g., Kazantzidis et al. 2005). We also note that eq.~(\ref{dyn.friction}) actually yields a lower limit to the dynamical friction timescale since close to parsec scales, as the binary becomes hard, evacuation of the stellar background due to three-body encounters will take place and the efficiency of the sinking process will be greatly reduced. Whether orbital decay will continue and eventually lead to coalescence of the two black holes is uncertain in this case. Centrophilic orbits in triaxial systems could help in refilling the loss cone and decrease the binary's separation to the point where the emission of gravitational waves becomes efficient at extracting the last remaining angular momentum (Berczik et al. 2005). However, as we just mentioned, the structure of the stellar core is only mildly triaxial. Further investigation with simulations having higher resolution in the collisionless component is needed. The $\gamma=5/3$ run was stopped $5 \times 10^6$~yr after the merger of the gaseous cores was completed. Once again, the fact that there is no evidence that the black holes are sinking until the end is likely due to insufficient mass and force resolution in the collisionless background, which does not allow dynamical friction to be resolved properly. We also compared our results with the {\it expected} dynamical friction timescale due to the gaseous background. In the simulation with $\gamma=7/5$, the gas is distributed in a disk rather than in an isothermal sphere. Since the disk thickness is $> 10$ times the black hole gravitational softening, and since the density profile of the disk can be roughly approximated by a power law with an index close to 2 (except at the center, where it becomes steeper), we are allowed to use eq.~(\ref{dyn.friction}) to obtain a rough estimate of the timescales. As shown by Escala et al. (2004), analytical predictions with a fixed Coulomb logarithm (Ostriker 1999) can overestimate the drag in the supersonic regime by a factor of $\sim 1.5$. In the $\gamma=7/5$ simulation, the black holes move supersonically and the analytical formula should yield the correct prediction. In this case the drag is a factor of $\sim 2.3$ stronger than in the corresponding collisionless case (Escala et al. 2004). This is fairly consistent with our results. Indeed, eq.~(\ref{dyn.friction}) with a reduction of a factor of $2.3$ gives $\sim 10^6$~yr if we set $M_{\rm gas}=M_{\rm sd}$, with $M_{\rm gas} \sim 20 M_{\rm stars}$. This timescale has to be compared with that measured directly in the simulation, $5 \times 10^5$~yr. As discussed above, the gas profile is actually steeper than $r^{-2}$ near the center. 
Thus, it is not surprising that the decay is faster. Despite the apparent agreement with the analytically estimated drag, we note that the orbital evolution of the two black holes might be affected by more than just the gravitational wake. Indeed, the nuclear disks show strong, highly dynamical non-axisymmetric structures such as spiral arms (see Section 7) which are highly efficient at removing angular momentum from the orbiting SMBHs. The drag drops rapidly by an order of magnitude in the subsonic regime (Escala et al. 2004). This, coupled with the fact that $M_{\rm gas}$ is a factor of $\sim 5$ lower in the simulation with $\gamma=5/3$ compared to that with $\gamma=7/5$, would give a drag $50$ times smaller or $\tau_{\rm DF} \sim 5 \times 10^7$~yr, explaining why the orbital decay caused by the gas is so inefficient in this case. Thus, in the $\gamma=5/3$ simulation stars and gas contribute to the drag in a comparable way. Adding star formation is unlikely to change the above conclusions in any significant way. The unrefined galaxy merger simulation yields a starburst timescale of $\sim 5 \times 10^7$~yr. During this time, which is much longer than the binary formation timescale in the run with $\gamma=7/5$, half of the gas in the nuclear disk will be turned into stars. In the $\gamma=5/3$ simulation, instead, the black hole sinking timescale is comparable to the star formation timescale, so the overall orbital evolution will be dictated by the stars rather than by the gas. There are, however, some caveats in the argument regarding the role of star formation in the $\gamma=7/5$ case. First, the starburst timescale is based on the unrefined merger simulations. Had we included star formation in the refined simulations we would have probably found shorter timescales locally since these simulations can resolve much higher densities and the star formation rate depends on the local gas density. Second, one might wonder how the inclusion of feedback from star formation, which was neglected in the unrefined merger simulations, would affect gas properties and, consequently, the orbital decay of the black holes. We defer a detailed numerical study of these considerations to future work. \section{Structure, kinematics, and gas inflow in the nuclear regions of merger remnants} The nuclear disk produced in the $\gamma=7/5$ case is highly turbulent. The sources of turbulence are the prominent shocks generated as the cores merge and the persistent non-axisymmetric structures sustained by the self-gravity of the disk after the merger is completed (e.g., Wada \& Norman 2002). The perturbation due to the black hole binary is a negligible effect since its mass is about $10^3$ times smaller than the mass of the disk. The degree of turbulence, of order $50-100\>{\rm km}\,{\rm s}^{-1}$ as measured by the radial velocity dispersion, is comparable to that of observed circumnuclear disks (e.g., Downes \& Solomon 1998). The disk is composed of a very dense, compact region of size about $25$~pc which contains half of its mass (the mean density inside this region is $> 10^5$ atoms/cm$^3$). The outer region, from $25$ to $75-80$~pc, instead has a density $10-100$ times lower, and is surrounded by even lower-density rotating rings extending out to a few hundred parsecs. The disk scale height also increases from inside out, ranging from $20$~pc to nearly $40$~pc. 
The volume-weighted density within $100$~pc is in the range $10^3-10^4$ atoms/cm$^3$, comparable to that of observed nuclear disks (e.g., Downes \& Solomon 1998). This suggests that the degree of dissipation implied by the equation of state with $\gamma=7/5$ is reasonable despite the simplicity of the thermodynamical scheme adopted. The rotating, flattened cloud produced in the $\gamma=5/3$ run is instead more turbulent and less dense than observed circumnuclear disks in merger remnants. The mean velocity dispersion measured within $100$~pc is about $300\>{\rm km}\,{\rm s}^{-1}$, higher than the mean rotational velocity within the same radius, which is $\sim 250\>{\rm km}\,{\rm s}^{-1}$. This suggests that the $\gamma=5/3$ simulation does not describe the typical nuclear structure resulting from a dissipative merger. \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{ierapetra3.eps}} \caption{ \footnotesize Radial velocities inside the nuclear disk in the standard, multi-scale merger simulation with $\gamma=7/5$. The blue line corresponds to $t=5.1218$~Gyr, while red and green lines show results after $10^5$~yr and $2 \times 10^5$~yr, respectively. Remarkable gas inflows and outflows are the result of streaming motions within the bar and spiral arms. These arise during the phases of strong, non-axisymmetric instabilities sustained by the disk self-gravity. At late times, the instabilities saturate due to self-regulation and the radial motions also decrease. } \label{fig3} \end{figure} The strong spiral pattern associated with the nuclear disk in the simulation with $\gamma=7/5$ produces remarkable radial velocities (Figure~\ref{fig3}). Since spiral modes transfer angular momentum outwards and mass inwards, strong inward radial velocities are expected. The amplitude of radial motions evolves with the amplitude of the spiral pattern; radial motions decline as the spiral arms weaken over time. Just after the merger, when non-axisymmetry is strongest, radial motions reach amplitudes of $\sim 100\>{\rm km}\,{\rm s}^{-1}$ (Figure~\ref{fig3}). This phase lasts only for a couple of orbital periods, while later the disk becomes smoother as spiral shocks increase the internal energy, which in turn weakens the spiral pattern. Inward radial velocities of order $30-50\>{\rm km}\,{\rm s}^{-1}$ are seen for the remaining few orbital times during which we are able to follow the system (Figure~\ref{fig3}). Such velocities are comparable to those recently reported in high-resolution observations of nuclear disks of nearby Seyfert galaxies (Fathi et al. 2006). As the gas reaches down to a distance of a few parsecs from the center, its radial velocity diminishes as we approach the resolution limit of the simulations ($\sim 2$~pc). Therefore, the fact that there is almost no net radial velocity within a few parsecs from the center (Figure~\ref{fig3}) is an artifact of the limited numerical resolution. If we assume that speeds of $30-50\>{\rm km}\,{\rm s}^{-1}$ can be sustained down to scales of a few parsecs, more than $10^8 {\rm M_\odot}$ of gas could reach parsec scales in about $10^5$~yr. This timescale is much smaller than the duration of the starburst, and therefore such gas inflow should develop in a similar way even when star formation is taken into account. The inflow is also marginally faster than the decay timescale of the binary SMBH measured in the simulation ($\sim 5 \times 10^5$~yr). 
Presumably some of this gas could be intercepted by the two SMBHs as they are spiraling down (the relative velocities between the gas and the black holes are small since the SMBHs are always corotating with the nuclear disk). \section{Merger simulations at sub-pc scales; nuclear fueling and orbital decay of SMBHs} In the study of M07, the simulations were stopped when the two black holes had a separation of about $2$~pc, comparable to the adopted gravitational softening length, which sets the nominal resolution limit. At this stage, the black holes had formed a loose binary on a fairly eccentric orbit. In order to explore the sinking of the SMBH binary to even smaller scales we performed a new simulation with a spatial resolution in the gas component of $0.1$~pc, that is, $20$ times higher than in M07. This resolution is comparable to the highest resolution achieved in simulations of nuclear disks starting from equilibrium initial conditions rather than from a large scale galaxy merger (Escala et al. 2005; Dotti et al. 2007). On the other hand, the mass resolution in the new simulation was kept the same as in M07. The number of gas particles in the nuclear disk forming after the merger is sufficiently high ($\sim 10^6$) that even with $0.1$~pc resolution in the gas the Jeans mass is resolved by several SPH kernels in the disk, thus avoiding spurious numerical effects such as artificial fragmentation (Bate \& Burkert 1997). Due to the much higher spatial resolution, the nuclear disk now reveals a much richer structure with both large and small scale spiral patterns (Figure~\ref{fig4}). While the large scale spiral structure was also reported in the simulations of M07, the inner few tens of parsecs, which were quite featureless before, now reveal a high-order spiral pattern extending down to sub-parsec scales. The spirals-in-spirals patterns are reminiscent of the bars-in-bars patterns that are suggested as possible candidates for bridging large and small scale inflows in non-interacting galaxies. Up to the point when the SMBHs reach a relative separation of $1-2$~pc the sinking rate is comparable to what was previously reported by M07. However, at smaller relative separations, the binary's orbital decay slows down and the orbit oscillates between a fraction of a parsec and $\sim 1$~pc (Figure~\ref{fig5}). What causes this relative stalling? The answer lies in the evolution of the gas density and temperature profile in the nuclear disk. In a few times $10^5$~yr, the strong gas inflow produces a very dense central clump with a mass of $\sim 10^8 {\rm M_\odot}$ and a size of only $0.5$~pc. We note that this constitutes the first demonstration that a galaxy merger can produce remarkable concentrations of gas at sub-parsec scales subsequent to the formation of a nuclear disk from a larger scale gas inflow. While the density in the very center goes up by an order of magnitude as the central clump forms, at scales above $1$~pc the density decreases as angular momentum is transported outwards leading to the expansion of the disk. The disk exhibits a strong non-axisymmetric structure, with multi-armed spirals, extending down to the inner few tens of parsecs (Figure~\ref{fig4}). The inner spiral pattern is responsible for the efficient transfer of angular momentum in the inner disk region and is probably due to the SLING mechanism (Adams et al. 1989; Krumholz et al. 2007) which enables accretion on the disk orbital timescale rather than on the viscous timescale. 
\begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{ierapetra4.eps}} \caption{ \footnotesize Color-coded projected density maps of the nuclear disk viewed face-on in the numerical simulation with $\gamma=7/5$ and $0.1$~pc spatial resolution in the gas. Both panels display the nuclear disk $\sim 10^6$~yr after the galaxy merger is deemed complete. The bottom panel presents the inner region of the disk. A conspicuous spiral pattern reaching to the central region and a central, massive clump produced by the strong gas inflow are evident. } \label{fig4} \end{figure} \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{ierapetra5.eps}} \caption{ \footnotesize Orbital evolution of the two SMBHs as a function of time in a multi-scale, merger simulation with a resolution of $0.1$~pc in the gas component. Results are presented for an equation of state with $\gamma=7/5$. The relative separation of the two SMBHs oscillates between $\sim 0.5$~pc and $\sim 2$~pc and never reaches the resolution limit of the simulation. As a result of very strong gas inflows, the density of the surrounding gas decreases considerably, reducing the effect of dynamical friction on the SMBHs. } \label{fig5} \end{figure} By the time the two black holes reach a separation of about $1$~pc the central clump has already formed, sweeping most of the mass from the inner disk. As a result of the central inflow, the density of the surrounding gas decreases by a factor of $\sim 5$. The density reduction weakens the effect of dynamical friction. We note that a similar phenomenon is seen by Dotti et al. (these proceedings). In their simulations, the disk profile becomes flatter rather than steeper, but it is still the case that the black holes find themselves in a region of very low density and reduced dynamical friction (the so-called ``core''). In addition, because the two black holes are orbiting around the central massive clump, their relative velocity is a factor of $\sim 3$ higher compared to that in the standard $2$~pc simulation in which the central clump was not resolved. The dynamical friction force scales as $\rho/v_{\rm BH}^2$. The combined reduction of $\rho$ and increase of $v_{\rm BH}$ results in an overall decrease of almost a factor of $50$ in the strength of dynamical friction, explaining the observed suppression in the orbital decay by almost two orders of magnitude. Moreover, as a result of the clump formation, the mass contained within the orbit of the two black holes has become much larger than the sum of their masses; the two black holes do not form a binary as in the $2$~pc resolution simulations but only a loose pair. Should we trust the formation of the massive clump and the suppression in the orbital decay? Probably not. Gas inflows are expected in non-axisymmetric disks, and they are indeed reported in high-resolution simulations of nuclear disks starting from equilibrium conditions (Escala 2006; 2007; Kawakatu \& Wada 2008). Yet the magnitude of the inflows, hence the mass of the central clump and associated variation of the disk density profile, is likely exaggerated by the crude modeling of the ISM, and even more by the lack of star formation, supernova feedback, and gas accretion onto the SMBHs. We take a closer look at this issue below. 
In Escala (2006; 2007) and Kawakatu \& Wada (2008) the mass that collects in the inner parsec after a few million years is less than $1\%$ of the total mass of the nuclear disk, while it is nearly $10\%$ in our simulations (we note, however, that our nuclear disk is more than an order of magnitude more massive than the disk models used in these studies, hence stronger non-axisymmetric torques are expected due to the stronger self-gravity). \section{Missing physics; multi-phase ISM, star formation, and gas accretion} As explained in the previous sections, we used an effective adiabatic equation of state with $\gamma=7/5$ to describe the gas thermodynamics. As discussed in M07, this equation of state breaks down at $\rho > 10^{4}$ atoms cm$^{-3}$ because the gas becomes nearly isothermal at such high densities ($\gamma \sim 1.1$ based on Spaans \& Silk 2000). Such high densities are indeed reached in the inner few parsecs of the nuclear disk. Allowing the equation of state to become softer, namely adopting a lower value of $\gamma$, would result in a more compressible gas at these densities; this would in principle exacerbate the sinking problem by producing an even more concentrated profile and massive central clump. The problem, however, is that the effective equation of state approach becomes increasingly less robust as the resolution is increased. With $0.1$~pc resolution we should be able to resolve very well the clumpy nature of the ISM and a multi-phase model would be required. Wada \& Norman (2001) showed that when the main radiative heating/cooling processes and supernova feedback are directly incorporated in simulations of rotating gaseous disks a multi-phase, turbulent ISM arises naturally via a combination of gravitational instability, thermal instability, and turbulent energy injection by supernova explosions. The resulting structure is filamentary and clumpy at all scales (Wada 2004). The large scale, coherent spiral patterns seen in our simulations would be replaced by much more irregular structures with a higher degree of small-scale turbulence (turbulence is present in our disks, but is generated only at large scales by global non-axisymmetric modes). This should give rise to a stochastic, episodic inflow (Escala 2006; Kawakatu \& Wada 2008) as opposed to the steady inflow that we observe in our simulations since the coherence of large scale torques would be partially disrupted. Apart from the absence of a multi-phase model for the ISM, there are other key ingredients missing from our simulations that would certainly have an effect on the structural evolution of the nuclear disk and thus on the sinking of the two SMBHs: star formation and gas accretion. Let us first consider the issue of star formation. During the short timescale probed by our simulations after the merger ($\sim 10^6$~yr) most of the nuclear gas should not be converted into stars, even assuming the high star formation rates observed in powerful ULIRGs. To estimate the expected star formation rate, we can simply assume that most of the mass in the nuclear disk is molecular (as expected from the high densities, above $10^3$ atoms/cm$^3$). This gas will be turned into stars on the local dynamical timescale. 
Star formation in molecular clouds is rather inefficient; for giant and large molecular clouds ($> 10$~pc) the fraction of gas that is converted into stars can be as low as $1-2\%$ (Krumholz \& Tan 2007); for cloud cores, the densest regions of clouds that collapse directly into individual stars, it can be at most $30\%$ (Li et al. 2005) (radiative feedback and turbulence driven by supernova explosions, outflows and large-scale gravitational instability all contribute to regulate the efficiency at various scales). Let us now consider the worst case scenario which is to adopt the highest value of the efficiency (our spatial resolution of $2$~pc is intermediate between the scale of molecular clouds and that of their cores) and write the star formation rate in the nuclear disk as $dM_*/dt = 0.3 \times M_{\rm gas}/T_{\rm orb}$, where $T_{\rm orb} = 10^6$~yr is the orbital time at the disk half-mass radius of $25$~pc and $M_{\rm gas} = 3 \times 10^9 {\rm M_\odot}$. The resulting star formation rate is $900 {\rm M_\odot}$/yr. Nonetheless, even with such a high star formation rate, less than 1/5 of the gas in the disk, $4.5 \times 10^8 {\rm M_\odot}$, would be converted into stars during the time required for the black holes to sink and bind in the nuclear disk ($5 \times 10^5$~yr). However, while this statement is true for the disk as a whole, if we restrict ourselves to the inner few parsecs then the gas has such high densities that the local star formation rate could convert a few times $10^8 {\rm M_\odot}$ of gas into stars before the black holes reach a parsec scale. The rapid conversion into stars would weaken the non-axisymmetry of the gas by reducing the gas surface density, thus increasing the disk Toomre $Q$ parameter and stabilizing the disk. With a weaker spiral pattern the central gas inflow would be reduced. Kawakatu \& Wada (2008) conclude that star formation consumes a significant fraction of the gas available in the nuclear disk, limiting the amount of gas that can feed the central parsec region and thus the eventual growth of the SMBHs. Dotti et al. (2007) showed that, once a nuclear disk has formed, a nearly identical decay rate of the two massive black holes follows for entirely gaseous or stellar nuclear disks since dynamical friction has a similar strength in gaseous and stellar backgrounds in the supersonic regime (Escala et al. 2004). Therefore, if the central region of the nuclear disk converts rapidly into stars this would not lower the efficiency of the decay process by itself; rather, the slowdown of the decay might be avoided by limiting the accumulation of the dense central gas clump. As far as gas accretion onto the black holes is concerned, part of the cold molecular gas in a multi-phase medium will be accreted from the surrounding nuclear disk. This phenomenon could help with the observed black hole stalling in two ways, i.e., by increasing the strength of dynamical friction as the black holes become more massive and/or by making the disk more stable by lowering its gas surface density. However, the ultimate effect of gas accretion can only be elucidated with an appropriate simulation. If the two black holes manage to sink below $1$~pc and decrease their relative velocity they could eventually reach the ellipsoidal torque regime (Escala et al. 2004; 2005) and continue to sink to $0.1$~pc and below. 
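As a simple numerical cross-check of the worst-case star formation estimate above (an illustrative addition on our part, using only the numbers quoted in this section):
\begin{verbatim}
eff, M_gas, T_orb = 0.3, 3e9, 1e6   # efficiency, Msun, yr
t_sink = 5e5                        # yr, binary formation time (gamma = 7/5)

sfr = eff * M_gas / T_orb           # 900 Msun/yr
M_stars = sfr * t_sink              # 4.5e8 Msun formed while the SMBHs sink
print(sfr, M_stars, M_stars / M_gas)  # fraction ~0.15, i.e. less than 1/5
\end{verbatim}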
A new ISM model which combines the equilibrium effective model of Spaans \& Silk (2000) with non-equilibrium processes such as shock heating and supernova explosions has recently been implemented in GASOLINE (Roskar et al., in prep.). This model produces a multi-phase ISM with properties similar to those seen in Wada \& Norman (2002) and Escala (2006; 2007), but it incorporates a more realistic modeling of the balance between heating and cooling for the high density, cold gas phase based on radiative transfer calculations. The calculations of M07 are currently being recomputed with this new model, which also includes star formation. \begin{acknowledgements} We are grateful to our collaborators Simone Callegari, Monica Colpi, Piero Madau, Tom Quinn, Rok Roskar, and James Wadsley for allowing us to present results in advance of publication. We acknowledge discussions with Mandeep Gill, David Merritt, and Marta Volonteri. S. Kazantzidis is supported by the Center for Cosmology and Astro-Particle Physics (CCAPP) at The Ohio State University. A. Escala is funded by the U.S. Department of Energy through a KIPAC Fellowship at Stanford University and the Stanford Linear Accelerator Center. All simulations were performed on Lemieux at the Pittsburgh Supercomputing Center, on the Zbox and Zbox2 supercomputers at the University of Z\"urich, and on the Gonzales cluster at ETH Z\"urich. \end{acknowledgements}
1,108,101,562,637
arxiv
\section{Introduction} The idea of compressing a teacher model into a smaller student model by matching the predictions of the teacher was introduced by \citet{Caruana2006model}. After training the teacher, they performed the transfer on new, unlabelled data by minimizing the squared difference between the logits of the final softmax of the teacher and student models. A related technique, called ``distillation", was introduced by \citet{hinton2015distilling}. That paper performed the transfer on the labelled training data rather than on new, unlabelled data. The student is trained to minimize a weighted sum of two different cross entropies. The first is the cross entropy with the correct answer using a standard softmax. The second is the cross entropy with the probability distribution produced by the teacher when using a temperature higher than 1 in the softmax of both models. The point of using a higher temperature is to emphasize the differences between the probabilities of wrong answers that would all be very close to zero at a temperature of 1. There have since been some interesting theoretical developments of distillation \cite{lopez2015unifying} and it is now widely used to produce small models that generalize well. These are needed for resource-constrained applications of neural networks such as text-to-speech \citep{oord2017parallel} and mobile on-device convolutional neural networks \citep{howard2017mobilenets}. In this work, we focus on distillation for datasets where there are only a few possible classes, resulting in limited information to be transferred (e.g.\ binary classification). We show that we can improve the transfer by forcing the teacher to divide each class into many subclasses that it invents during the supervised training. We propose an auxiliary loss that encourages each subclass to be used equally while ensuring that each prediction is ``peaky". We show experimentally that the learned subclasses have semantic meaning and help distillation. The subclasses can also be used to interpret the model's predictions by clustering them into discrete bins. \begin{figure*}[h] \begin{center} \includegraphics[width=0.8\textwidth]{schematic} \end{center} \label{fig:schematic} \vskip -1em \caption{Comparison between distillation and subclass distillation using 2 classes and 2 subclasses per class. The teacher is usually deeper and/or wider than the student. For distillation, the student mimics (using temperature-scaled cross-entropy) the teacher's class predictions, while in subclass distillation the student mimics the subclass predictions that were invented by the teacher. The class predictions are derived by summing the subclass predictions, and in both cases the only ground-truth supervision consists of binary class labels.} \end{figure*} The paper is organized as follows. We start with a description of subclass distillation and a comparison to a related method, penultimate layer distillation. First, we train models on a binary split of CIFAR-10 \citep{krizhevsky2009learning} that we call CIFAR-2x5, where we group sets of 5 classes together to create a binary classification task. We show that a teacher trained to produce subclasses is able to discover the original CIFAR-10 classes, despite receiving only binary supervision. We also show that distilling from this teacher using these learned subclasses leads to better results as compared to conventional distillation and penultimate layer distillation. 
We next move to the CelebA dataset \citep{liu2015faceattributes}, in which each example has 40 binary labels. We show that when predicting a single one of these binary labels, the subclasses produced by the teacher are highly correlated with the other binary labels it has never been trained on, which helps subsequent subclass distillation. We conclude the experimental section with two additional results. First, on the Criteo click prediction dataset \citep{criteo_labs_2017}, we show that subclass distillation outperforms conventional distillation in terms of training speed. We also show that when the student does not see the full dataset, subclass distillation provides significant generalization gains. Second, using MNIST-2x5 \citep{lecun1998mnist}, we show that the student can learn to predict the binary label by learning to predict the relative subclass probabilities (intra-class), without having ever seen the binary labels or receiving class relative probabilities from the teacher. \section{Subclass distillation} During distillation, the amount of information that the student network receives about the generalization tendencies of the teacher network depends on the number of classes. The information provided by the hard target labels is logarithmic in the number of classes, but the information about how the teacher generalizes is linear in the number of classes, provided we distill using the logits or using cross-entropy at a high temperature. This means that distillation is considerably less efficient for models with few classes. Binary classifiers are important in many applications, and the aim of this paper is to make distillation more efficient for such models by forcing the teacher to invent $s$ subclasses for each of the $c$ classes in the dataset, as shown in Fig. \ref{fig:schematic}. The teacher computes $c \times s$ logits and puts these through a softmax to get $c \times s$ probabilities that sum to 1. The probabilities of all the subclasses of a class are then added to get the teacher's predicted probability for that class. The teacher is trained by minimizing the cross-entropy with the class probabilities: \begin{align} \label{eq:lxent} \mathcal{L}_\text{xent} = -\frac{1}{n} \sum_{i=1}^n \sum_{j=1}^c {\bm{Y}}_{i, j} \log \left(\sum_{k=1}^s {\etens{P}}_{i,j,k} \right) \end{align} where ${\bm{Y}}_{i,j} \in \{0, 1\}$ are the correct targets for the $j^\text{th}$ class of the $i^\text{th}$ example as given by the dataset and ${\etens{P}}_{i, j, k}$ is the output probability for the $k^\text{th}$ subclass of class $j$ for that example. Given logits ${\etens{Z}}$, the output probabilities ${\etens{P}}$ are computed in the usual fashion by performing a softmax operation over all logits belonging to the same example: \begin{align} {\etens{P}}_{i, j, k} &= \frac{\exp({\etens{Z}}_{i, j, k} / T)}{\sum_{l=1}^c\sum_{m=1}^s\exp({\etens{Z}}_{i, l, m} / T)}. \end{align} The temperature parameter $T$ controls the entropy of the output distribution. When training the teacher, it is set to 1. When distilling knowledge from the teacher to the student, it is often beneficial to increase the temperature. In subclass distillation, as in conventional distillation, the student is trained to match the teacher. However, rather than use only the $c$ classes in the original dataset, the student learns to mimic the teacher's output for $c \times s$ subclasses. 
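To make Eq.~(\ref{eq:lxent}) and the subclass softmax above concrete, the following minimal NumPy sketch (our illustrative addition; the array shapes and function names are assumptions, not the implementation used in the experiments) computes the subclass softmax at temperature $T$ and the class-level cross-entropy obtained by summing subclass probabilities within each class.
\begin{verbatim}
import numpy as np

def subclass_softmax(logits, T=1.0):
    """Softmax over all c*s subclass logits of each example.
    logits: shape (n, c, s). Returns probabilities of the same shape."""
    z = logits / T
    z = z - z.max(axis=(1, 2), keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=(1, 2), keepdims=True)

def teacher_xent(logits, Y, eps=1e-12):
    """Cross-entropy between one-hot class labels Y (n, c) and the class
    probabilities obtained by marginalizing over subclasses (the loss above)."""
    P = subclass_softmax(logits, T=1.0)          # T = 1 when training the teacher
    class_prob = P.sum(axis=2)                   # (n, c)
    return -np.mean(np.sum(Y * np.log(class_prob + eps), axis=1))
\end{verbatim}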
Like the teacher, the student produces $c \times s$ output probabilities $\tilde {\tens{P}}_{i, :, :}$ for each example $i$, resulting in the subclass distillation loss: \begin{align} \mathcal{L}_\text{distill} &= -T^2 \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^c \sum_{k=1}^s {\etens{P}}_{i, j, k} \log \left( \tilde {\etens{P}}_{i, j, k} \right), \end{align} where we scale the loss by $T^2$ in order to keep gradient magnitudes approximately constant when changing the temperature \cite{hinton2015distilling}. Thus, with this loss, knowledge is transferred from the teacher to the student not merely through the probabilities the teacher assigns to the classes in the original dataset, but also through the probabilities assigned to the subclasses.\footnote{In conventional distillation the cross-entropy loss is $\mathcal{L}_\text{distill} = -T^2 \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^c {\etens{P}}_{i, j} \log \left( \tilde {\etens{P}}_{i, j} \right)$ since the teacher only produces class probabilities.} When training the student, we typically use a combination of the distillation loss $\mathcal{L}_\text{distill}$ and the standard cross-entropy loss $\mathcal{L}_\text{xent}$: \begin{align} \mathcal{L}_\text{student} &= \alpha \mathcal{L}_\text{distill} + (1-\alpha)\mathcal{L}_\text{xent} \end{align} where $\alpha \in [0, 1]$ controls the balance between hard and soft targets, which we call ``task balance". \subsection{Penultimate layer distillation} \label{sec:pld} An alternative to subclass distillation that also incorporates more information into distillation is to distill not from the logits, but from the penultimate layer's activations (or from other layers as in \citet{romero2014fitnets}). In this case: \begin{align} \mathcal{L}_\text{distill} &= \frac{1}{n} \sum_{i=1}^n \|{\bm{a}}_{i}- {\bm{W}} \tilde{\bm{a}}_{i}\|^2, \end{align} where $\tilde{\bm{a}}_{i}$ are the penultimate layer's activations of the student for the $i^{th}$ example in the minibatch, $ {\bm{a}}_{i}$ are the respective activations in the teacher and ${\bm{W}}$ is a projection matrix to match the dimensions of teacher/student \textit{learned in the distillation phase}. Note that the student will use its capacity to match the teacher's representations even for directions that may not be relevant for predicting the classes. In subclass distillation, the teacher's subclass logits are a projection of the teacher's penultimate layer activations into a lower dimension which is \textit{learned during the teacher's training phase}. Therefore, the projection into subclasses can remove irrelevant information present in the penultimate layer while retaining more information compared to the ``class" logits. Note that \citet{hinton2015distilling} shows that minimizing the squared difference between the zero-meaned logits of the teacher and student is the limit of distillation as the temperature goes to infinity, provided that the learning rate is scaled as the squared temperature. Therefore, subclass distillation, as the temperature goes to infinity, is equivalent to penultimate layer distillation applied not on the full penultimate layer, but on a low-dimensional projection of that layer. \section{Auxiliary loss} In subclass distillation, the cross-entropy loss (Eq.~\ref{eq:lxent}) constrains only the class probabilities and not the subclass probabilities. 
Without an additional loss encouraging the network to use all subclasses, it may consistently assign high probability to a single subclass of each class and assign extremely low probability to the others. In this case, the subclasses would provide almost no additional signal for distillation. We thus propose an auxiliary loss that encourages the network to assign different examples to different subclasses, even when they belong to the same class. Given a minibatch of $n$ logit vectors ${\bm{v}}_i = \text{vec}({\tens{Z}}_{i, :, :})$, we compute: \begin{align} \mathcal{L}_\text{aux} &= -\frac{1}{n} \sum_{i=1}^n \log \frac{e^{\hat{{\bm{v}}}_i^\text{T} \hat{{\bm{v}}}_i/T}}{\frac{1}{n} \sum_{j=1}^n e^{\hat{{\bm{v}}}_i^\text{T} \hat{{\bm{v}}}_j/T}}\\ &= \frac{1}{n} \sum_{i=1}^n \log\left(\sum_{j=1}^ne^{\hat{{\bm{v}}}_i^\text{T} \hat{{\bm{v}}}_j/T}\right) -\frac{1}{T} - \log(n), \end{align} where $\hat{{\bm{v}}}_i$ is a normalized version of ${{\bm{v}}}_i$ (zero-mean, unit-variance) to prevent easy solution of the minimization by making the logits large. As above, $T$ is a temperature hyper-parameter, although its value need not correspond to the temperature used for distillation. This auxiliary loss encourages the normalized logit vector corresponding to each example to have a low dot product with other normalized logit vectors. In practice, the network accomplishes this by distributing examples across subclasses. The total loss for the teacher is: \begin{align} \mathcal{L}_\text{teacher} &= \mathcal{L}_\text{xent} + \beta\mathcal{L}_\text{aux} \end{align} where $\beta$ controls the strength of the auxiliary loss. \section{Experimental results} \subsection{CIFAR-10} In this section, we experimentally test the ideas presented in the previous sections. We start by providing a visual demonstration that the hidden representations of neural networks contain semantically meaningful information that is not present in the class logits. In Fig.\ \ref{fig:nncifar10} (top), we show the nearest neighbors using Euclidean distance in the class logits layer of a network trained on CIFAR-10 classification. We observe that the nearest neighbors are examples of the same class (horse) as we expected. However, if instead of using the logits layer, we find the nearest neighbors in the penultimate layer, we notice that not only the closest examples are from the same class, but they are also semantically similar to the query image (horse head). This is the sort of information that is present in the penultimate layer but not in the logits that we want to use to improve distillation. \begin{figure}[h] \begin{center} \includegraphics[width=\linewidth]{nearest_neighbors} \end{center} \caption{Finding the nearest neighbor in a network trained on CIFAR-10. Query is a close-up on a horse's head. If the nearest neighbor is calculated in the ``class" logits layer, we find examples from the same class (horse), but the semantically similar image with a close-up head is only the $5^{th}$ nearest-neighbor. If distance is calculated in the penultimate layer, all nearest neighbors are semantically similar to the query. This shows that some semantic information is lost in the ``class" logits and distillation can benefit from using more information.} \label{fig:nncifar10} \end{figure} Next, we move to the quantitative results. 
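Before doing so, we include for reference a minimal sketch of the auxiliary loss above; this is an illustrative addition with assumed array shapes and names, not the implementation used for the experiments.
\begin{verbatim}
import numpy as np

def auxiliary_loss(logits, T=1.0):
    """Contrastive auxiliary loss on a minibatch of subclass logits (n, c, s)."""
    n = logits.shape[0]
    v = logits.reshape(n, -1)
    # normalize each flattened logit vector to zero mean and unit variance
    v = (v - v.mean(axis=1, keepdims=True)) / (v.std(axis=1, keepdims=True) + 1e-12)
    sim = v @ v.T / T                    # pairwise dot products of normalized logits
    # per-example term: -log[ exp(sim_ii) / mean_j exp(sim_ij) ]
    log_ratio = np.diag(sim) - np.log(np.mean(np.exp(sim), axis=1))
    return -np.mean(log_ratio)
\end{verbatim}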
We use the CIFAR-10 dataset to construct an artificial binary classification task where we group together examples from the classes airplane, automobile, bird, cat and deer to construct the first class and dog, frog, horse, ship and truck to construct the second one. We call this task CIFAR-2x5 and by using this artificial construction we have natural semantic subclasses corresponding to the original CIFAR-10 classes. \subsubsection{Unsupervised subclass classification} We train a ResNet \citep{he2016deep} network with 20 layers to be used as a teacher (see results in Table~\ref{tab:teacher-results} and training details including hyperparameters in Appendix~\ref{app:experimental_setup}). We first train this network on CIFAR-10 as a baseline and obtain 93.5\% accuracy (averaged over 3 runs, as are all the results in this section). We use the same network with frozen weights to evaluate how well it does on the binary classification task and we obtain 95.6\% (+2.1\%). If we train this network directly on the binary classification task (CIFAR-2x5), we get 94.3\%. Note that although it is evaluated on the same task, the first network is trained with 3.32 ($\log_{2}{10}$) label bits per example compared to only 1 label bit per example in the second network. This difference in the number of bits of label information explains the 1.3\% accuracy gap between them in the binary classification task and the benefit of using ``subclass" information even when the evaluation is done at the ``class" level. \begin{table}[t] \caption{Teacher/ResNet results over 3 runs trained on CIFAR-10 or CIFAR-2x5 and top-1 accuracy evaluation on both tasks. Additionally, we evaluate the effect of adding a subclass head and auxiliary loss on unsupervised subclass classification. For reference we include the state-of-the-art result on fully unsupervised CIFAR-10 using the invariant information clustering method (IIC) in the last line.} \label{tab:teacher-results} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcccr} \toprule CIFAR- &head &aux. loss &acc. (2) &acc. (10)\\ \midrule 10 & & &95.6$\pm$ 0.1 & 93.5 $\pm$ 0.2\\ 2x5 & & &94.3$\pm$ 0.2 & \\ 2x5 &$\surd$ & &94.2$\pm$ 0.2 & 39.3 $\pm$ 4.0\\ 2x5 &$\surd$ &$\surd$ &94.2$\pm$ 0.0 & 64.6 $\pm$ 4.8 \\ \midrule \multicolumn{4}{l}{Unsupervised \citep{ji2018invariant}} &57.6 $\pm$ 5.0 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} Next, we investigate how making the teacher ``invent" subclasses affects the network performance. The subclass head enables the network to output 10 logits (5 subclasses per class) which are marginalized (after softmax) over the subclasses before the binary cross-entropy loss. Simply adding the head produces no improvement in binary classification despite the increase in the number of parameters in the last layer by a factor of 5. We also measure the accuracy of this network on all 10 classes by directly taking the $\argmax$ of the subclass layer and picking the permutation that maximizes the accuracy. Although the result of 39.3\% is better than chance (20\%\footnote{Corresponding to perfect knowledge of the class and random choice of the subclass.}), we observed that since there is nothing encouraging the network to use all subclasses, they can ``die" during training. The subclass accuracy can be significantly improved by adding the auxiliary loss, which increases it to 64.6\%. 
Note that this network has only seen binary labels, but is able to separate the classes into meaningful subclasses without extra supervision. Fig.\ \ref{fig:sub_class} shows how the best network out of 3 runs (70.2\%) splits a subset of examples in the validation set into subclasses. Most errors arise in distinguishing among cats, birds and deer, while other subclasses correspond to the original dataset classes. For comparison, the state-of-the-art \citep{ji2018invariant} on fully unsupervised classification on CIFAR-10 is 57.6\% using the invariant information clustering method (IIC). Here, we show that, with little extra supervision (binary labels), we can outperform this result with a very simple approach. \begin{figure}[h] \begin{center} \includegraphics[width=0.45\textwidth]{sub_class} \end{center} \vskip -0.1in \caption{Unsupervised subclass discovery. Examples of the validation set grouped by the subclass logit they activate most (one row per subclass). Using the validation set, we find the 1-to-1 assignment that maximizes accuracy, resulting in the following permutation: automobile, cat, bird, airplane, deer (first class), truck, frog, boat, horse and dog (second class).} \label{fig:sub_class} \end{figure} In the analysis above, we use the accuracy on 10-class classification as a measure of how well the network separates the examples into meaningful subclasses. The idea is that this subclass information will help the student generalize better through subclass distillation. We can use a very simple model to measure how much extra label information the subclass teacher can provide. In the ideal case where the teacher perfectly learns the subclasses, it provides 1 + 2.32 label bits ($\log_{2}{2} + \log_{2}{5}$) per example, where the first bit comes from the binary class and the remaining ones from the subclass. In the case where the teacher can ``relabel" $P\times100$\% of the subclasses correctly and the remaining errors are distributed equally over the remaining 4 subclasses, the effective number of label bits is given by the $q$-ary symmetrical channel \citep{cover2012elements} and is equal to $\log_{2}{5} + P\log_{2}{P} + (1-P)\log_{2}[(1-P)/4]$. The teacher trained with binary classification + the subclass head + the auxiliary loss gets on average 67.7 $\pm$ 4.5\% subclass accuracy on the training set. This is slightly better than the validation-set results in Table \ref{tab:teacher-results}, but the training-set numbers are the relevant ones for this analysis since distillation reuses the training set in the transfer phase. The best of the 3 runs gets 73.0\%, which results in 0.94 effective extra label bits per example given by the teacher compared to a student that only sees the binary labels. This assumes that the teacher provides noisy one-hot encoded subclass labels (``hard" information) to the student; distillation can also benefit from ``soft" information (small differences in relative probabilities), which can increase the effective number of subclass bits per example. Even with this simple model, however, our subclass teacher can already provide roughly double the amount of label information per example. Additionally, we would like the subclass predictions for each example to be ``peaky", resulting in probability mass concentrated mostly in a single subclass. This can be translated to having low-entropy predictions. 
For the network trained without the auxiliary loss, the average entropy is 0.13 $\pm$ 0.02 bits, while it increases to 0.42 $\pm$ 0.05 bits using the auxiliary loss, which is still far away from 3.32 bits for the uniform distribution. However, just having low-entropy predictions is not enough, since, for all examples belonging to a given dataset class, the network may assign a confident prediction to the same subclass. Therefore, we would like to ensure that after making a hard decision ($\argmax$), the distribution of subclass utilization is close to the uniform distribution (high entropy). The subclass utilization entropy is 1.87 $\pm$ 0.11 bits (without) and 3.19 $\pm$ 0.02 bits (with) the auxiliary loss. This shows that the auxiliary loss helps the subclass predictions to be confident and diverse at the same time, resulting in discovery of the original subclasses for the CIFAR-2x5 example. \subsubsection{Subclass distillation} In this section, we investigate how to transfer the teacher's knowledge to a low-capacity student. We pick the AlexNet architecture as the student \citep{krizhevsky2012imagenet}. Results are shown in Table \ref{tab:student-results}. We start by training the network on the two tasks without distillation, as a baseline. We observe a gap of 2.2\% between a network trained with subclass labels (CIFAR-10) and a network without access to this extra information (CIFAR-2x5). Next, we train the student in two different situations. First we use conventional distillation. We observe a 1.0\% accuracy gain compared to the baseline student. Then we train the same student with penultimate layer distillation and we get a gain similar to conventional distillation: 1.0\% accuracy. Finally, we test subclass distillation, where we distill from a teacher that was trained to perform binary classification, but with the subclass head and auxiliary loss. With subclass distillation, we observe a 2.3\% accuracy improvement compared to the baseline student. The subclass distillation student can also classify the examples over 10 classes with 68.3\% accuracy, which is slightly below the teacher (70.2\%, the best of 3 runs). Note that the student trained with subclass distillation can completely recover the 2.2\% gap between the models trained with hard targets on CIFAR-10 and CIFAR-2x5 without ever seeing the ``true" subclass labels. \begin{table}[t] \caption{Student/AlexNet results over 3 runs. The baselines (first two rows) correspond to training the network with only the labels in the dataset. The distillation results correspond to training the student to match the teacher's class predictions (\textbf{D}istillation), to match the penultimate layer's activations (\textbf{P}enultimate \textbf{L}ayer \textbf{D}istillation) or the teacher's subclass predictions (\textbf{S}ub\textbf{C}lass \textbf{D}istillation).} \label{tab:student-results} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccccr} \toprule CIFAR- &D&PL-D&SC-D&acc. (2)&acc. (10)\\ \midrule 10 & & & &91.3$\pm$ 0.1 & 86.7 $\pm$ 0.2\\ \midrule 2x5 & & & &89.1$\pm$ 0.2 & \\ 2x5 &$\surd$ & & &90.1$\pm$ 0.1 & \\ 2x5 &&$ \surd$ & &90.1$\pm$ 0.1 & \\ 2x5 & & & $\surd$ &\bf{91.4}$\pm$ 0.2 & 68.3 $\pm$ 0.2 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} \subsubsection{Training speed} In addition to improving performance, subclass distillation also makes training faster. 
Figure \ref{fig:training_speed} shows the evolution of accuracy on the validation set through training. First we train a baseline network using only the dataset's ``hard" labels represented by the blue curve and the second row in Table \ref{tab:student-results}. We observe a large variation of performance early in training and performance increase is slow. When we train the student with conventional distillation (D), shown in green, training progresses much faster, and the final performance is better than the baseline. Since the teacher provides only a single real number per training example, there is not much information to enable the student to significantly outperform the baseline. Subclass distillation (SC-D), shown in red, addresses this issue. This results in faster training, more stable performance and higher final accuracy, matching a student trained directly on the ``true hidden" subclasses (blue dashed line). Note that both the subclass teacher and student have only seen binary labels. Finally, we show the results of penultimate layer distillation (PL-D). Although the performance is similar to distillation, training is slower, as the student tries to match the 128-dimensional teacher's activations, which may have directions that are not important for final classification. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{cifar_distill} \end{center} \caption{CIFAR-2x5: Evolution of validation accuracy of a student (AlexNet) during training and comparison between: training only with dataset labels (baseline binary targets), distillation (D), penultimate layer distillation (PL-D) and our proposed solution, subclass distillation (SC-D). For reference, we add the performance of the teacher (ResNet-20) trained on binary labels and a student trained on 10-ary labels but evaluated on binary classification (baseline 10-ary targets).} \label{fig:training_speed} \end{figure} \subsection{CelebA} Although CIFAR-2x5 is suitable to demonstrate the subclass distillation concept and we can show significant gains in performance and training speed, the fact that the true subclass structure matches our choice of the number of subclasses makes the task easier. Therefore, we decided to test our approach on CelebA, a more realistic and challenging dataset. CelebA comprises 202,599 images of celebrity faces, annotated with 40 binary attributes that are highly correlated and unbalanced. We pick the male/female classification task and we use 10 subclasses per class, which does not match the number of features. We obtain 1.51\% error rate using a ResNet-20 network (averaged over 3 runs). For some of the annotated labels, we can find a corresponding subclass that is activated by said feature. For example, in Fig. \ref{fig:celeba}, we show the proportion of examples in the validation set labeled ``blond" in each subclass, where the first 10 subclasses represent the ``female" class and the remaining the ``male" one. Dashed lines represent the average of the class (more female than male blonds in the dataset). We highlight examples that activate the first and ninth subclass and we observe that indeed the teacher has split the predictions into semantic subclasses and we speculate that this helps distillation. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{celeba} \end{center} \caption{CelebA: proportion of examples per subclass that have the ``blond-hair" feature. 
We highlight some examples of subclass ``0" and ``8", where we observe that our teacher network splits the dataset into semantically meaningful subclasses. Dashed lines represent the class-average (female/male).} \label{fig:celeba} \end{figure} Next, we transferred knowledge from the teacher (ResNet-20) to a student (AlexNet). Results are shown in Table \ref{tab:celeba-results} in terms of error rate for the male/female prediction. The teacher achieves a 1.51\% error rate while a student trained only with the hard labels achieves 2.05\%. Using conventional distillation, the error drops to 1.83\%, while with subclass distillation we achieve the best performance of 1.70\%. This shows that the learned subclass factorization is useful for distillation and helps the student generalize better. \begin{table}[t] \caption{CelebA: results over 3 runs. Both the teacher (ResNet-20) and student (AlexNet) are trained to predict the binary male/female label. Distillation results correspond to training the student to match the teacher's class predictions (\textbf{D}istillation) or the teacher's subclass predictions (\textbf{S}ub\textbf{C}lass \textbf{D}istillation).} \label{tab:celeba-results} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccccr} \toprule Net&D&SC-D&Error rate\\ \midrule Teacher & & &1.51$\pm$ 0.04 \\ \midrule Student & & &2.05$\pm$ 0.07 \\ Student &$\surd$ & &1.83$\pm$ 0.05 \\ Student &&$ \surd$ &\bf{1.70}$\pm$ 0.12 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} \subsection{Criteo} In our CIFAR-2x5 and CelebA experiments, we ignored some of the available supervision during training time and instead used it for evaluation, in order to verify that our approach learns meaningful subclasses. In a real-world scenario, we would use all the available information for training. Therefore, we also tested our approach on a binary dataset without a known subclass structure, the Criteo click prediction dataset \cite{criteo_labs_2017}. This dataset consists of anonymized real-valued and categorical features. The target is a binary label indicating whether the ad was clicked. Subclass distillation accelerates training on the Criteo dataset and leads to accuracy improvements when limited data is used for distillation. We use the large version of this dataset and we downsample the non-click examples to create a balanced dataset. The teacher is a 5-layer fully-connected network achieving 71.5\% accuracy, while the student is a 1-hidden-layer network achieving 71.4\%. Note that a tiny accuracy improvement is significant in click prediction tasks since it results in a large revenue increase for large user bases \citep{wang2017deep}. We then compare distillation to subclass distillation. Both achieve 71.6\% accuracy, which is better than the teacher. More importantly, subclass distillation again trains faster, as it provides more information about teacher generalization per example, but the dataset is so big that this does not affect final performance. If we artificially reduce the amount of data that the student is trained on (10\% of the total) to exaggerate the performance difference, then we observe accuracy gains by using subclass distillation. The ability to perform distillation with limited data is attractive for large datasets such as Criteo (over 1 terabyte in size). 
\begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{Criteo} \end{center} \label{fig:nn} \caption{Criteo click prediction: Evolution of validation accuracy of a student during training and comparison between distillation (D) and subclass distillation (SC-D). When the transfer set contains all the training data, SC-D trains faster but final performance is comparable. By reducing the transfer set by a factor of 10, we exaggerate the performance gap and SC-D outperforms D as it provides the student more bits per training example.} \end{figure} \subsection{MNIST} As our final experiment, we split the MNIST dataset into a binary classification task (MNIST-2x5) by grouping digits 0 to 4 into one class and digits 5 to 9 into the other. We train a convolutional teacher to produce 10 subclasses. Fig. \ref{fig:mnist} shows how the network groups the examples into subclasses (each column represents one subclass). This network achieves a 0.73\% $\pm$ 0.09 error rate in the binary classification task. A fully connected student with 2 hidden layers achieves 1.57\% $\pm$ 0.06. We then distill the teacher using conventional distillation (1.23\% $\pm$ 0.04), while subclass distillation achieves 0.93\% $\pm$ 0.06. More interestingly, we can train the student without the hard targets by encouraging the student to mimic the intra-class relative probabilities provided by the teacher. We apply a separate softmax to each group of subclass logits to keep the relative intra-class probabilities and erase the relative class probabilities. Then we train the student with two cross-entropy losses over 5 subclasses, one per class. This way, the student never sees the binary label, but surprisingly learns it indirectly, obtaining a 2.06\% $\pm$ 0.18 error rate. This is analogous to the experiment in Section 3 of \citet{hinton2015distilling}, where the authors omit the digit ``3" from the transfer set and the network nevertheless learns to classify 3s correctly just by observing the soft predictions for the digit ``3" on the digits it has seen. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{mnist} \end{center} \caption{MNIST unsupervised subclass discovery. Examples of the validation set and which subclass logit they activate most (one column per subclass).} \label{fig:mnist} \end{figure} \section{Related work} Several distillation methods have been proposed in the last few years \citep{dist1,dist2,dist3,dist4,dist5,dist6,dist7,dist8,dist9,dist10}. Some methods focus on teachers and students with the same architecture which can be trained sequentially \citep{furlanello2018born,xie2019self} (using unlabeled data and a noisy student), \citep{bagherinezhad2018label} (using extensive data augmentation) or in parallel \citep{anil2018large} (ensemble). Other methods distill from earlier layers using an $L_2$ loss \citep{romero2014fitnets, sun2019patient}. The relationship between our method and these methods is described in Section \ref{sec:pld}. Recently, \citet{tian2019contrastive} proposed to distill from the penultimate layer using a contrastive loss. The relationship between our approach and contrastive distillation is looser; we use a contrastive loss during the teacher training phase to learn the subclasses, while in their method it is used during the distillation phase. Our method also bears some resemblance to clustering methods. 
\citet{ji2018invariant} use a contrastive loss similar to our auxiliary loss (they use pairs of data-augmented examples to create an anchor, whereas our loss effectively pairs the example with itself) to obtain state-of-the-art results on CIFAR-10 in unsupervised and semi-supervised settings. A similar loss has been used for representation learning in \citep{hjelm2018learning, tian2019contrastive, he2019momentum, oord2018representation}. In these works, the loss is applied either in an unsupervised setting, or in a semi-supervised setting where only part of the dataset has labels. By contrast, in our case all examples have a binary label, and we want to learn the hidden subclass labels. Moreover, these methods learn a high-dimensional representation of the data, whereas we learn exactly the number of subclasses, with no need for a linear layer on top. An alternative method for unsupervised clustering with deep neural networks that is not based on the contrastive loss can be found in \citet{kosiorek2019stacked}, where capsule networks are used to directly learn MNIST and CIFAR-10 classes. The closest method to ours is that of \citet{krause2010discriminative}, which also uses a probabilistic classifier for clustering by optimizing for class balance and class separation, although the authors use a different loss for this purpose and perform experiments with kernel methods rather than deep neural networks. \section{Conclusion} We propose subclass distillation, a distillation method in which the teacher divides each class into many subclasses that it invents, and the student matches these subclass probabilities. We show that this improves learning compared to conventional distillation and penultimate layer distillation in terms of generalization and/or training speed. We showed that with a simple auxiliary loss, our teacher divides examples of the dataset into semantically meaningful subclasses. The loss encourages the subclass predictions to be confident and diverse. We showed that when the underlying subclass structure is known and matches the chosen number of subclasses (CIFAR-2x5 and MNIST-2x5), we can discover the original subclasses with high accuracy, and subclass distillation outperforms other distillation methods. When there is a subclass structure in the dataset that does not match the chosen number of subclasses (CelebA), our method can still discover semantically meaningful subclasses that help subclass distillation. Finally, when there is no known subclass structure (Criteo), subclasses can provide faster transfer and more bits per example when the available data is limited. We further validated that subclass distillation provides additional bits per example by showing on MNIST that the student can learn to predict the binary label without any binary supervision, just by mimicking the teacher's intra-class subclass relative probabilities.
\section{#1}} \newcommand{\col}[2]{\left(\begin{array}{c} #1 \\ #2 \end{array}\right)} \newcommand{\fl}[1]{ \lfloor #1 \rfloor } \newcommand{\ket}[1]{\vert #1 \rangle} \renewcommand{\mod}[1]{\ (\mathrm{mod}\ #1)} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \def\M{\mathcal{M}} \def\Im{\mathop{\mathrm Im }} \def\Re{\mathop{\mathrm Re }} \def\half{\frac{1}{2}} \def\Z{\mathbb{Z}} \def\F{\mathbb{F}} \def\C{\mathbb{C}} \def\H{\mathcal{H}} \def\K{\mathcal{K}} \def\G{\mathcal{G}} \def\cL{\mathcal{L}} \def\R{\mathbb{R}} \def\N{\mathcal{N}} \def\P{\mathbb{P}} \def\Pp{\mathbb{P}} \def\tC{\tilde{C}} \def\talpha{\tilde{\alpha}} \def\nhv{H-V} \def\n3a{t} \def\Tr{{\mathrm{Tr}}} \def\tr{{\mathrm{tr}}} \def\ord{\mathrm{ord}} \def\O{\mathcal{O}} \def\cg{G} \def\phiorig{\phi} \def\phit{\phi_0} \def\phif{\phi_1} \def\phifb{\nu} \def\phis{\psi_2} \def\phisa{\phi_2} \def\phie{\psi_1} \def\ola{\psi_3} \def\alphabeta{\xi} \def\pz{\phit} \def\po{\phif} \def\betah{\rho} \def\ho{h^{1, 1}} \newcommand{\eq}[1]{(\ref{#1})} \title{Matter and singularities} \author{David R. Morrison$^{1,2}$ and Washington Taylor$^3$\\ $^1$Departments of Mathematics and Physics\\ University of California, Santa Barbara\\ Santa Barbara, CA 93106, USA\\ \\ $^2$Institute for the Physics and Mathematics of the Universe\\ University of Tokyo\\ Kashiwa, Chiba 277-8582, Japan\\ \\ $^3$Center for Theoretical Physics\\ Department of Physics\\ Massachusetts Institute of Technology\\ 77 Massachusetts Avenue\\ Cambridge, MA 02139, USA\\ \\ {\tt drm} {\rm at} {\tt math.ucsb.edu}, {\tt wati} {\rm at} {\tt mit.edu} } \preprint{UCSB Math 2011-10, IPMU11-0108, MIT-CTP-4200} \abstract{We analyze the structure of matter representations arising from codimension two singularities in F-theory, focusing on gauge groups $SU(N)$. We give a detailed local description of the geometry associated with several types of singularities and the associated matter representations. We also construct global F-theory models for 6D and 4D theories containing these matter representations. The codimension two singularities encountered include examples where the apparent Kodaira singularity type does not need to be completely resolved to produce a smooth Calabi-Yau, examples with rank enhancement by more than one, and examples where the 7-brane configuration is singular. We identify novel phase transitions, in some of which the gauge group remains fixed but the singularity type and associated matter content change along a continuous family of theories. Global analysis of 6D theories on $\P^2$ with 7-branes wrapped on curves of small degree reproduces the range of 6D supergravity theories identified through anomaly cancellation and other consistency conditions. Analogous 4D models are constructed through global F-theory compactifications on $\P^3$, and have a similar pattern of $SU(N)$ matter content. This leads to a constraint on the matter content of a limited class of 4D supergravity theories containing $SU(N)$ as a local factor of the gauge group.} \begin{document} \section{Introduction} Over the last decade, the development of D-branes in string theory has led to dramatic new insights into the connection between gauge theory and geometry. 
This connection is made particularly explicit in the language of F-theory \cite{Vafa-f, Morrison-Vafa, Morrison-Vafa-II}, where gauge theory coupled to supergravity in an even number of space-time dimensions is described by an elliptically fibered Calabi-Yau manifold over a base $B$ of complex dimension $d$ for a low-energy theory in $10-2d$ space-time dimensions. Recent reviews of the aspects of F-theory relevant for the discussion in this paper are given in \cite{Denef-F-theory, WT-TASI}. In F-theory, the structure of the gauge group in the low-energy theory is primarily encoded in the singularities of the elliptic fibration (with certain global aspects of the gauge group encoded in the Mordell--Weil and Tate--Shafarevich groups of the elliptic fibration \cite{pioneG,triples}). In the language of type IIB string theory, the gauge group is carried by 7-branes wrapped on topologically nontrivial cycles (divisors) of the F-theory base manifold $B$. In the geometrical language of F-theory such 7-branes are characterized by complex codimension one singularities in the structure of the elliptic fibration. Such codimension one singularities were systematically analyzed by Kodaira \cite{Kodaira} well before the advent of F-theory. For a base of complex dimension one, such singularities are characterized by the familiar ADE classification of simple Lie algebras. For each type of codimension one singularity, the low-energy gauge group contains a local factor with the associated nonabelian Lie algebra. When the base is of higher dimension, monodromies around these codimension one loci can give rise to non-simply laced groups as well as the simply-laced groups found on bases of dimension one \cite{Bershadsky-all}. While the geometry of gauge groups is well understood in F-theory, the geometry of matter representations in such theories has only been worked out in a limited set of cases, and there is no general classification of the range of possibilities. Many types of matter representations can arise from local codimension two singularities in the elliptic fibration in the F-theory picture. Other types of matter (such as matter in the adjoint representation of $SU(N)$) can arise from the global structure of the divisor locus \cite{enhanced,Witten-mf}. For the simplest types of representations, such as the fundamental representation of ADE groups, or the two-index antisymmetric representation of $SU(N)$, matter fields arise from a local rank one enhancement of the singularity structure, and the matter content is easily determined from a decomposition of the adjoint representation of the correspondingly enhanced group, as described by Katz and Vafa \cite{Katz-Vafa}. When the singularity structure of the elliptic fibration becomes more intricate, the associated matter representations become more exotic. Other examples associated with rank one enhancement were worked out in \cite{Bershadsky-all, Katz-Vafa, Grassi-Morrison, Grassi-Morrison-2}. In this paper we consider rank one enhancements as well as other kinds of singularity structures. In some cases the apparent Kodaira singularity associated with a coordinate transverse to the brane does not need to be completely resolved for the elliptic fibration to become smooth. In other cases, the local enhancement of the gauge group increases the rank by more than one. 
By carefully analyzing the local structure of such singularities, we can see how the resolution of the geometry gives rise to matter in a natural generalization of the rank one enhancement mechanism. When the codimension one locus in the base carrying a local factor of the gauge group itself becomes singular, corresponding to a singular geometry for the 7-branes themselves, matter representations are possible that cannot be realized through elliptic fibrations whose nonabelian gauge symmetry corresponds to a smooth component of the discriminant locus. The specific local singularity types we consider in this paper are motivated by global constructions. We develop a general analysis of F-theory Weierstrass models for theories with $SU(N)$ gauge group localized on a generic divisor $\sigma$ on a generic base $B$. As $N$ increases, the set of possible singularity structures for the Weierstrass model becomes more complicated. While we do not complete the general analysis of all possibilities, we systematically show how different singularity types can arise through different choices of algebraic structure for the Weierstrass model. (This analysis complements the results of \cite{Morrison-sn}, where the form of the Weierstrass model is determined for large $N$.) We then apply this general analysis to the specific cases of 6D and 4D F-theory models on bases $\P^2$ and $\P^3$. In six dimensions, the space of allowed supergravity theories is strongly constrained by anomalies and other simple features of the low-energy theory \cite{Grassi-Morrison, Grassi-Morrison-2, universality, finite, KMT, tensors, 0, Seiberg-Taylor}. We can therefore combine the classification of theories from low-energy constraints with the analysis of singularity structures in global F-theory models to develop a fairly complete picture of the set of allowed matter representations in 6D quantum supergravity theories and their realizations through F-theory. In particular, when $\sigma$ is a degree one curve (complex line) on $\P^2$ we are able to reproduce all possible matter configurations for an $SU(N)$ gauge group compatible with anomaly conditions. The structure of the space of 4D F-theory constructions and possible matter representations for an $SU(N)$ theory is closely parallel to the 6D story; though fewer constraints are understood from low-energy considerations in four dimensions, similar restrictions appear on matter representations arising in F-theory constructions. The work presented here represents some first steps towards a systematic understanding of the structure of matter in the global space of supergravity theories arising from F-theory compactifications. In Section \ref{sec:local} we give the results of a local analysis for a variety of codimension two singularities associated with matter transforming under an $SU(N)$ gauge group. We summarize the geometric resolution and group theory in each case, with details of the calculations given in an Appendix. In Section \ref{sec:global} we develop the general structure of Weierstrass models with gauge group $SU(N)$ realized on a specific divisor. We use this general analysis in Section \ref{sec:6D} to explicitly construct classes of global models in 6D without tensor multiplets associated with F-theory compactifications on $\P^2$, and in Section \ref{sec:4D} to construct some 4D models associated with F-theory on $\P^3$. Section \ref{sec:conclusions} contains concluding remarks and discussion of further directions and related open questions. 
As this work was being completed we learned of related work on codimension two singularities by Esole and Yau \cite{Esole:2011sm}. \section{Local analysis of codimension two singularities} \label{sec:local} The matter structure associated with any elliptic fibration can be understood through a local analysis of the singularity structure of the fibration. Such a local analysis involves the simultaneous resolution of all singularities in the elliptic fibration along the lines of \cite{Katz-Morrison}. The way in which matter arises in F-theory can be understood from the related geometry of matter in type IIA compactifications \cite{enhanced} and in M-theory compactifications as discussed by Witten \cite{Witten-mf}. Generally, matter fields arise from $\P^1$'s in a smooth Calabi-Yau that have been shrunk to vanishing size in the F-theory limit. When these $\P^1$'s arise over codimension two loci in the F-theory base they correspond to local codimension two singularities giving rise to localized matter. In addition to the matter arising from local singularities, there are also global contributions to the matter content from $\P^1$'s that live in continuous families over the divisor $\sigma$ in the base supporting the local factor of the gauge group. For example, in a 6D model there are $g$ adjoint matter fields for $SU(N)$, where $g$ is the genus of the curve defined by $\sigma$. We focus in this paper on the local contributions to the matter content, though as we discuss in Section \ref{sec:local-singular}, global matter contributions can become local, for example when $\sigma$ develops a node. In this section we describe the detailed local geometry of matter in some representations of the gauge group $SU(N)$, which is associated with a local $A_{N -1}$ singularity on a codimension one locus (divisor) $\sigma$ in the F-theory base $B$. We will describe several different classes of singularities in the discussion in this section. We begin with the simplest types of singularities, where the matter content can be understood through the standard Katz--Vafa \cite{Katz-Vafa} analysis, and then consider cases where the codimension two enhanced singularity is incompletely resolved. We then discuss cases where the relevant component of the discriminant locus itself is singular. In this paper we use explicit geometric methods to analyze F-theory singularities. Recently Donagi and Wijnholt \cite{Donagi-Wijnholt} and Beasley, Heckman, and Vafa \cite{Beasley-hv} have developed an approach to resolving singularities on intersecting 7-branes based on normal bundles and a topological field theory on the world-volume of the intersection. It would be interesting to develop a better understanding of how the analyses of this paper can be understood from the point of view of the topological field theory framework. To fix notation, we will be describing a local elliptic fibration characterized by a Weierstrass model \begin{equation} y^2 = x^3 + fx + g \, \label{eq:Weierstrass} \end{equation} where $f, g$ are local functions on a complex base $B$. We choose local coordinates $t, s$ on the base $B$ so that the gauge group $SU(N)$ arises from a codimension one $A_{N -1}$ singularity on the locus $ \sigma (t, s) = 0$. For compactification on an elliptically fibered Calabi-Yau threefold, $s, t$ are the only two local coordinates needed on the base. For 4D theories associated with a Calabi-Yau fourfold, another coordinate $u$ is needed for the base. 
This additional coordinate plays no r\^ole in the analysis in this section. In the simplest (smooth locus $\{\sigma = 0\}$) cases, we can choose local coordinates with $\sigma = t$, so that the codimension one singularity arises at $t = 0$ and the codimension two singularity of interest arises at the coordinate $s = 0$. In general, the Weierstrass form (\ref{eq:Weierstrass}) of an $A_{N -1}$ singularity describes a singularity associated with a double root at $x = x_0$ in the elliptic fiber, where $3x_0^2 + f (t = 0) = 0$, so that it is convenient to change coordinates to $x' = x-x_0$. The singularity then arises at $x' = 0$, though the description of the elliptic fibration then contains an $x^2$ term on the RHS of (\ref{eq:Weierstrass}). This ``Tate form'' of the description of the elliptic fibration is often used in the mathematical analysis of singular elliptic fibrations \cite{Tate, Bershadsky-all, Morrison-sn}, but is less convenient for a global description of the elliptic fibration in the context of F-theory, where the Weierstrass form allows a systematic understanding of the degrees of freedom associated with moduli of the physical theory. Some analyses of matter content associated with codimension two singularities related to constructions we consider here are also considered in \cite{Donagi:2011jy} from the spectral cover point of view. \subsection{Standard rank one enhancement: $A_3 \rightarrow D_4$} \label{sec:a3} In the simplest cases, matter arises from a codimension two singularity in which the $A_{N -1}$ singularity, which is associated with a rank $N -1$ gauge group, is enhanced to a singularity such as $A_N$ or $D_N$ of one higher rank. Such matter is characterized by the breaking of the adjoint of the corresponding rank $N$ group\footnote{We stress that this is not in general an enhancement of the gauge symmetry group. However, the adjoint breaking provides a convenient dictionary for the combinatorics involved, which works because in special cases there {\em is}\/ a related gauge symmetry enhancement and Higgs mechanism.} through an embedding of $A_{N -1}$, as described in \cite{Bershadsky-all, Katz-Vafa}. In particular, matter in the fundamental ({\tiny\yng(1)}) representation can be realized through a local codimension two singularity enhancement $A_{N -1} \rightarrow A_N$ and matter in the two-index antisymmetric ( ${\tiny\yng(1,1)}$ ) representation (for which we will sometimes use the shorthand notation $\Lambda^2$) can be realized through the enhancement $A_{N -1} \rightarrow D_N$. Matter in the three-index antisymmetric ( ${\tiny\yng(1,1,1)}$ or $\Lambda^3$) representation can also be realized for $SU(6), SU(7),$ and $SU(8)$ through local enhancement $A_5 \rightarrow E_6$ \cite{Bershadsky-all, Katz-Vafa, Grassi-Morrison-2}, $A_6\rightarrow E_7$ \cite{Katz-Vafa, Grassi-Morrison-2} and $A_7 \rightarrow E_8$ \cite{Grassi-Morrison-2}. As a simple example of this kind of singularity enhancement consider a Weierstrass model for the codimension two singularity enhancement $A_3 \rightarrow D_4$. Though the basic physics of the matter associated with this configuration are well understood, we go through the details as a warmup for more complicated examples. We consider an $A_3$ singularity on the locus $\sigma =t = 0$ with a $D_4$ singularity at $s = 0$, given by the Weierstrass form (\ref{eq:Weierstrass}) with \begin{eqnarray} f & = & -\frac{1}{3}s^4-t^2 \label{eq:a3-fg}\\ g & = & \frac{2}{27} s^6 +\frac{1}{3} s^2 t^2 \,. 
\nonumber \end{eqnarray} This particular form for $f, g$ is chosen to match a form of this singularity that appears in the general global Weierstrass analysis in the next section of the paper. The $A_3$ form of the singularity follows from the standard Kodaira classification \cite{Kodaira, Morrison-Vafa-II}, since at generic $s \neq 0$ $f, g$ have degree $0$ in $t$, while the discriminant \begin{equation} \Delta = 4 f^3 + 27 g^2 = -s^4t^4-4t^6 \label{eq:discriminant} \end{equation} is of degree 4. At $s = 0$, $f$ has degree 2 and the discriminant has degree 6, so we have a $D_4$ singularity. As mentioned above, it is convenient to change coordinates \begin{equation} x \rightarrow x + \frac{1}{3}s^2 \end{equation} to move the singularity to $x = 0$. The Weierstrass equation then becomes \begin{equation} \Phi = -y^2 + x^3 + s^2 x^2 -t^2 x = 0 \,. \label{eq:a3} \end{equation} This gives a local equation for the Calabi-Yau threefold described by an elliptic fibration in coordinates $(x, y, t, s) \in \C^2 \times \C^2$ where $x, y$ are (inhomogeneous) local coordinates on the elliptic fiber living in $\P^{2, 3, 1}$ and $s, t$ are local coordinates on the base $B$. An explicit analysis of the singularity resolution of the Calabi-Yau threefold defined by \eq{eq:a3} is given in Section \ref{sec:a3-appendix} of the Appendix. Even in this rather simple case, the details of the resolution are slightly intricate. At a generic point $s \neq 0$ along the $A_{3}$ singularity $\sigma$, a blow-up in the transverse space gives two $\P^1$'s ($C_{\pm}$) fibered over $\sigma$, which intersect at a singular point for each $s$. A further blow-up gives a third $\P^1$ ($C_2$) fibered over $\sigma$, which intersects each of $C_\pm$, giving a realization of the Dynkin diagram $A_3$ in terms of the intersections of these curves. At $s = 0$, the resolution looks rather different. The first blow-up gives a single curve $\delta_1$, which both $C_+, C_-$ approach in the limit $s \rightarrow 0$. A further blow-up at a singular point on $\delta_1$ gives $\delta_2 \sim C_2$, and codimension two conifold-type double point singularities occur at two other points on $\delta_1$. Each of these codimension two singularities has two possible resolutions, giving four possible smooth Calabi-Yau threefold structures related by flops. In each resolution an additional $\P^1$ is added at $s = 0$, completing the $D_4$ Dynkin diagram. An example of how the curves $C_a$ at generic $s$ converge to the curves $\delta_b$ at $s = 0$ for one of the four combinations of resolutions is shown graphically in Figure~\ref{f:a3d4}. \begin{figure} \begin{center} \begin{picture}(200,160)(- 100,- 90) \put(-100,0){\makebox(0,0){\includegraphics[width=8cm]{d4-a.eps}}} \put(-100,-80){\makebox(0,0){(a)}} \put(100,0){\makebox(0,0){\includegraphics[width=8cm]{d4-b.eps}}} \put(100,-80){\makebox(0,0){(b)}} \end{picture} \end{center} \caption[x]{\footnotesize Embedding of $A_3 \rightarrow D_4$ singularity encoded in eq.\ (\ref{eq:a3d4}). Curves in $D_4$ are depicted in black solid lines, while $A_3$ curves are in colored dashed lines. Two different methods are used to depict the same embedding. (a) depicts each curve as a line, with intersections associated with crossings, as in much mathematical literature. (b) depicts $D_4$ curves in Dynkin diagram notation, with nodes for curves and lines for intersection, and depicts $A_3$ curves as colored dashed curves depicting $\P^1$'s at generic $s$ and limit as $s \rightarrow 0$, with intersections denoted by ``x'''s. 
There are four possible embeddings depending upon choices for codimension two resolutions. Choice depicted has $\tau_+ = 1, \tau_-= 0$, according to notation in Section \ref{sec:a3-appendix} of Appendix, so for example $C_1^+ \rightarrow \delta_1 + \delta_2^+$ as $s \rightarrow 0$.} \label{f:a3d4} \end{figure} The additional matter associated with the $D_4$ can be understood by embedding $A_3 \subset D_4$ and decomposing the adjoint of $D_4$ into irreducible representations of $A_3$. The roots in the adjoint of $D_4$ correspond to distinct $\P^1$'s at $s = 0$ in the resolved Calabi-Yau. The subset of these roots corresponding to the adjoint of $A_3$ are associated with the $SU(4)$ vector bosons, and the remainder are matter fields. The adjoint of $D_4$ decomposes into irreducible representations of $A_3$ as ${\bf 28} \rightarrow {\bf 15} + {\bf 6} + {\bf \bar{6}} + {\bf 1}$. In a 6D theory, matter hypermultiplets live in quaternionic representations of the gauge group. The ${\bf 6}$ and ${\bf \bar{6}}$ combine into a single quaternionic matter hypermultiplet in the $\Lambda^2$ representation of $A_3$ in 6D. An easy way to see that this representation appears is from the Dynkin diagram description of the embedding $A_3 \subset D_4$. (The embedding shown in Figure~\ref{f:a3d4} is equivalent to this embedding under an isomorphism of $D_4$.) \begin{center} \begin{picture}(200,50)(- 100,- 35) \put(0,0){\makebox(0,0){$\longrightarrow$}} \put(-60,-5){\line( 1, 0){20}} \put(40,-5){\line( 1, 0){20}} \multiput(- 60,-5)(10,0){3}{\circle*{4}} \multiput(40,-5)(10,0){3}{\circle*{4}} \put(50,5){\circle{4}} \multiput(50,-5)(0,4){3}{\line(0,1){2}} \put(-50,-20){\makebox(0,0){$A_3$}} \put(50,-20){\makebox(0,0){$D_4$}} \end{picture} \end{center} The Dynkin weight $[0, 1, 0]$ is the highest weight of the $\Lambda^2$ representation of $A_3$. The $\P^1$ associated with this state is precisely the extra (empty) node added to form $D_4$ from $A_3$ in this embedding. The weight of this state can be determined from the intersection numbers of this $\P^1$ with the roots of $A_3$; the additional $\P^1$ has intersection number 1 with the middle root of $A_3$ and no intersection with the other roots. (See \cite{Slansky} for a review of the notation of Dynkin weights and the relevant group theory.) \subsection{Incomplete and complete resolutions} \label{sec:a5} We now consider a slightly more complicated set of enhancements of an $A_{N -1}$ codimension one singularity. In this case we consider the enhancement of $A_5$ by various types of local singularities and the associated matter content. We will begin with the example of $A_5$ enhanced to $D_6$ through a standard rank one enhancement quite similar to the preceding analysis of $A_3 \rightarrow D_4$. This again gives a matter field in the $\Lambda^2$ antisymmetric representation. We will then consider the effect of a local $E_6$ singularity. Depending upon the degree of vanishing of certain terms in the local defining equation, the $E_6$ can either be incompletely resolved or can be completely resolved in the threefold. The $E_6$ singularity gives rise to matter in the three-index antisymmetric ($\Lambda^3$) representation; in the 6D context we get a half or full hypermultiplet in this representation depending on whether the singularity is completely resolved. In each case, we choose a non-generic Weierstrass model, with a specific form motivated by the global analysis carried out in the following section. 
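For reference, the group theory of the first case below exactly parallels the $A_3 \subset D_4$ example just described: under the rank one enhancement $A_5 \subset D_6$ the adjoint of $D_6$ decomposes as
\begin{equation}
{\bf 66} \rightarrow {\bf 35} + {\bf 15} + {\bf \bar{15}} + {\bf 1} \,,
\end{equation}
so the localized matter is again a single hypermultiplet, now in the $\Lambda^2$ representation of $A_5$.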
\subsubsection{Enhancement $A_5 \subset D_6$} We begin with the Weierstrass coefficients \begin{eqnarray} f & = & -\frac{1}{3}s^4 -2s^3t + (2s^2 -3)t^2 + 3 t^3, \label{eq:a5-d6-fg}\\ g & = & \frac{2}{27} s^6 + \frac{2}{3} s^4 t + (2 s^2-\frac{2}{3} s^4) t^2 +(2-3 s^2) t^3 + (s^2 -3)t^4 \,. \nonumber \end{eqnarray} These describe an $A_5$ singularity on the locus $t = 0$ enhanced to a $D_6$ singularity at $s = 0$, where the orders of vanishing of $f, g, \Delta$ are 2, 3, 8. Changing variables through \begin{equation} x \rightarrow x + \frac{1}{3}s^2+ t \end{equation} gives the local equation \begin{equation} \Phi = -y^2 + x^3 +s^2 x^2 +3x^2 t + 3t^3x + 2s^2 t^2 x + s^2 t^4 = 0\,. \label{eq:phi-a5-d6} \end{equation} \vspace*{0.05in} An analysis much like that of $A_3 \subset D_4$, summarized in Section \ref{sec:a5-d6} of the Appendix, shows that this singularity is resolved to give a set of curves with $D_6$ structure at $s = 0$, giving matter in the $\Lambda^2$ representation of $A_5$ with highest weight vector having Dynkin indices $[0, 1, 0, 0, 0]$. \subsubsection{Enhancement $A_5 \subset E_6$} We now consider a situation where $A_5$ is enhanced to $E_6$. The local model we consider is closely related to (\ref{eq:phi-a5-d6}). We begin with the Weierstrass coefficients \begin{eqnarray} f & = & -\frac{1}{3}\betah^4 -2 \betah^3t + (2\betah -3 \betah^2)t^2 + 3 t^3, \label{eq:a5-e6-fg}\\ g & = & \frac{2}{27} \betah^6 + \frac{2}{3} \betah^5 t + (2 \betah^4-\frac{2}{3} \betah^3) t^2 +(2 \betah^3-3 \betah^2) t^3 + (1 -3 \betah)t^4 \,. \nonumber \end{eqnarray} Changing variables through \begin{equation} x \rightarrow x + \frac{1}{3}\betah^2+ \betah t \end{equation} gives \begin{equation} \Phi = -y^2 + x^3 + \betah^2 x^2 +3 \betah x^2 t + 3t^3x + 2 \betah t^2 x + t^4 = 0\,. \label{eq:phi-a5-e6} \end{equation} We describe explicit global 6D models in which this singularity structure arises in Section \ref{sec:6D}. In (\ref{eq:phi-a5-e6}), the parameter $\betah$ can be either $\betah = s$ or $\betah = s^2$. The detailed analysis of the singularity resolution in both cases is carried out in Section \ref{sec:a5-e6} of the Appendix. To understand the results of this analysis it is helpful to clarify the structure of the $E_6$ singularity at $s = 0$. The Kodaira classification of singularities is really only applicable in the context of codimension one singularities. For generic $s$, we can take a slice at constant $s$, giving a codimension one singularity of type $A_5$ on each slice intersecting the curve at $t = 0$. To determine the type of singularity at $s = t = x = y = 0$, we consider a slice at $s = 0$. Just because there is a singularity in this slice, however, does not mean that the full Calabi-Yau threefold is singular. In particular, in the case at hand, when $\betah = s$, systematically blowing up the singularity at the origin allows the Calabi-Yau threefold to be smoothed before the full $E_6$ singularity has been resolved. At the final stage of this resolution process, there is an apparent singularity in the slice at $s = 0$ but the full threefold has no singularity. A diagram depicting the blown-up $\P^1$'s away from $s = 0$ ($C_a$'s) and at $s = 0$ ($\epsilon_b$'s) for the incomplete $E_6$ resolution from $\betah = s$ is shown in Figure~\ref{f:a5-e6x}.
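Explicitly, restricting the coefficients (\ref{eq:a5-e6-fg}) to the slice $s = 0$, where $\betah$ vanishes for either choice, gives
\begin{equation}
f|_{s=0} = 3t^3 \,, \qquad g|_{s=0} = t^4 \,, \qquad \Delta|_{s=0} = 4f^3 + 27g^2 = 27 t^8 (1 + 4t) \,,
\end{equation}
so the orders of vanishing $(3, 4, 8)$ in $t$ are those of a Kodaira type $IV^*$ ($E_6$) fiber in this slice.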
\begin{figure} \begin{center} \begin{picture}(200,160)(- 100,- 90) \put(-100,10){\makebox(0,0){\includegraphics[width=8cm]{e6-ax.eps}}} \put(-100,-90){\makebox(0,0){(a)}} \put(100,0){\makebox(0,0){\includegraphics[width=7cm]{e6-bx.eps}}} \put(100,-90){\makebox(0,0){(b)}} \end{picture} \end{center} \caption[x]{\footnotesize Embedding $A_5 \rightarrow E_6$ with incomplete resolution of $E_6$ singularity in threefold.} \label{f:a5-e6x} \end{figure} When $\betah = s^2$, the full $E_6$ singularity is resolved, giving the configuration depicted in Figure~\ref{f:a5-e6}. Although this explicit singularity resolution gives an embedding of $A_5 \subset E_6$ with a somewhat unconventional appearance, this embedding is unique up to automorphisms of $E_6$, so is equivalent to the embedding associated with extending the Dynkin diagram $A_5$ by adding a new node attached to middle node of the $A_5$ to form the $E_6$ diagram. \begin{center} \begin{picture}(200,70)(- 100,- 35) \put(0,0){\makebox(0,0){$\longrightarrow$}} \put(-70,-5){\line( 1, 0){40}} \put(30,-5){\line( 1, 0){40}} \multiput(- 70,-5)(10,0){5}{\circle*{4}} \multiput(30,-5)(10,0){5}{\circle*{4}} \put(50,5){\circle{4}} \multiput(50,-5)(0,4){3}{\line(0,1){2}} \put(-50,-20){\makebox(0,0){$A_5$}} \put(50,-20){\makebox(0,0){$E_6$}} \end{picture} \end{center} \begin{figure} \begin{center} \begin{picture}(200,180)(- 100,- 90) \put(-100,10){\makebox(0,0){\includegraphics[width=8cm]{e6-a.eps}}} \put(-100,-90){\makebox(0,0){(a)}} \put(100,0){\makebox(0,0){\includegraphics[width=7cm]{e6-b.eps}}} \put(100,-90){\makebox(0,0){(b)}} \end{picture} \end{center} \caption[x]{\footnotesize Embedding $A_5 \rightarrow E_6$ with complete resolution of $E_6$ singularity in threefold.} \label{f:a5-e6} \end{figure} Let us now consider the matter content in each of these situations. In the fully resolved $E_6$, we have the usual story of rank one enhancement and the adjoint of $E_6$ decomposes under $A_5 \subset E_6$ as \begin{equation} {\bf 78} = 3 \cdot {\bf 1} + {\bf 35} + 2 \cdot {\bf 20} \,. \end{equation} This gives matter in the 3-index antisymmetric ($\Lambda^3$) {\bf 20} representation of $A_5$. Again, the appearance of this matter representation is apparent from the Dynkin index $[0, 0, 1, 0, 0]$ associated with the intersection of the added node with the original nodes of the $A_5$. The possibility of this kind of matter associated with a local $E_6$ enhancement was previously discussed in \cite{Bershadsky-all, Katz-Vafa, Grassi-Morrison-2}. In a 6D theory, as in the $\Lambda^2$ matter story, the two {\bf 20}'s combine into a single full hypermultiplet. Because the ${\bf 20}$ is by itself already a quaternionic representation of $A_5$, however, this can also be thought of as two half-hypermultiplets. Now we consider the case where the $E_6$ is incompletely resolved. In this case, the set of roots of $E_6$ are not all associated with $\P^1$'s in the full Calabi-Yau over the point $s = 0$. Thus, the amount of matter is reduced. The root of $E_6$ that is not blown up, associated with the curve $\epsilon_4$ in the complete resolution depicted in Figure~\ref{f:a5-e6}, is orthogonal to all roots of $A_5$. For example, $\epsilon_4 \cdot C_1^+ = \epsilon_4 \cdot ( \epsilon_1 + \epsilon_3^+ + \epsilon_4) = -2 + 1 + 1 = 0$. We can therefore describe the matter content of the incompletely resolved $E_6$ by projecting in the direction parallel to $\epsilon_4$. This collapses the two ${\bf 20}$'s into a single matter representation (see Figure~\ref{f:collapse}). 
\begin{figure} \begin{center} \begin{picture}(200,80)(- 90,- 40) \multiput(-24,15)(8,0){7}{\circle*{4}} \multiput(-40, 0)(8,0){11}{\circle*{4}} \multiput(-24,-15)(8,0){7}{\circle*{4}} \put(60, 15){\makebox(0,0){{\bf 20}}} \put(60, 0){\makebox(0,0){{\bf 35}}} \put(60, -15){\makebox(0,0){{\bf 20}}} \put(70,15){\vector(1,-1){10}} \put(70,0){\vector(1,0){10}} \put(70,-15){\vector(1,1){10}} \put(110,0){\makebox(0,0){ {\bf 35} + {\bf 20}}} \put(-50,-15){\vector(0,1){30}} \put(-65,-10){\makebox(0,0){$\epsilon_4$}} \end{picture} \end{center} \caption[x]{\footnotesize A schematic depiction of the decomposition of the adjoint of $E_6$ under the action of $A_5$. The action of $A_5$ is taken to be in the horizontal direction. The root $\epsilon_4$ is perpendicular to all roots of $A_5$. In the incompletely resolved $E_6$, a projection is taken in the $\epsilon_4$ direction that combines the two {\bf 20}'s of $A_5$ into a single half hypermultiplet.} \label{f:collapse} \end{figure} In the 6D theory this gives a half-hypermultiplet in the $\Lambda^3$ representation. It was also noted in \cite{Katz-Vafa} that the appearance of a quadratic parameter like $\betah = s^2$ in the defining equation of the singularity is associated with a pair of half-hypermultiplets in certain situations; this observation matches well with the appearance of a single half-hypermultiplet when $s^2$ is replaced with $s$. It is interesting to understand how the intersection properties of the $C$ curves from $A_5$ are realized in the incomplete $E_6$ resolution. The detailed expansion of the $C$'s in terms of the roots $\epsilon$ of the $E_6$ is given in \eq{eq:c-e}, and shown graphically in Figure~\ref{f:a5-e6x}. In the incompletely resolved $E_6$, we can consider the geometry of the slice at $s = 0$ containing blown-up $\P^1$'s associated with all roots of $E_6$ other than $\epsilon_4$. In this slice, there is a $\Z_2$ singularity at the intersection point of $\epsilon_1, \epsilon_3^\pm$. This point contributes only 1/2 to the Euler characteristic of spaces in which it is contained. Each of the curves intersecting the point consequently has a self-intersection given by $\epsilon_1 \cdot \epsilon_1 = -3/2$, {\it etc.} and the intersection between each pair of curves meeting at this point is 1/2, so $\epsilon_1 \cdot \epsilon_3^\pm = 1/2$, {\it etc.}. We see that the linear combinations of these singular curves spanned by the $C$'s preserve the correct intersection rules for $A_5$; for example, \begin{equation} C_1^+ \cdot C_3 = (\epsilon_1 + \epsilon_3^+) \cdot (\epsilon_3^+ + \epsilon_3^-) = -3/2 + 3 (1/2) = 0, \end{equation} \begin{equation} C_3 \cdot C_3 = (\epsilon_3^+ + \epsilon_3^-)\cdot (\epsilon_3^+ + \epsilon_3^-)= 2 (-3/2) + 2 (1/2) = -2 \,. \end{equation} We expect that there are many types of codimension two singularities that can appear in F-theory with analogous descriptions in terms of incomplete resolutions. In Section \ref{sec:6D} we describe global 6D F-theory models in which this kind of incomplete resolution appears explicitly, affecting the matter content of the theory. \subsection{Matter on a singular 7-brane} \label{sec:local-singular} For the fundamental and multi-index antisymmetric representations of $SU(N)$ that we have studied so far, the associated F-theory geometry involves the enhancement at a codimension two locus of an $A_{N -1}$ singularity living on a 7-brane that itself is wrapped on a smooth codimension one locus in the base. 
Other kinds of representations can arise when the 7-branes are wrapped on a singular divisor. In \cite{Sadov}, Sadov gave some evidence suggesting that a two-index symmetric ( ${\tiny\yng(2)}$ or Sym${}^2$) representation of $SU(N)$ should arise when the gauge group is realized on a codimension one space having an ordinary double point singularity. The connection between matter representations and geometric singularities can be made much more general through an analysis of anomaly cancellation in 6D theories. We describe the general connection that we expect between matter representations and singularities in the 7-brane configuration, and then describe in some detail the case of the ordinary double point singularity from this point of view. \subsubsection{Representation theory and singularities} It was found in \cite{0} that associated with each representation of $SU(N)$ there is a numerical factor $g_R$ that corresponds in a 6D F-theory model to a contribution to the genus of the divisor associated with the $SU(N)$ local factor. The analysis in \cite{0} was based on compactifications on $\P^2$, but the result can be stated more generally. From the anomaly cancellation conditions for an F-theory construction on an arbitrary base (see \cite{WT-TASI} for a review), the genus $g$ of the curve $C$ on which the 7-branes associated with any $SU(N)$ local factor of the gauge group are wrapped can be written in terms of a sum over contributions from each matter representation \begin{equation} 2g-2 = (K + C) \cdot C = 2\sum_{R}x_R g_R -2 \,, \label{eq:Euler} \end{equation} where $x_R$ is the number of matter hypermultiplets in representation $R$, and the genus contribution of a given representation is defined to be \begin{equation} g_R = \frac{1}{12}\left(2 C_R + B_R -A_R \right) \,. \label{eq:genus} \end{equation} In this formula, $A_R, B_R, C_R$ are group theory coefficients defined through \begin{align} \tr_R F^2 & = A_R \tr F^2 \\ \tr_R F^4 & = B_R \tr F^4+C_R (\tr F^2)^2 \,, \end{align} where $\tr_R$ denotes the trace in representation $R$, while $\tr$ without a subscript denotes the trace in the fundamental representation. A table of group theory coefficients and genera for some simple $SU(N)$ representations appears in \cite{0}; we reproduce here the part of the table describing representations with nonzero genus in Table~\ref{t:coefficients}. All single-column antisymmetric representations ($\Lambda^2$, $\Lambda^3$, \ldots) have vanishing genus. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Rep. & Dimension & $A_R$ & $B_R$ & $C_R$ & $g_R$\\ \hline Adjoint & $N^2-1$ & $2N$ & $2N$ & 6 & 1\\ ${\tiny\yng(2)}$ & $ \frac{N(N+1)}{2} $ & $ N+2 $ & $ N+8 $ & 3 & 1\\ ${\tiny\yng(2,1)}$ & $ \frac{N(N^2-1)}{3} $& $ N^2-3 $& $ N^2-27 $& $ 6N $ & $N -2 $\\ ${\tiny\yng(3)}$ & $ \frac{N(N+1)(N+2)}{6} $ &$ \frac{N^2+5N+6}{2} $&$ \frac{N^2+17N+54}{2} $ & $ 3N+12 $ & $N + 4$\\ ${\tiny\yng(2,2)}$ & $ \frac{N^2(N+1)(N-1)}{12} $ & $\frac{N(N-2)(N+2)}{3} $ & $\frac{N(N^2-58)}{3}$ & $3(N^2+2)$ & $\frac{(N-1)(N-2)}{2}$ \\[0.07in] \hline \end{tabular} \caption{Values of the group-theoretic coefficients $A_R, B_R, C_R$, dimension and genus for some representations of $SU(N)$, $N \geq 4$.} \label{t:coefficients} \end{table} While we have described so far the relationship between representation theory and geometry of singularities only for $SU(N)$ local factors and representations, a similar result holds for any simple local factor of the gauge group with the inclusion of appropriate numerical factors depending on the normalization of the trace. For a general gauge group the genus contribution is \begin{equation} g_R = \frac{1}{12}\left(2 \lambda^2 C_R + \lambda B_R -\lambda A_R \right) \,, \label{eq:genus-general} \end{equation} where $\lambda$ is a group-dependent normalization factor, with $\lambda_{SU(N)} = 1, \lambda_{SO(N)} = 2$, etc.; values of $\lambda$ for all simple groups are listed in \cite{tensors}. We now review some elementary features of plane curves (complex curves in $\P^2$) that clarify the connection of the group theory structure just described with singularities in F-theory. For more background on the basic algebraic geometry of plane curves see, e.g., \cite{Perrin}. In algebraic geometry, a smooth plane curve is characterized by two invariants: the degree $b$ of the polynomial defining the curve in $\P^2$, and the genus $g$ of the curve, which is related to the Euler characteristic of the curve through the usual relation \begin{equation} \chi = 2-2g \,. \end{equation} For a smooth plane curve, the degree and genus are related by \begin{equation} 2g = (b -1) (b-2). \label{eq:arithmetic-genus} \end{equation} Thus, lines ($b = 1$) and conics ($b = 2$) have genus 0, smooth cubics ($b = 3$) are elliptic curves of genus 1, curves of degree 4 have genus 3, etc. Using inhomogeneous coordinates $t, s$ on $\P^2$, a curve $f (t, s) = 0$ is singular at any point of the curve where \begin{equation} \partial f/\partial t = \partial f/\partial s = 0. \end{equation} For example, the cubic \begin{equation} f (t, s) = t^3 + s^3-st = 0 \label{eq:singular-cubic} \end{equation} is singular at the point $(t, s) = (0, 0)$, and locally takes the form $st = 0$, describing two lines crossing at a point. For a singular curve, there are two distinct notions of genus that become relevant. The {\it arithmetic} genus is given by (\ref{eq:arithmetic-genus}) for any curve, singular or nonsingular. The {\it geometric} genus (which we denote by $p_g$) is the topological genus of a curve after all singularities have been appropriately smoothed. For example, the singularity in (\ref{eq:singular-cubic}) is known as an {\it ordinary double point} singularity, where two smooth branches of the curve cross at a point. This singularity can be removed by blowing up the origin to a $\P^1$, which separates the two branches, giving a curve of geometric genus 0.
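For the cubic (\ref{eq:singular-cubic}), for instance, (\ref{eq:arithmetic-genus}) gives arithmetic genus
\begin{equation}
g = \frac{(3-1)(3-2)}{2} = 1 \,,
\end{equation}
while the resolved curve has geometric genus $p_g = 0$; the single ordinary double point accounts for the difference.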
In general, the arithmetic and geometric genera of a plane curve $C$ with multiple singularities are related through \begin{equation} g = (b-1) (b-2)/2 = p_g + \sum_P \frac{m_P(m_P-1)}{2}, \label{eq:genus-relation} \end{equation} where the sum is over\footnote{There is an important subtlety here: after blowing up a singular point, there may still be singular points in the inverse image of the original point, and they must be blown up as well, {\em ad infinitum.} The sum in (\ref{eq:genus-relation}) must include these ``infinitely near'' points.} all singular points $P$ in $C$, and $m_P$ is the multiplicity of the singularity at $P$. The multiplicity of an ordinary singularity where $k$ branches of the curve cross at a common point is $k$. It is easy to see that deforming such a singularity leads to $k (k -1)/2$ ordinary double point singularities, each of which contributes one to the genus. More generally, the multiplicity of a singularity in a plane curve is given by the lowest power of a monomial appearing in the polynomial defining the curve in local coordinates around the singularity. For example, for degree 3 curves, in addition to the ordinary double point type of singularity encountered in (\ref{eq:singular-cubic}), a {\it cusp} (non-ordinary) double point singularity can arise at points like the origin in the cubic \begin{equation} f (t, s) = t^3-s^2 = 0 \,. \label{eq:cubic-cusp} \end{equation} The multiplicity of such a cusp singularity is 2; this cusp can be found as a degenerate limit of the class of cubics with ordinary double point singularities $t^3 + at^2 -s^2 = 0$ as $a \rightarrow 0$. Note that a curve of geometric genus 0 is a {\it rational} curve, meaning that the curve can be parameterized using rational functions. For example, (\ref{eq:cubic-cusp}) has arithmetic genus 1 but geometric genus 0, and can be parameterized as $t = a^2, s = a^3$. For higher degree curves, more exotic types of singularities can arise with higher intrinsic multiplicities. While an ordinary double point singularity is resolved by a single blow-up, as the singularity becomes more extreme, the point must be blown up more times to completely resolve the singularity. From (\ref{eq:genus-relation}), we see that the total arithmetic genus of a curve has a contribution from the geometric genus and also a contribution from the various singular points in the curve. We now return to the discussion of matter and singularities in the F-theory context. The genus appearing in \eq{eq:Euler} is the arithmetic genus. For a local factor of the gauge group associated with 7-branes on a smooth curve, there are $g$ matter fields in the adjoint representation, associated with $\P^1$'s in the resolved space that are free to move over the curve of genus $g$. Since the adjoint representation has $g_R = 1$, these non-localized adjoint matter fields saturate \eq{eq:genus-relation}; a gauge group realized on a smooth curve can thus only have local matter in the fundamental and multi-index antisymmetric representations. If the gauge group lives on a singular curve, however, the number of adjoint representations is given by $p_g$, with the type of singularities in the curve determining the types of additional matter that can arise. From (\ref{eq:genus-relation}), we expect that a matter representation $R$ will be associated with a localized singularity in $\sigma$ contributing $g_R$ to the genus. This gives a clear picture of how matter representations should be associated with singular divisor classes in F-theory. 
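As a simple check on this dictionary, evaluating (\ref{eq:genus}) for the adjoint (using Table~\ref{t:coefficients}) and for the fundamental representation (for which $A_R = B_R = 1$ and $C_R = 0$ by the definition of the trace normalization) gives
\begin{equation}
g_{\rm adj} = \frac{1}{12}\left( 2 \cdot 6 + 2N - 2N \right) = 1 \,, \qquad
g_{\rm fund} = \frac{1}{12}\left( 2 \cdot 0 + 1 - 1 \right) = 0 \,,
\end{equation}
and similarly $g_R = 0$ for the single-column antisymmetric representations, consistent with the fact that fundamental and multi-index antisymmetric matter can arise already on smooth divisors.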
For example, consider the symmetric (Sym${}^2$) representation of $SU(N)$. This representation has $g_{{\rm Sym}{}^2} = 1$ and should be associated with a singularity of $\sigma$ contributing 1 to the arithmetic genus. This matches Sadov's prediction that such matter should be associated with an ordinary double point in $\sigma$. We analyze this type of singularity in detail in the following section. As another example, consider the ``box'' ( ${\tiny\yng(2,2)}$ ) representation of $SU(4)$. From Table~\ref{t:coefficients}, we see that this representation has genus $g_R = 3$. Thus, we expect that it will be produced by a singularity of multiplicity 3 in the divisor locus carrying a stack of 7-branes in a 6D F-theory model on $\P^2$. In \cite{0} it was shown that this representation can arise in apparently consistent 6D supergravity models with $SU(4)$ gauge group and no tensor multiplets. We discuss this representation further in Section \ref{sec:box}. Although the preceding discussion was based on the analysis of 6D supergravity models, where anomaly cancellation strongly constrains the range of possible models, the connection between the group-theoretic genus contribution of a given matter representation and the corresponding singularity type in the F-theory picture should be independent of dimension. Thus, in particular, the same correspondence will relate matter in 4D supergravity models to localized codimension two singularity structures in F-theory compactifications on a Calabi-Yau fourfold, just as the Kodaira classification describes gauge groups based on codimension one singularities in all dimensions. \subsubsection{Ordinary double point singularities} \label{sec:double} We now consider the simplest situation where the locus $\sigma$ on which the 7-branes are wrapped itself becomes singular in a 6D F-theory model. This occurs when $\sigma$ contains an ordinary double point singularity, such as arises at the origin for the curve $u^3 + u^2 -v^2 = u^3 + (u + v) (u-v) = 0$. Locally, an ordinary double point singularity takes the form $st = 0$ in a local coordinate system; here this is the case with $s = u + v, t = u-v$.
When the two stacks of 7-branes are actually the same, corresponding to a self-intersection of $\sigma$, the resulting representation of $SU(N)$ is either an adjoint hypermultiplet, or a symmetric and an antisymmetric hypermultiplet ($\Lambda^2$ + Sym${}^2$) \begin{eqnarray} 1 + {\rm adj}: & & {\rm singlet} (1) + {\rm adjoint} (N^2 -1) \nonumber\\ & {\rm or} & \label{eq:crossing-options}\\ \Lambda^2 + {\rm Sym}^2: & & {\tiny\yng(1,1)} \;(N (N -1)/2)+ {\tiny\yng(2)}\;(N (N +1)/2)\,. \nonumber \end{eqnarray} To understand which of these representations is realized, and to connect with the general discussion of matter and singularities, it is helpful to go through the F-theory singularity analysis in a similar fashion to that done for the singularities analyzed above. For concreteness, we describe the self-intersection of a curve $\sigma$ carrying an $A_3$ singularity in a 6D model. To describe an $SU(4)$ gauge group on a singular divisor class $\sigma$, we can substitute $s\rightarrow 1, t \rightarrow \sigma$ in the equation (\ref{eq:a3}) for an $A_3$ singularity \begin{equation} \Phi = -y^2 + x^3 + x^2 -\sigma^2 x = 0 \,. \end{equation} For the local ordinary double point $\sigma = st$, we have \begin{equation} \Phi = -y^2 + x^3 + x^2 -s^2 t^2 x = 0 \,. \label{eq:ordinary-double} \end{equation} This defines a Calabi-Yau threefold that is singular along the lines $s = 0$ and $t = 0$ with an enhancement to $A_7$ at the point $s = t = 0$. We can resolve the singularity systematically by blowing up the curves along $t = 0$, $s = 0$ and at the origin. The details are described in Section \ref{sec:ordinary-double-appendix}. The result of this analysis is that the 3 $\P^1$'s giving the $A_3$ structure along each of the curves $s = 0, t = 0$ are embedded into two orthogonal $A_3$ subgroups of the $A_7$ Dynkin diagram. The embedding found from explicit singularity resolution is equivalent to the canonical embedding of $SU(4) \times SU(4) \subset SU(8)$ depicted in terms of Dynkin diagrams as \begin{center} \begin{picture}(200,50)(- 100,- 35) \put(0,0){\makebox(0,0){$\longrightarrow$}} \put(-100,0){\line( 1, 0){20}} \put(-40,0){\line( 1, 0){20}} \put(30,0){\line( 1, 0){60}} \multiput(- 100,0)(10,0){3}{\circle*{4}} \multiput(- 40,0)(10,0){3}{\circle*{4}} \multiput(30,0)(10,0){3}{\circle*{4}} \multiput(70,0)(10,0){3}{\circle*{4}} \put(60,0){\circle{4}} \put(-60,0){\makebox(0,0){$\times$}} \put(-60,-20){\makebox(0,0){$A_3\times A_3$}} \put(60,-20){\makebox(0,0){$A_7$}} \end{picture} \end{center} We can then decompose the adjoint of $A_7$ as usual to get the matter content. If the two $SU(4)$ gauge groups were independent, this would give one of the bifundamental representations \eq{eq:bifundamental}. When the two $A_3$ singularity loci are connected, however, which of the matter representations \eq{eq:crossing-options} are realized depends upon the geometry of $\sigma$. Locally, the 3 $\P^1$'s associated with simple roots of $A_3$ on one branch can be labeled with 1, 2, 3. 
When this labeling is followed around $\sigma$ onto the second branch, we have an embedding of a single $A_3$ through \begin{equation} A_3 \rightarrow A_3 \times A_3 \rightarrow A_7 \end{equation} that can be realized through either of the two possibilities \begin{center} \begin{picture}(200,70)(- 30,- 35) \put(-10,0){\makebox(0,0){$\longrightarrow$}} \put(-70,0){\line( 1, 0){20}} \put(30,0){\line( 1, 0){60}} \put(140,0){\line( 1, 0){60}} \multiput(- 70,0)(10,0){3}{\circle*{4}} \multiput(30,0)(10,0){3}{\circle*{4}} \multiput(70,0)(10,0){3}{\circle*{4}} \multiput(140,0)(10,0){3}{\circle*{4}} \multiput(180,0)(10,0){3}{\circle*{4}} \put(60,0){\circle{4}} \put(170,0){\circle{4}} \put(-60,-20){\makebox(0,0){$A_3$}} \put(60,-20){\makebox(0,0){$A_7$}} \put(170,-20){\makebox(0,0){$A_7$}} \put(-70,10){\makebox(0,0){1}} \put(-60,10){\makebox(0,0){2}} \put(-50,10){\makebox(0,0){3}} \put(30,10){\makebox(0,0){1}} \put(40,10){\makebox(0,0){2}} \put(50,10){\makebox(0,0){3}} \put(70,10){\makebox(0,0){1}} \put(80,10){\makebox(0,0){2}} \put(90,10){\makebox(0,0){3}} \put(140,10){\makebox(0,0){1}} \put(150,10){\makebox(0,0){2}} \put(160,10){\makebox(0,0){3}} \put(200,10){\makebox(0,0){1}} \put(190,10){\makebox(0,0){2}} \put(180,10){\makebox(0,0){3}} \put(115,0){\makebox(0,0){or}} \end{picture} \end{center} These two possibilities correspond to the two matter options \eq{eq:crossing-options}. Thus, we see that an ordinary double point singularity in $\sigma$ can either be associated with an adjoint plus a singlet, or a symmetric and an antisymmetric matter multiplet. In each case, the contribution through \eq{eq:genus} to the genus is 1, so either possibility is consistent with the general picture of the association between geometry and group theory. Which of the possible representations is realized, however, is determined by nonlocal features of the geometry. To see which of the embeddings from the diagram above is realized it is necessary to track the labeling of the $A_3$ roots around a closed path in $\sigma$ connecting the two branches that intersect. The information in the orientation of the ordering of these roots amounts to an additional $\Z_2$ of information contained in the structure of any brane. It is interesting to note that this degree of freedom is present in any configuration of type II D-branes, although it is not generally discussed. The explicit singularity resolution computed in Section \ref{sec:ordinary-double-appendix} is depicted graphically in Figure~\ref{f:a3-a7}, for a particular choice of relative orientation of the $A_3$ curves in the two branes. \begin{figure} \begin{center} \begin{picture}(200,150)(- 100,- 70) \put(0,0){\makebox(0,0){\includegraphics[width=9cm]{a7.eps}}} \end{picture} \end{center} \caption[x]{\footnotesize Embedding of $A_3 \rightarrow A_7$ at an ordinary double point singularity, giving a two-index symmetric representation as well as antisymmetric representation ( ${\tiny\yng(2)} + {\tiny\yng(1,1)}$ ).} \label{f:a3-a7} \end{figure} For this choice of orientation, the representation given is the symmetric + antisymmetric representation. This can be seen by computing the Dynkin weight of the curve $\gamma_1^-+ \gamma_2^-+ \gamma_3^-+ \gamma_4+\gamma_3^+$. The only nonzero inner product of this curve with $C_1^\pm, C_2$ is \begin{equation} (\gamma_1^-+ \gamma_2^-+ \gamma_3^-+ \gamma_4+\gamma_3^+) \cdot C_1^-= -2 \,. 
\label{eq:} \end{equation} The resulting Dynkin weight $[-2, 0, 0]$ occurs in the (conjugate of the) symmetric representation, and not in the adjoint, so this embedding corresponds to the matter representation ( ${\tiny\yng(2)} + {\tiny\yng(1,1)}$ ). One simple class of global models that contain ordinary double point singularities is the set of models where an $A_N$ singularity is wrapped on a divisor class $\sigma$ that has a self-intersection, but which can be continuously deformed into a smooth divisor class without changing the gauge group of the theory. In this case, the self-intersection is expected to generically be of the type that gives an adjoint representation, since there is no reason to expect the type of matter to change discontinuously as the divisor becomes singular. We give an example of such a configuration in Section \ref{sec:6D}. In some cases, however, a more complicated global Weierstrass model can give self-intersections that produce symmetric and antisymmetric matter fields. The presentation of an explicit example of such a configuration is left for the future work. \subsection{Group theory of novel matter representations} The range of possible codimension two singularities in F-theory is very large, and provides an inviting territory for exploration. One guide in exploring this space is the set of matter representations that may be expected to arise from F-theory singularity structures based on analysis of low-energy theories. As we discuss in more detail in Section \ref{sec:6D}, a systematic analysis of $SU(N)$ matter representations in 6D supergravity theories without tensor multiplets in \cite{0} identified a number of representations that may arise in F-theory constructions. In this section we discuss the group theory aspect of how two of these representations may arise. Identification of local and global models for singularity structures realizing these matter representations is left for the future. The two representations we focus on here are the 4-index antisymmetric ($\Lambda^4$) representation of $SU(8)$ with Young diagram ${\tiny\yng(1,1,1,1)}$ and the ``box'' representation of $SU(4)$ with Young diagram ${\tiny\yng(2,2)}$ \subsubsection{4-index antisymmetric representation of $SU(8)$} To realize a representation $R$ of a group $G$ through the Katz--Vafa analysis, $G$ must embed into a group $G'$ of one rank higher, and the representation $R$ must appear in the decomposition of the adjoint of $G'$ under $G\subset G'$. At first appearance, this seems difficult for the $\Lambda^4$ representation of $SU(8)$. There is a natural embedding of $A_7$ into $E_8$ associated with the obvious embedding of Dynkin diagrams \begin{center} \begin{picture}(200,50)(- 100,- 35) \put(0,0){\makebox(0,0){$\longrightarrow$}} \put(-90,-5){\line( 1, 0){60}} \put(30,-5){\line( 1, 0){60}} \multiput(- 90,-5)(10,0){7}{\circle*{4}} \multiput(30,-5)(10,0){7}{\circle*{4}} \put(50,5){\circle{4}} \multiput(50,-5)(0,4){3}{\line(0,1){2}} \put(-60,-20){\makebox(0,0){$A_7$}} \put(60,-20){\makebox(0,0){$E_8$}} \end{picture} \end{center} Under this embedding, the adjoint of $E_8$ decomposes as \cite{Slansky} \begin{equation} {\bf 248} ({\rm Adj}) \rightarrow {\bf 63} ({\rm Adj}) +{\bf 1} + {\bf 28} \left( \;{\tiny\yng(1,1)}\; \right) + {\bf \bar{28}} + \left[ {\bf 8} \left( \;{\tiny\yng(1)}\; \right) + {\bf \bar{8}} + {\bf 56} \left( \;{\tiny\yng(1,1,1)}\;\right) + {\bf \bar{56}} \right]\,. 
\label{eq:78-1} \end{equation} The appearance of the $\Lambda^3$ representation, corresponding to Dynkin indices $[0, 0, 1, 0, 0, 0, 0]$ is clear from the geometry associated with the Dynkin diagram embedding depicted above. The $\P^1$ associated with the extra (empty) circle in the $E_8$ Dynkin diagram has inner product 1 with the $\P^1$ associated with the third root in the $A_7$ diagram, giving the Dynkin weight $[0, 0, 1, 0, 0, 0, 0]$. In addition to the above embedding, however, there is a second, inequivalent embedding of $A_7 \subset E_8$ \cite{Dynkin,Oguiso-Shioda}. This alternate embedding can be understood through a sequence of maximal subgroup embeddings $A_7 \subset E_7 \subset E_8$. The form of the embedding $A_7 \subset E_7$ can be understood through extended Dynkin diagrams. In general \cite{BDS}, a maximal subgroup $H \subset G$ of simple Lie algebras is associated with an embedding of the Dynkin diagram of $H$ into the {\it extended} Dynkin diagram of $G$. The embedding of $A_7$ into the extended Dynkin diagram of $E_7$ is depicted as (denoting the extra node extending the $E_7$ with an ``x'') \begin{center} \begin{picture}(200,50)(- 100,- 35) \put(0,0){\makebox(0,0){$\longrightarrow$}} \put(-90,-5){\line( 1, 0){60}} \put(30,-5){\line( 1, 0){60}} \multiput(- 90,-5)(10,0){7}{\circle*{4}} \multiput(30,-5)(10,0){7}{\circle*{4}} \put(60,5){\circle{4}} \multiput(60,-5)(0,4){3}{\line(0,1){2}} \put(-60,-20){\makebox(0,0){$A_7$}} \put(60,-20){\makebox(0,0){$\hat{E}_7$}} \put(30,-12){\makebox(0,0){x}} \end{picture} \end{center} The diagram suggests that the decomposition of the $E_7$ adjoint will include a state in an $A_7$ representation with a Dynkin weight of $[0, 0, 0, 1, 0, 0, 0]$, which is the desired highest weight of the $\Lambda^4$ representation. Indeed, under this embedding of $A_7 \rightarrow E_7$ the adjoint decomposes as \begin{equation} {\bf 133} ({\rm Adj}) \rightarrow {\bf 63} ({\rm Adj}) + {\bf 70}\left( \;{\tiny\yng(1,1,1,1)}\; \right) \,. \end{equation} Using this embedding to further embed $A_7 \subset E_7 \subset E_8$ gives the decomposition of the adjoint of $E_8$ \begin{equation} {\bf 248} ({\rm Adj}) \rightarrow {\bf 63} ({\rm Adj}) +{\bf 1} + {\bf 28} \left( \;{\tiny\yng(1,1)}\; \right) + {\bf \bar{28}} + \left[{\bf 1} + {\bf \bar{1}} +{\bf 28} + {\bf \bar{28}} + {\bf 70} \left( \;{\tiny\yng(1,1,1,1)}\; \right) \right]\,. \label{eq:78-2} \end{equation} So under this embedding, the adjoint of $E_8$ decomposes in a way that gives the $\Lambda^4$ representation of $A_7$. Note that the representation content in brackets in \eq{eq:78-1}, \eq{eq:78-2}, giving the difference in content between the two decompositions, is given in the two cases by \begin{equation} \Lambda^1 + \Lambda^3 + \Lambda^5 + \Lambda^7 \;\; \; {\rm vs.} \; \; \; \Lambda^0 + \Lambda^2 + \Lambda^4 + \Lambda^6 + \Lambda^8 \,. \end{equation} This is precisely the difference in representation content between the two spinor representations of $SO(16)$ when decomposed under $SU(8)$. As we discuss further in Section \ref{sec:6D}, we anticipate that a further analysis of global 6D models with $SU(8)$ gauge group will provide Weierstrass forms that locally contain singularities giving rise to matter in the $\Lambda^4$ representation of $SU(8)$. The group theory structure just described is one natural way in which this may occur. \subsubsection{Box representation of $SU(4)$} \label{sec:box} Now let us consider the ``box'' representation of $SU(4)$. 
In terms of Dynkin indices, this representation is \begin{equation} {\tiny\yng(2,2)}({\bf 20'}) \;\leftrightarrow \; [0, 2, 0] \,. \label{eq:box} \end{equation} This representation does not appear in the decomposition of the adjoint of any rank one gauge group enhancement. Since the genus \eq{eq:genus} of the representation is nonzero, we expect this representation to arise from a Weierstrass singularity where the curve on the base supporting the singularity locus is itself singular. As in the ordinary double point giving matter in the adjoint and symmetric representations of $SU(N)$ through the embedding $A_{N -1}\rightarrow A_{N -1} \times A_{N -1} \rightarrow A_{2 N -1}$, we look for a similar multiple embedding that may give rise to the representation \eq{eq:box}. As in the previous example, such an embedding can be realized through an embedding of $A_3 \times A_3$ into the extended Dynkin diagram for $D_6$ \begin{center} \begin{picture}(200,50)(- 100,- 35) \put(0,0){\makebox(0,0){$\longrightarrow$}} \put(-60,-5){\line( 1, 0){20}} \put(40,-5){\line( 1, 0){40}} \multiput(- 60,-5)(10,0){3}{\circle*{4}} \multiput(40,-5)(10,0){2}{\circle*{4}} \multiput(70,-5)(10,0){2}{\circle*{4}} \put(50,5){\circle*{4}} \put(70,5){\circle*{4}} \put(60,-5){\circle{4}} \put(50,-5){\line(0,1){10}} \put(70,-5){\line(0,1){10}} \put(-50,-20){\makebox(0,0){$A_3$}} \put(60,-20){\makebox(0,0){$\hat{D}_6$}} \put(40,-12){\makebox(0,0){x}} \end{picture} \end{center} Under this embedding of $A_3 \rightarrow A_3 \times A_3 \rightarrow D_6$ the adjoint of $D_6$ decomposes as \begin{equation} {\bf 66} = 3 \times {\bf 15} + {\bf 1} + {\bf 20'} \,. \end{equation} The $D_6$ group can be further embedded in $D_7$ or $E_7$ giving a rank one enhancement. In either case, the box representation appears in the decomposition of the adjoint. As discussed further in Section \ref{sec:6D}, we expect that a further analysis of 6D Weierstrass models for $A_3$ on a singular curve of arithmetic genus 3 on $\P^2$ will give a global model with a local singularity type giving matter in the box representation of $SU(4)$; the group theory mechanism just described provides one natural way in which this may occur. \section{Systematic analysis of Weierstrass models} \label{sec:global} We now perform a systematic analysis of Weierstrass models for $SU(N)$ gauge groups on a general F-theory base. Thus we are looking for an $A_{N -1}$ ($I_N$) Kodaira type singularity on a codimension one space described by a divisor $\{\sigma=0\}$. We assume in the analysis that $\{\sigma=0\}$ is nonsingular, so that any ring of local functions $R_\sigma$ on a sufficiently small open subset of $\{\sigma=0\}$ is a unique factorization domain (UFD). We comment on extensions of this analysis to singular divisors $\{\sigma=0\}$ at various points in the discussion. The idea of this analysis is to use the Kodaira conditions on the form of the singularity to determine the form of the coefficients $f, g$ in the Weierstrass form for a fairly general class of models. A related analysis is carried out in \cite{Morrison-sn} using the Tate form for various gauge groups\footnote{The results of \cite{Morrison-sn} are complementary to the ones derived here, and include the case of $SU(N)$ for large $N$.}. Here we primarily use Weierstrass form since the counting of degrees of freedom is clearest in this language. 
The goal of the analysis here is to follow various branches of the conditions on the discriminant realized by a type $I_N$ singularity to identify models with matter content associated with various local singularity types such as those identified in the previous section. We begin with the Weierstrass form \begin{equation} y^2 = x^3 + fx + g \,. \label{eq:Weierstrass-global} \end{equation} Here $f \in -4K$, $g \in -6K$ where $K$ is the canonical class on the base $B$. We expand \begin{equation} f= \sum_{i}f_i \sigma^i \,,\qquad g= \sum_i g_i \sigma^i \,. \label{eq:fg-expansion} \end{equation} where as above, $\{\sigma=0\}$ is the codimension one locus on the base $B$ carrying the $A_{N -1}$ singularity. For this general analysis we leave the dimension of the base and degree of $\sigma$ unfixed. In the following sections we specialize to the cases where the base is a complex surface (6D space-time theories) or complex 3-fold (4D space-time theories). In most situations we can consider $f_i, g_i$ as polynomials in local coordinates $s, t$ (or $s, t, u$ for 4D theories) on the base, with degrees that will depend on the particular situation. If we are working with an elliptically-fibered Calabi-Yau $d$-fold over the base ${\P}^{d-1}$, and the degree of $\sigma$ is $b$ then the degrees of $f_i, g_i$ are \begin{equation} [f_i] = 4d-bi, \;\;\;\;\;[g_i] = 6d-bi \,. \end{equation} For example, for 6D theories with no tensor multiplets (the case studied from the low-energy point of view in \cite{0}), the dimension is $d = 3$, and $B =\P^2$, so for an $SU(N)$ group associated with a singularity on a divisor class of degree $b = 1$, $f_0$ is a polynomial in $s$ of degree 12, $f_1$ has degree 11, etc. Note that since $f, g$ are really sections of line bundles, they can generally only be treated as functions locally. The discriminant describing the total singularity locus is \begin{equation} \Delta = 4 f^3 + 27 g^2\,. \label{eq:discriminant-2} \end{equation} We can expand the discriminant in powers of $\sigma$, \begin{equation} \Delta = \sum_{i}\Delta_i \sigma^i \,. \end{equation} For an $I_N$ singularity type we must have $\Delta_i = 0$ for $i < N$. For each power of $\sigma$, the condition that $\Delta_i$ vanish imposes various algebraic conditions on the coefficients $f_i, g_i$. These conditions can be derived by a straightforward algebraic analysis (some of which also appears in \cite{Morrison-sn}). For local functions $\Phi$ and $\Psi$ defined on an open set of the base $B$, we use the notation \begin{equation}\label{eq:notation} \Phi \sim \Psi \end{equation} to indicate that $\Phi$ and $\Psi$ have identical restrictions to $\{\sigma=0\}$, i.e., $\Phi|_{\{\sigma=0\}}=\Psi|_{\{\sigma=0\}}$. Equivalently, $\Phi$ and $\Psi$ differ by a multiple of $\sigma$, i.e., $\Phi = \Psi + {\cal O}(\sigma)$. We proceed by systematically imposing the condition that the discriminant (\ref{eq:discriminant-2}) vanish at each order in a fashion compatible with an $A_{N -1}$ singularity on $\{\sigma=0\}$. \vspace*{0.05in} \noindent $\Delta_0 = 0$: The leading term in $\Delta$ is \begin{equation} \Delta_0 = 4 f_0^3 + 27 g_0^2 \,. 
\end{equation} For this to vanish in a fashion compatible with an $A_{N -1}$ singularity, we must be able to locally express $f_0, g_0$ in terms of some $\phiorig$ by \begin{eqnarray} f_0 & \sim & -\frac1{48} \phiorig^2 \label{eq:dorig}\\ g_0 & \sim & \frac1{864} \phiorig^3 \nonumber \end{eqnarray} Moreover, when $N\ge3$, $\phiorig$ has a square root (locally), and we can rewrite this condition as \begin{eqnarray} f_0 & \sim & -\frac1{48} \phit^4 \label{eq:d0}\\ g_0 & \sim & \frac1{864} \phit^6 \nonumber \end{eqnarray} The condition that $f_0|_{\{\sigma=0\}} = x^2$ for some $x \in R_\sigma$ follows from the condition that the ring of local functions on sufficiently small open subsets of the variety defined by $\{\sigma=0\}$ is a unique factorization domain (so each factor of $g_0|_{\{\sigma=0\}}$ must appear an even number of times in $(f_0|_{\{\sigma=0\}})^3$); the local function $x$ on the divisor $\{\sigma=0\}$ can then be ``lifted'' to a function $X$ on an open subset of $B$ such that $X|_{\{\sigma=0\}}=x$. Note that the existence of $X$ is definitely only a local property in general: \cite{Morrison-sn} has an explicit example that shows that it may not be possible to find $X$ (or its square root when that is appropriate) globally. The condition that $X$ is itself a square modulo $\sigma$ follows from the ``split'' form of the singularity in the Tate algorithm \cite{Tate,Bershadsky-all} for determining the Kodaira singularity type from the Weierstrass form\footnote{When $N=2$, there is no split form and no monodromy, and we cannot conclude that $X$ is a square modulo $\sigma$.}. This condition can be seen explicitly in the $A_3$ and $A_5$ examples described in Sections \ref{sec:a3} and \ref{sec:a5}. In those cases, $f_0$ is proportional to $s^4$, modulo $\sigma$. If $s^4$ in these situations were replaced with $s^2$, the exceptional curve in the first chart would be defined by $y^2 = sx^2$, and would not factorize into $C_1^\pm$, so that the resulting gauge group would be the symplectic group $Sp(N)$ instead of $SU(N)$. The numerical coefficients in (\ref{eq:dorig}) and (\ref{eq:d0}) are chosen to simplify parts of the algebra in other places and to match with other papers including \cite{Morrison-sn} \footnote{For reference, we give a dictionary relating the variables used here to analogous variables used in \cite{Morrison-sn}. The variables $(\phit,\phif,\phisa,\dots,\phie,\phis,\dots)$ in this paper correspond to the variables $(s_0,u_1,u_2,\dots,t_1,t_2,\dots)$ in \cite{Morrison-sn}. Note that $\mu$ from \cite{Morrison-sn} must be set equal to $1$ to match this paper.}. \vspace*{0.05in} We phrase the arguments of this section in terms of quantities such as $\phi_0$ that are in general only locally defined functions. However, in some key examples (such as the ones at the beginning of Section~\ref{sec:6D}) it is known that these quantities are actually globally defined on $B$. In those cases, we are easily able to count parameters in the construction by considering the degrees of these globally defined objects. \noindent $\Delta_1 = 0$: In light of (\ref{eq:dorig}), we now replace $f_0$ and $g_0$ by $-\phiorig^2/48$ and $\phiorig^3/864$, respectively. This may produce additional contributions to $f, g$ at higher order in $\sigma$, since for example the original $f_0$ was only equal to $-\phiorig^2/48$ up to terms of order $\sigma$. Such additional contributions can be absorbed by redefining the coefficients $f_i$ and $g_i$ from (\ref{eq:fg-expansion}) accordingly. 
The coefficient of the leading term in the discriminant then becomes \begin{equation} \Delta_1 =\frac{1}{192} \left(12 \phiorig^3 g_1 + \phiorig^4 f_1 \right) \,. \end{equation} This vanishes exactly when \begin{eqnarray} g_1 & = & -\frac1{12}\phiorig f_1 \,. \label{eq:d1} \end{eqnarray} A similar term must be removed from $g_i$ at each order (this can be seen just from the terms $g_0g_i, f_0^2 f_i$ in the discriminant; a more general explanation for this structure is described at the end of this section), so we generally define \begin{equation} \tilde{g_i} = g_i +\frac1{12} \phiorig f_i \label{eq:gt} \end{equation} \noindent $\Delta_2 = 0$: After imposing (\ref{eq:d0}) (as a substitution) and (\ref{eq:d1}), the coefficient of the next term in the discriminant is \begin{equation} \Delta_2 =\frac{1}{16} \left(\phiorig^3 \tilde{g}_2 - \phiorig^2 f_1^2 \right) \,. \label{eq:delta2} \end{equation} At this stage, we also impose the condition $\phiorig=\phit^2$ to guarantee $SU(N)$ gauge symmetry, so that the next term in the discriminant becomes \begin{equation} \Delta_2 =\frac{1}{16} \left(\phit^6 \tilde{g}_2 - \phit^4 f_1^2 \right) \,. \label{eq:delta2bis} \end{equation} For (\ref{eq:delta2bis}) to vanish in our UFD, $f_1|_{\{\sigma=0\}}$ must be divisible by $\phit|_{\{\sigma=0\}}$, so there is a locally defined function $\psi_1$ such that \begin{equation} f_1 \sim \frac12 \phit \phie .\label{eq:d2} \end{equation} We replace $f_1$ by $\frac12\phit\phie$ and adjust coefficients accordingly; we can then solve $\Delta_2=0$ for $\tilde{g}_2$, obtaining: \begin{equation} \tilde{g}_2 = \frac14 \phie^2. \label{eq:d2bis} \end{equation} (Note from (\ref{eq:gt}) that this last equation is equivalent to $g_2=\frac14\phie^2-\frac1{12}\phit^2f_2$.) \vspace*{0.1in} \noindent ${\bf SU(4)}$ ($\Delta_3 = 0$): At the next order in $\sigma$ the coefficient in the discriminant is \begin{equation} \Delta_3 =\frac{1}{16} \left(\phit^6 \tilde{g}_3-\phit^3 \phie^3 - \phit^5 \phie f_2 \right) \,. \label{eq:d3} \end{equation} We see that in order for $\Delta_3$ to vanish along $\{\sigma=0\}$, $\phie|_{\{\sigma=0\}}$ must be divisible by $\phit|_{\{\sigma= 0\}}$. Thus, there must exist a locally defined function $\phi_1$ such that \begin{equation} \phie \sim -\frac13 \phit \phif . \end{equation} We replace $\phie$ by $-\frac13\phit\phif$ and adjust coefficients accordingly; we can then solve $\Delta_3=0$ for $\tilde{g}_3$, obtaining: \begin{equation} \tilde{g}_3 = -\frac13\phif f_2 -\frac1{27}\phif^3 \label{eq:g2} \end{equation} (This last equation is equivalent to $g_3 = -\frac1{12}\phit^2f_3-\frac13\phif f_2 -\frac1{27}\phif^3$.) Again, a term such as the first term on the RHS of (\ref{eq:g2}) will arise for each $\tilde{g}_i$, so we define \begin{equation} \hat{g_i} = \tilde{g}_i + \frac13\phif f_{i-1} \end{equation} and the latter condition (\ref{eq:g2}) is just $\hat{g}_3 = -\phif^3/27$. It is also convenient to define $\hat{f}_2 = f_2+\frac13\phif^2$. We have now arranged a theory with an $SU(4)$ local factor in the gauge group. The construction is completely general, given our assumption about $\{\sigma=0\}$ being nonsingular. 
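The chain of substitutions just described can also be verified with a short computer algebra calculation. The following is a minimal sketch (our own bookkeeping, using {\tt sympy}; the symbols {\tt phi0}, {\tt phi1}, {\tt f2}, {\tt f3} stand for the locally defined functions $\phit, \phif, f_2, f_3$, treated here simply as free parameters) checking that the conditions imposed through $\Delta_3 = 0$ force the discriminant to vanish to order $\sigma^4$:
\begin{verbatim}
# Symbolic cross-check: with the substitutions leading to the SU(4) model,
# Delta = 4 f^3 + 27 g^2 vanishes through order sigma^3.
import sympy as sp

sigma, phi0, phi1, f2, f3 = sp.symbols('sigma phi0 phi1 f2 f3')

f = (-phi0**4/48
     - phi0**2*phi1/6*sigma              # f_1 = (1/2) phi0 psi1 with psi1 = -(1/3) phi0 phi1
     + f2*sigma**2 + f3*sigma**3)
g = (phi0**6/864
     + phi0**4*phi1/72*sigma             # g_1 = -(1/12) phi0^2 f_1
     + (phi0**2*phi1**2/36 - phi0**2*f2/12)*sigma**2
     + (-phi0**2*f3/12 - phi1*f2/3 - phi1**3/27)*sigma**3)

Delta = sp.expand(4*f**3 + 27*g**2)
assert all(Delta.coeff(sigma, i) == 0 for i in range(4))   # I_4 along sigma = 0
\end{verbatim}
The same bookkeeping, continued order by order, reproduces the explicit expansions given below.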
Making the substitutions above, adjusting coefficients, and expanding $f, g,$ and $\Delta$ we have \begin{align} f & = -\frac1{48} \phit^4 -\frac16 \phit^2 \phif \sigma + f_2 \sigma^2 + f_3\sigma^3+f_4\sigma^4+{\cal O}(\sigma^5) \label{eq:4-f}\\ g & = \frac1{864} \phit^6+\frac1{72} \phit^4 \phif \sigma + (\frac1{36} \phit^2 \phif^2 -\frac1{12}\phit^2 f_2) \sigma^2 + ( -\frac1{12}\phit^2 f_3 -\frac13 \phif f_2 -\frac1{27} \phif^3) \sigma^3 \label{eq:4-g}\\ &\quad + {g}_4 \sigma^4 + {\cal O} (\sigma^5) \notag \\ \Delta & = \frac{1}{16}\phit^4(- \hat{f}_2^2 + \phit^2 \hat{g}_4) \sigma^4 +{\cal O} (\sigma^5) \label{eq:d4} \end{align} We see that at a generic point on the curve $\{\sigma=0\}$ the singularity type is $I_4$, with vanishing degrees of $f, g, \Delta$ of 0, 0, $4$, corresponding to an $A_3$ singularity giving a $SU(4)$ gauge group. At the roots of $\phit$, the vanishing degrees become 2, 3, 6, corresponding to a $D_4$ singularity, giving a two-index antisymmetric ($\Lambda^2$) matter representation. The remaining part of the leading component of the discriminant, $\tilde{\Delta}_4 = \Delta_4/\phit^4 = (- \hat{f}_2^2 + \phit^2 \hat{g}_4)/16$, is of degree $8d -4b$. For generic choices of the coefficients of the other functions $\phif, f_2, \ldots$, the roots of $\tilde{\Delta}_4$ will correspond to an enhancement to $A_4$, giving matter in the fundamental representation of $SU(4)$. For non-generic choices of the functions $f_2, \phif$, there can be enhanced singularities. In particular if $f_2$ and $\phit$ share a root the degree of vanishing of $f$ is enhanced to 3. The following table shows the possibilities for enhanced singularities \begin{center} \begin{tabular}{| c | c | c | c | c | c | c|} \hline Label & Root & ${f}$ & ${g}$ & ${\Delta}$ & Singularity & G/Rep.\\ \hline $4_0$ & generic & 0 & 0 & 4 & $A_3$ & $SU(4)$\\ \hline &&&&&&\\[-6pt] $4_a$ & $ \tilde{\Delta}_4 = 0$ & 0 & 0 & 5 & $A_4$ & {\tiny\yng(1)}\\ $4_b$ & $ \phit = 0$ & 2 & 3 & 6 & $D_4$ & ${\tiny\yng(1,1)}$ ($\Lambda^2$)\\[4pt] \hline &&&&&&\\[-5pt] $4_c$ & $ \phit = 0, f_2 = 0$ & 3 & 3 & 6 & $D_4$ & ${\tiny\yng(1,1)}$\\[8pt] $4_d$ & $ \phit = f_2 = \phif = 0$ & 3 & 4 & 8 & $E_6$ & $\left[ \;{\tiny\yng(1,1)} + 2 \; {\tiny\yng(1)}\;\right]$\\[8pt] \hline \end{tabular} \end{center} The explicit local resolution of singularity type $4_b$ with $A_3$ enhancement to $D_4$ is that described in Section \ref{sec:a3}, with details in Section \ref{sec:a3-appendix} of the Appendix. Replacing $\phit \rightarrow 2s, f_2 \rightarrow -1,$ and for simplicity $\phif \rightarrow 0$ (which does not affect the singularity), $f, g$ from \eq{eq:4-f} and \eq{eq:4-g} take precisely the forms \eq{eq:a3-fg} used in that analysis. In the last case ($4_d$) a more exotic singularity appears but no new matter representations arise. The brackets in the table indicate that we have not explicitly resolved the singularity, but the matter content is uniquely determined by the 6D anomaly cancellation conditions, as we discuss in the following section. \vspace*{0.1in} \noindent ${\bf SU(5)}$ ($\Delta_4 = 0$): The vanishing of the leading term in (\ref{eq:d4}) requires that $\hat{f}_2|_{\{\sigma=0\}}$ be divisible by $\phit|_{\{\sigma=0\}}$. Thus, in this case there exists a locally defined function $\phis$ such that \begin{equation} \hat{f}_2 \sim \frac12 \phit \phis . 
\end{equation} We replace $\hat{f}_2$ by $\frac12\phit\phis$ and adjust coefficients accordingly; we can then solve $\Delta_4=0$ for $\hat{g}_4$, obtaining: \begin{equation} \hat{g}_4 = \frac{1}{4} \phis^2 \,. \end{equation} (In other words, $f_2$ has been replaced by $\frac12 \phit \phis-\frac13 \phif^2$ and $g_4= \frac{1}{4} \phis^2 - \frac1{12} \phit^2f_4 - \frac13 \phif f_3$.) We have now arranged a theory with an $SU(5)$ local factor in the gauge group (again completely general, assuming $\{\sigma=0\}$ is nonsingular). Expanding $f, g,$ and $\Delta$ we have \begin{align} f & = -\frac1{48} \phit^4 -\frac16 \phit^2 \phif \sigma + (\frac12 \phit \phis-\frac13 \phif^2) \sigma^2 + f_3\sigma^3+f_4\sigma^4+f_5\sigma^5+{\cal O}(\sigma^6) \label{eq:5-f}\\ g & = \frac1{864} \phit^6+\frac1{72} \phit^4 \phif \sigma + (\frac1{18} \phit^2 \phif^2 -\frac1{24}\phit^3 \phis) \sigma^2 \label{eq:5-g}\\ & \quad+ ( -\frac1{12}\phit^2 f_3 -\frac16 \phit \phif \phis + \frac{2}{27} \phif^3 ) \sigma^3 \nonumber\\ &\quad + (\frac{1}{4} \phis^2 - \frac1{12} \phit^2f_4 - \frac13 \phif f_3) \sigma^4 + g_5\sigma^5+{\cal O} (\sigma^6) \notag \\ \Delta & = \frac1{16}\phit^4( \phit^2 \hat{g}_5-\phit\phis f_3+\phif\phis^2) \sigma^5 +{\cal O} (\sigma^6) \label{eq:d5} \end{align} The range of possible singularities is similar to that encountered in the $SU(4)$ case above. At the roots of $\phit$ the singularity type is enhanced to $D_5$, and the roots of the remaining $\tilde{\Delta}_5 = \Delta_5/\phit^4$ give $A_5$ singularities. There are also various enhanced singularities for non-generic configurations, but no new matter representations are possible. We again summarize the possible singularity types in the following table \begin{center} \begin{tabular}{| c | c | c | c | c | c | c|} \hline Label & Root & ${f}$ & ${g}$ & ${\Delta}$ & Singularity & G/Rep.\\ \hline $5_0$ & generic & 0 & 0 & 5 & $A_4$ & $SU(5)$\\ \hline &&&&&&\\[-8pt] $5_a$ & $ \tilde{\Delta}_5 = 0$ & 0 & 0 & 6 & $A_5$ & {\tiny\yng(1)}\\ $5_b$ & $ \phit = 0$ & 2 & 3 & 7 & $D_5$ & ${\tiny\yng(1,1)}$ ($\Lambda^2$)\\[4pt] \hline &&&&&&\\[-6pt] $5_c$ & $ \phit = \phif = 0$ & 3 & 4 & 8 & $ E_6$ & $\left[ \;{\tiny\yng(1,1)} + {\tiny\yng(1)} \;\right]$\\[6pt] $5_d$ & $ \phit = \phis = 0$ & 2 & 3 & 8 & $D_6$ & $\left[ \;{\tiny\yng(1,1)} + {\tiny\yng(1)}\;\right]$\\[6pt] $5_e$ & $ \phit = \phif = \phis = 0$ & 3 & 5 & 9 & $E_7$ & $\left[ \;{\tiny\yng(1,1)} + 2\;{\tiny\yng(1)}\;\right] $\\[6pt] \hline \end{tabular} \end{center} \vspace*{0.1in} \noindent ${\bf SU(6)}$ ($\Delta_5 = 0$): The analysis becomes more interesting at the next order. Using the above conditions the leading order term in the discriminant is \begin{equation} \Delta_5 = \frac1{16}\phit^4 (\phif \phis^2 - \phit \phis f_3 + \phit^2 \hat{g}_5) \,. \label{eq:de5} \end{equation} From this it follows that each root of $\phit|_{\{\sigma=0\}}$ must also be a root of either $\phif|_{\{\sigma=0\}}$ or $\phis|_{\{\sigma=0\}}$. We can find locally defined functions $\alpha$ and $\beta$ such that \begin{equation} \phit \sim \alpha \beta \,, \label{eq:pt-ab} \end{equation} where $\alpha|_{\{\sigma=0\}}$ is the greatest common divisor of $\phit|_{\{\sigma=0\}}$ and $ \phis|_{\{\sigma=0\}}$. There must then also be locally defined functions $\phisa$ and $\phifb$ such that \begin{eqnarray} \phis & \sim & -\frac13\alpha \phisa \label{eq:psf-ab}\\ \phif & \sim & \beta \phifb \,. \label{eq:pf} \end{eqnarray} Note that by construction, $\beta|_{\{\sigma=0\}}$ and $\phisa|_{\{\sigma=0\}}$ are relatively prime.
We make all of the corresponding substitutions and adjust coefficients; then (\ref{eq:de5}) becomes: \begin{equation} \label{eq:de5bis} \Delta_5 = \frac1{48}\alpha^6\beta^5\left(\phisa (f_3+\frac13\phifb\phisa) + 3\beta \hat{g}_5 \right) \,. \end{equation} In order for this to vanish we then must have $(f_3+\frac13\phifb\phisa)|_{\{\sigma=0\}}$ divisible by $-3\beta|_{\{\sigma=0\}}$ and $\hat{g}_5|_{\{\sigma=0\}}$ divisible by $\phisa|_{\{\sigma=0\}}$, with identical quotients. That is, there must exist a locally defined function $\lambda$ such that \begin{eqnarray} f_3 & \sim & -\frac13\phifb \phisa -3 \beta\lambda \label{eq:f3e}\\ \hat{g}_5 & \sim & \phisa \lambda \,. \end{eqnarray} The second relation can also be written as \begin{equation} \begin{split} g_5 &\sim -\frac1{12} \phit^2f_5 -\frac13\phif f_4 + \phisa \lambda\\ &\sim -\frac1{12} \alpha^2\beta^2f_5 -\frac13\beta\phifb f_4 + \phisa \lambda \end{split} \end{equation} The possible singularities are now \begin{center} \begin{tabular}{| c | c | c | c | c | c | c|} \hline Label & Root & ${f}$ & ${g}$ & ${\Delta}$ & Singularity & G/Rep.\\ \hline $6_0$ & generic & 0 & 0 & 6 & $A_5$ & $SU(6)$\\ \hline &&&&&&\\[-8pt] $6_a$ & $ \tilde{\Delta}_6 = 0$ & 0 & 0 & 7 & $A_6$ & {\tiny\yng(1)}\\[6pt] $6_b$ & $ \alpha = 0$ & 2 & 3 & 8 & $D_6$ & ${\tiny\yng(1,1)}$ ($\Lambda^2$)\\[4pt] \hline &&&&&&\\[-6pt] $6_c$ & $ \beta = 0$ & 3 & 4 & 8 & $E_6$ & $\frac{1}{2} \; {\tiny\yng(1,1,1)}\;$ ($\Lambda^3$)\\[8pt] $6_d$ & $ \alpha = \beta = 0$ & 3 & 5 & 9 & $E_7$ & $\left[\frac{1}{2}\;{\tiny\yng(1,1,1)} + {\tiny\yng(1,1)}\;\right]$\\[8pt] $6_e$ & $ \beta = \phifb = 0$ & 4 & 4 & 8 & $E_6$ & $\left[\frac{1}{2} \;{\tiny\yng(1,1,1)} \;\right]$\\[8pt] $6_f$ & $ \alpha = \phifb = 0$ & 3 & 5 & 9 & $E_7$ & $\left[\left(\frac{1}{2} \;{\tiny\yng(1,1,1)} + {\tiny\yng(1)} \;\right) / \; {\tiny\yng(1,1)}\;\right]$\\[8pt] \hline \end{tabular} \end{center} We see now the appearance of a 3-index antisymmetric matter field. The singularity types $6_b$ and $6_c$ are precisely the enhancements of $A_5$ to $D_6$ and $E_6$ analyzed locally in Section \ref{sec:a5}, with details in Section \ref{sec:a5-appendix} of the Appendix. To relate \eq{eq:5-f}, \eq{eq:5-g} to the local forms there we use \eq{eq:pt-ab}, \eq{eq:psf-ab} and make the replacements \begin{equation} (6_b): \; \alpha \rightarrow s, \beta \rightarrow 2, \phisa \rightarrow -6, \phifb \rightarrow 3/2, \lambda \rightarrow 0, f_4 \rightarrow 0, \end{equation} \begin{equation} (6_c): \; \alpha \rightarrow 1, \beta \rightarrow 2 s, \phisa \rightarrow -6, \phifb \rightarrow 3/2, \lambda \rightarrow 0, f_4 \rightarrow 0 \,. \label{eq:6c-replacement} \end{equation} The replacement \eq{eq:6c-replacement} gives \eq{eq:phi-a5-e6} with $\rho = s$. This produces a half-hypermultiplet in the $\Lambda^3$ representation. Two coincident roots of $\beta$ give $\rho = s^2$, for a full hypermultiplet in the $\Lambda^3$ representation, as discussed in Section \ref{sec:a5}. It is interesting to note that the $6_b$ and $6_c$ singularities with $D_6$ and $E_6$ enhancements are connected. If we consider a $6_c$ branch with $\beta = 0$, we can continuously deform the coefficients of the Weierstrass form so that the root of $\beta$ coincides with a root of $\phisa$. At this point, the root of $\phisa$ divides $\phit$, so in the decomposition \eq{eq:pt-ab}, \eq{eq:psf-ab} the simultaneous root of $\beta, \phisa$ becomes a root of $\alpha, \phifb$, giving a singularity of type $6_f$.
The root of $\alpha$ can then be deformed independently of $\phifb$. In six dimensions, this deformation transforms a combination of a half hypermultiplet in the $\Lambda^3$ representation and a hypermultiplet in the fundamental representation into a single hypermultiplet in the $\Lambda^2$ representation. This novel phase transition is clear from the F-theory description but does not have a simple description in the low-energy theory in terms of Higgsing. We describe an explicit example of a transition of this kind in a specific 6D theory in the following section. Note that the intermediate state in this transition associated with a singularity of type $6_f$ involves a local enhancement $A_5 \subset E_7$ with rank increase of more than one. This kind of transition will be discussed further elsewhere. \vspace*{0.1in} \noindent ${\bf SU(7)}$ ($\Delta_6 = 0$): At order 7, it becomes more difficult to identify the general Weierstrass form. Imposing the conditions above, the 6th order term in the discriminant is \begin{equation} \Delta_6 =\frac{1}{16}\alpha^4 \beta^3 \left[ -\frac19 \beta \left( \phifb \phisa - 9\beta\lambda \right)^2 +\alpha^2 \left( \frac1{27} \phisa^3 + \frac13\beta^2 \phisa f_4 + \beta^3 \hat{g}_6\right) \right] \label{eq:d6} \end{equation} We do not have a completely general form for the structure needed to make this term vanish. But there are two special cases in which we can carry out the analysis and guarantee the vanishing of (\ref{eq:d6}) \vspace*{0.05in} \noindent {\bf Case 7A} \begin{eqnarray} \beta & = & 1\\ \lambda & = & \frac19\phifb \phisa -\frac16 \ola \alpha \label{eq:lambda}\\ \hat{g}_6 & = & -\frac1{27}\phisa^3 + \frac14\ola^2 -\frac13 \phisa f_4 \,. \end{eqnarray} In this case the local singularities can appear as in the following table \begin{center} \begin{tabular}{| c | c | c | c | c | c | c|} \hline Label & Root & ${f}$ & ${g}$ & ${\Delta}$ & Singularity & G/Rep.\\ \hline $7_0$ & generic & 0 & 0 & 7 & $A_6$ & $SU(7)$\\ \hline &&&&&&\\[-6pt] $7_a$ & $ \tilde{\Delta}_7 = 0$ & 0 & 0 & 8 & $A_7$ & {\tiny\yng(1)}\\[4pt] $7_b$ & $ \alpha = 0$ & 2 & 3 & 9 & $D_7$ & ${\tiny\yng(1,1)}$ ($\Lambda^2$)\\[4pt] \hline &&&&&&\\[-8pt] $7_c$ & $ \alpha = \phifb = 0$ & 4 & 6 & 12 & $\star $ & $\Delta T$\\[2pt] \hline \end{tabular} \end{center} In case $7_c$ the singularity of degrees 4, 6, 12 goes outside the Kodaira list. To resolve the singularity, the codimension two singularity locus on the base must be blown up. In six-dimensional gravity theories this leads to the appearance of an additional tensor multiplet. \vspace*{0.05in} \noindent {\bf Case 7B} In general, for (\ref{eq:d6}) to vanish we must have $ (\alpha|_{\{\sigma=0\}})^2$ divisible by $\beta|_{\{\sigma=0\}}$. We can then write \begin{equation} \beta \sim \gamma \delta^2 \label{eq:b-gd} \end{equation} for appropriate locally defined functions $\gamma$ and $\delta$ such that $(\gamma \delta)|_{\{\sigma=0\}}$ is the GCD of $\alpha|_{\{\sigma=0\}}$ and $ \beta|_{\{\sigma=0\}}$. We must then have $ \alpha|_{\{\sigma=0\}}$ divisible by $(\gamma|_{\{\sigma=0\}})^2 $ and furthermore we can decompose \begin{eqnarray} \alpha & \sim & \gamma^2 \delta \alphabeta\\ \phifb & \sim & \gamma \zeta \,. \end{eqnarray} for appropriate locally defined functions $\alphabeta$ and $\zeta$. We can arrange for (\ref{eq:d6}) to vanish (case B) if we make the assumption that $\gamma = \alphabeta =1$, so that $\beta \sim \alpha^2$. 
In this case the singularities that can arise are \begin{center} \begin{tabular}{| c | c | c | c | c | c | c|} \hline Label & Root & ${f}$ & ${g}$ & ${\Delta}$ & Singularity & G/Rep.\\ \hline $7'_0$ & generic & 0 & 0 & 7 & $A_6$ & $SU(7)$\\ \hline &&&&&&\\[-6pt] $7'_a$ & $ \tilde{\Delta}_7 = 0$ & 0 & 0 & 8 & $A_7$ & {\tiny\yng(1)}\\[8pt] $7'_b$ & $ \alpha = \beta = 0$ & 3 & 5 & 9 & $E_7$ & $\left[\;{\tiny\yng(1,1,1)} (\Lambda^3)\;\right]$\\[8pt] \hline &&&&&&\\[-8pt] $7'_c$ & $ \alpha = \phifb = 0$ & 4 & 6 & 13 & $\star $ & $\Delta T$\\[2pt] \hline \end{tabular} \end{center} The singularities at $\alpha = \beta$ give rise to 3-index antisymmetric matter representations of SU(7). \vspace*{0.1in} \noindent ${\bf SU(8)}$ {\bf and beyond} A complete treatment of all possible branches of the Weierstrass model for $A_7$ and beyond would be very involved algebraically. We do not attempt a complete analysis but describe the generic structure of Weierstrass models giving codimension one $A_{N -1}$ singularities for $N \geq 8$. To proceed further we need to get $\Delta_7$ to vanish. This cannot be done in case B above since vanishing at order 8 given the conditions imposed in that case would give a common root to $\beta$ and $\phisa$, which is not possible since $\beta$ and $\phisa$ are relatively prime. We can, however proceed to arbitrary order in $N$ under the generic assumption that $\beta = 1$. This corresponds to case 7A above. Note that all of the representations beyond the fundamental and $\Lambda^2$ representations arose from situations where $\beta \neq 1$. First, we note that the condition $\beta = 1$ simplifies the algebra at $SU(6)$ and beyond. This condition sets $\alpha = \phit$ and replaces \eq{eq:psf-ab} with \begin{equation} \phis \sim-\frac13\phit \phisa \,, \end{equation} and fixes $\phifb = \phif$. Furthermore, \eq{eq:f3e} and \eq{eq:lambda} become \begin{equation} f_3 \sim\frac12\phit\ola -\frac23\phif\phisa \,. \end{equation} We can proceed with the generic $A_{N -1}$ model by simply following this pattern. To get an $SU(8)$ model we substitute \begin{equation} \ola \sim-\frac13\pz \phi_3 \,, \end{equation} and solve for $g_7$ \,. To get an $SU(9)$ model we substitute \begin{equation} f_4 \sim \frac12\phit \psi_4-\frac23\phif \phi_3-\frac13 \phisa^2 \,, \label{eq:f4} \end{equation} and solve for $g_8$ \,. To get an $SU(10)$ model we substitute \begin{equation} \psi_4 \sim-\frac13\pz \phi_4 \,, \label{eq:s4} \end{equation} and solve for $g_9$, etc. A simple way of expressing the conditions being imposed is that the leading terms in the expansions of $f, g$ can be written in the form \begin{eqnarray} f & = & -\frac13 \Phi^2 + {\cal O}(\sigma^k) \label{eq:expansion-f-even}\\ g + \frac13 \Phi f & = & -\frac1{27} \Phi^3 + {\cal O}(\sigma^{2k}) \label{eq:expansion-g-even}\, \end{eqnarray} for $SU(2k)$, and \begin{eqnarray} f & = & -\frac13 \Phi^2 + \frac12\sigma^k\phi_0\psi_k+ {\cal O}(\sigma^{k+1}) \label{eq:expansion-f-odd}\\ g + \frac13 \Phi f & = & -\frac1{27} \Phi^3 + \frac14\sigma^{2k}\psi_k^2+ {\cal O}(\sigma^{2k+1}) \label{eq:expansion-g-odd}\, \end{eqnarray} for $SU(2k+1)$, where \begin{equation} \Phi=\frac{1}{4} \phi_0^2 +\phi_1 \sigma +\phi_2\sigma^2+\phi_3\sigma^3 + \cdots + \phi_{k-1}\sigma^{k-1} \,. \end{equation} (This is the same form used in the inductive argument given in \cite{Morrison-sn} for $SU(N)$ with large $N$.) In this way, we can find a systematic solution out to the point where there are no more $g_i$'s for which to solve. 
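The claim that this ansatz forces the discriminant to vanish to the required order can also be checked symbolically. A minimal sketch (ours, shown for $k = 3$, i.e.\ $SU(6)$, with {\tt F} and {\tt G} standing for the unconstrained corrections at orders $\sigma^k$ and $\sigma^{2k}$, and {\tt phi0}, {\tt phi1}, {\tt phi2} for placeholder coefficients) is:
\begin{verbatim}
# Check that f = -Phi^2/3 + O(sigma^k) and g + Phi f/3 = -Phi^3/27 + O(sigma^{2k})
# force Delta = 4 f^3 + 27 g^2 = O(sigma^{2k}); illustrated here for k = 3.
import sympy as sp

sigma, F, G = sp.symbols('sigma F G')
phi = sp.symbols('phi0:3')              # placeholder coefficients phi_0, phi_1, phi_2
k = 3
Phi = phi[0]**2/4 + sum(phi[i]*sigma**i for i in range(1, k))
f = -Phi**2/3 + F*sigma**k
g = -Phi*f/3 - Phi**3/27 + G*sigma**(2*k)

Delta = sp.expand(4*f**3 + 27*g**2)
assert all(Delta.coeff(sigma, i) == 0 for i in range(2*k))
print(sp.factor(Delta.coeff(sigma, 2*k)))   # proportional to phi0^4 (phi0^2 G - F^2)
\end{verbatim}
The leading nonvanishing coefficient has the same structure as the $\sigma^4$ term in \eq{eq:d4}.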
In the following section we describe the details of how the analysis continues beyond this point for a specific class of 6D models. The numerical factors here, and the form of the equation, can be explained by converting our Weierstrass equation (\ref{eq:Weierstrass-global}) to Tate form. Let $\Upsilon=\phi_1 +\phi_2\sigma+\phi_3\sigma^2 + \cdots + \phi_{k-1}\sigma^{k-2}$, so that $\Phi = \frac14\phi_0^2+\sigma\Upsilon$. For $SU(2k)$, we convert to Tate form using the coordinate change \begin{eqnarray} x & = & X+\frac13\Phi \\ y & = & Y + \frac12\phi_0X \end{eqnarray} giving an equation of the form \begin{equation} \label{eq:Tate-even} Y^2 + \phi_0 XY = X^3 + \sigma\Upsilon X^2 + \sigma^k FX + \sigma^{2k} G \,. \end{equation} Similarly, for $SU(2k+1)$, we convert to Tate form using the coordinate change \begin{eqnarray} x &=& X + \frac13 \Phi \\ y &=& Y + \frac12\phi_0X + \frac12 \sigma^k\psi_k \end{eqnarray} giving an equation of the form \begin{equation} \label{eq:Tate-odd} Y^2 + \phi_0 XY + \sigma^k\psi_k Y = X^3 + \sigma\Upsilon X^2 + \sigma^{k+1}FX + \sigma^{2k+1}G\,. \end{equation} \section{6D supergravity without tensor fields} \label{sec:6D} We now use the general analysis of the previous section to describe a particular class of 6D supergravity theories arising from F-theory. We consider the class of 6D models with no tensor multiplets ($T = 0$) and a gauge group having a nonabelian local factor $SU(N)$. These theories correspond to F-theory constructions on the base $\P^2$. In \cite{0} theories of this kind were analyzed from the point of view of the anomaly cancellation conditions in the low-energy theory. A complete list of all possible matter representations for each local gauge group factor $SU(N)$ was constructed for theories with $T = 0$. From the point of view of the low-energy theory, each local $SU(N)$ factor is associated with an integer $b \in\Z_+$ appearing in the anomaly polynomial and topological $BF^2$ couplings of the theory. For theories with an F-theory realization, $b$ is the degree of the divisor on $\P^2$ carrying the $SU(N)$ local factor. For small values of $b$, anomaly analysis of the 6D supergravity theories shows that $N$ can range up to 24, and the set of possible matter representations is strongly constrained. For larger values of $b$ the range of possible values of $N$ is more restricted, but a wider range of possible matter representations is compatible with the anomaly conditions. We now recall from \cite{0} the possible matter content for models with gauge group $SU(N)$ and small values of $b$, and consider the explicit F-theory constructions of such models. 
\subsection{$SU(N)$ on curves of degree $b = 1$} From anomaly cancellation alone, the complete set of possible matter representations for an $SU(N)$ local factor with $b =1$ in a 6D ${\cal N} = 1$ supergravity theory is constrained to the following combinations of matter fields (note that for $N = 3$ the antisymmetric $\Lambda^2$ representation is really the (conjugate of the) fundamental representation, while for $N = 2$ the fields denoted by this representation are really uncharged): \begin{center} \begin{tabular}{| c | c | c | c |c |} \multicolumn{5}{c}{$b = 1$ $SU(N)$ matter possibilities}\\ \hline & & & &\\[-9pt] $N$ & {\tiny\yng(1)}&${\tiny\yng(1,1)}$& ${\tiny\yng(1,1,1)}$ & {\rm neutral}\\[8pt] \hline $N \leq 24$ & $24 -N$ & 3 & 0 & $273-N (45-N)/2-1$\\ 6 & $18 + k$ & $ 3-k $ & $k/2 , k \leq 3 $&$ 155-k $\\ 7 & 22 & 0 & 1&132\\ \hline \end{tabular} \end{center} We now show that global F-theory models can be realized for theories with $SU(N)$ gauge group and all these possible matter representations through the general construction described in the previous section, except the special cases $N = 21, 23$. Furthermore, the number of neutral scalar fields in each of these models can be identified with the number of unfixed parameters in the Weierstrass description of each model when $N < 18$. For $b =1$ on $\P^2$, the structure of the general Weierstrass model is fairly simple. Taking the locus of the $SU(N)$ to be the zero locus of the function $\sigma = t$ (in appropriate local coordinates $s, t$ on $\P^2$), the functions $f_i, i = 0, \ldots, 12$ in the expansion of $f$ \eq{eq:fg-expansion} are polynomials in $s$ of degree $12-i$, and the functions $g_i, i = 0, \ldots, 18$ are polynomials in $s$ of degree $18-i$. The functions $f_i$ contain $1, \ldots, 13$ coefficients for a total of 91 coefficients while the $g_i$ contain 190 coefficients. The total number of coefficients appearing in the Weierstrass polynomials $f, g$ is therefore 281. There is a redundancy in this description under general linear transformations of homogeneous coordinates $s, t, u$ on the F-theory base $\P^2$, removing 9 parameters. The total number of independent parameters in the Weierstrass model is therefore $281-9 = 272$. There is one further scalar appearing in the low-energy 6D theory associated with the overall K\"ahler modulus of the base, so the number of scalar fields associated with the Weierstrass moduli is in precise agreement with the gravitational anomaly condition, which states that \begin{equation} H-V = 273 \,, \label{eq:hv} \end{equation} where $H, V$ are the total numbers of hypermultiplets and vector multiplets in the theory. Now we apply the methods of Section~\ref{sec:global}. Since $\{\sigma=0\}$ is a line in $\mathbb P^2$, all of the functions $\phi_0$, $\phi_1$, \dots, etc.\ that occur in the analysis are in fact homogeneous polynomials on $\mathbb P^2$ whose degrees are easily determined\footnote{For curves of higher degree, particularly ones of higher genus, this statement may fail to hold and the global analysis is more subtle.}. Fixing the first few orders of the discriminant to vanish, (\ref{eq:d0}) fixes the $13 + 19 = 32$ coefficients in $f_0, g_0$ in terms of the four coefficients of $\phit$, thus removing 28 coefficients. When the singularity locus is fixed at $t = 0$ this removes two of the redundancies in the linear transformation parameters.
Fixing $\Delta_1 = 0$ through (\ref{eq:d1}) removes another 18 degrees of freedom by fixing $g_1$ in terms of $\phit, f_1$, leaving $272-46 + 2 = 228$ degrees of freedom in the Weierstrass coefficients\footnote{Note that the count we are performing here only applies to $SU(N)$, $N\ge3$, since the Weierstrass coefficients for $SU(2)$ involve $\phiorig$ rather than $\phit$. In the case of $SU(2)$, we use (\ref{eq:dorig}) to fix the $32$ coefficients in $f_0, g_0$ in terms of the seven coefficients of $\phiorig$, removing only 25 coefficients this time; (\ref{eq:d1}) still removes another 18 degrees of freedom, leaving $231$ degrees of freedom in the Weierstrass coefficients. The ``extra'' 3 degrees of freedom are accounted for by the fact that the $\Lambda^2$ representation is trivial, so the three copies of $\Lambda^2$ provide 3 additional neutral fields. We thank Volker Braun for discussion on this point.}. Fixing $\Delta_2 = 0$ through (\ref{eq:d2}) and (\ref{eq:d2bis}) removes another 20, bringing the number of unfixed parameters in the $SU(3)$ model to 208. This corresponds precisely to the number of scalar fields (209) in the $N = 3$ model from the table above. Fixing $\Delta_3 = 0$ through (\ref{eq:d3}) removes another 19 parameters, leaving 189 degrees of freedom in the Weierstrass coefficients, again in agreement with the 190 expected scalar fields for the $SU(4)$ model above. Note that the degrees of freedom in the Weierstrass coefficients are complex degrees of freedom, while the hypermultiplets parameterize a quaternionic K\"ahler moduli space and hence contain four real scalars. There are thus additional real degrees of freedom not captured by the Weierstrass coefficients; these are associated with degrees of freedom on the branes \cite{Bershadsky-jps}, and may be related to the T-brane construction of \cite{T-branes}. We now consider in more detail the matter content in the set of theories with $SU(4)$ gauge group. The 189 complex-dimensional moduli space of Weierstrass models with $SU(4)$ realized on a curve on $\P^2$ of degree $b = 1$ describes a family of generic models with 3 matter fields in the two-index antisymmetric ($\Lambda^2$) representation. This set of F-theory models satisfies the conditions (\ref{eq:d0}-\ref{eq:d4}), and for a generic model in this class there are three distinct roots of $\phit$ giving singularity type $4_b$. For each such root, we can choose a local coordinate $s$ so that $s = 0$ at the root, and we can expand \begin{equation} \phit = 2 s + {\cal O} (s^2) \,. \label{eq:a3-phi-expansion} \end{equation} Plugging (\ref{eq:a3-phi-expansion}) into (\ref{eq:4-f}), (\ref{eq:4-g}), and choosing $\phif = 0, f_2 = -1$ gives precisely the expressions (\ref{eq:a3-fg}) for $f, g$ used in the $A_3 \rightarrow D_4$ singularity analysis of Section \ref{sec:a3}. For any $\phif, f_2$ an equivalent analysis will give a local singularity enhancement from $A_3$ to $D_4$ giving matter in the two-index antisymmetric ($\Lambda^2$) representation. Thus, these models all have 3 matter fields in the $\Lambda^2$ representation, in agreement with the generic class of models identified from the anomaly analysis. The discriminant locus $\Delta$ is divisible by $\phit^4$, and the remaining factor $\tilde{\Delta}_4$ is a degree 20 polynomial in $s$ and has 20 roots associated with singularities of type $4_a$ providing 20 fundamental representations, and completing the matter content of these theories. 
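As a quick arithmetic cross-check of this matter content against the gravitational anomaly condition \eq{eq:hv}, one can tally the hypermultiplets of the generic $b = 1$ models directly (a short script of our own; it simply encodes the first row of the $b=1$ table above):
\begin{verbatim}
# Generic b=1 SU(N) model: (24-N) fundamentals, 3 antisymmetrics,
# and 273 - N(45-N)/2 - 1 neutral hypermultiplets (first row of the table).
def h_minus_v(N):
    fund = (24 - N) * N                     # fundamental hypermultiplets
    anti = 3 * (N * (N - 1) // 2)           # two-index antisymmetric hypermultiplets
    neutral = 273 - N * (45 - N) // 2 - 1   # neutral hypermultiplets
    vectors = N**2 - 1                      # SU(N) vector multiplets
    return fund + anti + neutral - vectors

assert all(h_minus_v(N) == 273 for N in range(4, 25))
print(h_minus_v(4))   # 273, for the SU(4) model with 20 fundamentals discussed here
\end{verbatim}
In particular, the $SU(4)$ model with 20 fundamentals and 3 antisymmetric matter fields saturates $H - V = 273$ exactly.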
Though various non-generic singularities can be constructed by tuning some roots of the discriminant to coincide, such as the $E_6$ type singularity realized when $\phit = \phif = f_2 = 0$, the anomaly analysis guarantees that such singularities cannot change the total matter content of the theory as long as the gauge group remains $SU(4)$ and no singularity becomes bad enough to provide an extra tensor multiplet. Continuing to higher $N$, the top class of models in the table above is associated with generic singularities at the vanishing locus of $\phit$, with no additional singularity structures. As $N$ increases up to $N = 17$, at each step an additional $3 + 20-N$ degrees of freedom in the Weierstrass form are fixed, matching the decrease in uncharged scalar degrees of freedom in the low-energy theory. For small $N$ more restricted classes of Weierstrass coefficients reproduce the other models in the table. For $SU(5)$ the story is very similar to $SU(4)$. There is a 171-dimensional space of models with 3 $\Lambda^2$ matter hypermultiplets and 19 hypermultiplets in the fundamental representation. For $SU(6)$ the most generic model has $\phit = \alpha$, so there are three singularities of type $6_b$ giving $\Lambda^2$ representations and 18 fundamentals. In this case, however, there are now other possibilities. Up to 3 of the roots of $\phit$ can be in $\beta$, corresponding to singularities of type $6_c$, and giving half-hypermultiplets in the $\Lambda^3$ representation. This precisely reproduces the range of possible $SU(6)$ models in the table above. There are several interesting features of these models. First, consider the number of unfixed Weierstrass degrees of freedom in these configurations. From \eq{eq:psf-ab}, \eq{eq:pf} we see that the number of degrees of freedom in $\phis, \phif$ is reduced by 3 when fixing the $A_5$ singularity, independent of the distribution of roots between $\alpha$ and $\beta$. From \eq{eq:f3e}, however, we see that the number of degrees of freedom in $f_3$ ({\it i.e.}, in $\lambda$) is reduced by one for each root of $\beta$. Therefore, the dimension of the space of models with $k$ roots $\beta = 0$ is reduced by $k$ from that of the generic $SU(6)$ moduli space. This agrees with the numbers of neutral scalar fields listed in the table above for these models. When $\beta = s$, as discussed in Section \ref{sec:a5}, the $E_6$ singularity is incompletely resolved, giving a half-hypermultiplet in the $\Lambda^3$ representation. When two roots of $\beta$ coincide, however, we have $\beta = s^2$, giving a full $\Lambda^3$ hypermultiplet. A further interesting feature of the $SU(6)$ models is the possibility of a continuous phase transition between models with different numbers of $\Lambda^3$ representations. Consider a model with a (half-hypermultiplet) $\Lambda^3$ representation associated with a type $6_c$ singularity at a root $r$ of $\beta = 0$. Such a model will also have an $A_6$ at every root of $\phisa$. By tuning one parameter, a root of $\phisa$ and the root $r$ of $\beta$ can be made to coincide. But at this point, this is a common root of $\phit$ and $\phis$, and therefore from the definitions of $\alpha$ and $\beta$ becomes a root of $\alpha$ and $\phifb$, and also of $\lambda$, with $\alpha$ and $\lambda$ increasing in degree by one and $\beta$ decreasing in degree by one. At this point there is a singularity of type $6_f$, as discussed in Section \ref{sec:global}. 
From here, however, the roots of $\alpha, \lambda$ and $\phifb$ can be freely and independently varied. This phase transition thus has the effect of transforming matter between the representations \begin{equation} \frac{1}{2} \;{\tiny\yng(1,1,1)}+ {\tiny\yng(1)}\; \; \rightarrow \; {\tiny\yng(1,1)} \,. \end{equation} This is not a simple Higgsing transition, since the gauge group does not change. There is no obstruction to such a transition from anomalies, since the anomaly content of the matter representations is the same on both sides of the transition. We leave a further study of this type of continuous F-theory transition between different types of matter for future work. For $SU(7)$, we again have a generic class of models of the correct dimension with three $\Lambda^2$ representations. There is also a model of type 7B discussed in the previous section. Since $\phit$ has 3 roots, in the decomposition \eq{eq:b-gd}, $\alpha, \beta, \gamma, \delta$ can have respectively 1, 2, 0, 1 roots. There is a single singularity of type $7'_b$ in such models, associated with a single $\Lambda^3$ representation. We have thus reproduced all matter possibilities for $SU(N)$ models with $b = 1$. We return to the discussion of the generic class of models for $N \geq 8$. As discussed above, by tuning 3 parameters in an $f_i$ at each step through a relation like \eq{eq:f4} or \eq{eq:s4}, and $20-N$ parameters through $g_{N -1}$, we can continue to generate $A_{N -1}$ singularities up to a certain point. This continues up to $SU(17)$ without change, generating models with these groups having three $\Lambda^2$ matter representations and the correct number of degrees of freedom. The story changes slightly, however, at $SU(18)$. At this point, the equation analogous to \eq{eq:s4} would be $\psi_8 =-\frac13 \phit \phi_8$, imposing the condition that $\psi_8$ vanishes wherever $\phit$ vanishes. Since $\psi_8$ is linear, however, it must vanish. But $\psi_8$ only has 2 degrees of freedom, so the correspondence between the number of degrees of freedom in the Weierstrass model and the number of neutral scalar fields breaks down at this point. We return to this point below; nonetheless, we can continue to construct models with $SU(N)$ groups beyond this point by setting $\psi_8 = 0$. The next point where the analysis diverges from the general pattern is at $SU(20)$. At this point there is no further function $g_{19}$ to fix, and $\psi_9$ is a scalar that we can set to 0. This is enough to guarantee vanishing of the discriminant to order 20. At the next order, fixing $f_{10}$ to match \eq{eq:expansion-f-even} immediately guarantees vanishing to order 22, and fixing $f_{11}$ in an analogous fashion gives a discriminant of order 24. The correspondence with the number of neutral scalar fields becomes quite unclear in these last steps, since in the 6D theories the number of neutral scalars is expected to {\it increase} at 24 (with 20 neutral scalars for $SU(24)$, 19 for $SU(23)$ and $SU(22)$, and 20 for $SU(21)$).
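The neutral scalar counts quoted here follow from the same generic entry of the $b=1$ table; a one-line check (ours):
\begin{verbatim}
# neutral = 273 - N(45-N)/2 - 1 for the generic b=1 entry
print({N: 273 - N * (45 - N) // 2 - 1 for N in (21, 22, 23, 24)})
# {21: 20, 22: 19, 23: 19, 24: 20}
\end{verbatim}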
In any case, we can move directly to the end of the process just described and write a general form for a class of models with $SU(24)$ local gauge group and three antisymmetric matter representations \begin{eqnarray} f & = & -\frac13 \Phi^2 + \tilde{F}_{12}t^{12}\label{eq:exact-24}\\ g & = &-\frac13\Phi f -\frac1{27} \Phi^3 = \frac2{27}\Phi^3 -\frac13\Phi \tilde{F}_{12} t^{12}\nonumber\\ \Phi & = & \left[\frac14 \phi_0^2 + \phi_1 t +\phi_2t^2+\phi_3t^3 \cdots +\phi_6t^6 \right]\,, \nonumber \end{eqnarray} where $\phi_0$ is a polynomial in $s$ of degree $3$, $\phi_k$ is a polynomial in $s$ of degree $6-k$ for $k>0$ and $\tilde{F}_{12}$ is a constant (note that $G$ in \eq{eq:Tate-even} is set to vanish in this class of models). If $\tilde{F}_{12}$ is set to 0, then the model becomes everywhere singular. This is a good opportunity to comment on why our discussion has always been about ``local'' gauge groups. The geometry of the singular fibers in an elliptic fibration actually determines only the Lie algebra of the gauge theory, and there are typically several different compact Lie groups with the same Lie algebra. (In the mathematics literature, these groups are said to be ``locally isomorphic.'') The actual gauge group is determined by the torsion in the Mordell--Weil group of the elliptic fibration \cite{pioneG}. For the $SU(24)$ example just given, we show below that the Mordell--Weil group is in fact the group $\mathbb Z_2$ with two elements, and this implies that the true gauge group of the theory is $SU(24)/\mathbb Z_2$ rather than $SU(24)$. To see that this local $SU(24)$ example has a non-trivial Mordell--Weil group, it is convenient to rewrite the example in Tate form, as in (\ref{eq:Tate-even}). The result is \begin{equation} Y^2 + \phi_0 XY = X^3 + t \Upsilon X^2 + t^{12} \tilde{F}_{12}X \,. \end{equation} where $\Upsilon = \phi_1 +\phi_2t+\phi_3t^2 \cdots +\phi_6t^5 $. The elliptic curve contains the point $(X,Y)=(0,0)$ and has a vertical tangent there, for every value of $s$ and $t$. This implies by the usual geometric law of addition on elliptic curves \cite{silverman-tate} that $(0,0)$ is a point of order $2$ in the group law on each elliptic curve, so that the corresponding section defines a point of order two in the Mordell--Weil group. We have thus explicitly reproduced all the (local) $SU(N)$ models in the table above, except $SU(23)$ and $SU(21)$. It is possible that those two gauge groups can be realized through Higgsing of the $SU(24)$ model or specialization of models with lower gauge groups. It is also possible that the limitations we have encountered in constructing F-theory models with these groups correspond to physical constraints, perhaps associated with the discrete $\Z_2$ structure in the $SU(24)$ theory. Further analyses of these models, as well as a precise understanding of the counting of degrees of freedom for the space of models with large $N$ are left for future work. \vspace*{0.1in} \subsection{$b = 2$} We now consider the $T = 0$ 6D models with an $SU(N)$ gauge group and $b = 2$. 
For $b = 2$ the $SU(N)$ matter structures allowed by anomaly cancellation are \begin{center} \begin{tabular}{| c | c | c | c |c |} \multicolumn{5}{c}{$b = 2$ $SU(N)$ matter possibilities}\\ \hline & & & &\\[-9pt] $N$ & {\tiny\yng(1)}&${\tiny\yng(1,1)}$&${\tiny\yng(1,1,1)}$ &${\tiny\yng(1,1,1,1)}$\\[12pt] \hline $N \leq 12$ & $ 48 -4N$ & 6 & 0& 0\\ 6 & $24 + k$ & $6-k$ & $k/2 \leq 3$& 0\\ 7 & 20 + $5k$ & $6-3k$ & $k\leq 2$& 0\\ 8 & 25 & 2 & 1& 0\\ 8 & $16 + 8k$ & $6-3k$ & 0 & $k/2 \le1$\\ \hline \end{tabular} \end{center} In this case the analysis is slightly more complicated as we cannot just take $\sigma = t$ and treat $f_i, g_i$ as functions of $s$, since $\sigma$ is quadratic in $s, t$. We do not attempt to do a complete analysis constructing the most general classes of models, but describe some simple salient features of the models in this case. The equation of a generic nonsingular degree two curve $\{\sigma=0\}$ can be put into the form $\sigma = t^2 -s$ by choosing coordinates appropriately. We can then do an expansion in $\sigma$ of the form $f = f_0 + f_1 \sigma + \cdots$ where the $f_i$ are linear in $t$ and otherwise generic polynomials in $s$. Treating the expansions in this way we can systematically carry out the analysis using the method described in the previous section, since the ring of functions on sufficiently small open subsets of $\{\sigma=0\}$ is a UFD. This becomes complicated in practice since at each step we must use $t^2 \rightarrow s$ to bring products of functions back to the canonical form where the coefficients in the $\sigma$ expansion are linear in $t$. In principle, this approach leads to constructions of general models with $b = 2$. A non-generic class of such models is where we take $\sigma = t^2 -s$ with the $f_i$ being functions only of $s$. This simplifies the analysis of roots; the analysis is essentially as in the $b = 1$ case but each function such as $\phit$ has twice as many roots when considered on $\{\sigma=0\}$; for example, $\phit$ has six roots on $\{\sigma=0\}$: $s = r, t = \pm \sqrt{r}$ for each root $r$ of $\phit$ considered as a function of $s$. This leads to a construction of models precisely analogous to those in the $b = 1$ case, including the models with six $\Lambda^2$ representations as well as the cases with $\Lambda^3$ representations of $SU(6)$ and $SU(7)$. Because this simple class of models is not completely generic the number of parameters is smaller than would be associated with the full moduli space, and not all configurations are possible within this Ansatz. In particular, because the roots of any function in $s$ are always doubled in $\{\sigma=0\}$, we must get an even number of roots of $\beta$, and the number of half-hypermultiplets for $SU(6)$ in the $\Lambda^3$ representation is always even. Similarly, for $SU(7)$ the number of $\Lambda^3$ representations is even, so we can get the model with 2 such representations but not the model with one. To get the other models with odd numbers of $SU(6)$ and $SU(7)$ $\Lambda^3$ representations it is necessary to go beyond this Ansatz. A more generic class of $b = 2$ models can be identified following the structure of \eq{eq:exact-24}. 
We can construct a generic local $SU(12)$ model with six $\Lambda^2$ representations through \begin{eqnarray} f & = & -\frac13 \Phi^2 + \tilde{F}_{6}\sigma^{6}\label{eq:exact-12}\\ g & = &-\frac13\Phi f -\frac1{27} \Phi^3 = \frac2{27}\Phi^3 -\frac13\Phi \tilde{F}_{6} \sigma^{6}\nonumber\\ \Phi & = & \left[\frac14 \phi_0^2 + \phi_1 \sigma +\phi_2\sigma^2+\phi_3\sigma^3 \right]\,, \nonumber \end{eqnarray} where $\phi_i$ are in the ring of functions on $\{\sigma=0\}$. As in the $SU(24)$ case for $b=1$, putting the equation into Tate form \begin{equation} Y^2 + \phi_0 XY = X^3 + \sigma \Upsilon X^2 + \sigma^{6} \tilde{F}_{6}X \end{equation} shows that $(0,0)$ is a point of order $2$ in the Mordell--Weil group, and hence the actual gauge group is $SU(12)/\mathbb Z_2$. Models with smaller gauge groups can be found by adding higher order terms to $f, g$ to reduce the order of vanishing of $\Delta$. By tuning parameters in such models it should be possible to identify the $b = 2$ models with odd numbers of (half/full) $\Lambda^3$ hypermultiplets. We have not identified the class of global F-theory models giving rise to the $\Lambda^4$ representation of $SU(8)$. As discussed in Section \ref{sec:local}, such matter representations should arise from a singularity with a specific $A_7 \rightarrow E_8$ embedding. Because the $SU(8)$ model with a single $\Lambda^4$ representation (i.e., $k=2$ in the last line of the table above) does not contain any $\Lambda^2$ representations, it seems that this model cannot arise from a complete enhancement to $E_8$ through the embedding discussed in Section \ref{sec:local}. A related mechanism may be at work, however, perhaps involving an incompletely resolved singularity. We leave the identification of the global $b = 2$ model with this matter structure for further work. We note, however, that since the $\Lambda^4$ representation of $SU(8)$ is quaternionic it can come in 1/2 hypermultiplet representations. A half hypermultiplet of $\Lambda^4$ combined with eight fundamental representations has the same contribution to the anomalies as 3 $\Lambda^2$ representations. We thus expect that there may be another class of exotic transitions transforming matter in an $SU(8)$ gauge group from \begin{equation} \frac{1}{2} \;{\tiny\yng(1,1,1,1)} \;+ 8 \times \;{\tiny\yng(1)}\; \; \rightarrow \; \; 3 \times \;{\tiny\yng(1,1)}\; \,. \end{equation} Finally, we identify another new type of phase transition associated with $b = 2$ models. Consider a class of $b = 2$ models with \begin{equation} \sigma = t^2 -\epsilon s \,, \end{equation} where $\epsilon$ is a parameter for the models. We can use the method described above where each $f_i, g_i$ is a function purely of $s$ to construct a subclass of the generic set of models with 6 $\Lambda^2$ representations of $SU(N)$. Now we take the parameter $\epsilon \rightarrow 0$. This is just a parameter in the space of Weierstrass models. In the limit $\epsilon = 0$ this becomes a model with a codimension one $A_{2 N -1}$ singularity localized on the zeros of the function $\sigma' = t$. This is therefore identical to a $b = 1$ model with 3 $\Lambda^2$ representations of $SU(2N)$. Considered in the opposite direction, this transition provides a non-standard breaking of an $SU(2N)$ theory with 3 $\Lambda^2$ representations to an $SU(N)$ theory with 6 $\Lambda^2$ representations. A related transition has recently been identified in the context of intersecting brane models \cite{Nagaoka-Taylor}.
We leave a more complete discussion of this type of phase transition for future work. \vspace*{0.1in} \subsection{$b = 3$} For $b = 3$ the total genus (\ref{eq:genus-relation}) associated with the matter content must be 1. The only representations with genus 1 are the adjoint and two-index symmetric (Sym${}^2$) representations. So each model must have one or the other of these. We list the set of possible matter contents for an $SU(N)$ theory with $b = 3$ \begin{center} \begin{tabular}{| c | c | c | c |c |c |} \multicolumn{6}{c}{$b = 3$ $SU(N)$ matter possibilities}\\ \hline & & & & &\\[-8pt] $N$ & ${\tiny\yng(1)}$ & ${\tiny\yng(1,1)}$ & ${\tiny\yng(1,1,1)}$ &Adj & ${\tiny\yng(2)}$ \\[8pt] \hline $N \leq 8$ & $ 72-9N$ & 9 & 0& 1 & 0\\ $N \leq 8$ & $ 72-9N$ & 10 & 0& 0 &1\\ 6 & $18 + k$ & $9-k$ & $k/2 \leq 4$& 1 & 0\\ 6 & $18 + k$ & $10-k$ & $k/2 \leq 5$& 0 &1\\ 7 & $9 +5k$ & $9-3k$ & $k\leq 3$& 1 & 0\\ 7 & $9 +5k$ & $10-3k$ & $k\leq 2$& 0 &1\\ 8 & 9 & 5 & 1& 1 & 0\\ 8 & 9 & 6 & 1& 0 &1\\ 9 & 5 & 4 & 1& 1 & 0\\ \hline \end{tabular} \end{center} Note that the general pattern is that for $N > 5$, any number of $\Lambda^3$ representations can be realized along with $(N -4) (N -3)/2 -1$ extra fundamentals, at the cost of $N -4$ $\Lambda^2$ representations, beginning with the model with 9 $\Lambda^2$'s, one adjoint, and $72-9 N$ fundamentals (or the same with 10 $\Lambda^2$'s and one symmetric representation instead of the adjoint). Such exchanges are possible in the space of allowed theories except when ruled out by the gravitational anomaly bound on scalar degrees of freedom or positivity of the number of fundamentals; for example at $SU(9)$ the number of fundamentals would become negative if we attempted to remove the $\Lambda^3$ representation. As in the $SU(6)$ case discussed above, we expect that all of these changes in matter can be realized through phase transitions along continuous one-parameter families of F-theory models. From the anomaly point of view, we can also exchange an adjoint representation, along with one neutral scalar, for one symmetric and one antisymmetric representation. This cannot be done through continuous phase transitions, however, since as discussed in Section \ref{sec:local} the distinction between these representations is determined by global monodromy on the brane structure. Note that there are two models appearing in the list of models with an adjoint that have no corresponding model with an Sym${}^2$ representation, the model with $SU(7), k = 3$ and that with $SU(9)$. In both cases this can be seen from counting degrees of freedom. These two models with the adjoint representation have a total of $273 + N^2 -1$ charged hypermultiplets. Thus there are no uncharged scalars in these models, by \eq{eq:hv}. To exchange an adjoint for a symmetric and an antisymmetric would require one additional charged hypermultiplet, for a total of $273 + N^2$, violating the gravitational anomaly bound. As in the $b = 2$ case, we can proceed in several ways to construct models of the generic $b = 3$ type with 9 $\Lambda^2$ representations and one adjoint. Choosing a generic cubic smooth $\sigma$, the corresponding curve is an elliptic curve of genus one, giving one adjoint representation. 
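As a quick check of the genus bookkeeping (this is just the standard degree--genus formula for smooth plane curves, not an additional input): \begin{equation*} g = \frac{(b-1)(b-2)}{2}\,, \end{equation*} so a smooth cubic ($b=3$) has $g=1$, matching the single adjoint above, while a smooth quartic ($b=4$) has $g=3$, matching the three adjoints of the generic $b=4$ models discussed below.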
We can expand order by order in the ring of local functions on $\{\sigma=0\}$, or we can take a cubic such as $\sigma =t^3 + s$ with non-generic coefficient functions depending only on $s$, or we can construct the $N = 8$ model using an analogous construction to \eq{eq:exact-24}, \eq{eq:exact-12}. By continuously deforming $\sigma$ we can get a singular curve with an equation such as $\sigma = t^3 + st$ with a double point singularity. Because this is continuously connected to the family of theories with smooth $\sigma$, however, this class of models should always have an adjoint representation and not a symmetric representation. We can describe various models with $\Lambda^3$ matter content as discussed in the $b = 2$ case above, though as in that discussion we cannot explicitly identify all such models. Note in particular that the single $N = 9$ model cannot be realized in this way, and must require some further tuning of the Weierstrass coefficients. We leave a further study of these models to future work. \vspace*{0.1in} \subsection{$b = 4$} Now let us consider degree 4 curves, corresponding to $b = 4$ matter content in the low-energy theory. For $b = 4$, the total genus is 3. So we expect 3 adjoints for a smooth degree 4 curve in F-theory. From the genus formula \eq{eq:genus}, the other possibilities for saturating the genus are either a linear combination of $3-x$ adjoints and $x$ Sym${}^2$ representations for arbitrary $SU(N)$, or several exotic possibilities: a single ``box'' ( ${\tiny\yng(2,2)}$ ) representation for $SU(4)$ or a ${\tiny\yng(2,1)}$ representation for $SU(5)$; each have genus 3. There are a variety of anomaly-free low-energy $SU(N)$ models with various types of matter content, as in the cases with smaller $b$. For $N \leq 6$ there are models with 3 adjoints, 12 $\Lambda^2$ representations, and no $\Lambda^3$ representations. These correspond to the generic branch in the Weierstrass models as described above and can be constructed in a similar fashion to $b = 2, 3$. There are a variety of models that exchange $\Lambda^2$'s for $\Lambda^3$'s + fundamentals. We assume that these models correspond to various singular limits in a similar fashion to that described above. There are also various models that replace some or all of the adjoints with $\Lambda^2$ + ${\rm Sym}^2$ (again at the cost of a single neutral scalar). We do not have anything to say about these models that goes beyond the discussion of the analogous models with $b = 3$. The most novel feature that arises at $b = 4$ is the possibility of a new matter representation as mentioned above. Although there is no apparently-consistent low-energy model that contains the ${\tiny\yng(2,1)}$ representation of $SU(5)$ at $b = 4$ (this representation does appear for $SU(5)$ in combination with 3 adjoints at $b = 5$), for $N = 4$ there is a model \begin{equation} SU(4): \;\;\;\;\; \;\;\;\;\; {\rm matter} = 1 \times {\tiny\yng(2,2)}+ 64 \times {\tiny\yng(1)} \label{eq:4-box} \end{equation} While we identified a group-theoretic embedding of the box representation of $SU(4)$ in Section \ref{sec:local}, we do not have an explicit realization of a theory containing this representation as a global Weierstrass model on $\P^2$. Finding such a singularity may involve an incomplete resolution of some kind, since the embedding $A_3 \rightarrow D_6$ discussed in Section \ref{sec:local} would otherwise seem to give rise to additional adjoint matter fields. 
We leave the construction of a global theory describing the model with matter content \eq{eq:4-box} as a challenge for future work. \section{4D models} \label{sec:4D} The general formalism developed for describing $SU(N)$ models in Section \ref{sec:global} applies just as well to F-theory on an elliptically fibered Calabi-Yau 4-fold as in the case of elliptically fibered threefolds. This provides a framework for systematically analyzing F-theory constructions of 4D theories of supergravity coupled to $SU(N)$ gauge theories. For 4D F-theory constructions the full story is more complicated, since fluxes must be present \cite{Becker-m, Dasgupta-rs}. The fluxes generate a superpotential, and nonperturbative contributions from instantons are also present. These effects produce a potential on the moduli space that lifts the continuous flat moduli space to a landscape with separated vacua and stabilized moduli. Nonetheless, underlying this more complicated physics is the continuous moduli space of degrees of freedom associated with the Weierstrass coefficients in an F-theory construction. When the compactification space is large, these moduli will be light, and the moduli space description remains approximately valid. \subsection{4D Weierstrass models} \label{sec:4D-Weierstrass} We do not go far into the issues regarding moduli stabilization and fluxes on 4D F-theory vacua here. F-theory methods for analyzing matter in 4D theories in the presence of flux were developed in \cite{Donagi-Wijnholt,Beasley-hv}; following these works there has been a great deal of recent work on 4D F-theory constructions with particular focus on phenomenological applications (see for example \cite{Beasley-hv2, Marsano-F-theory, Blumenhagen-F-theory, Cvetic-gh}); for reviews of some recent developments in these directions see \cite{Denef-F-theory, Heckman-review, Weigand-review}. In this paper we take a simplistic approach where we ignore fluxes and the lifting of moduli, and consider the tuning necessary in Weierstrass models to achieve an $SU(N)$ gauge group. We can then consider constructions with matter fields in different representations. Although the number of fields appearing in a particular representation may depend upon the details of fluxes and the full F-theory construction, the type of representation should depend only on the classification of codimension two singularities, on which we are focused here. In four dimensions, as in six dimensions, the simplest F-theory compactification we may consider is compactification on projective space. We thus consider F-theory on a 4-fold that is elliptically fibered over $\P^3$. We consider some explicit examples of the structure of Weierstrass models giving ${\cal N} = 1$ 4D supergravity theories in this context. Previous work in which F-theory constructions over $\P^3$ were considered includes \cite{Klemm-lry}. On $B =\P^3$ we have $K = -4H$, where $H$ is the hypersurface divisor generating $H_2 (B,\Z)$. The Weierstrass functions $f, g$ are then polynomials of degree $16$ and 24 in local variables $r, s, t$, and the discriminant is of degree 48. We are looking for an $SU(N)$ gauge group associated with an $A_{N -1}$ singularity. We consider the discriminant locus on a degree $b$ hypersurface $\{\sigma=0\}$. The coefficient functions $f_i$ ($g_i$) then have degree $16-bi$ ($24 -bi$). We begin as in 6D with $b = 1$. We again follow the systematic analysis of Section \ref{sec:global}, using $\sigma = t$, so that $f, g$ are functions of $r, s$.
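For orientation, the degree counting quoted above follows from the standard Weierstrass assignments (we assume nothing beyond this): \begin{equation*} f \in {\cal O}(-4K) = {\cal O}(16H)\,, \qquad g \in {\cal O}(-6K) = {\cal O}(24H)\,, \qquad \Delta \propto 4 f^3 + 27 g^2 \in {\cal O}(-12K) = {\cal O}(48H)\,, \end{equation*} and expanding $f = \sum_i f_i \sigma^i$, $g = \sum_i g_i \sigma^i$ on a degree $b$ hypersurface removes $bi$ from the degree of the $i$-th coefficient, giving the degrees $16 - bi$ and $24 - bi$ stated above.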
The function $\phit$ controlling the leading term $f_0$ is now of degree 4. We can construct generic models with $SU(N)$ gauge groups by tuning the coefficients to make each term $\Delta_n$ in the discriminant vanish order by order, as in Section \ref{sec:global}. In the generic model, matter will be associated with the points where $\Delta_{N}$ acquires extra degrees of vanishing, associated with codimension two singularities. The intersection between $\{\sigma=0\}$ and $\{\phit=0\}$ defines a curve in $\P^3$ that is generically a genus 3 curve. There will be matter in the 2-index antisymmetric $\Lambda^2$ representation of $SU(N)$ localized on this curve. As mentioned above, a precise determination of the number of matter fields in this representation depends on details of the theory such as fluxes that we do not consider here. The rest of $\Delta$ defines another divisor (possibly reducible) whose intersection with $\{\sigma=0\}$ gives another curve (possibly disconnected) that supports matter in the fundamental representation. Although the curve $\{\phit= \sigma=0\}$ is of higher genus, at generic points along this curve the singularity is a codimension two singularity identical to the $A_{n -1}\rightarrow D_{n}$ singularities discussed earlier. The generic 4D model with $b = 1$ having largest gauge group can be described in a fashion similar to \eq{eq:exact-24} \begin{eqnarray} f & = & -\frac13 \Phi^2 + \tilde{F}_{16}t^{16}\label{eq:exact-32}\\ g & = &-\frac13\Phi f -\frac1{27} \Phi^3 = \frac2{27}\Phi^3 -\frac13\Phi \tilde{F}_{16} t^{16}\nonumber\\ \Phi & = & \left[\frac14 \phi_0^2 + \phi_1 t +\phi_2t^2+\phi_3t^3 \cdots +\phi_8t^8 \right]\,, \nonumber \end{eqnarray} where $\phi_0$ is a polynomial in $r, s$ of degree $0$, $\phi_k$ is a polynomial in $r, s$ of degree $8-k$ for $k>0$ and $\tilde{F}_{16}$ is a constant. Once again, we can write this in Tate form \begin{equation} Y^2 + \phi_0 XY = X^3 + t \Upsilon X^2 + t^{16} \tilde{F}_{16}X \,. \end{equation} to see that $(0,0)$ is a point of order $2$ in the Mordell--Weil group of the elliptic fibration. Thus, the gauge group in this case is $SU(32)/\mathbb Z_2$. The curve supporting the $\Lambda^2$ matter is the intersection between $\{\sigma=0\}$ and $\{\phi_0=0\}$. Models with smaller gauge group can be found by adding high-order terms to $f, g$ to reduce the order of vanishing of the discriminant $\Delta$. Just as in 6D, the parameters of the theory can be tuned so that there are more elaborate codimension two singularities in the 4D $SU(N)$ models. For an $SU(6)$ model, for example, as in \eq{eq:pt-ab}, if $\phit$ does not divide $\psi_2$, then there must be a component $\beta$ of $\phit$ that is a factor of $\phi_1$. The intersection of $\{\beta = 0\}$ with $\{\sigma=0\}$ gives a curve supporting matter in the $\Lambda^3$ representation of $SU(N)$. Since $\{\sigma=0\}$ is smooth, and $\Delta$ is smooth at generic points, for $b = 1$ the only general classes of codimension two singularity types are the same as those that can arise for $b = 1$ models in 4D, namely $n$-index antisymmetric matter fields. As we discuss further in the following subsection, this gives a constraint (though relatively mild) on certain classes of 4D ${\cal N} = 1$ supergravity theories that can be realized in F-theory. For higher $b$, the story is again parallel to that in 6D, although our understanding of the details such as the number of types of each matter field is not as complete without a careful treatment of fluxes. 
Nonetheless, just as in 6D, matter with a nonzero genus contribution $g_R$ can only arise when $b > 2$, and will be associated with codimension one singularities on $\{\sigma=0\}$. \subsection{A (mild) constraint on 4D supergravity theories} The above analysis leads to a constraint on the set of 4D ${\cal N} = 1$ supergravity theories that can be realized from F-theory. This constraint is rather specific to the models associated with the $\P^3$ compactification, but serves as an example of a constraint on possible low-energy 4D supergravity models. From the point of view of the 4D theory, the constraint is of the form ``any theory with property $X$ has features $Y$,'' where $X$ describes a set of properties that uniquely determine the F-theory construction to come from an elliptic fibration over $\P^3$ with a gauge group $SU(N)$ realized on a divisor $\{\sigma=0\}$ of degree $b = 1$, and $Y$ are the constraints on models of this type. We briefly summarize the features ($X$) of a 4D model that uniquely determine the F-theory base and $SU(N)$ divisor class to be $\P^3$ and $\{\sigma=0\} = H$. We begin with the correspondence between discrete structures in the 4D supergravity theory and in the base of the F-theory compactification; the connection between the F-theory geometry and the low-energy theory is systematically described in \cite{Grimm-F-theory}, and further analysis of this correspondence will appear in \cite{Grimm-Taylor}. Similar to the story in 6D, a 4D F-theory compactification on a base $B$ gives rise to topological terms in the low-energy action of the form \begin{eqnarray} \tau_R & \sim & -K \cdot\chi \; \tr R \wedge R \label{eq:topological-couplings}\\ \tau_F & \sim & b \cdot\chi \;\tr F \wedge F \,, \nonumber \end{eqnarray} where $\chi$ are axions coming from wrapping the $C_4$ Ramond-Ramond field on divisors of the base. In 6D, the corresponding terms appear with two-form fields in place of axions, since $C_4$ is wrapped on 2-cycles instead of 4-cycles. The $\tau_F$ term is simply the usual coupling between $C_4$ and 3-branes associated with instantons on the 7-branes, where $b$ is the divisor class of the 7-branes carrying the local factor of the gauge group. The $\tau_R$ term comes from the coupling of the 7-branes to curvature, summed over all 7-branes as described in the 6D case by Sadov \cite{Sadov}, and $K$ is the canonical class of the base. The number of axions of this type is given by the Hodge number $h^{(1, 1)} (B)$ of the F-theory base. For $\P^3$, $h^{1, 1} = 1$ and there is only one such axion. In general, $K, b$ are elements of a lattice $L$, where the shift symmetries of the axions live in the dual lattice $L^*$. In the case of only one axion $\chi$ such as for F-theory on $\P^3$, the couplings in $\tau_R, \tau_F$ are each quantized so that $K, b$ are integers. Now let us consider the special features of the base $\P^3$ that may be visible in the 4D supergravity theory. There are a number of spaces with $h^{1, 1} = 1$ that could act as bases for a 4D F-theory compactification. Any such space must be Fano, since $-K$ must be effective (though note that F-theory bases with $h^{1, 1} > 1$ need not be Fano). Fano spaces with $h^{1, 1} = 1$ have been completely classified \cite{Iskovskih}. For such a space, the {\sl index} is the ratio $-K/x$ where $x$ is the smallest effective divisor class, the generator of $H^2 (B,\Z)$. For projective space $\P^3$, the index is 4, since $K = -4H$. All other Fano spaces with $h^{1, 1} = 1$ have a smaller value of the index. 
The ratio between the integers parameterizing the topological couplings \eq{eq:topological-couplings} is the ratio $-K/b$ between the canonical class of the base and the divisor class characterizing each local factor of the gauge group. This ratio must be less than the index of the F-theory base, since $b \geq 1$. Thus, for theories with $\ho = 1$, the maximum value of $-K/b$ possible is 4, and this value is only attained when the base is $\P^3$ and the local factor of the gauge group is wrapped on the divisor $H$, corresponding to the case $b = 1$ analyzed above. Thus, we can state a weak constraint on 4D supergravity theories that come from F-theory: any 4D ${\cal N} = 1$ supergravity theory with only one of the appropriate type of axion, and couplings \eq{eq:topological-couplings} with a ratio of integers $-K/b = 4$ that has an $SU(N)$ local gauge group factor must have $N \leq 32$, and can only have matter in $k$-index antisymmetric representations of $SU(N)$. In particular, such a theory cannot have matter in the adjoint representation of the gauge group. This is not a strong constraint. And there are a number of rather subtle issues in making this constraint rigorous. In particular, the lifting of the moduli by the flux and nonperturbative superpotential make the determination of the spectrum and terms in the action less clear than in 6D theories where the spectrum must be massless. Nonetheless, at least for large volume compactifications the structure of the theory determined by F-theory should be apparent in the low-energy theory, and at least in this regime this constraint should hold. Despite the limitations in the range of applicability and interpretation of this constraint, it is interesting to study the constraints that F-theory places on 4D supergravity theories. It should not be surprising that such constraints exist; string constructions generally place many constraints on which possible low-energy theories can be realized. In six dimensions, anomalies provide a window on the strong constraints imposed by F-theory constructions \cite{universality}-\cite{0}, and other F-theory constraints can also be identified as consistency conditions from the point of view of the low-energy theory \cite{Seiberg-Taylor}. Further discussion of constraints on 4D theories from F-theory will appear in \cite{Grimm-Taylor}. It will be interesting to investigate whether the type of constraint on gauge group and matter content identified in this paper can be generalized and understood in terms of macroscopic consistency conditions from the point of view of 4D supergravity. \section{Conclusions} \label{sec:conclusions} We have explored the structure of some codimension two singularities in F-theory and the matter representations to which they give rise. The focus here has been on understanding how such codimension two singularities arise in global F-theory models. We have developed a very general characterization of global Weierstrass models giving rise to $SU(N)$ gauge groups, and analyzed how this general framework applies for F-theory constructions on the bases $\P^2$ and $\P^3$. It is clear that there is still much unexplored territory in the full range of codimension two singularities. Beyond the standard rank one enhancement studied by Katz and Vafa, there are singularities with incomplete resolution, higher rank enhancement, and singularities associated with singular curves in the base, all of which can give rise to different kinds of matter in F-theory constructions. 
Further exploring this range of possibilities should provide a fruitful enterprise for further understanding the r\^ole of matter in F-theory and string theory. One interesting feature that we have encountered here is the presence of novel phase transitions in F-theory. We have identified phase transitions in which a matter field transforming in the 3-index antisymmetric representation of $SU(6)$ combines with a matter field in the fundamental representation to produce a matter field in the 2-index antisymmetric representation. This transition does not change the gauge group and hence is not a standard Higgsing transition, but should have some description in the low-energy field theory. There are analogous transitions for the 3-index antisymmetric representation of any $SU(N), N \geq 6$. We expect similar transitions for other recombinations of matter fields that leave the 6D anomaly contributions unchanged, such as transitions involving the 4-index representation of $SU(N), N \geq 8$. We have also found unusual transitions where the group $SU(2N)$ breaks to $SU(N)$ with three matter fields in the two-index antisymmetric representation going to six such fields in the $SU(N)$ theory. We hope to return to a more detailed study of these exotic phase transitions in future work. Using global F-theory models on the base $\P^2$ to describe 6D supergravity theories without tensor multiplets, we have shown that a systematic parameterization of Weierstrass models precisely matches the space of theories identified through anomaly constraints in the low-energy theory, at least for $SU(N)$ gauge groups supported on curves of low degree in the F-theory base. The structure of matter representations in these theories and number of degrees of freedom matches neatly between F-theory and the low-energy analysis for small $N$ and degree, with more complicated phenomena arising at higher $N$ and degree that pose interesting questions for future work. Applying the global analysis of Weierstrass models to 4D F-theory constructions we have characterized the matter content of a simple class of $SU(N)$ models on $\P^3$. This leads to a mild constraint on 4D supergravity theories, limiting the gauge group and matter content for this specific class of models. This class of models can be identified from the spectrum and topological couplings of the 4D theory. Further work in this direction promises to expand our understanding of F-theory constraints on 4D supergravity theories, and to clarify the structure of matter fields in general F-theory constructions. \vspace*{0.3in} \noindent {\bf \large Appendix}
\section{Introduction} Ideal quantum key distribution (QKD) {\em with qubits}\cite{BB84} is known to be secure\cite{Mayers96,LC98,bbbmr,sp00,gl03}, and the security proofs are based on what are called information-vs.-disturbance results. The basic QKD protocol involves the following steps: Alice transmits one of four possible states randomly chosen from $|0\rangle_{X}, |1\rangle_{X}, |0\rangle_{Z},$ and $|1\rangle_{Z}$, i.e., the basis vectors in the $X$ and $Z$ bases. The basic information-vs.-disturbance result states that if the eavesdropper, Eve, obtains information about which basis vector was sent in for example, the $X$ basis, then she must introduce disturbance in the $Z$ basis. By disturbance, it is meant that if Bob made measurements to distinguish between the two states sent in the $Z$ basis, then he will observe errors. Thus Alice and Bob can test a random subset of a transmitted block of qubits in the $Z$ basis and estimate the information that Eve has about those in the $X$ basis. If the error rate is small enough in the tested qubits (hence, Eve's information about the qubits in the $X$ basis is small enough), then Alice and Bob can use classical error correcting and amplification schemes to distill an informationally secure key from the qubits sent in the $X$ basis. In this paper, we consider a general setup involving $D$ dimensional quantum states, instead of the 2-dimensional systems considered in the QKD literature. The basic setup is as follows: Alice sends states chosen randomly from among the basis vectors of a particular basis of the $D$ dimensional Hilbert space. She intends these states to act as the information states, i.e., the $\log D$ bits per transmitted state will be used to distill a final key. The natural questions that arise are (i) which set of states should the ``test" states come from, and (ii) what is the corresponding information-vs.-disturbance result for a $D$-dimensional space. We first extend some basic distinguishability bounds found for qubits\cite{fuchs99} to $D$-level systems. That is, if a source $S$ outputs one of $n$ $D$-dimensional quantum states randomly, then we derive bounds on the mutual information between $S$ and any measurement output $E$, only in terms of the properties of the quantum states generated by $S$. In other words, we bound the mutual information between the random variable representing which state was generated by $S$ and the random variable representing the output from a generalized measurement of the states output by $S$. These results are powerful because they only depend on the source and not on any measurement done. We next apply these bounds on distinguishability to relate the amount of information eavesdroppers can obtain to the disturbance they cause in the quantum state. In particular, we prove a generalized information-vs.-disturbance result: if Eve gets information about which basis vector (from the chosen basis in $D$ dimensions) was sent by Alice, then she must introduce {\em disturbance in any basis that is mutually unbiased to the basis chosen by Alice}. In terms of previous work, our results generalize those in \cite{bbbmr,boykin02}. We would also like to note that QKD in dimension $3$ was studied in \cite{bm02,btb04}. Security bounds for individual cloning attacks in dimension $D$ have been reported\cite{ags03}. More recently, qubit QKD techniques\cite{LC98,sp00} have been generalized to prime dimensions\cite{Chau04}. By contrast, our bounds {\em apply to any attack in any dimension}. 
Also, this work further illuminates the relationship of mutually unbiased bases (MUBs)\cite{ivanovic81} to quantum cryptography. Previously, it was shown that the eigenvectors of maximally commuting quantum encryption operators form MUBs\cite{bbr02}. Here we show that when Eve tries to get information in one basis, she disturbs \emph{all} MUBs. Our result may be viewed as a form of uncertainty principle: the more Eve knows about one basis, the more she disturbs \emph{all} conjugate bases. In addition to applying the above bounds and techniques to the security of quantum keys, we also consider \emph{functions of messages encrypted with those keys}. If Alice and Bob share a key $k$, it may be that Eve learns only exponentially little information about $k$, but she may be able to learn a lot about some function of a message $f(M)$, given the encrypted version of that message $m+k$. In particular, consider the following setup: Alice sends a random basis vector $\left| k\right\rangle$ belonging to a chosen basis to Bob. Alice next publicly announces she sent basis vector $\left| k\oplus m \right\rangle$, where $\oplus$ is the bitwise exclusive or (XOR) operation. Bob could then recover the encrypted message $m$. Now, we know that Eve's information about $k$ is bounded by the error she causes in any basis that is mutually unbiased to the chosen basis. How about a function $f(M)$ of the message? For example, Eve might be interested in only learning whether $m=0$ or not. In a previous work\cite{boykin02}, it was shown that given the encrypted message $m + k$, the information that Eve gets about any function $f(m)$ of an encrypted $n$-bit message is bounded by the square root of the error Eve's attack causes in the Hadamard transformed basis. More recently, alternative and more general solutions to this problem have been given \cite{bhl04,rk04}. In this work we extend previous results\cite{boykin02} beyond qubits to $D$-dimensional systems. Also, we show that Eve's information is bounded by the error she causes in \emph{any} MUB. This paper is structured as follows: Section \ref{sec:info_bounds} gives various new bounds on distinguishability and classical information accessible from quantum states; Section \ref{sec:qkd} applies these results to obtain ``information-vs-disturbance'' results for QKD; finally in Section \ref{sec:sfunc} we show these results also hold for \emph{functions of encrypted messages} and not just for the keys themselves. \section{Bound On Information For Any Source} \label{sec:info_bounds} In \cite{fuchs99}, many bounds are given on the distinguishability of two quantum states. In this section we generalize some of those to the distinguishability of $n$ quantum states. Our setting is the following: A source outputs one of $n$ quantum states. The random variable representing the source is $S$, i.e., it is the identifier of the particular quantum state made available at the output and can be generated by purely classical means, such as flipping coins or spinning wheels. A general measurement is made on the state, which results in one of several measurement outcomes represented by the random variable $E$. We consider bounds on the mutual information $I(S;E)$ valid for any measurement, which is to say, the bound will only be a function of the quantum states emitted by the source.
The bounds here address the same problem as the well known Holevo bound\cite{Kholevo73}, which is: \begin{equation} \label{eq:holevo} I(S;E) \le H(\rho) - \sum_s p_s H(\rho_s) \end{equation} where $H(\rho)$ is the Von-Neumann entropy of the density matrix $\rho$. The main difference between the results of this section and the Holevo bound is that these results deal explicitly with a distance metric, namely the trace norm distance, between two density matrices. Using a simple distance metric allows a certain ease in proving the results in Section \ref{sec:qkd}\footnote{We do believe, however, that it is possible to obtain similar results by applying the purification techniques of Section \ref{sec:qkd} directly to the Holevo bound.} In the appendix, we review certain previously published \cite{fuchs99,boykin02} bounds on distinguishability of quantum states. As we will see later in the paper, this allows us to derive the fundamental information vs. disturbance results that are at work in quantum security protocols. Additionally, these results give an important insight into the robustness of the trace norm as a metric bound for information. We begin by developing a lower bound on entropy and then applying that bound to the mutual information. \begin{lemma} \label{lemm:gen-entropy-bound} For any random variable $X'$ with each probability ${p_i}' \le 1/2$: \begin{eqnarray*} H(X)\ge H(X') - \sum_i \log(\frac{1}{{p_i}'})|p_i - {p_i}'| \end{eqnarray*} \end{lemma} {\textbf Proof.} $H(X)=-\sum_i p_i \log p_i$, so if we define $f(p_i)\equiv -p_i \log p_i$, we see that $H(X) = \sum_i f(p_i)$. See that $f$ is concave and is zero at $p_i=0,1$; thus lemma \ref{lemm:entropy-bound} applies: \begin{eqnarray*} f(p_i)&\ge&f({p_i}') - \frac{f({p_i}')}{{p_i}'}|p_i - {p_i}'| \end{eqnarray*} Plugging this into the definition of entropy: \begin{eqnarray*} H(X)&=&\sum_i f(p_i)\\ &\ge& \sum_i (f({p_i}') - \frac{f({p_i}')}{{p_i}'}|p_i - {p_i}'|)\\ &=& H(X') - \sum_i \log(\frac{1}{{p_i}'})|p_i - {p_i}'| \end{eqnarray*} \mbox{\rule{1.6mm}{4.3mm}} \begin{lemma} \label{lemm:nbit_mi} For any source S that outputs $s$ with probability $p_s$ such that $p_s \le 1/2$, the mutual information is bounded: \begin{eqnarray*} I(S;E)&\le&\sum_s p_s \log(\frac{1}{p_s})\sum_e |p(e|s) - p(e)| \end{eqnarray*} \end{lemma} {\textbf Proof.} Make use of lemma \ref{lemm:gen-entropy-bound}: \begin{eqnarray*} I(S;E)&=&H(S) - H(S|E)\\ &=&H(S) - \sum_e p_e H(S|E=e)\\ &\le&H(S) - \sum_e p_e \left( H(S) - \sum_s \log(\frac{1}{p_s})|p(s|e) - p_s|\right)\\ &=& \sum_e p_e \sum_s \log(\frac{1}{p_s})|p(s|e) - p_s|\\ &=& \sum_e \sum_s p_s\log(\frac{1}{p_s})|\frac{p(e)p(s|e)}{p_s} - p(e)|\\ &=& \sum_e \sum_s p_s\log(\frac{1}{p_s})|p(e|s) - p(e)|. \end{eqnarray*} \mbox{\rule{1.6mm}{4.3mm}} \begin{lemma} \label{lemm:nbitSD} If a source $S$ outputs quantum states $\rho_i$ with probabilities $p_i$ with $p_i \le 1/2$, then mutual information between this source and the output of any measuring device $E$ is bounded: \begin{eqnarray*} I(S;E) &\le& \sum_s p_s \log(\frac{1}{p_s})Tr|\rho_s - \sum_s p_s \rho_s|. \end{eqnarray*} \end{lemma} {\textbf Proof.} Define the notation $\rho = \sum_s p_s \rho_s$. 
Starting from lemma \ref{lemm:nbit_mi}, we use the definition of a POVM to replace $p(e|s)$ with $Tr(E_e \rho_s)$: \begin{eqnarray*} I(S;E) &\le& \sum_e \sum_s p_s\log(\frac{1}{p_s})|p(e|s) - p(e)|\\ &=& \sum_e \sum_s p_s\log(\frac{1}{p_s})|Tr(E_e \rho_s) - Tr(E_e \rho)|\\ &=& \sum_e \sum_s p_s\log(\frac{1}{p_s})|Tr(E_e(\rho_s - \rho))| \end{eqnarray*} Using the same facts about POVMs as in lemma \ref{lemm:1bitSD}, one can show that \begin{eqnarray*} \sum_e |Tr(E_e(\rho_s - \rho))| &\le& Tr|\rho_s - \rho|. \end{eqnarray*} Hence, we have: \begin{eqnarray*} I(S;E) &\le& \sum_s p_s\log(\frac{1}{p_s})Tr|\rho_s - \rho|. \end{eqnarray*} \mbox{\rule{1.6mm}{4.3mm}} \begin{corollary} \label{co:sd_equal_p} If a source $S$ outputs one of $n$ quantum states $\rho_i$ with probability $1/n$, then the mutual information between this source and the output of any measuring device $E$ is bounded: $I(S;E)\le \log n \sum_s \frac{1}{n}Tr|\rho_s - \rho|$, where $\rho = \frac{1}{n}\sum_s \rho_s$. \end{corollary} {\textbf Proof.} For all $n\ge 2$ we have $1/n \le 1/2$, hence lemma \ref{lemm:nbitSD} applies: \begin{eqnarray*} I(S;E) &\le& \sum_s p_s\log(\frac{1}{p_s})Tr|\rho_s - \rho|\\ &=& \log n \sum_s \frac{1}{n}Tr|\rho_s - \rho| \end{eqnarray*} \mbox{\rule{1.6mm}{4.3mm}} Now we have a basic lemma in hand which gives an upper bound on the information any measurement device can get from any source, purely in terms of the quantum states emitted from that source. In the next section, we will model the eavesdropping process as a source of quantum states for Eve. Eve is free to measure states in any way, but using the previous lemma, we have an upper bound on how much information she may obtain. \section{Security of Quantum Key Distribution} \label{sec:qkd} We now have the tools necessary in order to derive an {\em information theoretic counterpart to the Heisenberg uncertainty principle}. This result is the basis for quantum security results in \cite{bbbmr}. Quantum key distribution (QKD) is directly related to the setup we considered in the previous section. In general, in a QKD setup Alice has the source $S$ that outputs one of $n$ quantum states; Alice transmits the output state over a quantum channel to Bob. This quantum channel, however, can belong to the eavesdropper Eve, who can perform any operation that quantum mechanics allows. Figure \ref{fig:basic} gives a schematic of the most general attack that Eve might perform. From her perspective, she has access to a source, and she can make any measurement to get information about what was sent. Bob thus receives a state that Eve has already processed and makes his own measurements using a fixed protocol that is known to everyone. Alice and Bob complete a block transmission of several output states of the source $S$, and then use classical communication over an open channel to distill a secret key. Eve can also listen in on the classical channel, but cannot perform a person-in-the-middle attack on it, which would make the whole protocol trivially insecure. Such a classical channel can be easily implemented by message authentication, e.g., via previously shared secret bits between Alice and Bob. The security of QKD schemes depends on the amount of mutual information between Alice's source, $S$, and Eve's measurement $E$ (i.e., $I(S;E)$ as considered in the previous section) when measured as a function of the disturbance that she causes to the state received by Bob.
The intuition from quantum mechanics is that measurements will disturb the system; hence, Alice and Bob can use a random subset of the transmitted quantum states for testing purposes, and detect the error rate on this subset, and thereby infer how strongly Eve has attacked the whole block. The underlying result and assumption here is that if the error she causes is below a threshold, then the mutual information $I(S;E)$ is correspondingly small. They proceed with key distillation only if the test errors are below a pre-specified threshold. Next, one can use classical privacy amplification schemes to show that as long as $I(S;E)$ is small enough (as implied by the disturbance), then one can make the mutual information between $E$ and a final distilled key as small as desired. These classical techniques involve the use of error correcting codes. \begin{figure} \begin{center} \input{basic_attack.eepic} \hbox{Fig.~\ref{fig:basic} Most general attack by an eavesdropper.} \end{center} \label{fig:basic} \end{figure} Thus, the derivation of an appropriate ``information vs. disturbance'' result lies at the heart of all security proofs for QKD. While it is clear what we mean by ``information'' (as defined by the quantity $I(S;E)$), we have not yet quantified and defined what we mean by ``disturbance.'' In various security proofs of QKD, researchers have adopted the following strategy: (i) In the protocol, the source $S$ outputs states chosen from the basis vectors belonging to two different bases, e.g., the $X$ and $Z$ bases. (ii) The information vs. disturbance results then refer to the information about which basis vector from one of the bases (e.g., $X$) was sent, and the disturbance caused in the second basis (e.g., $Z$). That is, Eve cannot simultaneously get significant information about which basis vector was sent in one basis, without causing errors in Bob's inference about which basis vector was sent in the other basis. Thus for testing purposes, one could use the states in one of the bases and the observed error rate will put a bound on the information that Eve has about which basis vectors were sent in the other bases. Specifically, Lo and Chau\cite{LC98} use an EPR based scheme and show (using the Holevo bound, equation \ref{eq:holevo}) that if the fidelity between Alice and Bob is greater than $1-\delta$ for $R$ singlets, then Eve's information about the final key is bounded by: \begin{eqnarray*} I &\le& -(1-\delta)\log(1-\delta) - \delta \log \frac{\delta}{2^{2R}-1} \end{eqnarray*} The above information-vs-disturbance result is used directly by Shor and Preskill in their quantum code based proof\cite{sp00}. Rather than deal with the fidelity of singlets, Biham et al.\cite{bbbmr} use trace-norm techniques to show that Eve's information on each bit is bounded by the square root of the probability that she would cause more than $\hat{v}/2$ errors had Alice sent the bits in the opposite basis (X replaced with Z and vice-versa), where $\hat{v}$ is the minimum distance between the privacy amplification code and the error correction code. The security of QKD directly depends on the above results: Eve's information is always bounded once Alice and Bob verify that their states have not been greatly disturbed. In this section, we generalize such information vs. disturbance bounds for states in any dimension $D$, and {\em also provide a natural choice of the bases} to be used in these results.
At this point it is useful to define the concept of Mutually Unbiased Bases: \noindent {\bf Definition.} Let $B_1=\left\{\ket{\varphi_1},\ldots,\ket{\varphi_D}\right\}$ and $B_2=\left\{\ket{\psi_1},\ldots,\ket{\psi_D}\right\}$ be two orthonormal bases in the $D$-dimensional state space. They are said to be {\bf mutually unbiased bases (MUB)} if and only if $\left\vert\braket{\varphi_i}{\psi_j}\right\vert=\frac{1}{\sqrt{D}}$, for every $i,j=1,\ldots,D$. A set $\left\{ {\cal B}_1,\ldots,{\cal B}_m\right\}$ of orthonormal bases in $\cc^D$ is called a {\em set of mutually unbiased bases} (a set of MUB) if each pair of bases ${\cal B}_i$ and ${\cal B}_j$ is mutually unbiased. Thus, given two MUB $B_1$ and $B_2$, we get $B_1B_2^{\dag}= H$, where $|H_{i,j}| =1/\sqrt{D}$, and $H$ is a unitary matrix. Hence, {\em $H$ can be regarded as a generalized Hadamard matrix} in dimension $D$, and the two bases are related by the transformation $B_1=HB_2$. We next derive a general theorem which shows that whatever the dimension, if Eve gets information in one basis, she disturbs \emph{all} bases which are MUBs of that basis. Since two MUB are related by a generalized Hadamard transformation, the result in Theorem 1 implies that retrieving information in one basis causes disturbances in {\em all the conjugate bases}. Finally, it should be emphasized that we only consider a single $D$-dimensional state. This is not a limitation: any product of quantum states can be thought of as a state in a larger dimensional space. Thus, if we consider standard BB84, $n$ 2-dimensional systems (qubits) are sent. In our approach we would consider that as one $2^n$-dimensional system. The same applies for any product of quantum states. These results generalize those presented in \cite{boykin02}, which proved the following theorem only for dimension $2^n$ and for one pair of bases (the standard $Z$ and $X$ bases). \begin{theorem} \label{thm:ivd} If Alice sends a randomly selected element from a $D$-dimensional basis (represented by the random variable $A$) to Bob, the information Eve's measurement (represented by $E$) has about Alice's state is bounded by the square root of the probability that Eve would have caused errors in any MUB with respect to Alice's basis: \begin{eqnarray*} I(A;E)&\le& 4\log D\sqrt{P_{\widetilde{e}}}\ . \end{eqnarray*} \end{theorem} {\textbf Proof.} We will use lemmas \ref{lemm:trace_out_bound} and \ref{lemm:trace_bound_mixed_pure} and corollary \ref{co:sd_equal_p}. Starting from corollary \ref{co:sd_equal_p} we see that: $I(A;E)\le \log D \sum_i \frac{1}{D}Tr|\rho_i - \rho|$. Our approach will be to bound this by introducing a purification\footnote{see definition \ref{def:purification}} for $\rho_i$ (the state that Eve holds when Alice sends $i$). Using the purification and lemma \ref{lemm:trace_out_bound} we can bound the original trace norm distance. To attack the state sent to Bob, Eve attaches a probe in a fixed state (say the $\ket{0}$ state) and applies a unitary operator. She then passes Bob his part, and does some generalized measurement on what she still holds. We can characterize this formally: \begin{eqnarray*} \ket{0}_{E}\ket{i}_A\stackrel{U}{\rightarrow}\sum_j \ket{E_{i,j}}\ket{j} \end{eqnarray*} We represent the MUB as: \begin{eqnarray*} \ket{\widetilde{i}}&\equiv& \sum_j H_{ji}\ket{j} \end{eqnarray*} with $H$ being a generalized Hadamard matrix on this $D$-dimensional space: $|H_{ji}|=\frac{1}{\sqrt{D}}$.
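A concrete example (for illustration only; the argument below uses nothing about $H$ beyond unitarity and $|H_{ji}| = 1/\sqrt{D}$) is the discrete Fourier matrix \begin{equation*} H_{jk} = \frac{1}{\sqrt{D}}\, e^{2\pi i\, jk/D}\,, \qquad j,k = 0,\ldots,D-1\,, \end{equation*} which maps the computational basis to a basis mutually unbiased with respect to it; for $D=2$ this reduces to the familiar Hadamard transform relating the $Z$ and $X$ bases.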
Applying this to Eve's attack, we obtain: \begin{eqnarray*} \ket{0}_{E}\ket{\widetilde{i}}_A\stackrel{U}{\rightarrow}\sum_j \ket{\widetilde{E_{i,j}}}\ket{\widetilde{j}} \end{eqnarray*} where $\ket{\widetilde{E_{i,j}}}\equiv \sum_{i',j'}H_{i' i}H^{*}_{j' j}\ket{E_{i',j'}}$. From the axioms of quantum mechanics, we know that if Alice sends $\ket{i}$ the probability that Bob will measure $\ket{j}$ is $P(j|i)=\braket{E_{i,j}}{E_{i,j}}$. Similarly, if Alice sends $\ket{\widetilde{i}}$ Bob will measure $\ket{\widetilde{j}}$ with probability $\widetilde{P}(j|i)=\braket{\widetilde{E_{i,j}}}{\widetilde{E_{i,j}}}$. We are now prepared to compute the probability that there are no errors in the MUB: \begin{eqnarray} P_0&\equiv&\sum_i p(i)\widetilde{P}(i|i)\nonumber\\ &=&\frac{1}{D}\sum_i \braket{\widetilde{E_{i,i}}}{\widetilde{E_{i,i}}}\nonumber\\ &=&\frac{1}{D}\sum_i\sum_{k,l,k',l'}H^{*}_{l i} H_{k i} H_{l' i} H^{*}_{k' i} \braket{E_{l,k}}{E_{l',k'}}\nonumber\\ &=&\frac{1}{D}\sum_{k,l,k',l'}\braket{E_{l,k}}{E_{l',k'}} \sum_i H^{*}_{l i} H_{k i} H_{l' i} H^{*}_{k' i}\label{eq:p0_sum} \end{eqnarray} When Eve's states are considered without Bob, her state will look like $\rho_i = \sum_j \density{E_{i,j}}{E_{i,j}}$. Now we will define a purification for Eve's states that will allow us to compute a bound on $P_0$. We assume that Eve holds \begin{equation} \label{eq:phi_i} \ket{\phi_i}\equiv\sum_j\ket{E_{i,j}}_1\ket{\psi^i_j}_2\nonumber \end{equation} where $\ket{\psi^i_j}$ is an orthonormal basis for each choice of $i$. Due to the orthonormality of $\ket{\psi^i_j}$, $\ket{\phi_i}$ is a purification of $\rho_i$ because $Tr_2\density{\phi_i}{\phi_i} = \rho_i$. We also define the generalized Hadamard transform of these states: \begin{equation} \ket{\widetilde{\phi_j}}\equiv\sum_i H^{*}_{i j}\ket{\phi_i} \ . \end{equation} The Hadamard transform is unitary, so see that $\ket{\phi_i} = \sum_j H_{i j}\ket{\widetilde{\phi_j}}$. It should be noted that our purification $\ket{\phi_i}$ for Eve's states is not orthonormal or normalized. In fact, this is a property of which we will make use in order to get a bound. We now calculate the norm of the $\ket{\widetilde{\phi_0}}$ and see that with the proper choice of $\ket{\psi^i_j}$ that it is proportional to the probability that there was no error, $P_0$: \begin{eqnarray} \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}}&=& \sum_{l,l'}H_{l 0} H^{*}_{l' 0}\braket{\phi_l}{\phi_{l'}}\nonumber\\ &=&\sum_{l,l'}\sum_{k,k'}H_{l 0} H^{*}_{l' 0} \braket{E_{l,k}}{E_{l',k'}}\braket{\psi^{l}_k}{\psi^{l'}_{k'}} \label{eq:phi_bar_0} \end{eqnarray} At this point we will parameterize $\ket{\psi^l_k}$: \begin{eqnarray*} \ket{\psi^l_k} &=& \sum_i \alpha_{l k i} \ket{i} \end{eqnarray*} with any choice of $\alpha_{l k i}$ so long as $\braket{\psi^l_{k'}}{\psi^l_k}=\delta_{k' k}$. In order to match equation \ref{eq:p0_sum} with equation \ref{eq:phi_bar_0}, we choose \begin{equation} \alpha_{l k i} = \frac{H_{l i} H^{*}_{k i}}{H^{*}_{l 0}} \ . \end{equation} To see that our choice of $\alpha_{l k i}$ is valid, recall that $|H_{i j}|^2 = 1/D$ and simply compute \begin{eqnarray*} \braket{\psi^l_{k'}}{\psi^l_{k}} &=& \sum_i \alpha^{*}_{l k' i} \alpha_{l k i}\\ &=&\frac{1}{|H_{l 0}|^2} \sum_i |H_{l i}|^2 H_{k' i} H^{*}_{k i}\\ &=& \sum_i H_{k' i} H^{*}_{k i}\\ &=& \delta_{k' k} \end{eqnarray*} which is what we need to show to make equation \ref{eq:phi_i} a valid purification. 
With the above choice, equation \ref{eq:phi_bar_0} becomes \begin{eqnarray*} \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}}&=&\sum_{l,l'}\sum_{k,k'}H_{l 0} H^{*}_{l' 0} \braket{E_{l,k}}{E_{l',k'}}\braket{\psi^{l}_k}{\psi^{l'}_{k'}}\\ &=&\sum_{k,l,k',l'}\braket{E_{l,k}}{E_{l',k'}} \sum_i H^{*}_{l i} H_{k i} H_{l' i} H^{*}_{k' i}\\ &=& D P_0 \ . \end{eqnarray*} Thus we have related the norm of $\ket{\widetilde{\phi_0}}$ to the probability that there are no errors \footnote{If the Hadamard transform is isomorphic to a group such that $H_{i k}H_{j k} = H_{i+j,k}\frac{1}{\sqrt{D}}$ and $H_{i k}H^{*}_{j k} = H_{i-j,k}\frac{1}{\sqrt{D}}$ we can show that the probability of an error $e$ in the Hadamard transformed basis (i.e. Alice sends $i$ but Bob receives $i+e$ averaged over all $i$), is $P_e = \braket{\widetilde{\phi_e}}{\widetilde{\phi_e}}/D$. In this case, $\ket{\psi^i_j}=\ket{\widetilde{i-j}}$. Indeed, this is the case for the standard Sylvester type Hadamard matrices. } in the MUB. Define ${\rho_i}'\equiv \density{\phi_i}{\phi_i}$ and ${\rho}'\equiv \frac{1}{D}\sum_i{\rho_i}'$. Now we compute $\bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}}$: \begin{eqnarray*} \bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}} &=&\sum_i \frac{1}{D}|\braket{\widetilde{\phi_0}}{\phi_i}|^2\\ &=&\sum_i \frac{1}{D}|\bra{\widetilde{\phi_0}}\sum_j H_{i j}\ket{\widetilde{\phi_j}}|^2 \end{eqnarray*} Since $|H^{*}_{i k}|^2 D = 1$, we can rewrite the above as: \begin{eqnarray*} \bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}} &=& D \sum_i \frac{1}{D} |H^{*}_{i k}\bra{\widetilde{\phi_0}}\sum_j H_{i j}\ket{\widetilde{\phi_j}}|^2 \end{eqnarray*} Since $f(x)=|x|^2$ is convex, $|\sum_i p_i x_i|^2 \le \sum_i p_i |x_i|^2$. \begin{eqnarray*} \bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}} &=& D \sum_i \frac{1}{D} |H^{*}_{i k}\bra{\widetilde{\phi_0}}\sum_j H_{i j}\ket{\widetilde{\phi_j}}|^2\\ &\ge& D | \sum_i \frac{1}{D} H^{*}_{i k}\bra{\widetilde{\phi_0}}\sum_j H_{i j}\ket{\widetilde{\phi_j}} |^2\\ &=& D | \frac{1}{D} \bra{\widetilde{\phi_0}}\sum_j\sum_i H^{*}_{i k} H_{i j}\ket{\widetilde{\phi_j}} |^2\\ &=& D | \frac{1}{D} \bra{\widetilde{\phi_0}}\sum_j\delta_{k j}\ket{\widetilde{\phi_j}} |^2\\ &=& D | \frac{1}{D} \braket{\widetilde{\phi_0}}{\widetilde{\phi_k}} |^2\\ &=& \frac{1}{D} | \braket{\widetilde{\phi_0}}{\widetilde{\phi_k}} |^2 \end{eqnarray*} We can set $k$ to any value we like, in particular $k=0$. We have previously shown that $\braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} = D P_0$, putting this together: \begin{eqnarray*} \bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}} &\ge& \frac{1}{D}|\braket{\widetilde{\phi_0}}{\widetilde{\phi_0}}|^2\\ &=& \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} P_0\\ \frac{ \bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } &\ge& P_0 \end{eqnarray*} We are now ready to prove the theorem. Since $Tr_2(\rho_i')=\rho_i$ and $Tr_2(\rho')=\rho$ we may apply lemma \ref{lemm:trace_out_bound}. We will see that we may introduce an intermediate pure state to make the bounding of the information easier. The pure state we will use is $\nproj{\widetilde{\phi_0}}$.
Starting with corollary \ref{co:sd_equal_p}: \begin{eqnarray*} I(A;E)&\le& \log D \sum_i \frac{1}{D}|\rho_i - \rho|\\ &\le& \log D \sum_i \frac{1}{D}|{\rho_i}' - {\rho}'|\\ &=& \log D \sum_i \frac{1}{D}|{\rho_i}' - \nproj{\widetilde{\phi_0}} + \nproj{\widetilde{\phi_0}} - {\rho}'|\\ &\le& \log D \sum_i \frac{1}{D}(|{\rho_i}' - \nproj{\widetilde{\phi_0}}| + |\nproj{\widetilde{\phi_0}} - {\rho}'|)\\ &\le& \log D \sum_i \frac{1}{D} \left(2\sqrt{1 - \frac{\bra{\widetilde{\phi_0}}{\rho_i}'\ket{\widetilde{\phi_0}}} {\braket{\widetilde{\phi_0}}{\widetilde{\phi_0}}} } + 2\sqrt{1 - \frac{\bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}}} {\braket{\widetilde{\phi_0}}{\widetilde{\phi_0}}} } \right)\\ &=& 2\log D \left(\sqrt{1 - \frac{\bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}}} {\braket{\widetilde{\phi_0}}{\widetilde{\phi_0}}} } + \sum_i \frac{1}{D} \sqrt{1 - \frac{\bra{\widetilde{\phi_0}}{\rho_i}'\ket{\widetilde{\phi_0}}} {\braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } }\right)\\ &\le& 2\log D \left(\sqrt{1 - \frac{ \bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } } + \sqrt{1 - \frac{ \bra{\widetilde{\phi_0}}(\sum_i \frac{1}{D}{\rho_i}')\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } }\right)\\ &=& 4\log D \sqrt{1 - \frac{\bra{\widetilde{\phi_0}}{\rho}'\ket{\widetilde{\phi_0}}} { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } }\\ &\le & 4\log D\sqrt{ 1 - P_0} \end{eqnarray*} Where $1-P_0 = P_{ \widetilde{ e } }$ is the probability that there is an error in the MUB, which proves the theorem. \mbox{\rule{1.6mm}{4.3mm}} The previous theorem is what gives security to quantum key distribution schemes; however, we have only shown that QKD schemes are secure if the errors caused in any MUB are extremely small. Using quantum coding based approaches\cite{sp00}, we believe it is possible to use the above theorem to get a simple unconditional security proof that applies in dimension $D$. In the following section, we will apply these same techniques to show that Eve also cannot learn functions of messages. \section{Security of Functions of Messages} \label{sec:sfunc} According to theorem \ref{thm:ivd}, if the fidelity Bob would have had in any MUB is exponentially close to unity, then Eve's information is exponentially low about which of the basis vectors in the chosen basis was sent. We will refer to the identifier of the basis vector sent by Alice as the {\em key}, and {\em Alice can use the key to encrypt a classical message}. For example, after sending a basis vector $\left| k\right\rangle$ to Bob, Alice could publicly announce she sent basis vector $\left| k\oplus m \right\rangle$, where $\oplus$ is the bitwise exclusive or (XOR) operation. Bob could then recover the encrypted message $m$. The above mentioned information vs. disturbance result does not address the question of what information Eve might get about a \emph{function} of a message encrypted with that key. Suppose Eve only wants to know if the message has a particular value, i.e., she wants to learn the indicator function: $f(m)=1$ if $m=m_1$, else $f(m)=0$. This function only has exponentially little information about the message itself. To see this, suppose each of $d$ messages are equally likely, then \begin{eqnarray*} H(M)&=&\log d\\ H(f(M))&=&\frac{1}{d}\log d - (1-\frac{1}{d})\log(1-\frac{1}{d})\\ H(f(M)|M)&=& 0\\ I(f(M);M) &=& H(f(M)) \ . 
\end{eqnarray*} If $d$ is large, then $H(f(M))\approx \frac{1}{d}\log d$, but $d = 2^{H(M)}$, so $H(f(M)) \approx 2^{-H(M)} H(M)$. Hence, in this case, Eve only has to learn exponentially little information. Since QKD security proofs\cite{Mayers96,LC98,bbbmr,sp00,gl03} only give exponentially strong security, it is not clear a priori that QKD will be sufficient to prevent Eve from learning any function of the message. The next theorem will show that Eve must cause errors to {\em learn any function of the message}, even if it carries exponentially little information about the message itself\footnote{It should be noted that this result is \emph{not} true for the key itself. If Eve only wants to learn if the key was a particular value $k_0$, she may do so without disturbing the state very much.}. Throughout this section we work with some group operator $+$ and all operations are in that group. In dimension $2^n$ the $+$ operator will usually be bitwise exclusive or (XOR). \begin{theorem} \label{thm:ivd_fm} Alice sends the $D$-dimensional state $\ket{k}$ to Bob, with $k$ chosen uniformly at random, and after Bob has received the state Alice announces $a=m + k$ (represented by the random variable $A$). Denote by $f(M)$ the function $f$ of the random variable $M$, and by $f(K)$ the function $f$ of the random variable $K$. The information Eve can get about any function of $m$, $f(m)$, is bounded by the square root of the probability that Eve would have caused errors in any MUB: \begin{eqnarray*} I(f(M);E|A)&\le& H(f(K))4\sqrt{P_{\widetilde{e}}} \end{eqnarray*} \end{theorem} {\textbf Proof.} This proof will follow closely the proof of theorem \ref{thm:ivd} and use the same tools. If $a = m + k$, then $f(m)=f(a - k)$. The state consistent with a function value $i$ is: \begin{eqnarray*} {\sigma_i}^a &\equiv& \frac{1}{q_i} \sum_{k:f(a - k)=i} p_k \rho_k \end{eqnarray*} with $q_i \equiv \sum_{k:f(a - k)=i} p_k$. Note that since $p_k=\frac{1}{d}$, the probability of an announcement $a=m + k$ is also $\frac{1}{d}$. As such, $q_i$ does not depend on $m$ and is only related to the number of inputs to the function $f$ which have a given output.
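As a concrete illustration (our own remark, not part of the original argument): for the indicator function considered above, $q_1 = 1/d$ and $q_0 = 1 - 1/d$, for every announcement $a$. More generally, since $k$ is uniform, $q_i = \Pr[f(K)=i]$, so the entropy $H(Q)$ of the distribution $\{q_i\}$ that appears at the end of the proof below is precisely $H(f(K))$.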
The averaged state is: \begin{eqnarray*} \sigma^a &\equiv& \sum_i q_i {\sigma_i}^a\\ &=&\sum_i\sum_{k:f(a - k)=i}p_k\rho_k \end{eqnarray*} Since each input has one and only one output and $p_k = \frac{1}{d}$: \begin{eqnarray*} \sigma^a &=&\sum_k\frac{1}{d}\rho_k=\rho \end{eqnarray*} The definition of mutual information\cite{CoverThomas} means that: \begin{eqnarray*} I(f(M);E|A)&=&\sum_a p_a I(f(M);E|A=a) \end{eqnarray*} Using lemma \ref{lemm:nbitSD} \begin{eqnarray*} \lefteqn{\sum_a p_a I(f(M);E|A=a)}\\ &\le& -\sum_a p_a \sum_i q_i \log q_i |{\sigma_i}^a - {\sigma}^a|\\ &=& -\sum_i q_i \log q_i \sum_a p_a | {\sigma_i}^a - \rho|\\ &=& -\sum_i q_i \log q_i \sum_a p_a | {\sigma_i}^a - \nproj{\widetilde{\phi_0}} + \nproj{\widetilde{\phi_0}} - \rho|\\ &\le& -\sum_i q_i \log q_i \sum_a p_a \left(| {\sigma_i}^a - \nproj{\widetilde{\phi_0}}| + |\nproj{\widetilde{\phi_0}} - \rho|\right)\\ &=& -\sum_i q_i \log q_i\sum_a p_a \left(2 \sqrt{ 1 - \frac{ \bra{\widetilde{\phi_0}}{\sigma_i}^a\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } } + 2 \sqrt{ 1 - \frac{ \bra{\widetilde{\phi_0}}\rho\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } }\right)\\ &\le& -\sum_i q_i \log q_i \left(2 \sqrt{ 1 - \frac{ \bra{\widetilde{\phi_0}}\sum_a p_a {\sigma_i}^a\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } } + 2 \sqrt{ 1 - \frac{ \bra{\widetilde{\phi_0}}\rho\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } }\right) \end{eqnarray*} We can simplify the quantity $\sum_a p_a {\sigma_i}^a$ by remembering that $p_a = 1/d$ and $q_i$ is independent of $a$: \begin{eqnarray*} \sum_a \frac{1}{d}{\sigma_i}^a &=&\sum_a \frac{1}{d} \frac{\sum_{k:f(a - k)=i}\frac{1}{d}\rho_k}{q_i}\\ &=&\frac{1}{q_i}\sum_a \frac{1}{d}\sum_{m:f(m)=i} \frac{1}{d}\rho_{a + m}\\ &=&\frac{1}{q_i}\sum_{m:f(m)=i}\frac{1}{d}\sum_a \frac{1}{d}\rho_{a + m} \end{eqnarray*} In the last sum, we sum over all $a$ with equal weight; hence, the $m$ dependence disappears: \begin{eqnarray*} \sum_a \frac{1}{d}{\sigma_i}^a &=& \frac{1}{q_i}\sum_{m:f(m)=i}\frac{1}{d}\sum_a \frac{1}{d}\rho_{a + m}\\ &=& \frac{1}{q_i}(\sum_{m:f(m)=i}\frac{1}{d}) \rho\\ &=& \rho \end{eqnarray*} Putting this back into the information bound: \begin{eqnarray*} \lefteqn{\sum_a p_a I(f(M);E|A=a)}\\ &\le& -\sum_i q_i \log q_i \left(2 \sqrt{ 1 - \frac{ \bra{\widetilde{\phi_0}}\sum_a p_a {\sigma_i}^a\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } } + 2 \sqrt{ 1 - \frac{ \bra{\widetilde{\phi_0}}\rho\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } }\right)\\ &=&-\sum_i q_i \log q_i (4 \sqrt{ 1 - \frac{ \bra{\widetilde{\phi_0}}\rho\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } })\\ &=& 4 H(Q) \sqrt{ 1 - \frac{ \bra{\widetilde{\phi_0}}\rho\ket{\widetilde{\phi_0}} } { \braket{\widetilde{\phi_0}}{\widetilde{\phi_0}} } }\\ &\le & H(f(K))4\sqrt{ P_{ \widetilde{ e } } } \end{eqnarray*} Which proves the result. \mbox{\rule{1.6mm}{4.3mm}} \section{Concluding Remarks} By developing bounds on entropy, we are able to bound the amount of information that measurements can get from a quantum source. Modeling eavesdropping in quantum key distribution as a quantum source, we are able to bound information that an eavesdropper can get. Since this bound is a function of the errors that would be caused in any MUB, Alice and Bob can use their measurements to estimate this figure. 
Therefore, Alice and Bob can bound the information that Eve has about the information they share. In addition to showing the security of this shared information, we show that any function of messages encrypted with it is secure. This is a very strong statement about the robustness of quantum security. \nonumsection{References} \bibliographystyle{unsrt}
\titlespacing\section{0pt}{12pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsection{0pt}{10pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsubsection{0pt}{8pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \title{Automatic normal orientation in point clouds of building interiors} \usepackage{authblk} \renewcommand*{\Authfont}{\bfseries} \author[1]{Sebastian Ochmann} \author[1]{Reinhard Klein} \affil[1]{Institute of Computer Science II, University of Bonn, Germany} \begin{document} \twocolumn[ \begin{@twocolumnfalse} \maketitle \begin{abstract} \input{abstract} \end{abstract} \vspace{0.35cm} \end{@twocolumnfalse} ] \input{figures} \input{tables} \input{sec_introduction} \input{sec_relatedwork} \input{sec_method} \input{sec_evaluation} \input{sec_conclusion} \input{sec_acknowledgments} \normalsize \section*{Acknowledgments} { This work was supported by the DFG projects KL 1142/11-1 (DFG Research Unit FOR 2535 Anticipating Human Behavior) and KL 1142/9-2 (DFG Research Unit FOR 1505 Mapping on Demand). } \section{Conclusion and future work} \label{sec:conclusion} We have presented a fast and fully automatic approach for orienting normals of the main building structures in multi-room, multi-story indoor point cloud measurements. The input to our algorithm consists of unstructured point clouds without any additional information such as scanner positions. Using a path tracing approach, we first classify points as interior, exterior, and outside surfaces, and estimate an initial orientation of all non-outside surfaces. In a second phase, we correct the orientation of fa\c{c}ade parts which may be incorrectly oriented in the first phase. Additionally, we perform a voting step for consistently orienting normals within surfaces after each phase. We evaluated our approach on multiple real-world datasets with respect to orientation correctness and runtime. The resulting, automatically estimated orientation information can greatly facilitate or enable tasks such as visualization or reconstruction of building models which rely on correctly oriented surface normals. While the core of our algorithm provides fast processing of even larger datasets, the overall runtime is strongly dominated by the plane detection. One direction for future work is the evaluation of either different plane detection methods or alternatives for fast patch generation. Also, the fa\c{c}ade correction step sometimes incorrectly interprets surfaces as boundaries of rooms and thus performs incorrect flipping of already correct normals. A more sophisticated interpretation of the surfaces may thus be a worthwhile direction for future research. \section{Evaluation} \label{sec:evaluation} \tblEvaluation \figBounces \figClassification \figFacade \figFacadeFail To test the correctness and runtime of our approach, we applied it to multiple real-world datasets with ground truth normal orientations. Specifically, the datasets consist of multiple registered scans with known scanner positions for each scan. This allows us to flip normals towards the respective scanner positions to obtain the correct orientations. In order to test our approach, we ignore the known orientations and scanner positions, and then compare our estimated orientations with the ground truth. We also measure the runtime of the main processing steps. Table \ref{tbl:evaluation} summarizes the results of our experiments, which are further discussed below.
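For illustration, flipping a normal towards the scanner that observed a point amounts to a sign test on a dot product; a minimal Python sketch (the function and variable names here are only illustrative and not part of our implementation) is:
\begin{verbatim}
import numpy as np

def orient_towards_scanner(points, normals, scanner_pos):
    """Flip unoriented normals so that they point towards the scanner.

    points      -- (N, 3) array of point positions
    normals     -- (N, 3) array of unit normals with arbitrary sign
    scanner_pos -- (3,)   position of the scanner that acquired the points
    """
    to_scanner = scanner_pos - points                      # point -> scanner
    flip = np.einsum('ij,ij->i', normals, to_scanner) < 0  # facing away?
    normals[flip] *= -1.0
    return normals
\end{verbatim}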
\subsection{Input data, planes, and patches} The first part of Table \ref{tbl:evaluation} shows general statistics about the datasets such as number of points and scans. Note that information about individual scans and scanner positions is only used for generating ground truth normal orientations. It also lists the percentage of the total points which are part of detected planes (and thus belong to patches), and the percentage of the total points which are on patches that are \emph{not} classified as outside area (i.e.\ $C(p) \neq out$). Note that it is exactly this set of non-outside points for which our algorithm estimates normal orientations, and that the correctness is measured with respect to this set of points. The number of detected planes and the runtime of plane detection using the RANSAC implementation of the CGAL library \cite{CGAL-2018-Shapes} is also listed in the table. \subsection{Correctness} The next part of Table \ref{tbl:evaluation} shows the percentage of points on non-outside patches which have been correctly oriented by our algorithm with respect to the ground truth orientations determined using given correspondences of points to known scanner positions. We list the correctness after different phases of our algorithm. Phase 1 is after the initial orientation by path tracing (Section \ref{subsec:pathtracing}), before (1A) and after (1B) making normals consistent within surfaces. Phase 2 is after the fa\c{c}ade correction step (Section \ref{subsec:facade}), again before (2A) and after (2B) ensuring consistency within surfaces. For each dataset, the images at the bottom of the table show horizontal cross sections of the point clouds for the upper and lower stories. For each story, the upper image shows the unlabeled input point cloud. The middle image shows the classification into interior surfaces (green), exterior (blue), outside (yellow), and points that are not on patches (gray). The lower image shows the final correctness (after phase 2B) of the normal orientation with correctly oriented points (green), incorrectly oriented points (red), outside (yellow), and not on patches (gray). Figure \ref{fig:bounces} shows the effect of allowing multiple bounces in our path tracing approach. The images show the orientation correctness after the path tracing phase and before applying surface consistency. Increasing the number of ray bounces helps to correctly identify the orientation of patches which are strongly occluded by surrounding rooms. An overview of the classification of outside area is shown in Figure \ref{fig:classification} (a) which shows large areas scanned through windows or from balconies of the building. Outside area is colored yellow. The detail view in Figure \ref{fig:classification} (b) shows a cross section of the same building with the more fine grained classification with the same color scheme as in Table \ref{tbl:evaluation}. An example for fa\c{c}ade patches which are initially oriented incorrectly by the path tracing phase is shown in Figure \ref{fig:facade} (a). After applying the correction as described in Section \ref{subsec:facade}, the normals are oriented correctly (Figure \ref{fig:facade} b). A failure case of the fa\c{c}ade correction step is shown in Figure \ref{fig:facade_fail}. The surface highlighted red was incorrectly oriented since the opened, almost parallel door next to it was interpreted as a wall surface. 
Note that this example is taken from Dataset 4 which explains the decreased final correctness as shown in Table \ref{tbl:evaluation}. \subsection{Runtime} Below the correctness percentages, Table \ref{tbl:evaluation} also lists the runtime of the individual phases of our algorithm as described above. Clearly, the path tracing phase 1A takes more time than the single-bounce fa\c{c}ade correction phase 2A. Also, the surface consistency correction 1B and 2B have similar runtimes since they are the same operation performed after phases 1A and 2A, respectively. Even in case of the largest dataset (Dataset 4), the total runtime of the core normal orientation approach takes well below 10 seconds. We are using the NVIDIA OptiX framework \cite{OptiX} for GPU-accelerated ray tracing against the set of patches which makes the actual ray tracing part a minor part of the total runtime requirements. By far the largest contributor to the overall runtime of our approach is the plane detection for which we currently use a RANSAC implementation in the CGAL library. \section{Introduction} \label{sec:introduction} For many applications in computer graphics and related domains, surface normals are an important property of 3D point cloud or mesh data. While normal \emph{directions} can usually be estimated sufficiently well by analyzing local surface properties using e.g.\ principal component analysis (PCA) in case of point clouds, automatically determining the correct normal \emph{orientation}, i.e.\ the sign of the normal vectors, generally is a much harder problem. In particular for mesh data, there exists a wide variety of approaches based on principles such as voting, visibility, propagation, and optimization. Since point clouds are increasingly used as a means for representing various kinds of objects and scenes in fields like architecture, design, archaeology, and cultural heritage, methods working directly on point clouds have also received attention. A particularly important and discriminating aspect of any method is the severity of the assumptions made on the input and the particular geometry represented by the data. Regarding the input data, the assumptions range from very restrictive such as watertight, connected meshes, to unconnected polygon soups, possibly with missing parts. Point clouds pose additional challenges for certain kinds of methods based on connectivity for propagation, or surfaces for performing ray casting against, since such information is not directly available from the data. With respect to the class of the underlying object or scene, some methods make the assumption that the object itself is a closed 2-manifold. While this assumption simplifies the task of distinguishing between inside and outside space, many kinds of larger-scale datasets such as 3D urban environments do not fulfill this requirement. Our work targets the challenging task of automatically determining normal orientations in completely unstructured 3D point cloud datasets of building interiors with multiple stories and rooms. We are specifically interested in the main structure of the building consisting of floor, ceiling, and wall surfaces. This information is an important prerequisite for e.g.\ reconstruction tasks aiming at automatic generation of higher-level 3D models from point cloud data. Clearly, knowledge about correct surface orientation in previously unstructured data greatly helps to distinguish between room interior, interior of wall volumes, and outside area. 
Point cloud scans of building interiors, possibly including parts of exterior fa\c{c}ade and parts of the outside area scanned through windows, pose two particular challenges. First, such datasets usually consist of millions of points and cover a relatively large area which requires efficient means of processing them. Second, the constellation of rooms within a building can be quite complex, yielding a much more intricate surface topology than 2-manifolds. The proposed method for automatically orienting normals in point clouds of building interiors combines different ideas to provide efficient processing of real-world scans. We first simplify the scene by detecting planes in the point cloud and subsequently working on surface patches instead of individual points. One advantage of working on patches instead of individual points is the drastically reduced computational complexity. In addition, the surface representation enables us to employ a specifically tailored path tracing approach to estimate which side of each patch is probably room interior, wall, or outside area. While visibility information is exploited by several algorithms, our method not only takes direct visibility into account but also higher-order visibility through multiple ray bounces. Using this initial, per-patch estimation, we then vote for a global orientation for each surface to increase the robustness of the estimation. Finally, the determined surface orientations are used to flip the normal orientations of the points belonging to the respective surface. Our approach is evaluated on multiple real-world datasets for which ground truth normal orientations for comparison are available by means of known scanner positions. In summary, the contribution of our approach is a fast and fully automatic normal orientation estimation for the challenging scenario of indoor building scans without strong assumptions on the input data. The results of our method can greatly facilitate tasks such as reconstruction of 3D models from point clouds which require knowledge about the orientation of surfaces of the main building structure such as floors, ceilings, and walls. \section{Method} \label{sec:method} The input of our approach is a set of points in $\mathbb{R}^3$ mainly representing the interior of a building, possibly with some parts of exterior fa\c{c}ade and parts of outside area. If (unoriented) normals are not given in the data, they are estimated by means of local principal component analysis (PCA) for each point. \subsection{Plane detection and patch generation} We first detect planes in the point cloud data to obtain a simplified and more structured representation of the scene. Detection of primitive shapes in point clouds is a well studied problem and any reasonable method can be applied. We use the CGAL implementation \cite{CGAL-2018-Shapes} of the random sample consensus (RANSAC) method by Schnabel et al.\ \cite{Schnabel-2007-Primitives} for its efficiency and quality of the resulting shapes. The rationale behind using planes is that the main structure of buildings can usually be represented well in a piecewise planar manner. Note that other shapes such as spheres or cylinders are also supported by the detection algorithm and could in principle also be used for representing e.g.\ columns or curved walls. 
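For concreteness, the per-point estimation of unoriented normal directions by local PCA mentioned at the beginning of this section can be sketched as follows (a minimal Python illustration; the neighborhood size and the use of a k-d tree are illustrative choices, not prescribed by our method):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def estimate_unoriented_normals(points, k=16):
    """Estimate one (unoriented) unit normal per point via local PCA."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)           # indices of k nearest neighbors
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs                   # 3x3 scatter matrix
        _, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]            # direction of smallest variance
    return normals
\end{verbatim}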
For each of the detected planes, a relatively coarse 2D \emph{occupancy bitmap}, i.e.\ a uniform grid on the surface on which each cell or pixel may have the value $0$ or $1$, represents the support of the plane by the points constituting the plane. A pixel of the bitmap has the value $1$ if and only if at least one point is located within the pixel. All pixels with value $1$ yield the set $P$ of patches which will be used in the following steps. Each patch $p \in P$ originates from an original surface (i.e.\ plane) $s_p$, has a center position $c_p \in \mathbb{R}^3$ and an initial normal $\tilde{n}_p$ with arbitrary but fixed orientation. \subsection{Orientation by path tracing} \label{subsec:pathtracing} Our first goal is to estimate an initial normal orientation for each individual patch. Specifically, given a patch $p \in P$, there are two possible orientations for its normal, $\tilde{n}_p$ and $-\tilde{n}_p$. For most points in the datasets we consider, we wish to select the one orientation which points towards the interior of a room. Conversely, the normal should point away from outside area (in case the patch is part of a surface separating room interior and outside area), and away from the interior of wall, floor or ceiling structures (in case the surface separates neighboring rooms). This classification task is formulated as a voting scheme based on path tracing. Intuitively, for each patch, we trace a number of random paths into both hemispheres for the two possible orientations. We then use the number of ray bounces as well as the path lengths to analyze two aspects. First, we classify whether patches belong to interior or exterior walls, or are located completely outside of the building. Second, we use this classification as well as the path lengths to flip the normal of each patch to the more likely correct orientation. Finally, the reoriented patch normals vote for a normal orientation of whole surfaces. We now formalize the approach. All ray intersections are tested against the set of patches $P$. Let us first consider a patch $p$ with center $c_p \in \mathbb{R}^3$ and one specific orientation $\tilde{n}_p$. We cast $k$ rays $r_i$, $i \in \{1, \dots, k\}$, each with origin $c_p$ and a direction randomly sampled within a $120^\circ$ cone directed towards $\tilde{n}_p$. In our experiments, $k = 50$. If a ray $r_i$ with direction $d_{r_i}$ intersects a patch $p'$, the ray is reflected into a sampled direction within a cone oriented towards the hemisphere of the incoming ray. The direction $n_H$ of the hemisphere is computed as \[ n_H = \begin{cases} \tilde{n}_{p'}, & \text{if } \langle \tilde{n}_{p'}, d_{r_i} \rangle < 0, \\ -\tilde{n}_{p'}, & \text{otherwise.} \end{cases} \] Note that the normal $\tilde{n}_{p'}$ of the intersected patch $p'$ used for this computation is arbitrary but fixed. In particular, the path tracing is invariant under the initial orientation of the patches. We allow up to $b = 8$ ray bounces for each initial ray $r_i$. If a ray does not hit any patch, the respective path is terminated at that point. The result is $k$ ray paths, each with up to $b$ bounces, for the considered patch $p$ and orientation $\tilde{n}_p$.
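The two geometric ingredients of this step, sampling a direction within a cone around a given axis and selecting the bounce hemisphere $n_H$, can be sketched as follows (illustrative Python; the rejection sampler and the reading of the $120^\circ$ cone as a full opening angle, i.e.\ a half-angle of $60^\circ$, are choices made for the sketch only):
\begin{verbatim}
import numpy as np

def sample_in_cone(axis, half_angle_deg=60.0, rng=np.random):
    """Sample a unit direction within a cone around the unit vector 'axis'."""
    cos_min = np.cos(np.radians(half_angle_deg))
    while True:
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)          # uniform direction on the unit sphere
        if np.dot(v, axis) >= cos_min:  # accept if it lies inside the cone
            return v

def bounce_hemisphere(patch_normal, ray_dir):
    """Return n_H: the fixed patch normal, flipped towards the incoming ray."""
    if np.dot(patch_normal, ray_dir) < 0:
        return patch_normal
    return -patch_normal
\end{verbatim}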
Let $l^{i,j}_{p, \tilde{n}_p}$ be the length of the $j$th segment along the $i$th path traced for patch $p$ and orientation $\tilde{n}_p$ (note that segments after termination of a ray are considered to have zero length). We define the accumulated length $L_{p, \tilde{n}_p}$ as \[ L_{p, \tilde{n}_p} = \sum_{i=1}^k \sum_{j=1}^b \log(1 + l^{i,j}_{p, \tilde{n}_p}). \] The rationale for taking the logarithm is to decrease the influence of particularly long segments while still distinguishing between short and medium-length segments. Furthermore, let $b_i$ be the number of bounces of the $i$th path. We consider the average number of ray bounces $B_{p, \tilde{n}_p}$ over all $k$ paths \[ B_{p, \tilde{n}_p} = \frac{1}{k} \sum_{i=1}^k b_i. \] Note that we analogously have $L_{p, -\tilde{n}_p}$ and $B_{p, -\tilde{n}_p}$ for the opposite direction. We now define a classification function $C(p) : P \to \{in, ex, out\}$ of patch $p$ into interior, exterior, or outside as \[ C(p) = \begin{cases} out & \text{if } (B_{p, \tilde{n}_p} < \tau) \text{ and } (B_{p, -\tilde{n}_p} < \tau) \\ ex & \text{if } (B_{p, \tilde{n}_p} < \tau) \text{ xor } (B_{p, -\tilde{n}_p} < \tau) \\ in & \text{otherwise,} \end{cases} \] where $\tau$ is a threshold which was empirically chosen as $4$ in our experiments. A patch $p$ with $C(p) = out$ is considered to be clutter outside of the building and subsequently ignored. A patch with $C(p) = ex$ is considered to be part of a surface separating room interior from outside area. Its corrected normal orientation $\hat{n}_p$ is set to point away from the outside area, i.e. \[ \hat{n}_p = \begin{cases} \tilde{n}_p & \text{if } B_{p, -\tilde{n}_p} < \tau \\ -\tilde{n}_p & \text{if } B_{p, \tilde{n}_p} < \tau. \end{cases} \] A patch with $C(p) = in$ is considered to be part of a surface between neighboring rooms. In this case, we assume that the orientation with the longer total path length points towards the room interior and we thus set the corrected normal orientation to \[ \hat{n}_p = \begin{cases} \tilde{n}_p & \text{if } L_{p, \tilde{n}_p} > L_{p, -\tilde{n}_p}, \\ -\tilde{n}_p & \text{otherwise.} \end{cases} \] The orientation estimation up to this point was performed separately for each patch. Assuming that all points of each of the originally detected planes share a common normal orientation, we can easily vote for an orientation using all patches belonging to a common plane. Let $s$ be one of the detected planes with arbitrarily oriented normal $\tilde{n}_s$ and let $P_s = \{p \ \vert\ s_p = s\}$ be the set of patches originating from surface $s$. For voting, we determine the value \[ \theta_s = \sum_{p \in P_s} \text{sgn}\left( \langle \hat{n}_{p}, \tilde{n}_s \rangle \right), \] where $\text{sgn}(\cdot)$ is the standard signum function, and determine the corrected surface normal $n_s$ as \[ n_s = \begin{cases} \tilde{n}_s, & \text{if } \theta_s > 0, \\ -\tilde{n}_s, & \text{otherwise.} \end{cases} \] Then all patch normals are flipped to point in the same direction as $n_s$. For simplicity, we will still call this corrected normal $\hat{n}_p$ in the following. \subsection{Correction for fa\c{c}ade parts} \label{subsec:facade} Surfaces belonging to exterior fa\c{c}ade are sometimes encountered in interior scans due to scanning through windows. 
For such patches, the above estimation may erroneously prefer the direction pointing away from the outside area since ray paths towards the outside area are terminated quickly while rays towards the exterior wall of the building generate longer paths. An example for such an erroneous estimation is shown in Figure \ref{fig:facade}. To correct the orientation in these cases, we perform a second, simpler ray casting pass as follows. For each patch $p$ with center $c_p$ and orientation $\hat{n}_p$ as estimated above, we cast $k$ rays $r_i$, $i \in \{1, \dots, k\}$, originating at $c_p$ with directions $d_i$ sampled in a cone oriented towards $\hat{n}_p$ without allowing ray bounces. Let $p'_i$ be the patch which is hit by ray $r_i$. We then consider the value of \[ \phi_p = \sum_{i=1}^k \text{sgn}\left( \langle \hat{n}_{p'_i}, d_i \rangle \right). \] If $\phi_p > 0$, the estimated orientation $\hat{n}_p$ is probably incorrect since it points towards the back side of a surface of a room interior. We thus define the corrected oriented normal $\overline{n}_p$ as \[ \overline{n}_p = \begin{cases} \hat{n}_p & \text{if } \phi_p < 0, \\ -\hat{n}_p & \text{otherwise.} \end{cases} \] The patch normals are then again used to vote for a common normal orientation within each surface in the same way as described at the end of Section \ref{subsec:pathtracing}. As a final step, the oriented normals of the patches are used to orient the normals of the original points of the point cloud which lie within the respective patch. \section{Related Work} \label{sec:relatedwork} A classical propagation-based approach for orienting normals of point sets is described by Hoppe et al.\ \cite{Hoppe_1992_Reconstruction}. It derives a consistent orientation of tangent planes for data points by means of solving an optimization problem on the Riemannian graph of the points with edge weights proportional to the normal direction deviation between neighboring points. The method is only applicable for densely sampled, closed surfaces and may fail at sharp creases. König et al.\ \cite{Koenig_2009_Consistent} base their method on the method by Hoppe et al.\ \cite{Hoppe_1992_Reconstruction} but propose a new unreliability cost for traversing the Riemannian graph based on Hermite curves. One recent point cloud based approach by Schertler et al.\ \cite{Schertler_2017_Globally} generalizes propagation as a graph-based energy minimization problem. To this end, the graph-based idea by Hoppe et al.\ \cite{Hoppe_1992_Reconstruction} is reformulated to a maximum-likelihood problem on a Markov random field. They also propose to use the streaming approach by Pajarola \cite{Pajarola_2005_Stream} to perform out-of-core processing. The volumetric approach to solid inside/outside classification of polygonal data by Murali et al.\ \cite{Murali_1997_Solid} is based on a partitioning of space into polyhedral cells on which a consistent classification is derived by optimization. Xie et al.\ \cite{Xie_2004_Noisy} segment an input point cloud into so-called mono-oriented regions through an active contour method. Subsequently, a consistent inside/outside partitioning is achieved by means of a voting algorithm. The approach by Mello et al.\ \cite{Mello_2003_InOut} constructs an adaptively subdivided tetrahedral decomposition from an input point cloud for which a consistent labeling as inside/outside over all tetrahedra is determined by means of a simulated annealing approach. 
Alliez et al.\ \cite{Alliez_2007_Voronoi} present a variational framework for combined normal direction and orientation estimation as part of their surface reconstruction approach. They first compute a tensor field using a Voronoi diagram of the input point set and derive a best-fitting isosurface by solving a generalized eigenvalue problem. Another variational approach which finds normal directions and orientations simultaneously is presented by Wang et al.\ \cite{Wang_2012_Variational}. Liu et al.\ \cite{Liu_2010_Orienting} transfer an input point cloud to a coarse triangulated mesh in order to determine normal orientations on this mesh representing the underlying topology. This information is subsequently used to orient normals on the original point set. An approach which employs stochastic ray voting is presented by Mullen et al.\ \cite{Mullen_2010_Signing}. An unsigned distance function is first estimated on a 3D Delaunay triangulation of an input point set. Initial estimates for the sign of the distance function are obtained by means of ray shooting and testing for intersections with an $\varepsilon$-band of the unsigned function which is then smoothed and propagated. Borodin et al.\ \cite{Borodin_2004_Consistent} combine a proximity- and visibility based approach to orient polygons in meshes. A connectivity graph between patches of the model is constructed in which each patch has two visibility coefficients which encode how much of the two sides of the patch is visible from outside. To achieve this, one of the proposed methods is a ray casting approach similar to ours. However, the assumption is that most of the object's surface is visible from outside the model. Takayama et al.\ \cite{Takayama_2014_Raycasting} also employ a ray casting based approach to orient facets in polygon meshes. They cast rays in both directions of facets to determine where outside space is located. For inner facets, they attempt to determine which side of the facet has more free space than the other which is similar to our idea for inner walls. Since this method may fail in cavities, they propose an alternative method based on intersection parity which is prone to modeling errors. In contrast, we employ path tracing with multiple bounces to deal with cavities in the scene. One method that implicitly considers ray paths to propagate inside/outside classifications in triangular meshes is presented by Zhou et al.\ \cite{Zhou_2008_Visibility}. Based on point samples on triangles, a weighted visibility graph is constructed between the points whose nodes are classified as inside or outside using graph cut.
\section{Introduction} In this article we study $\mathbb{R}^d$-valued \textit{Stochastic Differential Equation}s (SDE) whose dynamics are confined to a subset $\mathcal{D} \subset \mathbb{R}^d$, namely, the solution $X_t$ is repelled away from the boundary $\partial \mathcal{D}$ by a reflection mechanism defined in terms of the outward normal and a local time at the boundary. These \textit{reflected SDEs}, enable one to model an impenetrable frontier at which the process is ``constrained'' and have advanced as a rich field within the applied probability theory. They are used to model physical transport processes \cite{costantini1991diffusion}, molecular dynamics \cite{saisho1994model}, biological systems \cites{dangerfield2012modeling,niu2016modelling} and appear in mathematical finance \cite{HanHuLee2016} and stochastic control \cites{kruk2000optimal,ramasubramanian2006insurance}. Lastly, this reflection problem, the so-called \textit{Skorokhod problem} \cites{Skorokhod1961stochastic,Skorokhod1962stochastic}, has also proven particularly useful in analysing a variety of queuing and communication networks. The literature on the latter is vast, see \cites{ward2003diffusion,ramanan2003fluid} or \cites{chen2013fundamentals}. In this work, we focus on the general class of \textit{reflected McKean-Vlasov equations} \begin{equation} \label{eq:MVE} \begin{split} X_t^{i} =& X_0 +\int_0^t b(s, X_s^{i}, \mu_s)ds + \int_0^t f\ast \mu_s(X_s^{i}) ds + \int_0^t \sigma(s, X_s^{i}, \mu_s) dW_s^{i} - k_t^{i}, \\ |k^i|_t=& \int_{0}^{t} \mathbbm{1}_{ \partial\mathcal{D} }(X_s^i) d|k^i|_s, \qquad k_t^i= \int_{0}^{t} \mathbbm{1}_{ \partial\mathcal{D}}(X_s^i)\textbf{n} (X_s^i) d|k^i|_s, \qquad \mu_t(dx) = \mathbb{P}\big[ X_t^i \in dx\big], \end{split} \end{equation} where $\textbf{n}$ is a vector field on the boundary of the domain $\mathcal{D}$ in an outward normal direction, $W$ is a Brownian motion and $k$ is a bounded variation process with variation $|k|$ acting as a local time that constrains the process to the domain $\mathcal{D}$. Thus, the instant the path attains the boundary $\partial \mathcal{D}$ of the domain, $k$ increases creating a contribution that ensures the path remains inside the domain. $\mu$ is the law of the solution process $X$ and the coefficients $b$ and $f$ are locally Lipschitz over the domain $\mathcal{D}$. We denote by $f\ast\mu(\cdot)$ the convolution of a function $f$ with the measure $\mu$. The law of the above diffusion solves the nonlinear Fokker-Planck equation with a Neumann boundary condition (see also \cite{wang2021distribution}), formally \begin{equation} \label{eq:FokkerPlanckEquation} \begin{split} & \partial_t \mu_t(x) = \nabla \cdot \Big( \tfrac{1}{2} \nabla^T \cdot (\sigma \cdot \sigma^T)(t, x, \mu_t) \mu_t(x) - b(s, x, \mu_t)\mu_t(x) - f \ast \mu_t(x) \mu_t(x) \Big) \\ & \Big\langle \textbf{n}(x), \tfrac{1}{2} \nabla^T \cdot (\sigma\cdot \sigma^T) (t, x, \mu_t) \mu_t(x) - b(t, x, \mu_t) \mu_t(x) - f \ast \mu_t(x)\mu_t(x) \Big\rangle=0 \quad \forall x\in \partial \mathcal{D}. 
\end{split} \end{equation} It is widely known that McKean-Vlasov equations arise as the mean field limit of a system of interacting particles, the so-called \textit{Propagation of Chaos} (PoC): for $N\in \mathbb{N}$ and $i\in\{1, ..., N\}$, the system of equations \begin{equation} \label{eq:ParticleSystem} \begin{split} X_t^{i, N} =& X_0 + \int_0^t b( s, X_s^{i, N}, \mu_s^N ) ds + \int_0^t f\ast \mu_s^N (X_s^{i, N}) ds + \int_0^t \sigma( s, X_s^{i, N}, \mu_s^N ) dW_s^{i, N} - k_t^{i, N}, \\ |k^{i, N}|_t=& \int_{0}^{t} \mathbbm{1}_{ \partial\mathcal{D} }(X_s^{i, N}) d|k^{i, N}|_s, \qquad k_t^{i, N}= \int_{0}^{t} \mathbbm{1}_{ \partial\mathcal{D}}(X_s^{i, N}) \textbf{n} (X_s^{i, N}) d|k^{i, N}|_s, \qquad \mu_t^N = \tfrac{1}{N} \sum_{j=1}^N \delta_{X_t^{j, N}}, \end{split} \end{equation} has dynamics that converge as $N\to \infty$ to the dynamics of Equation \eqref{eq:MVE}. The problem of confining a stochastic process to a domain was first posed by Skorokhod in \cite{Skorokhod1961stochastic}. The seminal works \cite{tanaka2002stochastic}, \cite{lions1984stochastic} and \cite{saisho1987stochastic} prove that such solutions exist and are unique in the multi-dimensional case for different classes of domain. \cite{tanaka2002stochastic} works with processes on a convex domain while \cite{saisho1987stochastic} studies domains that satisfy a ``Uniform Exterior Sphere'' and ``Uniform Interior Cone'' condition but imposes more restrictive assumptions on the equation's coefficients. \cite{sznitman1984nonlinear} was the first to prove wellposedness of reflected McKean-Vlasov equations in smooth bounded domains. The above works impose strong restrictions on the coefficients, usually requiring that they are Lipschitz and bounded. We prove existence and uniqueness for a broader class of reflected McKean-Vlasov SDEs in general convex domains, crucially not requiring global Lipschitz continuity, nor bounded coefficients, nor a bounded domain. We allow for superlinear growth components in both space and in the convolution component (the measure component). Very recently, \cite{wang2021distribution} contributes new wellposedness results under singular coefficients and establishes exponential ergodicity under a variety of conditions. In this work we focus on reflections along an outward normal whenever the solution's path satisfies $X_t\in\partial\mathcal{D}$, but other types of reflections exist. \textit{Oblique reflected SDEs} are reflected SDEs where the vector field $\textbf{n}$ is not normal but oblique to the boundary. Wellposedness is studied in \cites{lions1984stochastic,anderson1976small} and in \cites{costantini1992skorohod,dupuis1993sdes} for non-smooth domains. \textit{Elastic reflections} appear in \cite{Spiliopoulos2007ReflectedAndLangevin}. A recently introduced form of reflections motivated by financial applications, see \cite{briand2018bsdes}, is the \textit{reflection in mean} where the reflection happens at the level of the distribution and is generally weaker than the classical pathwise constraint. A typical mean reflection constraint asks for the expected value (of a given function of the solution) to be non-negative, e.g.~$\mathbb{E}[h(X_t)]\geq 0$. See \cite{briand2016particles} for a particle system approximation of mean reflected SDEs and its numerics. The particle system approximations are similar to the classical McKean-Vlasov setting.
Lastly, a Large Deviation Principle for mean reflected SDE is achieved in \cite{li2018large} while the exit-time problem, in the likes of our study in Section \ref{sec:ExitTimes} below, is open. \subsubsection*{Large Deviations and Exit-times} The second part of this work focuses in obtaining a \textit{Large Deviations Principle} and the characterisation of the exit-time from a subdomain $\mathfrak{D}\subsetneq \mathcal{D}$ for the small noise limit for the reflected McKean-Vlasov equation \begin{equation} \label{eq:MVELimiting} \begin{split} X_t^\varepsilon &= X_0 + \int_0^t b(s, X_s^\varepsilon, \mu_s^\varepsilon )ds + \int_0^t f \ast \mu_s^\varepsilon (X_s^\varepsilon ) ds + \sqrt{\varepsilon}\int_0^t \sigma(s, X_s^\varepsilon, \mu_s^\varepsilon) dW_s - k_t^\varepsilon, \\ |k^{\varepsilon}|_t &= \int_{0}^{t} \mathbbm{1}_{ \partial\mathcal{D} }(X_s^{\varepsilon}) d|k^{\varepsilon}|_s, \qquad k_t^{\varepsilon}= \int_{0}^{t} \mathbbm{1}_{ \partial\mathcal{D}}(X_s^{\varepsilon}) \textbf{n} (X_s^{\varepsilon}) d|k^{\varepsilon}|_s, \qquad \mu_t^\varepsilon(dx) = \mathbb{P}\big[ X_t^\varepsilon \in dx\big]. \end{split} \end{equation} The asymptotic theory of Large Deviations Principles (LDP) \cite{DZ} quantifies the rate of convergence for the probability of rare events. First developed by Schilder in \cite{schilder1966some}, an LDP is equivalent to convergence in probability with the addition that the rate of convergence is a specific speed controlled by the rate function. Consider a drift term $b$ that has some basin of attraction and assume the noise in our system is small. Under such conditions, it is common for the system to exhibit a meta-stable behaviour. Loosely speaking, this terminology refers to when a particle is forced towards a basin of attraction and spends long periods of time there before moving to the next basin of attraction. The particle only leaves after receiving a large "kick" from its noise which in the small noise limit, i.e., as the noise vanishes, is an increasingly rare event. This property of the dynamics poses a difficulty for numerical simulations since the numerical scheme takes an impractical amount of time to observe any deviations from the basin. LDPs help by quantifying the probability of this rare event. A Freidlin–Wentzell LDP provides an estimate for the probability that the sample path of an It\^o diffusion will stray far from the mean path when the size of the driving Brownian motion is small with respect to a pathspace norm. Freidlin-Wentzell LDPs for reflected SDEs have been explored in a number of works. For bounded and Lipschitz coefficients, \cite{dupuis1987large} provides the LDP in general convex domains. For smooth domains, \cite{anderson1976small} obtains the LDP under the assumption of bounded and Lipschitz coefficients. Additional references on LDPs for reflected processes can be found in \cite{priouret1982remarques}. Close to our work is \cite{liu2020large} where large and moderate deviations for non-reflected McKean-Vlasov equations with jumps is addressed via the Dupuis-Ellis weak convergence framework \cite{dupuis2011weak}. Their comprehensive wellposedness results \cite{liu2020large}*{Proposition 5.3} are established under a uniformly Lipschitz measure assumption on the coefficients (their assumption A1 and A2) while here we allow for fully super-linear growth in both measure and space components. LDPs are a suitable language for studying the rare event of exiting from a basin of attraction. 
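For the reader's orientation, we recall the shape such a rate function takes in the simplest classical setting (the textbook non-reflected case with Lipschitz drift and constant diffusion coefficient, see \cite{DZ}; it is recalled here only for orientation and is not the rate function obtained below). For $dX_t^\varepsilon = b(X_t^\varepsilon)\,dt + \sqrt{\varepsilon}\, dW_t$ with $X_0^\varepsilon = x_0$, the family $(X^\varepsilon)_{\varepsilon>0}$ satisfies an LDP on $C_{x_0}([0,T];\mathbb{R}^d)$ with good rate function
$$
I(\phi) = \frac{1}{2} \int_0^T \big\| \dot{\phi}_s - b(\phi_s) \big\|^2 \, ds
$$
for absolutely continuous $\phi$ with $\phi_0=x_0$, and $I(\phi)=+\infty$ otherwise; informally, $\mathbb{P}\big[ \sup_{t\in[0,T]} \| X_t^\varepsilon - \phi_t \| < \delta \big] \approx \exp\big( -I(\phi)/\varepsilon \big)$ for small $\varepsilon$.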
For classical reflected SDEs the exit-time from a subdomain $\mathfrak{D}\subsetneq \mathcal{D}$ is a trivial problem as one exits the subdomain $\mathfrak{D}$ before hitting the boundary of $\mathcal{D}$, and hence, the exit-time result for $\mathfrak{D}$ is recovered from standard SDE counterpart. This is a priori \textit{not} the case for reflected McKean-Vlasov equations where the reflection term affects the law and paths to ensure it remains on the domain and is thus different from the law of the non-reflected McKean-Vlasov. In the small noise limit the exit-problem for non-reflected SDEs is well documented. A great introduction to the subject can be found in \cite{DZ}*{Section 5.7}; for an in-depth study with slowly-varying time-dependent coefficients see \cite{Herrmann2013StochasticR}*{Section 4}; the excellent work \cite{HIP} characterises the exit-time of a McKean-Vlasov equation after obtaining a large deviation principle; see \cite{tugaut2016simple} for a simpler proof relying only on classical Freidlin-Wentzell estimates; and \cite{T2011f}, where the same results are obtained by transference from the particle system to the McKean-Vlasov system via propagation of chaos and Freidlin-Wentzell estimates. \subsubsection*{Our motivation and contributions} Our \textit{contributions} are threefold: (i) existence and uniqueness results for McKean-Vlasov SDEs constrained to a convex domain $\mathcal{D}\subseteq \mathbb{R}^d$ with coefficients that have superlinear growth in space and are non-Lipschitz in measure; (ii) a large deviations principle for this class of processes; and, (iii) the explicit characterisation of the first exit-time of the solution process from a subdomain $\mathfrak{D}\subsetneq \mathcal{D}$. For (i), unlike previous works on reflected SDEs, we do not rely on the domain as a way of ensuring the coefficients are bounded or Lipschitz. We work with drift terms that satisfy a one-sided Lipschitz condition over the (possibly unbounded) domain and are locally Lipschitz. Further, we do not restrict ourselves to measure dependencies that are Lipschitz on the domain, but additionally work with a drift term that satisfies a self-stabilizing assumption that ensures any particle is attracted towards the mean of the distribution/particle system. Critically, in a convex domain this will always be away from the boundary. From a technical point of view, the non-Lipschitz measure component, $f$ in \eqref{eq:MVE}, destroys the standard contraction argument. Nonetheless, we are able to establish an intermediate fixed point argument which decouples $f$, leaving $b$ to be dealt with. The main workaround result is Lemma \ref{lemma:Gamma-FirstContraction} in combination with a specific moment estimate mechanism. The closest result to ours is that of \cite{HIP}. There, specific structural assumptions are required: drift of specific polynomial form, $\sigma$ is constant, no-time dependencies, deterministic coefficients and, critically, $b$ and $f$ need to be combined into a mean-field interaction term of order $1$. We lift all these constraints. To the best of our knowledge, the scope of our well-posedness results for McKean-Vlasov equations, and separately for reflected SDEs, are not found in the literature. Thus, our contributions extend known results for McKean-Vlasov equations and reflected SDEs. For (ii), our study of the LDPs is based on techniques which directly address the presence of the law in the coefficients and avoid the associated particle system. 
Methodologically, our approach relies on the classical mechanism of exponentially good approximations, but employs judiciously chosen auxiliary processes and less standard tricks to obtain the main results. As in \cite{dos2019freidlin}, it turns out that the correct LDP rate function for McKean-Vlasov equations can be found through certain ODEs (skeletons) in which the McKean-Vlasov equation's noise and distribution are replaced by a smooth function and the degenerate distribution corresponding to the ODE's solution, respectively. For (iii), the LDP results are the intermediate step necessary to study the exit-time of $X^\varepsilon$ from an open subdomain $\mathfrak{D}\subsetneq \mathcal{D}$. Motivated by numerical applications, as in \cites{di2017jump,di2019sharp}, we provide the \textit{explicit form of the rate function} for the exit-time distribution (the exit-cost $\Delta$ in Theorem \ref{thm:ExitTime}). Intuitively, the solution to \eqref{eq:MVELimiting} depends on its own law, hence one expects its exit-time from a subdomain to differ from the exit-time of its non-reflected analogue. Similarly, the exit-time of one of the particles in the system \eqref{eq:ParticleSystem} will be altered by the presence of the reflection since this particle will interact with other particles which have already been reflected. However, we will show that, in the small noise limit, the exit-time of our reflected McKean-Vlasov SDE is unaltered and we are able to establish a familiar Eyring-Kramers type law. The \textit{motivation} of our work stems from numerical considerations around the simulation of McKean-Vlasov equations (reflected or not) with a non-Lipschitz measure component, non-constant diffusion coefficients, and both finite and infinite time horizons. For instance, reflected McKean-Vlasov equations appear in \cite{LeiteWilliams2019} and \cite{Anderson2019} as models for bio-chemistry, and our framework allows us to study the granular media equation (see \eqref{eq:FokkerPlanckEquation}) $$ \partial_t \mu_t(x) = \tfrac{1}{2} \nabla^2 \mu_t(x) + \nabla\cdot \Big( \nabla B(x)\mu_t(x) + \nabla F \ast \mu_t(x) \mu_t(x) \Big), $$ where $B$ is the constraining potential and $F$ is the interactive potential. This models the velocity distribution in the hydrodynamic limit of a collection of inelastic particles. In the case where the potentials $B$ and $F$ are convex, it is well known that the solution rapidly converges (as $t\to \infty$) towards an invariant distribution \cite{BGG1}. Our work opens a clear pathway to analyse the behaviour of \eqref{eq:MVE} and \eqref{eq:ParticleSystem} as $t\to \infty$. An important and fully unanswered question left open by this work relates to effective numerical methods for this class of McKean-Vlasov equations (even in the non-reflected case). On the one hand, the penalisation methodology of \cite{Slominski2013-rSDE-penalization} seems feasible, since the reflection on a bounded domain enforces boundedness of the solution process and compact support of its law (a trick exploited in \cite{bouchard2017numerical}). On the other hand, explicit step Euler-type discretizations \cite{dos2018simulation} for super-linear drifts have been shown to work but only for drifts that are Lipschitz in the measure component. \medskip \textit{This work is organised as follows.} Section \ref{sec:Preliminaries} introduces notation, setting and objects of interest.
In Section \ref{section exist.unique} we address the wellposedness of the reflected McKean-Vlasov equations, of the associated reflected interacting particle system and present a Propagation of Chaos result. Sections \ref{sec:LDPs} and \ref{sec:ExitTimes} cover the Freidlin-Wentzell Large deviations and exit-time results respectively. \section{Preliminaries} \label{sec:Preliminaries} We denote by $\mathbb{N}=\{1,2,\cdots\}$ the set of natural numbers; $\mathbb{Z}$ and $\mathbb{R}$ denote the set of integers and real numbers respectively, with the real positive half-line set as $\mathbb{R}^+=[0,\infty)$. For $t\in\mathbb{R}$, we denote its floor as $\lfloor t \rfloor$ (the largest integer less than or equal to $t$). For any $x,y\in\mathbb{R}^d$, $\langle x,y\rangle$ stands for the usual Euclidean inner product and $\|x\|=\langle x,x\rangle^{1/2}$ the usual Euclidean distance. Let $A$ be a $d\times d'$ matrix, we denote the transpose of $A$ by $A'$ and let $\| A \|$ be the Hilbert-Schmidt norm. Define the derivative of a function $f:\mathbb{R}\to \mathbb{R}^d$ as $\dot{f}$. For sequences $(f_n)_{n\in \mathbb{N}}$ and $(g_n)_{n\in\mathbb{N}}$, we use the symbols $\lesssim ,\gtrsim $ in the following way: \begin{align*} f_n \lesssim g_n \ \ \iff \ \ \limsup_{n\to \infty} \frac{f_n}{g_n}\leq C,~\text{for some}~C>0, \end{align*} and \begin{align*} f_n \gtrsim g_n \ \ \iff \ \ \liminf_{n\to \infty} \frac{f_n}{g_n}\geq C,~\text{for some}~C>0. \end{align*} For a set $\mathcal{D} \subset \mathbb{R}^d$, we denote its interior (largest open subset) by $\mathcal{D}^\circ$, its closure (smallest closed cover) by $\overline{\mathcal{D}}$ and the boundary by $\partial \mathcal{D} = \overline{\mathcal{D}} \backslash \mathcal{D}^{\circ}$. For $x\in \mathbb{R}^d$,$r\geq 0$, denote $B_r(x)\subset \mathbb{R}^d$ as the open ball of radius $r$ centred at $x$. Let $f:\mathbb{R}^d \to \mathbb{R}$ be a differentiable function. Then we denote by $\nabla f$ the gradient operator and $\nabla^2 f$ to be the Hessian operator. Let $C([0,T]; \mathbb{R}^d)$ be the space of continuous function $f:[0,T] \to \mathbb{R}^d$ endowed with the supremum norm $\|\cdot\|_{\infty,[0,T]}$. For $x\in \mathbb{R}^d$ let $C_{x}([0,T]; \mathbb{R}^d)$ be the subspace of $C([0,T]; \mathbb{R}^d)$ of functions $f:[0,T] \to \mathbb{R}^d$ with $f(0)=x$. Let $\tilde{\Omega}=C_0([0,T]; \mathbb{R}^{d'})$ be the canonical $d'$-dimensional Wiener space and let $W$ be the Wiener process with law $\tilde{\mathbb{P}}$. Let $(\tilde{\mathcal{F}_t})_{t\in[0,T]}$ be the standard augmentation of the filtration generated by the Brownian motion. Then we have the probability space $(\tilde{\Omega}, \tilde{\mathcal{F}}, (\tilde{\mathcal{F}_t})_{t\in[0,T]}, \tilde{\mathbb{P}})$. Additionally, let $([0,1], \mathcal{B}([0,1]), \overline{\mathbb{P}})$ be a probability space with the Lebesgue measure $\overline{\mathbb{P}}$. Our probability space is structured as follows: \begin{enumerate} \item The sample space will be $\Omega =[0,1] \times \tilde{\Omega}$, \item The $\sigma$-algebra over this space will be $\mathcal{F} = \sigma( \mathcal{B}([0,1]) \times \tilde{\mathcal{F}})$ with filtration $\mathcal{F}_t = \sigma( \mathcal{B}([0,1]) \times \tilde{\mathcal{F}_t})$, \item The probability measure will be the product measure $\mathbb{P} = \overline{\mathbb{P}} \times \tilde{\mathbb{P}}$. 
\end{enumerate} For $p\geq 1$, let $L^p(\Omega, \mathcal{F}, \mathbb{P}; \mathcal{D})$ be the space of random variables over the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with state space $\mathcal{D}$ and finite $p$ moments. For $p\geq 1$, let $\mathcal{S}^p([0,T];\mathbb{R}^d)$ be the space of $(\tilde{\mathcal{F}_t})_{t\in[0,T]}$-adapted processes $X:\Omega\times [0,T]\to \mathcal{D}$ satisfying $\mathbb{E}[ \|X\|^p_{\infty, [0,T]} ]^{1/p} < \infty$ where $\|X\|_{\infty,[0,T]}:=\sup_{s\in[0,T]} \| X_s \|$. Let $\mathcal{H}_1^0$ be the Cameron Martin Hilbert space for Brownian motion: the space of all absolutely continuous paths on the interval $[0, T]$ which start at $0$ and have a derivative almost everywhere which is $L^{2}([0, T]; \mathbb{R}^{d'})$ integrable $$ \mathcal{H}_1^0:=\big\{h:[0, T] \to \mathbb{R}^{d'},\ h(0)=0,\ h(\cdot)=\int_{0}^{\cdot} \dot{h}(s) d s,\ \dot{h} \in L^{2}([0, T]; \mathbb{R}^{d'} )\big\}. $$ Let $\mathcal{D}$ (possibly unbounded) be a subset of $\mathbb{R}^d$ and $\mathcal{B}_\mathcal{D}$ be the Borel $\sigma$-algebra over $\mathcal{D}$. Let $\mathcal{P}_r(\mathcal{D})$ be the set of all Borel probability measures which have finite $r^{th}$ moment. \begin{defn} \label{defn:Wasserstein} Let $r\geq 1$. Let $(\mathcal{D}, d)$ be a metric space with Borel $\sigma$-algebra $\mathcal{B}_\mathcal{D}$. Let $\mu, \nu \in \mathcal{P}_r(\mathcal{D})$. We define the Wasserstein $r$-distance $\mathbb{W}_\mathcal{D}^{(r)}: \mathcal{P}_r(\mathcal{D}) \times \mathcal{P}_r(\mathcal{D}) \to \mathbb{R}^+$ to be $$ \mathbb{W}_\mathcal{D}^{(r)} (\mu, \nu) = \Big( \inf_{\pi \in \Pi_r(\mu,\nu)} \int_{\mathcal{D} \times \mathcal{D}} d(x, y)^r \pi(dx, dy) \Big)^{\frac{1}{r}}, $$ where $\Pi_r(\mu,\nu)\subset \mathcal{P}_r(\mathcal{D} \times \mathcal{D})$ is the space of joint distributions over $\mathcal{D} \times\mathcal{D}$ with marginals $\mu$ and $\nu$. \end{defn} \subsubsection*{Domain, outward normal vectors and properties} The processes that we consider in this paper are confined to a domain $\mathcal{D}$. \begin{defn} Let $\mathcal{D}$ be a subset of $\mathbb{R}^d$ that has non-zero Lebesgue measure interior. For $x\in \partial \mathcal{D}$, define \begin{align*} \mathcal{N}_{x, r}:=& \{ \textbf{n}\in \mathbb{R}^d: \|\textbf{n}\|=1, B_r(x+r\textbf{n})\cap \mathcal{D}^{\circ} = \emptyset\} \quad\textrm{and}\quad \mathcal{N}_x:= \cup_{r>0} \mathcal{N}_{x, r}. \end{align*} We call the set $\mathcal{N}_x$ the outward normal vectors. \end{defn} For general domains, the set $\mathcal{N}_x$ can be empty, for example if the boundary contains a concave corner. Furthermore if the boundary is not smooth at $x$ then it may be the case that $| \mathcal{N}_{x,r}| = \infty$. \begin{defn} Let $\mathcal{D}\subset \mathbb{R}^d$ with non-zero Lebesgue measure interior. We say that $\mathcal{D}$ has a \emph{Uniform Exterior Sphere} if $\exists r_0>0$ such that $\forall x\in \partial \mathcal{D}$, $\mathcal{N}_{x, r_0} \neq \emptyset$. \end{defn} The existence of a uniform exterior sphere ensures there is at least one outward normal vector at every point on the boundary. When this is not the case, there is no canonical choice for the reflective vector field. The following property of convex domains will be used extensively throughout this paper. \begin{comment} \begin{lemma} \label{lem:UES-Condition} Let $\mathcal{D}\subset \mathbb{R}^d$ be a domain with interior that has non-zero Lebesgue measure. Suppose that $\mathcal{D}$ is convex. 
Then $\mathcal{D}$ has a Uniform Exterior Sphere. \end{lemma} \begin{proof} Let $r>0$ be fixed. Let $x\in \partial \mathcal{D}$. If $\mathcal{D}$ is a convex subspace of $\mathbb{R}^d$, then there exists a semi-plane $(\mathcal{S})$ which contains $\mathcal{D}$. Thus we have a hyperplane $\mathcal{H}_x$ that contains $x$ and $\mathcal{D}^{\circ}\cap \mathcal{H}_x=\emptyset$. Then, $\exists \textbf{n}$ such that $\forall y\in \mathcal{H}_x$ we have $\< y, \textbf{n}\>=0$. Without loss of generality, $\textbf{n}$ can be chosen to be an exiting vector from $\mathcal{D}$. Consider the open ball $B_r (x + r\textbf{n})$. This is an open set contained in the complement of the closed semi-plane ($\mathcal{S}^c$). Thus $B_r(x + r\textbf{n}) \cap \mathcal{D}^{\circ} = \emptyset$. Hence $\mathcal{N}_{x, r}\neq \emptyset$. \end{proof} \begin{lemma} \label{lem:NormalToDomain} Let $\mathcal{D}\subset \mathbb{R}^d$ be a non-zero Lebesgue measure interior, convex set. Suppose that for $x\in \partial \mathcal{D}$, $\textbf{n}(x)\in \mathcal{N}_{x}$. Then $\forall y\in \mathcal{D}$ $$ \langle \textbf{n}(x), y-x \rangle \leq 0 . $$ \end{lemma} \begin{proof} For $x\in \partial \mathcal{D}$, we know by Lemma \ref{lem:UES-Condition} that a vector $\textbf{n}(x) \in \mathcal{N}_{x}$ exists. Further, $\exists r>0$ such that $\textbf{n}\in \mathcal{N}_{x, r}$ and denote $z = x+r\textbf{n}(x)$. Then $$ \inf_{y\in \mathcal{D}} \| z-y\| = \| z-x\|. $$ If this is not the case the ball of radius $r$ centred at $y$ would intersect with the $\mathcal{D}^{\circ}$ and hence \begin{align*} \| (x-z) + (y-x)\| \geq& \| z-x\| \quad \Rightarrow \quad \langle x-z, y-x\rangle \geq 0, \end{align*} rearranging this yields that \eqref{equation uniform exterior sphere property}. \end{proof} \end{comment} \begin{lemma}\label{lem:NormalToDomain} Let $\mathcal{D}\subset \mathbb{R}^d$ be a convex domain with interior that has non-zero Lebesgue measure. Then $\mathcal{D}$ has a Uniform Exterior Sphere, and for any $x\in \partial \mathcal{D}$ and $\textbf{n}(x)\in \mathcal{N}_{x}$ it holds that \begin{equation}\label{equation uniform exterior sphere property} \langle \textbf{n}(x), y-x\rangle \leq 0,~\forall y\in \mathcal{D}. \end{equation} \begin{proof} First we prove that $\mathcal{D}$ has a Uniform Exterior Sphere. Let $r>0$ be fixed and let $x\in \partial \mathcal{D}$. If $\mathcal{D}$ is a convex subspace of $\mathbb{R}^d$, then there exists a semi-plane $(\mathcal{S})$ which contains $\mathcal{D}$. Thus we have a hyperplane $\mathcal{H}_x$ that contains $x$ and $\mathcal{D}^{\circ}\cap \mathcal{H}_x=\emptyset$. Then, $\exists \textbf{n}$ such that $\forall y\in \mathcal{H}_x$ we have $\< y, \textbf{n}\>=0$. Without loss of generality, $\textbf{n}$ can be chosen to be an exiting vector from $\mathcal{D}$. Consider the open ball $B_r (x + r\textbf{n})$. This is an open set contained in the complement of the closed semi-plane ($\mathcal{S}^c$). Thus $B_r(x + r\textbf{n}) \cap \mathcal{D}^{\circ} = \emptyset$. Hence $\mathcal{N}_{x, r}\neq \emptyset$. Now we show \eqref{equation uniform exterior sphere property}, For $x\in \partial \mathcal{D}$, we have just shown that a vector $\textbf{n}(x) \in \mathcal{N}_{x}$ exists. Further, $\exists r>0$ such that $\textbf{n}\in \mathcal{N}_{x, r}$ and denote $z = x+r\textbf{n}(x)$. Then $$ \inf_{y\in \mathcal{D}} \| z-y\| = \| z-x\|. 
$$ If this is not the case the ball of radius $r$ centred at $y$ would intersect with the $\mathcal{D}^{\circ}$ and hence \begin{align*} \| (x-z) + (y-x)\| \geq& \| z-x\| \quad \Rightarrow \quad \langle x-z, y-x\rangle \geq 0, \end{align*} rearranging this yields that \eqref{equation uniform exterior sphere property}. \end{proof} \end{lemma} Motivated by this Lemma, we will make the following assumption throughout this paper. \begin{assumption} \label{assumption:domain} Let $\mathcal{D}\subset \mathbb{R}^d$ be a closed, convex set with non-zero Lebesgue measure interior. \end{assumption} For example, if $d=2$ a possible choice is $\mathcal{D}=[0,\infty)^2$ or $\mathcal{D}=[0,a]\times (-\infty,\infty)$ for some $a>0$, stressing the fact that we allow for unbounded domains with non-smooth boundaries. At this point it is worth mentioning that if the domain is non-convex, it may not satisfy such helpful conditions. For example both \cite{saisho1987stochastic} and \cite{lions1984stochastic} assume the uniform exterior sphere condition and cannot access Lemma \ref{lem:NormalToDomain}, whereas \cite{tanaka2002stochastic} relies on Lemma \ref{lem:NormalToDomain}. \subsubsection*{Reflective boundaries and the Skorokhod problem} We are now in the position to formulate the Skorokhod problem which was first stated and studied in \cites{Skorokhod1961stochastic, Skorokhod1962stochastic}. A path $\gamma:[0,T] \to \mathbb{R}^d$ is said to be c\`adl\`ag if it is right continuous and has left limits. \begin{defn} \label{dfn:Skorokhodproblem} Let $\gamma:[0,T] \to \mathbb{R}^d$ be a c\`adl\`ag path and let $\mathcal{D}$ be a subset of $\mathbb{R}^d$. Suppose additionally that $\gamma_0\in \mathcal{D}$. For each $x\in \partial \mathcal{D}$, suppose that $\mathcal{N}_x\neq \emptyset$. Let $\textbf{n}:\partial \mathcal{D} \to \mathbb{R}^d$ such that $\textbf{n}(x)\in \mathcal{N}_{x}$. The triple $(\gamma, \mathcal{D}, \textbf{n})$ denotes the \emph{Skorokhod problem}. We say that the pair $(\eta, k)$ is a solution to the Skorokhod problem $(\gamma, \mathcal{D}, \textbf{n})$ if $\eta:[0,T] \to \overline{\mathcal{D}}$ is a c\`adl\`ag path, $k:[0,T] \to \mathbb{R}^d$ is a bounded variation path and \begin{equation} \label{eq:Skorokhodproblem} \eta_t=\gamma_t - k_t , \quad k_t = \int_0^t \textbf{n}(\eta_s)\mathbbm{1}_{\partial \mathcal{D}}(\eta_s) d|k|_s, \quad |k|_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}} (\eta_s) d|k|_s, \end{equation} where $\textbf{n}(x)\in \mathcal{N}_{x}$ when $x \in \partial \mathcal{D}$ and $\textbf{n}(x)=0$ otherwise. \end{defn} This problem was first studied in the deterministic setting in \cite{Chaleyat1980Reflexion} and in the stochastic setting in \cite{tanaka2002stochastic}. For general domains, one may be unable to show uniqueness, or even existence of a solution to the Skorokhod problem. We emphasise that this will not be an issue that we explore in this paper. \begin{theorem}[\cite{tanaka2002stochastic}*{Theorem 3.1}] \label{thm:SkorokhodProblem} Let $\mathcal{D}$ satisfy Assumption \ref{assumption:domain}. Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\in [0,T]}, \mathbb{P})$ be a filtered probability space. Let $\gamma=(\gamma_t)_{t\in[0,T]}$ be an $\mathcal{F}_t$-adapted $\mathbb{R}^d$-valued semimartingale with $\gamma_0\in \mathcal{D}$. Then there exists a unique solution to the Skorokhod problem $(\gamma, \mathcal{D}, \textbf{n})$ $\mathbb{P}$-a.s. 
\end{theorem} \section{Existence, uniqueness and propagation of chaos} \label{section exist.unique} In this section, we prove that under appropriate assumptions there exists a unique solution to the Stochastic Differential Equation \eqref{eq:MVE}. In the subsequent step, we address the \textit{Propagation of Chaos} result regarding convergence of the solution of the particle system \eqref{eq:ParticleSystem} to the solution of the McKean-Vlasov equation \eqref{eq:MVE}. In Section \ref{subsec:ExistUniq-RSDEs} we prove \textit{existence and uniqueness for a broad class of classical reflected SDEs} where the coefficients are assumed to be random, time-dependent and to satisfy a superlinear growth condition. Crucially, we do not restrict ourselves to a bounded domain. In Section \ref{subsec:ExistUniq-RMVEs} we prove \textit{existence and uniqueness for reflected McKean-Vlasov SDEs} satisfying a $\mathbb{W}^{(2)}$-Lipschitz condition in the measure component. This is \textit{generalised in Theorem \ref{thm:ExistUnique-LocLip-SSMVE} to coefficients that are locally Lipschitz in measure}, although in this final step we necessarily restrict to deterministic coefficients; the proof of the result is provided in Section \ref{subsec:ExistUniq-SSMVEs}. Lastly, in Section \ref{sec:PoC}, we prove that the dynamics of a single equation within the system of interacting equations \eqref{eq:ParticleSystem} converge, as the number of particles grows, to the dynamics of Equation \eqref{eq:MVE}, i.e.~\textit{Propagation of Chaos (PoC)}. \subsection{Existence and uniqueness for reflected SDEs} \label{subsec:ExistUniq-RSDEs} Let $t\geq 0$. We commence by studying classical reflected SDEs of the form \begin{equation} \label{eq:reflectedSDE} \begin{split} X_t =& \theta + \int_0^t b(s, X_s) ds + \int_0^t \sigma(s, X_s) dW_s - k_t, \\ |k|_t=&\int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s) d|k|_s, \qquad k_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s) \textbf{n}(X_s) d|k|_s. \end{split} \end{equation} This first result generalises Tanaka's classical results in \cite{tanaka2002stochastic} to the case where the drift and diffusion terms are random and time dependent, and the drift term satisfies a one-sided Lipschitz condition. \begin{theorem} \label{thm:ExistUnique-LocLip-Ref} Let $\mathcal{D}$ satisfy Assumption \ref{assumption:domain}. Let $p\geq 2$. Let $W$ be a $d'$ dimensional Brownian motion. Let $\theta:\Omega \to \mathcal{D}$, $b:[0,T] \times \Omega \times \mathcal{D} \to \mathbb{R}^d$ and $\sigma:[0,T] \times \Omega \times \mathcal{D} \to \mathbb{R}^{d\times d'}$ be progressively measurable maps. Suppose that \begin{itemize} \item $\theta \in L^p( \mathcal{F}_0, \mathbb{P}; \mathcal{D})$. \item $\exists x_0\in \mathcal{D}$ such that $b$ and $\sigma$ satisfy the integrability conditions $$ \mathbb{E}\Big[ \Big( \int_0^T \| b(s, x_0) \| ds \Big)^p \Big] \vee \mathbb{E}\Big[ \Big( \int_0^T \| \sigma(s, x_0) \|^2 ds \Big)^{p/2} \Big] < \infty. $$ \item $\exists L>0$ such that for almost all $(s, \omega)\in [0,T] \times \Omega$ and $\forall x, y\in \mathcal{D}$, $$ \big\langle b(s, x) - b(s, y), x-y \big\rangle \leq L \| x-y \|^2 \quad\textrm{and}\quad \| \sigma(s, x) - \sigma(s, y) \| \leq L \| x-y \|, $$ \item $\forall n\in \mathbb{N}$, $\exists L_n>0$ such that $\forall x,y\in \mathcal{D}_n = \mathcal{D} \cap \overline{B_n(x_0)}$, $$ \| b(s, x) - b(s, y) \| \leq L_n \| x-y \| \quad \textrm{for almost all $(s, \omega) \in [0,T] \times \Omega$.
} $$ \end{itemize} Then there exists a unique solution to the reflected Stochastic Differential Equation \eqref{eq:reflectedSDE} in $\mathcal{S}^p([0,T])$ and $$ \mathbb{E}\Big[ \| X - x_0\|_{\infty, [0,T]}^p \Big] \lesssim \mathbb{E}\Big[ \| \theta - x_0 \|^p \Big] + \mathbb{E}\Big[ \Big( \int_0^T \| b(s, x_0) \| ds \Big)^p \Big] + \mathbb{E}\Big[ \Big( \int_0^T \| \sigma(s, x_0) \|^2 ds \Big)^{p/2} \Big]. $$ \end{theorem} The proof is given in Appendix \ref{appendixB}. \subsection{Existence and uniqueness for McKean-Vlasov equations} \label{subsec:ExistUniq-RMVEs} Next, for $t \geq 0$, we study reflected McKean-Vlasov equations, i.e.~stochastic processes of the form \begin{equation} \label{eq:reflectedMVE} \begin{split} X_t =& \theta + \int_0^t b(s, X_s, \mu_s) ds + \int_0^t \sigma(s, X_s, \mu_s) dW_s - k_t, \quad \mathbb{P}\big[ X_t \in dx\big] = \mu_t(dx), \\ |k|_t=& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s) d|k|_s, \qquad k_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s) \textbf{n}(X_s) d|k|_s. \end{split} \end{equation} \begin{theorem} \label{thm:ExistUnique-LocLip-MVE} Let $\mathcal{D}$ satisfy Assumption \ref{assumption:domain}. Let $p\geq 2$. Let $W$ be a $d'$ dimensional Brownian motion. Let $\theta:\Omega \to \mathcal{D}$, $b:[0,T] \times \Omega \times \mathcal{D} \times \mathcal{P}_2(\mathcal{D}) \to \mathbb{R}^d$ and $\sigma:[0,T] \times \Omega \times \mathcal{D} \times \mathcal{P}_2(\mathcal{D}) \to \mathbb{R}^{d\times d'}$ be progressively measurable maps. Assume that \begin{itemize} \item $\theta \in L^p( \mathcal{F}_0, \mathbb{P}; \mathcal{D})$ and $\theta \sim \mu_\theta$. \item $\exists x_0\in \mathcal{D}$ such that $b$ and $\sigma$ satisfy the integrability conditions $$ \mathbb{E}\Big[ \Big( \int_0^T \| b(s, x_0, \delta_{x_0}) \| ds \Big)^p \Big] \vee \mathbb{E}\Big[ \Big( \int_0^T \| \sigma(s, x_0, \delta_{x_0}) \|^2 ds \Big)^{p/2} \Big] < \infty. $$ \item $\exists L>0$ such that for almost all $(s, \omega)\in [0,T] \times \Omega$, $\forall \mu, \nu \in \mathcal{P}_2(\mathcal{D})$ and $\forall x, y\in \mathcal{D}$, \begin{align*} \Big\langle b(s, x, \mu) - b(s, y, \mu), x-y \Big\rangle \leq L\| x-y \|^2, \quad \| \sigma(s, x, \mu) - \sigma(s, y, \mu) \| \leq L \| x-y \|, \\ \| b(s, x, \mu) - b(s, x, \nu) \| \leq L \mathbb{W}^{(2)}_\mathcal{D} (\mu, \nu), \quad \| \sigma(s, x, \mu) - \sigma(s, x, \nu) \| \leq L \mathbb{W}^{(2)}_\mathcal{D} (\mu, \nu). \end{align*} \item $\forall n\in \mathbb{N}$, $\exists L_n>0$ such that $\forall x,y\in \mathcal{D} \cap \overline{B_n(x_0)}$, $$ \| b(s, x, \mu) - b(s, y, \mu) \| \leq L_n \| x-y \| \quad \textrm{for almost all $(s, \omega) \in [0,T] \times \Omega$. } $$ \end{itemize} Then there exists a unique solution to the reflected McKean-Vlasov equation \eqref{eq:reflectedMVE} in $\mathcal{S}^p([0,T])$ and $$ \mathbb{E}\Big[ \| X - x_0\|_{\infty, [0,T]}^p \Big] \lesssim \mathbb{E}\Big[ \| \theta - x_0 \|^p\Big] + \mathbb{E}\Big[ \Big( \int_0^T \| b(s, x_0, \delta_{x_0}) \| ds \Big)^p \Big] + \mathbb{E}\Big[ \Big( \int_0^T \| \sigma(s, x_0, \delta_{x_0}) \|^2 ds \Big)^{p/2} \Big]. $$ \end{theorem} \begin{proof} Throughout this proof, we distinguish between measures $\nu \in \mathcal{P}_2 \big( C([0,T]; \mathcal{D}) \big)$ and their pushforward measures $\nu_t \in \mathcal{P}_2(\mathcal{D})$ under the path evaluation maps.
Then for $\nu^1, \nu^2 \in \mathcal{P}_2\big( C([0,T]; \mathcal{D})\big)$, we have \begin{align} \label{eq:thm:ExistUnique-LocLip-MVE2.1} \sup_{t\in [0,T]} \mathbb{W}_{\mathcal{D}}^{(2)} \Big(\nu_t^1, \nu_t^2 \Big) \leq \mathbb{W}_{C([0,T]; \mathcal{D})}^{(2)} \Big( \nu^1, \nu^2 \Big). \end{align} For $\nu \in \mathcal{P}_2( C([0,T]; \mathcal{D}))$, we define the reflected Stochastic Differential Equation \begin{equation} \label{eq:thm:ExistUnique-LocLip-MVE1.1} \begin{split} X_t^{(\nu)} =&\theta + \int_0^t b(s, X_s^{(\nu)}, \nu_s) ds + \int_0^t \sigma(s, X_s^{(\nu)}, \nu_s) dW_s - k_t^{(\nu)}, \\ |k^{(\nu)}|_t=& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^{(\nu)}) d|k^{(\nu)}|_s, \quad k^{(\nu)}_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^{(\nu)}) \textbf{n}(X_s^{(\nu)}) d|k^{(\nu)}|_s. \end{split} \end{equation} Let $x_0\in \mathcal{D}$. For $\mu_0 \in \mathcal{P}_2(\mathcal{D})$, let $\mu_0' \in \mathcal{P}_2\big( C([0,T]; \mathcal{D})\big)$ be the law of the constant path with initial distribution $\mu_0$. Using the Lipschitz condition for the measure dependency of $b$ and $\sigma$, we have \begin{align*} \mathbb{E}\Big[ \Big( \int_0^T \| b(s, x_0, \nu_s) \| ds \Big)^p \Big] \leq& \mathbb{E}\Big[ \Big( \int_0^T \| b(s, x_0, \mu_0) \| ds + L\int_0^T \mathbb{W}_\mathcal{D}^{(2)} (\nu_s, \mu_0) ds \Big)^p \Big] \\ \leq& 2^{p-1} \mathbb{E}\Big[ \Big( \int_0^T \| b(s, x_0, \mu_0) \| ds \Big)^p \Big] + 2^{p-1} L^p T^p \mathbb{W}_{C([0,T]; \mathcal{D})}^{(2)} (\nu, \mu_0')^p, \\ \mathbb{E}\Big[ \Big( \int_0^T \| \sigma(s, x_0, \nu_s) \|^2 ds \Big)^{p/2} \Big] \leq& \mathbb{E}\Big[ \Big( 2\int_0^T \| \sigma(s, x_0, \mu_0) \|^2 ds + 2L^2 \int_0^T \mathbb{W}_\mathcal{D}^{(2)} (\nu_s, \mu_0)^2 ds \Big)^{p/2} \Big] \\ \leq& 2^{p-1} \mathbb{E}\Big[ \Big( \int_0^T \| \sigma(s, x_0, \mu_0) \|^2 ds \Big)^{p/2} \Big] + 2^{p-1} L^{p} T^{p/2}\mathbb{W}_{C([0,T]; \mathcal{D})}^{(2)} (\nu, \mu_0')^p. \end{align*} Therefore, by Theorem \ref{thm:ExistUnique-LocLip-Ref}, we have existence and uniqueness of a solution to Equation \eqref{eq:thm:ExistUnique-LocLip-MVE1.1}. Consider the operator $\Xi: \mathcal{P}_2\big( C([0,T]; \mathcal{D})\big) \to \mathcal{P}_2\big( C([0,T]; \mathcal{D})\big)$ defined by $$ \Xi[\nu] := \mu^{(\nu)}, $$ where $\mu^{(\nu)}$ is the law of the solution to Equation \eqref{eq:thm:ExistUnique-LocLip-MVE1.1}. Now, for any two measures $\nu^1, \nu^2 \in \mathcal{P}_2\big( C([0,T]; \mathcal{D})\big)$, \begin{align*} \Big\| X^{(\nu^1)}_t - X^{(\nu^2)}_t \Big\|^2 \leq & 2\int_0^t \Big\langle X_s^{(\nu^1)} - X_s^{(\nu^2)}, b(s, X_s^{(\nu^1)}, \nu_s^1) - b(s, X_s^{(\nu^2)}, \nu_s^2) \Big\rangle ds \\ & + 2\int_0^t \Big\langle X_s^{(\nu^1)} - X_s^{(\nu^2)}, \Big( \sigma(s, X_s^{(\nu^1)}, \nu_s^1) - \sigma(s, X_s^{(\nu^2)}, \nu_s^2) \Big) dW_s \Big\rangle \\ & + \int_0^t \Big\| \sigma(s, X_s^{(\nu^1)}, \nu_s^1) - \sigma(s, X_s^{(\nu^2)}, \nu_s^2) \Big\|^2 ds -2\int_0^t \Big\langle X_s^{(\nu^1)} - X_s^{(\nu^2)}, dk_s^{(\nu^1)} - dk_s^{(\nu^2)} \Big\rangle. \end{align*} The reflective term in the above expression is non-positive due to the convexity of the domain and Lemma \ref{lem:NormalToDomain}.
Therefore, taking a supremum over time, expectations, and using the Burkholder-Davis-Gundy inequality, we get \begin{align*} \mathbb{E}\Big[& \| X^{(\nu^1)} - X^{(\nu^2)}\|_{\infty, [0,T]}^2 \Big] \\ \leq& 2L\int_0^T \mathbb{E}\Big[ \|X^{(\nu^1)} - X^{(\nu^2)}\|_{\infty, [0,t]}^2 \Big] dt + 2L \mathbb{E}\Big[ \|X^{(\nu^1)} - X^{(\nu^2)}\|_{\infty, [0,T]} \cdot \int_0^T \sup_{s\in[0,t]} \mathbb{W}^{(2)}_{\mathcal{D}}(\nu_s^1, \nu_s^2) dt \Big] \\ &+ 4C_1L \mathbb{E}\Big[ \| X^{(\nu^1)} - X^{(\nu^2)}\|_{\infty, [0,T]} \Big( \int_0^T \sup_{s\in[0,t]} \mathbb{W}_{\mathcal{D}}^{(2)} (\nu_s^1, \nu_s^2)^2 dt \Big)^{1/2} \Big] \\ &+ 4C_1L \mathbb{E}\Big[ \| X^{(\nu^1)} - X^{(\nu^2)}\|_{\infty, [0,T]} \Big( \int_0^T \| X^{(\nu^1)} - X^{(\nu^2)} \|_{\infty, [0,t]}^2 dt\Big)^{1/2} \Big] \\ &+ 2L^2\int_0^T \mathbb{E}\Big[ \| X^{(\nu^1)} - X^{(\nu^2)}\|_{\infty, [0, t]}^2 \Big] dt + 2L^2 \int_0^T \sup_{s\in [0,t]} \mathbb{W}_{\mathcal{D}}^{(2)} (\nu_s^1, \nu_s^2)^2 dt. \end{align*} Careful application of Young's Inequality, Gr\"onwall's inequality and Equation \eqref{eq:thm:ExistUnique-LocLip-MVE2.1} yields that there exists a constant $K>0$ such that $$ \mathbb{W}_{C([0,T]; \mathcal{D})}^{(2)} \Big( \Xi[ \nu^1], \Xi[\nu^2] \Big)^2 \leq \mathbb{E}\Big[ \| X^{(\nu^1)} - X^{(\nu^2)}\|_{\infty, [0,T]}^2 \Big] \leq K \int_0^T \mathbb{W}_{C([0,t]; \mathcal{D})}^{(2)} \Big(\nu^1, \nu^2\Big)^2 dt. $$ Iteratively applying the operator $\Xi$ $n$ times gives \begin{align*} \mathbb{W}_{C([0,T]; \mathcal{D})}^{(2)} \Big( \Xi^n[ \nu^1], \Xi^n[\nu^2] \Big)^2 \leq& K^n \int_0^T \int_0^{t_1} ... \int_0^{t_{n-1}} \mathbb{W}_{C([0,t_n]; \mathcal{D})}^{(2)} \Big(\nu^1, \nu^2\Big)^2 dt_n ... dt_2 dt_1 \\ \leq& \frac{K^n}{n!} \mathbb{W}_{C([0,T]; \mathcal{D})}^{(2)}\Big( \nu^1, \nu^2 \Big)^2. \end{align*} Choosing $n\in \mathbb{N}$ such that $\tfrac{K^n}{n!}<1$, we obtain that the operator $\Xi^n$ is a contraction, and hence $\Xi$ admits a unique fixed point on the metric space $\mathcal{P}_2\big( C([0,T]; \mathcal{D})\big)$ paired with the Wasserstein metric. This unique fixed point is the law of the McKean-Vlasov equation \eqref{eq:reflectedMVE}. \end{proof} \begin{rem} It is worth remarking that the framework of coefficients that satisfy a Lipschitz condition in their measure dependency with respect to the Wasserstein distance is broad, but in this manuscript we are predominantly interested in coefficients where the measure dependency is not Lipschitz. \end{rem} \subsubsection*{Main result: existence and uniqueness for McKean-Vlasov equations under reflection} We next study McKean-Vlasov equations with the addition of a self-stabilizing drift term that does not satisfy a Lipschitz condition with respect to the Wasserstein distance. For example, in Equation \eqref{eq:MVE}, we have $f\ast \mu_t(x): = \int_{\mathcal{D}} f(x - y) \mu_t(dy)$, the convolution of the vector field $f$ with the measure $\mu_t$. Consider \begin{equation} \label{eq:reflectedSSMVE} \begin{split} X_t =& \theta + \int_0^t b(s, X_s, \mu_s) ds + \int_0^t \sigma(s, X_s, \mu_s) dW_s + \int_0^t f \ast \mu_s(X_s) ds - k_t, \\ |k|_t=& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s) d|k|_s, \qquad k_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s) \textbf{n}(X_s) d|k|_s, \qquad \mathbb{P}\Big[ X_t \in dx\Big] = \mu_t(dx). \end{split} \end{equation} We show existence of a solution to the above reflected McKean-Vlasov equation under the following assumption. \begin{assumption} \label{ass:ExistUnique-LocLip-SSMVE} Let $r>1$ and $p>2r$.
Let $\theta:\Omega \to \mathcal{D}$, $b:[0,T] \times \mathcal{D} \times \mathcal{P}_2(\mathcal{D}) \to \mathbb{R}^d$, $f:\mathbb{R}^d \to \mathbb{R}^d$ and $\sigma:[0,T] \times \mathcal{D} \times \mathcal{P}_2(\mathcal{D}) \to \mathbb{R}^{d \times d'}$. Assume that \begin{itemize} \item $\theta \in L^p( \mathcal{F}_0, \mathbb{P}; \mathcal{D})$ and $\theta \sim \mu_\theta$, \item $\exists x_0\in \mathcal{D}$ such that $b$ and $\sigma$ satisfy the integrability conditions $$ \int_0^T \| b(s, x_0, \delta_{x_0}) \| ds \vee \int_0^T \| \sigma(s, x_0, \delta_{x_0}) \|^2 ds < \infty. $$ \item $\exists L>0$ such that for almost all $s\in [0,T]$, $\forall \mu, \nu \in \mathcal{P}_2(\mathcal{D})$ and $\forall x, y\in \mathcal{D}$, \begin{align*} \Big\langle b(s, x, \mu) - b(s, y, \mu), x-y \Big\rangle \leq L\| x-y \|^2, \quad \| \sigma(s, x, \mu) - \sigma(s, y, \mu) \| \leq L \| x-y \|, \\ \| b(s, x, \mu) - b(s, x, \nu) \| \leq L \mathbb{W}^{(2)}_\mathcal{D} (\mu, \nu), \quad \| \sigma(s, x, \mu) - \sigma(s, x, \nu) \| \leq L \mathbb{W}^{(2)}_\mathcal{D} (\mu, \nu), \end{align*} \item $f(0)=0$, $f(x) = -f(-x)$ and $\exists L>0$ such that $\forall x, y \in \mathbb{R}^d$, $ \big\langle f(x) - f(y), x-y\big\rangle \leq L \| x-y \|^2$ , \item $\forall n\in \mathbb{N}$, $\exists L_n>0$ such that $\forall x,y\in \mathcal{D} \cap \overline{B_n(x_0)}$, $$ \| b(s, x, \mu) - b(s, y, \mu) \| \leq L_n \| x-y \| \quad \textrm{for almost all $(s, \omega) \in [0,T] \times \Omega$, } $$ \item $\exists C>0$ such that $\forall x,y\in \mathbb{R}^d$, $$ \| f(x) - f(y) \| \leq C \| x-y \| \big( 1 + \| x \|^{r-1} + \| y \|^{r-1} \big), \quad \| f(x) \| \leq C \big( 1 + \| x \|^r\big). $$ \end{itemize} \end{assumption} \begin{theorem} \label{thm:ExistUnique-LocLip-SSMVE} Let $\mathcal{D}\subseteq \mathbb{R}^d$ (not necessarily bounded) satisfy Assumption \ref{assumption:domain}. Let $r>1$ and $p>2r$. Let $W$ be a $d'$ dimensional Brownian motion. Let $\theta$, $b$, $\sigma$ and $f$ satisfy Assumption \ref{ass:ExistUnique-LocLip-SSMVE}. Then there exists a unique solution to the reflected McKean-Vlasov equation \eqref{eq:reflectedSSMVE} in $\mathcal{S}^p([0,T])$ (explicit $\mathcal{S}^p$-norm bounds are given below in \eqref{eq momoment bounds for existence uniqueness}). \end{theorem} The proof of this theorem is the content of the next section. \begin{rem} A nuanced detail of the following proof is the calculation of moments in the presence of potentially singular and non-integrable drifts. In \cite{imkeller2019Differentiability}, the authors studied processes where the drift term could have polynomial growth that was greater than the moments of the final solution. The conclusion was that time integrals of these drift terms ``smooth out'' the non-integrability. In this paper, we only require a one-sided Lipschitz condition in the spatial variable. However, we were unable to remove the polynomial growth condition for the self-stabilizing term $f$. This is because one needs integrability of the convolution of the law of the solution with the vector field $f$ before the self-stabilisation acts to push deviating paths back towards the mean of the distribution. \end{rem} \subsection{Proof of Theorem \ref{thm:ExistUnique-LocLip-SSMVE}} \label{subsec:ExistUniq-SSMVEs} This proof is inspired by \cite{BRTV}. Unlike the proof of Theorem \ref{thm:ExistUnique-LocLip-MVE}, which constructs a contraction operator on the space of measures, we construct a fixed point on a space of functions.
Each function gives rise to a McKean-Vlasov process by substituting it into the equation as a drift term. Then, the law of this McKean-Vlasov equation is convolved with the vector field $f$ to obtain a new function. This trick allows us to bypass the non-Lipschitz property of the functional $g(x, \mu):= f \ast \mu(x)$ while still exploiting the one-sided Lipschitz condition in the spatial variable. Our contributions in this section include developing this method to allow for diffusion terms that are not constant. This is novel, even before the addition of a domain of constraint. The non-constant diffusion complicates the computation of moment estimates which are key to this method. Of particular interest is Proposition \ref{prop:SSMVE-Moments}, which diverges from the previous literature. \begin{defn} \label{defn:WeirdNorm-Space} Let $r>1$. Let $x_0 \in \mathcal{D}$ and $L>0$ be as in Assumption \ref{ass:ExistUnique-LocLip-SSMVE}. For $g:[0,T]\times \mathcal{D} \to \mathbb{R}^d$, let $$ \| g \|_{[0,T],r}:= \sup_{t \in [0,T]} \left( \sup_{x\in \mathcal{D}} \frac{ \|g(t,x)\|}{1 + \|x-x_0\|^{r}} \right). $$ Let $\Lambda_{[0,T], r}$ be the space of all functions $g:[0,T] \times \mathcal{D} \to \mathbb{R}^d$ such that $\| g\|_{[0,T], r}<\infty$ and $$ \langle g(t,x) - g(t,y), x-y\rangle \leq L \|x-y\|^2\qquad \forall x,y\in \mathcal{D},~t\in[0,T]. $$ \end{defn} The space $\Lambda_{[0,T], r}$ is a Banach space. For $g\in \Lambda_{[0,T], r}$, consider the reflected McKean-Vlasov equation \begin{equation} \label{eq:reflected-SSMVE-b} \begin{split} X_t^{(g)} =& \theta + \int_0^t b(s, X_s^{(g)}, \mu_s^{(g)}) ds + \int_0^t \sigma(s, X_s^{(g)}, \mu_s^{(g)}) dW_s + \int_0^t g(s, X_s^{(g)}) ds - k_t^{(g)}, \\ |k^{(g)}|_t=& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^{(g)}) d|k^{(g)}|_s, \quad k_t^{(g)} = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^{(g)}) \textbf{n}(X_s^{(g)}) d|k^{(g)}|_s, \quad \mathbb{P}\Big[ X_t^{(g)} \in dx\Big] = \mu_t^{(g)}(dx). \end{split} \end{equation} By Theorem \ref{thm:ExistUnique-LocLip-MVE}, we know that there exists a unique solution to this McKean-Vlasov equation for every choice of $g\in \Lambda_{[0,T], r}$ and every $r> 1$. Further, we have the moment estimate that for $\varepsilon>0$ and $T_0\in [0,T-\varepsilon]$, \begin{align} \nonumber &\sup_{t\in [T_0, T_0+\varepsilon]} \mathbb{E}\Big[ \| X^{(g)}_t - x_0 \|^p \Big] \\ \nonumber &\leq \Bigg( 4\mathbb{E}\Big[ \| X_{T_0}^{(g)} - x_0 \|^p \Big] + \big( 4 (p-1)\big)^{p-1} \bigg( \Big( \int_{T_0}^{T_0+\varepsilon} \| b(r, x_0, \delta_{x_0} ) \| dr\Big)^p + \Big( \int_{T_0}^{T_0+\varepsilon} \| g(r, x_0) \| dr \Big)^p \bigg) \\ \label{eq:reflected-SSMVE-b:moment} & \quad + 2(p-1)^{p/2} \cdot (p-2)^{(p-2)/2} \cdot 4^{p/2} \Big( \int_{T_0}^{T_0+\varepsilon} \| \sigma(r, x_0, \delta_{x_0} ) \|^2 dr \Big)^{\tfrac{p}{2}} \Bigg) \cdot \exp\Big( \big( 4pL + 2p(p-1)L^2 \big) \varepsilon \Big). \end{align} Our challenge will be to find a $g$ such that $g(t, x) = f \ast \mu_t^{(g)}(x)$. \begin{defn} \label{dfn:Gamma-ContractionOp} Let $b$, $\sigma$ and $f$ satisfy Assumption \ref{ass:ExistUnique-LocLip-SSMVE}. Let $g\in \Lambda_{[0,T], r}$. Let $X^{(g)}$ be the unique solution to the McKean-Vlasov equation \eqref{eq:reflected-SSMVE-b} with law $\mu^{(g)}$. Let $\Gamma: \Lambda_{[0,T], r} \to C([0,T] \times \mathcal{D}; \mathbb{R}^d)$ be defined by $$ \Gamma[g](t, x):= f\ast \mu_t^{(g)}(x) = \mathbb{E}\big[ f(x - X_t^{(g)}) \big]. $$ \end{defn} Our goal is to demonstrate that the operator $\Gamma$ has a fixed point $g'$.
Then the McKean-Vlasov equation $X^{(g')}$ that solves \eqref{eq:reflected-SSMVE-b} will be the solution to the McKean-Vlasov equation \eqref{eq:reflectedSSMVE}. \begin{lemma} \label{lem:Gamma-WellDefined} Let $\Gamma$ be the operator defined in Definition \ref{dfn:Gamma-ContractionOp}. Then $\forall T_0\in[0,T]$ and $\forall \varepsilon>0$ such that $T_0+\varepsilon< T$, $\Gamma $ maps $\Lambda_{[T_0,T_0+\varepsilon], r}$ to $\Lambda_{[T_0,T_0+\varepsilon], r}$. \end{lemma} \begin{proof} Fix $T_0\in[0,T]$ and $\varepsilon>0$ appropriately. Let $g\in \Lambda_{[T_0,T_0 +\varepsilon], r}$. Then $\forall x, y\in \mathcal{D}$ and $\forall t\in[T_0,T_0 + \varepsilon]$, \begin{align*} \Big\langle x-y, \Gamma[g](t, x) - \Gamma[g](t, y)\Big \rangle = \int_{\mathcal{D}} \Big\langle x-y, f(x-u) - f(y-u) \Big\rangle d\mu_t^{(g)}(u) \leq L \| x-y \|^2. \end{align*} Secondly, \begin{align*} \Big\| \mathbb{E}\Big[ f(x - X_t^{(g)}) \Big] \Big\| \leq & 2C + \big(C+2^r\big) \Big( \| x - x_0 \|^r + \mathbb{E}\Big[ \| X_t^{(g)} - x_0 \|^r \Big] \Big) \\ \leq& \Big(2C + 2^{r+1}\Big) \Big( 1 + \| x-x_0 \|^r\Big) \Big( 1 + \mathbb{E}\Big[ \| X_t^{(g)} - x_0 \|^r\Big] \Big). \end{align*} By Assumption \ref{ass:ExistUnique-LocLip-SSMVE}, we know the process $X^{(g)}$ has finite moments of order $p>2r$. Thus \begin{equation} \label{eq:lem:Gamma-WellDefined-1} \Big\| \Gamma[g] \Big\|_{[T_0,T_0+\varepsilon], r} \leq \Big( 2C + 2^{r+1}\Big)\cdot \Big( 1 + \sup_{t\in[T_0,T_0+\varepsilon]} \mathbb{E}\Big[ \| X^{(g)}_t -x_0 \|^r \Big] \Big). \end{equation} Combining these with Equation \eqref{eq:reflected-SSMVE-b:moment} and using that \begin{align*} \Big( \int_{T_0}^{T_0+\varepsilon} \| g(s, x_0) \| ds \Big)^p \leq \varepsilon^p \| g\|_{[T_0, T_0+\varepsilon], r}^p, \end{align*} we obtain that \begin{align} \nonumber \Big\| \Gamma[g] \Big\|_{[T_0, T_0+\varepsilon], r} \leq& \Big(2C+2^{r+1}\Big)\Big( 1+ \sup_{t\in[0,T_0]} \mathbb{E}\Big[ \| X_t^{(g)} - x_0 \|^r \Big] \Big) \\ \nonumber &+ \Bigg( \big( 4 (p-1)\big)^{p-1} \bigg( \Big( \int_{T_0}^{T_0+\varepsilon} \| b(s, x_0, \delta_{x_0} ) \| ds\Big)^p + \Big( \int_{T_0}^{T_0+\varepsilon} \| g(s, x_0) \| ds \Big)^p \bigg) \\ \nonumber &\quad + 2(p-1)^{p/2} \cdot (p-2)^{(p-2)/2} \cdot 4^{p/2} \Big( \int_{T_0}^{T_0+\varepsilon} \| \sigma(s, x_0, \delta_{x_0} ) \|^2 ds \Big)^{\tfrac{p}{2}} \Bigg) \\ \label{eq:lem:Gamma-WellDefined-2} &\quad \cdot \exp\Big( \big( 4pL + 2p(p-1)L^2 \big) \varepsilon \Big). \end{align} Taking $T_0=0$ and $\varepsilon = T$, we get $\Big\| \Gamma[g]\Big\|_{[0,T], r}<\infty$ for any $g\in \Lambda_{[0,T], r}$. \end{proof} \begin{lemma} \label{lemma:Gamma-FirstContraction} Let $T_0\in [0,T]$ and let $\varepsilon>0$ such that $T_0+\varepsilon < T$. Let $\Gamma$ be the operator given in Definition \ref{dfn:Gamma-ContractionOp}. Then there exists a constant $K$ such that $\forall g_1, g_2 \in \Lambda_{[T_0,T_0+\varepsilon], r}$ with $g_1(t) = g_2(t)$ $\forall t\in[0,T_0]$ we have $$ \Big\| \Gamma[g_1] - \Gamma[g_2] \Big\|_{[T_0,T_0+\varepsilon], r} \leq \| g_1 - g_2 \|_{[T_0, T_0 + \varepsilon], r} K \sqrt{\varepsilon} e^{K \varepsilon}. $$ \end{lemma} \begin{proof} Let $g_1,g_2:[0,T] \times \mathcal{D} \to \mathbb{R}^d$ be such that $g_1(t) = g_2(t)$ for $t\in[0,T_0]$. Let $X^{(g_1)}$ and $X^{(g_2)}$ be solutions to Equation \eqref{eq:reflected-SSMVE-b}.
Firstly, for $t\in [T_0, T_0 + \varepsilon]$ we have, applying It\^o's formula, \begin{align*} \| X_t^{(g_1)} &- X_t^{(g_2)} \|^2 \\ = & 2\int_{T_0}^t \Big\langle X_s^{(g_1)} - X_s^{(g_2)}, b(s, X_s^{(g_1)}, \mu_s^{(g_1)}) - b(s, X_s^{(g_2)}, \mu_s^{(g_2)}) \Big\rangle ds \\ &+ 2\int_{T_0}^t \Big\langle X_s^{(g_1)} - X_s^{(g_2)}, g_1(s, X_s^{(g_1)}) - g_1(s, X_s^{(g_2)}) \Big\rangle ds + 2\int_{T_0}^t \Big\langle X_s^{(g_1)} - X_s^{(g_2)}, g_1(s, X_s^{(g_2)}) - g_2(s, X_s^{(g_2)}) \Big\rangle ds \\ &+2\int_{T_0}^t \Big\langle X_s^{(g_1)} - X_s^{(g_2)}, \Big(\sigma(s, X_s^{(g_1)}, \mu_s^{(g_1)}) - \sigma(s, X_s^{(g_2)}, \mu_s^{(g_2)}) \Big)dW_s \Big\rangle \\ &+\int_{T_0}^t \Big\| \sigma(s, X_s^{(g_1)}, \mu_s^{(g_1)}) - \sigma(s, X_s^{(g_2)}, \mu_s^{(g_2)}) \Big\|^2 ds - 2\int_{T_0}^t \Big\langle X_s^{(g_1)} - X_s^{(g_2)}, dk_s^{(g_1)} - dk_s^{(g_2)}\Big\rangle. \end{align*} Taking expectations, a supremum over time and applying Lemma \ref{lem:NormalToDomain}, we get \begin{align*} \sup_{t\in[T_0,T_0+\varepsilon ]} \mathbb{E}\Big[ \| X_t^{(g_1)} - X_t^{(g_2)} \|^2 \Big]\leq& (6L + 4L^2)\int_{T_0}^{T_0+\varepsilon} \sup_{s\in[T_0,t]} \mathbb{E}\Big[ \| X_s^{(g_1)} - X_s^{(g_2)} \|^2 \Big] dt \\ &+2\int_{T_0}^{T_0+\varepsilon} \mathbb{E}\Big[ \| X_t^{(g_1)} - X_t^{(g_2)} \| \cdot \| g_1 - g_2\|_{[T_0,t], r} \Big( 1 + \| X_t^{(g_2)} - x_0 \|^{r} \Big) \Big] dt. \end{align*} An application of Gr\"onwall's Inequality yields \begin{align} \nonumber \sup_{t\in[T_0,T_0+\varepsilon]} &\mathbb{E}\Big[ \| X_t^{(g_1)} - X_t^{(g_2)} \|^2 \Big] \\ \label{eq:prop:Gamma-FirstContraction1.1} &\leq 8\| g_1 - g_2\|_{[T_0,T_0+\varepsilon], r}^2 \cdot \varepsilon \cdot e^{(8L^2+12L)\varepsilon} \cdot \bigg( 1 + \sup_{t\in [T_0, T_0+\varepsilon]} \mathbb{E}\Big[ \| X^{(g_2)}_t - x_0 \|^{2r} \Big] \bigg). \end{align} Let $x\in \mathcal{D}$. Using the polynomial growth assumption on $f$, we have that \begin{align} \nonumber \Big\| \mathbb{E}\Big[ &f(x - X_t^{(g_1)}) - f(x - X_t^{(g_2)}) \Big] \Big\| \\ \nonumber \leq& (C+2^r) \mathbb{E}\Big[ \| X_t^{(g_1)} - X_t^{(g_2)} \| \cdot \big( 1 + \| x - x_0 \|^r\big) \cdot \big( 1 + \| X_t^{(g_1)} - x_0 \|^{r} + \| X_t^{(g_2)} - x_0 \|^{r} \big) \Big] \\ \label{eq:prop:Gamma-FirstContraction2.1} \leq& (C+2^r) \cdot \Big( 1 + \|x - x_0 \|^r\Big) \mathbb{E}\Big[ \| X_t^{(g_1)} - X_t^{(g_2)} \|^2\Big]^{\tfrac{1}{2}} \cdot \mathbb{E}\Big[ \Big( 1 + \| X_t^{(g_1)} - x_0 \|^{r} + \| X_t^{(g_2)} - x_0 \|^{r} \Big)^2 \Big]^{\tfrac{1}{2}}. \end{align} By Assumption \ref{ass:ExistUnique-LocLip-SSMVE} and \eqref{eq:reflected-SSMVE-b:moment} we have that $$ \sup_{t\in [0,T]} \mathbb{E}\Big[ \| X_t^{(g_1)} - x_0 \|^{2r} \Big], \quad \sup_{t\in [0,T]} \mathbb{E}\Big[ \| X_t^{(g_2)} - x_0 \|^{2r} \Big] <\infty. $$ Further, these bounds are uniform and depend only on $b$ and $\sigma$. Substituting Equation \eqref{eq:prop:Gamma-FirstContraction1.1} into Equation \eqref{eq:prop:Gamma-FirstContraction2.1}, we get \begin{align} \nonumber \Big\|& \Gamma[g_1] - \Gamma[g_2] \Big\|_{[T_0,T_0+\varepsilon], r} = \sup_{t\in [T_0,T_0+\varepsilon]} \sup_{x\in \mathcal{D}} \frac{\Big\| \mathbb{E}\Big[ f(x - X_t^{(g_1)}) - f(x - X_t^{(g_2)}) \Big] \Big\| }{1 + \| x - x_0\|^r} \\ \label{eq:prop:Gamma-FirstContraction} &\leq (C+2^r) 3\sqrt{8} \| g_1 - g_2\|_{[T_0,T_0+\varepsilon], r} \sqrt{\varepsilon} e^{(4L^2+6L)\varepsilon} \Bigg( 1+ \sup_{t\in [T_0, T_0+\varepsilon]} \mathbb{E}\Big[ \| X_t^{(g_1)} - x_0 \|^{2r} + \| X_t^{(g_2)} - x_0 \|^{2r} \Big] \Bigg) .
\end{align} \end{proof} Next, our goal is to establish a subset on which this operator is a contraction operator. \begin{defn} Let $K>0$. For $T>0$ and $r>1$, we define $$ \Lambda_{[0,T], r, K}:= \Big\{ g\in \Lambda_{[0,T], r}: \|g\|_{[0,T], r}\leq K\Big\}. $$ \end{defn} Our goal is to choose $T$ and $K$ so that $\Gamma$ is a contraction operator when restricted to $\Lambda_{[0,T], r, K}$. \begin{prop} \label{prop:SSMVE-ExistenceLocalSolution} Let $\Gamma:\Lambda_{[0,T], r} \to \Lambda_{[0,T], r}$ be as defined in Definition \ref{dfn:Gamma-ContractionOp}. Then $\exists K_1, \varepsilon>0$ such that, $$ \Gamma\Big[ \Lambda_{[0, \varepsilon], r, K_1} \Big] \subset \Lambda_{[0,\varepsilon], r, K_1}, \qquad\textrm{and}\qquad \forall g_1, g_2 \in \Lambda_{[0,\varepsilon], r, K_1}\quad \Big\| \Gamma[g_1] - \Gamma[ g_2] \Big\|_{[0,\varepsilon], r} \leq \frac{1}{2} \Big\| g_1 - g_2 \Big\|_{[0, \varepsilon], r}. $$ As such, there exists a unique solution to Equation \eqref{eq:reflectedSSMVE} on the interval $[0,\varepsilon]$. \end{prop} \begin{proof} Let $\varepsilon>0$. Let $g\in \Lambda_{[0,\varepsilon], r, K_1}$. Taking Equation \eqref{eq:lem:Gamma-WellDefined-2} with $T_0=0$ provides \begin{align*} &\Big\| \Gamma[g] \Big\|_{[0, \varepsilon], r} \\ &\leq \Big(2C+2^{r+1}\Big)\Big( 1+ \mathbb{E}\Big[ |\theta - x_0|^r \Big] \Big) + \Bigg( \big( 4 (p-1)\big)^{p-1} \bigg( \Big( \int_{0}^{\varepsilon} | b(s, x_0, \delta_{x_0} )| ds\Big)^p + \Big( \varepsilon K_1 \Big)^p \bigg) \\ &\quad + 2(p-1)^{p/2} \cdot (p-2)^{(p-2)/2} \cdot 4^{p/2} \Big( \int_{0}^{\varepsilon} |\sigma(s, x_0, \delta_{x_0} ) |^2 ds \Big)^{\tfrac{p}{2}} \Bigg) \cdot \exp\Big( \big( 4pL + 2p(p-1)L^2 \big) \varepsilon \Big). \end{align*} Choose $K_1= 2(2C + 2^{r+1}) \Big( 1+ \mathbb{E}\Big[ \| \theta - x_0 \|^p \Big]\Big)$. We have the limit $$ \lim_{\varepsilon \to 0} \Big( \int_{0}^{\varepsilon} \| b(s, x_0, \delta_{x_0} ) \| ds\Big)^p + \Big( \int_{0}^{\varepsilon} \| \sigma(s, x_0, \delta_{x_0} ) \|^2 ds \Big)^{\tfrac{p}{2}} = 0. $$ Then we can choose $\varepsilon'>0$ such that $ \big\| \Gamma[g] \big\|_{[0, \varepsilon'], r}< K_1. $ Secondly, using Equation \eqref{eq:prop:Gamma-FirstContraction} we choose $\varepsilon''>0$ such that $$ \Big\| \Gamma[g_1] - \Gamma[g_2] \Big\|_{[0,\varepsilon''], r} < \frac{\| g_1 - g_2\|_{[0,\varepsilon''],r}}{2}. $$ We emphasise that the choice of $\varepsilon = \min\{ \varepsilon', \varepsilon''\}$ is dependent on the choice of $K_1$. Define $d:\Lambda_{[0,\varepsilon], r} \times \Lambda_{[0,\varepsilon], r} \to \mathbb{R}^+$ to be the metric $d(g_1, g_2) = \| g_1 - g_2\|_{[0,\varepsilon], r}$. The metric space $(\Lambda_{[0,\varepsilon], r, K_1}, d)$ is non-empty, complete and $\Gamma:\Lambda_{[0,\varepsilon], r, K_1} \to \Lambda_{[0,\varepsilon], r, K_1}$ is a contraction operator. Therefore, $\exists g'\in \Lambda_{[0,\varepsilon], r, K_1}$ such that $\Gamma[g'] = g'$. Thus $\forall t\in [0,\varepsilon]$, $$ g'\Big(t, X_t^{(g')} \Big) = f\ast \mu_t^{(g')} (X_t^{(g')}). $$ Substituting this into \eqref{eq:reflected-SSMVE-b}, we obtain \eqref{eq:reflectedSSMVE}. Thus a solution to \eqref{eq:reflectedSSMVE} exists in $\mathcal{S}^p([0,\varepsilon])$. \end{proof} Our challenge now is to find a solution over the whole interval $[0,T]$. \begin{prop} \label{prop:SSMVE-Moments} Let $\mathcal{D}$ satisfy Assumption \ref{assumption:domain}. Let $r>1$ and $p>2r$. Let $W$ be a $d'$ dimensional Brownian motion. Let $b$, $\sigma$ and $f$ satisfy Assumption \ref{ass:ExistUnique-LocLip-SSMVE}. 
Suppose that a solution $X$ to the McKean-Vlasov equation \eqref{eq:reflectedSSMVE} exists in $\mathcal{S}^p([0,T_0])$ for some $0<T_0<T$. Then there exists a constant $K_2=K_2(p, T)$ such that $$ \bigg(\sup_{t\in[0,T_0]} \mathbb{E}\Big[ \| X_t - x_0 \|^p \Big]\bigg) \vee \bigg(\mathbb{E}\Big[ \| X - x_0\|_{\infty, [0,T_0]}^p \Big]\bigg)< K_2. $$ \end{prop} The challenge of this proof is that the symmetry trick for establishing second moments (see Equation \eqref{eq:prop:SSMVE-Moments1.0}) does not hold for higher moments. However, if we try to bypass this using the methods of \cite{HIP}, the non-constant diffusion terms yield integrals that blow up. Arguing by induction on $m$, we fix this by considering $$ \sup_{t\in[0,T]} \Big( \mathbb{E}\Big[ \| X_t - x_0\|^{2m} \Big] + \mathbb{E}\Big[ \| X_t - \tilde{X}_t\|^{2m} \Big] \Big), $$ and demonstrating via a Gr\"onwall argument that this is finite, even though a similar argument would not work for either of these terms on their own. \begin{proof} Suppose that $t\in[0,T_0]$. Let $(X_t, k_t)$, $(\tilde{X_t}, \tilde{k_t})$ and $(\overline{X_t}, \overline{k_t})$ be independent, identically distributed solutions of Equation \eqref{eq:reflectedSSMVE}. Consider the two processes \begin{align*} \| X_t - x_0 \|^2 = \| \theta &- x_0 \|^2 + 2\int_0^t \Big\langle X_s - x_0, b(s, X_s, \mu_s) \Big\rangle ds + 2\int_0^t \Big\langle X_s - x_0, \sigma(s, X_s, \mu_s) dW_s \Big\rangle \\ &+ \int_0^t \Big\| \sigma(s, X_s, \mu_s) \Big\|^2 ds + 2\int_0^t \Big\langle X_s - x_0, \overline{\mathbb{E}}\Big[ f(X_s - \overline{X_s}) \Big] \Big\rangle ds - 2\int_0^t \Big\langle X_s - x_0, dk_s \Big\rangle, \\ \| X_t - \tilde{X_t} \|^2 = \| \theta &- \tilde{\theta} \|^2 + 2\int_0^t \Big\langle X_s - \tilde{X_s}, b(s, X_s, \mu_s) - b(s, \tilde{X_s}, \mu_s) \Big\rangle ds \\ &+ 2\int_0^t \Big\langle X_s - \tilde{X_s}, \sigma(s, X_s, \mu_s)dW_s - \sigma(s, \tilde{X_s}, \mu_s)d\tilde{W}_s \Big\rangle \\ &+ \int_0^t \Big( \Big\| \sigma(s, X_s, \mu_s) \Big\|^2 + \Big\| \sigma(s, \tilde{X_s}, \mu_s) \Big\|^2 \Big) ds \\ &+ 2\int_0^t \Big\langle X_s - \tilde{X_s}, \overline{\mathbb{E}}\Big[ f(X_s - \overline{X_s}) - f(\tilde{X_s} - \overline{X_s}) \Big] \Big\rangle ds - 2\int_0^t \Big\langle X_s - \tilde{X_s}, dk_s - d\tilde{k_s} \Big\rangle. \end{align*} We remark that since $f$ is odd and $X_s$, $\overline{X_s}$ are independent and identically distributed, we have the bound \begin{equation} \label{eq:prop:SSMVE-Moments1.0} \mathbb{E}\Big[ \Big\langle X_s - x_0, \overline{\mathbb{E}}\Big[ f( X_s - \overline{X_s}) \Big] \Big\rangle \Big] \leq L\cdot \mathbb{E}\Big[ \overline{\mathbb{E}}\Big[ \| X_s - \overline{X_s}\|^2 \Big]\Big]. \end{equation} Taking expectations of both processes (and no longer distinguishing between the integral operators $\mathbb{E}$ and $\tilde{\mathbb{E}}$) and adding them together, we get \begin{align*} \mathbb{E}\Big[ \| X_t - x_0\|^2 + \| X_t - \tilde{X_t}\|^2 \Big] \leq& \mathbb{E}\Big[ \|\theta - x_0\|^2\Big] + \mathbb{E}\Big[ \|\theta - \tilde{\theta}\|^2 \Big] \\ &+ (4L+12L^2) \int_0^t \mathbb{E}\Big[ \|X_s - x_0\|^2 \Big] ds + 2\int_0^t \mathbb{E}\Big[ \|X_s - x_0\|\Big] \cdot \|b(s, x_0, \delta_{x_0})\| ds \\ & + 6\int_0^t \| \sigma(s, x_0, \delta_{x_0})\|^2 ds + 6L \int_0^t \mathbb{E}\Big[ \|X_s - \tilde{X_s}\|^2 \Big] ds.
\end{align*} Taking a supremum over $t\in[0,T_0]$, then applying Young's inequality followed by Gr\"onwall's inequality, we obtain \begin{align*} \sup_{t\in[0,T_0]} \mathbb{E}\Big[ \| X_t - x_0 \|^2 + \| X_t - \tilde{X_t} \|^2 \Big] \leq&2\Bigg( \mathbb{E}\Big[ \| \theta - x_0 \|^2 \Big] + \mathbb{E}\Big[ \| \theta - \tilde{\theta} \|^2 \Big] \\ &+ \Big( \int_0^T \Big\| b(s, x_0, \delta_{x_0}) \Big\| ds\Big)^2 + \int_0^T \Big\| \sigma(s, x_0, \delta_{x_0})\Big\|^2 ds \Bigg) e^{(4L + 12L^2) T}. \end{align*} We proceed via induction. Let $$ Y_t = X_t - \mathbb{E}[ X_t] $$ be the centred process. Then \begin{equation} \label{eq:prop:SSMVE-Moments1.1} \mathbb{E}\Big[ \| X_t - x_0 \|^{2m} \Big] \leq 2^{2m-1} \Big( \mathbb{E}\Big[ \| X_t - x_0 \|^2\Big]^{m} + \mathbb{E}\Big[ \| Y_t \|^{2m}\Big] \Big) . \end{equation} Let $\xi$ and $\tilde{\xi}$ be independent copies of a scalar random variable with mean $0$. Then by the Binomial Theorem, we have that for $m\in \mathbb{N}$, \begin{align*} \mathbb{E}\Big[ ( \xi - \tilde{\xi})^{2m} \Big] = & \sum_{k=0}^{2m} (-1)^k \binom{2m}{k} \mathbb{E}\Big[ \xi^k\Big] \mathbb{E}\Big[ \xi^{2m - k}\Big], \end{align*} and therefore from \cite{HIP}*{Proposition 2.12} \begin{equation}\label{eq:prop:SSMVE-Moments1.2} 2\mathbb{E}\Big[ \| Y_t \|^{2m} \Big] \leq c(m,d) \Big( \mathbb{E}\Big[ \| X_t - \tilde{X}_t\|^{2m}\Big] + \Big( 1 + \mathbb{E}\Big[ \| Y_t\|^{2m-2} \Big] \Big)^2 \Big), \end{equation} for a constant $c(m,d)$ depending only on $m$ and $d$. In what follows we write $c(m,d,L)$ for a constant possibly changing on each line, but dependent only on $m$, $d$ and the Lipschitz constant $L$. We combine Equations \eqref{eq:prop:SSMVE-Moments1.1} and \eqref{eq:prop:SSMVE-Moments1.2} to get \begin{align} \nonumber \mathbb{E}\Big[ \| X_t &- x_0 \|^{2m} \Big] + \mathbb{E}\Big[ \| X_t - \tilde{X_t} \|^{2m} \Big] \\ & \leq c(m,d,L) \Big( \mathbb{E}\Big[ \| X_t - x_0 \|^2\Big]^m + \Big( 1+ \mathbb{E}\Big[ \| Y_t \|^{2m-2}\Big] \Big)^2 \Big) \label{eq:prop:SSMVE-Moments2.1} + c(m,d,L)\mathbb{E}\Big[ \| X_t - \tilde{X_t} \|^{2m} \Big]. \end{align} We use It\^o's formula to get that \begin{align*} \| X_t - \tilde{X}_t\|^{2m} =& \| \theta - \tilde{\theta}\|^{2m} + 2m \int_0^t \| X_s - \tilde{X}_s\|^{2m-2} \Big\langle X_s-\tilde{X}_s , b(s, X_s, \mu_s) - b(s, \tilde{X}_s, \mu_s) \Big\rangle ds \\ &+ 2m \int_0^t \| X_s - \tilde{X}_s\|^{2m-2} \Big\langle X_s - \tilde{X}_s, \overline{\mathbb{E}}\Big[ f( X_s - \overline{X}_s) - f(\tilde{X_s} - \overline{X}_s) \Big] \Big\rangle ds \\ &+ 2m \int_0^t \| X_s - \tilde{X}_s\|^{2m-2} \Big\langle X_s - \tilde{X}_s, \sigma(s, X_s, \mu_s) dW_s - \sigma(s, \tilde{X}_s, \mu_s) d\tilde{W}_s \Big\rangle \\ +m(2m-1)& \int_0^t \| X_s - \tilde{X}_s\|^{2m-2} \Big( \| \sigma(s, X_s, \mu_s) \|^2 + \| \sigma(s, \tilde{X}_s, \mu_s) \|^2 \Big) ds - 2m\int_0^t \| X_s - \tilde{X}_s\|^{2m-2} \Big\langle X_s - \tilde{X}_s, dk_s - d\tilde{k}_s \Big\rangle. \end{align*} Now for any $K>0$, \begin{align*} K& \sup_{t\in[0,T]} \mathbb{E}\Bigg[ \int_0^t \| X_s - \tilde{X}_s \|^{2m-2} \Big( \| \sigma(s, X_s, \mu_s) \|^2 + \| \sigma(s, \tilde{X}_s, \mu_s) \|^2 \Big) ds \Bigg] \\ \leq& 12L^2 K \int_0^T \mathbb{E}\Big[ \| X_s - \tilde{X}_s \|^{2m} \Big] ds + \tfrac{12L^2K}{m} \int_0^T \mathbb{E}\Big[ \| X_s - x_0\|^{2m} \Big] ds \\ &+\sup_{t\in[0,T]} \frac{ \mathbb{E}\Big[ \| X_t - \tilde{X}_t\|^{2m} \Big] }{2} + \big[ 2(m-1)\big]^{m-1} \cdot \Big[ \tfrac{6K}{m} \Big]^{m} \cdot \Big( \int_0^T \Big\| \sigma(s, x_0, \delta_{x_0}) \Big\|^2 ds \Big)^m.
\end{align*} Applying this with Equation \eqref{eq:prop:SSMVE-Moments2.1} yields \begin{align*} &\sup_{t\in[0,T]} \Big( \mathbb{E}\Big[ \| X_t - x_0 \|^{2m} \Big] + \mathbb{E}\Big[ \| X_t - \tilde{X_t} \|^{2m} \Big] \Big) \\ &\leq c(m,d,L)\Bigg( \mathbb{E}\Big[ \| X_t - x_0 \|^2\Big]^m + \Big( 1+ \mathbb{E}\Big[ \| Y_t \|^{2m-2}\Big] \Big)^2 + \mathbb{E}\Big[ \| \theta - \tilde{\theta}\|^{2m} \Big] +\Big( \int_0^T \| \sigma(s, x_0, \delta_{x_0}) \|^2 ds \Big)^m \\ &+ \int_0^T \sup_{s\in[0,t]} \mathbb{E}\Big[ \| X_s - \tilde{X}_s \|^{2m} \Big] + \mathbb{E}\Big[ \| X_s - x_0\|^{2m} \Big] dt \Bigg) + \frac{1}{2}\sup_{t\in[0,T]} \mathbb{E}\Big[ \| X_t - \tilde{X}_t\|^{2m}\Big] . \end{align*} Combining all terms together, we get that there exists a constant $c=c(m,d,L,T)$, dependent only on $m, d, L, T$ and not on $T_0$, such that \begin{align*} \sup_{t\in[0,T_0]} \mathbb{E}\Big[ \| X_t - x_0 \|^{2m} + \| X_t - \tilde{X_t} \|^{2m} \Big] \leq& c\Bigg(1+ \int_0^{T_0} \sup_{s\in[0,t]} \mathbb{E}\Big[ \| X_s - x_0 \|^{2m} + \| X_s - \tilde{X_s} \|^{2m} \Big] dt \Bigg). \end{align*} Thus, via Gr\"onwall's inequality, $$ \sup_{t\in[0,T_0]} \mathbb{E}\Big[ \| X_t - x_0 \|^{2m} + \| X_t - \tilde{X_t} \|^{2m} \Big] \leq c e^{c T_0}< c e^{c T}. $$ Hence, by induction we have finite moment estimates for all $m\in \mathbb{N}$ such that $2m\leq p$. In particular, this is true for $2m\geq 2r$. For sharp moment estimates, we use the methods from the proof of Theorem \ref{thm:ExistUnique-LocLip-Ref} to get \begin{align} \mathbb{E}\Big[ \| X - x_0 \|_{\infty, [0,T_0]}^p \Big] \lesssim& \mathbb{E}\Big[ \| \theta - x_0 \|^p\Big] + \Big(\int_0^{T_0} \| b(s, x_0, \delta_{x_0}) \| ds \Big)^p \nonumber \\ &+ \Big( \int_0^{T_0} \| \sigma(s, x_0, \delta_{x_0}) \|^2 ds\Big)^{p/2} + \Big( \int_0^{T_0} \Big\| \tilde{\mathbb{E}}\Big[ f(\tilde{X_s} - x_0) \Big] \Big\| ds \Big)^p \nonumber \\ \lesssim& \mathbb{E}\Big[ \| \theta - x_0 \|^p \Big] + \Big(\int_0^{T} \| b(s, x_0, \delta_{x_0}) \| ds \Big)^p \nonumber \\ &+ \Big( \int_0^{T} \| \sigma(s, x_0, \delta_{x_0}) \|^2 ds\Big)^{p/2} + \Big( T C \sup_{t\in[0,T_0]} \mathbb{E}\Big[ \| X_t - x_0 \|^r + 1 \Big] \Big)^p. \label{eq momoment bounds for existence uniqueness} \end{align} \end{proof} Finally, we are in a position to prove Theorem \ref{thm:ExistUnique-LocLip-SSMVE}. \begin{proof}[Proof of Theorem \ref{thm:ExistUnique-LocLip-SSMVE}.] By Proposition \ref{prop:SSMVE-ExistenceLocalSolution}, we have that a unique solution to Equation \eqref{eq:reflectedSSMVE} exists on the interval $[0,\varepsilon]$. Let $\delta>0$ and $g\in \Lambda_{[\varepsilon, \varepsilon + \delta],r}$. Then again by \eqref{eq:lem:Gamma-WellDefined-2} \begin{align*} \Big\| \Gamma[g] \Big\|_{[\varepsilon, \varepsilon+\delta], r} \leq& \Big(2C+2^{r+1}\Big)\Big( 1+ \sup_{t\in[0,\varepsilon]} \mathbb{E}\Big[ \| X_t - x_0 \|^r \Big] \Big) \\ &+ \Bigg( \big( 4 (p-1)\big)^{p-1} \bigg( \Big( \int_{\varepsilon}^{\varepsilon+\delta} \| b(s, x_0, \delta_{x_0} )\| ds\Big)^p + \Big( \delta \|g\|_{[\varepsilon, \varepsilon+\delta],r} \Big)^p \bigg) \\ &\quad + 2(p-1)^{p/2} \cdot (p-2)^{(p-2)/2} \cdot 4^{p/2} \Big( \int_{\varepsilon}^{\varepsilon+\delta} \| \sigma(s, x_0, \delta_{x_0} ) \|^2 ds \Big)^{\tfrac{p}{2}} \Bigg) \\ &\quad \cdot \exp\Big( \big( 4pL + 2p(p-1)L^2 \big) \delta \Big). \end{align*} By Proposition \ref{prop:SSMVE-Moments}, we know that $$ 2\Big(2C+2^{r+1}\Big)\Big( 1+ \sup_{t\in[0,\varepsilon]} \mathbb{E}\Big[ \| X_t - x_0 \|^r \Big] \Big)< K_5, $$ for some $K_5$ independent of $\varepsilon$.
Then for $g$ with $\| g\|_{[\varepsilon, \varepsilon+\delta],r}<K_5$, we get \begin{align*} \Big\| \Gamma[g] \Big\|_{[\varepsilon, \varepsilon+\delta], r} \leq& \tfrac{K_5}{2} + \Bigg( \big( 4 (p-1)\big)^{p-1} \bigg( \Big( \int_{\varepsilon}^{\varepsilon+\delta} \| b(s, x_0, \delta_{x_0} )\| ds\Big)^p + \big( \delta K_5 \big)^p \bigg) \\ &\quad + 2(p-1)^{p/2} \cdot (p-2)^{(p-2)/2} \cdot 4^{p/2} \Big( \int_{\varepsilon}^{\varepsilon+\delta} \| \sigma(s, x_0, \delta_{x_0} ) \|^2 ds \Big)^{\tfrac{p}{2}} \Bigg) \\ &\quad \cdot \exp\Big( \big( 4pL + 2p(p-1)L^2 \big) \delta \Big). \end{align*} By the uniform continuity of the mappings \begin{align*} \delta \mapsto& \int_{\varepsilon}^{\varepsilon+\delta} \| b(s, x_0, \delta_{x_0} )\| ds \quad \mbox{and}\quad \delta \mapsto \int_{\varepsilon}^{\varepsilon+\delta} \| \sigma(s, x_0, \delta_{x_0} )\|^2 ds, \end{align*} we choose $\delta'>0$ (independently of $\varepsilon$) so that $\big\|\, \Gamma[g]\, \big\|_{[\varepsilon, \varepsilon+\delta'], r}<K_5$. Next, we use Equation \eqref{eq:prop:Gamma-FirstContraction} to get \begin{align*} \Big\|& \Gamma[g_1] - \Gamma[g_2] \Big\|_{[\varepsilon,\varepsilon+\delta], r} \\ &\leq (C+2^r) 3\sqrt{8} \| g_1 - g_2\|_{[\varepsilon,\varepsilon+\delta], r} \sqrt{\delta} e^{(4L^2+6L)\delta} \Bigg( 1+ \sup_{t\in [\varepsilon, \varepsilon+\delta]} \mathbb{E}\Big[ \| X_t^{(g_1)} - x_0 \|^{2r} + \|X_t^{(g_2)} - x_0 \|^{2r}\Big] \Bigg) . \end{align*} Next, using Equation \eqref{eq:reflected-SSMVE-b:moment}, we get \begin{align*} \Big\| \Gamma[g_1] - \Gamma[g_2] \Big\|_{[\varepsilon,\varepsilon+\delta], r} &\leq (C+2^r) 3\sqrt{8} \| g_1 - g_2\|_{[\varepsilon,\varepsilon+\delta], r} \sqrt{\delta} e^{(4L^2+6L)\delta} \Bigg( 1+ 8\sup_{t\in[0,\varepsilon]} \mathbb{E}\Big[ \| X_{t} - x_0\|^{2r} \Big] \\ &\quad +2\big( 4 (2r-1)\big)^{2r-1} \bigg( \Big( \int_{\varepsilon}^{\varepsilon+\delta} \| b(s, x_0, \delta_{x_0} )\| ds \Big)^{2r} + \big( \delta K_5 \big)^{2r} \bigg) \\ &\quad +4(2r-1)^{r} \cdot (2r-2)^{r-1} \cdot 4^{r} \Big( \int_{\varepsilon}^{\varepsilon+\delta} \|\sigma(s, x_0, \delta_{x_0} ) \|^2 ds \Big)^{r} \Bigg) e^{\big(8rL + 4r(2r-1)L^2 \big) \delta}. \end{align*} Finally, by Proposition \ref{prop:SSMVE-Moments}, we choose $\delta''>0$ (independently of $\varepsilon$) such that $$ \Big\| \Gamma[g_1] - \Gamma[g_2] \Big\|_{[\varepsilon,\varepsilon+\delta''], r} \leq \frac{1}{2} \| g_1 - g_2\|_{[\varepsilon,\varepsilon+\delta''], r}. $$ Let $\delta = \min\{ \delta', \delta''\}$. Define $d:\Lambda_{[\varepsilon,\varepsilon + \delta], r} \times \Lambda_{[\varepsilon,\varepsilon + \delta], r} \to \mathbb{R}^+$ to be the metric $d(g_1, g_2) = \| g_1 - g_2\|_{[\varepsilon,\varepsilon + \delta], r}$. The metric space $(\Lambda_{[\varepsilon,\varepsilon + \delta], r, K_5}, d)$ is non-empty, complete and $\Gamma:\Lambda_{[\varepsilon,\varepsilon + \delta], r, K_5} \to \Lambda_{[\varepsilon,\varepsilon + \delta], r, K_5}$ is a contraction operator. Therefore, $\exists g'\in \Lambda_{[\varepsilon,\varepsilon + \delta], r, K_5}$ such that $\Gamma[g'] = g'$. Thus $\forall t\in [\varepsilon,\varepsilon + \delta]$, $$ g'\big(t, X_t^{(g')} \big) = f\ast \mu_t^{(g')} \big(X_t^{(g')}\big). $$ Repeating this argument and concatenating, we obtain a function $g\in \Lambda_{[0,T], r}$ such that $\forall t\in[0,T]$ $$ g\big(t, X_t^{(g)} \big) = f\ast \mu_t^{(g)} \big(X_t^{(g)}\big). $$ Substituting this into Equation \eqref{eq:reflected-SSMVE-b}, we obtain Equation \eqref{eq:reflectedSSMVE} over the interval $[0,T]$.
\end{proof} \subsection{Propagation of chaos} \label{sec:PoC} We are interested in the ways in which the dynamics of a single equation within a system of reflected interacting equations of the form \eqref{eq:ParticleSystem} converge to the dynamics of the reflected McKean-Vlasov equation. Let $N\in \mathbb{N}$ and let $i\in\{1, ..., N\}$. We now study the law of a solution to the interacting particle system \begin{equation} \label{eq:reflectedSSPS} \begin{split} X_t^{i, N} =& \theta^{i} + \int_0^t b(s, X_s^{i, N}, \mu_s^N) ds + \int_0^t \sigma(s, X_s^{i, N}, \mu_s^N) dW_s^{i, N} + \int_0^t f \ast \mu_s^N (X_s^{i, N}) ds - k_t^{i, N}, \\ |k^{i, N}|_t=& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^{i, N}) d|k^{i, N}|_s, \qquad k_t^{i, N} = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^{i, N}) \textbf{n}(X_s^{i, N}) d|k^{i, N}|_s, \qquad \mu_t^N = \tfrac{1}{N} \sum_{j=1}^N \delta_{X_t^{j, N}}. \end{split} \end{equation} We demonstrate Propagation of Chaos (PoC), that is, over a finite time interval $[0,T]$ the trajectories of the particle system converge on average to those of the McKean-Vlasov equation. \begin{theorem}[Propagation of Chaos (PoC)] \label{thm:PoC} Let $\mathcal{D}\subset \mathbb{R}^d$ satisfy Assumption \ref{assumption:domain}. Let $\theta^i$ be independent identically distributed copies of $\theta$, and let $\theta$, $b$, $\sigma$ and $f$ satisfy Assumption \ref{ass:ExistUnique-LocLip-SSMVE}. Let $W^{i, N}$ be a sequence of independent Brownian motions taking values in $\mathbb{R}^{d'}$. Additionally, suppose that $p>\max \{ 2r, 4\}$. Let $X^i_t$ be a sequence of strong solutions to Equation \eqref{eq:reflectedSSMVE} driven by the Brownian motion $W^{i,N}$, and with initial conditions $\theta^i$. Let $X_t^{i, N}$ be the solution to the particle system \eqref{eq:reflectedSSPS}. Then there exists a constant $c=c(T)>0$, depending only on $T$, such that \begin{equation} \sup_{t\in[0,T]} \mathbb{E}\Big[ \| X^{i,N}_t - X^i_t \|^2\Big] \leq c(T) \begin{cases} N^{-1/2},~&d<4, \\ N^{-1/2}\log N,~&d=4, \\ N^{\frac{-2}{d+4}},~&d>4. \end{cases} \end{equation} \end{theorem} \begin{proof} Firstly, we assume that the Brownian motion driving the $i$-th copy of the McKean-Vlasov equation \eqref{eq:reflectedSSMVE} coincides with the Brownian motion $W^{i,N}$ driving the $i$-th particle in \eqref{eq:reflectedSSPS} (i.e.~they have correlation 1). Using It\^o's formula, summing over $i$ and taking expectations, \begin{align} \nonumber \sum_{i=1}^N \mathbb{E}\Big[ \| X_t^{i, N} - X_t^i\|^2\Big] \leq& 2L \int_0^t \sum_{i=1}^N \mathbb{E}\Big[ \| X_s^{i,N} - X_s^{i} \|^2 \Big] ds + 2L \int_0^t \sum_{i=1}^N \mathbb{E}\Big[ \|X_s^{i,N} - X_s^{i} \| \cdot \mathbb{W}_{\mathcal{D}}^{(2)} (\mu^N_s, \mu_s) \Big] ds \\ \nonumber &+ 4L^2 \int_0^t \sum_{i=1}^N \mathbb{E}\Big[ \| X_s^{i,N} - X_s^i \|^2 + \mathbb{W}_{\mathcal{D}}^{(2)} \Big( \mu_s^N, \mu_s \Big)^2\Big] ds \\ \label{eq:thm:PoC1.1} & + 2\int_0^t \sum_{i=1}^N \mathbb{E}\Big[ \Big\langle X_s^{i, N} - X_s^i, \tfrac{1}{N}\sum_{j=1}^N f(X_s^{i, N} - X_s^{j, N}) - f(X_s^i - X_s^{j}) \Big\rangle \Big] ds \\ \label{eq:thm:PoC1.2} & +2 \int_0^t \sum_{i=1}^N \mathbb{E}\Big[ \Big\langle X_s^{i, N} - X_s^i, \tfrac{1}{N}\sum_{j=1}^N f(X_s^i - X_s^j) - f\ast \mu_s(X_s^i) \Big\rangle \Big] ds.
\end{align} Re-arranging the double sum and using that $f$ is odd, we can rewrite the integrand of \eqref{eq:thm:PoC1.1} as \begin{align} \sum_{i, j=1}^N \mathbb{E}\Big[ \Big\langle X_s^{i, N} - X_s^i,& f(X_s^{i, N} - X_s^{j, N}) - f(X_s^i - X_s^{j}) \Big\rangle \Big] \nonumber \\ & =\frac{1}{2}\sum_{i,j=1}^N \mathbb{E}\Big[ \Big\langle (X_s^{i, N} - X_s^{j,N} )-(X_s^i-X_s^j), f(X_s^{i, N} - X_s^{j, N}) - f(X_s^i - X_s^{j}) \Big\rangle \Big], \label{v1} \end{align} and thus using the one-sided Lipschitz property of $f$ we can bound \eqref{v1} by $L\sum_{i=1}^N \mathbb{E}\big[ \| X_s^{i, N} - X_s^i \|^2 \big] $. Consider the sum over $j$ in the integrand of \eqref{eq:thm:PoC1.2}. One observes that, after using the Cauchy-Schwarz inequality, we obtain the product of the two terms \begin{align} \nonumber \mathbb{E}\Big[ \Big\langle X_s^{i, N} - X_s^i, \sum_{j=1}^N \big( f(X_s^i & - X_s^j) - f\ast \mu_s(X_s^i) \big) \Big\rangle \Big] \\ &\leq \mathbb{E}\Big[ \| X_s^{i, N} - X_s^i\|^2\Big]^{1/2} \mathbb{E}\Big[ \big\|\sum_{j=1}^N \big( f(X_s^i - X_s^j) - f\ast \mu_s(X_s^i) \big) \big\|^2 \Big]^{1/2}. \label{x3} \end{align} We next show that the second of these terms is bounded by $C \sqrt{N}$ for some fixed constant $C>0$. We have \begin{align} \mathbb{E}\Big[ \big\|\sum_{j=1}^N \big( f(X_s^i - X_s^j) - f\ast \mu_s(X_s^i) \big) \big\|^2 \Big] &= \sum_{j,k=1}^N\mathbb{E}\Big[ \big\langle f(X_s^i - X_s^j) - f\ast \mu_s(X_s^i),f(X_s^i - X_s^k) - f\ast \mu_s(X_s^i) \big\rangle \Big] \nonumber \\ &=\sum_{j=1}^N \mathbb{E}\Big[ \big\| f(X_s^i - X_s^j) - f\ast \mu_s(X_s^i)\big\|^2 \Big] \label{x1} \\ & \leq C N \label{x2} \end{align} where \eqref{x1} is due to the fact that the cross terms (i.e., $j\neq k$) are all zero since in this case $X^j$ is independent of $X^k$, and \eqref{x2} follows from the polynomial growth of $f$ and the control on the moments $\mathbb{E}[\|X^i_s\|^{2r}]$. Using \eqref{x3} in conjunction with \eqref{x2}, it is clear that the integrand in \eqref{eq:thm:PoC1.2} is bounded by some constant multiple of $\sqrt{N}+\frac{1}{\sqrt{N}}\sum_{i=1}^N \mathbb{E}[\|X^{i,N}_s-X^i_s\|^2]$ (from the inequality $|x|\leq 1+|x|^2$). Next, dealing with the $\mathbb{W}_{\mathcal{D}}^{(2)}( \mu_\cdot^N, \mu_\cdot)$ terms, set $\nu_\cdot^N = \tfrac{1}{N} \sum_{j=1}^N \delta_{X_\cdot^{j}}$. By the triangle inequality, we get \begin{align} \mathbb{E}\Big[ \mathbb{W}_{\mathcal{D}}^{(2)}( \mu_s^N, \mu_s) \Big] & \leq \mathbb{E}\Big[ \Big( \tfrac{1}{N}\sum_{i=1}^N \| X_s^{i, N} - X_s^{i} \|^2 \Big)^{1/2} + \mathbb{W}_{\mathcal{D}}^{(2)}( \nu_s^N, \mu_s) \Big]. \label{v4} \end{align} Assembling all the previous bounds with the estimate obtained after applying It\^o's formula, we get \begin{align} \nonumber \sum_{i=1}^N \mathbb{E}\Big[ \| X_t^{i, N} - X_t^i\|^2\Big] \lesssim& \int_0^t \sum_{i=1}^N \mathbb{E}\Big[ \| X_s^{i,N} - X_s^{i} \|^2 \Big] ds + t\sqrt{N} + N \int_0^t \mathbb{E}\Big[ \mathbb{W}_{\mathcal{D}}^{(2)} (\mu^N_s, \mu_s)^2 \Big] ds. \end{align} Noting that the particles are exchangeable, and taking the supremum over $t\in[0,T]$ we find that \begin{align*} \sup_{t\in[0,T]} \mathbb{E}\Big[ \| X_t^{i, N} - X_t^{i} \|^2 \Big] \lesssim& \int_0^T \sup_{u\in[0,s]} \mathbb{E}\Big[ \|X_u^{i, N} - X_u^i\|^2 \Big] ds + T\Big(\frac{1}{\sqrt{N}}+\sup_{t\in[0,T]} \mathbb{E}\Big[ \mathbb{W}_{\mathcal{D}}^{(2)} \Big(\nu^N_t, \mu_t \Big)^2 \Big]\Big).
\end{align*} Applying Gr\"onwall inequality yields \begin{align*} \sup_{t\in[0,T]} \mathbb{E}\Big[ \| X_t^{i, N} - X_t^{i} \|^2 \Big] \lesssim T\Big(\frac{1}{\sqrt{N}}+\sup_{t\in[0,T]} \mathbb{E}\Big[ \mathbb{W}_{\mathcal{D}}^{(2)} \Big(\nu^N_t, \mu_t \Big)^2 \Big]\Big). \end{align*} Finally, by assumption on $p$ all processes have moments larger the 4th one, thus one can use the well known rate of convergence for an empirical distribution to the true law, see \cite{carmona2018probabilistic}*{Theorem 5.8}, and obtain $$ \mathbb{E}\Big[ \mathbb{W}_{\mathcal{D}}^{(2)} \Big(\nu^N_t, \mu_t \Big)^2 \Big] \lesssim \begin{cases} N^{-1/2},~&d<4, \\ N^{-1/2}\log N,~&d=4, \\ N^{\frac{-2}{d+4}},~&d>4, \end{cases} $$ to conclude. Note that the latter convergence rate dominates the $T/\sqrt{N}$ element in the main error estimate. \end{proof} \color{black} \subsection{An example} A key advantage of the framework that we consider for Theorem \ref{thm:ExistUnique-LocLip-MVE} and Theorem \ref{thm:ExistUnique-LocLip-SSMVE} is that the drift term $b$ is locally Lipschitz over $\mathcal{D}$. We demonstrate that the measure dependencies allowed for with the self-stabilizing term $f \ast \mu$ do not satisfy a Lipschitz condition with respect to the Wasserstein distance. \begin{example} Let $\mathcal{D} = \mathbb{R}^+$. Let $F(x) = {-x^4}/{4}$ so that $f(x) = \nabla F(x) = -x^3$. Consider the dynamics $$ X_t = W_t - \int_0^t \int_{\mathcal{D}} (X_s - y)^3 \mu_t(dy) ds - k_t, \quad \mu_t (dx) = \mathbb{P}\big[ X_t \in dx\big], \quad X_0 = 1. $$ Without entering details and assuming $\mu, \nu \in \mathcal{P}_4(\mathcal{D})$, the Lions derivative of $\mu\mapsto \Psi_x(\mu):= -\int_\mathcal{D} (x-y)^3\mu(dy)$ is unbounded, meaning that the "Lipschitz" constant of $\mu\mapsto \Psi_x(\mu)$ depends on $x$ in an unbounded way since $\mathcal{D}$ is unbounded. For the reader familiarised with the theory, see \cite{carmona2018probabilistic}*{Section 5}, the Lions derivative of the functional $\Psi_x(\cdot)$ follows from Example 1 in Section 5.2.2 (p385) and is given by $\partial_\mu \psi_x(\mu)(Z)=f'(x-Z)$ for $Z\sim \mu$. Their Remark 5.27 (p384) and Remark 5.28 (p390) connect to the Lipschitz constant. \end{example} \section{Large Deviation Principles} \label{sec:LDPs} Throughout this section let $\varepsilon>0$, all results hold under the following assumptions: \begin{assumption} \label{assumption : Holder regularity of sigma} Suppose that $\mathcal{D}\subset \mathbb{R}^d$ satisfies Assumption \ref{assumption:domain}. Suppose that $b,\sigma,$ and $f$ satisfy Assumptions \ref{ass:ExistUnique-LocLip-SSMVE}. Additionally, suppose that $\exists L>0, \exists \beta\in(0,1]$ such that $\forall s,t\in[0,T]$, $\forall \mu \in \mathcal{P}_2(\mathcal{D})$ and $\forall x \in \mathcal{D}$, \begin{equation*} \|\sigma(t,x,\mu)-\sigma(s,x,\mu)\|\leq L\|t-s\|^{\beta}. \end{equation*} \end{assumption} The regularity on $\sigma$ imposed above will allow us to make an Euler scheme approximation to the dynamics. We begin by reminding the reader of the definition of a Freidlin-Wentzell Large Deviation Principle. \begin{defn} Let $E$ be a metric space. A function $I:E \to [0,\infty]$ is said to be a \emph{rate function} if it is lower semi-continuous and the level sets of $I$ are closed. A \emph{good rate function} is a rate function whose level sets are compact. \end{defn} The rate function is used to encode the asymptotic rate for a convergence in probability statement that is called a Large Deviations Principle. 
\begin{defn} \label{definition LDP} Let $x\in \mathcal{D}$. A family of probability measures $\{\mu^{\varepsilon}\}_{\varepsilon>0}$ on $C_x([0,T]; \mathcal{D})$ is said to satisfy a Large Deviations Principle with rate function $I$ if \begin{equation} -\inf_{h\in G^\circ} I(h) \leq \liminf_{\varepsilon \to 0} \varepsilon \log\mu^\varepsilon [G^\circ] \leq \limsup_{\varepsilon \to 0}\varepsilon \log\mu^{\varepsilon}[\overline{G}] \leq -\inf_{h\in \overline{G}} I(h), \end{equation} for all Borel subsets $G$ of the space $C_x([0,T]; \mathcal{D})$. \end{defn} We prove a Freidlin-Wentzell Large Deviation Principle for the class of reflected McKean-Vlasov equations studied in Section \ref{section exist.unique}. The inclusion of non-Lipschitz measure dependence and reflections extends the classical Freidlin-Wentzell results for SDEs found in \cites{DZ,deuschel2001large,den2008large}. Our approach uses sequences of exponentially good approximations, inspired by the methods of \cite{HIP} and \cite{dos2019freidlin}. As with previous works proving Freidlin-Wentzell LDP results for McKean-Vlasov SDEs, the non-Lipschitz measure dependency is accounted for by establishing an LDP for a diffusion that is an exponentially good approximation. The section is structured as follows: first, a deterministic path is identified which the solution to \eqref{eq:MVSS-LDP} approaches as $\varepsilon \to 0$. Definition \ref{definition Y classical reflected SDE} then introduces an approximation of \eqref{eq:MVSS-LDP} where the law is replaced by this deterministic path. An LDP is established for this approximation by first obtaining an LDP for its Euler scheme in Lemma \ref{lemma the LDP for Y classical euler reflected sde}, and then transferring it via the method of exponential approximations in Lemmas \ref{lemma : euler scheme is an expo good approximation} and \ref{lemma the LDP for Y classical reflected sde}. Finally, the LDP for the object of interest \eqref{eq:MVSS-LDP} is obtained by establishing exponential equivalence between it and the approximation of Definition \ref{definition Y classical reflected SDE}. \subsection{Convergence of the law} Recall that the key point of an LDP is to characterise the rate at which the probability of rare events decreases as we change a parameter in our experiment. In the case of a path space LDP for a stochastic process, this relies on identifying a path around which the diffusion increasingly concentrates as the noise decays. The dynamics of the process can then be seen as small perturbations from this fixed path, often referred to as the skeleton path. Consider the reflected McKean-Vlasov SDE \begin{equation} \label{eq:MVSS-LDP} \begin{split} X^{\varepsilon}_t =& x_0 + \int_0^t b(s,X^{\varepsilon}_s, \mu^{\varepsilon}_s) ds + \int_0^t f\ast \mu_s^\varepsilon ( X^{\varepsilon}_s) ds + \sqrt{\varepsilon} \int_0^t \sigma(s,X^{\varepsilon}_s,\mu^\varepsilon_s) dW_s - k_t^\varepsilon, \\ |k^\varepsilon|_t =& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^\varepsilon) d|k^\varepsilon|_s, \quad k^\varepsilon_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^\varepsilon) \textbf{n}(X_s^\varepsilon) d|k^\varepsilon|_s. \end{split} \end{equation} Heuristically, as $\varepsilon \to 0$ the noise term in \eqref{eq:MVSS-LDP} vanishes, the law of $X^\varepsilon$ tends to a Dirac measure of its own deterministic trajectory and hence the interaction term vanishes.
Therefore in the small noise limit the dynamics is governed by $b$ and the diffusion behaves like the solution to the following deterministic Skorokhod problem. \begin{defn}\label{definition skeleton process} Define $\psi^{x_0}$ to be the solution to the reflected ODE \begin{equation} \label{eq:SkeletonProcess-0} \begin{split} \psi^{x_0}(t) =& x_0 + \int_0^t b(s,\psi^{x_0}(s),\delta_{\psi^{x_0}(s)} )ds - k_t^\psi, \\ |k^\psi|_t =& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(\psi(s) ) d|k^\psi|_s, \quad k^\psi_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(\psi(s) ) \textbf{n}(\psi(s) ) d|k^\psi|_s, \end{split} \end{equation} on the interval $[0,T]$. We define the Skeleton operator $H: \mathcal{H}_1^0 \to C_{x_0}([0,T]; \mathcal{D})$ by $h \mapsto H[h]$ where \begin{equation} \label{eq:SkeletonProcess-h} \begin{split} H[h]_t =& x_0 + \int_0^t b(s, H[h]_s, \delta_{\psi^{x_0}(s)}) ds + \int_0^t f(H[h]_s - \psi^{x_0}(s)) ds + \int_0^t \sigma( s, H[h]_s, \delta_{\psi^{x_0}(s)}) dh_s - k_t^h, \\ |k^h|_t =& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(H[h]_s) d|k^h|_s, \quad k^h_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(H[h]_s) \textbf{n}(H[h]_s) d|k^h|_s. \end{split} \end{equation} \end{defn} The existence of a unique solution to the Skorokhod problem for a continuous path into a convex domain \cite{tanaka2002stochastic}*{Theorem 2.1} ensures the existence and uniqueness of a solution to Equation \eqref{eq:SkeletonProcess-h}, this can we proved in a similar and fashion to \cite{tanaka2002stochastic}*{Theorem 4.1}. Hence the operator $H[h]$ is well defined. \\ The following lemma proves that, for small $\epsilon$, the solution $X^\epsilon$ to \eqref{eq:MVSS-LDP} will remain close to the trajectory $\psi^{x_0}$ of the skeleton ODE \eqref{eq:SkeletonProcess-0}. Moreover the law $\mu^\varepsilon$ can be shown to tend to the Dirac measure of $\psi^{x_0}$. \begin{lemma} Let $X^\varepsilon$ be the solution to \eqref{eq:MVSS-LDP} and $\mu^\varepsilon$ its law. Let $\psi^{x_0}$ be the solution of \eqref{eq:SkeletonProcess-0}. Then we have for any $T> 0$, \begin{equation}\label{equation X goes to skeleton} \sup_{t\in[0,T]} \mathbb{E}\Big[ \|X_t^\varepsilon-\psi^{x_0}(t) \|^2 \Big] \leq \varepsilon T e^{cT}, \end{equation} for a constant $c$ independent of $\varepsilon$ and $x_0$. Moreover for any $x\in \mathbb{R}^d$ we have that \begin{equation}\label{equation convergence of law to the Diract path} \lim_{\varepsilon \to 0} \| f\ast \mu_t^\varepsilon (x) - f(x - \psi^{x_0}(t)) \|_{\infty, [0,T]}=0. \end{equation} \end{lemma} \begin{proof} Let $t\in [0,T]$. We have \begin{align*} \| X_t^\varepsilon - \psi^{x_0}(t)\|^2 =& 2\int_0^t \Big\langle X_s^\varepsilon - \psi^{x_0}(s), b(s, X_s^\varepsilon, \mu_s^\varepsilon) - b(s, \psi^{x_0}(s), \delta_{\psi^{x_0}(s)} ) \Big\rangle ds \\ &+ \sqrt{\varepsilon} \int_0^t \Big\langle X_s^\varepsilon - \psi^{x_0}(s), \sigma(s, X_s^\varepsilon, \mu_s) dW_s \Big\rangle +\varepsilon \int_0^t \| \sigma(s, X_s^\varepsilon, \mu_s^\varepsilon) \|^2 ds \\ &+ \int_0^t \Big\langle X_s^\varepsilon - \psi^{x_0}(s), f(X_s^\varepsilon) \ast \mu_s^\varepsilon \Big\rangle ds - \int_0^t \Big\langle X_s^\varepsilon - \psi^{x_0}(s), dk^\varepsilon_s - dk^{\psi}_s\Big\rangle. 
\end{align*} Thus \begin{align*} \sup_{t\in[0,T]} \mathbb{E}\Big[ \| X_t^\varepsilon - \psi^{x_0}(t)\|^2 \Big] \leq& 6L \int_0^T \sup_{s\in[0,t]} \mathbb{E}\Big[ \| X_s^\varepsilon - \psi^{x_0}(s)\|^2 \Big] ds \\ & + C \cdot \sup_{t\in[0,T]} \mathbb{E}\Big[ \Big(1 + \| X_t^\varepsilon - \psi(t)\|^{r}\Big)^2 \Big]^{1/2} \cdot \int_0^T \sup_{s\in[0,t]} \mathbb{E}\Big[ \| X_s^\varepsilon - \psi^{x_0}(s)\|^2 \Big] dt \\ &+ \varepsilon \Big( 6TL^2 \sup_{t\in[0,T]} \mathbb{E}\Big[ \| X_t^\varepsilon - x_0\|^2\Big] + 3 \int_0^T \| \sigma(t, x_0, \delta_{x_0}) \|^2 dt \Big). \end{align*} Therefore we can conclude \eqref{equation X goes to skeleton} from the finite moment estimates proved in Proposition \ref{prop:SSMVE-Moments} and Gr\"onwall's inequality. Next, \eqref{equation convergence of law to the Diract path} follows from \eqref{equation X goes to skeleton} \begin{align*} \sup_{t\in[0,T]} \| & f \ast \mu_t^\varepsilon(x) - f(x-\psi^{x_0}(t) \| \\ & \leq C \sup_{t\in[0,T]} \mathbb{E}\Big[ \|X_t^\varepsilon - \psi^{x_0}(t) \|^2 \Big]^{1/2} \cdot \mathbb{E}\Big[ \Big( 1+ \| X_t^\varepsilon \|^{r-1} + \| \psi^{x_0}(t)\|^{r-1} \Big)^2 \Big]^{1/2} \underset{\varepsilon \to 0}{\longrightarrow}0. \end{align*} \end{proof} \subsection{A classical Freidlin-Wentzell result} Since the law $\mu^{\varepsilon}$ tends to the Dirac mass of the path $ \psi^{x_0}$, we will first study SDEs where the law in the coefficients of the McKean-Vlasov equation has been replaced by $\delta_{\psi^{x_0}}$. \begin{defn}\label{definition Y classical reflected SDE} Let $Y^{\varepsilon}$ be the solution of \begin{equation} \label{equation Y classical reflected SDE} \begin{split} Y^{\varepsilon}_t =& x_0 + \int_0^t b(s,Y^{\varepsilon}_{s},\delta_{\psi^{x_0}(s)})ds + \int_0^t f\Big( Y^{\varepsilon}_{s} - \psi^{x_0}(s) \Big)ds + \sqrt{\varepsilon} \int_0^t \sigma(s,Y^\varepsilon_s,\delta_{\psi^{x_0}(s)}) dW_s - k^{Y}_{t}, \\ |k^{Y}|_t =& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(Y^\varepsilon_s) d|k^{Y}|_s, \qquad k^{Y}_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(Y^\varepsilon_s) \textbf{n}(Y^\varepsilon_s) d|k^{Y}|_s. \end{split} \end{equation} \end{defn} The dynamics of \eqref{equation Y classical reflected SDE} satisfy those of Theorem \ref{thm:ExistUnique-LocLip-Ref}, so the existence and uniqueness of a solution is established. Further, we introduce the follow approximation of \eqref{equation Y classical reflected SDE}. \begin{defn} \label{equation Y classical reflected SDE - euler} Let $n\in \mathbb{N}$. 
Let $Y^{n,\varepsilon}$ be the solution of \begin{align} \nonumber Y_{t}^{n,\varepsilon}=& x_0 + \int_0^t b(s,Y^{n,\varepsilon}_{s},\delta_{\psi^{x_0}(s)}) + f\Big( Y^{n,\varepsilon}_{s} - \psi^{x_0}(s) \Big)ds \\ \nonumber &\sqrt{\varepsilon}\sum_{i=0}^{\lfloor \frac{tn}{T} \rfloor - 1} \sigma\Big( \tfrac{iT}{n} ,Y^{n,\varepsilon}_{\tfrac{iT}{n} },\delta_{\psi^{x_0}\big(\tfrac{iT}{n} \big)} \Big)\cdot \Big( W_{\tfrac{(i+1)T}{n}} - W_{\tfrac{iT}{n}} \Big) \\ \label{equation Y classical reflected SDE Euler scheme} &+ \sqrt{\varepsilon} \sigma\Big( \tfrac{T\lfloor \frac{tn}{T}\rfloor}{n} ,Y^{n,\varepsilon}_{\tfrac{T\lfloor \frac{tn}{T}\rfloor}{n} },\delta_{\psi^{x_0}\big(\tfrac{T\lfloor \frac{tn}{T}\rfloor}{n} \big)} \Big) \Big( W_{\tfrac{T\lceil \frac{tn}{T}\rceil}{n}} - W_{\tfrac{T\lfloor \frac{tn}{T}\rfloor}{n}} \Big) n\Big( t - \tfrac{T\lfloor \frac{tn}{T}\rfloor}{n}\Big) - k^{Y^{n,\varepsilon}}_{t} \\ \nonumber |k^{Y^{n,\varepsilon}}|_t =& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(Y_s^{n, \varepsilon}) d|k^{Y^{n,\varepsilon}}|_s, \qquad k^{Y^{n,\varepsilon}}_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(Y_{s}^{n,\varepsilon}) \textbf{n}(Y_{s}^{n,\varepsilon}) d|k^{Y^{n,\varepsilon}}|_s. \end{align} \end{defn} On a subset of measure 1, Equation \eqref{equation Y classical reflected SDE Euler scheme} determines the dynamics of a random ODE for which the Skorokhod problem has already been solved, so existence and uniqueness are already assured. \begin{defn} Let $I': C_{0}([0,T]; \mathbb{R}^d) \to \mathbb{R}$ be the rate function of Schilder's Theorem \cite{DZ}*{Theorem 5.2.3}, \begin{equation*} I'(g) = \begin{cases}\frac{1}{2}\int_0^T\| \dot{g}(t) \|^2dt~& \text{if}~ g\in \mathcal{H}^0_1, \\ \infty~& \text{otherwise}, \end{cases} \end{equation*} where $\mathcal{H}^0_1$ is the Cameron Martin space for Brownian motion defined in Section \ref{sec:Preliminaries}. \end{defn} Define the functional $H^n: C_{0}([0,T]; \mathbb{R}^d) \to C_{x_{0}}([0,T]; \mathbb{R}^d)$, which maps the Brownian path to the reflected path of \eqref{equation Y classical reflected SDE Euler scheme}, that is \begin{align} \nonumber H^n[h](t) =& x_0 + \int_0^t b\big(s,H^n[h](s),\delta_{\psi^{x_0}(s)}\big) + f\Big(H^n[h](s) - \psi^{x_0}(s)\Big) ds - k^{h,n}_t \\ \nonumber &+ \sum_{i=0}^{\lfloor \frac{tn}{T} \rfloor -1} \sigma\Big(\frac{iT}{n},H^n[h]\Big(\frac{iT}{n}\Big),\delta_{\psi^{x_0}(\frac{iT}{n})} \Big) \Big(h\Big(\frac{(i+1)T}{n}\Big)-h\Big(\frac{iT}{n}\Big)\Big) \\ \label{e30} &+ \sigma\Big(\frac{T\lfloor \frac{tn}{T} \rfloor }{n},H^n[h]\Big(\frac{T\lfloor \frac{tn}{T} \rfloor }{n}\Big),\delta_{\psi^{x_0}(\frac{T\lfloor \frac{tn}{T} \rfloor }{n})} \Big)\Big( h\Big( \tfrac{T\lceil \frac{tn}{T}\rceil}{n} \Big) - h\Big(\tfrac{T\lfloor \frac{tn}{T} \rfloor }{n}\Big)\Big)\frac{n}{T} \Big(t-\frac{T\lfloor \frac{tn}{T} \rfloor}{n}\Big), \\ |k^{h,n}|_t =& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(H^n[h](s)) d|k^{h,n}|_s, \quad k^{h,n}_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(H^n[h](s)) \textbf{n}(H^n[h](s)) d|k^{h,n}|_s. \nonumber \end{align} When restricted to $\mathcal{H}_1^0$, the operator $H^n$ represents a Skeleton operator for the random ODE \eqref{equation Y classical reflected SDE Euler scheme}. Equation \eqref{equation Y classical reflected SDE} is a classical reflected SDE and \cite{dupuis1987large}*{Theorem 3.1} proves a Freidlin-Wentzell type LDP for such reflected SDEs when the coefficients are bounded and Lipschitz. 
The following lemma extends this result to unbounded domains and allows for unbounded locally Lipschitz coefficients, this is done via the contraction principle \cite{DZ}*{Theorem 4.2.1}. For convenience of notation let \begin{align*} \hat{t}:=\frac{T\lceil \frac{tn}{T}\rceil}{n},~ \check{t}:=\frac{T\lfloor \frac{tn}{T}\rfloor}{n},~\text{and}~ \hat{s}:=\frac{T\lceil \frac{sn}{T}\rceil}{n},~ \check{s}:=\frac{T\lfloor \frac{sn}{T}\rfloor}{n}. \end{align*} \begin{lemma} \label{lemma : continuity of the map Hn} For each $n\in \mathbb{N}$, the mapping $H^n :C_0([0,T];\mathbb{R}^d) \to C_{x_{0}}([0,T];\mathbb{R}^d)$ defined by \eqref{e30} is continuous. \end{lemma} \begin{proof} Let $\{h_m: m\in \mathbb{N}\} \subset C_0([0,T]; \mathbb{R}^d)$ and suppose $\lim_{m\to \infty} \| h_m - h\|_{\infty, [0,T]} =0$. We denote $\phi=H^n[h]$ and $\phi_m=H^n[h_m]$. Then \begin{align*} \| \phi(t) - \phi_m(t)\|^2 =& 2\int_0^t \Big\langle \phi(s) - \phi_m(s), b(s, \phi(s), \delta_{\psi(s)} ) - b(s, \phi_m(s), \delta_{\psi(s)} ) \Big\rangle ds \\ &+ 2\int_0^t \Big\langle \phi(s) - \phi_k(s), f( \phi(s) - \psi(s) ) - f( \phi_m(s) - \psi(s) ) \Big\rangle ds \\ & -2\int_0^t \Big\langle \phi(s) - \phi_m(s), dk_s^{h, n} - dk_s^{h_m, n} \Big\rangle \\ &+ 2n\int_0^t \Big\langle \phi(s) - \phi_m(s), \sigma( \check{s}, \phi(\check{s}), \delta_{\psi(\check{s})} ) \Big( h( \hat{s}) - h( \check{s}) \Big) \\ &\qquad - \sigma(\check{s}, \phi_m(\check{s}), \delta_{\psi(\check{s})} ) \Big( h_m( \hat{s}) - h_m( \check{s}) \Big)\Big\rangle ds. \end{align*} Hence \begin{align*} \Big\| \phi(t) - \phi_m(t) \Big\|^2 \leq& 4L\int_0^t \Big\| \phi(s) - \phi_m(s) \Big\|^2 ds \\ +2n\int_0^t \Big\langle \phi(s)& - \phi_m(s), \Big( \sigma( \check{s}, \phi(\check{s}), \delta_{\psi(\check{s})} ) - \sigma( \check{s}, \phi_m(\check{s}), \delta_{\psi(\check{s})} ) \Big) \cdot \Big( h_m(\hat{s}) - h_m(\check{s}) \Big) ds \\ +2n \int_0^t \Big\langle \phi(s)& - \phi_m(s), \sigma( \check{s}, \phi(\check{s}), \delta_{\psi( \check{s})} ) \cdot \Big( (h-h_m)( \hat{s}) - (h-h_m)( \check{s}) \Big\rangle ds. \end{align*} Using the Lipschitz properties of $\sigma$ combined with $n$ being fixed, we get \begin{align*} \| \phi - \phi_m \|_{\infty, [0,T]}^2 \leq& \Big( 8L + 8n\| h\|_{\infty, [0,T]}\Big) \int_0^t \Big\| \phi(s) - \phi_m(s) \Big\|^2 ds \\ &+ 16n^2 \| h-h_m\|_{\infty, [0,T]}^2 \Big( \int_0^T \sigma( \check{s}, \phi(\check{s}), \delta_{\psi(\check{s})} ) ds \Big)^2. \end{align*} As the integral $ \int_0^T \sigma(\check{s}, \phi(\check{s}), \delta_{\psi(\check{s})} ) ds$ will be finite for any choice of $n$ and $h$, we apply Gr\"onwall inequality to conclude $$ \| \phi - \phi_m \|_{\infty, [0,T]}^2 \lesssim \| h-h_m\|_{\infty, [0,T]}^2. $$ \end{proof} \begin{lemma} \label{lemma the LDP for Y classical euler reflected sde} Let $Y^{n,\varepsilon}$ be the solution to \eqref{equation Y classical reflected SDE Euler scheme}. Then $Y^{n,\varepsilon}$ satisfies an LDP on the space $C_{x_{0}}([0,T]; \mathbb{R}^d)$, with a good rate function given by \begin{equation} \label{equation Rate function LDP.} I^{n,T}_{x_{0}}( \phi ) \coloneqq \underset{\{h\in \mathcal{H}_1^0 ~:~H^n(h) = \phi\}}{\inf} I'(h). \end{equation} \end{lemma} \begin{proof} The result is a straightforward application of the contraction principle \cite{DZ}*{Theorem 4.2.1} using the continuous map $H^n$ as established in Lemma \ref{lemma : continuity of the map Hn}). 
\end{proof} Next we use that $Y^{n,\varepsilon}$ is an approximation of $Y^\varepsilon$ in the appropriate sense to obtain an LDP for $Y^\varepsilon$ via \cite{DZ}*{Theorem 4.2.23}. \begin{lemma} \label{lemma : euler scheme is an expo good approximation} Let $Y^\varepsilon$ be the solution to \eqref{equation Y classical reflected SDE}, and $Y^{n,\varepsilon}$ be the solution to \eqref{equation Y classical reflected SDE Euler scheme}. Then for every $\delta>0$ \begin{align} \limsup_{n\to\infty} \limsup_{\epsilon \to 0} \epsilon \log \mathbb{P} \Big[ \sup_{t\in[0,T]} \| Y^{n,\varepsilon}_t-Y^{\varepsilon}_t\| > \delta \Big] = -\infty. \label{eq : expo goo approx definition} \end{align} That is $Y^{n,\varepsilon}$ is an exponentially good approximation of $Y^\varepsilon$, in the sense of \cite{DZ}*{Definition 4.2.14}. \end{lemma} \begin{proof} The proof makes use of the LDP for $Y^{n,\varepsilon}$ established in Lemma \ref{lemma the LDP for Y classical euler reflected sde}. We follow a similar strategy as \cite{dos2019freidlin}*{Lemma 4.6}, requiring an adapted version of \cite{DZ}*{Lemma 5.6.18} stated here in Lemma \ref{lemma : adapted D+Z lemma 5.6.18}. Define the process $Z^\varepsilon \coloneqq Y^\varepsilon-Y^{n,\varepsilon}$, so that \begin{equation*} Z^\varepsilon_t= \int_0^t b_s ds+\int_0^t \sigma_s ds+k^{Y^{n}}_{t}-k^{Y}_{t}, \end{equation*} where \begin{align*} b_t \coloneqq& b\Big(t,Y^\varepsilon_t,\delta_{\psi(t)}\Big)-b\Big(t,Y^{n,\varepsilon}_{t},\delta_{\psi(t)}\Big)+f\Big(Y^\varepsilon_t -\psi(t) \Big)-f\Big(Y^{n,\varepsilon}_{t}-\psi(t)\Big), \\ \sigma_t \coloneqq& \sigma\Big(t,Y^\varepsilon_t,\delta_{\psi(t)}\Big)- \sigma\Big(\check{t},Y_{\check{t}}^{n,\varepsilon},\delta_{\psi(\check{t})}\Big). \end{align*} Next we define the stopping time \begin{equation*} \tau_{R+1} \coloneqq \min \Big\{ T, \inf\{t\geq 0 : \|Y^{\varepsilon}_t \| \geq R+1\}, \inf\{t\geq 0:\|Y^{n,\varepsilon}_{t} \| \geq R+1 \} \Big\}. \end{equation*} Note that for $t\in[0,\tau_{R+1}]$ by the local Lipschitz property of $b$ and $f$, we have \begin{align*} \|b_t\| \leq& L_{R}\| Z^\varepsilon_t \|, \end{align*} for a constant $L_R$ only depending on $R$. Also note that \begin{align*} \|\sigma_t\|\leq& \Big\| \sigma\Big( t, Y_t^\varepsilon ,\delta_{\psi(t)}\Big) - \sigma\Big(\check{t}, Y_t^\varepsilon , \delta_{\psi(t)} \Big) \Big\| + \Big\| \sigma\Big(\check{t}, Y_{\check{t}}^{n,\varepsilon}, \delta_{\psi(t)}\Big ) - \sigma\Big(\check{t}, Y_t^\varepsilon, \delta_{\psi(t)} \Big) \Big\| \\ &+ \Big\| \sigma\Big(\check{t}, Y_{\check{t}}^{n,\varepsilon}, \delta_{\psi(\check{t})} \Big) - \sigma\Big(\check{t}, Y_{\check{t}}^{n,\varepsilon}, \delta_{\psi(t)} \Big)\Big\| \\ \leq& L\Big( \| t-\check{t}\|^\beta + \| Z_t^\varepsilon \| + \| \psi(t) - \psi(\check{t}) \| \Big) \\ \leq& M(\rho(n)+\|Z_t\|), \end{align*} for some $M$ large enough, and $\rho(n) \underset{n \to \infty }{\to} 0 $. Thus the conditions of Lemma $\ref{lemma : adapted D+Z lemma 5.6.18}$ are satisfied. Now fix any $\delta>0$ and notice that \begin{align*} \Big\{ \sup_{t\in[0,T]} \| Y^\varepsilon_t-Y^{n,\varepsilon}_{t} \| \geq \delta \Big\} \subseteq& \Big\{ \sup_{t\in[0,\tau_{R+1}]} \| Y^\varepsilon_t-Y^{n,\varepsilon}_{t} \| \geq \delta, \tau_{R+1} = T \Big\} \cup \Big\{ \sup_{t\in[0,T]} \| Y^\varepsilon_t-Y^{n,\varepsilon}_{t} \| \geq \delta, \tau_{R+1}< T \Big\} \\ \subseteq& \Big\{ \sup_{t\in[0,\tau_{R+1}]} \| Y^\varepsilon_t-Y^{n,\varepsilon}_{t} \| \geq \delta \Big\} \cup \Big\{ \tau_{R+1}< T \Big\}. 
\end{align*} By Lemma \ref{lemma : adapted D+Z lemma 5.6.18} we know that \begin{equation*} \lim_{n\to\infty} \limsup_{\varepsilon\to 0} \varepsilon \log \Big( \mathbb{P}\Big[ \sup_{t\in[0,\tau_{R+1}]} \| Y^\varepsilon_t-Y^{n,\varepsilon}_{t} \| \geq \delta \Big] \Big)=-\infty. \end{equation*} Furthermore define $\tau^{Y_{n}}_{R}=\inf\{t\geq0: \|Y^{n,\varepsilon}_{t}\| \geq R \} $, and notice \begin{align*} \Big\{ \tau_{R+1}< T \Big\} \subseteq& \Big\{ \tau_{R+1}< T, \tau^{Y^{n}}_{R}\leq T \Big\} \cup \Big\{ \tau_{R+1}< T, \tau^{Y^{n}}_{R}> T \Big\} \\ \subseteq& \Big\{\tau^{Y^{n}}_{R}\leq T \Big\} \cup \Big\{ \| Y_{\tau_{R+1}}^\varepsilon-Y_{\tau_{R+1}}^{n,\varepsilon} \|\geq 1 \Big\}. \end{align*} Again, by Lemma \ref{lemma : adapted D+Z lemma 5.6.18} and setting $ \delta=1$ we have that \begin{equation*} \lim_{n\to\infty} \limsup_{\varepsilon\to 0} \varepsilon \log \Big( \mathbb{P}\Big[ \sup_{t\in[0,\tau_{R+1}]} \| Y^\varepsilon_t-Y^{n,\varepsilon}_{t} \| \geq 1 \Big] \Big)=-\infty. \end{equation*} Recalling the identity, for positive $\alpha_\varepsilon,\beta_{\varepsilon}$ \begin{equation*} \limsup_{\varepsilon\to 0}\varepsilon \log \Big( \alpha_{\varepsilon}+\beta_{\varepsilon} \Big)= \limsup_{\varepsilon \to 0}\varepsilon \log \Big( \max\Big\{ \alpha_{\varepsilon},\beta_{\varepsilon} \Big\} \Big), \end{equation*} and appealing to the LDP satisfied by $Y^{n,\varepsilon}$, we are left with \begin{align*} \lim_{n\to\infty} \limsup_{\varepsilon \to 0} \varepsilon \log \Big( \mathbb{P}\Big[ \sup_{t\in[0,T]} \|Y_t^\varepsilon-Y^{n,\varepsilon}_{t} \|\geq \delta \Big] \Big) \leq& \lim_{n\to\infty} \limsup_{\varepsilon \to 0} \varepsilon \log \Big( \mathbb{P}\Big[ \sup_{t\in[0,T]}\|Y^{n,\varepsilon}_{t} \|\geq R \Big] \Big) \\ \leq& \lim_{n\to\infty} - \underset{\phi \in C_{x_0}([0,T];\mathbb{R}^d) : \sup_{t\in[0,T]}\|\phi(t)\|\geq R }{\inf} ~~ I^{n,T}_{x_{0}}(\phi). \end{align*} Hence to conclude \eqref{eq : expo goo approx definition} we show that \begin{align} \lim_{R\to \infty} \lim_{n\to \infty} \underset{\phi \in C_{x_0}([0,T];\mathbb{R}^d) : \sup_{t\in[0,T]}\|\phi(t)\|\geq R }{\inf} ~~ I^{n,T}_{x_{0}}(\phi) = \infty. \label{eq : what needs t be shown limit} \end{align} Indeed, let $\phi\in C_{x_{0}}([0,T]; \mathbb{R}^d)$ be such that $\sup_{s\in[0,T]}\|\phi(s)\| \geq R$. Let $h \in \mathcal{H}^{0}_1$ be a function such that $H^n[h] = \phi$, recall that if $h\notin \mathcal{H}^0_1$ we immediately have that $I'(h)=\infty$. Via a concatenation argument it is simple to show that we can assume the path $\phi$ is increasing on $[0,T]$. Assuming $\phi$ is increasing we have $\forall s_1\leq s_2$ the bound \begin{align} \|\phi(s_1)-x_0\|\leq& 3\|\phi(s_2)-x_0\|+2\|x_0\|.\label{eq : increasing in inf.} \end{align} Note that \begin{align*} \|\phi(t) - x_{0}\|^2 =& 2\int_0^t \Big\langle \phi(s)-x_0 , b(s,\phi(s),\delta_{\psi(s)})+f(\phi(s)-\delta_{\psi(s)})\Big\rangle ds \\ &+ \int_{0}^{t} \Big \langle \phi(s)-x_0, \sigma(\check{s},\phi(\check{s}),\delta_{\psi(\check{s})})\frac{n}{T} \Big(h(\hat{s})-h(\check{s})\Big)\Big\rangle ds \\ &- 2\int_0^t \Big\langle \phi(s)-x_{0},\mathbf{n}(\phi(s)) \Big\rangle |k^{h,n}|_s. \end{align*} By Cauchy–Schwarz and the one-sided Lipschitz properties of $b$ and $f$ we can bound the drift term by \begin{align*} \Big\langle& \phi(s)-x_0 , b(s,\phi(s),\delta_{\psi(s)})+f(h(s)-\delta_{\psi(s)})\Big\rangle \\ &\leq 2(L+2)\|\phi(s)-x_0\|^2+2\|f(x_0-\delta_{\psi(s)}) \|^2+2\| b(s,x_0,\delta_{\psi(s)}) \|^2. 
\end{align*} Using this bound, the integrability conditions of $f$ and $b$, and Lemma \ref{lem:NormalToDomain} we have for a constant $c_1=c_1(L,x_0)$ independent of $t$ \begin{align} \nonumber \|\phi(t)& - x_{0}\|^2 = c_1\Big(1+\int_0^t \| \phi(s)-x_0 \|^2ds\Big) \\ \label{e110} &+ \int_{0}^{t} \Big \langle \phi(s)-x_0, \sigma (\check{s},\phi(\check{s}),\delta_{\check{s})})\frac{n}{T} \Big(h(\hat{s})-h(\check{s})\Big)\Big\rangle ds. \end{align} We can further bound the above term by noting that for any vector $a\in \mathbb{R}^d$, \begin{align*} \Big\langle \phi(s)-x_0,\sigma(\check{s},\phi(\check{s}), \delta_{\check{s})}) a \Big\rangle \leq& L \| \phi(s)-x_0 \| \| \phi(\check{s})-x_0 \| \|a\| \\ &+ \| \phi(s)-x_0 \| \| \sigma(\check{s},x_0,\delta_{\psi(\check{s})}) \| \|a\|. \end{align*} Since $\check{s}\leq s $ employing \eqref{eq : increasing in inf.}, and $c<c^2+1$ for $c\in \mathbb{R}$,we have for a constant $c_2=c_2(L,x_0)$ independent of $t$, $n$ \begin{align*} \Big\langle \phi(s)-x_0,\sigma(\check{s},\phi(\check{s}), \delta_{\psi(\check{s})}) a \Big\rangle \leq& c_2 \Bigg( \| \phi(s)-x_0 \|^2 \Big( \|a\| + \|\sigma(\check{s},x_0,\delta_{\psi(\check{s})})\|\|a\| \Big) \\ &+ \|a\|+ \| \sigma(\check{s},x_0,\delta_{\psi(\check{s})}) \| \|a\| \Bigg). \end{align*} Setting $$ a= \frac{n}{T} \Big(h(\hat{s})-h(\check{s})\Big)=\frac{n}{T}\int_{\check{s}}^{\hat{s}}\dot{h}(u)du, $$ and substituting this bound into \eqref{e110}, we get that for a constant $c=c(L,x_0)$ independent of $t$ or $n$ \begin{align} \|\phi(t) - x_{0}\|^2 \leq & c\Bigg(\int_0^t \Big\| \frac{n}{T}\int_{\check{s}}^{\hat{s}}\dot{h}(u)du \Big\| + \Big\| \sigma(\check{s},x_0,\delta_{\psi(\check{s})}) \Big\| \Big\| \frac{n}{T} \int_{\check{s}}^{\hat{s}}\dot{h}(u)du \Big\| ds \label{e120} \\ &+ \int_0^t \|\phi(s)-x_0\|^2 \Big( 1+ \Big\| \frac{n}{T} \int_{\check{s}}^{\hat{s}}\dot{h}(u)du \Big\| + \Big\| \sigma(\check{s},x_0,\delta_{\psi(\check{s})}) \Big\| \Big\| \frac{n}{T} \int_{\check{s}}^{\hat{s}}\dot{h}(u)du \Big\| \Big)ds \Bigg). \nonumber \end{align} Also note that we have $$ \frac{n}{T}\int_0^t\int_{\check{s}}^{\hat{s}}\|\dot{h}(u)\| du ds \leq \int_0^{T}\| \dot{h}(s)\| ds, $$ and similarly \begin{align*} \frac{n}{T}\int_0^t\| \sigma(\check{s},x_0,\delta_{\psi(\check{s})}) \| \int_{\check{s}}^{\hat{s}}\|\dot{h}(u)\| du ds=& \frac{n}{T}\int_0^t \int_{\check{s}}^{\hat{s}}\| \sigma(\check{s},x_0,\delta_{\psi(\check{s})}) \|\|\dot{h}(u)\| du ds \\ \leq& \int_0^T \| \sigma(\check{s},x_0,\delta_{\psi(\check{s})}) \|\|\dot{h}(s)\| ds. \end{align*} By applying to Gr\"onwall's Inequality in \eqref{e120}, and using the previous two observations, we have \begin{align*} \|\phi(t) - x_{0}\|^2 \leq c&\Bigg( \int_0^{T}\| \dot{h}(s)\| + \| \sigma(\check{s},x_0,\delta_{\psi(\check{s})}) \|\|\dot{h}(s)\| ds \\ & \cdot \exp\Big(c \int_0^{T}1+\| \dot{h}(s)\| + \| \sigma(\check{s},x_0,\delta_{\psi(\check{s})}) \|\|\dot{h}(s)\| ds \Big) \Bigg). \end{align*} Now adding and subtracting the terms $\| \sigma(s,x_0,\delta_{\psi(\check{s})} ) \|,\| \sigma(\check{s},x_0,\delta_{\psi(s)} ) \|$, using the Triangle Inequality, Cauchy-Schwarz's inequality, the continuity of $\psi$, and recalling the Assumption \ref{assumption : Holder regularity of sigma} we obtain \eqref{eq : what needs t be shown limit}. \end{proof} \begin{lemma} \label{lemma the LDP for Y classical reflected sde} Let $Y^\varepsilon$ be the solution to \eqref{equation Y classical reflected SDE}. 
Then $Y^\varepsilon$ satisfies an LDP on the space $C_{x_0}([0,T];\mathbb{R}^d)$ with the good rate function \begin{equation}\label{eq : the rate function for our self-stab. reflec. sde} I^T_{x_{0}}(\phi)=\inf_{\{h\in \mathcal{H}_1^0 ~:~ H[h]=\phi\}} I'(h), \end{equation} where the skeleton operator $H$ was defined in \eqref{eq:SkeletonProcess-h}. \end{lemma} \begin{proof} The proof will follow by appealing to \cite{DZ}*{Theorem 4.2.23}. That is we need to show that for every $\alpha>0$ \begin{equation}\label{e1} \lim_{n\to\infty}\sup_{\{h\in \mathcal{H}^0_1 ~:~ \|h\|_{\mathcal{H}^0_1}<\alpha\}}\|H^n[h]-H[h]\|=0. \end{equation} Fix $\alpha<\infty$, $h\in \mathcal{H}^0_1 $ with $\|h\|_{\mathcal{H}^0_1}<\alpha$. Denote $\phi^n=H^n(h)$, $\phi=H(h)$. Now by the one-sided Lipschitz property of the drift and Lemma \ref{lem:NormalToDomain}, \begin{align} \nonumber \|\phi^n(t)-\phi(t)\|^2 \leq& 2 \int_0^t \Big\langle \phi^n(s)-\phi(s), \sigma(\check{s},\phi^n(\check{s}),\delta_{\psi(\check{s})}h_n(s) \\ \label{e111} &-\sigma\Big(s,\phi(s),\delta_{\psi(s)}\Big)\dot{h}(s) \Big\rangle ds + \int_0^t 4L\|\phi^n(s)-\phi(s)\|^2ds, \end{align} where we have denoted $h_n(s)\coloneqq \frac{n}{T} \Big( h( \hat{s} ) - h( \check{s})\Big) $. Next notice that \begin{align*} \Big\| \sigma(\check{s},\phi^n(\check{s}),\delta_{\psi(\check{s})})-\sigma(s,\phi(s),\delta_{\psi(s)}) \Big\| \leq& \Big\| \sigma(\check{s},\phi^n(\check{s}),\delta_{\psi(\check{s})})-\sigma(s,\phi^n(\check{s}),\delta_{\psi(\check{s})}) \Big\| \\ &+ \Big\| \sigma(s,\phi^n(\check{s}),\delta_{\psi(\check{s})})-\sigma(s,\phi^n(\check{s}),\delta_{\psi(s)}) \Big\| \\ &+\Big\| \sigma(s,\phi^n(\check{s}),\delta_{\psi(s)})-\sigma(s,\phi(s),\delta_{\psi(s)}) \Big\| \\ \leq& \rho^n(s)+L\|\phi^n(s)-\phi(s)\|, \end{align*} where $\sup_{s\in[0,T]}\rho^n(s)\underset{n\to\infty}{\to} 0$, by continuity of $\psi$ and the Assumption \ref{assumption : Holder regularity of sigma}. Hence \begin{align*} \Big\| \sigma&(\check{s},\phi^n(\check{s}),\delta_{\psi(\check{s})})h_n(s)-\sigma\Big(s,\phi(s),\delta_{\psi(s)}\Big)\dot{h}(s) \Big\| \\ \leq& (\rho^n(s)+L\|\phi^n(s)-\phi(s)\|)\| h_n(s) \|+\| \sigma(s,\phi(s),\delta_{\psi(s)}) \| \| \dot{h}(s)-h_n(s) \|. \end{align*} Substituting this bound into \eqref{e111} and applying Gr\"onwall we get that for a constant $c$ independent of $n$ or $t$, \begin{align*} &\|\phi^n(t)-\phi(t)\|^2\leq c\exp\Bigg( c\int_0^t 1 + (\rho^n(s)+1)\|h_n(s)\| + \|\sigma(s,\phi(s),\delta_{\psi(s)} ) \| \cdot \| \dot{h}(s)-h_n(s) \|ds \Bigg) \\ &\qquad \cdot \int_0^t (\rho^n(s)+1)\|h_n(s)\| + \|\sigma(s,\phi(s),\delta_{\psi(s)} ) \| \cdot \|\dot{h}(s)-h_n(s)\|ds \\ &\leq c\exp\Bigg( c\int_0^t 1+ (\rho^n(s)+1) \cdot (\|\dot{h}(s)\|+\|h_n(s)-\dot{h}(s)\|) + \| \sigma(s,\phi(s),\delta_{\psi(s)})\| \cdot \|\dot{h}(s)-h_n(s)\|ds \Bigg) \\ &\qquad \cdot \int_0^t (\rho^n(s)+1)\|\dot{h}(s)\| +(\rho^n(s)+1)\|h_n(s)-\dot{h}(s)\| +\| \sigma(s,\phi(s),\delta_{\psi(s)})\|\cdot \|\dot{h}(s)-h_n(s)\|ds. \end{align*} Applying Cauchy–Schwarz on the $\|\sigma(s,\phi(s),\delta_{\psi(s)})\| \cdot \|\dot{h}(s)-h_n(s) \|$ terms and sending $n\to \infty$ gives \eqref{e1}. The LDP for $Y^{\epsilon}$ with rate function \eqref{eq : the rate function for our self-stab. reflec. sde} now follows by appealing to \cite{DZ}*{Theorem 4.2.23} and the fact that $Y^{n,\varepsilon}$ are exponentially good approximations of $Y^\varepsilon$ Lemma \ref{lemma : euler scheme is an expo good approximation}. 
\end{proof} \subsection{Freidlin-Wentzell results for reflected McKean-Vlasov equations} Next we pass the LDP from the process $Y^\varepsilon$ to $X^\varepsilon$ using exponential equivalence. \begin{theorem} \label{ldp for xi} Let $x_0^\varepsilon \in \mathbb{R}^d$, converge to $x_0\in \mathbb{R}^d$ as $\varepsilon \to 0$. Let $Y^\varepsilon$ be the solution to \eqref{equation Y classical reflected SDE}, $\psi^{x_0}$ the solution of \eqref{eq:SkeletonProcess-0}, and $X^{\varepsilon}$ be the solution to Equation \eqref{eq:MVSS-LDP} started at $X^\varepsilon_0= x^\varepsilon_0$. Then the reflected McKean-Vlasov equation $X^{\varepsilon}$ satisfies an LDP on $C_{x_{0}}([0,T]; \mathbb{R}^d)$ with rate function \eqref{eq : the rate function for our self-stab. reflec. sde}. \end{theorem} \begin{proof} Firstly, one can quickly verify that $\|\psi^{x^\varepsilon_0} (t)-\psi^{x_0}(t) \|\underset{\varepsilon \to 0}{\to} 0$. Let $Z_t^\varepsilon\coloneqq X_t^\varepsilon-Y_t^\varepsilon$. Then $Z^\varepsilon$ satisfies \begin{equation*} Z^\varepsilon_t=z_0+ \int_0^t b_s ds+\int_0^t \sigma_s ds+k^{Y,\varepsilon}_{t}-k^{\varepsilon}_{t}, \end{equation*} where $ z_0 \coloneqq x_0^\epsilon-x_0$, $\sigma_t \coloneqq \sigma\big(t,X^\varepsilon_t,\mu_t^\varepsilon \big)-\sigma\big(t,Y^\varepsilon_t,\delta_{\psi^{x_{0}}(t)}\big)$ and \begin{align*} b_t \coloneqq & b\Big(t,X^\varepsilon_t,\mu_t^\varepsilon \Big) -b\Big(t,Y^\varepsilon_t,\delta_{\psi^{x_{0}}(t)}\Big) +\int_{\mathbb{R}^d} f(X^\varepsilon_t-x) d\mu_t^\varepsilon -f(Y^\varepsilon_t-\psi^{x_0}(t)) . \end{align*} Let $R>0$ be large enough so that $x_0^\varepsilon,y\in B_{R+1}(0)$, and $\psi^{x_{0}}(t)$ does not leave $B_{R+1}(0)$ up to time $T$. We are able to do since $\psi$ is non-explosive. Let $\tau_{R+1}\coloneqq \min \Big\{T , \inf\{ t\geq0 : \|X_t^\varepsilon \| \geq R+1 \},\inf\{ t\geq0 : \|Y_t^\varepsilon \| \geq R+1\} \Big\}$. Notice that for all $t\in [0,\tau_{R+1}]$ we have \begin{align*} \Big\| b\Big(t,X^\varepsilon_t, \mu_t^\varepsilon \Big) &- b\Big(t, Y^\varepsilon_t, \delta_{\psi^{x_{0}}(t)}\Big) \Big\| \\ &\leq \Big\| b\Big(t,X^\varepsilon_t,\mu_t^\varepsilon \Big) - b\Big(t,X^\varepsilon_t,\delta_{\psi^{x^{\varepsilon}_{0}}(t)}\Big) \Big\| + \Big\| b\Big(t,X^\varepsilon_t,\delta_{\psi^{x^{\varepsilon}_{0}}(t)}\Big) - b\Big(t,X^\varepsilon_t,\delta_{\psi^{x_{0}}(t)}\Big) \Big\| \\ &\quad + \Big\| b\Big(t,X^\varepsilon_t,\delta_{\psi^{x_{0}}(t)} \Big)-b\Big(t,Y^\varepsilon_t,\delta_{\psi^{x_{0}}(t)}\Big) \Big\| \\ &\leq L \mathbb{E}\Big[ \|X^\varepsilon_t - \psi^{x^{\varepsilon}_{0}}(t)\|^2\Big]^{\frac{1}{2}} + L\| \psi^{x_0^\varepsilon}(t)-\psi^{x_0}(t) \| + L_R \|X^\varepsilon_t-Y^\varepsilon_t \|. \end{align*} Hence \begin{equation*} \Big\| b\Big(t,X^\varepsilon_t, \mu_t^\varepsilon \Big) - b\Big(t, Y^\varepsilon_t, \delta_{\psi^{x_{0}}(t)}\Big) \Big\| \leq B^1_{R} \big(\rho^1(\varepsilon)+\|Z_t^\varepsilon\|^2 \big)^{\frac{1}{2}}, \end{equation*} for a constant $B^1_R$ large enough, and $\rho^1(\varepsilon)\coloneqq \mathbb{E}\|X_t^\varepsilon-\psi^{x^{\varepsilon}_0}(t)\|^2+\| \psi^{x_0^\varepsilon}(t)-\psi^{x_0}(t) \| \underset{\varepsilon \to 0}{\to} 0$ by \eqref{equation X goes to skeleton}. 
Furthermore for $t\in[0,\tau_{R+1}]$ we also have \begin{align*} \Big\| \int_{\mathbb{R}^d} &f(X^\varepsilon_t-x)d\mu_t^\varepsilon-f(Y^\varepsilon_t-\psi^{x_0}(t)) \Big\| \\ \leq& \Big\| \int_{\mathbb{R}^d} f(X^\varepsilon_t-x)-f(X^\varepsilon_t-\psi^{x^{\varepsilon}_0}(t)) \Big\| + \Big\| f(X^\varepsilon_t-\psi^{x^{\varepsilon}_0}(t))-f(X^\varepsilon_t-\psi^{x_0}(t)) \Big\| \\ &+ \Big\| f(X^\varepsilon_t-\psi^{x_0}(t))-f(Y^\varepsilon_t-\psi^{x_0}(t)) \Big\| \\ \leq& \Big\| \int_{\mathbb{R}^d} f(X_t^\varepsilon-x)d\mu^\varepsilon_t -f(X-\psi^{x_0^\varepsilon}(t)) \Big\| + L_R \Big\| \psi^{x^{\varepsilon}_0}(t)-\psi^{x_0}(t) \Big\| + L_R \|Z_t\|. \end{align*} Hence \begin{equation*} \|b_t\|\leq B^2_R\Big( \rho^2(\varepsilon)+\|Z_t\|^2 \Big)^{\frac{1}{2}}, \end{equation*} for a constant $B^2_R$ and $\rho^2(\varepsilon)\coloneqq \| \int_{\mathbb{R}^d} f(X_t^\varepsilon-x)d\mu^\varepsilon_t -f(X-\psi^{x_0^\varepsilon}(t)) \| +\|\psi^{x^{\varepsilon}_0}(t)-\psi^{x_0}(t) \|\underset{\varepsilon \to 0}{\to} 0$, thanks to \eqref{equation convergence of law to the Diract path}. Now for the diffusion term, \begin{align*} \|\sigma_t \|\leq& \Big\| \sigma\Big(t,X^\varepsilon_t,\mu_t^\varepsilon \Big)-\sigma\Big(t,X^\varepsilon_t,\delta_{\psi^{x^\varepsilon_{0}}(t)}\Big) \Big\| + \Big\| \sigma\Big(t,X^\varepsilon_t,\delta_{\psi^{x^\varepsilon_{0}}(t)}\Big)-\sigma\Big(t,X^\varepsilon_t,\delta_{\psi^{x_{0}}(t)}\Big) \Big\| \\ &+\Big\| \sigma\Big(t,X^\varepsilon_t,\delta_{\psi^{x_{0}}(t)}\Big)-\sigma\Big(t,Y^\varepsilon_t,\delta_{\psi^{x_{0}}(t)}\Big) \Big\| \\ \leq& L \Big( \mathbb{E}\Big[ \|X^\varepsilon_t-\psi^{x^{\varepsilon}_{0}}(t)\|^2\Big]^{\frac{1}{2}} + \| \psi^{x_0^\varepsilon}(t)-\psi^{x_0}(t) \| + \|X^\varepsilon_t-Y^\varepsilon_t \| \Big). \end{align*} Hence \begin{equation} \|\sigma_t \|\leq M\big( \rho(\varepsilon)+\|Z^\varepsilon_t\|^2 \big)^{\frac{1}{2}} , \end{equation} for a constant $M$ and $\rho(\varepsilon) \underset{\varepsilon \to 0}{\to}0$. Now fix $\delta>0$ and notice that \begin{align*} \Big\{ \sup_{t\in[0,T]} \| X^\varepsilon_t-Y^\varepsilon_t \| \geq \delta \Big\} \subseteq& \Big\{ \sup_{t\in[0,\tau_{R+1}]} \| X^\varepsilon_t-Y^\varepsilon_t \| \geq \delta, \tau_{R+1} = T \Big\} \cup \Big\{ \sup_{t\in[0,T]} \| X^\varepsilon_t-Y^\varepsilon_t \| \geq \delta, \tau_{R+1}< T \Big\} \\ \subseteq& \Big\{ \sup_{t\in[0,\tau_{R+1}]} \| X^\varepsilon_t-Y^\varepsilon_t \| \geq \delta \Big\} \cup \Big\{ \tau_{R+1}< T \Big\}. \end{align*} By Lemma \ref{lemma : adapted D+Z lemma 5.6.18} we know that \begin{equation*} \limsup_{\varepsilon\to 0}\varepsilon \log \Big( \mathbb{P}\Big[ \sup_{t\in[0,\tau_{R+1}]} \| X^\varepsilon_t - Y^\varepsilon_t \| \geq \delta \Big] \Big) =-\infty. \end{equation*} Furthermore, define $\tau^Y_{R} \coloneqq \inf\{t\geq0: \|Y^\varepsilon_t\| \geq R \} $, and notice that \begin{align*} \Big\{ \tau_{R+1}< T \Big\} \subseteq& \Big\{ \tau_{R+1}< T, \tau^Y_{R}\leq T \Big\} \cup \Big\{ \tau_{R+1}< T, \tau^Y_{R}> T \Big\} \\ \subseteq& \Big\{\tau_{R+1}< T \Big\} \cup \Big\{ \| X_{\tau^Y_{R}}^\varepsilon-Y_{\tau_{R+1}}^\varepsilon \|\geq 1 \Big\}. 
\end{align*} Again, setting $ \delta=1$ and using Lemma \ref{lemma : adapted D+Z lemma 5.6.18}, we have that \begin{equation*} \limsup_{\varepsilon\to 0}\varepsilon \log \Big( \mathbb{P}\Big[ \sup_{t\in[0,\tau_{R+1}]} \| X^\varepsilon_t - Y^\varepsilon_t \| \geq 1 \Big] \Big) = -\infty, \end{equation*} hence are left with \begin{align*} \limsup_{\varepsilon \to 0}\varepsilon \log \Big( \mathbb{P} \Big[ \sup_{t\in[0,T]} \|X_t^\varepsilon - Y^\varepsilon_t \|\geq \delta \Big] \Big) \leq& \limsup_{\varepsilon \to 0} \varepsilon \log \Big( \mathbb{P} \Big[ \sup_{t\in[0,T]} \|Y^\varepsilon_t \|\geq R \Big] \Big). \end{align*} Applying the LDP proved for $Y^\varepsilon$ in Lemma \ref{lemma the LDP for Y classical reflected sde} we conclude, \begin{align*} \limsup_{\varepsilon \to 0} \varepsilon \log \Big( \mathbb{P} \Big[ &\sup_{t\in[0,T]} \|X_t^\varepsilon - Y^\varepsilon_t \|\geq \delta \Big] \Big) \\ & \leq - \underset{\{\phi\in C_{x_0}([0,T];\mathbb{R}^d, ~:~\sup_{t\in[0,T]}\|\phi(t)\|\geq R \}}{\inf} ~~ I^{T}_{x_{0}}(\phi) \underset{R\to \infty}{\longrightarrow}-\infty, \end{align*} by the same arguments as the end of the proof of Lemma \ref{lemma : euler scheme is an expo good approximation}. \end{proof} An immediate consequence (choosing $x_0^\varepsilon=x_0$) we have an LDP for our reflected McKean-Vlasov equation's solution $X^\varepsilon$ of \eqref{eq:MVSS-LDP} with $X^{\varepsilon}_0=x_0$. The point of allowing $\varepsilon$-dependent initial conditions for $X^\varepsilon$ enables us to claim the LDP uniformly on compacts, similarly to \cite{HIP}*{Corollary 3.5}, or \cite{Herrmann2013StochasticR}*{Propositions 4.6 and 4.8}. We provide a statement and a brief proof, the full justification is identical to those found in \cites{HIP,Herrmann2013StochasticR}. \begin{corollary} Let $\mathbb{P}_{x_{0}}[X^\varepsilon\in\cdot]$ be the law on $C_{x_{0}}([0,T]; \mathbb{R}^d)$ of the solution $X^\varepsilon$ to \eqref{eq:MVSS-LDP} with $X_0^\varepsilon=x_0$. Let $M\subset \mathbb{R}^d$ be a compact subset. Then, for any Borel set $A\subset C([0,T]; \mathbb{R}^d)$, we have \begin{align} \label{equation uniform ldp upper} \liminf_{\varepsilon \to 0}\varepsilon \log \sup_{x_0\in M}\mathbb{P}_{x_0}[ X^{\varepsilon}\in A] \leq &-\inf_{x_0 \in M}\inf_{\phi\in \overline{A} } I_{x_0}^T(\phi), \end{align} and \begin{align} \label{equation uniform ldp lower} \liminf_{\varepsilon \to 0}\varepsilon \log \inf_{x_0\in M} \mathbb{P}_{x_0}[ X^{\varepsilon}\in A] \geq&-\sup_{x_0 \in M}\inf_{\phi \in A^{\circ} } I_{x_0}^T(\phi). \end{align} \end{corollary} \begin{proof} Allowing $\varepsilon$-dependent initial conditions, implies that (otherwise we would contradict the LDP) \begin{align*} \limsup_{\underset{ x_{\varepsilon}\to x_0}{\varepsilon \to 0}}\varepsilon\log \mathbb{P}_{x_{\varepsilon}}[ X^{\varepsilon}\in A ] \leq & - \inf_{\phi\in \overline{A}}I^T_{x_{0}}(\phi), \end{align*} then arguing as in \cite{DZ}*{Corollary 5.6.15} yields \eqref{equation uniform ldp upper}. The lower bound \eqref{equation uniform ldp lower} is done similarly. \end{proof} Furthermore, proceeding like in \cite{HIP} we could obtain uniform on compacts LDP for the process $X^\varepsilon$ started at some later time $s>0$, and initial condition $x_s^\varepsilon$. Such uniform LDP can be useful when obtaining exit-time results in the manner of \cite{HIP}. However we will not need them, and instead obtain exit-time results by the method of \cite{tugaut2016simple}. 
\section{Exit-time} \label{sec:ExitTimes} In this section we obtain a characterisation of the exit-time of $X^\varepsilon$ from an open subdomain $\mathfrak{D}\subset \mathcal{D}$ under several additional assumptions: strict convexity of potentials, the diffusion matrix is the identity matrix and time-homogeneity of the coefficients. These are motivated by applications (like \cites{di2017jump,di2019sharp}) where the exit-cost of the diffusion from a domain needs to be computed explicitly, here we refer to $\Delta$ in Theorem \ref{thm:ExitTime}. The results obtained in this section are, from a methodological point of view, inspired by \cite{tugaut2016simple}. Let us start by introducing the process of interest $(X_t^\varepsilon)_{t\geq 0}$ over $\mathbb{R}^d$ with dynamics \begin{align} \label{eq:ExitTime-Process} X_t^\varepsilon =& x_0 + \int_0^t b( X_s^\varepsilon) ds + \int_0^t f\ast \mu^\varepsilon_s(X_s^\varepsilon) ds + \sqrt{\varepsilon} W_t - k_t^\varepsilon, \quad \mathbb{P}\big[ X_t^\varepsilon \in dx \big] = \mu_t^\varepsilon (dx), \\ \nonumber |k^\varepsilon|_t =& \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^\varepsilon) d|k^\varepsilon|_s, \qquad k^\varepsilon_t = \int_0^t \mathbbm{1}_{\partial \mathcal{D}}(X_s^\varepsilon)\textbf{n}(X_s^\varepsilon) d|k^\varepsilon|_s. \end{align} \begin{assumption} \label{ass:ExitTime-Coefficients} Let $\mathcal{D}$ satisfy Assumption \ref{assumption:domain}. Let $r>1$ and let $b: \mathcal{D} \to \mathbb{R}^d$, $f:\mathbb{R}^d \to \mathbb{R}^d$ satisfy \begin{itemize} \item There exist functions $B: \mathcal{D} \to \mathbb{R}$ and $F:\mathbb{R}^d \to \mathbb{R}$ such that $$ b(x) = \nabla B(x), \quad f(x) = \nabla F(x), $$ \item $B$ is uniformly strictly concave, $\exists L>0$ such that $\forall x,y\in \mathcal{D}$, $$ \big\langle x-y, b( x) - b( y) \big\rangle \leq -L \| x - y\|^2, $$ \item $\exists G:\mathbb{R} \to \mathbb{R}$ a convex even polynomial such that $F(x) = G(\| x\|)$ of order $r$ where $$ G(\| x\|) < C ( 1+ \| x\|^r), $$ and $\forall x, y\in \mathbb{R}^d$ we have $\big\langle x - y, f(x) - f(y) \big\rangle \leq 0$, \item $\exists \tilde{x} \in \mathcal{D}^\circ $ such that $\inf_{x\in \mathcal{D}} \| b( x)\| = \| b( \tilde{x})\| = 0$. \end{itemize} \end{assumption} We study the metastability of the system around $\tilde{x}$ within the domain $\mathfrak{D}$. Intuitively, the dynamics of the process are similar to those of the non-reflected case, so that in the small noise limit the process spends most of its time around the stable point $\tilde{x}$ and with a high probability excursions from the stable point promptly return to it. Therefore, the only way to leave the domain $\mathfrak{D}$ is to receive a large shock from the driving noise, which is expected to take a long time to happen. \begin{defn} \label{defnPstasta} Let $\mathcal{G}$ be a subset of $\mathcal{D}$ and let $U:\mathcal{D} \to \mathbb{R}^d$. For all $x\in \mathcal{D}$, let $\varphi$ be the dynamical system $$\mathbb{R}^{+} \ni t\mapsto \varphi_t(x)= x + \int_0^t U ( \varphi_s(x)) ds.$$ We say that the domain $\mathcal{G}$ is \emph{stable by} $U$ if $\forall x\in \mathcal{G}$, $$ \Big\{ \varphi_t(x): \ t\in\mathbb{R}^{+} \Big\} \subset \mathcal{G}. $$ \end{defn} This is also referred to as ``positively invariant'' in other works. We now introduce supplementary assumptions on the domain $\mathfrak{D}$ in order to obtain the exit-time. 
The first one is slightly different from the one in \cite{HIP} as we do not assume that $\mathfrak{D}$ is stable by $b$ but instead we work with the following. \begin{assumption} \label{Ass:DomainDofExitTime} Let $\mathfrak{D} \subset \mathcal{D}$ be an open, connected set containing $\tilde{x}$ such that $ \overline{\mathfrak{D}} \subset \mathcal{D}$ and $\partial \mathcal{D}\cap \mathfrak{D} = \emptyset$. Let $x_0\in \mathfrak{D}$. Let $\psi_t=x_0 + \int_0^t b( \psi_s )ds$. The orbit $$ \Big\{ \psi_{t}: t\in \mathbb{R}^{+} \Big\} \subset \mathfrak{D}. $$ Further domain $\mathfrak{D}$ is stable by $b( \cdot) +f(\cdot-\tilde{x})$. \end{assumption} Roughly speaking, when the time is small, the reflected self-stabilizing diffusion behaves like the dynamical system $\{\psi_t\}_{t\in[0,T]}$. As a consequence, and in order to have a non-trivial exit-time, we assume that the orbit of the dynamical system without noise stays in the domain $\mathfrak{D}$. After a long time, the reflected self-stabilizing diffusion stays close to a linear reflected diffusion with potential $B( \cdot) + F\ast\delta_{\tilde{x}}$. It is then natural to assume that the domain is stable by $b(\cdot) +f(\cdot-\tilde{x})$. \begin{defn} \label{defn:balance} Let $x\in \mathcal{D}$. Let $r>1$ and let $\kappa>0$. Let $\mathbb{B}_x^{\kappa, r} \subset \mathcal{P}_r(\mathcal{D})$ denote the set of all the probability measures such that $$ \int_{\mathcal{D}} \| y - x \|^{r}\mu(dy) \leq \kappa^{r}. $$ \end{defn} We study the distribution of the following stopping time. \begin{defn} \label{dfn:ExitTime} Let $\mathfrak{D} \subset \mathbb{R}^d$, $x_0, \tilde{x}\in \mathbb{R}^d$ satisfy Assumption \ref{Ass:DomainDofExitTime}. Let $\varepsilon>0$ and let $X^{\varepsilon}$ be the solution to \eqref{eq:ExitTime-Process}. Define the exit-time $\tau_\mathfrak{D}(\varepsilon)$ of $X^{\varepsilon}$ from the domain $\mathfrak{D}$ as $$ \tau_\mathfrak{D}(\varepsilon):= \inf\Big\{ t\geq 0: X^{\varepsilon}_t\notin \mathfrak{D} \Big\}. $$ \end{defn} Within classical SDE theory, there is no difference between the reflected and the non-reflected process since the exit domain $\mathfrak{D}$ is necessarily contained in the domain of constraint $\mathcal{D}$. This is not the case for McKean-Vlasov equations where the reflective term acts on the law to ensure it remains on the domain $\mathcal{D}$ and is thus different from the law of the non-reflected McKean-Vlasov. In the language of particle systems, see \eqref{eq:ParticleSystem}, each particle $i$ is additionally affected by the reflections of all other particles $j \neq i$. One of our contributions here is to rigorously argue that although the law of the reflected process and the law of the non-reflected process are different, the difference \textit{does not} affect the distribution of the exit-time $\tau_\mathfrak{D}(\varepsilon)$. Further, we remark that the results of Sections \ref{subsec;ExitTime_CoM}, \ref{subsec;ExitTime_PoEbC} and \ref{subsec;ExitTime_CR} typically hold under much broader conditions than those of Assumption \ref{ass:ExitTime-Coefficients}. This not the case for the proof of Theorem \ref{thm:ExitTime} which relies on classical methods and so determines the scope of our results. \subsection{Control of the moments} \label{subsec;ExitTime_CoM} In this section, we study the distance between the law of the process at time $t$ and the Dirac measure at $\tilde{x}$. \begin{defn} \label{dfn:ExitTime-Moments} Let $\mathcal{D}$ satisfy Assumption \ref{assumption:domain}. 
Let $W$ be a $d$-dimensional Brownian motion and let $r>1$, $b$, $f$, $x_0$ and $\tilde{x}$ satisfy Assumption \ref{ass:ExitTime-Coefficients}. Let $X^\varepsilon$ be the solution to Equation \eqref{eq:ExitTime-Process}. Define $\xi_\varepsilon^r: \mathbb{R}^+ \to \mathbb{R}^+$ to be $$ \xi_\varepsilon^r(t):= \mathbb{E}\Big[ \| X_t^\varepsilon - \tilde{x} \|^{r} \Big]. $$ For $\kappa>0$, define \begin{equation*} T^{\kappa,r} (\varepsilon):= \min \Big\{t\geq 0: \xi_\varepsilon^r(t)\leq\kappa^{r} \Big\}. \end{equation*} \end{defn} \begin{prop} \label{prop:ExitTime-moments:1} We have $$ \sup_{t\in \mathbb{R}^{+} } \xi_\varepsilon^r (t)\leq\max \Big\{ \|x_0-\tilde{x} \|^{r}, \Big( \tfrac{d\varepsilon(r-1)}{2L}\Big)^{r/2} \Big\}. $$ For $\varepsilon < \tfrac{\kappa^2 L}{d(r-1)}$, we have $$ T^{\kappa,r} (\varepsilon) \leq \tfrac{1}{rL} \log\Big( \tfrac{2\| x_0 - \tilde{x}\| }{\kappa^2} - 1\Big). $$ Finally, for all $t\geq T^{\kappa,r}(\varepsilon)$ with $\varepsilon < \tfrac{\kappa^2 L}{2r-1}$ we have $\xi_\varepsilon(t)\leq\kappa^{2r}$. \end{prop} \begin{proof} Let $t\in\mathbb{R}^+$. We apply the It\^o formula, integrate, take expectations and then the derivative in time. We obtain \begin{align*} \xi_\varepsilon^r(t) =& \mathbb{E}\Big[ \| x_0 - \tilde{x}\|^r \Big] \\ &+ \int_0^t r\mathbb{E}\Big[ \| X_s^\varepsilon - \tilde{x} \|^{r-2}\Big\langle X_s^\varepsilon - \tilde{x}, b( X_s^\varepsilon )\Big \rangle\Big] + r \mathbb{E}\Big[ \| X_s^\varepsilon - \tilde{x} \|^{r-2} \Big\langle X_s^\varepsilon - \tilde{x}, f\ast\mu_s^\varepsilon (X_s^\varepsilon) \Big\rangle\Big] ds \\ &+\frac{dr(r-1)}{2} \varepsilon \int_0^t \mathbb{E}\Big[ \| X_s^\varepsilon - \tilde{x} \|^{r-2} \Big] ds - r \mathbb{E}\Big[ \int_0^t \| X_s^\varepsilon - \tilde{x}\|^{r-1} \Big\langle X_s^\varepsilon - \tilde{x}, dk_s^\varepsilon \Big\rangle\Big]. \end{align*} Using the uniform strict concavity of $B$, we get $$ r \int_0^t \mathbb{E}\Big[ \| X_s^\varepsilon - \tilde{x} \|^{r-2}\Big\langle X_s^\varepsilon - \tilde{x}, b( X_s^\varepsilon )\Big \rangle\Big] ds \leq -rL \int_0^t \xi_\varepsilon^r (s) ds. 
$$ Next, denoting by $\overline{X_t^\varepsilon}$ an independent version of $X_t^\varepsilon$ and $G$ the concave even polynomial such that $F(x) = G(\| x\|)$, we get \begin{align*} r&\int_0^t \mathbb{E}\Bigg[ \| X_s^\varepsilon - \tilde{x}\|^{r-2} \frac{ G'\big(\| X_s^\varepsilon - \overline{X_s^\varepsilon}\| \big) }{\| X_s^\varepsilon - \overline{X_s^\varepsilon}\| } \Big\langle X_s^\varepsilon - \overline{X_s^\varepsilon}, X_s^\varepsilon - \tilde{x} \Big\rangle \Bigg] \\ &= r\int_0^t \mathbb{E}\Bigg[ \frac{ G'\big(\| X_s^\varepsilon - \overline{X_s^\varepsilon}\| \big) }{\| X_s^\varepsilon - \overline{X_s^\varepsilon}\| } \Big\langle \big( X_s^\varepsilon - \tilde{x}\big) - \big( \overline{X_s^\varepsilon} - \tilde{x}\big) , \big( X_s^\varepsilon - \tilde{x}\big) \| X_s^\varepsilon - \tilde{x}\|^{r-2} \Big\rangle \Bigg] ds \\ &= \frac{r}{2} \int_0^t \mathbb{E}\Bigg[ \frac{ G'\big(\| X_s^\varepsilon - \overline{X_s^\varepsilon}\| \big) }{\| X_s^\varepsilon - \overline{X_s^\varepsilon}\| } \Big\langle \big( X_s^\varepsilon - \tilde{x}\big) - \big( \overline{X_s^\varepsilon} - \tilde{x}\big) , \big( X_s^\varepsilon - \tilde{x}\big) \| X_s^\varepsilon - \tilde{x}\|^{r-2} - \big( \overline{X_s^\varepsilon} - \tilde{x}\big) \| \overline{X_s^\varepsilon} - \tilde{x}\|^{r-2} \Big\rangle \Bigg] ds \\ &\leq 0, \end{align*} since by Cauchy–Schwarz inequality, $\forall x,y\in\mathbb{R}^d$ (see alternatively \cite{HIP}*{Lemma 2.3 (d)}) \begin{align*} \big\langle x \| x\|^{r-2} - y\| y\|^{r-2}, x - y \big\rangle \geq \big( \| x\|^{r-1}-\| y\|^{r-1}\big)\big( \| x\| - \| y\| \big) \geq 0. \end{align*} We obtain $$ \frac{d}{dt} \xi_\varepsilon^r(t) \leq -rL \cdot \xi_\varepsilon^r (t)^{1-\frac{2}{r}} \Big( \xi_\varepsilon^r (t)^{\frac{2}{r}} - \frac{d(r-1)\varepsilon}{2L} \Big) . $$ Thus we get the bound $$ |\xi_\varepsilon^r(t)|^{\frac{2}{r}} \leq \max\Big\{ \tfrac{d(r-1)\varepsilon}{2L}, \| x_0 - \tilde{x}\|^2\Big\}. $$ Choosing $\varepsilon<\tfrac{\kappa^2 L}{d(r-1)}$, we see $\sup_{t\in \mathbb{R}^+} |\xi_\varepsilon^r(t)|^{\frac{2}{r}} \leq \max\Big\{ \tfrac{\kappa^2}{2}, \| x_0 - \tilde{x}\|^2\Big\}$. Now additionally suppose that $\| x_0 - \tilde{x}\|^2> \tfrac{\kappa^2}{2}$ then we get the upper bound $$ | \xi_\varepsilon^r(t)|^{\frac{2}{r}} \leq \tfrac{\kappa^2}{2} + \Big( \| x_0 - \tilde{x}\|^2 - \tfrac{\kappa^2}{2}\Big) \exp\Big( -rLt\Big). $$ In this case $$ T^{\kappa, r}(\varepsilon) \leq \tfrac{1}{rL} \log\Big( \tfrac{2\| x_0 - \tilde{x}\| }{\kappa^2} - 1\Big). $$ Conversely, if $\| x_0 - \tilde{x}\|^2\leq \tfrac{\kappa^2}{2}$ then $T^{\kappa, r}(\varepsilon) = 0$. \end{proof} \subsection{Probability of exiting before converging} \label{subsec;ExitTime_PoEbC} Recall that after time $T^{\kappa,r}(\varepsilon)$, the process $X_t^\varepsilon$ is expected to remain close to $\tilde{x}$. Additionally, it also happens that before time $T^{\kappa, r}(\varepsilon)$ and in the small noise limit the process $X_t^\varepsilon$ does not leave $\mathfrak{D}$. This can be argued from the fact that the dynamical system $\psi_t$ introduced in Assumption \ref{Ass:DomainDofExitTime} stays in the domain $\mathfrak{D}$. \begin{prop} Let $\tau_\mathfrak{D}(\varepsilon)$ be the stopping time as defined in Definition \ref{dfn:ExitTime}. Let $\xi_\varepsilon^r$ and $T^{\kappa, r}(\varepsilon)$ be as defined in Definition \ref{dfn:ExitTime-Moments}. Then for any $\kappa>0$ we have that $$ \lim_{\varepsilon\to 0}\mathbb{P}\Big[ \tau_{\mathfrak{D}}(\varepsilon)<T^{\kappa,r}(\varepsilon) \Big]=0. 
$$ \end{prop} \begin{proof} Let $t\in\mathbb{R}^+$. Then, \begin{align*} \mathbb{E}\Big[ \| X_t^\varepsilon - \psi_t \|^2 \Big] =& \varepsilon d t + 2\int_0^t \mathbb{E}\Big[ \Big\langle X_s^\varepsilon - \psi_s, b( X_s^\varepsilon) - b( \psi_s) \Big\rangle \Big] ds \\ &+2\int_0^t \mathbb{E}\Big[ \Big\langle X_s^\varepsilon - \psi_s, f\ast \mu_s^\varepsilon(X_s^\varepsilon) \Big\rangle \Big] ds -2\int_0^t \mathbb{E}\Big[ \Big\langle X_s^\varepsilon - \psi_s, dk^\varepsilon_s \Big\rangle \Big] . \end{align*} Using standard methods, we get $$ \mathbb{E}\Big[ \| X_t^\varepsilon - \psi_t\|^2 \Big] \leq \tfrac{\varepsilon d}{2L}\Big( 1 - \exp\Big(-2Lt\Big) \Big). $$ Then, for any $\delta>0$ define \begin{equation*} \tau_\delta(\varepsilon):= \inf\Big\{ t>0: \|X_t^\varepsilon - \psi_t\| > \delta \Big\}. \end{equation*} Thus for any $T>0$, $$ \lim_{\varepsilon\to 0} \mathbb{P}\Big[ \tau_\delta( \varepsilon )<T \Big]=0. $$ We are interested in the interval $[0, T^{\kappa,r} (\varepsilon)]$, which depends on $\varepsilon$ but has a uniform bound. Thus by Proposition \ref{prop:ExitTime-moments:1}, $$ \mathbb{P}\Big[ \tau_\delta(\varepsilon) < T^{\kappa,r}(\varepsilon) \Big] \leq \mathbb{P}\Big[\tau_\delta(\varepsilon) < \tfrac{1}{rL} \log\Big( \tfrac{2\| x_0 - \tilde{x}\| }{\kappa^2} - 1\Big) \Big], $$ which we just argued, goes to $0$ as $\varepsilon \to 0$. Finally, from Assumption \ref{Ass:DomainDofExitTime}, we have $\big\{\psi_t\,\,:\,\,t>0\big\}\subset \mathfrak{D}$ and consequently for any $\kappa>0$ we obtain the limit $$ \lim_{\varepsilon\to 0}\mathbb{P} \Big[ \tau_{\mathfrak{D}}(\varepsilon) < T^{\kappa,r}(\varepsilon) \Big]=0. $$ \end{proof} \subsection{The coupling result} \label{subsec;ExitTime_CR} Now, we study the exit of the diffusion from the domain after the time $T^{\kappa,r}(\varepsilon)$. To do so, we use the inequality $$ \sup_{t\geq T^{\kappa,r} (\varepsilon)} \xi_\varepsilon(t) \leq \kappa^{r}, $$ which holds for any $\kappa>0$ provided $\varepsilon < \frac{\kappa^2L}{d(r-1)}$. From this we deduce that the drift $b( \cdot) + f\ast\mu_t^\varepsilon(\cdot)$ is close to the vector field $b(\cdot) + f(\cdot - \tilde{x})$. Let $\mathcal{K}\subset \mathfrak{D}$ be a compact set with non-zero Lebesgue measure interior such that $\tilde{x} \in \mathfrak{D}$. We consider the following diffusion defined for $t\geq T^{\kappa,r}(\varepsilon)$ as \begin{align} \label{eq:RestartedSSMVE-Z} Z_t^\varepsilon =& X_{T^{\kappa,r}(\varepsilon)} + \sqrt{\varepsilon} \big(W_t - W_{T^{\kappa,r}(\varepsilon)} \big) + \int_{T^{\kappa,r}(\varepsilon)}^t b( Z_s^\varepsilon ) ds + \int_{T^{\kappa,r}(\varepsilon)}^t f\big( Z_s^\varepsilon - \tilde{x} \big) ds - k^{Z,\varepsilon}_t, \\ \nonumber |k^{Z, \varepsilon}|_t =& \int_{T^{\kappa,r}(\varepsilon)}^t \mathbbm{1}_{\partial \mathcal{D}} (Z_s^\varepsilon) d|k^{Z, \varepsilon}|_s, \quad k^{Z, \varepsilon}_t = \int_{T^{\kappa,r}(\varepsilon)}^t \mathbbm{1}_{\partial \mathcal{D}} (Z_s^\varepsilon) \textbf{n}(Z_s^\varepsilon) d|k^{Z, \varepsilon}|_s \qquad \mbox{when $X_{T^{\kappa,r}(\varepsilon)}^\varepsilon \in \mathcal{K}$} \\ \nonumber Z_t^\varepsilon =& X_t^\varepsilon \qquad \mbox{ if $X_{T^{\kappa, r}(\varepsilon)}^\varepsilon \notin \mathcal{K}$.} \end{align} \begin{defn} \label{dfn:ExitTime-StoppingTime} Let $\mathcal{D}$ satisfy Assumption \ref{assumption:domain}. Let $W$ be a $d$-dimensional Brownian motion and let $r>1$, $b$, $f$ $x_0$ and $\tilde{x}$ satisfy Assumption \ref{ass:ExitTime-Coefficients}. 
Let $\mathcal{K}$ be a compact set with non-zero Lebesgue measure interior that $\tilde{x}\in \mathcal{K}$ and $\mathcal{K}\subset \mathfrak{D}$. Let $X^\varepsilon$ be the solution to Equation \eqref{eq:ExitTime-Process} and let $Z^\varepsilon$ be the solution to \eqref{eq:RestartedSSMVE-Z}. Define the stopping times \begin{align*} \tau_{\mathcal{K},\kappa}(\varepsilon):= \inf \Big\{ t > T^{\kappa,r}(\varepsilon) : X_t^\varepsilon \notin \mathcal{K} \Big\}, \qquad \tau_{\mathcal{K},\kappa}'(\varepsilon):= \inf \Big\{ t > T^{\kappa,r}(\varepsilon) : Z_t^\varepsilon \notin \mathcal{K} \Big\}, \end{align*} and $\mathcal{T}_{\mathcal{K},\kappa} (\varepsilon) := \min \Big\{ \tau_{\mathcal{K},\kappa} (\varepsilon) ,\tau_{\mathcal{K},\kappa}'(\varepsilon) \Big\}$. \end{defn} The following Proposition establishes a coupling between $X^\varepsilon$ the reflected McKean-Vlasov SDE and $Z^\varepsilon$ the reflected SDE. That is, in the time interval $[T^{\kappa,r}(\varepsilon),\mathcal{T}_{\mathcal{K},\kappa}(\varepsilon)]$ the processes remain close to each other with high probability when the noise is small enough. \begin{prop} \label{prop:ExitTime-poc:uniforme} Let $\mathcal{T}_{\mathcal{K}, \kappa}$ be as in Definition \ref{dfn:ExitTime-StoppingTime}. Then $\exists \kappa_0>0$ such that $\forall \kappa < \kappa_0$ $\exists \varepsilon_0>0$ such that $\forall \varepsilon< \varepsilon_0$ we have \begin{equation*} \mathbb{P} \left[ \sup_{T^{\kappa,r}(\varepsilon) \leq t \leq \mathcal{T}_{\mathcal{K},\kappa}(\varepsilon)} \| Z_t^\varepsilon - X_t^\varepsilon \| \geq \eta(\kappa) \right] \leq \eta(\kappa), \end{equation*} where $\eta$ is some positive, continuous and increasing function such that $\eta(0)=0$. \end{prop} \begin{proof} Let $t\in \mathbb{R}^+$. If $X_{T^{\kappa,r}(\varepsilon)} \in \mathcal{K}$ then, for all $T^{\kappa,r}(\varepsilon) \leq t \leq \mathcal{T}_{\kappa}(\varepsilon)$, we have \begin{align*} \| Z_t^\varepsilon - X_t^\varepsilon \|^2 = & + 2\int_{T^{\kappa,r}(\varepsilon)}^t \Big\langle Z_s^\varepsilon - X_s^\varepsilon, b( Z_s^\varepsilon) - b( X_s^\varepsilon) \Big\rangle ds \\ &+ 2\int_{T^{\kappa,r}(\varepsilon)}^t \Big\langle Z_s^\varepsilon - X_s^\varepsilon, f(Z_s^\varepsilon - \tilde{x}) - f\ast \mu_s^\varepsilon (X_s^\varepsilon) \Big\rangle ds - 2\int_{T^{\kappa,r}(\varepsilon)}^t \Big\langle Z_s^\varepsilon - X_s^\varepsilon, dk_s^{Z, \varepsilon} - dk_s^\varepsilon \Big\rangle. \end{align*} Set $$ \eta(\kappa):= \sup_{ \nu \in \mathbb{B}_{\tilde{x}}^{\kappa,r}} \sup_{x\in\mathcal{K}} \Big( \frac{\| f \ast \nu(x) - f(x-\tilde{x}) \| }{L} \Big)^{\frac{2}{3}}, $$ where $\mathbb{B}_{\tilde{x}}^{\kappa,r}$ was introduced in Definition \ref{defn:balance}. Using Assumption \ref{assumption:domain} and Gr\"onwall Inequality, we get $$ \sup_{T^{\kappa, r}(\varepsilon) \leq t \leq \mathcal{T}_{\mathcal{K},\kappa}(\varepsilon)} \| Z_t^\varepsilon - X_t^\varepsilon \|^2 \leq \eta(\kappa)^3 \quad \Rightarrow \quad \mathbb{E}\Big[ \sup_{T^{\kappa, r}(\varepsilon) \leq t \leq \mathcal{T}_{\mathcal{K},\kappa}(\varepsilon)} \| Z_t^\varepsilon - X_t^\varepsilon \|^2 \Big]\leq \eta(\kappa)^3. $$ Appealing to Markov's inequality yields the claim. 
\end{proof} \subsection{The Exit-time result} \label{subsec;ExitTime_Result} Let $\tilde{Z}^\varepsilon$ evolve as $Z^\varepsilon$ without reflection, that is for $t\in[T^{\kappa, r}(\varepsilon),\infty)$, \begin{align*} \tilde{Z}_t^\varepsilon = X_{T^{\kappa, r}(\varepsilon)} + \sqrt{\varepsilon} \big( W_t - W_{T^{\kappa,r}(\varepsilon)} \big) + \int_{T^{\kappa,r}(\varepsilon)}^t b( \tilde{Z}_s^\varepsilon ) ds + \int_{T^{\kappa,r}(\varepsilon)}^t f (\tilde{Z}_s^\varepsilon - \tilde{x})ds. \end{align*} As the closure of the domain $\mathfrak{D}$ from which the process exits is included into the domain $\mathcal{D}$ where there is reflection, we remark that $Z_t^\varepsilon = \tilde{Z}_t^\varepsilon$ whilst $t\leq\tau_{\mathfrak{D}}'(\varepsilon)$, where $$ \tau_{\mathfrak{D}}'(\varepsilon):= \inf \Big\{t \geq T^{\kappa,r}(\varepsilon): \tilde{Z}_t^\varepsilon \notin \mathfrak{D} \Big\}. $$ As a consequence, the first exit-time from $\mathfrak{D}$ of the diffusion $\tilde{Z}^\varepsilon$ is the same as the first exit-time from $\mathfrak{D}$ of the diffusion $Z^\varepsilon$. However, the latter exit-time is well understood thanks to the classical Freidlin-Wentzell theory. The familiar reader will recognise $\Delta$ given as \begin{equation*} \Delta:= \inf_{z\in \partial \mathfrak{D}} \Big\{ B(z) + F(z-\tilde{x}) - B(\tilde{x}) \Big\}, \end{equation*} to be the exit cost from the domain $\mathfrak{D}$, see \cite{tugaut:tel-01748560}*{Proposition B.4, Item 3}. \begin{theorem} \label{thm:ExitTime} Let $\mathcal{D}$ satisfy Assumption \ref{assumption:domain}. Let $W$ be a $d$-dimensional Brownian motion and let $r>1$, $b$, $f$, $x_0$ and $\tilde{x}$ satisfy Assumption \ref{ass:ExitTime-Coefficients}. Let $X^\varepsilon$ be the solution to Equation \eqref{eq:ExitTime-Process}. Then for all $\delta>0$ the following limit holds \begin{equation*} \lim_{\varepsilon\to 0} \mathbb{P}\left[ \tfrac{2}{\varepsilon}(\Delta-\delta) < \log\Big( \tau_{\mathfrak{D}}(\varepsilon)\Big) < \tfrac{2}{\varepsilon}(\Delta+\delta) \right] = 1. \end{equation*} \end{theorem} \begin{proof} The proof is inspired by \cite{T2011f}, we proceed in a stepwise fashion. {\bf Step 1.} Let $\kappa>0$ and we introduce the usual least distance of $x\in\mathbb{R}^d$ to a (non-empty) set $A\subset \mathbb{R}^d$ as $d(x;A) := \inf\{ \|x-a\| : a \in A\}$. We can prove (by proceeding like in \cite{T2011f}*{Proposition 2.2}) that there exist two families of domains $\left(\mathfrak{D}_{i,\kappa}\right)_{\kappa>0}$ and $\left(\mathfrak{D}_{e,\kappa}\right)_{\kappa>0}$ such that \begin{itemize} \item $\mathfrak{D}_{i,\kappa}\subset \mathfrak{D}\subset \mathfrak{D}_{e,\kappa}$, \item $\mathfrak{D}_{i,\kappa}$ and $\mathfrak{D}_{e,\kappa}$ are stable by $b(s, \cdot) + f(\cdot -\tilde{x})$, \item $\sup_{z\in\partial \mathfrak{D}_{i,\kappa}} {\rm d}\left(z ; \mathfrak{D}^c \right) + \sup_{z\in\partial \mathfrak{D}_{e,\kappa} }{\rm d} \left( z; \mathfrak{D} \right)$ tends to $0$ when $\kappa$ goes to $0$, \item $\inf_{z\in\partial \mathfrak{D}_{i,\kappa}}{\rm d}\left(z\,;\,\mathfrak{D}^c\right)=\inf_{z\in\partial \mathfrak{D}_{e,\kappa}}{\rm d}\left(z; \mathfrak{D} \right) = r(\kappa)$. \end{itemize} \noindent{}{\bf Step 2.} By $\tau_{i,\kappa}'(\varepsilon)$ (resp. $\tau_{e,\kappa}'(\varepsilon)$), we denote the first exit-time of $Z^\varepsilon$ from $\mathfrak{D}_{i,\kappa}$ (resp. $\mathfrak{D}_{e,\kappa}$). 
\\ \noindent{}{\bf Step 3.} We prove here the upper bound: \begin{align*} \mathbb{P} \left[ \tau_{\mathfrak{D}}(\varepsilon) \geq e^{\frac{2(\Delta+\delta)}{\varepsilon}} \right] & = \mathbb{P}\left[ \tau_{\mathfrak{D}}(\varepsilon) \geq e^{\frac{2(\Delta+\delta)}{\varepsilon}}, \tau_{e,\kappa}'(\varepsilon)\geq e^{\frac{2(\Delta+\delta)}{\varepsilon}}\right] +\mathbb{P}\left[ \tau_{\mathfrak{D}}(\varepsilon) \geq e^{\frac{2(\Delta+\delta)}{\varepsilon}}, \tau_{e,\kappa}'(\varepsilon) < e^{\frac{2(\Delta+\delta)}{\varepsilon}} \right] \\ & \leq \mathbb{P} \left[ \tau'_{e,\kappa}(\varepsilon) \geq e^{\frac{2(\Delta+\delta)}{\varepsilon}} \right] +\mathbb{P} \left[ \tau_{\mathfrak{D}}(\varepsilon) \geq e^{\frac{2(\Delta+\delta)}{\varepsilon}} , \tau_{e,\kappa}'(\varepsilon) < e^{\frac{2(\Delta+\delta)}{\varepsilon}} \right] \\ &=:a_\kappa(\varepsilon) + b_\kappa(\varepsilon). \end{align*} {\bf Step 3.1.} By classical results in Freidlin-Wentzell theory, \cite{Herrmann2013StochasticR}*{Theorem 2.42 }, there exists $\kappa_1>0$ such that for all $0<\kappa<\kappa_1$, we have $$ \lim_{\varepsilon \to 0}\mathbb{P} \left[ \tau_{e,\kappa}'(\varepsilon) < \exp\left( \frac{2}{\varepsilon} \left(\Delta+\delta\right) \right)\right]=1. $$ Therefore, the first term $a_\kappa(\varepsilon)$ tends to $0$ as $\varepsilon$ goes to $0$. \noindent{}{\bf Step 3.2.} For $\kappa$ sufficiently small, we have $\mathfrak{D}_{e,\kappa} \subset \mathcal{K}$ and consequently we have \begin{align*} &\mathbb{P} \Big[ \tau_{\mathfrak{D}}(\varepsilon) \geq e^{\frac{2(\Delta+\delta)}{\varepsilon}} , \tau_{e,\kappa}'(\varepsilon) \leq e^{\frac{2(\Delta+\delta)}{\varepsilon}} \Big] \\ & \qquad \leq \mathbb{P} \Big[ \| X_{\tau_{e,\kappa}'(\varepsilon)} - Z_{\tau_{e,\kappa}'(\varepsilon)} \|\geq \eta(\kappa) \Big] \leq \mathbb{P} \Big[ \sup_{T^{\kappa,r} (\varepsilon) \leq t\leq \mathcal{T}_{\mathcal{K},\kappa}(\varepsilon)} \| X_{t}^\varepsilon - Z_{t}^\varepsilon \|\geq \eta(\kappa) \Big]. \end{align*} According to Proposition \ref{prop:ExitTime-poc:uniforme}, there exists $\varepsilon_0>0$ such that the previous term is less than $\eta(\kappa)$ for all $\varepsilon<\varepsilon_0$. \\ {\bf Step 3.3.} Let $\delta>0$. By taking $\kappa$ arbitrarily small, we obtain the upper bound $$ \lim_{\varepsilon \to 0}\mathbb{P} \left[ \tau_{\mathfrak{D}} (\varepsilon) \geq \exp \left( \frac{2(\Delta+\delta)}{\varepsilon} \right) \right] =0. $$ \noindent{\bf Step 4.} Analogous arguments show that $\lim_{\varepsilon \to 0} \mathbb{P} \left[ T^{\kappa, r} (\varepsilon) \leq \tau_{\mathfrak{D}}(\varepsilon) \leq e^{\frac{2(\Delta-\delta)}{\varepsilon}} \right]=0$. However, by Proposition \ref{subsec;ExitTime_PoEbC} we have $\lim_{\varepsilon\to 0}\mathbb{P}\left[ \tau_{\mathfrak{D}}(\varepsilon) \leq T^{\kappa,r}(\varepsilon) \right]=0$. This concludes the proof. \end{proof}
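For intuition, the exponential scaling in Theorem \ref{thm:ExitTime} can be illustrated with a toy one-dimensional simulation. The sketch below is purely illustrative and is not part of the argument above: it ignores the reflection and freezes the self-interaction at its limiting value, takes $b(x)=-x$ and $f(x)=-\alpha x$ (so that, in the gradient setting $b=-\nabla B$, $f=-\nabla F$, we have $B(x)=x^2/2$, $F(x)=\alpha x^2/2$ and $\tilde{x}=0$), and estimates $\varepsilon\log\tau_{\mathfrak{D}}(\varepsilon)$ for $\mathfrak{D}=(-a,a)$, to be compared with $2\Delta=(1+\alpha)a^2$. All names and parameter values are our own choices.
\begin{verbatim}
import numpy as np

# Toy illustration of log(tau) ~ 2*Delta/eps (not the exact setting of the theorem:
# no reflection, interaction frozen at f(x - xtilde) = -alpha*x, dimension 1).
rng = np.random.default_rng(0)
alpha, a, dt = 0.5, 1.0, 1e-3
Delta = (1.0 + alpha) * a**2 / 2.0     # exit cost: B(a) + F(a) - B(0)

def exit_time(eps, t_max=1e6):
    # Euler-Maruyama for dX = -(1 + alpha) X dt + sqrt(eps) dW, started at 0
    x, t, sq = 0.0, 0.0, np.sqrt(eps * dt)
    while abs(x) < a and t < t_max:
        x += -(1.0 + alpha) * x * dt + sq * rng.standard_normal()
        t += dt
    return t

for eps in (0.6, 0.45, 0.3):
    taus = [exit_time(eps) for _ in range(20)]
    print(f"eps={eps:.2f}  eps*log(mean tau)={eps * np.log(np.mean(taus)):.2f}"
          f"  2*Delta={2 * Delta:.2f}")
\end{verbatim}
As $\varepsilon$ decreases, the estimate is expected to approach $2\Delta$ (up to prefactor corrections), at the cost of exponentially longer simulations.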
\section{Introduction} There have been extensive studies on vortex ordering and pinning for superconductors with periodic arrays of pinning sites where the arrays have square or triangular order. In these systems the critical current or the force needed to depin the vortices passes through maxima due to commensuration effects that occur when the number of vortices is an integer multiple of the number of pinning sites \cite{1,2,3,4,5,6,7,8}. The field at which the number of vortices equals the number of pinning sites is labeled $B_\phi$, so that commensurate peaks arise at $B/B_{\phi} = n$, where $n$ is an integer. At these matching fields various types of vortex crystalline states can form as has been directly observed in experiments and confirmed in simulations \cite{9,10,11,12}. It is also possible for fractional matching commensurability effects to occur at fillings of $B/B_{\phi} = n/m$, where $n$ and $m$ are integers. For square and triangular arrays, these fractional matching peaks are typically smaller than the integer matching peaks. The fractional matchings are associated with different types of ordered or partially ordered vortex arrangements \cite{13,14,15,16,17}. In studies of rectangular pinning arrays, a crossover from matching of the full two-dimensional (2D) array to matching with only one length scale of the array occurs for increasing field \cite{18,19}. Honeycomb and kagome pinning arrays \cite{20,21,22,23,24} are constructed by the systematic dilution of a triangular pinning array. In a honeycomb array, every third pinning site of the triangular array is removed, while for a kagome array, every fourth pinning site is removed. In these systems there are strong commensurability effects at both integer and noninteger matching fields, where the noninteger matchings correspond to integer matchings of the original undiluted triangular array. A similar effect can occur for the random dilution of a triangular pinning lattice, where commensuration effects occur at integer matching fields as well as at the noninteger matching fields corresponding to the integer matching fields of the original undiluted pinning array \cite{25,26}. Other periodic pinning array geometries have also been studied which have artificial vortex spin ice arrangements \cite{27} or composite arrays of smaller and larger coexisting pinning sites \cite{28,29}. \begin{figure} \includegraphics[width=3.5in]{Fig1.eps} \caption{ (a) The sample geometry shown with the $3^34^2$ pinning array. Circles represent pinning sites, and vortices are added in the unpinned regions marked $A$. (b,c) Pinning arrays constructed from Archimedean tilings. Plaquettes around one pinning site are highlighted with dotted lines, and labeled with the number of sides of the plaquette. The side numbers read off in a clockwise order around the pinning site are used to name the array. (b) The $3^34^2$ pinning array, also called an elongated triangular tiling. (c) The $3^2434$ pinning array, also called a snub square tiling. } \label{fig:1} \end{figure} Here we propose and study new types of periodic pinning geometries that can be constructed from Archimedean tilings of the plane. In contrast to a regular tiling where a single type of regular polygon (such as a square or equilateral triangle) is used to tile the plane, an Archimedean tiling uses two or more different polygon types. 
We consider two examples of Archimedean tilings constructed with a combination of square and triangular plaquettes, the elongated triangular tiling illustrated in figure \ref{fig:1}(b), and the snub square tiling shown in figure \ref{fig:1}(c). The plaquettes around one vertex in each tiling are highlighted with dotted lines and marked with the number of sides. The tilings are named by reading off the number of sides in a clockwise order around the pinning site, giving 33344 (also written as $3^{3}4^{2}$) for the tiling in figure \ref{fig:1}(b), and 33434 (or $3^{2}434$) for the tiling in figure \ref{fig:1}(c). Figure \ref{fig:11} shows, for each tiling, the basis of plaquettes that is translated to generate the full tiling. For each tiling, we place pinning sites at the vertices of the polygons to generate a pinning array. In figure \ref{fig:1}(a) we illustrate the full simulation geometry for the $3^34^2$ pinning array. There are additional types of Archimedean tilings \cite{Grunbaum}, but here we concentrate on only the two tilings illustrated in figure \ref{fig:1}; pinning arrays constructed from other tilings will be described in a future work \cite{unpub}. \begin{figure} \includegraphics[width=3.5in]{Fig2.eps} \caption{ Bases for the various arrays. For each array, the pinning site basis generating the pinning array is plotted with thick dark circles, while the plaquette basis generating the tiling is indicated by the shaded polygons. Red arrows show the elementary translation vectors. (a) $3^34^2$ array. (b) $3^2434$ array. } \label{fig:11} \end{figure} It might be expected that the behavior of the Archimedean pinning arrays would not differ significantly from that of purely square or triangular pinning arrays; however, we find that the different plaquette types in the Archimedean tilings compete. The triangular and square plaquettes comprising the array have equal side lengths $a$; thus, from simple geometry, the distance from the center of a plaquette to any of its vertices will be larger for the square plaquette ($a/\sqrt{2}$ versus $a/\sqrt{3}$). As a consequence, interstitial vortices prefer to occupy square plaquettes rather than triangular plaquettes. This produces several strong matching effects at certain non-integer filling fractions where the vortices are ordered, and suppresses commensurability effects at certain integer fillings where the vortices are disordered. In some cases we even find a drop in the pinning site occupancy with increasing magnetic field. We also observe several partially ordered states as well as different fractional fields and submatching fields that do not arise in regular square or triangular periodic pinning arrays. \section{Simulation and System} In this work we utilize flux gradient density simulations as previously employed to study vortex pinning in random \cite{37}, periodic \cite{38}, and conformal pinning arrays \cite{30}. The sample geometry is illustrated in figure \ref{fig:1}(a). We consider a 2D system with periodic boundary conditions in the $x$- and $y$-directions. The sample size $36\lambda\times 36\lambda$ is measured in units of the penetration depth $\lambda$. Our previous studies indicate that this size of simulation box is sufficiently large to obtain experimentally relevant magnetization curves. The system represents a 2D slice of a three-dimensional type-II superconductor with a magnetic field applied in the perpendicular (${\hat z}$) direction, and we assume that the vortices behave as rigid objects.
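As a concrete illustration of how the pinning sites can be laid out, the following minimal sketch generates the vertices of the $3^34^2$ tiling from a two-site basis and two translation vectors (one possible choice, equivalent to the basis indicated in figure \ref{fig:11}(a)); the function name and the lattice extent are our own choices and are not taken from the simulations reported below.
\begin{verbatim}
import numpy as np

def elongated_triangular_sites(n1, n2, a=1.0):
    # 3^3.4^2 tiling: rows of side-a squares separated by rows of equilateral
    # triangles.  Two-site basis {(0,0),(0,a)} repeated along a1 = (a,0) and
    # a2 = (a/2, a(1 + sqrt(3)/2)).
    basis = np.array([[0.0, 0.0], [0.0, a]])
    a1 = np.array([a, 0.0])
    a2 = np.array([0.5 * a, a * (1.0 + np.sqrt(3.0) / 2.0)])
    return np.array([i * a1 + j * a2 + b
                     for i in range(n1) for j in range(n2) for b in basis])

pins = elongated_triangular_sites(6, 4)
# every nearest-neighbour spacing equals the common plaquette side a = 1
d = np.linalg.norm(pins[:, None, :] - pins[None, :, :], axis=-1)
print(pins.shape, round(float(d[d > 1e-9].min()), 6))   # (48, 2) 1.0
\end{verbatim}
The snub square array can be generated in the same way from a four-site basis, as indicated in figure \ref{fig:11}(b).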
We work in the London limit of vortices with pointlike cores, where the coherence length $\xi$ is much smaller than $\lambda$. The pinning sites are located in a $24\lambda$ wide region in the middle portion of the sample, with pin-free regions labeled $A$ in figure \ref{fig:1}(a) on either side. Vortices are added to region $A$ during the simulation, and their density in this region represents the externally applied field $H$. The vortices enter the pinned region from the edges and form a Bean gradient \cite{Bean}. The motion of the vortices is obtained by integrating the following overdamped equation of motion: \begin{equation} \eta \frac{d {\bf R}_{i}}{dt} = {\bf F}^{vv}_{i} + {\bf F}^{vp}_{i}. \end{equation} Here $\eta$ is the damping constant which is set equal to 1, ${\bf R}_{i}$ is the location of vortex $i$, ${\bf F}^{vv}_{i}$ is the vortex-vortex interaction force, and ${\bf F}^{vp}_{i}$ is the force from the pinning sites. The vortex-vortex interaction force is given by ${\bf F}^{vv}_{i} = \sum_{j\neq i}F_{0}K_{1}(R_{ij}/\lambda){\hat {\bf R}_{ij}}$, where $K_{1}$ is the modified Bessel function of the second kind, $R_{ij} = |{\bf R}_{i} - {\bf R}_{j}|$, $ {\hat {\bf R}_{ij}} = ({\bf R}_{i} - {\bf R}_{j})/R_{ij}$, and $F_{0} = \phi^{2}_{0}/(2\pi\mu_{0}\lambda^3)$, where $\phi_{0}$ is the flux quantum and $\mu_{0}$ is the permeability of free space. The pinning sites are modeled as $N_{p}$ non-overlapping parabolic traps with ${\bf F}^{vp}_{i} = \sum^{N_{p}}_{k= 1}(F_{p}R^{(p)}_{ik}/r_{p})\Theta((r_{p} -R^{(p)}_{ik})/\lambda){\hat {\bf R}^{(p)}}_{ik}$, where $\Theta$ is the Heaviside step function, $r_{p} = 0.12\lambda$ is the pinning radius, $F_{p}$ is the pinning strength, ${\bf R}_k^{(p)}$ is the location of pinning site $k$, $R_{ik}^{(p)} = |{\bf R}_{i} - {\bf R}_{k}^{(p)}|$, and $ {\hat {\bf R}_{ik}^{(p)}} = ({\bf R}_{i} - {\bf R}_{k}^{(p)})/R_{ik}^{(p)}$. We consider three pinning densities of $n_{p} = 1.0/\lambda^2$, $2.0/\lambda^{2}$, and $0.5/\lambda^{2}$, as well as a range of $F_{p}$ values. All forces are measured in units of $F_{0}$ and lengths in units of $\lambda$. The magnetic field $H$ is measured in terms of the matching field $H_{\phi}$ where the density of vortices equals the density of pinning sites. We measure only the first quarter of the magnetic hysteresis loop, which is sufficient to identify the different commensuration effects. We concentrate on the regime where only one vortex is trapped per pinning site, so that for fields greater than the first matching field, the additional vortices are located in the interstitial regions. \begin{figure} \includegraphics[width=3.5in]{Fig3.eps} \caption{ The magnetization $M$ vs $H/H_{\phi}$ for the $3^34^2$ lattice from figure \ref{fig:1}(b) with $n_{p} = 1.0/\lambda^2$. The various curves correspond to different values of the pinning strength $F_p$, which increases from bottom to top. (a) $F_{p} = 0.1$ (black), $0.2$ (red), $0.3$ (green). (b) $F_p=0.5$ (blue), $0.8$ (cyan), and $1.0$ (violet). } \label{fig:2} \end{figure} \section{Elongated Triangular Tiling ($3^34^2$) Lattice} We first consider the elongated triangular tiling or $3^34^2$ lattice shown in figure \ref{fig:1}(b), which consists of rows of square plaquettes separated by rows of triangular plaquettes. In figure \ref{fig:2} we plot the magnetization $M$ vs $H/H_{\phi}$, where $M = (1/4\pi V) \int (H - B) dV$ with $B$ representing the field inside the pinned region and $V$ its area. Here we use a pinning density of $n_{p} = 1.0/\lambda^2$.
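For concreteness, a single integration step of the overdamped dynamics defined in the previous section can be sketched as follows. This is only a schematic illustration, not the production code used for the figures: the periodic boundary conditions, the flux-gradient driving from region $A$, and the non-overlap condition on the pins are omitted, $F_0=\eta=1$, and the parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import k1   # modified Bessel function of the second kind, K_1

def euler_step(R, R_pin, dt=0.01, Fp=0.5, rp=0.12, lam=1.0):
    # overdamped step: eta dR_i/dt = F_i^vv + F_i^vp, with eta = F0 = 1
    dR = R[:, None, :] - R[None, :, :]          # R_i - R_j for all vortex pairs
    r = np.linalg.norm(dR, axis=-1)
    np.fill_diagonal(r, 1.0)                    # dummy value on the diagonal
    f = k1(r / lam) / r
    np.fill_diagonal(f, 0.0)                    # remove self-interaction
    F_vv = (f[:, :, None] * dR).sum(axis=1)     # repulsion of magnitude K1(R_ij/lam)

    dP = R[:, None, :] - R_pin[None, :, :]      # vortex-to-pin separations
    inside = np.linalg.norm(dP, axis=-1) < rp   # parabolic trap acts only within rp
    F_vp = -(np.where(inside, Fp / rp, 0.0)[:, :, None] * dP).sum(axis=1)  # pull toward pin centre

    return R + dt * (F_vv + F_vp)

rng = np.random.default_rng(1)
vortices = rng.uniform(0.0, 10.0, size=(50, 2))
pins = rng.uniform(0.0, 10.0, size=(25, 2))
vortices = euler_step(vortices, pins)
\end{verbatim}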
The critical current is proportional to the width of the magnetization loop, so a peak in $M$ corresponds to a peak in the critical current. figure \ref{fig:2}(a) shows $M$ for $F_{p} = 0.1$, $0.2$, and $0.3$. In each case there is a peak in $M$ associated with the first matching condition of $H/H_{\phi} = 1.0$; however, at $H/H_{\phi} = 2.0$ there is no peak. Instead, peaks appear at $H/H_{\phi} = 1.5$ and $2.5$. In figure \ref{fig:2}(b) we show $M$ versus $H/H_\phi$ for samples with stronger pinning of $F_{p} = 0.5$, $0.8$, and $1.0$. Here the peak at $H/H_{\phi} = 1.0$ is obscured by the initial rise in the magnetization; however, peaks are still present at $H/H_{\phi} = 1.5$ and $2.5$. \begin{figure} \includegraphics[width=3.5in]{Fig4.eps} \caption{ (a) Part of a sample vortex configuration in a $3^2434$ array. Large open circles: pinning sites; small filled circles: vortices. (b) Corresponding plaquette occupancy diagram. Filled black circles denote occupied pinning sites, open circles denote empty pinning sites, and colored tiles indicate plaquettes occupied by one (blue) or more (red) interstitial vortices. } \label{fig:3} \end{figure} \subsection{States at and above first matching field} In order to understand better the vortex states at fields beyond the first matching peak for both types of pinning arrays, we analyze the plaquette occupancy using the tiling coloring scheme illustrated in figure \ref{fig:3}. In figure \ref{fig:3}(a), we show a sample configuration of pinning sites and vortices for the $3^2434$ lattice from Fig.~\ref{fig:1}(c). Figure \ref{fig:3}(b) shows the same configuration represented as a plaquette occupancy diagram, with pinning sites marked either dark or light depending on whether they are filled or empty, and plaquettes marked either with white fill, dark fill, or light fill depending on whether they are occupied by zero, one, or more than one interstitial vortex, respectively. \begin{figure} \includegraphics[width=3.5in]{Fig5.eps} \caption{ Plaquette occupancy in representative strips of the $3^34^2$ sample from figure \ref{fig:1}(b), at field levels corresponding to peaks in figure \ref{fig:2}. The full width of the pinned region is shown. The coloring scheme is described in figure \ref{fig:3}. (a) $H/H_{\phi}=1.55$, $F_p=0.2$. (b) $H/H_{\phi}=1.55$, $F_p=0.5$. (c) $H/H_{\phi}=1.55$, $F_p=1.0$. At this field the square plaquettes are predominantly filled. (d) $H/H_{\phi}=2.56$, $F_p=0.2$. (e) $H/H_{\phi}=2.56$, $F_p=0.5$. (f) $H/H_{\phi}=2.56$, $F_p=1.0$. At this field most plaquettes are filled. } \label{fig:4} \end{figure} We focus on the field values at which peaks in $M$ appear in figure \ref{fig:2}. At $H/H_{\phi} = 1.0$, each pinning site captures one vortex, while at $H/H_{\phi} = 1.5$, the additional vortices predominantly occupy the interstitial regions at the center of the square plaquettes. This is shown in figure \ref{fig:4}(a-c) where we plot the plaquette occupancy at $H/H_{\phi} = 1.55$ for increasing $F_{p}$. (We select a value of $H/H_{\phi}$ slightly higher than $H/H_{\phi} = 1.5$ to compensate for the Bean gradient since the field outside the pinning region is larger than the field inside the pinned part of the sample.) As $F_{p}$ is increased, the ideal 1.5 state forms more cleanly. For the weak pinning situation $F_{p} = 0.2$ in figure \ref{fig:4}(a), there are a handful of unoccupied pinning sites, and interstitial vortices occupy some triangular plaquettes and doubly occupy some of the square plaquettes. 
At $F_{p} = 0.5$ in figure \ref{fig:4}(b), more of the pinning sites are filled and the interstitial vortices increasingly occupy only the square plaquettes. Finally, at strong pinning $F_{p} = 1.0$ in figure \ref{fig:4}(c), nearly all pinning sites are occupied, since those vortices which were pinned at the first matching field never depin, so that as the field is raised above the first matching field, interstitial vortices enter the sample along the rows of square plaquettes, filling them. We note that there are a larger number of empty plaquettes at the center of the sample for the stronger pinning due to the Bean gradient, which is created by the pinning. In figure \ref{fig:4}(d,e,f) we show the plaquette fillings at $H/H_{\phi} = 2.56$ corresponding to the final peak in figure \ref{fig:2} for $F_{p}= 0.2$, $0.5$, and $1.0$, respectively. For the highest $F_{p}=1.0$ in figure \ref{fig:4}(f), all the pinning sites capture one vortex and most of the plaquettes are filled; however, in a number of locations there is an empty triangular plaquette accompanied by a doubly occupied square plaquette. At the weaker pinning strength $F_p=0.2$ in figure \ref{fig:4}(d), essentially all of the plaquettes, including the triangular ones, are filled; moreover, the weaker pinning leads to a number of unoccupied pinning sites. In these cases the depinned vortices prefer to sit in the square plaquettes, making them doubly occupied. \begin{figure} \includegraphics[width=3.5in]{Fig6.eps} \caption{ $P$, the fraction of occupied pins, vs $H/H_{\phi}$. (a) The $3^34^2$ array from figure \ref{fig:2} at $F_{p} = 0.2$, $0.3$, $0.5$, and $0.8$, from bottom right to top right. Arrows pointing to various locations on the $F_p=0.3$ curve indicate field levels where we illustrate the real-space vortex configurations in figure \ref{fig:6}. (b) The $3^2434$ array at $F_p=0.2$, $0.3$, $0.5$, and $0.8$, from bottom right to top right. } \label{fig:5} \end{figure} At $H/H_{\phi} = 2.0$ the system could in principle form an ordered state where every square plaquette is occupied while every other triangular plaquette is unoccupied; however, we do not observe such an ordered state. Instead, we find that as the field is increased from the ordered $H/H_{\phi} = 1.5$ state where only the square plaquettes are occupied, some pinned vortices are pushed out of the pinning sites when additional vortices try to occupy the triangular plaquettes. As a result, the pin occupancy actually decreases with increasing field, as shown in figure \ref{fig:5}(a) where we plot the fraction of occupied pinning sites $P$ versus $H/H_{\phi}$ for the $3^34^2$ system at four different values of $F_p$. There is a peak in $P$ just above $H/H_{\phi} = 1.0$ corresponding to the first matching field where most of the pinning sites are occupied. For weaker pinning, $F_{p} \leq 0.5$, $P$ declines from its peak but stabilizes near $H/H_{\phi} = 1.5$, as illustrated for $F_p=0.3$ in figure \ref{fig:5}(a). As $H/H_{\phi}$ increases further, $P$ begins to fall substantially when vortices start to push their way into triangular plaquettes, causing the vortices at the neighboring pinning sites to depin. Near $H/H_{\phi}=2.0$, $P$ passes through a minimum; the overall vortex configuration at $H/H_{\phi} = 2.0$ is disordered and there is no peak in $M$ at this field. 
$P$ then recovers and increases up to the ordered state at $H/H_{\phi} = 2.5$, where every square and every triangular plaquette can contain an interstitial vortex as illustrated in figure \ref{fig:4}(d,e). The field level for this state can be understood from figure \ref{fig:11}(a), where we see that a basis for the $3^34^2$ tiling has 2 pinning sites, 2 triangular plaquettes, and 1 square plaquette; if all these are singly occupied, we obtain a field level of $5/2 = 2.5$. \begin{figure} \includegraphics[width=3.5in]{Fig7.eps} \caption{ Real space images of vortex configurations for the $3^34^2$ array at $F_p=0.3$ as the field $H/H_{\phi}$ is increased from $1.5$ to $2.5$, showing lines of interstitial vortices buckling and then reforming. Open circles: pinning sites; filled dots: vortices; thick lines: guides to the eye running along rows of plaquettes and connecting their centers, which are preferred locations for interstitial vortices when all surrounding pinning sites are occupied. (a) $H/H_{\phi} = 1.535$; (b) $1.865$; (c) $2.185$; (d) $2.524$. The field levels shown correspond to the red arrows in figure \ref{fig:5}(a). } \label{fig:6} \end{figure} It is instructive to visualize the transition from $H/H_{\phi} = 1.5$ to $H/H_{\phi} = 2.5$ in real space. After the first matching field is reached and the pinning sites become occupied, interstitial vortices enter the sample between the square tiles in neat horizontal lines. This continues until every square tile in a row contains one interstitial vortex, giving the $H/H_{\phi}=1.5$ state shown in figure \ref{fig:6}(a). When further vortices attempt to enter, the lines of interstitials buckle, as shown in figure \ref{fig:6}(b,c); in this process, many pinned vortices are dislodged, and the triangular plaquettes become filled in an irregular manner. As the sample approaches the ordered $H/H_{\phi}=2.5$ state, the lines of interstitial vortices re-form along the square plaquettes, and the triangular plaquettes now also fill with lines of interstitials which zig-zag due to the alternating orientation of the triangles. This is illustrated in figure \ref{fig:6}(d). For stronger pinning $F_{p} > 0.5$, the pinned vortices do not depin when the triangular plaquettes start to become occupied around $H/H_{\phi}=2.0$, as shown for $F_p=0.8$ in figure \ref{fig:5}(a); thus, the buckling phenomenon observed for weaker pinning does not occur in this case. However, the vortex configurations around $H/H_{\phi}=2.0$ are still disordered even for strong pinning, since the triangles do not become occupied in an orderly manner. \begin{figure} \includegraphics[width=3.5in]{Fig8.eps} \caption{ (a) $P_4$, the fraction of vortices with a coordination number of 4, vs $H/H_{\phi}$ for the $3^34^2$ array in a system with $n_{p} = 2.0/\lambda^2$ and $F_{p} = 0.2$ (blue), $0.5$ (red) and $1.0$ (green). Here a peak (marked with an arrow) occurs near $H/H_{\phi} = 0.5$ for the weakest pinning. (b) $P_4$ vs $H/H_{\phi}$ for the $3^2434$ array at $n_p=2.0/\lambda^2$ and $F_{p} = 0.2$ (blue), $0.5$ (red) and $1.0$ (green); the peak from panel (a) is absent. (c) A subsection of the $F_{p} = 0.2$ system in the $3^34^2$ sample from panel (a) at $H/H_{\phi}=0.5$. Empty circles: unoccupied pinning sites; filled circles: occupied pinning sites; white squares and triangles: unoccupied square and triangular plaquettes; filled squares and triangles: square and triangular plaquettes that each contain one vortex. 
A thick line is drawn between pairs of occupied pinning sites to highlight the herringbone ordering. } \label{fig:7} \end{figure} \subsection{Submatching states} We also find vortex ordering at some submatching fields for the $3^34^2$ pinning array. This is more clearly visible in an array with higher density $n_{p} = 2.0/\lambda^2$ and small $F_{p}$. In figure \ref{fig:7}(a) we plot $P_4$, the fraction of vortices with a coordination number of four, versus $H/H_{\phi}$ for samples with $3^34^2$ pinning arrays with $F_p=1.0$, 0.5, and 0.2. We obtain the coordination number $z_i$ of each vortex using a Voronoi construction of the vortex positions, and take $P_4=N_v^{-1}\sum_{i=1}^{N_v}\delta(4-z_i)$. For $F_{p} = 0.2$ there is a peak in $P_4$ just above $H/H_{\phi} = 0.5$, marked with an arrow in figure \ref{fig:7}(a), which corresponds to an ordered vortex sub-matching configuration. This configuration is illustrated in figure \ref{fig:7}(c), where we show the locations of the vortices and pinning sites and add a thick line between pairs of occupied pinning sites to make the ordering more visible. Here, every other pinning site captures a vortex and there is an effective dimerization of the occupied pinning sites along the edges of the triangular plaquettes. The dimers are tilted $30^{\circ}$ from the $y$ axis in one row and $-30^{\circ}$ from the $y$ axis in the next row, giving a herringbone ordering of the type previously studied for dimer molecules adsorbed on triangular substrates \cite{39}. As the pinning strength increases, the dimer ordering breaks apart. For the $3^2434$ array there is no herringbone ordering, as shown by the absence of a peak in $P_4$ in figure \ref{fig:7}(b). \begin{figure} \includegraphics[width=3.5in]{Fig9.eps} \caption{ $M$ vs $H/H_{\phi}$ for the $3^2434$ array from figure \ref{fig:1}(c) for $n_p=1.0/\lambda^2$. (a) $F_{p} = 0.1$ (black), 0.2 (red), and 0.3 (green). (b) The same for $F_{p} = 0.5$ (blue), 0.8 (cyan) and 1.0 (violet). } \label{fig:8} \end{figure} \section{Snub Square ($3^2434$) Tiling} We next consider the snub square tiling or $3^2434$ pinning array illustrated in figure \ref{fig:1}(c). In figure \ref{fig:8}(a) we plot $M$ vs $H/H_{\phi}$ for this array with $F_{p} = 0.1$, $0.2$, and $0.3$, while samples with $F_p=0.5$, $0.8$, and $1.0$ are shown in figure \ref{fig:8}(b). Here a matching peak occurs at $H/H_{\phi} = 1.0$ but there are no other clearly defined peaks at the higher fillings. For the stronger pinning sample with $F_p=1.0$, the first matching peak is obscured by the initial rise of $M$ as shown in figure \ref{fig:8}(b). Since there are few additional features in $M$, we use alternative measurements to show that a variety of partially ordered vortex states can occur in this system. In particular, we consider the fraction of $n$-fold coordinated vortices $P_n$, with $n=4$, $5$, $6$, $7$, and $8$, defined in the same way as $P_4$ above. \begin{figure} \includegraphics[width=3.5in]{Fig10.eps} \caption{ $P_n$ vs $H/H_{\phi}$ for the $3^2434$ array at $F_{p} =1.0$, with $P_{4}$ (red), $P_{5}$ (green), $P_{6}$ (blue), $P_{7}$ (violet) and $P_{8}$ (cyan). (a) All vortices in the system. (b) Pinned vortices only. (c) Unpinned vortices only. Below $H/H_{\phi}=1.25$ (shaded region in panel (c)), there are too few unpinned vortices to give a clear signal.
} \label{fig:9} \end{figure} \begin{figure} \includegraphics[width=3.5in]{Fig11.eps} \caption{ Plaquette occupancy in representative strips of the samples for the $3^2434$ array with $F_p=1.0$, at field levels corresponding to peaks in figure \ref{fig:9}(c). The full width of the pinned region is shown. The coloring scheme is described in figure \ref{fig:3}. (a) $H/H_{\phi} = 1.65$, (b) $1.79$, (c) $2.08$, (d) $2.33$, and (e) $2.56$. } \label{fig:10} \end{figure} In figure \ref{fig:9}(a) we plot $P_{n}$ with $n=4$ through 8 versus $H/H_{\phi}$ for all the vortices in a $3^2434$ sample with strong pinning $F_{p} = 1.0$. The ordered states in this system can be more easily distinguished by separately measuring $P_n$ for the pinned vortices only, shown in figure \ref{fig:9}(b), and for the interstitial or unpinned vortices only, shown in figure \ref{fig:9}(c). In particular, in figure \ref{fig:9}(c) we find successive peaks in $P_{4,5,6,7,8}$ for the interstitial vortices. Since there are few to no interstitial vortices for $H/H_{\phi}<1.25$, we do not show $P_n$ for this field range in figure \ref{fig:9}(c). The first peak in $P_n$ for the interstitial vortices appears in $P_4$ just above $H/H_{\phi} = 1.5$ in figure \ref{fig:9}(c). Here, most of the pinning sites are occupied and the interstitial vortices primarily sit in the square plaquettes, as indicated in the plaquette occupancy plot in figure \ref{fig:10}(a) for $H/H_{\phi} = 1.65$. An interstitial vortex in a square plaquette has four nearest neighbors, the pinned vortices at the edges of the plaquette. As the field increases, interstitial vortices begin to occupy isolated triangular plaquettes, providing an additional nearest neighbor for the interstitial vortices in the nearby square plaquettes. This produces a peak in $P_{5}$ at $H/H_{\phi} = 1.79$ in figure \ref{fig:9}(c), where nearly all of the filled square plaquettes have one neighboring filled triangular plaquette as shown in figure \ref{fig:10}(b). At $H/H_{\phi} = 2.08$, there is a peak in $P_{6}$ in figure \ref{fig:9}(c), and the corresponding configuration in figure \ref{fig:10}(c) shows that many of the square plaquettes now have two neighboring filled triangular plaquettes. A peak in $P_7$ appears at $H/H_{\phi} = 2.33$ in figure \ref{fig:9}(c), and the configuration in figure \ref{fig:10}(d) has many square plaquettes with three neighboring filled triangular plaquettes, as well as a few doubly occupied square plaquettes. Finally, at $H/H_{\phi} = 2.56$ there is a peak in $P_{8}$ in figure \ref{fig:9}(c). The corresponding configurations in figure \ref{fig:10}(e) indicate that most of the plaquettes are now filled, with some empty triangular plaquettes and some doubly occupied square plaquettes. We can understand the field levels at which the peaks in $P_n$ occur by considering figure \ref{fig:11}(b), where we illustrate both the pinning site basis and the plaquette basis which generate the $3^2434$ tiling. In a ground state configuration at $H/H_{\phi} = 1.0$, there are four vortices occupying the four pins comprising the basis, with no interstitial vortices. For the state where each square plaquette is occupied by one interstitial vortex, there are a total of 6 vortices per basis compared to 4 pins; so the field for this state is $H/H_{\phi} = 6/4 = 1.5$. Each of the interstitial vortices sitting in a square plaquette has 4 nearest neighbors, the pinned vortices at the corners of the square. 
As $H/H_{\phi}$ increases further, the triangular plaquettes become occupied one by one, with each new triangular plaquette interstitial providing an additional nearest neighbor for a nearby square plaquette interstitial vortex. This process gives a total of $7$, 8, 9, or 10 vortices per basis leading to fields of $H/H_{\phi} = 1.75$, 2.0, 2.25, and $2.5$, respectively. Thus, the peaks in $P_4$, $P_5$, $P_6$, $P_7$, and $P_8$ for the unpinned vortices in figure \ref{fig:9}(c) arise simply from the coordination numbers of the interstitial vortices occupying the square plaquettes. The actual peaks from the simulation shown in figure \ref{fig:9}(c) occur at $H/H_{\phi} = 1.65$, $1.79$, $2.08$, $2.33$, and $2.56$, which are close to the ideal field values. The small shift in the actual values is due to the flux gradient and also to the occasional double occupancy of square plaquettes found in figure \ref{fig:10}(d,e). The peaks in $P_{n}$ for the pinned vortices shown in figure \ref{fig:9}(b) follow immediately from the behavior of the interstitial vortices described above. We note in particular that $P_{5}$ becomes nearly one at $H/H_{\phi} = 2.5$, since almost all the triangular and square plaquettes are occupied as shown in figure \ref{fig:10}(e). Since each pinning site is surrounded by $5$ plaquettes, each pinned vortex is surrounded by $5$ interstitial nearest neighbors at this filling. \section{Smaller Pinning Densities} \begin{figure} \includegraphics[width=3.5in]{Fig12.eps} \caption{ $M$ vs $H/H_{\phi}$ for samples with $n_p=0.5/\lambda^2$ and $F_{p} = 0.3$. (a) In the $3^34^2$ array, there are additional peaks at $H/H_{\phi} = 3.5$ and $4.0$. Insets: the ordered vortex arrangements that appear at these additional peaks, indicated by arrows. In the insets, the open circles are pinning sites and the filled circles are vortices. (b) In the $3^2434$ array, there is a peak at $H/H_{\phi} = 3.75$; the inset shows the ordered vortex arrangement that forms at this field. In the inset, the open circles are pinning sites and the filled circles are vortices. } \label{fig:12} \end{figure} For the case of a pinning density of $n_{p} = 0.5/\lambda^2$ and smaller $F_{p}$ we can readily observe matching peaks up to $H/H_{\phi} = 5.0$. In figure \ref{fig:12}(a) we plot $M$ vs $H/H_{\phi}$ for the $3^34^2$ array at $F_{p} = 0.3$ and in figure \ref{fig:12}(b) we show the same quantity for the $3^2434$ pinning array. In the $3^34^2$ array, we find the same peaks shown in figure \ref{fig:2} at $H/H_{\phi} = 1.0$, $1.5$, and $2.5$, and we observe additional peaks just below $H/H_{\phi} = 3.5$ and $4.0$. In the insets of figure \ref{fig:12}(a) we plot the vortex and pinning site locations at the two new peaks at $H/H_{\phi}=3.5$ and $4.0$. Here, all of the pinning sites are occupied and an ordered crystalline arrangement of vortices occurs. By directly counting the number of vortices per pinning site in the ordered part of the sample, we confirm that these are the $3.5$ and $4.0$ fillings. For the $3^2434$ array, figure \ref{fig:12}(b) shows a peak in $M$ for $H/H_{\phi} = 3.75$ corresponding to the ordered vortex array illustrated in the inset. \section{Summary} We have investigated ordering and pinning of vortices interacting with Archimedean pinning arrays. The arrays are constructed using the vertices of Archimedean tilings of squares and triangles, and we specifically examine the elongated triangular tiling and the snub square tiling.
For the elongated triangular or $3^34^2$ array, we find that beyond the first matching field, the interstitial vortices first fill the square plaquettes and subsequently fill the triangular plaquettes, producing pronounced peaks in the magnetization at noninteger matching fields along with an absence of peaks at certain higher integer matching fields. The competition between filling the more confined triangular plaquettes with single vortices or doubly occupying the larger square plaquettes can lead to a decrease in the fraction of occupied pins as the field is increased, and can produce disordered vortex states at certain integer matching fields. We also find novel vortex orderings at submatching fillings, such as herringbone configurations for weak pinning. For the snub square or $3^2434$ array above the first matching field, we find that by analyzing the plaquette fillings we can correctly predict the appearance of a sequence of partially ordered states, where interstitial vortices first occupy square plaquettes and then fill the triangular plaquettes. At higher fields we observe additional commensuration effects at noninteger matching fields which correspond to ordered vortex structures. Our results can be tested in experiments on Archimedean tiling pinning arrays in superconductors, as well as with colloids interacting with optical trap arrays of similar geometries. \section{Acknowledgments} This work was carried out under the auspices of the NNSA of the U.S. DoE at LANL under Contract No. DE-AC52-06NA25396. \section*{References}
\section{Introduction} \label{one} \noindent The conceptual basis of Quantum Mechanics has been the subject of heated debates from the beginning of the theory to the present day. The EPR \cite{EPR} and Bohm \cite{Bohm} thought experiments set the context of the discussion on the nature of physical reality, as understood according to Quantum Theory. Later, the work of Bell \cite{Bell} allowed this discussion, previously philosophical, to become subject to empirical verification. Experimental results arising from Bell's work \cite{Clauser, Aspect, GHZ} strongly suggest the impossibility for a local hidden variable theory to reproduce the predictions of Quantum Mechanics. The study of the foundations of the theory has also led to the analysis of quantum systems as information carriers. The features that distinguish quantum systems from classical ones can be used to transmit and process information in ways that are impossible for systems that obey classical laws. Typical examples of this fact are quantum teleportation \cite{Teleportation} and super-dense coding \cite{SuperDenseCoding}. In order to characterise the novel properties of quantum mechanical systems in a precise manner, there has been a substantial amount of effort devoted to defining quantities that measure those aspects of the systems that are relevant to the transmission and processing of information (cf., e.g., \cite{Bengtsson, Holevo}). This analysis has been important both from the fundamental and the applied points of view. Recently, there has been considerable interest in studying the behaviour of quantum systems under relativistic transformations from a quantum information perspective. Given the non-local character of quantum mechanics, experiments which produce non-local correlations have been analysed in a special-relativistic framework \cite{Czachor, Friis, Hacyan1, Hacyan2, PeresTernoRMP, SaldanhaVedralNJP, SaldanhaVedralPRA}. Furthermore, in view of the important applications of quantum systems for the transmission of information, the effects that relativistic velocities between emitter and receiver have on the capacity of quantum channels to transmit classical or quantum information have been studied \cite{BradlerCastro-RuizNahmad-Achar}, showing that an appropriate Lorentz boost increases both the classical and quantum capacities of a communication channel. Moreover, it can be used to obtain a positive quantum capacity channel from a channel that does not allow quantum communication for observers at rest with respect to each other. In a similar spirit, the effects of Lorentz transformations on the quantum entanglement of systems of pairs of particles have been analysed \cite{Alsing, Friis, Castro-RuizNahmad-Achar}. Despite the fact that the total entanglement of the system, i.e. entanglement of one particle with respect to the other, is conserved, entanglement in the spin sector of a two-particle system is modified by the relative velocity between observers. In~\cite{Castro-RuizNahmad-Achar} it was shown that, for particles propagating with opposite momenta and a Lorentz boost in a perpendicular direction to the momenta, there exists a set of spin states that remain invariant under the Lorentz transformation, thereby conserving entanglement with respect to all partitions of the Hilbert space of the system. In this paper we consider the transformation of a two-particle state in the general case and analyse closely the characteristics of invariant subspaces.
In section~\ref{two} we briefly review how Lorentz transformations affect elementary particles with momentum and spin degrees of freedom and in section~\ref{three} we focus on two-particle systems and identify the group that acts on the spin sector of the state. We then analyse the case where the momenta of the particles are correlated and show that there are subspaces of spin states that are closed under Lorentz transformations. In section~\ref{four} we give a closed formula for the spin-momentum entanglement of an EPR-like pair of arbitrary spin under a Lorentz boost perpendicular to the propagation direction. We present our conclusions in section~\ref{five}. \section{One particle} \label{two} \noindent We briefly recall the transformation law for momentum and spin eigenstates under the Lorentz group. For a particle of mass $M>0$ and spin $s$, the state of momentum $p$ and spin projection along the $z$-axis $\sigma$ is defined as \begin{equation} \label{WignerBasis} \ket{p, \sigma} = U(L_p)\ket{k, \sigma}, \end{equation} where $k = (M,0,0,0)^T$ is the four-momentum of the particle in its rest frame, $\sigma = -s, -s+1, \ldots, s$, and $U(L_p)$ is the spin-$s$ unitary representation of the pure boost $L_p$ that takes $k$ to $p$. Explicitly \cite{Polyzou}, \begin{equation} L_p = \frac{1}{M} \begin{bmatrix} p^0 && \mathbf{p}^T \\ \mathbf{p} && M\delta_{ij}+\frac{p_i p_j}{M+p^0} \end{bmatrix}, \end{equation} where $\mathbf{p}$ denotes the spatial part of $p$ and Latin indices with values $1$, $2$, $3$ are used as spatial indices. The state $\ket{k, \sigma}$ is an eigenstate of both the momentum operator, $P$, and the total angular momentum in the $z$ direction, $J_z$: \begin{align} P \ket{k, \sigma} =& k \ket{k, \sigma} \nonumber \\ J_z \ket{k, \sigma} =& \sigma \ket{k, \sigma}. \end{align} It is important to note that the transformation that takes $k$ to $p$ is not unique. In fact, $L_p R$, where $R$ is any three-dimensional rotation, has the same effect, since $R$ acts trivially on $k$. Different choices for $R$ lead to different definitions of momentum and spin states. We also wish to point out that, for arbitrary $p$, the state $\ket{p, \sigma}$ is no longer an eigenstate of $J_z$, so that $\sigma$ is {\it not} the label of the spin of the particle in the reference frame where it has momentum $p$ but, rather, in the reference frame where it is at rest. This remark is important in, for example, the context of spin measurements performed by a Stern-Gerlach apparatus. To illustrate this point, suppose that we prepare a spin-$1/2$ particle in the state $\ket{k,\sigma = +z}$, and perform a quantum test with a Stern-Gerlach magnet oriented in the $z$-direction in the reference frame of the particle. The test consists in checking whether the particle has spin in the direction $+z$. In the rest frame of the particle it is absolutely certain that the particle will pass the test; however, if the particle has momentum $p$ in the reference frame of the magnet, there is a non-zero probability that it will be deflected in the $-z$ direction. Thus, a (normalised) state like \begin{equation} \label{MomentumSuperposition} a\ket{p_1,+z}+b\ket{p_2,+z}, \end{equation} cannot be interpreted as a spin eigenstate in this context, and taking a partial trace of the momentum degrees of freedom can lead to inconsistencies, as shown in \cite{SaldanhaVedralNJP}.
This does not mean that the reduced spin density matrix obtained by tracing out the momentum degrees of freedom is useless for making physical predictions, as it gives the correct expectation values for suitably defined spin operators~\cite{Taillebois}. In our case, for example, the state (\ref{MomentumSuperposition}) is certain to pass a test corresponding to the projector $\ketbra{p_1,+z}{p_1,+z}+\ketbra{p_2,+z}{p_2,+z}$ and can therefore be understood as having spin $+z$ in this context. In this work we analyse spin-reduced density matrices formed by the partial trace method and write formally the Hilbert space of the particle as a tensor product of spin and momentum subspaces, $H = H_p\otimes H_s$, with the understanding that the reduced density matrix has to be interpreted in terms of adequate quantum tests. Consider now an observer whose reference frame is obtained by means of the Lorentz transformation $\Lambda^{-1}$ from the original reference frame. For this observer, the state $\ket{p, \sigma}$ is transformed under the spin-$s$ unitary representation of $\Lambda$, that is, $\ket{p, \sigma}\longrightarrow U(\Lambda)\ket{p, \sigma}$. We now find the explicit form of $U(\Lambda)$. From (\ref{WignerBasis}) and the group representation property it follows that \begin{align} U(\Lambda)\ket{p,\sigma} = & U(\Lambda)\,U(L_p) \ket{k, \sigma} \nonumber \\ = & U(L_{\Lambda_p})\,U(L^{-1}_{\Lambda_p}\Lambda L_p) \ket{k, \sigma} \nonumber \\ = & U(L_{\Lambda_p})\,U(W(\Lambda, \mathbf{p})) \ket{k, \sigma}. \end{align} The transformation $W(\Lambda, \mathbf{p})$ is a pure rotation, since it leaves the rest frame four-momentum $k$ invariant: $W(\Lambda, \mathbf{p})\,k = k$. It is called the Wigner rotation corresponding to the Lorentz transformation $\Lambda$ and momentum $p$. In general, for any type of particle, Wigner rotations form a group, called the little group corresponding to momentum $k$. For the case of massive particles Wigner's little group is the rotation group $SO(3)$. As a consequence of the above equation, the momentum part of the state is transformed from $p$ to $\Lambda p$, and the spin part of the state changes under the action of $SO(3)$, according to \begin{equation} \label{Transformation} U(\Lambda)\ket{p,\sigma} = \sum_{\sigma^\prime} D_{\sigma^\prime \sigma}(W(\Lambda, \mathbf{p})) \ket{\Lambda p, \sigma^\prime}, \end{equation} where $D_{\sigma^\prime \sigma}(W(\Lambda, \mathbf{p}))$ is a spin-$s$ representation of the rotation $W(\Lambda, \mathbf{p})$. When considering two particle states it will be useful to look at transformation (\ref{Transformation}) with a different notation, separating the spin and the momentum parts of the system. We thus write \begin{equation} U(\Lambda)\ket{p,\sigma} = \ket{\Lambda p}U_s(\Lambda,\mathbf{p}) \ket{\sigma}, \end{equation} where $U_s(\Lambda,\mathbf{p})\ket{\sigma} = \sum_{\sigma^\prime} D_{\sigma^\prime \sigma}(W(\Lambda, \mathbf{p})) \ket{\sigma^\prime}$. Considering again the state (\ref{MomentumSuperposition}) we see that it transforms as \begin{equation} a\ket{p_1,+z}+b\ket{p_2,+z} \longrightarrow a\ket{\Lambda p_1}U_s(\Lambda,\mathbf{p}_1)\ket{+z} + b\ket{\Lambda p_2}U_s(\Lambda,\mathbf{p}_2)\ket{+z}. \end{equation} Since we cannot write the final state as a tensor product of spin and momentum sectors we say that the Lorentz transformation has entangled the spin and the momentum. It is also said that Lorentz transformations do not preserve the tensor product structure of the Hilbert space. 
Of course, this spin-momentum entanglement is of a different nature from the usual particle-particle entanglement and should be understood in terms of concrete measurements. In our example, given by state (\ref{MomentumSuperposition}), we see that the transformed state is no longer an eigenstate of a projector of the form \begin{equation} \label{Projector} \ketbra{\Lambda p_1,+n}{\Lambda p_1,+n}+\ketbra{\Lambda p_2,+n}{\Lambda p_2,+n} \end{equation} for any direction $n$, in contrast to state transformations given by pure rotations, where $U_s(\Lambda,\mathbf{p}_1)$ and $U_s(\Lambda,\mathbf{p}_2)$ are equal. Therefore, due to spin-momentum entanglement, the particle has a non-zero probability of failing a test for the operator (\ref{Projector}) for every possible value of $n$. This is reflected by the fact that the reduced spin density matrix is no longer a pure state when tracing out momenta. We want to make clear that the situation just described poses no problem for relativistic invariance: we are talking about two different experiments rather than a single experiment seen by two different inertial observers. \section{Two particles} \label{three} \noindent Consider now a pair of spin-$s$ distinguishable massive particles with momenta $p_1$ and $p_2$ according to the reference frame of some inertial observer. The state of the system in this reference frame is $\ket{p_1, \sigma_1}\otimes\ket{p_2, \sigma_2}$. This state is a basis element of the complete Hilbert space, which we decompose into two possible partitions \begin{align} H =& \,H_{A}\otimes H_{B} \nonumber\\ =& \,(H_{A}\otimes H_{B})_{p}\otimes(H_{A}\otimes H_{B})_{s} \nonumber \\ =& \, H_p \otimes H_s, \end{align} where $A$ and $B$ denote our two particles and $p$ and $s$ stand for the momentum and spin degrees of freedom, respectively. For the second inertial observer described in the previous section, the two-particle system is described by \begin{align} \label{Transformation2P} U(\Lambda)\ket{p_1, \sigma_1}\otimes\ket{p_2, \sigma_2} =& U_1(\Lambda)\ket{p_1, \sigma_1}\otimes U_2(\Lambda)\ket{p_2, \sigma_2} \nonumber \\ =& \sum_{\sigma_1^\prime \sigma_2^\prime} D_{\sigma_1^\prime \sigma_1}(W(\Lambda, \mathbf{p}_1))D_{\sigma_2^\prime \sigma_2}(W(\Lambda, \mathbf{p}_2))\ket{\Lambda p_1, \sigma_1^\prime}\otimes\ket{\Lambda p_2, \sigma_2^\prime} \nonumber \\ =& \sum_{\sigma_1^\prime \sigma_2^\prime} \left(D(W(\Lambda, \mathbf{p}_1))\otimes D(W(\Lambda, \mathbf{p}_2))\right)_{\sigma_1^\prime \sigma_2^\prime, \, \sigma_1 \sigma_2} \ket{\Lambda p_1, \sigma_1^\prime}\otimes\ket{\Lambda p_2, \sigma_2^\prime}. \end{align} The most important thing to note about this transformation is that it acts with a unitary operator in each particle subspace. By linearity, this will be true for an arbitrary initial state. As a consequence, entanglement between particles will always be conserved. This fact is fundamental for the consistency between quantum mechanics and special relativity as, for example, a violation of Bell's inequalities in one reference frame implies a violation of the inequalities in any other frame. In order for this to be true, entanglement between particles must be a relativistic invariant. Having said that, we now analyse the transformation in the two-particle spin space. From eq.(\ref{Transformation2P}) we see that it is given by the tensor product of the representations of the Wigner rotations corresponding to each particle, i.e.
we may rewrite eq.(\ref{Transformation2P}) as \begin{equation} U(\Lambda)\ket{p_1, p_2}\ket{\sigma_1, \sigma_2} = \ket{\Lambda p_1, \Lambda p_2}U_s(\Lambda, \mathbf{p}_1, \mathbf{p}_2)\ket{\sigma_1, \sigma_2}, \end{equation} where $U_s(\Lambda, \mathbf{p}_1, \mathbf{p}_2) = D(W(\Lambda, \mathbf{p}_1))\otimes D(W(\Lambda, \mathbf{p}_2))$. From this result we can now find the group acting on the spin part of the system for a given pair of momenta $\mathbf{p}_1$ and $\mathbf{p}_2$. Since the transformation $D(W(\Lambda, \mathbf{p}_1))\otimes D(W(\Lambda, \mathbf{p}_2))$ depends on two $SO(3)$ elements, $W(\Lambda, \mathbf{p}_1)$ and $W(\Lambda, \mathbf{p}_2)$, it is a representation of the Cartesian product $SO(3) \times SO(3)$. In general, for two groups $\mathcal{G}_1$ and $\mathcal{G}_2$ and two representations $D_1$ and $D_2$ acting on two vector spaces $V_1$ and $V_2$, that is, $D_1: \mathcal{G}_1 \longrightarrow GL(V_1)$ and $D_2: \mathcal{G}_2 \longrightarrow GL(V_2)$, we can construct the exterior tensor product representation \begin{equation} D_1 \boxtimes D_2: \mathcal{G}_1 \times \mathcal{G}_2 \longrightarrow GL(V_1 \otimes V_2), \end{equation} defined by \begin{equation} (g_1, g_2) \mapsto D_1(g_1) \otimes D_2(g_2), \end{equation} for all $g_1 \in \mathcal{G}_1$ and $g_2 \in \mathcal{G}_2$. In the case described above, $\mathcal{G}_1 = \mathcal{G}_2 = SO(3)$ and the representation acting on the spin space $H_s$ is an exterior product of the representations described in the last section. We can ask for the invariant subspaces of these representations in order to find a natural division of the spin space for the physical situation described above. However, it is a known result about exterior tensor product representations that $D_1 \boxtimes D_2$ is irreducible if and only if $D_1$ and $D_2$ are irreducible. Since, by assumption, we have elementary particles, both representations of $SO(3)$ corresponding to $\mathbf{p}_1$ and $\mathbf{p}_2$ are irreducible. As a consequence, the two-particle spin subspace has no nontrivial invariant subspaces. The situation is different, however, when both momenta are correlated. In the case where, say, $\mathbf{p}_2$ is a linear function of $\mathbf{p}_1$, i.e. $\mathbf{p}_2 = f_{lin}(\mathbf{p}_1)$, the representation of $SO(3)\times SO(3)$ becomes effectively a representation of $SO(3)$, since the elements $(W(\Lambda,\mathbf{p}),W(\Lambda, f_{lin}(\mathbf{p})))$ are in one-to-one correspondence with $W(\Lambda, \mathbf{p}) \in SO(3)$. In this case the representation will be in general reducible and there will be nontrivial subspaces of the two-particle spin space that transform amongst themselves under Lorentz boosts. In an EPR-like scenario, a pair of particles is created with $0$ total linear momentum, so that the functional relation between $\mathbf{p}_1$ and $\mathbf{p}_2$ is simply $\mathbf{p}_2 = -\mathbf{p}_1$. For this scenario, and a Lorentz boost $\Lambda$ in a given direction, the underlying group is actually $SO(2)$, since the Wigner rotation is along the same axis for both particles. \section{Spin-momentum entanglement (Results)} \label{four} \subsection{General Results} \noindent Let us closely analyse the situation described in the last paragraph of Section \ref{three}. We first state the physical scenario briefly, following previous treatments \cite{Alsing, Friis,Castro-RuizNahmad-Achar}, and then study the behaviour of spin-momentum entanglement in the light of the invariant subspaces that arise due to the correlation between momenta.
Let $p = (p^0, \mathbf{p})$ (and $-p$) be characterised by the rapidity $\mathbf{\eta}$, which is a three-vector that points in the direction of $\mathbf{p}$ and satisfies $M \sinh\vert \mathbf{\eta} \vert = \vert \mathbf{p} \vert$. Let $\Lambda$ be a pure boost perpendicular to the propagation direction and parametrised by the rapidity $\mathbf{\xi}$, so that $\tanh\vert \mathbf{\xi} \vert = \mathbf{v}$, with $v$ equal to the relative velocity between the reference frames. In this case the Wigner rotation $W(\Lambda,\mathbf{p})$ is along the axis defined by $\mathbf{\eta}\times\mathbf{\xi}$ and has a rotation angle $\Omega$, given by \begin{equation} \tan \Omega = \frac{\sinh\vert \mathbf{\eta} \vert \sinh\vert \mathbf{\xi} \vert}{\cosh\vert \mathbf{\eta} \vert + \cosh\vert \mathbf{\xi} \vert}. \end{equation} The angle $\Omega$ is called the Wigner angle. The Wigner rotation corresponding to the opposite value of the momentum, $W(\Lambda,-\mathbf{p})$, is equal to the previous one but replacing $\Omega$ by $-\Omega$. Then the rotation axes corresponding to $W(\Lambda,\mathbf{p})$ and $W(\Lambda, -\mathbf{p})$ are the same, and the underlying group that acts on the spin space is $SO(2)$. The group $SO(2)$ has one-dimensional irreducible representations of the form $e^{\mathrm{i} m \Omega}$. The representation induced in the two-particle spin-space by the Lorentz transformation must be reducible, since this space is $(2s+1)^2$-dimensional. Based on this idea, we label the spin states in $H_s$ according to their transformation properties under representations of `rotations' of the form $W(\Lambda, \mathbf{p})\otimes W(\Lambda, -\mathbf{p})$. More precisely, we define the spin state $\ket{m,\alpha}$ as an element which carries the $m$ representation of $SO(2)$, i.e., \begin{equation} \label{InvariantBasis} U(\Lambda)\ket{p,-p}\ket{m,\alpha} = e^{\mathrm{i}m\Omega}\ket{\Lambda p, -\Lambda p}\ket{m, \alpha}. \end{equation} The label $\alpha$ is used in order to take into account the different times that the same irreducible representation of $SO(2)$, labeled by $m$, appears in the spin space. The number of different $\alpha$'s for a given $m$, that is, the multiplicity of the representation $m$, is calculated in \cite{Castro-RuizNahmad-Achar} and shown to be $a_m = 2 s +1 - \vert m \vert$. Therefore the transformation $W(\Lambda, \mathbf{p})\otimes W(\Lambda, -\mathbf{p})$ is diagonal in the $\{ \ket{m, \alpha} \}$ basis. Moreover, since the transformation that diagonalises $W(\Lambda, \mathbf{p})\otimes W(\Lambda, -\mathbf{p})$ is unitary, $\{ \ket{m, \alpha} \}$ is an orthonormal set, \begin{equation} \braket{m, \alpha}{m^\prime, \alpha^\prime} = \delta_{m m^\prime}\delta_{\alpha \alpha^\prime}. \end{equation} We now quantify how the Lorentz boost $\Lambda$ entangles the spin and the momentum of the most general spin $s$ initial spin-state. Since the entanglement change is $0$ for initial momentum states that are not in a superposition \cite{Friis}, we take the initial momentum state to be in the homogeneous superposition $\left( \ket{p,-p}+ \ket{-p,p} \right)/\sqrt{2}$. Our total initial state is then \begin{equation} \label{InitialState} \ket{\psi_i} = \frac{\ket{p,-p} + \ket{-p,p}}{\sqrt{2}} \sum_{m = -2s}^{2s} \sum_{\alpha = 1}^{a_m} c_{m \alpha}\ket{m,\alpha}. 
\end{equation} Using equation (\ref{InvariantBasis}) in equation (\ref{InitialState}) we find the final state $\ket{\psi_f} = U(\Lambda)\ket{\psi_i}$ to be \begin{align} \ket{\psi_f} = \sum_{m = -2s}^{2s} \frac{ e^{\mathrm{i}m\Omega} \ket{\Lambda p,- \Lambda p} + e^{-\mathrm{i}m\Omega} \ket{-\Lambda p,\Lambda p} }{\sqrt{2}} \sum_{\alpha = 1}^{a_m} c_{m\alpha} \ket{m,\alpha}. \label{FinalState} \end{align} From expression (\ref{FinalState}) we can see immediately that, for the case of a single value of $m$, i.e. for coefficients $c_{m\alpha} = \delta_{m m_0}c_\alpha$, the spin sector of the system factors out and remains unchanged. There is therefore no entanglement between spin and momentum and, as a consequence, the entanglement between single-particle spins remains invariant as well, a fact that might be important for future applications of quantum information protocols in relativistic scenarios. Let us now calculate, as a measure of the entanglement between momentum and spin, the linear entropy with respect to the momentum-spin partition of the Hilbert space \cite{Friis} \begin{equation} \label{LinearEntropy} E=\sum_{i} \left(1-Tr\left(\rho_{i}^2\right)\right), \end{equation} where $\rho_i$ is obtained by tracing out the momentum or spin degrees of freedom from the total density matrix $\rho = \ketbra{\psi_f}{\psi_f}$. To simplify calculations we consider sharp momentum distributions approximated by plane-wave states so that, effectively, $\braket{p_i}{p_j}=\delta_{ij}$. Using equations (\ref{FinalState}) and (\ref{LinearEntropy}) we find that the linear entropy after the Lorentz boost is given by \begin{equation} \label{Entanglement} E = 2\left(1-\sum_{m = -2s}^{2s}\sum_{m^\prime = -2s}^{2s} \,\sum_{\alpha = 1}^{a_m} \,\,\sum_{\alpha^\prime = 1}^{a_{m^\prime}} \vert c_{m \alpha}\vert^2 \vert c_{m^\prime \alpha^\prime}\vert^2 \cos^2\left((m-m^\prime)\Omega\right)\right). \end{equation} \subsection{Examples: two parametrisations} In order to analyse the behaviour of several spin states at a time, several parametrisations of initial spin states were proposed in references \cite{Friis} and \cite{Castro-RuizNahmad-Achar}. The parametrisations were defined such that for every definite value of the parameters there was a certain initial spin state. In this way, entanglement was calculated as a function of the parameters. Nevertheless, none of the spin state parametrisations proposed was `natural', in the sense that states from different parametrisations mix under Lorentz boosts. For example, it could seem natural to choose to parametrise the spin states according to the (non-relativistic) total spin quantum number $s$ (corresponding to the operator $S^2 = S_x^2+S_y^2+S_z^2$), so that states with $s = 0$ belong to one parametrisation, states with $s = 1$ to another, and so on. However, the label $s$ is of course not conserved when applying Lorentz boosts on the states and different parametrisations mix. Now, according to the invariant basis states of equation (\ref{InvariantBasis}), all the spin states that transform under the same $SO(2)$ representation stay invariant when acting on them with a Lorentz transformation. Therefore the `natural' sets of states to choose are the invariant subspaces labeled by different values of $m$. For these sets of states the question of how spin and momentum get entangled after the Lorentz boost is trivial: entanglement is zero for all linear combinations of the form $\sum_\alpha c_\alpha \ket{m, \alpha}$.
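As a simple numerical illustration of these statements (our own sketch, with illustrative rapidities and coefficients), the snippet below evaluates the Wigner angle and the linear entropy of equation (\ref{Entanglement}) for a few spin states, confirming in particular that $E$ vanishes whenever a single value of $m$ is populated.
\begin{verbatim}
import numpy as np
from itertools import product

def wigner_angle(eta, xi):
    # tan(Omega) = sinh|eta| sinh|xi| / (cosh|eta| + cosh|xi|), boost perpendicular to p
    return np.arctan(np.sinh(eta) * np.sinh(xi) / (np.cosh(eta) + np.cosh(xi)))

def linear_entropy(c, Omega):
    # c maps (m, alpha) -> coefficient of |m, alpha>; evaluates eq. (Entanglement)
    norm = sum(abs(v) ** 2 for v in c.values())
    c = {k: v / np.sqrt(norm) for k, v in c.items()}
    s = sum(abs(c[k]) ** 2 * abs(c[kp]) ** 2 * np.cos((k[0] - kp[0]) * Omega) ** 2
            for k, kp in product(c, repeat=2))
    return 2.0 * (1.0 - s)

Omega = wigner_angle(eta=1.0, xi=2.0)
print(linear_entropy({(0, 1): 1.0, (0, 2): 1.0j}, Omega))   # single m        -> 0
print(linear_entropy({(1, 1): 1.0, (-1, 1): 1.0}, Omega))   # |m - m'| = 2    -> sin^2(2 Omega)
print(linear_entropy({(2, 1): 1.0, (0, 1): 1.0}, Omega))    # same difference -> same value
\end{verbatim}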
It follows that all the information about how momentum and spin get entangled lies in the differences $m-m^\prime$ of the representation labels, as equation (\ref{Entanglement}) shows. We now illustrate the spin-momentum entanglement for superpositions of different values of $m$. Figure \ref{ParamA} shows the entanglement after the Lorentz boost for the states given by the parametrisation \begin{equation} \label{Eq_ParamA} \ket{\psi_s} = \sin\theta \cos\phi \ket{m = 1}+ \sin\theta \sin\phi \ket{m = 0} + \cos\theta \ket{m = -1}, \end{equation} where we have ignored the value of $\alpha$ since it plays no role in entanglement. \begin{figure}[h] \centering \scalebox{0.37}{\includegraphics{ParamA_OmegaPiOctavos}} \quad \scalebox{0.37}{\includegraphics{ParamA_OmegaPiCuartos}} \\ \scalebox{0.37}{\includegraphics{ParamA_OmegaTresPiOctavos}} \quad \scalebox{0.37}{\includegraphics{ParamA_OmegaPiMedios}} \caption{Spin-momentum entanglement for the states parametrised by eq. (\ref{Eq_ParamA}). } \label{ParamA} \end{figure} Note that, so far, no reference has been made to the total spin of the particles. In fact, the representations $m = 1, \ 0, \ -1$ are present for every $s \geq 1/2$, so that Figure~\ref{ParamA} can describe particles of arbitrary spin. For the case $s = 1/2$, the invariant states are given explicitly by \begin{subequations} \begin{align} \ket{\psi^+} && U_s(\Lambda, \mathbf{p}, -\mathbf{p})\ket{\psi^+} =& \ket{\psi^+} \\ \ket{\phi^-} && U_s(\Lambda, \mathbf{p}, -\mathbf{p})\ket{\phi^-} =& \ket{\phi^-} \\ \ket{\chi^+} && U_s(\Lambda, \mathbf{p}, -\mathbf{p})\ket{\chi^+} =& e^{\mathrm{i}\Omega}\ket{\chi^+} \\ \ket{\chi^-} && U_s(\Lambda, \mathbf{p}, -\mathbf{p})\ket{\chi^-} =& e^{-\mathrm{i}\Omega}\ket{\chi^-}, \end{align} \end{subequations} where \begin{subequations} \begin{align} \ket{\psi^+} =& \frac{1}{\sqrt{2}}(\ket{+z,-z}+\ket{-z,+z}) \\ \ket{\psi^-} =& \frac{1}{\sqrt{2}}(\ket{+z,-z}-\ket{-z,+z}) \\ \ket{\phi^+} =& \frac{1}{\sqrt{2}}(\ket{+z,+z}+\ket{-z,-z}) \\ \ket{\phi^-} =& \frac{1}{\sqrt{2}}(\ket{+z,+z}-\ket{-z,-z}) \end{align} \end{subequations} are the well-known Bell states, and \begin{subequations} \begin{align} \ket{\chi^+} =& \frac{1}{\sqrt{2}}(\ket{\phi^+}+\mathrm{i}\ket{\psi^-}) \\ \ket{\chi^-} =& \frac{1}{\sqrt{2}}(\ket{\phi^+}-\mathrm{i}\ket{\psi^-}). \end{align} \end{subequations} The figure shows the spin-momentum entanglement for increasing Wigner angles. Entanglement is $0$ for vanishing $\Omega$ and increases gradually as $\Omega$ grows, as can be seen in the case $\Omega = \pi/8$. Note how the invariant states $\ket{m=1}$ ($\theta = \pi/2$, $\phi = 0$), $\ket{m=0}$ ($\theta = \pi/2$, $\phi = \pi/2$) and $\ket{m=-1}$ ($\theta = 0$) always have $0$ entanglement. For the spin-$1/2$ case, analysed in \cite{Friis}, the state $\ket{m=0}$ corresponds to either $\ket{\psi^+}$ or $\ket{\phi^-}$, which are maximally entangled states. These spin states remain exactly the same before and after the Lorentz boost and are therefore ideal candidates for transmitting quantum information in a situation where we want both observers to describe the same spin state, regardless of the rapidity of the particles $\vert \mathbf{\eta} \vert$ or the strength of the boost $\vert \mathbf{\xi} \vert$. On the other hand, for situations where we want the state to change and therefore need the spin and momentum to get entangled, we note that the states for which entanglement is greatest, that is, those corresponding to maxima in Figure~\ref{ParamA}, depend strongly on the Wigner angle.
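The surfaces in Figure~\ref{ParamA} can be regenerated directly from equation (\ref{Entanglement}). The following sketch is our own illustration (variable names are ours; the label $\alpha$ is again ignored): it scans the $(\theta,\phi)$ plane of the parametrisation (\ref{Eq_ParamA}) and locates the maxima discussed next.
\begin{verbatim}
import numpy as np

def linear_entropy(cs, Omega):
    # cs maps m -> amplitude of |m>; implements equation (Entanglement)
    total = sum(abs(cs[m])**2 * abs(cs[mp])**2 * np.cos((m - mp) * Omega)**2
                for m in cs for mp in cs)
    return 2.0 * (1.0 - total)

Omega = np.pi / 8
thetas = np.linspace(0.0, np.pi, 181)
phis = np.linspace(0.0, 2.0 * np.pi, 361)
E = np.array([[linear_entropy({ 1: np.sin(t) * np.cos(p),
                                0: np.sin(t) * np.sin(p),
                               -1: np.cos(t)}, Omega)
               for p in phis] for t in thetas])
i, j = np.unravel_index(np.argmax(E), E.shape)
print(thetas[i], phis[j], E[i, j])   # position and value of the maximum
\end{verbatim}
For $\Omega = \pi/8$ the scan finds the maximum at $\theta \simeq \pi/4$, $\phi \simeq 0$ (and at the symmetry-related points), in agreement with the discussion that follows.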
For $\Omega = \pi/8$ and $\pi/4$ there are maxima corresponding to the states $\ket{\phi^+} = (\ket{\chi^+}+\ket{\chi^-})/\sqrt{2}$ (with $\theta = \pi/4$, $\phi = 0$), and $\ket{\psi^-} = (\ket{\chi^+}-\ket{\chi^-})/\sqrt{2}$ (with $\theta = 3\pi/4$, $\phi = 0$), while for $\Omega = \pi/2$, corresponding to the limit of the speed of light, both of these states have $0$ entanglement. For the spin-$1$ case there are more options to choose from as representatives for the different $m$-representations. For example, the states \begin{subequations} \begin{align} \ket{\psi_1} =& \frac{1}{\sqrt{3}}\left( \ket{1,1} -\ket{0,0} +\ket{-1,-1} \right) \label{Entangled1}\\ \ket{\psi_2} =& \frac{1}{\sqrt{3}}\left( \ket{1,-1} +\ket{0,0} +\ket{-1,1} \right) \label{Entangled2}\\ \ket{\psi_3} =& \frac{1}{2}\left( \ket{1} +\ket{-1} \right)\left( \ket{1} +\ket{-1} \right) \label{Separable}, \end{align} \end{subequations} where $1$, $0$ and $-1$ denote the spin projection along the $z$-axis, carry the representation $m=0$ of $SO(2)$. Note that (\ref{Entangled1}) and (\ref{Entangled2}) are maximally entangled spin-$1$ states, while (\ref{Separable}) is separable. For the case $m = \pm 1$ and $s =1$ we have, for example, the states \begin{subequations} \begin{align} \ket{\beta_1} =& \frac{1}{\sqrt{2}}\left(\ket{1,-1}-\ket{-1,1}\right) \\ \ket{\beta_2} =& \frac{1}{2}\left(\ket{0}\left(\ket{1}-\ket{-1}\right)+\left(\ket{1}-\ket{-1}\right)\ket{0}\right), \end{align} \end{subequations} which transform amongst themselves according to \begin{equation} U_s(\Lambda,\mathbf{p},-\mathbf{p})\begin{bmatrix} \ket{\beta_1} \\ \ket{\beta_2} \end{bmatrix} = \begin{bmatrix} \cos\Omega & \sin\Omega \\ -\sin\Omega & \cos\Omega \end{bmatrix} \begin{bmatrix} \ket{\beta_1} \\ \ket{\beta_2} \end{bmatrix}. \end{equation} As a consequence, the linear combinations $(\ket{\beta_1}+\mathrm{i}\ket{\beta_2})$ and $(\ket{\beta_1}-\mathrm{i}\ket{\beta_2})$ carry the representations $m = 1$ and $m = -1$, respectively. From these examples we note that, while the analysis of spin-momentum entanglement is independent of the spin of the particles and of the particular realisation of the different states that carry $SO(2)$ representations, such realisations have to be taken into account when studying entanglement between pairs of spins, since the character of the states (maximally entangled, partially entangled, separable) may differ strongly from case to case. Nevertheless, we can safely say that if the spin state remains unchanged, as is the case for superpositions that transform under the same representation, then the entanglement between spins will also remain unchanged, no matter its value. We now analyse the entanglement for a general superposition of two different representations, labeled by $m$ and $n$, without any reference to the spin of the particles. Figure \ref{ParamB} shows spin-momentum entanglement for the general superposition \begin{equation} \label{Eq_ParamB} \cos\theta \ket{m}+ e^{\mathrm{i}\phi} \sin\theta \ket{n}, \end{equation} where we have again ignored the label $\alpha$. We choose the three values $3$, $4$ and $5$ for $m-n$, since the cases $m-n = 0, \ 1, $ and $2$ are already illustrated as particular cases in Figure~\ref{ParamA}.
\begin{figure}[h] \centering \scalebox{0.37}{\includegraphics{ParamB_OmegaPiSextos}} \quad \scalebox{0.37}{\includegraphics{ParamB_OmegaPiTercios}} \quad \scalebox{0.37}{\includegraphics{ParamB_OmegaPiMedios}} \caption{Spin-momentum entanglement for the superposition states in (\ref{Eq_ParamB}). \textbf{Solid line} (red online): $m-n = 3$. \textbf{Dashed line} (blue online): $m-n = 4$. \textbf{Dot-dashed line} (purple online): $m-n = 5$. See text for details.} \label{ParamB} \end{figure} The first thing to note directly from equation (\ref{Entanglement}) is that, since entanglement depends only on the squared amplitudes of the state, the relative phase $\phi$ is irrelevant and spin-momentum entanglement is only a function of $\theta$. Explicitly, entanglement takes the simple form \begin{equation} E = \sin^2 2\theta \, \sin^2\left[(m-n)\Omega\right]. \end{equation} Again, as in the case of equation (\ref{Eq_ParamA}), entanglement vanishes for $\ket{m}$ ($\theta =0$) and for $\ket{n}$ ($\theta =\pi/2$). As we can see in Figure~\ref{ParamB}, maxima always occur at odd integer multiples of $\theta = \pi/4$, for all values of $m-n$. This corresponds to the states \begin{equation} \label{MaximumEntanglement} \frac{1}{\sqrt{2}}(\ket{m}+e^{\mathrm{i}\phi}\ket{n}). \end{equation} Spin states of this form are the ones that exhibit the maximum spin-momentum entanglement and can therefore be used in situations where the sender wants to transmit information encoded in the spin degrees of freedom to a particular receiver, who has a definite relative velocity with respect to the sender's reference frame. Since entanglement depends on the boost velocity, the sender can prepare the state in such a way that the desired amount of entanglement, or the desired boosted spin state, is achieved only for the particular velocity of the receiver. Moreover, entanglement also depends on $m-n$. For a relatively weak boost, $\Omega = \pi/6$, the states with $m-n=3$ have the maximal amount of entanglement (top left of Figure~\ref{ParamB}), while for $\Omega = \pi/3$ these states have $0$ entanglement for all values of $\theta$ (top right of Figure~\ref{ParamB}). For this last value of the Wigner angle, the cases $m-n =4$ and $m-n = 5$ are equivalent. When the Wigner angle corresponds to the limit of the speed of light, $\Omega = \pi/2$ (bottom of Figure~\ref{ParamB}), the state with $m-n = 4$ has no spin-momentum entanglement, while the cases $m-n = 3$ and $m-n = 5$ behave in the same way and have maximal entanglement for states of the form given by eq. (\ref{MaximumEntanglement}). \section{Conclusions} \label{five} We have analysed the transformation properties of two-particle systems under Lorentz transformations from a quantum information perspective. We focused on the transformation corresponding to the spin degrees of freedom and showed that, in general, the spin subspace carries an exterior tensor product representation of $SO(3)\times SO(3)$. For arbitrary momenta this representation is irreducible but, interestingly, the representation becomes reducible for correlated momenta, since the underlying group that acts on the spin space in this case becomes $SO(3)$. The states that span irreducible subspaces of $SO(3)$ have interesting properties since they transform amongst themselves under a Lorentz boost and are therefore good candidates for encoding quantum information in relativistic settings.
For an EPR-like case, where the momenta of the particles are equal and opposite, the situation simplifies even more since the group that acts on the spin space is $SO(2)$, which has one-dimensional representations of the form $e^{\mathrm{i}m\Omega}$. We have analysed the transformation induced by the Lorentz boost in the spin space using a basis formed by states that carry representations of $SO(2)$ labeled by $m$. The transformation of the spin states, and therefore their entanglement properties, are independent of the total spin of the particles, and a general treatment in terms of invariant states is possible. Superpositions of spin states that transform according to the same value of $m$ remain unchanged after the boost, and therefore the initial entanglement between individual spins is invariant. The problem with encoding information into the spin and momentum degrees of freedom is that, since they become entangled as seen by different relativistic observers, the decoding of said information is not trivial (or perhaps not even possible). However, linear superpositions of states that carry the same representation of $SO(2)$ remain invariant under Lorentz boosts, thus offering the opportunity to encode/decode information regardless of the observer. On the other hand, one may wish to encode information into a state that only a particular observer will be able to decode; the state may then be prepared so that only the observer with the appropriate relative velocity receives the state with the desired amount of entanglement and is able to decode it. In this case superpositions of states that carry different representations of $SO(2)$ are appropriate. As the entanglement between spin and momentum is most naturally analysed in terms of superpositions of states with different values of $m$, the basis of states used in this work to study transformations of spin states under Lorentz boosts is a good candidate for building quantum communication protocols in relativistic scenarios. \section*{Acknowledgements} This work was partially supported by DGAPA-UNAM under project IN101614. \section*{References}
\section{Introduction} Fluorescence molecular tomography (FMT) is finding several applications in 3D visualization and quantification of the distribution of molecular targets within biological tissue\cite{ntziachristos2006fluorescence}. In particular, FMT has received substantial interest in small animal imaging for applications such as studying tumor physiology and for pharmaceutical research\cite{biswal2011imaging,stuker2011fluorescence}. In FMT imaging, fluorescence molecules are first injected into biological tissue. External illumination sources are used to excite the fluorescence molecules. The photons emitted by the excited fluorescence molecules are collected by detectors at the tissue surface. The objective in FMT is to use these surface measurements to reconstruct the 3D distribution of fluorescence molecules within the tissue. The reconstruction problem in FMT is known to be highly ill-posed, and is sensitive to noise and modeling errors such as discretization\cite{Zhou2011discre,Zhao:14}. Over the past two decades, various reconstruction methods for FMT have been proposed\cite{arridge2009optical}. Tikhonov regularization is a popular regularization applied to the FMT reconstruction problem. The regularized problem can be solved iteratively with methods such as the Newton method and the algebraic reconstruction technique (ART)\cite{arridge2009optical,ntziachristos2001}. However, such regularization tends to over-smooth the reconstructed images, leading to loss of localized features during reconstruction\cite{ye2014fast}. More recently, reconstruction methods that exploit sparsity of the fluorescence distribution have been studied\cite{Han:10,Zhao:14,dutta2012joint,lee2011compressive}. In these methods, $\ell_0$ or $\ell_1$ regularization on the fluorescence distribution is applied to enforce sparsity while performing the reconstruction. These regularization problems can be solved with methods such as greedy algorithms and iterative thresholding methods\cite{bruckstein2009sparse}. The noise in the data acquired using FMT systems is Poisson distributed\cite{MP:MP0209}. For this noise distribution, reconstruction techniques based on maximum-likelihood expectation-maximization (MLEM) have yielded reliable results, especially in nuclear medicine imaging \cite{vardi1985statistical,lange1987theoretical,llacer1993statistical,lewitt2003overview,jha2013joint}. The MLEM technique has several advantages, such as accurately modeling the Poisson noise distribution in the acquired data, constraining the activity values to be non-negative without the need for a specific regularizer, and ensuring the conservation of the total number of photons across multiple iterations. In optical tomography, several studies have applied MLEM for reconstruction in bioluminescence tomography\cite{Alexandrakis2005,slavine2006iterative,jiang2007image}. In \cite{cao2010bayesian}, MLEM has also been applied for FMT reconstruction. However, the MLEM technique typically suffers from slow convergence for optical tomography modalities, with thousands of iterations and a large amount of time per iteration being required\cite{qi2006iterative,ahn2008fast,jiang2007image}. This makes MLEM a time-consuming method and thus not very practical\cite{Alexandrakis2005,cao2010bayesian}. As a result, MLEM has not been widely used for image reconstruction in optical tomography. The performance of MLEM is influenced by different factors. An important factor is the initial estimate provided to the algorithm. Conventionally, MLEM starts with a uniform initial estimate, as we explain later.
However, different initializations for MLEM yield different reconstruction results\cite{Barrett:04,ma2013performance}. In this work, we studied the use of sparse reconstruction to initialize the MLEM approach. The overall motivation for this approach is that the sparse reconstruction method would account for the sparsity of the fluorescence distribution, while the MLEM would accurately model the Poisson noise in the FMT system. Moreover, this combined approach is able to exploit several inherent advantages of these two techniques, as we describe below. Our method yields reliable and improved results in comparison to pure sparse reconstruction as well as uniformly initialized MLEM methods. Preliminary versions of this work have been presented previously\cite{zhu:17,Zhu:18}. We begin by describing our method in the next section. \section{Methods} \subsection{The forward model and reconstruction problem in FMT} The forward model in FMT is described by a pair of coupled equations. The first equation describes the propagation of excitation photons from a source at location $\mathbf{r}_s$ to location $\mathbf{r}$ in the medium and the second one describes the propagation of emitted fluorescence photons from location $\mathbf{r}$ to a detector at location $\mathbf{r}_d$, where $\mathbf{r}_s$, $\mathbf{r}$ and $\mathbf{r}_d$ are three-dimensional vectors. These coupled equations are given by: \begin{equation} \phi_{ex}(\mathbf{r})=\int_{\Omega}{g_{ex}(\mathbf{r}_s, \mathbf{r})s(\mathbf{r}_s)d\mathbf{r}_s}, \end{equation} and \begin{equation} \phi_{em}(\mathbf{r}_d)=\int_{\Omega}{g_{em}(\mathbf{r}, \mathbf{r}_d)x(\mathbf{r})\phi_{ex}(\mathbf{r})d\mathbf{r}}, \end{equation} where $\phi_{ex}(\mathbf{r})$ and $\phi_{em}(\mathbf{r}_d)$ are the excitation light field at location $\mathbf{r}$ and the emission light field at detector location $\mathbf{r}_d$, respectively, $g_{ex}(\mathbf{r}_s, \mathbf{r})$ is the Green's function of excitation light at location $\mathbf{r}$ due to a source at location $\mathbf{r}_s$, $g_{em}(\mathbf{r}, \mathbf{r}_d)$ denotes the Green's function of emission light detected by a detector at location $\mathbf{r}_d$ due to the fluorescence source at location $\mathbf{r}$, $x(\mathbf{r})$ is the fluorescence yield at location $\mathbf{r}$, and $\Omega$ denotes the object support. If we discretize $\Omega$ into $N$ voxels, we obtain the linear matrix equation for the forward model: \begin{equation} \mathbf{\Phi}=\mathbf{G} \mathbf{x}, \label{fwd} \end{equation} where \begin{equation*} \mathbf{G}= \begin{bmatrix} g_{em,1}^1\phi_{ex,1}^1 & \dots & g_{em,N}^1\phi_{ex,N}^1\\ \vdots & &\vdots \\ g_{em,1}^{N_d}\phi_{ex,1}^1 & \dots & g_{em,N}^{N_d}\phi_{ex,N}^1 \\ g_{em,1}^1\phi_{ex,1}^2 & \dots & g_{em,N}^1\phi_{ex,N}^2\\ \vdots & &\vdots \\ g_{em,1}^{N_d}\phi_{ex,1}^{N_s} & \dots & g_{em,N}^{N_d}\phi_{ex,N}^{N_s} \\ \end{bmatrix} \end{equation*} is the sensitivity matrix of the system, with $g_{em,n}^{d}$ denoting the emission Green's function between voxel $n$ and detector $d$, and $\phi_{ex,n}^{s}$ the excitation field at voxel $n$ due to source $s$. Here $\mathbf{\Phi}$ is an $M\times 1$ vector denoting the detector measurements, $\mathbf{x}$ is an $N\times 1$ vector representing the unknown fluorescence yield, $N_s$ and $N_d$ are the numbers of sources and detectors, respectively, and $M=N_s\times N_d$ is the total number of measurements. Due to the limited number of sources and detectors, typically $M<N$ in FMT. Modeling the measurement noise denoted by the $M$-dimensional vector $\mathbf{n}$, Eq. (\ref{fwd}) becomes: \begin{equation} \mathbf{\Phi}=\mathbf{G} \mathbf{x}+\mathbf{n}.
\label{fwdn} \end{equation} In FMT, the data collected by the detectors is corrupted by Poisson noise\cite{MP:MP0209}. The reconstruction problem in FMT is to reconstruct $\mathbf{x}$ given the sensitivity matrix $\mathbf{G}$ and the detector measurements $\mathbf{\Phi}$. In the next section, we derive the MLEM-based reconstruction technique that models this noise distribution accurately. \subsection{Modeling Poisson noise in the reconstruction} The likelihood function for Poisson distributed data is: \begin{equation} l(\mathbf{x}|\mathbf{\Phi})=\prod_{m=1}^M{\exp\left[-(\mathbf{G}\mathbf{x})_m\right]}\frac{(\mathbf{G}\mathbf{x})_m^{\phi_m}}{\phi_m!}, \end{equation} where $(\mathbf{G}\mathbf{x})_m$ and $\phi_m$ denote the $m^{th}$ elements of the vectors $\mathbf{G}\mathbf{x}$ and $\mathbf{\Phi}$, respectively. Taking the logarithm of the likelihood function yields: \begin{equation} L(\mathbf{x}|\mathbf{\Phi})=\sum_{m=1}^M\left\{-(\mathbf{G}\mathbf{x})_m+\phi_m\ln\left[(\mathbf{G}\mathbf{x})_m\right]-\ln\phi_m!\right\}. \end{equation} The first order derivative of the log-likelihood function is given by \begin{equation} \frac{\partial}{\partial x_n}L(\mathbf{x}|\mathbf{\Phi})=\sum_{m=1}^M\left\{-G_{mn}+\frac{\phi_m}{(\mathbf{G}\mathbf{x})_m}G_{mn}\right\}. \end{equation} Setting $\frac{\partial}{\partial x_n}L(\mathbf{x}|\mathbf{\Phi})=0$ yields \begin{equation} 1=\frac{1}{\sum_{m=1}^M{G_{mn}}}\sum_{m=1}^M{\frac{\phi_m}{(\mathbf{G}\mathbf{x})_m}G_{mn}}. \end{equation} Multiplying both sides by $x_n$ and replacing $x_n$ with a sequence of estimates $\hat{x}_n^k$ yields the fixed-point iteration: \begin{equation} \hat{x}_n^{k+1}=\hat{x}_n^{k}\frac{1}{s_n}\sum_{m=1}^M{\frac{\phi_m}{(\mathbf{G} \hat{\mathbf{x}}^k)_m}G_{mn}}, \label{equ:mlem} \end{equation} where $s_n=\sum_{m=1}^M{G_{mn}}$. This is referred to as the MLEM technique\cite{Barrett:04}. The MLEM iteration starts from an initial estimate $\hat{\mathbf{x}}^0$, and the results of this technique can be influenced by this initial estimate\cite{Barrett:04}. Typically, the initial estimate is uniform, where all the elements in $\hat{\mathbf{x}}^0$ are set to a constant\cite{kontaxakis1998maximum,chang2004regularized}. However, with this estimate, MLEM updates all the voxels in every iteration, increasing the computational requirements. In Eq. (\ref{equ:mlem}), note that $\hat{x}^k_n$ will always be zero if $\hat{x}^0_n=0$, due to the multiplicative nature of the technique. Thus, the zero elements can be excluded from $\hat{\mathbf{x}}^0$ during the MLEM iteration. The matrix $\mathbf{G}$ used for the MLEM iteration can then be formulated with only the columns corresponding to non-zero elements in $\hat{\mathbf{x}}^0$. This reduces the size of the matrices in the reconstruction problem and accelerates the computation. In this context, in many FMT applications, fluorescence molecules tend to concentrate in a small target region. Thus, if we could exploit this property, we could generate a sparse initial estimate, which would allow us to accelerate the MLEM technique. Such a technique would inherently exploit the sparsity-based prior information in FMT as well as model the Poisson noise in FMT accurately. Inspired by this, we developed a sparse reconstruction method and used the output from this method as the initial estimate for MLEM. In the next section, we describe the method we used to obtain the sparse initial estimate for MLEM.
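For concreteness, the multiplicative update of Eq. (\ref{equ:mlem}) can be sketched in a few lines of Python. This is a schematic illustration with our own variable names, not the implementation used for the experiments reported below; note how an initial estimate with zero entries automatically restricts the update to its support.
\begin{verbatim}
import numpy as np

def mlem(G, Phi, x0, n_iter=100, eps=1e-12):
    # Multiplicative MLEM iteration:
    #   x_n <- x_n * (1/s_n) * sum_m G_mn * Phi_m / (G x)_m,  s_n = sum_m G_mn.
    # G   : (M, N) sensitivity matrix with non-negative entries
    # Phi : (M,)   measured photon counts
    # x0  : (N,)   initial estimate; voxels that start at zero stay at zero,
    #              so a sparse x0 effectively restricts the update to its support.
    s = G.sum(axis=0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        proj = G @ x                          # forward projection (G x)_m
        ratio = Phi / np.maximum(proj, eps)   # phi_m / (G x)_m
        x *= (G.T @ ratio) / np.maximum(s, eps)
    return x
\end{verbatim}
A uniform initialization corresponds to \texttt{x0 = np.ones(N)}; the sparsity-initialized variant described in the next subsection simply passes the non-negative output of the sparse reconstruction (with negative elements set to zero) as \texttt{x0}.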
\subsection{Sparse reconstruction and preconditioning of sensitivity matrix} To provide the sparse initial estimate for MLEM, the following minimization problem can be formulated based on Eq. (\ref{fwdn}): \begin{equation} \begin{aligned} \min_{\mathbf{x}}{\|\mathbf{x}\|_0} & & \text{such that} & & \|\mathbf{\mathbf{\Phi}}-\mathbf{\mathbf{G}\mathbf{x}}\|_2\leq\epsilon. \end{aligned} \label{equ:l0} \end{equation} While directly solving this problem is computationally complex, Eq. (\ref{equ:l0}) can be approximately solved with greedy algorithms or convex relaxation techniques\cite{bruckstein2009sparse}. The theory of compressed sensing (CS) provides the conditions under which such approximate solvers are valid. Further, approaches based on singular value decomposition (SVD) can be applied to the sensitivity matrix to improve sparse reconstruction in FMT\cite{jin2012preconditioning,shi2013greedy,jin2014light,yao2015wide}. This technique is known as preconditioning of the sensitivity matrix. Here, we follow the truncated singular value decomposition (TSVD) approach described in \cite{shi2013greedy} as the preconditioning method. First, expressing the matrix $\mathbf{G}$ in terms of its singular vectors and singular values using the SVD, Eq. (\ref{fwdn}) becomes: \begin{equation} \mathbf{\Phi}=\mathbf{U} \mathbf{\Sigma} \mathbf{V}^T \mathbf{x}+\mathbf{n}, \label{svd} \end{equation} where $\mathbf{U}$ and $\mathbf{V}$ are $M\times M$ and $N\times N$ unitary matrices whose columns are the left-singular vectors and right-singular vectors, respectively, and $\mathbf{\Sigma}$ is a diagonal matrix whose diagonal elements are the singular values. By multiplying both sides of Eq. (\ref{svd}) with $\mathbf{\Sigma}^{-1}\mathbf{U}^T$, we could potentially use $\mathbf{V}^T$ as the new sensitivity matrix. However, since the reconstruction problem in FMT is highly ill-posed, the inversion of the small singular values contained in $\mathbf{\Sigma}$ will cause large noise amplification. To address this issue, we keep only the $K$ largest singular values of the matrix $\mathbf{\Sigma}$ and discard the rest, before performing the inversion of $\mathbf{\Sigma}$. The corresponding columns in $\mathbf{U}$ and $\mathbf{V}$ are also discarded. This process is referred to as truncation. Then Eq. (\ref{svd}) becomes \begin{equation} \mathbf{\Phi}=\mathbf{U}_t \mathbf{\Sigma}_t \mathbf{V}_t^T \mathbf{x}+\mathbf{n}, \label{tsvd} \end{equation} where the sizes of $\mathbf{U}_t$, $\mathbf{\Sigma}_t$ and $\mathbf{V}_t$ are $M\times K$, $K\times K$ and $N\times K$, respectively. Since small singular values are discarded, usually $K<M$. Applying $\mathbf{M}=\mathbf{\Sigma}_t^{-1}\mathbf{U}_t^T$ to both sides of Eq. (\ref{tsvd}) yields \begin{equation} \mathbf{M} \mathbf{\Phi}=\mathbf{V}_t^T \mathbf{x}+\mathbf{M} \mathbf{n}. \label{equ:mask} \end{equation} Denoting $\mathbf{y}=\mathbf{M}\mathbf{\Phi}$, $\mathbf{A}=\mathbf{V}_t^T$ and $\mathbf{n}'=\mathbf{M} \mathbf{n}$, Eq. (\ref{equ:mask}) can be written as \begin{equation} \mathbf{y}=\mathbf{A} \mathbf{x}+\mathbf{n}'. \label{equ:afterprecond} \end{equation} We now solve Eq.~(\ref{equ:afterprecond}) as a sparse reconstruction problem. More specifically, we implemented a convex relaxation technique in this work. Our objective is to minimize the $\ell_1$ norm of the vector $\mathbf{x}$. Thus the sparse reconstruction problem is posed as \begin{equation} \min_{\mathbf{x}}{\|\mathbf{x}\|_1} \quad \textrm{such that} \quad \|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2\leq\epsilon.
\label{equ:sparse} \end{equation} We applied the fast iterative shrinkage-thresholding algorithm (FISTA) for solving the minimization problem in Eq. (\ref{equ:sparse})\cite{BeckFISTA}. The output of this method is then used as the initial estimate for the MLEM technique (a schematic sketch of this initialization procedure is given at the end of the Results section). Note that the results from sparse reconstruction might contain negative elements. As we explained previously, MLEM constrains the activity values to be non-negative. To enable this, the negative elements in $\hat{\mathbf{x}}^0$ are set to zero. \subsection{Experiments} \begin{figure} \centering \subfloat[]{\includegraphics[width=0.5\textwidth]{cubic_setup}} \hspace{0.5 cm} \subfloat[]{\includegraphics[width=0.4\textwidth]{xt_crosec}} \\ \caption{(a) The experimental setup of the cube phantom. (b) Cross section at $y=2.5\textrm{ cm}$ of the simulated phantom. } \label{fig:cube_setup} \end{figure} To validate the proposed method, different simulation experiments were conducted. Three different reconstruction methods were implemented for comparison, namely, (a) a pure sparsity-based reconstruction method that used TSVD in conjunction with FISTA, (b) the MLEM method with a uniform initial estimate of the image (more specifically, the initial activity values in all the voxels were set to unity) and (c) the MLEM method with an initialization that was obtained using the method described in (a). We will refer to these methods as the pure sparsity-based reconstruction method, uniformly initialized MLEM and sparsity-initialized MLEM, respectively. In the first set of experiments, a $5\times 5\times 5\textrm{ cm}^3$ cubic phantom was considered, as shown in Fig.~\ref{fig:cube_setup}(a). The phantom was discretized into $20\times 20\times 20$ voxels. The absorption coefficient of the phantom was set to $\mu_a=0.05\textrm{ cm}^{-1}$ and the reduced scattering coefficient was set to $\mu_s'=10\textrm{ cm}^{-1}$. $20$ sources and $144$ detectors were placed on the side surfaces. This configuration generated $2880$ measurements. Two cylindrical fluorescence bars with a radius of $0.375\textrm{ cm}$ and a length of $2.5\textrm{ cm}$ each were inserted into the phantom. The fluorescence intensity in these bars was set to unity. The cross section of the phantom at $y=2.5\textrm{ cm}$ is shown in Fig.~\ref{fig:cube_setup}(b). The Green's function in the forward model of FMT was computed using the Monte Carlo method, where a large number of photons were simulated to generate approximately noiseless measurements\cite{Fang:09}. The measurements were then scaled to different levels and the corresponding Poisson noise was applied using a Poisson distributed pseudo-random number generator. This yielded detector measurements with different signal-to-noise ratio (SNR) values. \begin{table}[ht!] \centering \caption{Optical properties of the digital mouse phantom\cite{strangman2003factors}} \begin{tabular}{c | c | c | c} \hline Tissue type & Brain & Skull & Skin \\ $\mu_s'(\textrm{ cm}^{-1})$ & 12.5 & 10.0 & 8.0 \\ $\mu_a(\textrm{ cm}^{-1})$ & 0.178 & 0.101 & 0.159\\ \hline \end{tabular} \label{table1} \end{table} \begin{figure} \centering \subfloat[]{\includegraphics[width=0.5\textwidth]{moby_setup}} \hspace{0.3 cm} \subfloat[]{\includegraphics[width=0.4\textwidth]{moby_truth}} \\ \caption{(a) The experimental setup of the mouse phantom. 
(b) Cross section of the digital mouse phantom at $z=16\textrm{ mm}$.} \label{fig:moby_phan} \end{figure} \begin{figure} \centering\includegraphics[width=\textwidth]{MLEM_iteration2} \caption{Cross sections at $y=2.5\textrm{ cm}$ reconstructed by MLEM with different iteration numbers $n$ for the cube phantom. SNR=$18$ dB. The reconstructed images are from MLEM with a uniform initial estimate for the top row and MLEM with a sparse initial estimate for the bottom row.} \label{fig:cube_crosec} \end{figure} To study the effect of the MLEM iteration number on reconstruction performance, $1000$ iterations were performed for MLEM with different initializations, with the SNR set to $18$ dB and the truncation number $K$ set to $760$. The region of interest (ROI) corresponded to the region occupied by the fluorescence bars. The rest of the region was defined as background. For the quantitative study, different figures of merit were computed. Specifically, we computed the absolute bias in the estimated uptake in the ROI and the background, the spatial variance within the pixels in the ROI and the background, and the root mean square error (RMSE) for the entire image. The mean of the fluorescence uptake within the ROI, denoted by $\theta_{\textrm{ROI}}$, is defined as \begin{figure} \centering\includegraphics[width=\textwidth]{cube_iter2} \caption{Quantitative results of different reconstruction methods as functions of iteration number for the cube phantom. (a) Plot of ROI bias vs. number of iterations. (b) Plot of ROI spatial variance vs. number of iterations. (c) Plot of ROI spatial variance vs. ROI bias. (d) Plot of background bias vs. number of iterations. (e) Plot of background variance vs. number of iterations. (f) Plot of RMSE vs. number of iterations.} \label{fig:cube_quant} \end{figure} \begin{equation} \theta_{\textrm{ROI}}=\frac{1}{N_R}\sum_{r=1}^{N_R}{x_r}, \end{equation} where $r$ denotes the $r^{th}$ voxel in the ROI, and $N_R$ is the number of voxels in the ROI. Similarly, the background mean, denoted by $\theta_{\textrm{B}}$, is defined as \begin{equation} \theta_{\textrm{B}}=\frac{1}{N_B}\sum_{b=1}^{N_B}{x_b}, \end{equation} where $b$ denotes the $b^{th}$ voxel in the background region, and $N_B$ is the number of voxels in the background. Then the ROI absolute bias, denoted by $b_{\textrm{ROI}}$, was computed as: \begin{equation} b_{\textrm{ROI}}=\frac{1}{R}\sum_{k=1}^R|\theta_{\textrm{ROI},k}-\theta^{true}_{\textrm{ROI},k}|, \end{equation} where $k$ denotes the $k^{th}$ noise realization, $\theta^{true}_{\textrm{ROI},k}$ denotes the true mean uptake in the ROI for the $k^{th}$ noise realization, and $R$ is the total number of noise realizations. The background absolute bias, denoted by $b_{\textrm{B}}$, was computed as: \begin{equation} b_{\textrm{B}}=\frac{1}{R}\sum_{k=1}^R|\theta_{\textrm{B},k}-\theta^{true}_{\textrm{B},k}|, \end{equation} where $\theta^{true}_{\textrm{B},k}$ denotes the true mean uptake in the background for the $k^{th}$ noise realization. We also computed the spatial variance within the pixels in the ROI (denoted by $\sigma^2_{ROI}$) and in the background (denoted by $\sigma^2_{B}$) as follows: \begin{equation} \sigma^2_{ROI}=\frac{1}{R(N_R-1)}\sum_{k=1}^R{\sum_{r=1}^{N_R}{(x_{r, k}-\theta_{\textrm{ROI},k})^2}}. \end{equation} \begin{equation} \sigma^2_{B}=\frac{1}{R(N_B-1)}\sum_{k=1}^R{\sum_{b=1}^{N_B}{(x_{b, k}-\theta_{\textrm{B},k})^2}}. 
\end{equation} The RMSE over the entire 3D image was computed as follows: \begin{equation} \textrm{RMSE}=\frac{1}{R}\sum_{k=1}^R\sqrt{\frac{\sum_{i=1}^N{(x_{i, k}-x^{true}_{i, k})^2}}{\sum_{i=1}^N{(x^{true}_{i, k})^2}}}\times 100\%, \end{equation} where the subscript $k$ denotes the $k^{th}$ noise realization and the subscript $i$ denotes the $i^{th}$ voxel. \begin{figure} \centering\includegraphics[width=\textwidth]{cube_SNR2} \caption{Quantitative results of different reconstruction methods as functions of SNR for the cube phantom. (a) Plot of ROI bias vs. SNR. (b) Plot of ROI variance vs. SNR. (c) Plot of background bias vs. SNR. (d) Plot of background variance vs. SNR. (e) Plot of RMSE vs. SNR.} \label{fig:cube_SNR} \end{figure} In this and all the other experiments in this paper, $100$ noise realizations were used to compute the various figures of merit. To study the sensitivity of our method to noise, experiments were conducted with the SNR ranging from $5$ dB to $40$ dB, with a step size of $5$ dB. In the second set of experiments, we conducted simulation studies with a digital mouse phantom\cite{Segar2004}. Three fluorescence targets were placed in the mouse brain. Two of them had a radius of $0.8 \textrm{ mm}$ and the third had a radius of $1.2\textrm{ mm}$. The optical properties of the mouse head are listed in Table \ref{table1}. The whole brain was discretized into $2942$ voxels. $48$ sources and $51$ detectors were placed at the surface of the mouse head, as shown in Fig.~\ref{fig:moby_phan}(a). The cross section of the phantom at $z=16\textrm{ mm}$ is shown in Fig.~\ref{fig:moby_phan}(b). First, $1000$ iterations were performed for MLEM with uniform and sparse initialization to study the effect of the iteration number on MLEM performance. The SNR was set to $18$ dB. The truncation number $K$ was set to 120. Next, the quantitative performance of the pure sparse reconstruction, sparsity-initialized MLEM and uniformly initialized MLEM methods at different noise levels was evaluated. The SNR value ranged from $5$ dB to $40$ dB, with a step size of $5$ dB. The selection of the truncation number plays an important role in the quality of the reconstructed image acquired from sparse reconstruction\cite{jin2014light,yao2015wide}. For this reason, we also studied the effect of the truncation number on the reconstruction results of the pure sparse reconstruction method and the proposed sparsity-initialized MLEM method. To compare our proposed method with the pure sparse reconstruction method, we conducted experiments with different truncation numbers $K$. $450$ iterations were used for MLEM with the sparse initial estimate. For the quantitative study, the RMSE was computed as a function of the truncation number. The experiments were conducted for two noise levels, namely SNR=40 dB and SNR=20 dB. \section{Results} \subsection{Uniform cube phantom} Fig.~\ref{fig:cube_crosec} shows cross sections reconstructed by MLEM with different iteration numbers. For sparsity-initialized MLEM, iteration number $n=0$ corresponds to the case of pure sparse reconstruction. The fluorescence intensity in all figures was normalized to the range of $[0, 1]$. The computation time required by MLEM with different initializations for 1000 iterations is provided in Table \ref{table2}. It can be observed that sparsity-initialized MLEM is about 8 times faster than uniformly initialized MLEM. \begin{table}[ht!] 
\centering \caption{Computation time required by MLEM for 1000 iterations} \begin{tabular}{c | c} \hline Method & Time (s) \\ \hline Sparsity-initialized MLEM & 5 \\ Uniformly initialized MLEM & 39\\ \hline \end{tabular} \label{table2} \end{table} Fig.~\ref{fig:cube_quant} shows the quantitative results as a function of iteration number. We observe from these plots that sparsity-initialized MLEM converges in a smaller number of iterations. Sparsity-initialized MLEM has lower ROI bias, background bias, background spatial variance, and image RMSE. We also computed the variance of the mean ROI and the mean background uptakes, and found that these were much lower (less than $1\%$) compared to the bias. Thus, we do not show these results here. From Fig.~\ref{fig:cube_quant}(f), we notice that sparsity-initialized MLEM reached its lowest RMSE after only 50 iterations, but for uniformly initialized MLEM, the lowest RMSE was obtained after 800 iterations. Based on this result, we chose 50 iterations for sparsity-initialized MLEM and 800 iterations for uniformly initialized MLEM for the first set of experiments with different SNR values. The plots of the quantitative results for the different reconstruction methods at different SNR values are shown in Fig.~\ref{fig:cube_SNR}. We again observe that sparsity-initialized MLEM leads to lower ROI bias, background bias, background spatial variance, and image RMSE for all noise levels. \begin{figure} \centering\includegraphics[width=\textwidth]{moby_iter} \caption{Quantitative results of different reconstruction methods as functions of iteration number for the digital mouse phantom. (a) Plot of ROI bias vs. number of iterations. (b) Plot of ROI spatial variance vs. number of iterations. (c) Plot of ROI spatial variance vs. ROI bias. (d) Plot of background bias vs. number of iterations. (e) Plot of background variance vs. number of iterations. (f) Plot of RMSE vs. number of iterations.} \label{fig:moby_quant} \end{figure} \subsection{Digital mouse phantom} Fig.~\ref{fig:moby_quant} shows the quantitative performance of the different reconstruction methods as a function of iteration number. We observe that sparsity-initialized MLEM achieves lower ROI bias, background bias and RMSE. It was observed that for sparsity-initialized MLEM and uniformly initialized MLEM, 450 and 900 iterations, respectively, yielded the minimum RMSE. Thus, these values were chosen for the two methods for subsequent experiments. The quantitative performance of the different reconstruction methods at different noise levels is shown in Fig.~\ref{fig:moby_SNR}. The sparsity-initialized MLEM method shows better performance for ROI bias, background bias and RMSE compared to the other two methods. Fig.~\ref{fig:moby_crosecprecond} shows the cross sections reconstructed by pure sparse reconstruction and MLEM with the sparse initial estimate for different truncation numbers. From Fig.~\ref{fig:moby_crosecprecond}, we notice that for small truncation numbers, pure sparse reconstruction generates blurry images. As the truncation number increases, the resolution improves, but the background noise also increases due to the amplification of noise during preconditioning. For truncation numbers larger than 550, the signal is totally overwhelmed by the noise. As a comparison, the proposed method is able to largely reduce the background noise as the truncation number increases. The RMSE as a function of the truncation number is plotted in Fig.~\ref{fig:moby_prec}. The sparsity-initialized MLEM leads to lower RMSE for both noise levels. 
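For completeness, the sparse initialization procedure described above (TSVD preconditioning of the sensitivity matrix followed by an $\ell_1$-regularized solve and setting negative elements to zero) can be sketched schematically as follows. This is our own illustrative code with our own variable names; plain iterative soft-thresholding (ISTA) is used here in place of the FISTA solver employed in the experiments.
\begin{verbatim}
import numpy as np

def sparse_initial_estimate(G, Phi, K, lam=1e-3, n_iter=500):
    # TSVD preconditioning: keep only the K largest singular values of G.
    U, S, Vt = np.linalg.svd(G, full_matrices=False)
    M = np.diag(1.0 / S[:K]) @ U[:, :K].T        # M = Sigma_t^{-1} U_t^T
    y, A = M @ Phi, Vt[:K, :]                    # y = M Phi,  A = V_t^T

    # l1-regularized solve by iterative soft-thresholding (ISTA); FISTA adds
    # a momentum step to the same gradient/shrinkage update.
    x = np.zeros(A.shape[1])
    L = max(np.linalg.norm(A, 2) ** 2, 1e-12)    # Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - y)) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return np.maximum(x, 0.0)                    # clip negatives before MLEM
\end{verbatim}
The output of this routine is then passed as the initial estimate to the MLEM update sketched earlier.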
\section{Discussion} In this paper, we have proposed an MLEM-based technique to reconstruct the fluorescence distribution from FMT data. In our framework, the initial estimate for the MLEM algorithm is derived from a sparse reconstruction method. Often a uniform initial estimate is used with MLEM-based techniques, but here we observe that a sparsity-initialized technique yields several advantages compared to uniformly initialized MLEM. First, sparsity-initialized MLEM converges faster. From Table \ref{table2}, Fig.~\ref{fig:cube_crosec}, Fig.~\ref{fig:cube_quant}(a) and Fig.~\ref{fig:moby_quant}(a), we observe that the sparse initial estimate speeds up convergence both by shortening the computation time for each iteration and by requiring fewer iterations for convergence. In addition, sparsity-initialized MLEM also provides improved quantitative performance in ROI bias, background bias, ROI spatial variance, RMSE, and bias-variance trade-off compared to uniformly initialized MLEM, as shown in Fig.~\ref{fig:cube_quant}-\ref{fig:moby_SNR}. Further, while the results in both the cube phantom and the digital mouse phantom experiments indicate that the proposed method leads to higher ROI spatial variance compared to uniformly initialized MLEM for the same number of iterations, Fig.~\ref{fig:cube_quant}(c) and Fig.~\ref{fig:moby_quant}(c) show that the proposed method still provides a better bias-variance trade-off compared to MLEM with a uniform initial estimate. Further, sparsity-initialized MLEM often requires fewer iterations, which enables it to provide lower ROI spatial variance compared to uniformly initialized MLEM, as we observe from Fig.~\ref{fig:cube_SNR}(b) and Fig.~\ref{fig:moby_SNR}(b). \begin{figure} \centering\includegraphics[width=\textwidth]{moby_SNR} \caption{Quantitative results of different reconstruction methods as functions of SNR for the digital mouse phantom. (a) Plot of ROI bias vs. SNR. (b) Plot of ROI variance vs. SNR. (c) Plot of background bias vs. SNR. (d) Plot of background variance vs. SNR. (e) Plot of RMSE vs. SNR.} \label{fig:moby_SNR} \end{figure} We also observe that sparsity-initialized MLEM provides advantages over the pure sparse reconstruction method. From Fig.~\ref{fig:moby_crosecprecond}, we notice that sparsity-initialized MLEM is less sensitive to the choice of truncation number. For the pure sparse reconstruction method, when the truncation number is small, the reconstructed image is blurry. As the truncation number increases, the image resolution is improved, but the noise in the background region is also increased due to the noise amplification during preconditioning. On the other hand, for small truncation numbers, sparsity-initialized MLEM is able to improve the resolution compared to the pure sparse reconstruction method. For large truncation numbers, sparsity-initialized MLEM reduces the noise in the background. These properties make MLEM with a sparse initial estimate more robust to the choice of truncation number compared to the pure sparse reconstruction method. The plots of RMSE vs. truncation number in Fig.~\ref{fig:moby_prec} also demonstrate this point. Sparsity-initialized MLEM also improves the quantitative performance of the reconstructed images compared to the pure sparse reconstruction method. Fig. \ref{fig:cube_SNR} and Fig. \ref{fig:moby_SNR} indicate this for different SNR values. 
Apart from improved background bias and spatial variance due to the reduction of background noise, sparsity-initialized MLEM also reduces the ROI bias compared to the pure sparse reconstruction method, especially at low SNR values. At low SNR values, a small truncation number is preferred to avoid noise amplification, which results in only a small number of measurements being used for reconstruction. A small truncation number not only generates blurry images, as we discussed previously, but also causes severe bias in the reconstruction results. For example, for SNR$=5\textrm{ dB}$, we observe that pure sparse reconstruction generated $71\%$ ROI bias in the cube phantom experiments and $57\%$ ROI bias in the digital mouse phantom experiments. As a comparison, MLEM uses the original system matrix and detector measurements for reconstruction, enabling it to compensate for the bias in the image, which reduced the ROI bias to $40\%$ in the cube phantom experiments and $33\%$ in the digital mouse phantom experiments. \begin{figure} \centering\includegraphics[width=\textwidth]{moby_crosec_40dB2} \caption{Cross sections of the fluorescence target reconstructed with the pure sparse reconstruction method for the top row and the proposed method for the bottom row, with different truncation numbers $K$, for the digital mouse phantom at SNR=$40$ dB.} \label{fig:moby_crosecprecond} \end{figure} We have observed, for example in Fig.~\ref{fig:cube_crosec}, that sparsity-initialized MLEM is able to suppress the noise in the background region that is present in the sparse initial estimate. To explain this observation, here we provide a theoretical justification. For a set of detector measurements denoted by $\mathbf{\Phi}$, consider two reconstructed images $\mathbf{x}_1$ and $\mathbf{x}_2$, where $\mathbf{x}_1$ is an image with noise in the background region (referred to as background noise), and $\mathbf{x}_2$ is an image that does not contain this background noise, as shown in Fig.~\ref{fig:discuss}(a) and (b), respectively. We denote the background noise as $\mathbf{\epsilon}=\mathbf{x}_1-\mathbf{x}_2$, where $\epsilon_n\geq0$ for all $n$. Before we proceed further, we introduce the concept of the Kullback-Leibler (KL) distance. This distance measures how much two probability distributions diverge from one another. It is known that MLEM attempts to find an estimate that minimizes the KL distance between the measured data $\mathbf{\Phi}$ and the data predicted by an estimate, $\mathbf{G}\mathbf{x}$. Thus, our objective is to assess whether the KL distance for $\mathbf{x}_2$ is less than that for $\mathbf{x}_1$, which would explain why MLEM yields the solution $\mathbf{x}_2$ rather than $\mathbf{x}_1$. For $\mathbf{x}_1$, the KL distance is: \begin{figure} \centering \subfloat[]{\includegraphics[width=0.45\textwidth]{rmse_40dB}} \hspace{0.1 cm} \subfloat[]{\includegraphics[width=0.45\textwidth]{rmse_20dB}} \\ \caption{Plot of RMSE vs. truncation number for the pure sparse reconstruction method and the proposed reconstruction method for different noise levels. (a) Plot of RMSE vs. truncation number for SNR=$40$ dB. (b) Plot of RMSE vs. truncation number for SNR=$20$ dB. } \label{fig:moby_prec} \end{figure} \begin{equation} D_{KL,1}(\mathbf{\Phi},\mathbf{G}\mathbf{x}_1)=\sum_m\left\{(\mathbf{G}\mathbf{x}_1)_m-\phi_m+\phi_m\ln{\frac{\phi_m}{(\mathbf{G}\mathbf{x}_1)_m}}\right\}. 
\end{equation} For $\mathbf{x}_2$, the KL distance is: \begin{equation} D_{KL,2}(\mathbf{\Phi},\mathbf{G}\mathbf{x}_2)=\sum_m\left\{(\mathbf{G}\mathbf{x}_2)_m-\phi_m+\phi_m\ln{\frac{\phi_m}{(\mathbf{G}\mathbf{x}_2)_m}}\right\}. \end{equation} Then the difference is: \begin{align} \Delta D_{KL}&=D_{KL,2}(\mathbf{\Phi},\mathbf{G}\mathbf{x}_2)-D_{KL,1}(\mathbf{\Phi},\mathbf{G}\mathbf{x}_1)\nonumber\\ &=\sum_m\left\{-(\mathbf{G}\mathbf{\epsilon})_m+\phi_m\ln{\left[1+\frac{(\mathbf{G}\mathbf{\epsilon})_m}{(\mathbf{G}\mathbf{x}_2)_m}\right]}\right\}. \end{align} We denote $f_m((\mathbf{G}\mathbf{\epsilon})_m)=-(\mathbf{G}\mathbf{\epsilon})_m+\phi_m\ln{\left[1+\frac{(\mathbf{G}\mathbf{\epsilon})_m}{(\mathbf{G}\mathbf{x}_2)_m}\right]}$. If $\phi_m\leq(\mathbf{G}\mathbf{x}_2)_m$, then $f_m\leq 0$, since $f_m$ is monotonically decreasing for $(\mathbf{G}\mathbf{\epsilon})_m\geq 0$ and $f_m(0)=0$. If $\phi_m>(\mathbf{G}\mathbf{x}_2)_m$, the plot of $f_m$ is as shown in Fig.~\ref{fig:discuss}(c). \begin{figure} \centering\includegraphics[width=0.8\textwidth]{discuss} \caption{(a) Image with background noise. The noise spot in the background is marked with a red circle. (b) Image without background noise. (c) Plot of $f_m$. (d) Plot of $(\mathbf{G}\mathbf{\epsilon})_m$ and $2(\mathbf{\Phi}-\mathbf{G}\mathbf{x}_2)_m$.} \label{fig:discuss} \end{figure} To estimate the zeros of $f_m$, we use a second-order Taylor expansion to approximate $f_m$, which gives: \begin{equation} f_m\approx -(\mathbf{G}\mathbf{\epsilon})_m+\phi_m\left[\frac{(\mathbf{G}\mathbf{\epsilon})_m}{(\mathbf{G}\mathbf{x}_2)_m}-\frac{1}{2}\left(\frac{(\mathbf{G}\mathbf{\epsilon})_m}{(\mathbf{G}\mathbf{x}_2)_m}\right)^2 \right]. \end{equation} Setting $f_m=0$, we obtain \begin{equation} (\mathbf{G}\mathbf{\epsilon})_{m,1}=0, \end{equation} and \begin{align} (\mathbf{G}\mathbf{\epsilon})_{m,2}&=\frac{2(\mathbf{G}\mathbf{x}_2)_m[\phi_m-(\mathbf{G}\mathbf{x}_2)_m]}{\phi_m}\nonumber\\ &\leq2[\phi_m-(\mathbf{G}\mathbf{x}_2)_m], \end{align} where the inequality comes from the fact that $\phi_m>(\mathbf{G}\mathbf{x}_2)_m$. Also, note that the function $f_m$ has its maximum at $(\mathbf{G}\mathbf{\epsilon})_m=\phi_m-(\mathbf{G}\mathbf{x}_2)_m$. Thus, if the detector response to the noise spot has a similar pattern to $\phi_m-(\mathbf{G}\mathbf{x}_2)_m$, $f_m$ will be close to its maximum for most detector indices $m$. This provides a higher chance that $\Delta D_{KL}>0$, which means that MLEM is more likely to update towards the noisy image. This is the case for noise close to the ROI. On the other hand, for a noise spot in the background region, the detector response to the noise spot will have a very different pattern from $\phi_m-(\mathbf{G}\mathbf{x}_2)_m$, as shown in Fig.~\ref{fig:discuss}(d). For detector indices $m$ where $\phi_m-(\mathbf{G}\mathbf{x}_2)_m>0$, $(\mathbf{G}\mathbf{\epsilon})_m$ is either close to $0$ or too large. This makes $f_m$ either close to zero or negative. In this case, there is a higher chance that $\Delta D_{KL}<0$, meaning that MLEM tends to update towards the result without the noise spot. The noise model in FMT is often assumed to be Gaussian\cite{Zhao:14,ye2014fast,Han:10,dutta2012joint,shi2013greedy,yao2015wide}. In very few cases is the Poisson noise model applied\cite{7351233}. The Gaussian noise model is a good approximation when the SNR is high, i.e.\ when a sufficient number of photons is detected. However, in some applications, the SNR value might be low, such as in brain imaging\cite{Raymond:09}, dynamic FMT\cite{liu2011unmixing} and early-photon FMT\cite{leblond2009early}. 
Our results demonstrate that incorporating the Poisson noise model is especially valuable in these scenarios. More specifically, the pure sparse reconstruction method was formulated based on a Gaussian noise model, while the proposed method incorporated both the sparsity information and the Poisson noise model. We observe that the performance of the proposed method improves in comparison to the pure sparse reconstruction method as the SNR value decreases, and the proposed method is substantially more reliable at low SNR values. This shows the importance of accurately modeling Poisson noise in applications of FMT where an insufficient number of photons is detected. In this work, we only considered the case where the background uptake of the fluorescence distribution is zero. While this is a common assumption in FMT studies\cite{Zhao:14,ye2014fast,Han:10,dutta2012joint,shi2013greedy,jin2014light,yao2015wide}, it is possible that the background uptake is non-zero. Exploring the performance of the proposed method for this task would be an important future direction. The proposed method has been validated with extensive simulation experiments. Evaluating the performance of the method with physical phantoms and \textit{in vivo} animal experiments is another important direction of research. Finally, we used the Monte Carlo-based method to model photon propagation and obtain the Green's function in this work. However, there have been several analytical methods proposed for modeling light transport\cite{Lehtikangas:12,Tarvainen:05,mohan2011variable,Jha:12,jha2012three,Jha:17}. These methods can also be used to obtain an expression for the Green's function. Analytical methods offer the advantage that they might be less sensitive to photon noise. Thus, implementing this reconstruction method using the analytical approaches is another important research direction. \section{Conclusion} We have presented a reconstruction framework for FMT involving sparsity-initialized MLEM. Simulation experiments on a cube phantom and a digital mouse phantom demonstrate that the proposed method yields improved qualitative and quantitative performance compared to uniformly initialized MLEM as well as pure sparse reconstruction techniques. Further, compared to uniformly initialized MLEM, the proposed method is faster to execute, overcoming another barrier to the application of the MLEM technique in optical tomography. Moreover, compared to pure sparse reconstruction, the proposed method is more robust to noise amplification. We have also provided a theoretical justification for the ability of the proposed method to reduce noise in the background region. Overall, this paper provides strong evidence that the proposed sparsity-initialized MLEM-based reconstruction framework is feasible and advantageous for reconstruction in FMT imaging systems. \section{Funding} NIH BRAIN Initiative Award R24 MH106083. \section{Acknowledgments} The authors thank Drs. Eric Frey and Jin Kang for helpful discussions. \section{Disclosures} Dean Wong acknowledges contract work with Lilly, Lundbeck, Intracellular, Five Eleven Pharma, Roche and Dart pharmaceuticals. \end{document}
\section{Introduction and Review} \label{sec:Introduction} The last few years have seen a remarkable resurgence of interest in an old approach to Conformal Field Theory (CFT), the conformal bootstrap \cite{Polyakov:1974gs, Belavin:1984vu}, with a great deal of progress leading to new results of phenomenological \cite{Rattazzi:2008pe, Rychkov:2009ij, Rattazzi:2010yc, Vichi:2011ux, Poland:2011ey, ElShowk:2012ht, ElShowk:2012hu} and theoretical \cite{JP, Rattazzi:2010gj, Poland:2010wg, Heemskerk:2010ty, Pappadopulo:2012jk, Liendo:2012hy} import. Most of these new works use numerical methods to constrain the spectrum and OPE coefficients of general CFTs. In a parallel series of developments, there has been significant progress in understanding effective field theory in AdS and its interpretation in CFT \cite{JP, Heemskerk:2010ty, Katz, Sundrum:2011ic, ElShowk:2011ag, AdSfromCFT}. This has led to a general bottom-up classification of which CFTs have dual \cite{Maldacena, Witten, Gubser:1998bc} descriptions as effective field theories in AdS, providing an understanding of AdS locality on all length scales greater than the inverse energy cutoff in the bulk. In fact, these two developments are closely related, as the seminal paper \cite{JP} and subsequent work begin by applying the bootstrap to the $1/N$ expansion of CFT correlators. This approach has been fruitful, especially when interpreted in Mellin space \cite{Mack, MackSummary, JoaoMellin, NaturalLanguage, Paulos:2011ie, Paulos:2012nu, AdSfromCFT}, but it is an essentially perturbative approach analogous \cite{Unitarity} to the use of dispersion relations for the study of perturbative scattering amplitudes. In light of recent progress, one naturally wonders if an analytic approach to the bootstrap could yield interesting new exact results. In fact, in \cite{Pappadopulo:2012jk} bounds on operator product expansion (OPE) coefficients for large dimension operators have already been obtained. We will obtain a different sort of bound on both OPE coefficients and operator dimensions in the limit of large angular momentum, basically providing a non-perturbative bootstrap proof of some results that Alday and Maldacena \cite{Alday:2007mf} have also discussed.\footnote{ The authors of \cite{Alday:2007mf} explicitly discuss minimal twist double-trace operators in a large $N$ gauge theory; however their elegant argument can be applied in a more general context, beyond perturbation theory and for general twists. We thank J. Maldacena for discussions of this point. } Specifically, we will study a general scalar primary operator $\phi$ of dimension $\Delta_{\phi}$ in a CFT in $d>2$ dimensions. We will prove that for each non-negative integer $n$ there must exist an infinite tower of operators ${\cal O}_{\tau, \ell}$ with twist $\tau \to 2 \Delta_{\phi} + 2n$ appearing in the OPE of $\phi$ with itself. This means that at large $\ell$ the twists of these operators approach $2 \Delta_{\phi} + 2n$, so we can define an `anomalous dimension' $\gamma(n, \ell)$ which vanishes as $\ell \to \infty$. If there exists one such operator at each $n$ and $\ell$, we will argue that at large $\ell$ the anomalous dimensions should roughly approach \begin{eqnarray} \gamma(n, \ell) \approx \frac{\gamma_n}{\ell^{\tau_m}} , \end{eqnarray} where $\tau_m$ is the twist of the minimal twist operator appearing in the OPE of $\phi$ with itself. Related predictions can be made about the OPE coefficients. 
Finally, we will show that the OPE coefficients of other operators appearing in the OPE of $\phi$ with itself at large $\ell$ must be bounded, so that they fall off even faster as $\ell \to \infty$. Similar results also hold for the OPE of pairs of operators $\phi_1$ and $\phi_2$, although for simplicity we will leave the discussion of this generalization to Appendix \ref{app:DistinctOperators}. Our arguments fail for CFTs in two dimensions, and in fact minimal models provide an explicit counter-example. Two dimensional CFTs are distinguished because there is no gap between the twist of the identity operator and the twist of other operators, such as conserved currents and the energy-momentum tensor. Our results can be interpreted as a proof that all CFTs in $d > 2$ dimensions have correlators that are dual to local AdS physics on superhorizon scales. That is, CFT processes that are dual to bulk interactions will effectively shut off as the bulk impact parameter is taken to be much greater than the AdS length. This can also be viewed as a strong form of the cluster decomposition principle in the bulk. Since the early days of AdS/CFT it has been argued that this notion of ``coarse locality'' \cite{JP} could be due to a decoupling of modes of very different wavelengths, but it has been challenging to make this qualitative holographic RG intuition precise. The bootstrap offers a precise and general method for addressing coarse locality. For the remainder of this section we will give a quick review of the CFT bootstrap. Then in section~\ref{sec:Proof} we delve into the argument, first giving an illustrative example from mean field theory (a Gaussian CFT, with all correlators fixed by 2-pt functions, e.g. a free field theory in AdS). We give the complete argument in sections \ref{sec:DblTrace} and \ref{sec:Bound}, with some more specific results and examples that follow from further assumptions in section \ref{sec:IsolatedTowers}. We provide more detail on how two dimensional CFTs escape our conclusions in section \ref{sec:2d}. In section \ref{sec:super} we connect our results to superhorizon locality in AdS, and we conclude with a brief discussion in section \ref{sec:Discussion}. In Appendix \ref{app:LargeLBlocks} we collect some results on relevant approximations of the conformal blocks in four and general dimensions. In Appendix \ref{app:Rigorous} we give a more formal and rigorous version of the argument in section \ref{sec:Proof}. In Appendix \ref{app:DistinctOperators} we explain how our results generalize to terms occurring in the OPE of distinct operators $\phi_1$ and $\phi_2$. In Appendix \ref{app:Joao} we connect our results with perturbative gravity computations in AdS. \emph{Note added: after this work was completed we learned of the related work of Komargodski and Zhiboedov \cite{Komargodski:2012ek}; they obtain very similar results using somewhat different methods. } \subsection{Lightning Bootstrap Review} In CFTs, the bootstrap equation follows from the constraints of conformal invariance and crossing symmetry applied to the operator product expansion, which says that a product of local operators is equivalent to a sum \begin{eqnarray} \phi(x) \phi(0) &=& \sum_{{\cal O}} c_{{\cal O}} f_{{\cal O}}(x,\partial) {\cal O}(0). \end{eqnarray} Conformal invariance relates the OPE coefficients of all operators in the same irreducible conformal multiplet, and this allows one to reduce the sum above to a sum over different irreducible multiplets, or ``conformal blocks''. 
When this expansion is performed inside of a four-point function, the contribution of each block is just a constant ``conformal block coefficient'' $P_{{\cal O}} \propto c_{{\cal O}}^2$ for the entire multiplet times a function of the $x_i$'s whose functional form depends only on the spin $\ell_{{\cal O}}$ and dimension $\Delta_{{\cal O}}$ of the lowest-weight (i.e. ``primary'') operator of the multiplet: \begin{eqnarray} \< \phi(x_1) \phi(x_2) \phi(x_3) \phi(x_4) \> &=& \frac{1}{(x_{12}^2 x_{34}^2)^{\Delta_{\phi}}} \sum_{{\cal O}} P_{{\cal O}} g_{\tau_{{\cal O}}, \ell_{{\cal O}}}(u,v), \end{eqnarray} where $x_{ij} = x_i - x_j$, the twist of ${\cal O}$ is $\tau_{{\cal O}} \equiv \Delta_{{\cal O}} - \ell_{{\cal O}}$, and \begin{eqnarray} u= \left( \frac{x_{12}^2x_{34}^2}{x_{24}^2 x_{13}^2 } \right), \qquad v = \left( \frac{x_{14}^2 x_{23}^2}{x_{24}^2 x_{13}^2 } \right), \end{eqnarray} are the conformally invariant cross-ratios. The functions $g_{\tau_{{\cal O}}, \ell_{{\cal O}}} (u,v)$ are also usually referred to as conformal blocks or conformal partial waves \cite{Dolan:2000ut, Dolan:2003hv, Dolan:2011dv, DSDProjectors}, and they are crucial elementary ingredients in the bootstrap program. In the above, we took the OPE of $\phi(x_1) \phi(x_2)$ and $\phi(x_3) \phi(x_4)$ inside the four-point function, but one can also take the OPE in the additional ``channels'' $\phi(x_1) \phi(x_3)$ and $\phi(x_2) \phi(x_4)$ or $\phi(x_1) \phi(x_4)$ and $\phi(x_2) \phi(x_3)$, and the bootstrap equation is the constraint that the decomposition in different channels matches: \begin{eqnarray} \label{eq:bootstrap} \frac{1}{(x_{12}^2 x_{34}^2)^{\Delta_{\phi}}} \sum_{{\cal O}} P_{{\cal O}}g_{\tau_{{\cal O}}, \ell_{{\cal O}}} (u,v) &=& \frac{1}{(x_{14}^2 x_{23}^2)^{\Delta_{\phi}}} \sum_{{\cal O}} P_{{\cal O}}g_{\tau_{{\cal O}}, \ell_{{\cal O}}} (v,u). \end{eqnarray} Much of the power of this constraint follows from the fact that by unitarity, the conformal block coefficients $P_{ {\cal O}}$ must all be non-negative in each of these channels, because the $P_{{\cal O}}$ can be taken to be the squares of real OPE coefficients. \section{The Bootstrap and Large $\ell$ Operators} \label{sec:Proof} Although some of the arguments below are technical, the idea behind them is very simple. By way of analogy, consider the $s$-channel partial wave decomposition of a tree-level scattering amplitude with poles in both the $s$ and $t$ channels. The center of mass energy is simply $\sqrt{s}$, so the $s$-channel poles will appear explicitly in the partial wave decomposition. However, the $t$-channel poles will not be manifest. They will arise from the infinite sum over angular momenta, because the large angular momentum region encodes long-distance effects. Crossing symmetry will impose constraints between the $s$-wave and $t$-wave decompositions, relating the large $\ell$ behavior in one channel with the pole structure of the other channel. We will be studying an analogous phenomenon in the conformal block (sometimes called conformal partial wave) decompositions of CFT correlation functions. The metaphor between scattering amplitudes and CFT correlation functions is very direct when the CFT correlators are expressed in Mellin space, but in what follows we will stick to position space. In position space CFT correlators, the poles of the scattering amplitude are analogous to specific power-laws in conformal cross-ratios, with the smallest power-laws corresponding to the leading poles. 
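As a concrete reference point for the kinematics used throughout (this is an illustration we add here, not part of the original argument; the random points and the choice $d=3$ are arbitrary), the following short Python snippet evaluates the cross-ratios $u$ and $v$ defined above and checks numerically that exchanging $x_1 \leftrightarrow x_3$ swaps $u \leftrightarrow v$, which is precisely the operation relating the two OPE channels in the bootstrap equation.
\begin{verbatim}
# Minimal numerical illustration of the conformal cross-ratios (assumes NumPy).
import numpy as np

def cross_ratios(x1, x2, x3, x4):
    """Return (u, v) for four points in Euclidean R^d."""
    s = lambda a, b: np.sum((a - b)**2)            # x_ij^2
    u = s(x1, x2) * s(x3, x4) / (s(x1, x3) * s(x2, x4))
    v = s(x1, x4) * s(x2, x3) / (s(x1, x3) * s(x2, x4))
    return u, v

rng = np.random.default_rng(0)
x1, x2, x3, x4 = rng.normal(size=(4, 3))           # four random points, d = 3
print(cross_ratios(x1, x2, x3, x4))                # (u, v)
print(cross_ratios(x3, x2, x1, x4))                # (v, u): the 1 <-> 3 exchange
\end{verbatim}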
\subsection{An Elementary Illustration from Mean Field Theory} \label{sec:MFTExample} Let us begin by considering what naively appears to be a paradox. Consider the 4-point correlation function in a CFT with only Gaussian or `mean field theory' (MFT) type correlators. These mean field theories are the dual of free field theories in AdS. We will study the 4-pt correlator of a dimension $\Delta_{\phi}$ scalar operator $\phi$ in such a theory. By definition, in mean field theory the 4-pt correlator is given as a sum over the 2-pt function contractions: \begin{eqnarray} \langle \phi(x_1) \phi(x_2) \phi(x_3) \phi(x_4) \rangle &=& \frac{1}{(x_{12}^2 x_{34}^2)^{\Delta_{\phi}} } + \frac{1}{(x_{13}^2 x_{24}^2)^{\Delta_{\phi}}} + \frac{1}{(x_{14}^2x_{23}^2)^{\Delta_{\phi}}} ,\nonumber\\ &=& \frac{1}{(x_{13}^2 x_{24}^2)^{\Delta_{\phi}}} \left(u^{-\Delta_{\phi}} + 1 + v^{-\Delta_{\phi}} \right). \end{eqnarray} Since this is the 4-pt correlator of a unitary CFT, it has a conformal block decomposition in every channel with positive conformal block coefficients. The operators appearing in the conformal block decomposition are just the identity operator $\mathbf{1}$ and the ``double-trace'' operators ${\cal O}_{n,\ell}$ of the schematic form \begin{eqnarray} {\cal O}_{n,\ell} \sim \phi (\partial^2)^n \partial_{\mu_1} \dots \partial_{\mu_\ell} \phi, \end{eqnarray} with known \cite{Unitarity} conformal block coefficients $P_{2\Delta_{\phi} + 2n,\ell}$ and twists $\tau_{n,\ell} = 2\Delta_{\phi} + 2n$. Factoring out an overall $(x_{13}^2 x_{24}^2)^{-\Delta_{\phi}}$, the conformal block decomposition in the $14 \rightarrow 23$ channel reads \begin{eqnarray} \label{eq:MFTConfBlock} u^{-\Delta_{\phi}} + 1 + v^{-\Delta_{\phi}} = v^{-\Delta_{\phi}} + v^{-\Delta_{\phi}} \sum_{n, \ell} P_{2\Delta_{\phi} + 2n,\ell} \ \! g_{2\Delta_{\phi} + 2n, \ell} (v, u), \end{eqnarray} where the $v^{-\Delta_{\phi}}$ on the RHS is the contribution from the identity operator. If we look at the behavior of the conformal blocks $g_{2\Delta_{\phi} + 2n, \ell} (v, u)$, we notice a simple problem with this equation: it is known that the conformal blocks $g_{2\Delta_{\phi} + 2n, \ell}(v,u)$ in the sum on the RHS each have at most a $\log u$ divergence at small $u$, but the LHS has a $u^{-\Delta_{\phi}}$ divergence. Thus the LHS cannot be reproduced by any finite number of terms in the sum. To be a bit more precise, the conformal blocks have a series expansion around $u=0$ with only non-negative integer powers of $u$ and at most a single logarithm appearing, so in particular we can write \begin{eqnarray} v^{-\Delta_{\phi}} g_{2\Delta_{\phi} + 2n, \ell} (v, u) &=& f_0(v) + u f_1(v) + u^2 f_2(v) + \ldots \nonumber\\ && + \log(u) \left( \tilde{f}_0 (v) + u \tilde{f}_1 (v) + u^2 \tilde{f}_2 (v) + \ldots \right) . \end{eqnarray} But this means that if the sum on the right-hand side of equation (\ref{eq:MFTConfBlock}) converges uniformly, it cannot reproduce the left-hand side, which includes the negative power term $u^{-\Delta_{\phi}}$ and does not include any logarithms. The simple resolution of this `paradox' is that the sum over conformal blocks does not converge uniformly near $u = 0$. In fact, the sum does converge on an open set with positive real $u$, but when Re$[\sqrt u] < 0$ the sum diverges. So we must define the sum over conformal blocks for general $u$ as the analytic continuation of the sum in the convergent region. 
Crucially, the analytic continuation of the sum contains the power-law $u^{-\Delta_{\phi}}$ that is not exhibited by any of the individual terms in the sum. Let us see how this works in a bit more detail, so that in particular, we can see that the sum over twists $\tau= 2 \Delta_{\phi} + 2n$ at fixed $\ell$ converges in a neighborhood of $u=0$, but the sum over angular momentum diverges for $u<0$. For the purpose of understanding convergence, we need only study the conformal blocks when $\tau$ or $\ell$ are very large. In the very large $\tau$ limit with $|u|, |v| < 1$ the blocks are always suppressed by $u^{\frac{\tau}{2}}$ or $v^{\frac{\tau}{2}}$. The conformal block coefficients are bounded at large $\tau$ \cite{Pappadopulo:2012jk}. This means that for small $|u|$ and $|v|$, the sum over $\tau$ will converge. In fact, once we know that the sum converges for some particular $u_0, v_0$ we see that for $u < u_0$ and $v < v_0$, the convergence at large $\tau$ becomes exponentially faster. Now consider the $\ell$ dependence. At large $\ell$ and fixed $\tau$, we establish in Appendix~\ref{app:LargeLBlocks} that the crossed-channel blocks in the $|u| \ll |v| \ll 1$ limit behave as \begin{eqnarray} g_{\tau, \ell}(v, u) \approx \frac{\ell^{\frac12} 2^{\tau+2\ell}}{\sqrt{\pi}} v^{\frac{\tau}{2}} K_0\left(2\ell \sqrt{u} \right) \stackrel{\ell \sqrt{u} \gg 1}{\approx} 2^{\tau + 2\ell - 1} v^{\frac{\tau}{2}} \frac{e^{- 2 \ell \sqrt{u}} }{\sqrt[4] u} , \label{eq:conffactor} \end{eqnarray} where $K_0$ is a modified Bessel function. Notice that for Re$[\sqrt u ] > 0$ there is an exponential suppression at very large $\ell$, but for Re$[\sqrt u ] < 0$ there is an exponential growth. Note also that at small $v$ the lowest twist terms ($n=0$) will dominate. Now, the mean field theory conformal block coefficients in any dimension $d$ are \cite{Unitarity} \begin{eqnarray} \label{eq:MFTCoeffs} P_{2\Delta_\phi + 2n, \ell} = \frac{\left[ 1+ (-1)^\ell \right] (\Delta_\phi - \frac{d}{2} + 1)_n^2 (\Delta_\phi)_{n + \ell}^2 }{\ell! n! (\ell+ \frac{d}{2})_n (2 \Delta_\phi + n - d + 1)_n (2 \Delta_\phi + 2n + \ell - 1)_\ell (2 \Delta_\phi + n + \ell - \frac{d}{2})_n}, \end{eqnarray} where the Pochhammer symbol $(a)_b \equiv \Gamma(a+b)/\Gamma(a)$. In particular, for $n=0$ and at large even $\ell$ we can approximate \begin{eqnarray} P_{2\Delta_{\phi},\ell} \stackrel{\ell \gg 1}{ \approx} q_{\Delta_{\phi}} \frac{\sqrt{\pi} }{2^{2\Delta_{\phi} + 2\ell}} \ell^{2\Delta_{\phi}-\frac32}, \end{eqnarray} where $q_{\Delta_{\phi}}$ is an $\ell$-independent prefactor.\footnote{Explicitly, $q_{\Delta_{\phi}} = \left( \frac{8}{ \Gamma(\Delta_{\phi})^2} \right)$. } Thus the sum in Eq.~(\ref{eq:MFTConfBlock}) at large $\ell$ and $|u| \ll |v| \ll 1$ takes the form \begin{equation} v^{-\Delta_{\phi}} \sum_{n, \,\textrm{large } \ell} P_{2\Delta_{\phi} + 2n,\ell} \ \! g_{2\Delta_{\phi}+2n, \ell} (v, u) \approx q_{\Delta_{\phi}} \sum_{\textrm{large even} \ \ell}^\infty \ell^{2\Delta_{\phi}-1} K_0\left(2\ell \sqrt{u}\right). \end{equation} This sum converges at large $\ell$ for positive real $\sqrt u$, and so we will define it by analytic continuation elsewhere in the complex $u$ plane. As can be easily seen by approximating the sum with an integral, the result reproduces the $u^{- \Delta_{\phi}}$ power-law term on the left-hand side of equation (\ref{eq:MFTConfBlock}), as desired. 
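To spell out this last step explicitly (a short side calculation that we include for completeness), one can replace the sum over even spins by one half of an integral over $\ell$ and use the standard integral $\int_0^\infty d\ell \, \ell^{2\Delta_{\phi}-1} K_0(2 \ell \sqrt{u}) = \frac{\Gamma(\Delta_{\phi})^2}{4} \, u^{-\Delta_{\phi}}$, so that \begin{eqnarray} q_{\Delta_{\phi}} \sum_{\textrm{large even} \ \ell} \ell^{2\Delta_{\phi}-1} K_0\left(2\ell \sqrt{u}\right) \ \approx\ \frac{q_{\Delta_{\phi}}}{2} \int_0^\infty d\ell \ \ell^{2\Delta_{\phi}-1} K_0\left(2\ell \sqrt{u}\right) \ =\ \frac{q_{\Delta_{\phi}}}{2}\, \frac{\Gamma(\Delta_{\phi})^2}{4}\, u^{-\Delta_{\phi}} \ =\ u^{-\Delta_{\phi}}, \end{eqnarray} where in the last equality we used $q_{\Delta_{\phi}} = 8/\Gamma(\Delta_{\phi})^2$. Extending the integral down to $\ell = 0$ only affects terms that are subleading as $u \to 0$, since the integral is dominated by $\ell \sim 1/\sqrt{u}$.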
Thus we have seen that general power-laws in $u$ are reproduced by conditionally convergent large $\ell$ sums in the conformal block decomposition, with a power-law dependence on $\ell$ producing a related power of $u$ as $u \to 0$. \subsection{Existence of Twist $2 \Delta_{\phi} + 2n + \gamma(n, \ell)$ Operators at Large $\ell$} \label{sec:DblTrace} In section \ref{sec:MFTExample} we saw how in MFT the sum over large $\ell$ conformal blocks in the crossed $14 \to 23$ channel controls the leading power-law behavior in $u$ in the standard $12 \to 34$ channel. Now we will use the bootstrap equation (\ref{eq:bootstrap}) to turn this observation into a powerful and general method for learning about the spectrum and the conformal block coefficients $P_{\tau, \ell}$ at large $\ell$ in any CFT. Separating out the identity operator, the bootstrap equation reads \begin{equation} \label{eq:CrossingBootstrap} 1 + \sum_{\tau,\ell} P_{\tau, \ell} \ \! u^{\frac{\tau}{2}} f_{\tau,\ell}(u,v) = \p{\frac{u}{v}}^{\Delta_\phi}\p{1+\sum_{\tau,\ell} P_{\tau,\ell} \ \! v^{\frac{\tau}{2}} f_{\tau,\ell}(v,u)}, \end{equation} where we have written the conformal blocks as $g_{\tau, \ell}(u,v) = u^{\frac{\tau}{2}} f_{\tau, \ell}(u,v)$ to emphasize their leading behavior at small $u$ and $v$. We will work in $d >2$ so that the unitarity bound on twists \begin{eqnarray} \tau &\ge& \left\{ \begin{array}{cc} \frac{d-2}{2} & (\ell=0),\\ d-2 & (\ell \ge 1), \end{array} \right. \label{eq:unitaritybound} \end{eqnarray} strictly separates the identity operator from all other operators. The arguments in this section will follow from an elementary point: \begin{itemize} \item In the small-$u$ limit, the sum on the right-hand side of equation (\ref{eq:CrossingBootstrap}) must correctly reproduce the identity contribution on the left-hand side. \end{itemize} We will show that this implies the existence of towers of operators with increasing spin whose twists approach $2\Delta_\phi+2n$, for each integer $n\geq 0$. Together with the results in Appendix \ref{app:LargeLBlocks} and the more rigorous arguments in Appendix \ref{app:Rigorous}, we will provide a rigorous proof of this claim. In subsequent sections, we will consider subleading corrections to the small-$u$ limit of Eq.~(\ref{eq:CrossingBootstrap}) coming from operators of minimal non-zero twist. For the remainder of this section we will use the approximate relation \begin{eqnarray} \label{eq:ApproxCrossing} 1 \approx \p{\frac{u}{v}}^{\Delta_\phi}\sum_{\tau,\ell} P_{\tau,\ell} \ \! g_{\tau,\ell}(v,u), \qquad (u \to 0), \end{eqnarray} valid up to strictly sub-leading corrections in the limit $ u \to 0$.\footnote{The sum over conformal blocks on the left-hand side of the crossing relation is necessarily a subleading correction at small $u$. The reason for this is that we take $v$ to a small but fixed value when we take the small $u$ limit, so the conformal blocks factorize at large spin as shown in Eq.~(\ref{eq:conffactor}) (with $v$ and $u$ interchanged). The sum over spins on the left-hand side therefore manifestly cannot produce additional singularities in $u$. The sum over twists is regulated by the $u^{\frac{\tau}{2} - \Delta_\phi}$ factor.} As we saw in section \ref{sec:MFTExample}, no finite collection of spins on the right-hand side of (\ref{eq:ApproxCrossing}) can give rise to the left-hand side. This is true even including an infinite sum over large $\tau$. 
To understand how these terms are reproduced, we must study the large $\ell$ region of the sum on the right-hand side of equation (\ref{eq:ApproxCrossing}). For this purpose we need a formula for the conformal blocks, $g_{\tau, \ell}(v,u)$, at $|u| \ll 1$ and large $\ell$. We show in Appendix \ref{app:LargeLBlocks} that the blocks can be approximated in this limit by \begin{eqnarray} \label{eq:fBlock} g_{\tau, \ell}(v, u) &\approx & k_{2\ell}(1-z) v^{\tau/2}F^{(d)}(\tau,v) \qquad(|u|\ll 1\textrm{ and }\ell \gg 1), \\ k_\beta(x) &\equiv& x^{\beta/2} {}_2F_1(\beta/2,\beta/2,\beta,x), \end{eqnarray} where $z$ is defined by $u=z\bar z$, $v=(1-z)(1-\bar z)$, and the function $F^{(d)}(\tau,v)$ is positive and analytic near $v=0$.\footnote{For example, $F^{(2)}(\tau,v)$ is given by Eq.~(\ref{eq:2dFfunction}). In Appendix \ref{app:LargeLBlocks}, we give recursion relations which allow one to generate $F^{(d)}$ in any even $d$. In odd $d$, one must resort to solving a differential equation.} The exact expression for $F^{(d)}$ will not be important for our discussion. Note that $z\to 0$ at fixed $\bar z$ is equivalent to $u\to 0$ at fixed $v$. A key feature of Eq.~(\ref{eq:fBlock}) is that the $\ell,z$ dependence of $g_{\tau,\ell}$ factorizes from the $\tau,v$ dependence in the limit $z\to 0,$ $\ell\to\oo$. Thus, we expect operators with large spin to be crucial for reproducing the correct $z$-dependence on the left-hand side of Eq.~(\ref{eq:ApproxCrossing}). However, a particular pattern of twists should also be necessary for reproducing the correct $v$-dependence in Eq.~(\ref{eq:ApproxCrossing}). What is this pattern? To study this question, it is useful to introduce a conformal block ``density" in twist space, \begin{eqnarray} D_{u,v}(\s) \equiv \p{\frac{u}{v}}^{\Delta_\phi}\sum_{\tau,\ell}P_{\tau,\ell}\de(\tau-\s)g_{\tau,\ell}(v,u). \end{eqnarray} By integrating $D_{u,v}(\s)$ against various functions $f(\s)$, we can study the contributions to the crossing equation from operators with different twists. One should think of it as a tool for studying the spectrum of operators in twist-space. Since the conformal blocks are positive in the region $0<z,\bar z<1$, and the coefficients $P_{\tau,\ell}$ are positive, $D_{u,v}$ is positive as well. The full conformal block expansion comes from integrating $D_{u,v}(\s)$ against the constant function $1$. Thus, the crossing Eq.~(\ref{eq:ApproxCrossing}) in the small $u$ limit reads\footnote{We justify switching the limit and integration in Appendix~\ref{app:Rigorous}. Roughly, it follows from the fact that the integral of $D_{u,v}(\s)$ over regions with large $\s$ falls exponentially with $\s$.} \begin{eqnarray} \label{eq:crossingEqforrho} 1 &=& \lim_{u\to 0} \int_{d-2}^\oo d\s D_{u,v}(\s) \ \ =\ \ \int_{d-2}^\oo d\s \lim_{u\to 0} D_{u,v}(\s). \end{eqnarray} As we discussed above, the $u\to 0$ limit on the RHS is dominated by the sum over large $\ell$, so we are free to substitute the asymptotic form of the blocks Eq.~(\ref{eq:fBlock}) into the definition of $D_{u,v}$ and maintain the same $u\to 0$ (equivalently $z\to 0$) limit, \begin{eqnarray} \lim_{u\to 0} D_{u,v}(\s) &=& \p{\lim_{z\to 0} z^{\Delta_{\phi}}\sum_{\tau,\ell}P_{\tau,\ell} k_{2\ell}(1-z) \de(\tau-\s)} v^{\frac{\s}{2}-\Delta_{\phi}}(1-v)^{\De_\phi}F^{(d)}(\s,v), \end{eqnarray} where we have used $\bar z = 1-v+O(z)$. 
Note that after this substitution, the $\tau$ and $v$-dependence factors out into an overall function $v^{\frac{\s}{2}-\Delta_{\phi}}(1-v)^{\De_\phi}F^{(d)}(\s,v)$, while the $u$ and $\ell$ dependence is encapsulated in a particular weighted sum of OPE coefficients, \begin{eqnarray} \label{eq:definitionofrhobody} \rho(\s) &\equiv& \lim_{z \to 0}z^{\Delta_{\phi}}\sum_{\tau,\ell}P_{\tau,\ell}k_{2\ell}(1-z) \de(\tau-\s) . \end{eqnarray} By crossing symmetry Eq.~(\ref{eq:crossingEqforrho}), the density $\rho(\s)$ satisfies \begin{eqnarray} \label{eq:integralcrossingrelation} 1&=&(1-v)^{\Delta_{\phi} }\int_{d-2}^\oo d\s \rho(\s) v^{\frac{\s}{2}-\Delta_{\phi}}F^{(d)}(\s,v) . \end{eqnarray} We claim that the only way to solve this relation with positive $\rho(\s)$ is if $\rho$ is given by its value in mean field theory, namely a sum of delta functions at even integer-spaced twist: \begin{eqnarray} \label{eq:MFTclaim} \rho(\s) &=& \sum_{n=0,1,\dots} P^\mathrm{MFT}_{2\Delta_{\phi}+2n}\de(\s-(2\Delta_{\phi}+2n)), \end{eqnarray} where the mean field theory coefficients were given as a function of $n$ and $\ell$ in equation (\ref{eq:MFTCoeffs}). Expanding these coefficients at large $\ell$ and performing the sum in Eq.~(\ref{eq:definitionofrhobody}) gives\footnote{To perform this computation explicitly, we use the fact that the sum is dominated by the region of fixed $z\ell^2$, so that one may use the approximation of Eq.~(\ref{eq:FixedUApprox}).} \begin{eqnarray} \label{eq:MFTAccumulated} P^{MFT}_{2\Delta_\phi + 2n} = \frac{1}{2^{2\Delta_{\phi}+2n}} \frac{ (\Delta_\phi- \frac{d}{2}+1)_n^2 }{ n! (2 \Delta_\phi + n - d + 1)_n }. \end{eqnarray} Let us give a brief argument for why this is the case. Note that the function $F^{(d)}(\s,v)$ is analytic and positive near $v=0$, so the small $v$ behavior of the above integral comes from the term $v^{\frac{\s}{2}-\Delta_{\phi}}$. Since the left-hand side is independent of $v$, and the density $\rho(\s)$ is nonnegative, we see that $\rho(\s)$ must be zero for $\s<2\Delta_{\phi}$ and also have a contribution proportional to $\de(\s-2\Delta_{\phi})$. The necessity of the other terms $\de(\s-(2\Delta_{\phi}+2n))$ now follows from the fact that the LHS is independent of $v$, while the first term $\de(\s-2\Delta_{\phi})$ contributes a power series in $v$ about $v=0$, namely $F^{(d)}(2\Delta_{\phi},v)$. Terms with $n> 0$ are needed to cancel successive powers of $v$ from this first term. To see the necessity of this result most clearly, it is helpful to subtract the contribution of $\de(\s-2\Delta_{\phi})$ from both sides of Eq.~(\ref{eq:integralcrossingrelation}): \begin{eqnarray} O(v) &=& (1-v)^{\Delta_{\phi} }\int_{d-2}^\oo d\s \rho'(\s) v^{\frac{\s}{2}-\Delta_{\phi}}F^{(d)}(\s,v) \end{eqnarray} where $\rho'(\s)=\rho(\s)-P^\mathrm{MFT}_{2\Delta_{\phi}}\de(\s-2\Delta_{\phi})$. We are left with an $O(v)$ term on the LHS which must now be matched by a $\de(\s-(2\Delta_{\phi}+2))$ term in $\rho'$. Repeating this algorithm iteratively, one can fix $\rho(\s)$ to be given by its value in MFT.\footnote{We also note that one might reproduce this conclusion directly by doing a projection of the LHS and RHS onto specific high-spin conformal blocks using the method of ``conglomerating'' \cite{Unitarity}.} The result Eq.~(\ref{eq:MFTclaim}) has several consequences. Firstly, it implies the existence of a tower of operators with increasing spin whose twists approach $\tau=2\Delta_{\phi}+2n$, for each $n\geq 0$. 
To see this, let us integrate Eq.~(\ref{eq:MFTclaim}) over a bump function $h_\epsilon(\s)$ with some width $\epsilon$ around $\s=2\Delta_{\phi}+2n$. Using the definition in Eq.~(\ref{eq:definitionofrhobody}), we obtain\footnote{We justify interchanging the limit and integration in Appendix \ref{app:Rigorous}.} \begin{eqnarray} \lim_{z\to 0} z^{\Delta_{\phi}}\sum_{\tau,\ell}P_{\tau,\ell}h_\epsilon(\tau) k_{2\ell}(1-z) &=& P_{2\Delta_{\phi}+2n}^\mathrm{MFT}. \end{eqnarray} The limit vanishes termwise on the LHS, so a finite result can only come from the sum over an infinite number of terms. Thus, for any $\epsilon$, there are an infinite number of operators with twist $\tau=2\Delta_{\phi}+2n+O(\epsilon)$. We can also be more precise about the contribution of these operators to the conformal block expansion at large $\ell$. The sum above can be written as an integral over the OPE coefficient density in $\ell$, for operators with twist near $2\Delta_{\phi}+2n$ \begin{eqnarray} \sum_{\tau\sim 2\Delta_{\phi}+2n,\ell} P_{\tau,\ell} k_{2\ell}(1-z) &=& \int_0^\oo d\ell f_n(\ell) k_{2\ell}(1-z), \label{eq:Fdef} \end{eqnarray} where we define an OPE coefficient density \begin{eqnarray} \label{eq:fLDefinition} f_n(\ell) \equiv \sum_{\tau\sim 2\Delta_{\phi}+2n,\ell'} P_{\tau,\ell'} \de(\ell-\ell'). \end{eqnarray} For simplicity, we no longer show the bump function $h_\epsilon(\tau)$ explicitly, but indicate its presence by writing $\tau\sim 2\Delta_{\phi}+2n$. One intuitively expects that the OPE coefficient density $f_n(\ell)$ must be constrained at large $\ell$ in order to reproduce the identity operator in the crossed channel. In mean field theory $f_n(\ell)$ has a power-law behavior, and so we expect that, in an averaged sense, $f_n(\ell)$ must be similar in any CFT. This motivates introducing an integrated density \begin{eqnarray} F_n(L) \equiv \int_0^L d \ell \frac{\Gamma(2 \ell)}{\Gamma(\ell)^2} f_n(\ell) . \end{eqnarray} In Appendix \ref{app:BoundsF} we prove both upper and lower bounds \begin{eqnarray} A_U L^{ 2 \Delta_{\phi} } \ > \ F_n(L) \ > \ A_L \frac{ L^{2 \Delta_{\phi} } }{\log(L)} \end{eqnarray} for some coefficients $A_U$ and $A_L$ in the limit of very large $L$. We expect that the lower bound can be improved to eliminate the logarithm and make a prediction $F_n(L) =A_n L^{2\Delta_\phi}$.\footnote{ In the case where the functions $k_\beta(1-z)$ are replaced by their exponential approximation Eq.~(\ref{eq:Exponentialdecay}), the Hardy-Littlewood Tauberian theorem says that the upper and lower bound at large $L$ are the same, and it fixes their coefficient. It seems likely to us that an analogous theorem could be proven for the case at hand. } We calculate in Appendix \ref{app:BoundsF} that such a prediction would necessarily fix $A_n$ to be \begin{eqnarray} \lim_{L \rightarrow \infty} L^{-2\Delta_\phi} F_n(L) = \frac{P^{MFT}_{2\Delta_\phi + 2n} }{\Delta_\phi \Gamma(\Delta_\phi)^2 }. \end{eqnarray} In summary, we have shown that in any CFT we must have operators accumulating at twists $2 \Delta_{\phi} + 2n$ at large $\ell$. In simple cases where these accumulation points are populated by a single operator at each $\ell$, as in all perturbative theories, we can obtain specific relations for sums over the anomalous dimensions $\gamma(n, \ell)$ and the conformal block coefficients $\delta P_{2 \Delta_{\phi}+2n, \ell}$. We will explore these relations in the next subsection. 
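As a sanity check of this expectation (a numerical illustration we add here; the value $\Delta_{\phi} = 1.3$ is an arbitrary choice, and the coefficients follow from specializing Eq.~(\ref{eq:MFTCoeffs}) to $n=0$), one can verify directly that the integrated density built from the exact mean field theory coefficients approaches the limit quoted above.
\begin{verbatim}
# Numerical check (assumes NumPy and SciPy) that F_0(L) / L^(2 Delta_phi)
# approaches P^MFT_{2 Delta_phi} / (Delta_phi Gamma(Delta_phi)^2) at large L,
# using the exact n = 0 mean field theory coefficients
#   P_{2 Delta_phi, l} = 2 (Delta_phi)_l^2 / ( l! (2 Delta_phi + l - 1)_l )
# for even l (this follows from the general formula quoted earlier).
import numpy as np
from scipy.special import gammaln, gamma

d_phi = 1.3                                          # illustrative choice

def log_P_n0(l):
    return (np.log(2.0)
            + 2*(gammaln(d_phi + l) - gammaln(d_phi))
            - gammaln(l + 1)
            - (gammaln(2*d_phi + 2*l - 1) - gammaln(2*d_phi + l - 1)))

def F0(L):
    ls = np.arange(2.0, L + 1, 2.0)                  # even spins up to L
    logw = gammaln(2*ls) - 2*gammaln(ls)             # Gamma(2l) / Gamma(l)^2
    return np.sum(np.exp(logw + log_P_n0(ls)))

predicted = 2.0**(-2*d_phi) / (d_phi * gamma(d_phi)**2)
for L in [100, 1000, 10000]:
    print(L, F0(L) / L**(2*d_phi), predicted)        # ratio -> predicted
\end{verbatim}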
In Appendix \ref{app:DistinctOperators} we briefly explain how these results generalize when we have distinct operators $\phi_1$ and $\phi_2$, so that there must exist operators accumulating at twist $\Delta_1 + \Delta_2 + 2n$ as $\ell \to \infty$. \subsubsection{Relation to Numerical Results and the 3d Ising Model} In \cite{ElShowk:2012ht}, the authors used numerical bootstrap methods to constrain the dimensions of operators appearing in the OPE of a scalar $\phi$ with itself in 3d CFTs. They found numerical evidence that the minimum twist $\tau_\ell$ at each spin $\ell$ in the $\phi\x\phi$ OPE satisfies \begin{eqnarray} \label{eq:lowesttwistbound} \tau_{\ell} &\leq& 2\Delta_\phi. \end{eqnarray} (More precisely, they compute a series of numerical bounds on $\tau_{\ell}$, which appear to approach the presumably optimal bound Eq.~(\ref{eq:lowesttwistbound}), at least for $\ell=2,4,6$.) Here, we note that Eq.~(\ref{eq:lowesttwistbound}) follows from our results, together with Nachtmann's theorem \cite{Nachtmann:1973mr}. We have seen, among other things, that there exist operators in the $\phi\x\phi$ OPE with twist arbitrarily close to $2\Delta_\phi$ and arbitrarily high spin $\ell$. Nachtmann's theorem states that $\tau_{\ell}$ is an increasing function of $\ell$ for $\ell>0$, which implies Eq.~(\ref{eq:lowesttwistbound}) for all $\ell>0$. The 3d Ising Model is a particularly interesting example in light of this result. The theory contains a scalar $\s$ with dimension $\Delta_\s\approx 0.518$. Thus, the minimum twist operators at each $\ell$ have twist less than $\approx 1.036$, which is very close to the unitarity bound. One might interpret these operators for $\ell\geq 4$ as approximately conserved higher-spin currents. It would be interesting to understand what the existence of these approximate currents implies for the structure of the theory. \subsection{Properties of Isolated Towers of Operators} \label{sec:IsolatedTowers} So far, we have made no assumptions about the precise spectrum of operators that accumulate at the special values $\tau\sim 2\Delta_{\phi}+2n$. However, it is interesting to consider the case where an accumulation point $2\Delta_{\phi}+2n$ is approached by a single tower of operators $\mathcal{O}_{2 \Delta_\phi + 2n,\ell}$ with $\ell=0,2,\dots$, which are separated by a twist gap from other operators in the spectrum at sufficiently large spin. This occurs in virtually every example we are aware of, including all theories with a perturbative expansion parameter such as $1/N$, and we will also discuss some non-perturbative examples at the end of this subsection. It would be very interesting to identify any CFTs without operators with twists near $2 \Delta_\phi + 2n$ for every sufficiently large value of $\ell$. With this additional assumption, we will be able to characterize subleading corrections to the bootstrap equation, Eq.~(\ref{eq:bootstrap}), in the small $u$ limit. On the left-hand side, these corrections come from the operators ${\cal O}_m$ with minimal nonzero twist. Thus, we have the approximate relation \begin{eqnarray} \label{eq:BetterApproxCrossing} 1 + \sum_{\ell_m=0}^2 P_{m} u^{\frac{\tau_{m}}{2}} f_{\tau_{m},\ell_{m}}(0,v) \approx \sum_{\tau,\ell} P_{\tau,\ell} \ \! v^{\frac{\tau}{2}- \Delta_{\phi}} u^{\Delta_{\phi}} f_{\tau,\ell}(v,u), \end{eqnarray} which is valid up to subleading corrections in $u$ in the limit $ u \to 0$. 
We have assumed that $\ell_m \leq 2$ because higher spin operators either have twist greater than that of the energy-momentum tensor or, as argued in \cite{Maldacena:2011jn, Maldacena:2012sf}, they are part of an infinite number of higher-spin currents that couple as if they were formed from free fields.\footnote{Strictly speaking, the arguments in \cite{Maldacena:2011jn, Maldacena:2012sf} assumed $d=3$ and studied correlators of currents, but it is likely that they can be extended to $d\ge 3$. In any case, one can view $\ell_m \leq 2$ as an assumption. } Dolan and Osborn \cite{Dolan:2011dv} have given a formula in general $d$ appropriate for the conformal blocks corresponding to ${\cal O}_m$ exchange on the left-hand side, where we have expanded at small $u$. This is \begin{eqnarray} \label{eq:TauMinBlocks} f_{\tau_{m},\ell_{m}}(0,v) = (1-v)^{\ell_m} {}_2F_1\left( \frac{\tau_m}{2} +\ell_m, \frac{\tau_m}{2} + \ell_m, \tau_m+2 \ell_m, 1-v\right). \end{eqnarray} It will be important that this hypergeometric function can be expanded in a power series at small $v$ with terms of the form $v^k (a_k + b_k \log v)$. The logarithms will be related to the `anomalous dimensions' that emerge at large $\ell$. Now we expect there to be a finite separation between the lowest twist $\tau_m$ and the other twists in the theory. To prove this, we consider two cases separately, that of $\tau_m < d-2$ and that of $\tau_m \ge d-2$. In the former case, ${\cal O}_m$ must be a scalar operator due to the unitarity bound (\ref{eq:unitaritybound}). We will assume that there are a finite number of scalar operators with dimension below any given value,\footnote{This assumption would follow, for instance, from the assumption that the CFT has a well-defined partition function at non-zero temperature, or that the four-point function of the energy-momentum tensor is finite.} which immediately implies that the twist of ${\cal O}_m$ must have a finite separation from the other twists in the theory. To have a non-vanishing 4-pt correlator $\phi$ must be uncharged, so in the absence of lower twist scalars we have $\tau_m=d-2$, because the energy-momentum tensor will appear in the $\phi(x) \phi(0)$ OPE. We can then apply the Nachtmann theorem, which says that minimal twists must be non-decreasing functions of $\ell$, to conclude that $\tau_m$ is separated from the other twists in the theory. Note that, crucially, unless $\phi$ is a free scalar field we have $\frac{\tau_m}{2} - \Delta_{\phi} < 0$, so the sub-leading powers of $u$ grow as $u \to 0$. Taken together, these comments imply that there is a limit $u \to 0$ where the exchange of the identity operator plus a finite number of ${\cal O}_m$ dominates the left-hand side of equation (\ref{eq:CrossingBootstrap}). Assuming the existence of an operator with twist approaching $2 \Delta_\phi + 2n$ for each $\ell$, we would like to constrain the deviation of their conformal block coefficients $\delta P_{2 \Delta_{\phi}+2n, \ell}$ from MFT and their anomalous dimensions $\g(n,\ell) \equiv \De_{\mathcal{O}_{n,\ell}}-2\Delta_{\phi}-2n-\ell$. This is possible because the ${\cal O}_m$ contribute a dominant sub-leading contribution at small $u$, with a known $v$-dependence that can be expanded in a power series with integer powers at small $v$. 
The fact that we have only integer powers $v^n$ and $v^n \log v$ multiplying $u^{\frac{\tau_m}{2}}$ on the left-hand side of equation (\ref{eq:BetterApproxCrossing}) means that the right-hand side can reproduce these terms only with the conformal blocks we just discovered above, namely those with twists approaching the accumulation points $\tau(n, \ell) = 2 \Delta_{\phi} + 2n + \gamma(n, \ell)$ so that $v^{\frac{\tau(n,\ell)}{2} - \Delta_{\phi}}$ is an integer power in the $\ell \to \infty$ limit. In fact, expanding Eq.~(\ref{eq:TauMinBlocks}) for the ${\cal O}_m$ conformal blocks at small $v$ gives \begin{equation} f_{\tau_m, \ell_m}(0,v) = \frac{\Gamma(\tau_m + 2 \ell_m)(1-v)^{\ell_m} }{\Gamma^2 \left(\frac{\tau_m}{2} + \ell_m \right)} \sum_{n=0}^\infty \left( \frac{\left(\frac{\tau_m}{2} + \ell_m \right)_n}{n!} \right)^2 v^n \left[ 2 \left(\psi(n+1) - \psi \left( \frac{\tau_m}{2} + \ell_m + n \right) \right)- \log v \right] , \label{eq:ExplicitTauMinBlock} \end{equation} where $\psi(x)=\Gamma'(x)/\Gamma(x)$ is the Digamma function, and $(a)_b = \Gamma(a+b)/\Gamma(a)$ is the Pochhammer symbol. The details of this formula are not especially important, except insofar as it makes explicit the connection between the coefficients of $v^n$ and $v^n \log v$ in the series expansion. We will now see that the $v^n$ terms must come from $\delta P_{2 \Delta_{\phi}+2n, \ell}$ while the $v^n \log v$ terms are a consequence of $\gamma(n, \ell)$. This means that the `anomalous dimension' and the correction to the conformal block coefficients must be related at large $\ell$. These quantities were seen to be related \cite{JP} to all orders in perturbation theory \cite{Unitarity} in the presence of a $1/N$ expansion, so our result extends this relation to a non-perturbative context. The $v^n \log v$ terms in equation (\ref{eq:ExplicitTauMinBlock}) can only be reproduced by expanding $v^{\frac{\tau}{2} - \Delta_{\phi}}$ in $\gamma(n, \ell)$ in the large $\ell$ conformal blocks. For simplicity let us consider the situation where there is only one operator accumulating near $2 \Delta_{\phi} + 2n$ for each $\ell$, and that the conformal block coefficients approach $P^{MFT}_{2 \Delta_{\phi}+2n, \ell}$. In this case we can write the RHS of the crossing relation as \begin{eqnarray} \sum_{\ell} P^{MFT}_{2 \Delta_{\phi}+2n, \ell} \left[ \frac{\gamma(n, \ell)}{2} \log v \right] \frac{\ell^{\frac12} 4^{\ell}}{\sqrt{\pi}} z^{\Delta_{\phi}} K_0(2\ell \sqrt z) v^{n} (1-v)^{\Delta_{\phi}} F^{(d)}(2 \Delta_{\phi} + 2n, v) , \end{eqnarray} where we have used a Bessel function approximation to $k_{2\ell}(1-z)$ discussed in Appendix \ref{app:Bessel}.\footnote{Strictly speaking, the Bessel function approximation breaks down for $\ell \gg 1/\sqrt{z}$, so we are here implicitly using the fact that the sum including the MFT coefficients is dominated by the region of fixed $\ell^2 z$, where the approximation is valid. If the reader is concerned about this, one can instead write these sums using the hypergeometric function $k_{2\ell}(1-z)$. 
However, we find the Bessel function formulae to be useful for explicitly doing computations using the integral approximations to these sums.} In order for the sum to produce an overall factor of $u^{\frac{\tau_m}{2}} \sim z^{\frac{\tau_m}{2}}$, we must have power law behavior in $\gamma(n,\ell)$ at very large $\ell$, \begin{eqnarray} \gamma(n, \ell) = \frac{\gamma_n}{\ell^{\tau_m}}, \end{eqnarray} with a coefficient $\gamma_n$ related to the OPE coefficient $P_m$ of the leading twist operator ${\cal O}_m$ in equation (\ref{eq:BetterApproxCrossing}). In large $N$ theories, the coefficients $\gamma_n$ are suppressed by powers of $1/N$ as discussed in \cite{Alday:2007mf}. However, we stress that all we need in order to expand $v^{\frac{\gamma(n,\ell)}{2}}$ in $\log v$ in the large $\ell$ sum is the property that it is power law suppressed as $\ell \rightarrow \infty$, which is true even if the coefficients $\gamma_n$ are $O(1)$ or larger. The integer powers of $v$ in equation (\ref{eq:ExplicitTauMinBlock}) must then be reproduced by \begin{eqnarray} \sum_{\ell} P^{MFT}_{2 \Delta_{\phi}+2n, \ell} \left[ \delta P_{2 \Delta_{\phi}+2n, \ell} + \frac12 \gamma(n,\ell) \frac{d}{dn} \right] v^n \frac{\ell^{\frac12} 4^{\ell}}{\sqrt{\pi}} z^{\Delta_{\phi}} K_0(2\ell\sqrt z) (1-v)^{\Delta_{\phi}} F^{(d)}(2 \Delta_{\phi} + 2n, v) , \end{eqnarray} where the $\gamma(n,\ell) \frac{d}{dn}$ piece comes from expanding the $\tau$ dependence of the conformal block in small $\gamma(n,\ell)$. Again requiring that we correctly produce the overall $u^{\frac{\tau_m}{2}} \sim z^{\frac{\tau_m}{2}}$ behavior, we must have \begin{eqnarray} \delta P_{2 \Delta_{\phi}+2n, \ell} = \frac{c_n}{\ell^{\tau_m}} \end{eqnarray} to leading order in $1/\ell$ in the large $\ell$ limit, with a coefficient $c_n$ related to $\gamma_n$. As an example, it is particularly simple to do this matching explicitly for the leading twist tower with $n=0$. In this case, matching the $\log v$ and $v^0$ terms gives the relations \begin{eqnarray}\label{eq:n0matching} \gamma_0 = -P_{m} \frac{2\Gamma(\Delta_{\phi})^2 \Gamma(\tau_m+2 \ell_m)}{ \Gamma(\Delta_{\phi}-\frac{\tau_m}{2})^2\Gamma(\frac{\tau_m}{2}+\ell_m)^2} \, , \,\,\, c_0 = \left[\psi \left(\frac{\tau_m}{2}+\ell_m \right) + \gamma +\log 2 \right] \gamma_{0}, \end{eqnarray} where $\gamma$ is the Euler-Mascheroni constant. It is important to note the relative sign between $\gamma_0$ and $P_m$ (which is strictly positive), since this is required in order to satisfy the Nachtmann theorem asymptotically at large $\ell$. It is then straightforward to continue this matching to higher orders in $v$. \subsubsection{Implications for SCFTs in 4d} In \cite{Poland:2010wg, Poland:2011ey} the crossing relation was examined for a chiral primary operator $\Phi$ of dimension $\Delta_\phi$ in an $\mathcal{N}=1$ SCFT, and it was observed that in the OPE of $\Phi(x) \Phi(0)$ there exists an infinite tower of operators with spin $\ell$ and dimension exactly $2 \Delta_\Phi + \ell$. Supersymmetry and unitarity protect the dimensions of these operators, and further require a gap in twist before non-protected operators can appear. Thus, these operators form an isolated tower with vanishing anomalous dimensions. 
This immediately implies that the correlator in the $\Phi^\dag \Phi \to \Phi \Phi^{\dag}$ channel must satisfy \begin{eqnarray} \langle \Phi^\dag(x_1) \Phi(x_2) \Phi(x_3) \Phi^{\dag}(x_4) \rangle = \frac{1}{(x_{12}^2 x_{34}^2)^{\Delta_{\phi}}} \left( 1+ u^{1} \left(0 \log v + c + \ldots \right) + \ldots \right) \end{eqnarray} where the $\ldots$ denote higher order terms in $u$ and $v$, and the power $u^{1}$ comes from the $U(1)_R$ current multiplet (containing the stress tensor), which has $\frac{\tau}{2} = 1$. The point is that the term $u^{1} \log v$ must be absent, because it could only arise from the anomalous dimensions of operators with twist $2\Delta_\phi + \gamma(\ell)$ in the $\Phi \Phi \to \Phi^\dag \Phi^\dag$ channel, but we know that due to supersymmetry, $\gamma(\ell) = 0$ exactly. The explicit results of \cite{Poland:2010wg} show that the superconformal block relevant for the $\Phi^\dag \Phi \to \Phi \Phi^{\dag}$ channel is\footnote{Our normalization for the blocks removes a factor of $(-\frac{1}2)^{\ell}$ compared to that used in \cite{Poland:2010wg}.} \begin{eqnarray} \mathcal{G}_{\tau=2, \ell_m}(u, v) = (-1)^{\ell_m}\left[g_{2, \ell_m}(u,v) - \left( \frac{\ell_m + 1}{4 \ell_m + 6} \right) g_{2, \ell_m+1}(u,v) \right] \end{eqnarray} in terms of the usual 4d blocks given in equation (\ref{eq:Standard4dBlocks}). Taking $\ell_m = 1$ for the $U(1)_R$ current, one can easily verify that the $u \log v$ term cancels in this linear combination of conformal blocks in the limit of small $u$ and $v$.\footnote{Note that if instead one considers the s-channel expansion of the correlator $\langle \Phi^\dag \Phi \Phi^{\dag} \Phi \rangle$ then there is no longer a relative sign between the even and odd spins in the superconformal block \cite{Poland:2010wg}, so a $u^{1}\log v$ term is present. However, the conformal block expansion in the s-channel cannot be immediately compared to the $\Phi \Phi \rightarrow \Phi^\dagger \Phi^\dagger$ channel because passing between these two different OPE limits requires changing the radial ordering of operators, which introduces phases $\sim (-1)^\ell$ from crossing branch cuts. } In the case where there are non-$R$ currents in the $\phi \times \phi^\dagger$ OPE, these currents would also appear in multiplets that contain scalar components with $\frac{\tau}{2} = 1$. Consequently, the cancellation also has to occur for $\ell_m=0$, as one can easily verify in the blocks themselves.\footnote{More generally, there exist theories with an infinite number of higher-spin currents, and the anomalous dimensions of the $n=0$ operators should be protected in such cases as well. Additionally, while twists greater than 2 would not be the minimal twist and therefore not obviously constrained by our results, the presence of $u^{\frac{\tau}{2}} v^0 \log v$ terms would at the very least impose non-trivial constraints that would have to be satisfied to be consistent with vanishing anomalous dimensions in the cross-channel. In any case, any fears of a possible contradiction are readily allayed: it is easy to verify that in fact the $u^{\frac{\tau_m}{2}} v^0\log v$ terms in the ${\cal G}_{\tau_m, \ell_m}$ super-conformal blocks cancel for any $\ell_m$ and any $\tau_m$. } This provides a non-trivial consistency check of our results and those of \cite{Poland:2010wg}. We can proceed to consider the OPE coefficients of the twist $2 \Delta_\phi$ tower, which were bounded as a function of $\Delta_\phi$ in \cite{Poland:2011ey}. 
Our results predict that these should approach the mean field theory conformal block coefficients at a rate $\ell^{-2}$, and this rate of convergence could easily be matched to bounds from the numerical bootstrap in the future. \subsection{Bounding Contributions from Operators with General Twists} \label{sec:Bound} Finally, let us show that as $\ell \to \infty$, the contribution from accumulation points $\tau_a$ other than $2 \Delta_{\phi} + 2n$ is strictly bounded. An analogous generalization for distinct operators follows from observations in Appendix \ref{app:DistinctOperators}. The idea of the argument is very simple -- specific power-law behaviors in $\ell$ for the conformal block coefficients $P_{\tau, \ell}$ result in related power-law contributions at small $u$. Since we explicitly know the leading and sub-leading behavior as $u \to 0$, we can obtain a bound on the conformal block coefficients using the crossing symmetry relation, Eq.~(\ref{eq:BetterApproxCrossing}). The remainder of this subsection formalizes these claims. Consider all terms on the right-hand side of (\ref{eq:BetterApproxCrossing}) at large $\ell$ with $|\tau - (2 \Delta_{\phi} + 2n)| > \epsilon$ for some $\epsilon > 0$ small but fixed. This bound separates out the contributions we studied in the previous subsections. Furthermore, let us consider only operators with twists $\tau < \tau_*$ for some arbitrary choice of $\tau_*$. The reason for imposing this bound on $\tau$ is that we wish to constrain the CFT spectrum and the conformal block coefficients at large $\ell$, and by this we mean large $\ell$ with fixed $\tau$. In the analogy with scattering, we are studying the scattering amplitude at large impact parameter and fixed center of mass energy. Let us define a quantity that is the partial sum of the right-hand side of (\ref{eq:BetterApproxCrossing}) keeping only operators with $\tau<\tau_*, \ell > \ell_*\gg 1/\sqrt{z}$, and $|\tau - (2\Delta_{\phi}+2n)| > \epsilon$: \begin{eqnarray} \textrm{RHS}(\tau_*, \ell_*) &\equiv& \sum_{\substack{\tau< \tau_*,\ell> \ell_* \\ \tau \ne 2\Delta_{\phi} + 2n \pm O(\epsilon)}} P_{\tau,\ell} \ \! v^{\frac{\tau}{2}- \Delta_{\phi}} u^{\Delta_{\phi}} f_{\tau,\ell}(v,u). \end{eqnarray} Then we can approximate \begin{eqnarray} \textrm{RHS}(\tau_*, \ell_*) \stackrel{\ell_* \gg 1/\sqrt{z}}{\approx} z^{\Delta_{\phi}} \sum_{\tau < \tau_*, \ell>\ell_*} P_{\tau, \ell} k_{2\ell}(1-z) \left[ v^{\frac{\tau}{2} - \Delta_{\phi}} (1-v)^{\Delta_{\phi}} F^{(d)}(\tau, v) \right] . \end{eqnarray} The idea will be to combine together all the various values of $\tau$ for each $\ell$. Since the conformal block coefficients satisfy $P_{\tau, \ell} > 0$ by unitarity, a weighted sum of them will also be positive. Furthermore, if we can bound their weighted sum then we can bound each individual term. For all physical $\tau \leq \tau_*$ the function $v^{\frac{\tau}{2}-\Delta_{\phi}} (1-v)^{\Delta_{\phi}} F^{(d)}(\tau, v)$ will be bounded from above by some $B(\tau_*, d, v)$, so we can write an inequality \begin{eqnarray} \textrm{RHS}(\tau_*, \ell_*)< B(\tau_*, d, v) z^{\Delta_{\phi}} \sum_{\tau < \tau_*, \ell>\ell_*} P_{\tau, \ell} k_{2\ell}(1-z) . \end{eqnarray} For each value of $\ell$, there can be only a finite number of operators with $\tau < \tau_*$. This means that we can define a new quantity that includes the contributions of all these operators at fixed $\ell$: \begin{eqnarray} Q_{\tau_*, \ell} \equiv \sum_{\tau < \tau_*} P_{\tau, \ell} . 
\end{eqnarray} Now we have the bound \begin{eqnarray} \textrm{RHS}(\tau_*, \ell_*) < B(\tau_*, d, v) z^{\Delta_{\phi}} \sum_{\ell>\ell_*} Q_{\tau_*, \ell} k_{2\ell}(1-z). \end{eqnarray} Again, the lower-bound $\ell_*$ can be taken arbitrarily large since only the infinite sum over $\ell$ produces additional $u^{-1}$ singularities; operators that do not belong to an infinite tower of spins are irrelevant. Now, for the purposes of this argument we can also approximate the $\ell > \ell_*$ sum by an integral. In order to avoid producing non-integer powers of $v$ on the LHS of (\ref{eq:BetterApproxCrossing}), we must then have that \begin{eqnarray} \lim_{z \to 0} \left[ z^{\Delta_{\phi} - \frac{\tau_{m}}{2} } \int^\infty_{\ell_*} d \ell \ \! Q_{\tau_*, \ell} k_{2\ell}(1-z) \right] = 0 . \end{eqnarray} If we use the $K_0$ approximation of Eq.~(\ref{eq:FixedUApprox}), then performing a change of variables to $y = \ell \sqrt{z}$ immediately shows that since $Q_{\tau_*, \ell} > 0$, we expect to have an asymptotic bound $Q_{\tau_*, \ell} < 4^{-\ell} \ell^{2 \Delta_{\phi} -\frac32 - \tau_m} $ in the large $\ell$ limit, at least in an averaged sense when we smear over a large number of $\ell$. More precisely, we can use the arguments in Appendix \ref{app:BoundsF} to show that \begin{eqnarray} \int^L_{\ell_*} d \ell \frac{\Gamma(2\ell)}{\Gamma(\ell)^2} Q_{\tau_*, \ell} < A \ \! L^{2 \Delta_\phi - \tau_m } \end{eqnarray} for some positive constant $A$ at very large $L$. This provides a general smeared bound for every sequence of $P_{\tau, \ell}$ as $\ell \to \infty$. Note, however, that our method cannot strictly exclude examples where large but extremely rare conformal block coefficients occasionally appear at large $\ell$. \subsection{Failure in Two Dimensions} \label{sec:2d} In the previous sections, we had to restrict to $d>2$ dimensions in order to have a gap between the twist of the identity operator and $\tau_m$. It is illuminating to see how the absence of such a gap in $d=2$ theories explicitly leads to violations of our conclusions in specific examples. We will focus here on the simplest of such examples, the $c=\frac12$ minimal (i.e. $d=2$ Ising) model (see e.g. \cite{Ginsparg} for a review). This theory contains three Virasoro primary operators, all scalars: $1,\sigma,$ and $\epsilon$, of dimensions $0,\frac{1}{8}$, and $1$ respectively, as well as all their Virasoro descendants. Consider the operator $\sigma$; the $\sigma \times \sigma$ OPE in this case can be summarized succinctly as \begin{eqnarray} [\sigma] [\sigma] &=& [1] + [\epsilon], \end{eqnarray} where $[{\cal O}]$ denotes the full Virasoro conformal block associated with an operator. Now, since $\epsilon$ and 1 both have integer dimensions, and the Virasoro operators just raise the dimension by integers, this means that every operator that appears in the $\sigma$ conformal block decomposition has integer twist, violating our conclusion in $d>2$ that there must be operators with twist $\tau = 2\Delta_\sigma +2n = \frac{1}{4} + 2n$. To see what has gone wrong, examine the bootstrap equation in this theory at $|u|\ll |v| \ll 1$: \begin{eqnarray} u^{-\frac{1}{8}} + \sum_{ \ell} P_{0, \ell} u^{-\frac{1}{8}} f_{0, \ell}(u,v) = \sum_{\tau, \ell} P_{\tau, \ell} v^{\frac{\tau}{2} - \frac{1}{8}} f_{\tau, \ell}(v, u) + \textrm{subleading in } 1/u . \end{eqnarray} In this case there is no gap between the twist of the identity operator and $\tau_m$. 
Furthermore, our assumption from the analysis of \cite{Maldacena:2011jn, Maldacena:2012sf} that there is no non-trivial infinite tower of conserved higher-spin currents with $\tau(\ell) = d-2$ for $\ell > 2$ is also violated. Far from having an isolated dominant contribution from the identity operator at small $u$ followed by a finite number of isolated contributions from twist $\tau_m$ operators (followed by everything else), we immediately have an infinite tower of contributions all at $\tau=0$. Now we see why there are no operators in this theory with twist $\tau=2\Delta_\sigma$: the existence of this low-twist tower means that the identity operator can be (and is) cancelled by contributions {\em on the same side of the crossing relation}. In fact, in this case, the $\tau=0$ tower contributes not only the same $u^{-\Delta_\sigma}$ singularity, but it also contributes a $v^{-\Delta_\sigma}$ coefficient, for a total of $(u v)^{-\Delta_\sigma}$. The resulting singularity in the cross-channel can be seen explicitly in the exact four-point function, which contains a leading singularity at small $u$ and $v$ of the form \begin{eqnarray} G_\sigma(z,\bar{z}) &\sim& \frac{1}{(u v)^{\Delta_\sigma}}, \end{eqnarray} as opposed to the usual $u^{-\Delta_\sigma}$. It is interesting to note that the constraints from the Virasoro algebra that make many $d=2$ CFTs solvable also directly cause them to differ quite drastically in their behavior at large spin from essentially all other CFTs. \section{AdS Interpretation} \label{sec:super} To the uninitiated, results concerning the CFT spectrum and conformal block coefficients may appear rather technical. However as recent work has shown \cite{Katz, ScatteringStates, Analyticity}, both anomalous dimensions and OPE or conformal block coefficients have a very simple interpretation as amplitudes for scattering processes in AdS space. This follows from the fact that in global AdS, time translations are generated by the dilatation operator $D$ of the dual CFT, so anomalous dimensions in the CFT represent energy shifts of bulk states due to interactions. By the Born approximation, these are related to scattering amplitudes in the perturbative regime \cite{Katz}. A thorough investigation of this connection in the context of gravitational scattering in AdS at large impact parameter was performed in \cite{ourEikonal, ourCPW}, and in Appendix \ref{app:Joao} we explicitly compare our results to theirs in the region of overlap. To understand the connection to AdS, consider any scalar primary operator $\phi$ with dimension $\Delta_{\phi}$, which creates a state $|\phi \rangle = \phi | 0 \rangle$ when acting on the vacuum of the CFT. If we were working at large $N$ and $\phi$ was single-trace, then we could interpret $|\phi \rangle$ as a single-particle state in AdS. Furthermore, we could interpret the operators ${\cal O}_{\tau, \ell}$ appearing in the OPE \begin{eqnarray} \phi(x) \phi(0) = \sum_{\tau, \ell} c_{\tau,\ell} f_{\tau,\ell}(x,\partial) {\cal O}_{\tau, \ell}(0) \end{eqnarray} of $\phi$ with itself as 2-particle states whose anomalous dimensions were due to bulk interactions. The operators ${\cal O}_{\tau, \ell}$ at large $\ell$ correspond to states with large angular momentum in AdS, so that the two particles are orbiting a common center with a large angular momentum. 
This obviously implies that at large $\ell$ the pair of particles will become well-separated, although due to the warped AdS geometry, their separation or impact parameter $b$ is \begin{eqnarray} b \approx R_{AdS} \log \left( \frac{\ell}{\Delta_{\phi}} \right) \end{eqnarray} at large $\ell$. So we need to study very large $\ell$ to create a large separation in AdS units. In the absence of large $N$, we certainly cannot interpret the state $| \phi \rangle$ as a bulk particle, but we can still view it as some de-localized blob in AdS. Without large $N$ we would also expect to lose the interpretation of operators in the $\phi(x) \phi(0)$ OPE as 2-particle states. The results of the previous sections show that on the contrary, at large $\ell$ there are always operators ${\cal O}_{\tau, \ell}$ in the OPE that we can interpret as creating `2-blob' states, where the blobs are orbiting each other at large separation in AdS. The fact that we must always have infinite towers of operators in the OPE with twist $\tau = 2 \Delta_{\phi} + 2n + \gamma(n, \ell)$ and $\gamma(n, \ell) \to 0$ as $\ell \to \infty$ shows that at large $\ell$, the interactions between these orbiting AdS blobs are shutting off. In particular, let us assume that there is exactly one operator at each $n$ and $\ell$ and that $\gamma(n, \ell) \to 0$ smoothly. In this case we obtain a specific power-law dependence on $\ell$ that can be written as \begin{eqnarray} \gamma(n, \ell) = \frac{\gamma_n}{\ell^{\tau_m}} \propto \gamma_n \exp \left[ - \tau_m \frac{b}{R_{AdS}} \right], \end{eqnarray} so the interactions between the blobs are shutting off exponentially at large, superhorizon distances in AdS. This is the sense in which our results prove superhorizon locality in the putative AdS dual of any $d > 2$ CFT. To emphasize the generality of this result, and the fact that $\phi$ really creates `blobs', note that we can even apply our results to the scalar primary operators $\phi$ that create large black holes in AdS theories dual to CFTs with large $N$ and large 't Hooft coupling. In that case, our results show that if the AdS black holes orbit each other with sufficiently large angular momentum, then their interactions become negligible. \section{Discussion} \label{sec:Discussion} The recent revival of the conformal bootstrap has led to a great deal of progress, but perhaps the best is yet to come. Thus far much of the work on the bootstrap has been numerical and has focused on questions of phenomenological interest, so further studies of superconformal theories \cite{Poland:2010wg}, AdS/CFT setups \cite{JP}, and even quantum gravity \cite{Analyticity} may yield important results. Our results in this paper followed from a seemingly elementary consideration of how singularities in one channel of the conformal block expansion can be reproduced in the crossed channel, yet they have powerful implications for general CFTs. We have shown that the OPE of a scalar operator $\phi$ with itself has a universal leading behavior in the limit of large $\ell$ with fixed twist. In particular, there always exist operators that we could call $[\phi \phi]_{n, \ell}$ at very large $\ell$ which have twist $2 \Delta_\phi + 2n + \gamma(n, \ell)$, with $\gamma(n, \ell) \to 0$ as $\ell \to \infty$. This is directly analogous to the structure of `double-trace' operators in large $N$ theories, but it holds in any CFT. 
Furthermore, we saw that with reasonable assumptions, we could make specific predictions for the fall-off of $\gamma(n, \ell)$ and of the related OPE coefficients. We proved that all other contributions to the OPE must be sub-dominant at large $\ell$. Our bootstrap methods apply in a simple way only to the OPE of scalar operators, but it seems very likely that equivalent results also hold for the OPE of higher-spin operators. Perhaps in the future the results of \cite{Costa:2011mg} could be used to prove these statements. Another interesting extension would involve studying further sub-leading corrections to the bootstrap as $u \to 0$; analyzing these corrections could lead to a more general proof of the Nachtmann theorem that does not rely on conformal symmetry breaking in the IR. CFTs with dual AdS descriptions that are local at distances much smaller than the bulk curvature scale must have special features \cite{JP, AdSfromCFT}. However, we have seen that in a certain technically precise sense, all $d > 2$ CFTs can be viewed as dual to AdS theories that are local at superhorizon distances. The question of superhorizon AdS locality has often been discussed in the context of the holographic RG \cite{Susskind:1998dq, deBoer:1999xf, Balasubramanian:1999jd, Li:2000ec, Papadimitriou:2004rz, Heemskerk:2010hk, Faulkner:2010jy, vanRees:2011fr}, although the general success of this interesting approach has not been manifest. It would be interesting if our results could be related to or shed light on the holographic RG. Our arguments fail for CFTs in two dimensions. In fact as we discussed in section \ref{sec:2d}, minimal models provide an immediate counter-example, as they have scalar operators of dimension $\Delta_{\phi}$ without corresponding operators of twist $\approx 2 \Delta_{\phi} + 2n$ at large $\ell$. The reason is that in two dimensions, there is no separation between the dimension of the identity operator and the twists of conserved currents and the energy-momentum tensor. One might try to interpret this in AdS as the statement that there is no clear separation between free propagation and interactions, perhaps due to the fact that gravitational interactions produce a deficit angle in three dimensions; it would be interesting to explore this issue further. One inspiration for our approach was the structure of conformal blocks in Mellin space \cite{Mack, MackSummary, Analyticity, Unitarity, JoaoRegge}, where the blocks imitate the momentum-space partial waves of scattering amplitudes more transparently. The leading behavior at small $u$ in position space translates into the presence of a leading pole in the Mellin amplitude which must be reproduced by an infinite sum over angular momenta in the crossed channel. Through further work it should be possible to use our results to shed light on the convergence properties of the CFT bootstrap in Mellin space. Our results seem to suggest that the sum of conformal blocks in Mellin space will only converge away from the region where the Mellin amplitude has poles. A more precise version of this observation could be useful for further work using the CFT bootstrap, both analytically and numerically. \section*{Acknowledgments} We are grateful to Sheer El-Showk, Ami Katz, Juan Maldacena, Miguel Paulos, Jo\~ao Penedones, Slava Rychkov, Alessandro Vichi, and Alexander Zhiboedov for discussions. 
We would also like to thank the participants of the ``Back to the Bootstrap II" workshop for discussions and the Perimeter Institute for hospitality during the early stages of this work. ALF and JK thank the GGI in Florence for hospitality while this work was completed; JK also thanks the University of Porto. This material is based upon work supported in part by the National Science Foundation Grant No. 1066293. ALF was partially supported by ERC grant BSMOXFORD no. 228169. JK acknowledges support from the US DOE under contract no. DE-AC02-76SF00515.
\section{Introduction} As is well known, Bell's inequality \cite{Bell} implies that the correlations between the outcomes of measurements performed on far away quantum systems in entangled states exhibit an irreducible and unavoidable nonlocal nature. Obviously, such nonlocal correlations might, in principle, give rise to a conflict with relativistic causality. Popescu and Rohrlich \cite{Popescu} have faced this crucial problem with admirable lucidity and have analyzed in detail the constraints that must be satisfied in order that no conflict with relativistic causality emerges. They have reached quite interesting general conclusions and have also introduced the so-called ``PR-box", a device with two inputs and two outputs which gives rise to correlations implying what has been called {\it superquantum nonlocality}. The reason for this qualification derives from the fact that the modulus of an appropriate combination of such correlation functions violates not only the classical bound of 2 but even the upper limit $2\sqrt{2}$ characteristic of quantum mechanics. To illustrate the arguments of these authors we follow their line of thought and we use their notation. Let $A,A',B$ and $B'$ be physical variables taking the values +1 and -1, with $A$ and $A'$ referring to measurements on one part of the system by a local observer, and $B$ and $B'$ referring to the other part. If we denote by $P_{AB}(a,b)$ the joint probability of obtaining $A=a$ and $B=b$ when both $A$ and $B$ are measured, the correlation $E(A,B)$ of the outcomes is defined as: \begin{equation} E(A,B)=P_{AB}(+1,+1)+P_{AB}(-1,-1) -P_{AB}(+1,-1)-P_{AB}(-1,+1). \end{equation} As is well known, Clauser, Horne, Shimony and Holt \cite{CHSH} have shown, completely in general, that an appropriate combination of such correlations (with the variables $A,A',B,B'$ arbitrarily chosen) satisfies, for all local theories, the inequality: \begin{equation} \vert E(A,B)+E(A,B')+E(A',B)-E(A',B')\vert\leq{2}. \end{equation} On the other hand, in the quantum case, when consideration is given to two far away spin $1/2$ particles in the singlet state and measurements of the spin components along appropriate directions are performed, the correlations $E_{Q}(A,B)\equiv \langle\Psi\vert A^{(1)}\otimes B^{(2)}\vert\Psi\rangle$ violate, for appropriate choices of $A,A',B,B'$, the above inequality. Quantum correlations do, however, obey the celebrated Tsirelson bound: \begin{equation} \vert E_{Q}(A,B)+E_{Q}(A,B')+E_{Q}(A',B)-E_{Q}(A',B')\vert\leq{2\sqrt{2}}, \end{equation} \noindent and, as is well known, the right-hand-side upper bound can actually be reached for appropriate choices of the observables appearing in it. The authors of ref.\cite{Popescu} have investigated whether the requirement that the hypothetical general nonlocal theory one is envisaging should respect relativistic causality might be responsible for the precise value of this quantum upper bound. The question is interesting since, at first sight, one might expect that the above combination of correlations could reach the value 4, which is attained when the first three terms take the value +1 and the last the value -1. The extremely interesting result of ref.\cite{Popescu} is the proof that, in principle, a nonlocal theory respecting relativistic causality and yielding a value greater than the quantum upper bound is possible (we will call any theory exhibiting such a feature a {\it superquantum nonlocal theory}).
Secondly, by resorting to the clever device of the PR-box, the authors have identified a specific family of correlations which, without conflicting with relativistic causality, actually reach the theoretical upper bound of 4. At this stage it is interesting to mention that other conceptual analyses \cite{Jarrett, Suppes, Shimony} concerning the locality issue have led to the conclusion that locality in Bell's sense amounts to the logical conjunction of two other requirements, which have been named {\it Locality} and {\it Completeness} by Jarrett \cite{Jarrett}, and {\it Parameter Independence (PI)} and {\it Outcome Independence (OI)} by Shimony \cite{Shimony}. The distinction involved is quite elementary. Let us call $P(A=a|x,y),\;P(B=b|x,y),\;P(A=a,B=b|x,y),$ and $P(A=a|x,y;B=b)$ etc., the single and joint, conditional and unconditional probabilities of the outcomes $(a,b)$ for the inputs (settings) $(x,y)$, and let us recall the relation for conditional probabilities: \begin{equation} P(A=a,B=b|x,y)=P(A=a|x,y;B=b)\cdot P(B=b|x,y). \end{equation} \noindent If one assumes {\it Completeness $\equiv$ Outcome Independence}: \begin{eqnarray} P(A=a|x,y;B=b)&=&P(A=a|x,y),\\ \nonumber P(B=b|x,y;A=a)&=&P(B=b|x,y), \end{eqnarray} \noindent and {\it Locality $\equiv$ Parameter Independence}: \begin{equation} P(A=a|x,y)=P(A=a|x),\;\;P(B=b|x,y)=P(B=b|y), \end{equation} \noindent one gets \begin{equation} P(A=a,B=b|x,y)=P(A=a|x)\cdot P(B=b|y), \end{equation} \noindent i.e. Bell's locality condition. On the other hand, it is trivial to go the other way around, showing that this last condition implies both {\it Locality} and {\it Completeness}. \vspace {0.3cm} \section {Analysis of the nonlocal features of the PR-box} The characterization of the PR-box is quite simple. It is represented by the following relation between the inputs $(x,y)$ and the outcomes $(a,b)$, each of which is assumed to take only the values $\{0,1\}$: \begin{equation} a+b=xy\;\; \mathrm{mod} \;2. \end{equation} We will analyze the PR-box under two possible formulations of its working, the first one (Case 1) being given simply by Eq.~(2.1), the second one (Case 2) being enriched by the introduction of a deterministic hidden variable description of the input-output relations. \vspace {0.3cm} \subsection {Case 1} When one takes into account only the relation (2.1), one can argue in the following way. \begin{itemize} \item Let us consider the system $A$ and suppose that its input $x$ is known. Then, if $x=0 \rightarrow xy=0 \rightarrow (a=0, b=0)\vee (a=1,b=1)$. So, the outcome $a$ that Alice (at $A$) will get once she knows her setting $x=0$ depends on the outcome, 0 or 1, that Bob (at $B$) has obtained: Outcome Independence is violated. On the contrary, if $x=1$ two cases are possible: either $y=0$ and we are back to the previous situation, or $y=1 \rightarrow (a=1, b=0)\vee (a=0,b=1)$. In this case, knowledge of both $b$ and $y$ is necessary to know $a$ uniquely. So, given the input at $A$, the corresponding output depends, in general, on both $y$ and $b$: the theory violates PI as well as OI. \item Completely analogous considerations hold for the setting and the outcome at $B$. \item Another way of looking at the problem is to consider the product $ab$ of the outcomes. In the case in which both settings at $A$ and at $B$ are given, there are two possibilities: if $xy=0$ we know for sure that $a=b$, but we do not know whether they take the value $0$ or the value $1$.
Alternatively, if $xy=1$ we know that one of $a$ and $b$ takes the value $0$ and the other the value $1$, but, once more, we do not know their actual values. Specification of both settings does not determine the outcomes: some further knowledge is necessary. \end{itemize} \subsection {Case 2} Suppose now we consider a hidden variable model characterized by a variable $\lambda$ which can also take the values $\{0,1\}$ and, to be completely general, let us assume that the probabilities of its two values are given by $P^{(\lambda)}(0)$ and $P^{(\lambda)}(1)$, with, obviously, $P^{(\lambda)}(0)+P^{(\lambda)}(1)=1$. The model is defined by the assumption that, for any given setting $x$ for $A$ and/or $y$ for $B$, the assignment of $\lambda$ determines the outcome(s) according to the following rules: \begin{eqnarray} a &=& (x+\lambda)\;\; \mathrm{mod} \;2 \nonumber \\ b &=& (x+\lambda-xy)\;\; \mathrm{mod} \;2. \end{eqnarray} \noindent Note that the model is manifestly nonlocal, since the value of the outcome $b$, besides depending on the value of the hidden variable $\lambda$ and on the associated input $y$, depends also on the input $x$. In accordance with the above rules, the assignment of the outputs (once the settings and the hidden variable are given) is the one exhibited in the following table: \begin{table}[htbp] \caption{Hidden Variable model outcomes as functions of the inputs and of $\lambda$.} \begin{center} \begin{tabular}{c|c|c||cc} x & y & $\lambda$ & a & b \\ [0.5ex] \hline 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 1 \end{tabular} \end{center} \label{tav} \end{table}% \noindent Looking at the table one immediately checks that the basic relation characterizing the inputs and outputs of the PR-box, $a+b=xy\;\;\mathrm{mod} \;2$, is satisfied. It is important to remark that the output $b$ depends on the input $x$ of the other party (nonlocality), while neither output depends on the outcome obtained by the other party. For what concerns us here, we can then claim that the model exhibits Parameter Dependence (as does any deterministic nonlocal hidden variable model) but not Outcome Dependence. \vspace {0.3cm} \section {The superquantum character of the PR-box } We have repeatedly claimed that the PR-box gives rise to superquantum nonlocal correlations. To show this, following the procedure of ref.\cite{Cerf}, it is useful to introduce new outcomes $a'$ and $b'$, which are simply related to $a$ and $b$ in such a way that their possible values are $\{-1,+1\}$, as in Eqs. (1.1)-(1.3). We then put: \begin{equation} a'=1-2a;\;\;\;b'=1-2b, \end{equation} and we consider the quantities $E_{PR-HV}(x,y)=P_{xy}(+1,+1)+P_{xy}(-1,-1)-P_{xy}(+1,-1)-P_{xy}(-1,+1)$, which are easily calculated. To give an example: since, as is immediately seen from the table, when $x=0$ and $y=0$ it turns out that $a=b$ (implying $a'=b'$) for any value of $\lambda$, one gets $E_{PR-HV}(0,0)= P^{(\lambda)}(0)+P^{(\lambda)}(1)=1$. In just the same way one reaches the same conclusion for $x=0$ and $y=1$ and for $x=1$ and $y=0$. On the contrary, when $x=1$ and $y=1$ the outcomes $a$ and $b$ are different, so that only $P_{11}(+1,-1)$ and $P_{11}(-1,+1)$ contribute to $E_{PR-HV}(1,1)$. But the sum of these probabilities equals the probability that $\lambda$ takes the value $0$ or $1$, which is 1.
Accordingly, $-E_{PR-HV}(1,1)=P_{11}(+1,-1)+P_{11}(-1,+1)$ also takes the value $+1$, and the general combination expressing the violation of locality reads: \begin{equation} E_{PR-HV}(0,0)+E_{PR-HV}(1,0)+E_{PR-HV}(0,1)-E_{PR-HV}(1,1)=4, \end{equation} \noindent i.e. the considered combination actually saturates its upper bound. \vspace {0.3cm} \section {Concluding remarks} We have analyzed the superquantum nonlocal structure which characterizes the PR-box, and we have shown that, if one considers only the general formal structure of the model, as embodied in Eq.~(2.1), the ensuing nonlocal theory exhibits both Parameter Dependence and Outcome Dependence. On the contrary, it is quite simple to account for the working of the box in terms of a deterministic hidden variable theory\footnote{Actually, the hidden variable theories we have considered already represent a continuous family, because all of them reproduce the functioning of the PR-box independently of the explicit values chosen for the hidden variable distribution, provided these probabilities satisfy the normalization condition $P^{(\lambda)}(0)+P^{(\lambda)}(1)=1$.}. In such a case, as with all nonlocal deterministic models, the PR-box turns out to violate Bell's locality condition through a violation of Parameter Independence.
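As a final compact check of the above analysis, the following short script (a minimal sketch; any choice of the hidden variable distribution would do, since for fixed settings the product $a'b'$ turns out not to depend on $\lambda$) evaluates the hidden variable model of Eq.~(2.2) and returns the value 4 for the combination of correlations considered above:
\begin{verbatim}
# Hidden variable model: a = (x+lam) mod 2, b = (x+lam-x*y) mod 2.
def outputs(x, y, lam):
    a = (x + lam) % 2
    b = (x + lam - x * y) % 2
    return 1 - 2 * a, 1 - 2 * b        # map {0,1} -> {+1,-1}

def E(x, y):                           # correlation, uniform distribution on lam
    return sum(ap * bp for ap, bp in (outputs(x, y, l) for l in (0, 1))) / 2

print(E(0, 0) + E(1, 0) + E(0, 1) - E(1, 1))   # prints 4.0
\end{verbatim}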
\section{Introduction} In recent years we have started to appreciate that the outer banks of galaxies contain valuable information about the formation process of galaxies. In hierarchical galaxy formation the stellar halos and thick disks of galaxies are formed by accretion of minor satellites, predominantly in the earlier assembly phases. The size, metallicity, and amount of substructure in current day halos are therefore directly related to issues like the small scale properties of the primordial power spectrum of density fluctuations and the suppression of star formation in small dark matter halos after reionization. To exploit this information we have started the GHOSTS\footnote{GHOSTS: Galaxy Halos, Outer disks, Substructure, Thick disks and Star clusters\hfill} survey, which will sample the resolved stellar populations along the major and minor axes of 14 nearby galaxies using HST/ACS and WFPC2. Our data provide color-magnitude diagrams 1.5-2.5 magnitudes below the tip of the Red Giant Branch. We measure the stellar density distribution from star counts down to very low average surface brightnesses, equivalent to $\sim$32 V-mag per square arcsec. We will also obtain spatial information on the metallicity distributions of the Red Giant Branch stars. Our targets have large angular extents and we need several images to sample one principal axis. For the galaxies where we received enough data to create radial profiles, the results have been both remarkable and highly varied. \begin{figure} \begin{minipage}{0.47\linewidth} \includegraphics[width=\linewidth]{dejong_fig1.eps} \vspace{-5mm} \caption{NGC\,4244 major and minor axis SDSS $i_{\rm AB}$-band luminosity profile (blue solid line, left axis, add about 6.5 to get $V$-band) and background subtracted star counts (green points, right axis). Dashed lines show exponential disk fits to the inner region. We detect a clear minor axis extended component (Seth et al.~2007) and a strong truncation in RGB star counts on the major axis. At the distance of NGC4244, 100 arcsec equals about 1.8\,kpc. \label{ngc4244} } \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \vspace{-4mm} \includegraphics[width=\linewidth]{dejong_fig2.eps} \vspace{-5mm} \caption{Color-magnitude diagram of the tidal stream around M83 at 20\,kpc from its center. A very pronounced metal-poor RGB population is detected, with an AGB C-star population (F606W-F814W=1.3--2.5 mag, F814W=24 mag), indicative of a 3--5\,Gyr old population. No main sequence or He burning stars are seen to the left of the RGB; this stream has been dead for at least 300\,Myr. \label{m83} } \end{minipage} \end{figure} NGC\,4244 has a very small, very metal poor halo below $\mu_V$=30 mag arcsec$^{-2}$ on the minor axis (Seth et al.\ 2007; Fig.\ref{ngc4244}). In contrast, most massive galaxies, like NGC\,253, NGC\,891, and M94 have very extended halos; our outermost fields at $\sim$30\,kpc still have $\mu_V$$\sim$28 mag arcsec$^{-2}$. M81 has a projected $r^{-3.5}$ power-law minor axis surface brightness profile, one of the steepest ever seen. M83 seems to be the exception to the expectation that massive galaxies have large halos; at 20\,kpc its CMD is already rather sparse. The metallicities of the halos derived from the colors of RGB stars are quite varied, although with more massive galaxies having higher metallicity inner halos, on average. 
The RGB stars in the thick disk of NGC\,891 and NGC\,4244 show a truncation at the same radius as the total light distribution (Fig.\,\ref{ngc4244}), suggesting that either truncations are old, or that the old thick disk is affected by similar dynamical effects as the thin disk (bars, spiral arms, disk heating and stripping by dark matter subhalos). We detect a stream in M83 with a maximum surface brightness of 26.5 $R$-mag arcsec$^{-2}$ and FWHM of $\sim$3\,kpc that has no detectable main sequence nor He burning stars and therefore has had no star formation in the past 300\,Myr. However, we find a significant population of AGB C-stars, indicating it had a burst of star formation about 3--5\,Gyr ago (Fig.\,\ref{m83}). \begin{acknowledgments} Support for Program numbers GO-10523 and GO-10889 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. \end{acknowledgments}
\section{Introduction} Quantum topology began in 1984 with the definition of the Jones polynomial \cite{Jo}, a knot invariant that Witten later retrieved in the Chern-Simons quantum field theory on the three-sphere with gauge group $SU(2)$ \cite{Wi}. Following Witten's intuitions from physics, several Topological Quantum Field Theories (or \textit{TQFT} for short, meaning certain functors from cobordisms to vector spaces \cite{At}) were defined in the nineties and provided new invariants of knots and $3$-manifolds \cite{RT1, RT2, BHMV1, BHMV2, TV}. The \textit{volume conjecture} of Kashaev and Murakami-Murakami is perhaps the most studied conjecture in quantum topology currently \cite{Ka95, MM, MuIntro, MY}; it states that the colored Jones polynomials of a given hyperbolic knot evaluated at a certain root of unity asymptotically grow with an exponential rate, which is the hyperbolic volume of this knot. As such, it hints at a deep connection between quantum topology and classical geometry. In the last twenty years, several variants of the volume conjecture have been put forward for other quantum invariants: for instance the Baseilhac-Benedetti generalisation in terms of quantum hyperbolic invariants \cite{BB}, or the Chen-Yang volume conjecture on the Turaev-Viro invariants for hyperbolic $3$-manifolds \cite{CY}. Some of these conjectures have been proven for several infinite families of examples, such as the fundamental shadow links \cite{Co}, the Whitehead chains \cite{VdV} and integral Dehn fillings on the figure-eight knot complement \cite{Oh}. See \cite{MuIntro, MM} for more examples. In \cite{AK}, Andersen and Kashaev constructed the Teichm\"uller TQFT, a \textit{generalised} Topological Quantum Field Theory, in the sense that the operators of the theory act on \textit{infinite-dimensional} vector spaces. The partition function of the Teichm\"uller TQFT yields a quantum invariant $|\mathcal{Z}_{\hbar}(X,\alpha)| \in \mathbb{R}_{>0}$ (indexed by a quantum parameter $\hbar >0$) of a triangulated $3$-manifold $X$ endowed with a family of dihedral angles $\alpha$, up to certain moves on such triangulations with angles (see \cite{AK} for details). Taking its roots in quantum Teichm\"uller theory and making use of Faddeev's quantum dilogarithm, this infinite-dimensional TQFT is constructed with state integrals on tempered distributions from the given triangulation with angles. The Teichm\"uller TQFT already admits several formulations and generalisations (see \cite{AK, AKnew, KaWB, AKicm}), and it is still not clear at the time of writing which formulation one should favor in order to best reduce the technical constraints in the definitions and computations. Nevertheless, two points remain clear regardless of the chosen formulation. Firstly, the Teichm\"uller TQFT is a promising lead for obtaining a mathematical model of quantum Chern-Simons theory with non-compact gauge group $SL(2,\mathbb{C})$ \cite{AK, AKicm, Mi}. Secondly, the Teichm\"uller TQFT should also satisfy a \textit{volume conjecture}, stated as follows without details: \begin{conj}[Conjecture 1 of \cite{AK}, Conjecture \ref{conj:vol:BAGPN}]\label{conj:vol:intro} Let $M$ be a closed oriented 3-manifold and $K\subset M$ a knot whose complement is hyperbolic. Then the partition function of the Teichm\"uller TQFT associated to $(M,K)$ follows an exponential decrease in the semi-classical limit $\hbar \to 0^+$, whose rate is the hyperbolic volume $\mathrm{Vol}(M \setminus K)$. 
\end{conj} Generally speaking, solving a volume conjecture requires finding connections between quantum topology and hyperbolic geometry hidden in the invariant, and overcoming technical difficulties (often analytical in nature). The payoff is worth the hassle, though: the previously mentioned connections can enrich both domains of mathematics and may provide new insights into how best we can mathematically model physical quantum field theories. In the present paper, we solve the Teichm\"uller TQFT volume conjecture for the infinite family of hyperbolic twist knots in $S^3$ (see Figure \ref{fig:twist:knot} for a picture of these knots). Up until now, the conjecture had been proven for the first two knots of this family \cite{AK} and numerically checked for the next nine \cite{AN, BAPNcras}. Moreover, some computations were done for some knots in lens spaces \cite{PN}. To the authors' knowledge, the twist knots are now the first family of hyperbolic knots in $S^3$ for which a volume conjecture is proven. We hope that the techniques and results of this paper can provide valuable insights for further studies of this volume conjecture or its siblings that concern other quantum invariants \cite{Ka95, MM, CY}. Notably, it would be interesting to try to apply the techniques of this paper to prove other conjectures for the twist knots. \ Let us now make precise the objects used and the results proven in this paper. First of all, we should clarify that the results split into two halves: Sections \ref{sec:trig} to \ref{sec:vol:conj} focus on the hyperbolic twist knots with an \textit{odd} number of crossings, while the even twist knots are studied in Section~\ref{sec:appendix}. Indeed, the constructions and proofs vary slightly depending on whether the crossing number is odd or even. Hence, the reader discovering our objects and techniques for the first time should focus on the odd twist knots in Sections \ref{sec:trig} to \ref{sec:vol:conj}. Likewise, Section \ref{sec:appendix} is for the experienced reader who wants to understand the difficulties in generalising our results from one infinite family of knots to another, and can be a starting point for future proofs of the Teichm\"uller TQFT volume conjecture. \ The first part of this paper deals with topological constructions of triangulations for twist knot complements (Sections \ref{sec:trig} and \ref{sub:even:trig}). In the seventies, Thurston showed that hyperbolic geometry was deeply related to low-dimensional topology. He notably conjectured that almost every (irreducible atoroidal) $3$-manifold admits a complete hyperbolic metric \cite{Th2}, a conjecture later proved by Perelman. For $3$-manifolds with toroidal boundary, such as complements of knots in the three-sphere, this hyperbolic metric is unique up to isometry, by the Mostow-Prasad rigidity theorem \cite{Mo, Pr}. Hyperbolic geometry can thus provide topological invariants, such as the hyperbolic volume of a knot complement. Several knot invariants can be computed from an \textit{ideal triangulation} $X=(T_1,\ldots,T_N,\sim)$ of the knot complement $S^3 \setminus K$, that is to say a gluing of $N$ ideal (i.e. without their vertices) tetrahedra $T_1, \ldots, T_N$ along a pairing of faces $\sim$. Since a given knot complement admits infinitely many triangulations, it is natural to look for convenient triangulations with as few tetrahedra as possible.
The twist knots $K_n$ of Figure \ref{fig:twist:knot} form the simplest infinite family of hyperbolic knots (when $n\geqslant 2$, starting at the figure-eight knot). Recall that a knot is \textit{hyperbolic} if its complement admits a complete hyperbolic structure of finite volume. In order to study the Teichm\"uller TQFT for the family of twist knots, we thus constructed particularly convenient ideal triangulations of their complements. An intermediate step was to construct \textit{H-triangulations} of $(S^3,K_n)$, which are triangulations of $S^3$ by compact tetrahedra, where the knot $K_n$ is represented by an edge. We now state the first result of this paper. \begin{theorem}[Theorem \ref{thm:trig}]\label{thm:intro:trig} For every $n\geqslant 2$, there exist an ideal triangulation $X_n$ of the twist knot complement $S^3 \setminus K_n$ with $\lfloor \frac{n+4}{2} \rfloor$ tetrahedra and a H-triangulation $Y_n$ of the pair $(S^3,K_n)$ with $\lfloor \frac{n+6}{2} \rfloor$ tetrahedra. Moreover, the edges of all these triangulations admit orientations for which no triangle is a cycle. \end{theorem} The condition on edge orientations implies that every tetrahedron comes with a full order on its vertices: such a property is needed to define the Teichm\"uller TQFT, see Section~\ref{sec:prelim}. Note that in \cite{BB}, this property is called a \textit{branching} on the triangulation (the first of several similarities between the Teichm\"uller TQFT and the Baseilhac-Benedetti quantum hyperbolic invariants). To prove Theorem \ref{thm:intro:trig}, we study the cases ``$n$ odd'' and ``$n$ even'' separately. In both cases, we use a method introduced by Thurston \cite{Th2} and later developed by Menasco and Kashaev-Luo-Vartanov \cite{Me, KLV}: we start from a diagram of the knot $K_n$ and we obtain a combinatorial description of $S^3$ as a polyhedron glued to itself, where $K_n$ is one particular edge. We then apply a combinatorial trick to reduce the number of edges in the polyhedron, and finally we triangulate it. This yields an H-triangulation $Y_n$ of $(S^3,K_n)$, which then gives the ideal triangulation $X_n$ of $S^3 \setminus K_n$ by collapse of the tetrahedron containing the edge $K_n$. The numbers $\lfloor \frac{n+4}{2} \rfloor$ in Theorem \ref{thm:intro:trig} give new upper bounds for the Matveev complexities of the manifolds $S^3 \setminus K_n$, and experimental tests on the software \textit{SnapPy} lead us to conjecture that these numbers are actually equal to the Matveev complexities for this family (see Conjecture \ref{conj:matveev}). \ In the second part of this paper (Sections \ref{sec:geom} and \ref{sub:even:geom}), we prove the \textit{geometricity} of these new ideal triangulations, which means that their tetrahedra can be endowed with positive dihedral angles corresponding to the complete hyperbolic structure on the underlying hyperbolic $3$-manifold. In \cite{Th}, Thurston provided a method to study geometricity of a given triangulation, which is a system of \textit{gluing equations} on complex parameters associated to the tetrahedra; if this system admits a solution, then this solution is unique and corresponds to the complete hyperbolic metric on the triangulated manifold. However, this system of equations is difficult to solve in practice. In the nineties, Casson and Rivin devised a technique to prove geometricity (see the survey \cite{FG}). 
The idea is to focus on the argument part of the system of complex gluing equations (this part can be seen as a linear system) and use properties of the volume functional. Futer and the second author applied such a method for particular triangulations of once-punctured torus bundles and two-bridge link complements \cite{Gf}. In this vein, we prove that the ideal triangulations $X_n$ of Theorem \ref{thm:intro:trig} are geometric. \begin{theorem}[Theorems \ref{thm:geometric} and \ref{thm:appendix:geom:even}]\label{thm:intro:geom} For every $n\geqslant 2$, $X_n$ is geometric. \end{theorem} To prove Theorem \ref{thm:intro:geom}, we use techniques of Futer and the second author (see \cite{FG, Gf}). We first prove that the space of angle structures on $X_n$ is non-empty (Lemma \ref{lem:non:empty} for the odd case), and then that the volume functional cannot attain its maximum on the boundary of this space (Lemma \ref{lem:interior} for the odd case). Then Theorem \ref{thm:intro:geom} follows from a result of Casson and Rivin (see Theorem \ref{thm:casson:rivin}). \ In the third part of this paper (Sections \ref{sec:part:odd}, \ref{sec:part:H:odd} and \ref{sub:even:tqft}), we compute the partition functions of the Teichm\"uller TQFT for the triangulations $X_n$ and $Y_n$, and we notably prove that they satisfy the properties expected in Conjecture \ref{conj:vol:BAGPN}. Without going into details, we can summarise these properties as: \begin{theorem}[Theorems \ref{thm:part:func}, \ref{thm:even:part:func}, \ref{thm:part:func:Htrig:odd} and \ref{thm:part:func:Htrig:even}]\label{thm:intro:partition} For every $n\geqslant 2$ and every $\hbar>0$, the partition function $\mathcal{Z}_{\hbar}(X_n,\alpha)$ of the ideal triangulation $X_n$ (resp. $\mathcal{Z}_{\hbar}(Y_n,\alpha)$ of the H-triangulation $Y_n$) is computed explicitly for every angle structure $\alpha$ of $X_n$ (resp. of $Y_n$). Moreover, the value $| \mathcal{Z}_{\hbar}(X_n,\alpha)| $ depends only on three entities: two linear combinations of angles $\mu_{X_n}(\alpha)$ and $\lambda_{X_n}(\alpha)$ (related to the meridian and longitude of the knot $K_n$), and a function $(x \mapsto J_{X_n}(\hbar,x))$, defined on some open subset of $\mathbb{C}$, and independent of the angle structure $\alpha$. Furthermore, the value $|J_{X_n}(\hbar,0)|$ can be retrieved in a certain asymptotic of the partition function $\mathcal{Z}_{\hbar}(Y_n,\alpha)$ of the H-triangulation $Y_n$. \end{theorem} The function $(\hbar \mapsto J_{X_n}(\hbar,0))$ should be seen as an analogue of the Kashaev invariant $\langle \cdot \rangle_{N}$ of \cite{Ka94,Ka95}, or of the colored Jones polynomials evaluated at a certain root of unity $J_{\cdot}(N,e^{2i \pi/N})$, where $\hbar$ behaves as the inverse of the color $N$. It is not clear at the time of writing that $(\hbar \mapsto J_{X_n}(\hbar,0))$ always yields a proper knot invariant independent of the triangulation. However, Theorem \ref{thm:intro:partition} states that we can attain this function in at least two ways (as anticipated in the volume conjecture of \cite{AK}), which increases the number of available tools for proving such an invariance. Theorem \ref{thm:intro:partition} is also of interest for studying the \textit{AJ-conjecture} for the Teichm\"uller TQFT, as stated in \cite{AM}. 
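To make this analogy concrete, recall that the original volume conjecture \cite{Ka95, MM} predicts, for a hyperbolic knot $K$, the exponential growth rate of the Kashaev invariant; placed side by side with Theorem \ref{thm:intro:vol:conj} below, $$\lim_{N \to \infty} \frac{2\pi}{N} \log \vert \langle K \rangle_N \vert = \mathrm{Vol}(S^3\setminus K) \qquad \longleftrightarrow \qquad \lim_{\hbar \to 0^+} 2\pi \hbar \log \vert J_{X_n}(\hbar,0) \vert = -\mathrm{Vol}(S^3\setminus K_n),$$ the correspondence $\hbar \leftrightarrow 1/N$ is manifest, the opposite signs reflecting exponential growth of the Kashaev invariant versus exponential decay of $\vert J_{X_n}(\hbar,0)\vert$.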
To prove Theorem \ref{thm:intro:partition}, we compute the aforementioned partition functions, and especially their parts that encode how the faces of the triangulation are glued to one another (such a part is called the \textit{kinematical kernel}). We then show a connection between this kinematical kernel and the gluing equations on angles for the same triangulation, which allows us to prove that the partition function only depends on the angle structure $\alpha$ via the weight of $\alpha$ on each edge (which is constant equal to $2 \pi$) and via two angular holonomies $\mu_{X_n}(\alpha)$ and $\lambda_{X_n}(\alpha)$ related to the meridian and longitude of the twist knot $K_n$. Finally, we need to establish some uniform bounds on the quantum dilogarithm in order to apply the dominated convergence theorem in the computation of the asymptotic of $\mathcal{Z}_{\hbar}(Y_n,\alpha)$. At the time of writing, whether or not the partition function always contains such topological information (the meridian and longitude of the knot) is an open question. Nevertheless, we hope that the patterns noticed for this infinite family of examples can illuminate the path. \ In the fourth and final part of this paper (Sections \ref{sec:vol:conj} and \ref{sub:even:vol:conj}), we prove that the function $(\hbar \mapsto J_{X_n}(\hbar,0))$ (extracted from the partition functions of the Teichm\"uller TQFT in Theorem \ref{thm:intro:partition}) exponentially decreases in the semi-classical limit $ \hbar \to 0^+$, with decrease rate the hyperbolic volume. Or in other words: \begin{theorem}[Theorems \ref{thm:vol:conj} and \ref{thm:even:vol:conj}]\label{thm:intro:vol:conj} For every $n\geqslant 2$, we have the following limit: $$ \lim_{\hbar \to 0^+} 2\pi \hbar \log \vert J_{X_n}(\hbar,0) \vert = -\emph{Vol}(S^3\backslash K_n).$$ \end{theorem} To prove Theorem \ref{thm:intro:vol:conj}, we apply the saddle point method on the semi-classical approximation of $\vert J_{X_n}(\hbar,0) \vert$ (expressed with classical dilogarithms $\mathrm{Li}_2$), and we then bound the remaining error terms with respect to $\hbar$. More precisely, the \textit{saddle point method} is a common designation of various theorems that state that an integral $\int_\gamma \exp(\lambda S(z)) dz$ behaves mostly as $\exp\left (\lambda \max_\gamma(\Re(S))\right )$ when $\lambda \to \infty$ (see Theorem \ref{thm:SPM} for the version we used, and \cite{Wo} for a survey). In order to apply this method, we must check technical conditions such as the fact that the maximum of $\Re(S)$ on $\gamma$ is unique and a simple critical point. Fortunately, in the present paper, these conditions are consequences of the \textit{geometricity} of the ideal triangulations $X_n$ (Theorem \ref{thm:intro:geom}); indeed, the equations $\nabla S = 0$ here correspond exactly to the complex gluing equations, and their unique solution (the complete hyperbolic angle structure) provides the expected saddle point. Geometricity was the main ingredient we needed, in order to go from a finite number of numerical checks of the Teichm\"uller TQFT volume conjecture \cite{BAPNcras} to an exact proof for an infinite family. Note that thanks to Theorem \ref{thm:intro:geom}, we did not need to compute the exact value of the complete hyperbolic structure or of the hyperbolic volume, although such computations would be doable in the manner of \cite{CMY} with our triangulations $X_n$. 
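As an elementary illustration of the saddle point heuristic invoked above (in its simplest, Laplace-type form), one can take $S(z)=-z^2/2$ on $\gamma = \mathbb{R}$: then $$\int_{\mathbb{R}} e^{\lambda S(z)}\, dz = \sqrt{\frac{2\pi}{\lambda}}, \qquad \text{so that} \qquad \frac{1}{\lambda} \log \int_{\mathbb{R}} e^{\lambda S(z)}\, dz \ \underset{\lambda \to \infty}{\longrightarrow} \ 0 = \max_{\mathbb{R}} \Re(S),$$ and the maximiser $z=0$ is a simple critical point of $S$. In Theorem \ref{thm:intro:vol:conj}, the large parameter is essentially $1/(2 \pi \hbar)$ and the value of $\Re(S)$ at the relevant saddle point is $-\mathrm{Vol}(S^3\setminus K_n)$.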
The previously mentioned error bounds follow from the fact that $J_{X_n}(\hbar,0)$ does not depend exactly on the potential function $S$ made of classical dilogarithms, but on a quantum deformation $S'_{\hbar}$ using quantum dilogarithms. An additional difficulty stems from the fact that we must bound the error uniformly on a \textit{non-compact} contour, when $\hbar \to 0^+$. To the authors' knowledge, this difficulty never happened in studies of volume conjectures for other quantum invariants, since asymptotics of these invariants (such as the colored Jones polynomials) involve integrals on \textit{compact} contours. Hence we hope that the analytical techniques we developed in this paper (that are not specific to the twist knots) can be of use for future studies of volume conjectures with unbounded contours. More precisely, the parity trick in Lemma \ref{lem:parity} and its application in the bound for the whole non-compact contour (Lemma \ref{lem:unif:bound}) are our main additions from the previous techniques of \cite{AH}. \ Part of the results in this paper (Theorems \ref{thm:trig}, \ref{thm:part:func}, \ref{thm:even:part:func}, \ref{thm:part:func:Htrig:odd} and \ref{thm:part:func:Htrig:even}) were announced in~\cite{BAPNcras}. Sections \ref{sec:trig}, \ref{sec:geom} and \ref{sub:even:trig} appeared in the arXiv preprint \cite{BAPN2}. The paper is organised as follows: in Section \ref{sec:prelim}, we review preliminaries and notations; in Section \ref{sec:trig} we construct the triangulations for odd twist knots; in Section \ref{sec:geom}, we prove geometricity of these triangulations for odd twist knots; in Section \ref{sec:part:odd} (resp. \ref{sec:part:H:odd}) we compute the partition function of the Teichm\"uller TQFT for the ideal triangulations (resp. H-triangulations), still for odd twist knots; in Section \ref{sec:vol:conj}, we prove the volume conjecture for odd twist knots (readers eager to arrive at Section \ref{sec:vol:conj} can skip Section \ref{sec:part:H:odd} after reading Section \ref{sec:part:odd}); finally, in Section \ref{sec:appendix}, we explain how the proofs of the previous sections differ for the even twist knots. \section*{Acknowledgements} The first and third authors were supported by the Swiss National Science Foundation at the University of Geneva, with subsidy $200021\_162431$. The first author was moreover supported by the FNRS in his position at UCLouvain. The second author acknowledges support from the ANR under the grant DynGeo (ANR-16-CE40-0025-01) and through the Labex Cempi (ANR-11-LABX0007-01). We thank Rinat Kashaev for helpful discussions, Renaud Detcherry for his proof of Lemma \ref{lem:complex:sym}, and the University of Geneva and UCLouvain for their hospitality. \section{Preliminaries and notations} \label{sec:prelim} \subsection{Triangulations} In this section we follow \cite{AK, KaWB}. A tetrahedron $T$ with faces $A,B,C,D$ will be denoted as in Figure \ref{fig:tetrahedron}, where the face outside the circle represents the back face and the center of the circle is the opposite vertex pointing towards the reader. We always choose an \textit{order} on the four vertices of $T$ and we call them $0_T,1_T,2_T,3_T$ (or $0,1,2,3$ if the context makes it obvious). Consequently, if we rotate $T$ such that $0$ is in the center and $1$ at the top, then there are two possible places for vertices $2$ and $3$; we call $T$ a \textit{positive} tetrahedron if they are as in Figure \ref{fig:tetrahedron}, and \textit{negative} otherwise. 
We denote $\varepsilon(T) \in \{ \pm 1\}$ the corresponding \textit{sign} of $T$. We \textit{orient the edges} of $T$ accordingly to the order on vertices, and we endow each edge with a parametrisation by $[0,1]$ respecting the orientation. Note that such a structure was called a \textit{branching} in \cite{BB}. Thus, up to isotopies fixing the $1$-skeleton pointwise, there is only one way of \textit{gluing} two triangular faces together while \textit{respecting the order of the vertices} and the edge parametrisations, and that is the only type of face gluing we consider in this paper. \begin{figure}[h] \centering \begin{tikzpicture} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$A$} ; \draw (0,-0.6) node{$B$} ; \draw (-0.5,0.3) node{$C$} ; \draw (0.5,0.3) node{$D$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \end{scope} \end{tikzpicture} \caption{The positive tetrahedron $T$} \label{fig:tetrahedron} \end{figure} Note that a tetrahedron $T$ like in Figure \ref{fig:tetrahedron} will either represent a \textit{compact} tetrahedron homeomorphic to a $3$-ball $B^3$ (notably when considering \textit{H-triangulations}) or an \textit{ideal} tetrahedron homeomorphic to a $3$-ball minus $4$ points in the boundary (when considering \textit{ideal triangulations}). A \textit{triangulation} $X=(T_1,\ldots,T_N,\sim)$ is the data of $N$ distinct tetrahedra $T_1, \ldots, T_N$ and an equivalence relation $\sim$ first defined on the faces by pairing and the only gluing that respects vertex order, and also induced on edges then vertices by the combined identifications. We call $M_X$ the (pseudo-)$3$-manifold $ M_X = T_1 \sqcup \cdots \sqcup T_N / \sim$ obtained by quotient. Note that $M_X$ may fail to be a manifold only at a quotient vertex of the triangulation, whose regular neighborhood might not be a $3$-ball (but for instance a cone over a torus for exteriors of links). We denote $X^{k}$ (for $k=0, \ldots, 3$) the set of $k$-cells of $X$ after identification by $\sim$. In this paper we always consider that \textit{no face is left unpaired by $\sim$}, thus $X^{2}$ is always of cardinal $2N$. By a slight abuse of notation we also call $T_j$ the $3$-cell inside the tetrahedron $T_j$, so that $X^{3} = \{T_1, \ldots, T_N\}$. Elements of $X^{1}$ are usually represented by distinct types of arrows, which are drawn on the corresponding preimage edges, see Figure \ref{fig:id:tri:41:complement} for an example. An \textit{ideal triangulation} $X$ contains ideal tetrahedra, and in this case the quotient space minus its vertices $M_X \setminus X^0$ is an open manifold. In this case we will denote $M=M_X \setminus X^0$ and say that the open manifold $M$ admits the ideal triangulation $X$. 
A (one-vertex) \textit{H-triangulation} is a triangulation $Y$ with compact tetrahedra so that $M=M_Y$ is a closed manifold and $Y^0$ is a singleton, with one distinguished edge in $Y^1$; this edge will represent a knot $K$ (up to ambient isotopy) in the closed manifold $M$, and we will say that $Y$ is an \textit{H-triangulation for $(M,K)$}. Finally, for $X$ a triangulation and $k=0,1,2,3,$ we define $x_k\colon X^3 \to X^2$ the map such that $x_k(T)$ is the equivalence class of the face of $T$ opposed to its vertex $k$. \begin{example} Figure \ref{fig:id:tri:41:complement} displays two possible ways of representing the same ideal triangulation of the complement of the figure-eight knot $M=S^3 \setminus 4_1$, with one positive and one negative tetrahedron. Here $X^3 =\{T_+, T_-\}$, $X^2 =\{A,B,C,D\}$, $X^1 =\{ \,\mathbin{\rotatebox[origin=c]{90}{$\rightarrow$}} , \,\mathbin{\rotatebox[origin=c]{90}{$\twoheadrightarrow$}} \}$ and $X^0$ is a singleton. On the left the tetrahedra are drawn as usual and all the cells are named; on the right we represent each tetrahedron by a ``comb'' \begin{tikzpicture}[style=very thick] \begin{scope}[scale=0.3] \draw(0,0)--(3,0); \draw(0,0)--(0,1/2); \draw(1,0)--(1,1/2); \draw(2,0)--(2,1/2); \draw(3,0)--(3,1/2); \end{scope} \end{tikzpicture} with four spikes numbered $0,1,2,3,$ from left to right, we join the spike $j$ of $T$ to the spike $k$ of $T'$ if $x_j(T)=x_k(T')$, and we add a $+$ or $-$ next to each tetrahedron according to its sign. \begin{figure}[h] \centering \begin{tikzpicture} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$B$} ; \draw (0,-0.6) node{$A$} ; \draw (-0.5,0.3) node{$C$} ; \draw (0.5,0.3) node{$D$} ; \draw (0,-1.4) node{\large $T_+$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow d =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->>](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->>](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \end{scope} \begin{scope}[xshift=4cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (1,1) node{$C$} ; \draw (0,-0.6) node{$D$} ; \draw (-0.5,0.3) node{$A$} ; \draw (0.5,0.3) node{$B$} ; \draw (0,-1.4) node{\large $T_-$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow d =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow d =black}}] (0,0)--(1.732/2,-0.5); \draw(1.732/2,-0.5) arc (-30:-90:1); \draw[<<-] (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \end{scope} \begin{scope}[xshift=9.5cm,yshift=0cm,rotate=0] \node[inner sep=0pt] (russell) at (0,0) {\includegraphics[scale=0.8]{4_1.pdf}}; \end{scope} \end{tikzpicture} \caption{Two representations of an ideal triangulation of the knot complement $S^3 \setminus 4_1$.} \label{fig:id:tri:41:complement} \end{figure} \end{example} 
\subsection{Angle structures} For a given triangulation $X=(T_1,\ldots,T_N,\sim)$ we denote $\mathcal{S}_X$ the set of \textit{shape structures on $X$}, defined as $$ \mathcal{S}_X = \left \{ \alpha = \left (a_1,b_1,c_1,\ldots, a_N,b_N,c_N\right ) \in (0,\pi)^{3N} \ \big | \ \forall k \in \{1,\ldots,N\}, \ a_k+b_k+c_k = \pi \right \}. $$ An angle $a_k$ (respectively $b_k,c_k$) represents the value of a dihedral angle on the edge $\overrightarrow{01}$ (respectively $\overrightarrow{02}$, $\overrightarrow{03}$) and its opposite edge in the tetrahedron $T_k$. If a particular shape structure $\alpha=(a_1,\ldots,c_N)\in \mathcal{S}_X$ is fixed, we define three associated maps $\alpha_j\colon X^3 \to (0,\pi)$ (for $j=1,2,3$) that send $T_k$ to the $j$-th element of $\{a_k,b_k,c_k\}$ for each $k \in \{1,\ldots,N\}$. Let $(X,\alpha)$ be a triangulation with a shape structure as before. We denote $\omega_{X,\alpha}\colon X^1 \to \mathbb{R}$ the associated \textit{weight function}, which sends an edge $e\in X^1$ to the sum of angles $\alpha_j(T_k)$ corresponding to tetrahedral edges that are preimages of $e$ by $\sim$. For example, if we denote $\alpha=(a_+,b_+,c_+,a_-,b_-,c_-)$ a shape structure on the triangulation $X$ of Figure \ref{fig:id:tri:41:complement}, then $\omega_{X,\alpha}(\,\mathbin{\rotatebox[origin=c]{90}{$\rightarrow$}}) = 2 a_+ + c_+ + 2 b_- + c_-.$ One can also consider the closure $\overline{\mathcal{S}_X}$ (sometimes called the space of \textit{extended shape structures}) where the $a_k,b_k,c_k$ are taken in $[0,\pi]$ instead. The definitions of the maps $\alpha_j$ and $\omega_{X,\alpha}$ can immediately be extended. We finally define $ \mathcal{A}_X := \left \{ \alpha \in \mathcal{S}_X \ \big | \ \forall e \in X^1, \ \omega_{X,\alpha}(e)=2\pi \right \} $ the set of \textit{balanced shape structures on $X$}, or \textit{angle structures on $X$}, and $ \overline{\mathcal{A}_X} := \left \{ \alpha \in \overline{\mathcal{S}_X} \ \big | \ \forall e \in X^1, \ \omega_{X,\alpha}(e)=2\pi \right \} $ the set of \textit{extended angle structures on $X$}. \subsection{The volume functional} \label{sub:volume} In this section we recall some known facts about the volume functional on the space of angle structures. See for example the survey \cite{FG} for details. One can understand a shape structure $(a,b,c)$ on an ideal tetrahedron $T$ as a way of realising $T$ in the hyperbolic space $\mathbb{H}^3$, with its four vertices at infinity. In this hyperbolic ideal tetrahedron, the angles $a,b,c$ will represent dihedral angles between two faces. The \textit{Lobachevsky function} $\Lambda\colon \mathbb{R} \to \mathbb{R}$ given by: $$ \Lambda(x) = - \int_0^x \log \vert 2 \sin (t) \vert \, dt$$ is well defined, continuous on $\mathbb{R}$, and periodic with period $\pi$. Furthermore, if $T$ is a hyperbolic ideal tetrahedron with dihedral angles $a,b,c$, its volume satisfies $$ \mathrm{Vol}(T)=\Lambda(a)+\Lambda(b)+\Lambda(c).$$ Let $X=(T_1,\ldots,T_N,\sim)$ be an ideal triangulation and $\mathcal{A}_X$ its space of angle structures, which is a (possibly empty) convex polytope in $\mathbb{R}^{3 N}$. 
Then we define a volume functional $\mathcal{V} \colon \overline{\mathcal{A}_X} \to \mathbb{R}$, by assigning to an (extended) angle structure $\alpha= (a_1,b_1,c_1,\ldots,a_N,b_N,c_N)$ the real number $$\mathcal{V}(\alpha)= \Lambda(a_1)+\Lambda(b_1)+\Lambda(c_1) + \cdots + \Lambda(a_N)+\Lambda(b_N)+\Lambda(c_N).$$ By \cite[Propositions 6.1 and 6.6]{Gf} and \cite[Lemma 5.3]{FG}, the volume functional $\mathcal{V}$ is strictly concave on $\mathcal{A}_X$ and concave on $\overline{\mathcal{A}_X}$. The maximum of the volume functional is actually related to the complete hyperbolic structure, see for example \cite[Theorem 1.2]{FG} that we re-state below. \begin{theorem}[Casson-Rivin] \label{thm:casson:rivin} Let $M$ be an orientable $3$-manifold with boundary consisting of tori, and let $X$ be an ideal triangulation of $M$. Then an angle structure $\alpha \in \mathcal{A}_X$ corresponds to a complete hyperbolic metric on the interior of $M$ (which is unique) if and only if $\alpha$ is a critical point of the functional $\mathcal{V}\colon \mathcal{A}_X\to \mathbb{R}$. \end{theorem} In this last case, we say that the ideal triangulation $X$ of the $3$-manifold $M$ is \textit{geometric}. \subsection{Thurston's complex gluing equations}\label{sub:thurston} To a shape structure $(a,b,c)$ on an ordered tetrahedron $T$ (i.e. an element of $(0,\pi)^3$ of coordinate sum $\pi$) we can associate bijectively a \textit{complex shape structure} $z \in \mathbb{R}+i\mathbb{R}_{>0}$, as well as two companion complex numbers of positive imaginary part $$z':=\frac{1}{1-z} \text{\ and \ } z'':=\frac{z-1}{z}.$$ Each of the $z, z', z''$ is associated to an edge, in a slightly different way according to $\varepsilon(T)$: \begin{itemize} \item In all cases, $z$ corresponds to the same two edges as the angle $a$. \item If $\varepsilon(T)=1$, then $z'$ corresponds to $c$ and $z''$ to $b$. \item If $\varepsilon(T)=-1$, then $z'$ corresponds to $b$ and $z''$ to $c$. \end{itemize} Another way of phrasing it is that $z, z', z''$ are always in a counterclockwise order around a vertex, whereas $a,b,c$ need to follow the specific vertex ordering of $T$. In this article we will use the following definition of the complex logarithm: \[ \mathrm{Log}(z) := \log\vert z \vert + i\arg(z) \ \textrm{for} \ z \in \mathbb{C}^{*}, \] where $\arg(z) \in (-\pi,\pi]$. We now introduce a third way of describing the shape associated to a tetrahedron, by the complex number $$y := \varepsilon(T)(\mathrm{Log}(z)-i \pi) \in \mathbb{R} - i \varepsilon(T)(0,\pi).$$ We now list the equations relating $(a,b,c), (z,z',z'')$ and $y$ for both possible signs of~$T$: \begin{align*} \text{\underline{Positive \ tetrahedron:} \ } & y+i\pi = \mathrm{Log}(z) = \log\left (\dfrac{\sin(c)}{\sin(b)}\right ) + i a. \\ & -\mathrm{Log}(1+e^y) = \mathrm{Log}(z') = \log\left (\dfrac{\sin(b)}{\sin(a)}\right ) + i c. \\ & \mathrm{Log}(1+e^{-y}) = \mathrm{Log}(z'') = \log\left (\dfrac{\sin(a)}{\sin(c)}\right ) + i b.\\ & y= \log\left (\dfrac{\sin(c)}{\sin(b)}\right ) -i(\pi-a) \in \mathbb{R} -i(\pi-a).\\ & z = -e^y \in \mathbb{R} + i\mathbb{R}_{>0}. \end{align*} \begin{align*} \text{\underline{Negative \ tetrahedron:} \ } & -y+i\pi = \mathrm{Log}(z) = \log\left (\dfrac{\sin(b)}{\sin(c)}\right ) + i a. \\ & -\mathrm{Log}(1+e^{-y}) = \mathrm{Log}(z') = \log\left (\dfrac{\sin(c)}{\sin(a)}\right ) + i b. 
\\ & \mathrm{Log}(1+e^{y}) = \mathrm{Log}(z'') = \log\left (\dfrac{\sin(a)}{\sin(b)}\right ) + i c.\\ & y= \log\left (\dfrac{\sin(c)}{\sin(b)}\right ) + i(\pi-a) \in \mathbb{R} +i (\pi-a).\\ & z = -e^{-y}\in \mathbb{R} + i\mathbb{R}_{>0}. \end{align*} For clarity, let us define the diffeomorphism $$\psi_T\colon \mathbb{R}+i\mathbb{R}_{>0} \to \mathbb{R} - i \varepsilon(T)(0,\pi), \ z \mapsto \varepsilon(T)(\mathrm{Log}(z)-i \pi),$$ and its inverse $$\psi^{-1}_T\colon \mathbb{R} - i \varepsilon(T)(0,\pi) \to \mathbb{R}+i\mathbb{R}_{>0}, \ y \mapsto -\exp\left (\varepsilon(T) y\right ).$$ We can now define the \textit{complex weight function} $\omega^{\mathbb{C}}_{X,\alpha}\colon X^1 \to \mathbb{C}$ associated to a triangulation $X$ and an angle structure $\alpha \in \mathcal{A}_X$, which sends an edge $e \in X^1$ to the sum of logarithms of complex shapes associated to preimages of $e$ by $\sim$. For example, for the triangulation $X$ of Figure \ref{fig:id:tri:41:complement} and an angle structure $\alpha=(a_+,b_+,c_+,a_-,b_-,c_-)$, we have: \begin{align*} \omega^{\mathbb{C}}_{X,\alpha}(\,\mathbin{\rotatebox[origin=c]{90}{$\rightarrow$}}) &= 2 \mathrm{Log}(z_+) + \mathrm{Log}(z'_+) + 2 \mathrm{Log}(z'_-) + \mathrm{Log}(z''_-)\\ &= \log\left ( \dfrac{\sin(c_+)^2 \sin(b_+) \sin(c_-)^2 \sin(a_-)} {\sin(b_+)^2 \sin(a_+) \sin(a_-)^2 \sin(b_-)} \right ) +i \omega_{X,\alpha}(\,\mathbin{\rotatebox[origin=c]{90}{$\rightarrow$}}). \end{align*} Let $S$ denote one toroidal boundary component of a $3$-manifold $M$ ideally triangulated by $X=(T_1,\ldots,T_N,\sim)$, and $\sigma$ an oriented normal closed curve in $S$. Truncating the tetrahedra $T_j$ at each vertex yields a triangulation of $S$ by triangles coming from vertices of $X$ (called the \textit{cusp triangulation}). If the curve $\sigma$ intersects these triangles transversely (without back-tracking), then $\sigma$ cuts off corners of each such encountered triangle. Let us then denote $(z_1,\ldots,z_l)$ the sequence of (abstract) complex shape variables associated to these corners (each such $z_k$ is of the form $z_{T_{j_k}}, z'_{T_{j_k}}$ or $z''_{T_{j_k}}$). Following \cite{FG}, we define the \textit{complex holonomy} $H^\mathbb{C}(\sigma)$ as $H^\mathbb{C}(\sigma):= \sum_{k=1}^l \epsilon_k \mathrm{Log}(z_k),$ where $\epsilon_k$ is $1$ if the $k$-th cut corner lies on the left of $\sigma$ and $-1$ if it lies on the right. The \textit{angular holonomy} $H^\mathbb{R}(\sigma)$ of $\sigma$ is similarly defined, replacing the term $\mathrm{Log}(z_k)$ by the (abstract) angle $d_k$ (which is of the form $a_{T_{j_k}}$, $b_{T_{j_k}}$ or $c_{T_{j_k}}$) lying in the $i$-th corner. For example, in the triangulation of Figure \ref{fig:trig:cusp:odd}, we have $$H^\mathbb{C}(m_{X_n})=\mathrm{Log}(z_U)-\mathrm{Log}(z_V) \text{ \ \ and \ \ } H^\mathbb{R} (m_{X_n}) = a_U-a_V.$$ The \textit{complex gluing edge equations} associated to $X$ consist in asking that the holonomies of each closed curve in $\partial M$ circling a vertex of the induced boundary triangulation are all equal to $2i\pi$, or in other words that $$\forall e\in X^1, \omega_{X,\alpha}^\mathbb{C}(e) = 2i \pi.$$ The \textit{complex completeness equations} require that the complex holonomies of all curves generating the first homology $H_1(\partial M)$ vanish (when $M$ is of toroidal boundary). 
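For instance, at the complete hyperbolic structure on the figure-eight knot complement, both tetrahedra of Figure \ref{fig:id:tri:41:complement} are regular ideal tetrahedra (a classical fact), so that all the angles equal $\pi/3$ and $z_\pm = z'_\pm = z''_\pm = e^{i\pi/3}$. The complex weight computed above then becomes $$\omega^{\mathbb{C}}_{X,\alpha}(\,\mathbin{\rotatebox[origin=c]{90}{$\rightarrow$}}) = 6 \, \mathrm{Log}\left(e^{i\pi/3}\right) = 2i\pi,$$ so that the complex gluing edge equation is satisfied for this edge; moreover, the volume functional of Section \ref{sub:volume} takes the value $6\Lambda(\pi/3) = 2.02988\ldots$, which is indeed the hyperbolic volume of the figure-eight knot complement (compare with Figure \ref{fig:table:twist:knot} below).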
Remark that once one asks that a shape structure $\alpha$ of $X$ satisfies the complex gluing edge equations of $X$ (in particular $\alpha \in \mathcal{A}_X$), then for any toroidal boundary component $S$ of $M$, if one calls $l,m$ two curves generating $H_1(S)$, then the following are equivalent formulations of the complex completeness equation for $S$: \begin{itemize} \item $H^\mathbb{C}(m)=0$, \item $H^\mathbb{C}(l)=0$, \item $H^\mathbb{R}(m)=0$ and $H^\mathbb{R}(l)=0$. \end{itemize} This can be compared with the equivalent definitions for a quadrilateral $ABCD$ to be a parallelogram: either you ask that $AB$ and $CD$ are parallel of same length, or the same for $AD$ and $BC$, or equivalently that $AB$ and $CD$ are parallel and $AD$ and $BC$ are too. If $M$ is an orientable $3$-manifold with boundary consisting of tori, and ideally triangulated by $X$, then an angle structure $\alpha \in \mathcal{A}_X$ corresponds to the complete hyperbolic metric on the interior of $M$ (which is unique) if and only if $\alpha$ satisfies the complex gluing edge equations and the complex completeness equations. \subsection{The classical dilogarithm} For the dilogarithm function, we will use the definition: $$ \mathrm{Li}_2(z) := - \int_0^z \mathrm{Log}(1-u) \frac{du}{u} \ \ \ \textrm{for} \ z \in \mathbb{C} \setminus [1,\infty)$$ (see for example \cite{Za}). For $z$ in the unit disk, $\mathrm{Li}_2(z)=\sum_{n\geq 1} n^{-2} z^n$. We will use the following properties of the dilogarithm function, referring for example to \cite[Appendix A]{AH} for the proofs. \begin{proposition}[Some properties of $\mathrm{Li}_2$]\label{prop:dilog} \ \begin{enumerate} \item (inversion relation) $$ \forall z \in \mathbb{C} \setminus [1,\infty), \ \mathrm{Li}_2\left (\frac{1}{z}\right ) = - \mathrm{Li}_2(z) - \frac{\pi^2}{6} - \frac{1}{2}\mathrm{Log}(-z)^2. $$ \item (integral form) For all $y \in \mathbb{R} +i(-\pi,\pi)$, $$ \frac{-i}{2 \pi} \mathrm{Li}_2(-e^y) = \int_{v \in \mathbb{R} + i 0^+} \dfrac{\exp\left (-i \frac{y v}{\pi}\right )}{4 v^2 \sinh(v)} \, dv. $$ \end{enumerate} \end{proposition} \subsection{The Bloch--Wigner function} We define the \emph{Bloch--Wigner function} by \[ D(z) := \Im(\mathrm{Li}_2(z)) + \arg(1-z)\log \vert z \vert \qquad \text{for } z \in \mathbb{C} \backslash [1,\infty). \] This function is real analytic on $\mathbb{C} \backslash \{0,1\}$ and plays a central role in hyperbolic geometry. The following result will be important for us (for a proof, see \cite{NZ}). \begin{proposition} Let $T$ be an ideal tetrahedron in $\mathbb{H}^3$ with complex shape structure $z$. Then, its volume is given by \[ \mathrm{Vol}(T)= D(z) = D \left( \frac{z-1}{z} \right) = D \left( \frac{1}{1-z} \right). \] \end{proposition} \subsection{Twist knots} We denote by $K_n$ the unoriented twist knot with $n$ half-twists and $n+2$ crossings, according to Figure \ref{fig:twist:knot}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{TwistKnot.pdf} \end{center} \caption{The twist knot $K_n$} \label{fig:twist:knot} \end{figure} For clarity, we list the names of the $13$ first twist knots in the table of Figure \ref{fig:table:twist:knot}, along with their hyperbolic volume and the coefficient of the Dehn filling one must apply on the Whitehead link to obtain the considered knot. This last one is useful for studying $K_n$ for large $n$ on the software \textit{SnapPy} without having to draw a huge knot diagram by hand. 
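For instance, a minimal \textit{SnapPy} session along the following lines reproduces the volumes listed in Figure \ref{fig:table:twist:knot} (this is only a sketch: in the Dehn filling approach, \textit{SnapPy}'s default peripheral framing on the Whitehead link exterior, the census manifold \texttt{m129}, need not match the conventions of the table, so the filling coefficients may require an adjustment of sign or framing):
\begin{verbatim}
import snappy

# Hyperbolic volumes of the first hyperbolic twist knots, by Rolfsen name.
for name in ["4_1", "5_2", "6_1", "7_2", "8_1"]:   # K_2, ..., K_6
    print(name, snappy.Manifold(name).volume())

# For larger n, Dehn-fill one cusp of the Whitehead link exterior
# instead of drawing a large diagram, and compare with the table.
W = snappy.Manifold("m129")
W.dehn_fill((1, 2), 0)
print(W.volume())
\end{verbatim}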
\begin{figure}[!h] \begin{tabular}{|c|c|c|c|} \hline $n$& $K_n$ & $\begin{matrix} \text{Dehn Surgery coefficient} \\ \text{from the Whitehead link} \end{matrix}$ & Hyperbolic volume \\ \hline $0$& $0_1$ & $(1,0)$ & not hyperbolic \\ \hline $1$& $3_1$ & $(1,-1)$ & not hyperbolic \\ \hline $2$& $4_1$ & $(1,1)$ & $2.02988321...$ \\ \hline $3$& $5_2$ & $(1,-2)$ & $2.82812208...$ \\ \hline $4$& $6_1$ & $(1,2)$ & $3.16396322...$ \\ \hline $5$& $7_2$ & $(1,-3)$ & $3.33174423...$ \\ \hline $6$& $8_1$ & $(1,3)$ & $3.42720524...$ \\ \hline $7$& $9_2$ & $(1,-4)$ & $3.48666014...$ \\ \hline $8$& $10_1$ & $(1,4)$ & $3.52619599...$ \\ \hline $9$& $11_{a_{247}}$ & $(1,-5)$ & $3.55381991...$ \\ \hline $10$& $12_{a_{803}}$ & $(1,5)$ & $3.57388254...$ \\ \hline $11$& $13_{a_{3143}}$ & $(1,-6)$ & $3.588913917...$ \\ \hline $12$& $14_{a_{12741}}$ & $(1,6)$ & $3.600467262...$ \\ \hline \end{tabular} \caption{The first twist knots} \label{fig:table:twist:knot} \end{figure} The twist knots form, in a sense, the simplest infinite family of hyperbolic knots (for $n \geqslant 2$). This is why our initial motivation was to study the volume conjecture for the Teichm\"uller TQFT for this particular family (see \cite{BAPNcras}). \begin{remark} The twist knots $K_{2n-1}$ and $K_{2n}$ are obtained by Dehn filling on one component of the Whitehead link with respective coefficients $(1,-n)$ and $(1,n)$. As a consequence of the J{\o}rgensen-Thurston theorem \cite{Th,NZ}, the hyperbolic volume of $K_n$ tends to $3.6638623767088...$ (the volume of the Whitehead link) as $n \to + \infty$. \end{remark} \subsection{Faddeev's quantum dilogarithm} Recall \cite{AK} that for $\hbar >0$ and $\mathsf{b} >0$ such that $$(\mathsf{b}+\mathsf{b}^{-1}) \sqrt{\hbar} = 1,$$ \emph{Faddeev's quantum dilogarithm} $\Phi_\mathsf{b}$ is the holomorphic function on $\mathbb{R} + i \left (\frac{-1}{2 \sqrt{\hbar}}, \frac{1}{2 \sqrt{\hbar}}\right )$ given by $$ \Phi_\mathsf{b}(z) = \exp\left ( \frac{1}{4} \int_{w \in \mathbb{R} + i 0^+} \dfrac{e^{-2 i z w} dw}{\sinh(\mathsf{b} w) \sinh({\mathsf{b}}^{-1}w) w} \right ) \ \ \ \ \text{for} \ z \in \mathbb{R} + i \left (\frac{-1}{2 \sqrt{\hbar}}, \frac{1}{2 \sqrt{\hbar}}\right ), $$ and extended to a meromorphic function for $z\in \mathbb{C}$ via the functional equation $$\Phi_\mathsf{b}\left (z-i \frac{\mathsf{b}^{\pm 1}}{2}\right )= \left (1+e^{2\pi \mathsf{b}^{\pm 1} z}\right ) \Phi_\mathsf{b}\left (z + i \frac{\mathsf{b}^{\pm 1}}{2}\right ). $$ Note that $\Phi_\mathsf{b}$ depends only on $\hbar = \frac{1}{(\mathsf{b}+\mathsf{b}^{-1})^2}$. Furthermore, as a consequence of the functional equation, the poles of $\Phi_\mathsf{b}$ lie on $ i \left [\frac{1}{2 \sqrt{\hbar}}, \infty\right ) $ and the zeroes lie symmetrically on $i \left (-\infty, \frac{-1}{2 \sqrt{\hbar}}\right ]$. We now list several useful properties of Faddeev's quantum dilogarithm. We refer to \cite[Appendix A]{AK} for these properties (and several more), and to \cite[Lemma 3]{AH} for an alternate proof of the semi-classical limit property.
\begin{proposition}[Some properties of $\Phi_\mathsf{b}$]\label{prop:quant:dilog} \ \begin{enumerate} \item (inversion relation) For any $\mathsf{b} \in \mathbb{R}_{>0}$ and any $z \in \mathbb{R} + i \left (\frac{-1}{2 \sqrt{\hbar}}, \frac{1}{2 \sqrt{\hbar}}\right )$, $$\Phi_\mathsf{b}(z) \Phi_\mathsf{b}(-z) = e^{i\frac{\pi}{12}(\mathsf{b}^2 + \mathsf{b}^{-2})} e^{i \pi z^2}.$$ \item (unitarity) For any $\mathsf{b} \in \mathbb{R}_{>0}$ and any $z \in \mathbb{R} + i \left (\frac{-1}{2 \sqrt{\hbar}}, \frac{1}{2 \sqrt{\hbar}}\right )$, $$\overline{\Phi_\mathsf{b}(z)} = \frac{1}{\Phi_\mathsf{b}(\overline{z})}.$$ \item (semi-classical limit) For any $z \in \mathbb{R} + i \left (-\pi,\pi \right )$, $$\Phi_\mathsf{b}\left (\frac{z}{2 \pi \mathsf{b}}\right ) = \exp\left (\frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2 (- e^z)\right ) \left ( 1 + \mathcal{O}_{\mathsf{b} \to 0^+}(\mathsf{b}^2)\right ).$$ \item (behavior at infinity) For any $\mathsf{b} \in \mathbb{R}_{>0}$, \begin{align*} \Phi_\mathsf{b}(z) \ \ \underset{\Re(z)\to -\infty}{\sim} & \ \ 1, \\ \Phi_\mathsf{b}(z) \ \ \underset{\Re(z)\to \infty}{\sim} & \ \ e^{i\frac{\pi}{12}(\mathsf{b}^2 + \mathsf{b}^{-2})} e^{i \pi z^2}. \end{align*} In particular, for any $\mathsf{b} \in \mathbb{R}_{>0}$ and any $d \in \left (\frac{-1}{2 \sqrt{\hbar}}, \frac{1}{2 \sqrt{\hbar}}\right )$, \begin{align*} |\Phi_\mathsf{b}(x+id) | \ \ \underset{\mathbb{R} \ni x \to -\infty}{\sim} & \ \ 1, \\ |\Phi_\mathsf{b}(x+id) | \ \ \underset{\mathbb{R} \ni x \to +\infty}{\sim} & \ \ e^{-2 \pi x d}. \end{align*} \end{enumerate} \end{proposition} \subsection{The Teichm\"uller TQFT of Andersen-Kashaev} In this section we follow \cite{AK, KaWB, Kan}. Let $\mathcal{S}(\mathbb{R}^d)$ denote the Schwartz space of smooth rapidly decreasing functions from $\mathbb{R}^d$ to $\mathbb{C}$. Its continuous dual $\mathcal{S}'(\mathbb{R}^d)$ is the space of tempered distributions. Recall that the \emph{Dirac delta function} is the tempered distribution $\mathcal{S}(\mathbb{R}) \to \mathbb{C}$ denoted by $\delta(x)$ or $\delta$ and defined by $ \delta(x) \cdot f:= \int_{x \in \mathbb{R}} \delta(x) f(x) dx = f(0) $ for all $f \in \mathcal{S}(\mathbb{R})$ (where $x \in \mathbb{R}$ denotes the argument of $f\in \mathcal{S}(\mathbb{R})$). Furthermore, we have the equality of tempered distributions \[ \delta(x)=\int_{w \in \mathbb{R}} e^{-2 \pi i x w} \,dw, \] in the sense that for all $f \in \mathcal{S}(\mathbb{R})$, $$ \left (\int_{w \in \mathbb{R}} e^{-2 \pi i x w} \,dw\right ) (f) = \int_{x \in \mathbb{R}} \int_{w \in \mathbb{R}} e^{-2 \pi i x w} f(x) \,dw \, dx \ = f(0) = \delta(x) \cdot f. $$ The second equality follows from applying the Fourier transform $\mathcal{F}$ twice and using the fact that $\mathcal{F}(\mathcal{F}(f))(x) = f(-x)$ for $f\in \mathcal{S}(\mathbb{R}), x \in \mathbb{R}$. Recall also that the definition of the Dirac delta function and the previous argument have multi-dimensional analogues (see for example \cite{Kan} for details). Given a triangulation $X$ of tetrahedra $T_1, \ldots,T_N$, we identify $X^{3}$ to a set of formal real variables $t_{j}$, $j=1, \ldots, N$ via the map $\mathsf{t}\colon T_j \mapsto t_{j}$. We also denote $\mathbf{t} = (t_{1},\ldots,t_{N})$ a formal vector in $\mathbb{R}^{X^{3}}$. \begin{definition} Let $X$ be a triangulation such that $H_2(M_X\setminus X^0,\mathbb{Z})=0$. 
The \textit{kinematical kernel of $X$} is a tempered distribution $\mathcal{K}_X \in \mathcal{S}'\left (\mathbb{R}^{X^{3}}\right )$ defined by the integral $$\mathcal{K}_X(\mathbf{t}) = \int_{\mathbf{x} \in \mathbb{R}^{X^{2}}} d\mathbf{x} \prod_{T \in X^3} e^{ 2 i \pi \varepsilon(T) x_0(T) \mathsf{t}(T)} \delta\left ( x_0(T)- x_1(T)+ x_2(T)\right ) \delta\left ( x_2(T)- x_3(T)+ \mathsf{t}(T)\right ).$$ Here, with a slight abuse of notation, $x_k(T)$ denotes the real variable $x_{x_k(T)}$ that is part of the vector $\mathbf{x} \in \mathbb{R}^{X^{2}}$. \end{definition} One should understand the integral of the previous formula as the following equality of tempered distributions, similarly as above : $$ \mathcal{K}_X(\mathbf{t}) = \int_{\mathbf{x} \in \mathbb{R}^{X^{2}}} d\mathbf{x} \int_{\mathbf{w} \in \mathbb{R}^{2 N}} d\mathbf{w} \ e^{ 2 i \pi \mathbf{t}^T R \mathbf{x}} e^{ -2 i \pi \mathbf{w}^T A \mathbf{x}} e^{ -2 i \pi \mathbf{w}^T B \mathbf{t}} \ \in \mathcal{S}'\left (\mathbb{R}^{X^{3}}\right ), $$ where $\mathbf{w}=(w_1,\ldots,w_N,w'_1, \ldots,w'_N)$ is a vector of $2N$ new real variables, such that $w_j,w'_j$ are associated to $\delta\left ( x_0(T_j)- x_1(T_j)+ x_2(T_j)\right )$ and $\delta\left ( x_2(T_j)- x_3(T_j)+ \mathsf{t}(T_j)\right )$, and where $R,A,B$ are matrices with integer coefficients depending on the values $x_k(T_j)$, i.e. on the combinatorics of the face gluings. More precisely, the rows (resp. columns) of $R$ are indexed by the vector of tetrahedron variables $\mathbf{t}$ (resp. of face variables $\mathbf{x}$) and $R$ has a coefficient $\epsilon(T_j)$ at coordinate $(t_j,x_0(T_j))$ and zero everywhere else; $B$ is indexed by $\mathbf{w}$ (rows) and $\mathbf{t}$ (columns) and has a $1$ at the coordinate $(w'_j,t_j)$; finally, $A$ is such that $A \mathbf{x} + B \mathbf{t}$ is a column vector indexed by $\mathbf{w}$ containing the values $x_0(T_j)- x_1(T_j)+ x_2(T_j), \ x_2(T_j)- x_3(T_j)+ t_j$ in order. \begin{lemma}\label{lem:dirac} If the $2N\times 2N$ matrix $A$ in the previous formula is invertible, then the kinematical kernel is simply a bounded function given by: $$ \mathcal{K}_X(\mathbf{t}) = \frac{1}{| \det(A) |} e^{ 2 i \pi \mathbf{t}^T (-R A^{-1} B) \mathbf{t}}. 
$$\end{lemma} \begin{proof} The lemma follows from the same argument as above (swapping integration symbols and applying the Fourier transform $\mathcal{F}$ twice), this time for the multi-dimensional function $f_{\mathbf{t}}:= \left (\mathbf{x} \mapsto e^{ 2 i \pi \mathbf{t}^T R \mathbf{x}}\right ).$ More precisely: \begin{align*} \mathcal{K}_X(\mathbf{t}) &= \int_{\mathbf{x} \in \mathbb{R}^{X^{2}}} d\mathbf{x} \int_{\mathbf{w} \in \mathbb{R}^{2 N}} d\mathbf{w} \ e^{ 2 i \pi \mathbf{t}^T R \mathbf{x}} e^{ -2 i \pi \mathbf{w}^T A \mathbf{x}} e^{ -2 i \pi \mathbf{w}^T B \mathbf{t}} \\ &= \int_{\mathbf{w} \in \mathbb{R}^{2N}} d\mathbf{w} \ e^{ -2 i \pi \mathbf{w}^T B \mathbf{t}} \int_{\mathbf{x} \in \mathbb{R}^{2N}} d\mathbf{x} \ f_{\mathbf{t}}(\mathbf{x}) e^{ -2 i \pi \mathbf{w}^T A \mathbf{x}}\\ &= \int_{\mathbf{w} \in \mathbb{R}^{2N}} d\mathbf{w} \ e^{ -2 i \pi \mathbf{w}^T B \mathbf{t}} \ \mathcal{F}\left (f_{\mathbf{t}}\right ) (A^T \mathbf{w})\\ &=\frac{1}{| \det(A)|} \int_{\mathbf{v} \in \mathbb{R}^{2N}} d\mathbf{v} \ e^{ -2 i \pi \mathbf{v}^T A^{-1} B \mathbf{t}} \ \mathcal{F}\left (f_{\mathbf{t}}\right ) (\mathbf{v})\\ &=\frac{1}{| \det(A)|} \mathcal{F}\left (\mathcal{F}\left (f_{\mathbf{t}}\right )\right ) (A^{-1} B \mathbf{t}) = \frac{1}{| \det(A)|} f_{\mathbf{t}} (-A^{-1} B \mathbf{t})= \frac{1}{| \det(A) |} e^{ 2 i \pi \mathbf{t}^T (-R A^{-1} B) \mathbf{t}}. \end{align*} \end{proof} The product of several Dirac delta functions might not be a tempered distribution in general. However, the kinematical kernels in this paper will always be tempered distributions, thanks to the assumption that $H_2(M_X\setminus X^0,\mathbb{Z})=0$ (satisfied by the twist knot complements). See \cite{AK} for more details, via the theory of wave fronts. The key property to notice is the linear independence of the terms $x_0(T_j)- x_1(T_j)+ x_2(T_j), \ x_2(T_j)- x_3(T_j)+ t_j$. \begin{definition} Let $X$ be a triangulation. Its \textit{dynamical content} associated to $\hbar>0$ is a function $\mathcal{D}_{\hbar,X}\colon \mathcal{A}_X \to \mathcal{S}\left (\mathbb{R}^{X^{3}}\right )$ defined on each set of angles $\alpha \in \mathcal{A}_X$ by $$\mathcal{D}_{\hbar,X}(\mathbf{t},\alpha)= \prod_{T\in X^{3}} \dfrac{\exp \left( \hbar^{-1/2} \alpha_3(T) \mathsf{t}(T) \right )} {\Phi_\mathsf{b}\left (\mathsf{t}(T) - \dfrac{i}{2 \pi \sqrt{\hbar}}\varepsilon(T) (\pi-\alpha_1(T))\right )^{\varepsilon(T)}}. $$\end{definition} Note that $\mathcal{D}_{\hbar,X}(\cdot,\alpha)$ is in $\mathcal{S}\left (\mathbb{R}^{X^{3}}\right )$ thanks to the properties of $\Phi_\mathsf{b}$ and the positivity of the dihedral angles in $\alpha$ (see \cite{AK} for details). More precisely, each term in the dynamical content has exponential decrease as described in the following lemma. \begin{lemma}\label{lem:dec:exp} Let $\mathsf{b} \in \mathbb{R}_{>0}$ and $a,b,c \in (0,\pi)$ such that $a+b+c=\pi$. Then $$ \left | \dfrac{e^{\frac{1}{ \sqrt{\hbar}} c x}}{\Phi_\mathsf{b}\left (x-\frac{i}{ 2 \pi \sqrt{\hbar}}(b+c)\right )} \right | \underset{\mathbb{R} \ni x \to \pm \infty}{\sim} \left | e^{\frac{1}{ \sqrt{\hbar}} c x} \Phi_\mathsf{b}\left (x+\frac{i}{ 2 \pi \sqrt{\hbar}}(b+c)\right ) \right | \ \ \left \{ \begin{matrix} \underset{\mathbb{R} \ni x \to -\infty}{\sim} e^{\frac{1}{ \sqrt{\hbar}} c x}. \\ \ \\ \underset{\mathbb{R} \ni x \to +\infty}{\sim} e^{-\frac{1}{ \sqrt{\hbar}} b x}. \end{matrix} \right . $$ \end{lemma} \begin{proof} The lemma immediately follows from Proposition \ref{prop:quant:dilog} (4).
\end{proof} Lemma \ref{lem:dec:exp} illustrates why we need the three angles $a,b,c$ to be in $(0,\pi)$: $b$ and $c$ must be positive in order to have exponential decrease in both directions, and $a$ must be as well so that $b+c < \pi$ and $\Phi_\mathsf{b}\left (x \pm \frac{i}{ 2 \pi \sqrt{\hbar}}(b+c)\right )$ is always defined. Now, for $X$ a triangulation such that $H_2(M_X\setminus X^0,\mathbb{Z})=0$, $\hbar>0$ and $\alpha \in \mathcal{A}_X$ an angle structure, the associated \textit{partition function of the Teichm\"uller TQFT} is the complex number: $$\mathcal{Z}_{\hbar}(X,\alpha)= \int_{\mathbf{t} \in \mathbb{R}^{X^3}} \mathcal{K}_X(\mathbf{t}) \mathcal{D}_{\hbar,X}(\mathbf{t},\alpha) d\mathbf{t} \ \ \ \in \mathbb{C}. $$ Andersen and Kashaev proved in \cite{AK} that the modulus $\left |\mathcal{Z}_{\hbar}(X,\alpha) \right | \in \mathbb{R}_{>0}$ is invariant under Pachner moves with positive angles, and then generalised this property to a larger class of moves and triangulations with angles, using analytic continuation in complex-valued $\alpha$ \cite{AKicm}. \begin{remark}\label{rem:mirror} If we denote by $X^\sharp$ the \textit{mirror image} of the triangulation $X$ (obtained by applying a reflection to each tetrahedron), then all tetrahedron signs $\varepsilon(T_j)$ are multiplied by $-1$. Therefore, it follows from the definition of the Teichm\"uller TQFT and Proposition \ref{prop:quant:dilog} (2) that $\mathcal{Z}_{\hbar}(X^\sharp,\alpha) = \overline{\mathcal{Z}_{\hbar}(X,\alpha)},$ and thus $\left |\mathcal{Z}_{\hbar}(X^\sharp,\alpha) \right | = \left |\mathcal{Z}_{\hbar}(X,\alpha) \right |$. Consequently, the following results will hold for the twist knots $K_n$ of Figure \ref{fig:twist:knot} and their mirror images $K^\sharp_n$. \end{remark} We can now state our version of the \textit{volume conjecture} for the Teichm\"uller TQFT, in a slightly different (and less powerful) way from Andersen-Kashaev in \cite[Conjecture 1]{AK}. Notably, we make the statements depend on specific chosen triangulations $X$ and $Y$; thus we will not be interested in the present paper in how the following properties change under Pachner moves or depend on the triangulations. For some insights on these points, see \cite{AK}. We also introduce a new combination of angles $\mu_X$, which has an interesting topological origin. \begin{conj}[see \cite{AK}, Conjecture 1] \label{conj:vol:BAGPN} Let $M$ be a connected closed oriented $3$-manifold and let $K \subset M$ be a hyperbolic knot. There exist an ideal triangulation $X$ of $M \setminus K$ and a one-vertex H-triangulation $Y$ of $(M,K)$ such that $K$ is represented by an edge $\overrightarrow{K}$ in a single tetrahedron $Z$ of $Y$, and $\overrightarrow{K}$ has only one pre-image. Moreover, there exists a function $J_X\colon \mathbb{R}_{>0} \times \mathbb{C} \to \mathbb{C}$ such that the following properties hold: \begin{enumerate} \item There exist $\mu_X, \lambda_X$ linear combinations of dihedral angles in $X$ such that for all angle structures $\alpha \in \mathcal{A}_{X}$ and all $\hbar>0$, we have: \begin{equation*} \left |\mathcal{Z}_{\hbar}(X,\alpha) \right | = \left | \int_{\mathbb{R}+i \frac{\mu_{X}(\alpha) }{2\pi \sqrt{\hbar}} } J_{X}(\hbar,x) e^{\frac{1}{2 \sqrt{\hbar}} x \lambda_{X}(\alpha)} dx \right |. \end{equation*} Moreover, if $M=S^3$, then $J_X$ can be chosen such that $\mu_X, \lambda_X$ are angular holonomies associated to a meridian and a preferred longitude of $K$.
\item For every $\mathsf{b}>0$, and for every $\tau\in \mathcal{S}_{Y \setminus Z} \times \overline{\mathcal{S}_Z}$ such that $\omega_{Y,\tau}$ vanishes on the edge $\overrightarrow{K}$ and is equal to $2\pi$ on every other edge, one has, denoting $\hbar = \frac{1}{(\mathsf{b}+\mathsf{b}^{-1})^2}$: \begin{equation*} \underset{\tiny \begin{matrix}\alpha \to \tau \\ \alpha \in \mathcal{S}_{Y} \end{matrix}}{\lim} \left | \Phi_{\mathsf{b}}\left( \frac{\pi-\omega_{Y,\alpha}\left (\overrightarrow{K}\right )}{2\pi i \sqrt{\hbar}} \right) \mathcal{Z}_{\hbar}(Y,\alpha)\right | = \left | J_{X}(\hbar,0)\right |. \end{equation*} \item In the semi-classical limit $\hbar \to 0^+$, we retrieve the hyperbolic volume of $K$ as: $$ \lim_{\hbar \to 0^+} 2\pi \hbar \log \vert J_{X}(\hbar,0) \vert = -\mathrm{Vol}(M\backslash K).$$ \end{enumerate} \end{conj} The rest of the paper consists in proving Conjecture \ref{conj:vol:BAGPN} for the infinite family of hyperbolic twist knots (in Theorems \ref{thm:trig}, \ref{thm:part:func}, \ref{thm:part:func:Htrig:odd}, \ref{thm:vol:conj}, \ref{thm:even:part:func}, \ref{thm:part:func:Htrig:even} and \ref{thm:even:vol:conj}). Several remarks are in order concerning Conjecture \ref{conj:vol:BAGPN}. \begin{remark} In Conjecture \ref{conj:vol:BAGPN} (1), one may notice that $J_X, \mu_X$ and $\lambda_X$ are not unique, since one can for example replace $(J_X(\hbar,x),x,\mu_X,\lambda_X)$ by \begin{itemize} \item either $(J_X(\hbar,x)e^{-\frac{1}{2 \sqrt{\hbar}}C x},x,\mu_X,\lambda_X+C)$ for any constant $C \in \mathbb{R}$, \item or $(D J_X(\hbar,D x'),x',\mu_X/D,D \lambda_X)$ for any constant $D \in \mathbb{R}^*$ (via the change of variable $x'=x/D$). \end{itemize} Note however that in both cases, the expected limit $\lim_{\hbar \to 0^+} 2\pi \hbar \log \vert J_{X}(\hbar,0) \vert$ does not change. When $M=S^3$, a promising way to reduce ambiguity in the definition of $J_X$ is to impose that $\mu_X(\alpha)$ and $\lambda_X(\alpha)$ are uniquely determined as the angular holonomies of a meridian and a preferred longitude of the knot $K$. In proving Conjecture \ref{conj:vol:BAGPN} (1) for the twist knots in Theorems \ref{thm:part:func} and \ref{thm:even:part:func}, we find such properties for $\mu_X$ and $\lambda_X$. \end{remark} \begin{remark} The function $(\hbar \mapsto J_X(\hbar,0))$ should play the role of the Kashaev invariant in the comparison with the Kashaev-Murakami-Murakami volume conjecture \cite{Ka95,MM}. Notably, the statement of Conjecture \ref{conj:vol:BAGPN} (2) has a form similar to the definition of the Kashaev invariant in \cite{Ka94}, and Conjecture \ref{conj:vol:BAGPN} (3) resembles the volume conjecture stated in \cite{Ka95}, where $\hbar$ corresponds to the inverse of the color $N$. \end{remark} \begin{remark} The final form of the Teichm\"uller TQFT volume conjecture is not yet set in stone, notably because of the non-optimal definitions of the function $(\hbar \mapsto J_X(\hbar,0))$ (in Conjecture \ref{conj:vol:BAGPN} (1) and (2)) and the uncertain invariance of the variables and statements under (ordered) Pachner moves. Nevertheless, we hope Conjecture \ref{conj:vol:BAGPN} as stated here and its resolution can help us better understand how to solve these difficulties in the future. \end{remark} \subsection{Saddle point method} Let $n\geqslant 1$ be an integer.
Recall \cite{Kr} that a complex-valued function $(z_1,\ldots,z_n) \mapsto S(z_1,\ldots,z_n)$ defined on an open subset of $\mathbb{C}^n$ is called \textit{analytic} (or \textit{holomorphic}) if it is analytic in every variable (as a function of one complex variable). Moreover, its \textit{holomorphic gradient} $\nabla S$ is the function valued in $\mathbb{C}^n$ whose coordinates are the partial derivatives $\dfrac{\partial S}{\partial z_j}$, and its \textit{holomorphic hessian} $\mathrm{Hess}(S)$ is the $n\times n$ matrix with coefficients the second partial derivatives $\dfrac{\partial^2 S}{\partial z_j \partial z_k}$; in both of these cases, the \textit{holomorphic} denomination comes from the absence of partial derivatives of the form $\dfrac{\partial }{\partial \overline{z_j}}$. The \textit{saddle point method} is a general name for studying asymptotics of integrals of the form $\int f e^{\lambda S}$ when $\lambda\to +\infty$. The main contribution is expected to come from the saddle points of $S$ that maximize $\Re S$. For an overview of such methods, see \cite[Chapter II]{Wo}. Before going into the details of the saddle point method, let us recall the notion of asymptotic expansion. \begin{definition} Let $f:\Omega \to \mathbb{C}$ be a function where $\Omega \subset \mathbb{C}$ is unbounded. A complex power series $\sum_{n=0}^{\infty} a_n z^{-n}$ (either convergent or divergent) is called an \emph{asymptotic expansion} of $f$ if, for every fixed integer $N \geq 0$, one has \[ f(z) = \sum_{n=0}^{N} a_nz^{-n} + \mathcal{O}(z^{-(N+1)}) \] when $z \to \infty$. In this case, one denotes \[ f(z) \underset{z \to \infty}{\cong} \sum_{n=0}^{\infty} a_nz^{-n}. \] \end{definition} For various properties of asymptotic expansions, see \cite{Wo}. The following theorem is due to Fedoryuk and can be found in \cite[Section 2.4.5]{Fe2} (for the statement) and in \cite[Chapter 5]{Fe1} (for the details and proofs, in Russian). To our knowledge, this is the only version of the saddle point method in the literature for $f,S$ analytic functions in several complex variables. \begin{theorem}[Fedoryuk]\label{thm:SPM} Let $m\geqslant 1$ be an integer, and $ \gamma^m $ an $m$-dimensional smooth compact real sub-manifold of $\mathbb{C}^m$ with connected boundary. We denote $z=(z_1,\ldots,z_m) \in \mathbb{C}^m$ and $dz= dz_1\cdots dz_m$. Let $z\mapsto f(z)$ and $z\mapsto S(z)$ be two complex-valued functions analytic on a domain $D$ such that $\gamma^m \subset D \subset \mathbb{C}^m$. We consider the integral $$F(\lambda) = \int_{\gamma^m} f(z) \exp(\lambda S(z)) \, dz,$$ with parameter $\lambda \in \mathbb{R}$. Assume that $\max_{z \in \gamma^m} \Re S(z)$ is attained only at a point $z^0$, which is an interior point of $\gamma^m$ and a simple saddle point of $S$ (i.e. $\nabla S(z^0)=0$ and $\det \mathrm{Hess}(S)(z^0) \neq 0$). Then as $\lambda \to + \infty$, there is the asymptotic expansion $$F(\lambda) \underset{\lambda \to \infty}{\cong} \left (\dfrac{2\pi}{\lambda}\right )^{m/2} \dfrac{\exp \left ( \lambda S(z^0)\right )}{\sqrt{\det \mathrm{Hess}(S)(z^0)}} \left [ f(z^0) + \sum_{k=1}^\infty c_k \lambda^{-k} \right ], $$ where the $c_k$ are complex numbers and the choice of branch for the root $\sqrt{\det \mathrm{Hess}(S)(z^0)}$ depends on the orientation of the contour $\gamma^m$. In particular, $\lim_{\lambda \to + \infty} \frac{1}{\lambda} \log \vert F(\lambda) \vert = \Re S(z^0)$. \end{theorem} \subsection{Notations and conventions} Let $p \in \mathbb{N}$.
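Before listing the recurring notational conventions, let us record a minimal numerical illustration of the last conclusion of Theorem \ref{thm:SPM} (assuming Python with NumPy and SciPy; it plays no role in the sequel), in the simplest one-dimensional situation $f \equiv 1$, $S(z)=-z^2/2$ and $\gamma^1=[-1,1]$: here $z^0=0$ is the unique simple saddle point maximizing $\Re S$, so that $\frac{1}{\lambda}\log|F(\lambda)| \to \Re S(z^0) = 0$.
\begin{verbatim}
# Minimal numerical sketch of Fedoryuk's theorem in dimension one:
# F(lambda) = int_{-1}^{1} exp(lambda * (-z^2/2)) dz, saddle point z0 = 0.
import numpy as np
from scipy.integrate import quad

def F(lam):
    value, _ = quad(lambda x: np.exp(-lam * x**2 / 2.0), -1.0, 1.0)
    return value

for lam in [10.0, 100.0, 1000.0]:
    # (1/lambda) * log|F(lambda)| should tend to Re S(z0) = 0
    print(lam, np.log(abs(F(lam))) / lam)
\end{verbatim}
The printed values tend to $0$ as $\lambda$ grows, as predicted by the theorem.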
In the following sections, we will use the following recurring conventions: \begin{itemize} \item A roman letter in \textbf{bold} will denote a vector of $p+2$ variables (often integration variables), which are the aforementioned letter indexed by $1, \ldots, p,U, W$. For example, $ \mathbf{y} = (y_1,\ldots,y_p,y_U,y_W)$. \item A roman letter in \textbf{bold} and with a tilde $\widetilde{ \ }$ will have $p+3$ variables indexed by $1, \ldots, p, U,V,W$. For example, $ \widetilde{\mathbf{y}}' = (y'_1,\ldots,y'_p,y'_U,y'_V,y'_W)$. \item Matrices and other vectors of size $p+3$ will also carry a tilde but will not necessarily be in bold, for example $\widetilde{C}(\alpha)=(c_1,\ldots,c_p,c_U,c_V,c_W)$. \item A roman letter in \textbf{bold} and with a hat $\widehat{ \ }$ will have $p+4$ variables indexed by $1, \ldots, p, U,V,W, Z$. For example, $ \widehat{\mathbf{t}} = (t_1,\ldots,t_p,t_U,t_V,t_W,t_Z)$. \end{itemize} For $j \in \{1,\ldots,p,U,V,W,Z\}$, we will also use the conventions that: \begin{itemize} \item the symbols $e_j, f_j$ are faces of a triangulation (for $j \in \{1,\ldots,p\}$), \item the symbol $\overrightarrow{e_j}$ is an edge of a triangulation (for $j \in \{1,\ldots,p\}$), \item the integration variable $t_j$ lives in $\mathbb{R}$, \item the symbols $a_j,b_j,c_j$ are angles in $(0,\pi)$ (sometimes $[0,\pi]$) with sum $\pi$, \item the integration variable $y'_j$ lives in $\mathbb{R} \pm \frac{i(\pi-a_j)}{2 \pi \sqrt{\hbar}}$, \item the integration variable $y_j$ lives in $\mathbb{R} \pm i(\pi-a_j)$, \item the symbols $x_j, d_j$ are the real and imaginary parts of $y_j$, \item the symbol $z_j$ lives in $\mathbb{R} + i \mathbb{R}_{>0}$, \end{itemize} and are (each time) naturally associated to the tetrahedron $T_j$. Moreover, we will simply write $U,V,W,Z$ for the tetrahedra $T_U,T_V,T_W,T_Z$. \section{New triangulations for the twist knots}\label{sec:trig} We describe the construction of new triangulations for the twist knots, starting from a knot diagram and using an algorithm introduced by Thurston in \cite{Th2} and refined in \cite{Me, KLV}. For the odd twist knots the details are in this section, and for the even twist knots they are in Section~\ref{sec:appendix}. \subsection{Statement of results} \begin{theorem}\label{thm:trig} For every $n\geqslant 3$ odd (respectively for every $n\geqslant 2$ even), the triangulations $X_n$ and $Y_n$ represented in Figure \ref{fig:trig:odd} (respectively in Figure \ref{fig:trig:even}) are an ideal triangulation of $S^3 \setminus K_n$ and an H-triangulation of $(S^3,K_n)$ respectively.
\end{theorem} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4,angle=90]{TwistOdd.pdf} \end{center} \caption{An H-triangulation $Y_n$ of $(S^3,K_n)$ (full red part) and an ideal triangulation $X_n$ of $S^3 \setminus K_n$ (dotted red part), for odd $n\geqslant 3$, with $p=\frac{n-3}{2}$.} \label{fig:trig:odd} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4,angle=90]{TwistEven.pdf} \end{center} \caption{An H-triangulation $Y_n$ of $(S^3,K_n)$ (full red part) and an ideal triangulation $X_n$ of $S^3 \setminus K_n$ (dotted red part), for even $n\geqslant 2$, with $p=\frac{n-2}{2}$.} \label{fig:trig:even} \end{figure} Figures \ref{fig:trig:odd} and \ref{fig:trig:even} display an H-triangulation $Y_n$ of $(S^3,K_n)$, and the corresponding ideal triangulation $X_n$ of $S^3 \setminus K_n$ is obtained by replacing the upper left red tetrahedron (partially glued to itself) by the dotted line (note that we omitted the numbers $0,1,2,3$ of the vertices for simplicity). Theorem \ref{thm:trig} is proven by applying an algorithm due to Thurston (later refined by Menasco and Kashaev-Luo-Vartanov) to construct a polyhedral decomposition of $S^3$ where the knot $K_n$ is one of the edges, starting from a diagram of $K_n$; along the way we apply a combinatorial trick to reduce the number of edges and we finish by choosing a convenient triangulation of the polyhedron. Once we have the H-triangulation of $(S^3,K_n)$, we can collapse both the edge representing the knot $K_n$ and its underlying tetrahedron to obtain an ideal triangulation of $S^3 \setminus K_n$. This is detailed in Section \ref{sub:trig:odd} (for odd $n$) and in Section \ref{sub:even:trig} (for even $n$). \subsection{Consequences on Matveev complexity} An immediate consequence of Theorem \ref{thm:trig} is a new upper bound for the Matveev complexity of a general twist knot complement. Recall that the Matveev complexity $\mathfrak{c}(S^3\setminus K)$ of a knot complement is equal to the minimal number of tetrahedra in an ideal triangulation of this knot complement $S^3\setminus K$ (see \cite{Ma} for this definition and the original wider definition using simple spines). \begin{corollary}\label{cor:complexity} Let $n\geqslant 2$. Then the Matveev complexity $\mathfrak{c}\left (S^3\setminus K_n\right )$ of the $n$-th twist knot complement satisfies: $$ \mathfrak{c}\left (S^3\setminus K_n\right ) \leqslant \left \lfloor \frac{n+4}{2} \right \rfloor.$$ \end{corollary} Corollary \ref{cor:complexity} follows immediately from Theorem \ref{thm:trig} and is of double interest. Firstly, this new upper bound, which is roughly half the crossing number of the knot, is strictly better than the upper bounds currently in the literature (to the authors' knowledge). Indeed, the usual upper bound for $\mathfrak{c}\left (S^3\setminus K_n\right )$ is roughly $4$ times the crossing number (see for example \cite[Proposition 2.1.11]{Ma}); a better upper bound for two-bridge knots is given in \cite[Theorem 1.1]{IN}, and is equal to $n$ for the $n$-th twist knot $K_n$. Secondly, experiments on the software \textit{SnapPy} lead us to conjecture that the bound of Corollary \ref{cor:complexity} is actually an exact value. Indeed, up to $n=12$, when we generated an ideal triangulation for $S^3 \setminus K_n$ on \textit{SnapPy}, it always had at least $\left \lfloor \frac{n+4}{2} \right \rfloor$ tetrahedra.
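The following short sketch (assuming the Python package \textit{SnapPy} is installed; it is only an illustration and not part of any proof) shows how such an experiment can be run for the first hyperbolic twist knots, using the knot names from the table of Figure \ref{fig:table:twist:knot}.
\begin{verbatim}
# Sketch of the experiment mentioned above (the "snappy" Python package is
# assumed): for small twist knots K_n, compare the number of tetrahedra of the
# ideal triangulation produced by SnapPy with the bound floor((n+4)/2).
import snappy

twist_knots = {2: '4_1', 3: '5_2', 4: '6_1', 5: '7_2', 6: '8_1', 7: '9_2', 8: '10_1'}
for n, name in twist_knots.items():
    M = snappy.Manifold(name)  # an ideal triangulation of the complement of K_n
    print(n, name, M.num_tetrahedra(), 'vs. bound', (n + 4) // 2)
\end{verbatim}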
Of course, this is only experimental evidence, and proving that $\left \lfloor \frac{n+4}{2} \right \rfloor$ is an actual lower bound seems like a tall order. Notably, lower bounds for $\mathfrak{c}\left (S^3\setminus K_n\right )$ have not yet been found, to the authors' knowledge. Nevertheless, we propose the following conjecture: \begin{conj}\label{conj:matveev} Let $n\geqslant 3$. Then the Matveev complexity $\mathfrak{c}\left (S^3\setminus K_n\right )$ of the $n$-th twist knot complement satisfies: $$ \mathfrak{c}\left (S^3\setminus K_n\right ) = \left \lfloor \frac{n+4}{2} \right \rfloor.$$ \end{conj} In the rest of this section, we present one last lead that gives credence to Conjecture \ref{conj:matveev}, via the notion of complexity of \textit{pairs}. As defined in \cite{PP}, the Matveev complexity $\mathfrak{c}\left (S^3, K_n\right )$ of the knot $K_n$ in $S^3$ is the minimal number of tetrahedra in a triangulation of $S^3$ where $K_n$ is the union of some quotient edges. Since H-triangulations (as defined in this article) are such triangulations, we deduce from Theorem \ref{thm:trig} the following corollary: \begin{corollary}\label{cor:complexity:pair} Let $n\geqslant 2$. Then the Matveev complexity $\mathfrak{c}\left (S^3, K_n\right )$ of the $n$-th twist knot in $S^3$ satisfies: $$ \mathfrak{c}\left (S^3, K_n\right ) \leqslant \left \lfloor \frac{n+6}{2} \right \rfloor.$$ \end{corollary} The upper bound of $\left \lfloor \frac{n+6}{2} \right \rfloor$ for the knots $K_n$ in Corollary \ref{cor:complexity:pair} is better than the upper bound of $4n+10$ in \cite[Proposition 5.1]{PP}, which can be a motivation to see how the results of this section can be expanded to other families of knots in $S^3$. For these same knots $K_n$, the best lower bound to date seems to be of order $\log_5(n)$, see \cite[Theorem 5.4]{PP}. Still, we offer the following conjecture: \begin{conj}\label{conj:matveev:pair} Let $n\geqslant 3$. Then the Matveev complexity $\mathfrak{c}\left (S^3, K_n\right )$ of the $n$-th twist knot in $S^3$ satisfies: $$ \mathfrak{c}\left (S^3, K_n\right ) = \left \lfloor \frac{n+6}{2} \right \rfloor.$$ \end{conj} If true, Conjecture \ref{conj:matveev:pair} would be all the more astonishing as the H-triangulation $Y_n$ of cardinality $\left \lfloor \frac{n+6}{2} \right \rfloor$ would be minimal although it has the double restriction that the knot $K_n$ lies in only one edge of the triangulation of $S^3$ and that $Y_n$ admits a vertex ordering. Conjectures \ref{conj:matveev} and \ref{conj:matveev:pair} are equivalent if and only if the following question admits a positive answer: \begin{question}\label{quest:matveev} Let $n\geqslant 2$. Do the respective Matveev complexities of the $n$-th twist knot complement and of the $n$-th twist knot in $S^3$ differ by $1$, i.e. do we always have $$ \mathfrak{c}\left (S^3, K_n\right ) = \mathfrak{c}\left (S^3\setminus K_n\right )+1 \ \ ?$$ \end{question} Question \ref{quest:matveev} looks far from easy to solve, though. On the one hand, it is not clear that the minimal triangulation for the pair $(S^3,K_n)$ can always yield an ideal triangulation for $S^3\setminus K_n$ by collapsing exactly one tetrahedron (which is the case for $X_n$ and $Y_n$ as we will see in the following section). On the other hand, it is not clear that one can always construct an H-triangulation of $(S^3,K_n)$ from an ideal triangulation of $S^3\setminus K_n$ by adding only one tetrahedron.
The previously mentioned lower bound of the form $\log_5(n)$ for $\mathfrak{c}\left (S^3, K_n\right )$ comes from the general property that $$\frac{1}{2} \mathfrak{c}(M_n) \leqslant \mathfrak{c}\left (S^3, K_n\right )$$ where $M_n$ is the double branched cover of $(S^3,K_n)$ \cite[Proposition 5.2]{PP}. Here $M_n$ happens to be the lens space $L(2n+1,n)$ (see for example \cite[Section 12]{BZH}), whose Matveev complexity is not yet known but conjectured to be $n-1$ through a general conjecture on the complexity of lens spaces \cite[Section 2.3.3 page 77]{Ma}. Hence, if the lens space complexity conjecture holds, then we would have from Corollary \ref{cor:complexity:pair} the double bound $$ \left \lceil \dfrac{n-1}{2} \right \rceil \leqslant \mathfrak{c}\left (S^3, K_n\right ) \leqslant \left \lfloor \frac{n+6}{2} \right \rfloor,$$ which would imply that $\mathfrak{c}\left (S^3, K_n\right )$ can only take four possible values. All this makes Conjecture \ref{conj:matveev:pair} sound more plausible, and Conjecture \ref{conj:matveev} as well by extension. \subsection{Construction for odd twist knots}\label{sub:trig:odd} We first consider a general twist knot $K_n$ for $n\geqslant 3$, $n$ odd. We will construct an H-triangulation of $(S^3,K_n)$ and an ideal triangulation of $S^3 \setminus K_n$ starting from a knot diagram of $K_n$. The method dates back to Thurston \cite{Th} and was also described in more detail in \cite{KLV, Me}. \begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black} , every node/.style={transform shape , knot crossing , inner sep=1.5 pt } ] \begin{scope}[scale=0.7] \begin{scope}[dashed,decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (-2,-2)--(-4,-4); \draw[postaction={decorate}] (-7,1)--(-4,-4); \draw[postaction={decorate}] (-2,-2)--(-4,0); \draw[postaction={decorate}] (-7,1)--(-4,0); \draw[postaction={decorate}] (-7,1)--(-2,6); \draw[postaction={decorate}] (-2,2)--(-4,0); \draw[postaction={decorate}] (-2,2)--(-2,6); \draw[postaction={decorate}] (6,3)--(2,2); \draw[postaction={decorate}] (4,0)--(2,2); \draw[postaction={decorate}] (6,3)--(7,0); \draw[postaction={decorate}] (4,0)--(7,0); \draw[postaction={decorate}] (3,-4)--(7,0); \draw[postaction={decorate}] (3,-4)--(2,-2); \draw[postaction={decorate}] (4,0)--(2,-2); \end{scope} \begin{scope}[dashed,decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (-2,2)--(2,2); \draw[postaction={decorate}] (-2,2)--(-1,4); \draw[postaction={decorate}] (1,4)--(2,2); \draw[postaction={decorate}] (1,4)--(-1,4); \draw[postaction={decorate}] (1,4)--(3,6); \draw[postaction={decorate}] (-2,6)--(-1,4); \draw[postaction={decorate}] (-2,6)--(3,6); \end{scope} \draw[style=dashed] (1,4) -- (6,3); \begin{scope}[xshift=3.5cm, yshift=3.5cm, rotate=-100, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=blue, line width=0.5mm] (3,6) -- (6,3); \draw[color=blue, line width=0.5mm] (2,2) -- (7,0); \draw[color=blue, line width=0.5mm] (4,0) -- (3,-4); \draw[color=blue, line width=0.5mm] (-7,1) -- (-2,-2); \draw[color=blue, line width=0.5mm] (-4,0) -- (-2,6); \draw[color=blue, line width=0.5mm] (3,6) -- (-1,4); \draw[color=blue, line width=0.5mm] (-2,2) -- (1,4); \draw[color=blue, line width=0.5mm] (2,-2) -- (3.3,-1.5); \draw[color=blue, line width=0.5mm] (4,-1.25) -- (7,0); \draw[color=blue, line width=0.5mm] (4,0) -- (4.5,0.7); \draw[color=blue, line width=0.5mm] (4.8,1.2) -- (6,3); \draw[color=blue, line width=0.5mm] 
(-1,4) -- (-0.2,3.5); \draw[color=blue, line width=0.5mm] (0.2,3.2) -- (2,2); \draw[color=blue, line width=0.5mm] (1,4) -- (0.3,4.45); \draw[color=blue, line width=0.5mm] (-0.1,4.7) -- (-2,6); \draw[color=blue, line width=0.5mm] (-7,1) -- (-3.7,1.6); \draw[color=blue, line width=0.5mm] (-3.1,1.75) -- (-2,2); \draw[color=blue, line width=0.5mm] (-4,0) -- (-4,-0.6); \draw[color=blue, line width=0.5mm] (-4,-1.1) -- (-4,-4); \draw[scale=4,color=blue] (0,-3/4) node {$\ldots$}; \draw[scale=2] (0,0) node {$D$}; \draw[scale=2] (3/2,4.5/2) node {$m$}; \draw[scale=2] (2.5/2,3/2) node {$r$}; \draw[scale=2] (-1.5/2,4/2) node {$s$}; \draw[scale=2] (-6/2,4/2) node {$E$}; \end{scope} \end{tikzpicture} \caption{Building an H-triangulation from a diagram of $K_n$} \label{fig:diagram:htriang} \end{figure} For the first step, as in Figure \ref{fig:diagram:htriang}, we choose a middle point for each arc of the diagram, except for one arc where we choose two (the upper right one on the figure), and we draw quadrilaterals around the crossings with the chosen points as vertices (in dashed lines in Figure \ref{fig:diagram:htriang}). We consider the equivalence relation on dotted edges generated by ``being part of the same quadrilateral'', and we choose a way of drawing each class. In Figure \ref{fig:diagram:htriang} there are two such edges, one with a simple arrow and one with a double arrow. We orient the arrows such that the directions keep alternating when one goes around any quadrilateral. There remains one quadrilateral with three dotted edges and one edge from the knot $K_n$. We cut this one into two triangles $m$ and $r$, introducing a third arrow type, the ``white triangle'' one (see Figure \ref{fig:diagram:htriang}). Here $m,r,s,D,E$ are the polygonal $2$-cells that decompose the equatorial plane around the knot; note that $m,r,s$ are triangles, $D$ is an $(n+1)$-gon and $E$ is an $(n+2)$-gon. 
\begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black} , every node/.style={transform shape , knot crossing , inner sep=1.5 pt } ] \draw[scale=1] (0,-1) node {$(a)$}; \draw[scale=1] (8.2,-1) node {$(b)$}; \begin{scope}[scale=0.7] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (-5,2)--(-4,0); \draw[postaction={decorate}] (-5,2)--(-4,5); \draw[postaction={decorate}] (4,5) -- (5,2); \draw[postaction={decorate}] (4,0) -- (5,2); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (-4,5)--(0,7); \end{scope} \draw[color=blue, line width=0.5mm] (0,7) -- (4,5); \draw[scale=2] (0,0) node {$\ldots$}; \draw[->] (3,3.5) -- (2,3.5 +1.5/7); \draw (2,3.5 +1.5/7) -- (-4,5); \begin{scope}[xshift=3.5cm, yshift=4.25cm, rotate=-30, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (3,3.5) -- (4,5); \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (3,3.5) -- (5,2); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.7 with {\arrow{>>}}} ] \draw[postaction={decorate}] (3,3.5) -- (0,7); \end{scope} \draw[scale=2] (3/2,3/2) node {$D$}; \draw[scale=2] (3/2,4.2/2) node {$m$}; \draw[scale=2] (2/2,4.1/2) node {$s$}; \draw[scale=2] (3.7/2,3.5/2) node {$r$}; \draw[scale=2] (-5/2,4/2) node {$E$}; \end{scope} \begin{scope}[xshift=8.2cm,scale=0.7] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (-5,2)--(-4,0); \draw[postaction={decorate}] (-5,2)--(-4,5); \draw[postaction={decorate}] (4,5) -- (5,2); \draw[postaction={decorate}] (4,0) -- (5,2); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (-4,5)--(0,7); \end{scope} \draw[color=blue, line width=0.5mm] (0,7) -- (4,5); \draw[scale=2] (0,0) node {$\ldots$}; \begin{scope}[xshift=0cm, yshift=5cm, rotate=-90, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (-4,5) -- (4,5); \draw[->>] (-5,2) -- (-3.5,2.75); \draw (-3.5,2.75) -- (-2,3.5); \draw[->>] (-4,5) -- (-3,4.25); \draw (-3,4.25) -- (-2,3.5); \draw[->] (4,5) -- (-1,5-1.5*5/6); \draw (-1,5-1.5*5/6) -- (-2,3.5); \draw[scale=2] (-2/2,3/2) node {$D$}; \draw[scale=2] (-1/2,5.5/2) node {$m$}; \draw[scale=2] (-3/2,3.5/2) node {$s$}; \draw[scale=2] (-1.7/2,4/2) node {$r$}; \draw[scale=2] (-5/2,4/2) node {$E$}; \end{scope} \end{tikzpicture} \caption{Boundaries of $B_+$ and $B_-$}\label{fig:boundary:balls} \end{figure} In Figure \ref{fig:diagram:htriang} we can see that around each crossing of the diagram, there are six edges (two in blue from the knot, four dotted with arrows) that delimit an embedded tetrahedron. We will now collapse each of these tetrahedra into one segment, so that each of the two ``knot edges'' are collapsed to an extremal point of the segment and all four dotted edges fuse into a single one, with natural orientation. The homeomorphism type of $(S^3,K_n)$ does not change if we collapse every tetrahedron in such a way, and that is what we do next. After such a collapse, the ambient space (that we will call again $S^3$) decomposes as one $0$-cell (the collapsed point), four edges (simple arrow, double arrow, arrow with a triangle and blue edge coming from $K_n$), five polygonal $2$-cells still denoted $m,r,s,D,E$, and two $3$-balls $B_+$ and $B_-$, respectively from upper and below the figure. 
The boundaries of $B_+$ and $B_-$ are given in Figure \ref{fig:boundary:balls}. Note that the boundary of $B_+$ is obtained from Figure \ref{fig:diagram:htriang} by collapsing the upper strands of $K_n$, and $B_+$ is implicitly residing above Figure \ref{fig:boundary:balls} (a). Similarly, $B_-$ resides behind Figure \ref{fig:boundary:balls} (b). Note that the boundary of $D$, read clockwise, is the sequence of $n+1$ arrows $\twoheadrightarrow, \leftarrow, \rightarrow, \ldots, \leftarrow$ with the simple arrows alternating directions. \begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black} , every node/.style={transform shape , knot crossing , inner sep=1.5 pt } ] \draw[scale=1] (0,-2) node {$(a)$}; \draw[scale=1] (8,-2) node {$(b)$}; \begin{scope}[scale=0.7] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (-5,2)--(-4,0); \draw[postaction={decorate}] (-5,2)--(-4,5); \draw[postaction={decorate}] (4,5) -- (5,2); \draw[postaction={decorate}] (4,0) -- (5,2); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (-4,5)--(0,7); \end{scope} \draw[color=blue, line width=0.5mm] (0,7) -- (4,5); \draw[scale=2] (0,0) node {$\ldots$}; \begin{scope}[xshift=0cm, yshift=5cm, rotate=-90, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (-4,5) -- (4,5); \draw[->>] (-5,2) -- (-3.5,2.75); \draw (-3.5,2.75) -- (-2,3.5); \draw[->>] (-4,5) -- (-3,4.25); \draw (-3,4.25) -- (-2,3.5); \draw[->] (4,5) -- (-1,5-1.5*5/6); \draw (-1,5-1.5*5/6) -- (-2,3.5); \draw[scale=2] (-2/2,3/2) node {$D$}; \draw[scale=2] (-1/2,5.5/2) node {$m$}; \draw[scale=2] (-3/2,3.5/2) node {$s$}; \draw[scale=2] (-1.7/2,4/2) node {$r$}; \begin{scope}[xshift=3.5cm, yshift=4.25cm, rotate=-30, scale=0.2] \draw[color=red] (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \begin{scope}[style=dashed] \draw[color=red][->] (3,3.5) -- (2,3.5 +1.5/7); \draw[color=red] (2,3.5 +1.5/7) -- (-4,5); \draw[color=red] (3,3.5) -- (4,5); \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}][color=red] (3,3.5) -- (5,2); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.7 with {\arrow{>>}}} ] \draw[postaction={decorate}][color=red] (3,3.5) -- (0,7); \end{scope} \draw[scale=2][color=red] (3/2,3/2) node {$D$}; \draw[scale=2][color=red] (3/2,4.2/2) node {$m$}; \draw[scale=2][color=red] (2/2,4.1/2) node {$s$}; \draw[scale=2][color=red] (3.7/2,3.5/2) node {$r$}; \end{scope} \end{scope} \begin{scope}[xshift=8cm,yshift=-1.5cm,scale=0.5] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (0,4)--(0,0); \draw[postaction={decorate}] (0,4)--(0,6); \draw[postaction={decorate}] (0,10) -- (0,12); \draw[postaction={decorate}] (-8,16) -- (0,12); \draw[postaction={decorate}] (4,14) -- (0,0); \draw[postaction={decorate}] (-8,16) -- (-1,5); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (0,4)--(-1,5); \draw[postaction={decorate}] (0,0)--(-1,5); \draw[postaction={decorate}] (4,14) -- (8,16); \draw[postaction={decorate}] (0,0) -- (8,16); \draw[postaction={decorate}] (4,14) -- (0,12); \end{scope} \draw (0,0) -- (-8,16); \begin{scope}[xshift=-4cm, yshift=8cm, rotate=30, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (4,14) -- (-8,16); \begin{scope}[xshift=-2cm, yshift=15cm, rotate=75, scale=0.2] \draw 
(1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=blue, line width=0.5mm] (-8,16) -- (8,16); \draw[scale=2] (0,8/2) node {$\vdots$}; \draw[scale=2] (4/2,15/2) node {$m$}; \draw[scale=2] (2.8/2,13.8/2) node {$r$}; \draw[scale=2] (5/2,14/2) node {$s$}; \draw[scale=2] (3/2,12.5/2) node {$D$}; \draw[scale=2] (-4/2,5/2) node {$m$}; \draw[scale=2] (-1.5/2,4.5/2) node {$r$}; \draw[scale=2] (-0.3/2,3.5/2) node {$s$}; \draw[scale=2] (-0.8/2,6/2) node {$D$}; \end{scope} \end{tikzpicture} \caption{A cellular decomposition of $(S^3,K_n)$ as a polyhedron glued to itself}\label{fig:Htriang:polyhedron} \end{figure} We can now give a new description of $S^3$ by gluing the balls $B_+$ and $B_-$ along the face $E$; the two $3$-cells fuse into one, and its boundary is now as in Figure \ref{fig:Htriang:polyhedron} (a). Indeed, since $B_-$ is behind Figure \ref{fig:boundary:balls} (b) and $B_+$ in front of Figure \ref{fig:boundary:balls} (a), we can picture the gluing along $E$ in the following way, from front to back: \begin{itemize} \item the faces $D,m,r,s$ of $B_-$, \item the $3$-cell $B_-$, \item the face $E$ of $B_-$, \item the face $E$ of $B_+$, \item the $3$-cell $B_+$, \item the faces $D,m,r,s$ of $B_+$. \end{itemize} Note that in Figure \ref{fig:Htriang:polyhedron} (a) the red dashed faces lie on the back of the figure, and the only $3$-cell now lives inside the polyhedron. Finally we can rotate this polyhedron and obtain the cellular decomposition of $S^3$ in Figure \ref{fig:Htriang:polyhedron} (b), where one face $m$ is in the back and the seven other faces lie in front. \begin{figure}[!h] \centering \begin{tikzpicture} [every path/.style={string ,black}] \begin{scope}[xshift=0cm,yshift=0cm,scale=1] \draw[scale=1] (0,-1.5) node {\large $(a)$}; \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,1) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,3) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,4) node[shape=circle,fill=black,scale=0.3] {}; \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,0)--(0,1); \draw[postaction={decorate}] (0,2)--(0,1); \draw[postaction={decorate}] (0,2)--(0,3); \draw[postaction={decorate}] (0,4)--(0,3); \end{scope} \draw[->,>=latex] (0,0) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,1); \draw (-1,1) .. controls +(0,0.5) and +(-0.5,0) .. (0,2); \draw[scale=1] (-0.5,1) node {$u$}; \draw[scale=1] (1,2) node {$F$}; \draw[scale=1] (-1,3) node {$F$}; \draw[scale=1] (0,-0.5) node {$\vdots$}; \draw[scale=1] (0,4.5) node {$\vdots$}; \end{scope} \begin{scope}[xshift=4.5cm,yshift=0cm,scale=1] \draw[scale=1] (0,-1.5) node {\large $(b)$}; \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,1) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,3) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,4) node[shape=circle,fill=black,scale=0.3] {}; \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,0)--(0,1); \draw[postaction={decorate}] (0,2)--(0,1); \draw[postaction={decorate}] (0,2)--(0,3); \draw[postaction={decorate}] (0,4)--(0,3); \end{scope} \draw[->,>=latex] (0,0) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,1); \draw (-1,1) .. controls +(0,0.5) and +(-0.5,0) .. (0,2); \draw[->>,>=latex] (0,2) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,3); \draw (-1,3) .. controls +(0,0.5) and +(-0.5,0) .. 
(0,4); \draw[->>,>=latex] (0,0) .. controls +(0.5,0) and +(0,-0.5) .. (1,1); \draw (1,1) .. controls +(0,0.5) and +(0.5,0) .. (0,2); \draw[scale=1] (-0.5,1) node {$u$}; \draw[scale=1] (-0.5,3) node {$v$}; \draw[scale=1] (0.5,1) node {$v$}; \draw[scale=1] (1,2) node {$F'$}; \draw[scale=1] (-1,2) node {$F'$}; \draw[scale=1] (0,-0.5) node {$\vdots$}; \draw[scale=1] (0,4.5) node {$\vdots$}; \end{scope} \begin{scope}[xshift=9cm,yshift=0cm,scale=1] \draw[scale=1] (2,-1.5) node {\large $(c)$}; \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,3) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,4) node[shape=circle,fill=black,scale=0.3] {}; \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,2)--(0,3); \draw[postaction={decorate}] (0,4)--(0,3); \end{scope} \draw[->,>=latex] (0,0) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,1); \draw (-1,1) .. controls +(0,0.5) and +(-0.5,0) .. (0,2); \draw[->>,>=latex] (0,2) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,3); \draw (-1,3) .. controls +(0,0.5) and +(-0.5,0) .. (0,4); \draw[->>,>=latex] (0,0) .. controls +(0.5,0) and +(0,-0.5) .. (1,1); \draw (1,1) .. controls +(0,0.5) and +(0.5,0) .. (0,2); \draw[scale=1] (0,1) node {$w$}; \draw[scale=1] (-0.5,3) node {$v$}; \draw[scale=1] (1,2) node {$F'$}; \draw[scale=1] (-1,2) node {$F'$}; \draw[scale=1] (0,-0.5) node {$\vdots$}; \draw[scale=1] (0,4.5) node {$\vdots$}; \draw[scale=1] (2,2) node {$\sqcup$}; \begin{scope}[xshift=4cm,yshift=1cm] \draw[->,>=latex] (0,0) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,1); \draw (-1,1) .. controls +(0,0.5) and +(-0.5,0) .. (0,2); \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,1) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,0)--(0,1); \draw[postaction={decorate}] (0,2)--(0,1); \end{scope} \draw[->>,>=latex] (0,0) .. controls +(0.5,0) and +(0,-0.5) .. (1,1); \draw (1,1) .. controls +(0,0.5) and +(0.5,0) .. (0,2); \draw[scale=1] (-0.5,1) node {$u$}; \draw[scale=1] (0.5,1) node {$v$}; \draw[scale=1] (0,2.3) node {$w$}; \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-7cm,scale=1] \draw[scale=1] (2,-1.5) node {\large $(d)$}; \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,3) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,4) node[shape=circle,fill=black,scale=0.3] {}; \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,2)--(0,3); \draw[postaction={decorate}] (0,4)--(0,3); \end{scope} \draw[->,>=latex] (0,0) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,1); \draw (-1,1) .. controls +(0,0.5) and +(-0.5,0) .. (0,2); \draw[->>,>=latex] (0,2) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,3); \draw (-1,3) .. controls +(0,0.5) and +(-0.5,0) .. (0,4); \draw[->>,>=latex] (0,0) .. controls +(0.5,0) and +(0,-0.5) .. (1,1); \draw (1,1) .. controls +(0,0.5) and +(0.5,0) .. (0,2); \draw[scale=1] (0,1) node {$w$}; \draw[scale=1] (-0.5,3) node {$v$}; \draw[scale=1] (1,2) node {$F'$}; \draw[scale=1] (-1,2) node {$F'$}; \draw[scale=1] (0,-0.5) node {$\vdots$}; \draw[scale=1] (0,4.5) node {$\vdots$}; \draw[scale=1] (2,2) node {$\sqcup$}; \begin{scope}[xshift=4cm,yshift=1cm] \draw[->>,>=latex] (0,0) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,1); \draw (-1,1) .. 
controls +(0,0.5) and +(-0.5,0) .. (0,2); \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (1,1) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,0)--(1,1); \draw[postaction={decorate}] (0,2)--(1,1); \end{scope} \draw[->,>=latex] (0,0) -- (0,1); \draw (0,1) -- (0,2); \draw[scale=1] (-0.5,1) node {$w$}; \draw[scale=1] (0.5,1) node {$u$}; \draw[scale=1] (0,2.3) node {$v$}; \end{scope} \end{scope} \begin{scope}[xshift=8.5cm,yshift=-7cm,scale=1] \draw[scale=1] (0,-1.5) node {\large $(e)$}; \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,3) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,4) node[shape=circle,fill=black,scale=0.3] {}; \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,2)--(0,3); \draw[postaction={decorate}] (0,4)--(0,3); \end{scope} \draw[->,>=latex] (0,0) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,1); \draw (-1,1) .. controls +(0,0.5) and +(-0.5,0) .. (0,2); \draw[->>,>=latex] (0,2) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,3); \draw (-1,3) .. controls +(0,0.5) and +(-0.5,0) .. (0,4); \draw[->>,>=latex] (0,0) .. controls +(0.5,0) and +(0,-0.5) .. (1,1); \draw (1,1) .. controls +(0,0.5) and +(0.5,0) .. (0,2); \draw[->,>=latex] (0,2) .. controls +(-0.25,0) and +(0,-0.5) .. (-0.5,3); \draw (-0.5,3) .. controls +(0,0.5) and +(-0.25,0) .. (0,4); \draw[scale=1] (0,1) node {$w$}; \draw[scale=1] (-0.25,3) node {$u$}; \draw[scale=1] (-0.75,3) node {$w$}; \draw[scale=1] (1,2) node {$F'$}; \draw[scale=1] (-1,2) node {$F'$}; \draw[scale=1] (0,-0.5) node {$\vdots$}; \draw[scale=1] (0,4.5) node {$\vdots$}; \end{scope} \begin{scope}[xshift=13cm,yshift=-7cm,scale=1] \draw[scale=1] (0,-1.5) node {\large $(f)$}; \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,3) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,4) node[shape=circle,fill=black,scale=0.3] {}; \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,2)--(0,3); \draw[postaction={decorate}] (0,4)--(0,3); \end{scope} \draw[->,>=latex] (0,0) -- (0,1); \draw (0,1) -- (0,2); \draw[->,>=latex] (0,2) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,3); \draw (-1,3) .. controls +(0,0.5) and +(-0.5,0) .. (0,4); \draw[scale=1] (-0.5,3) node {$u$}; \draw[scale=1] (1,2) node {$F''$}; \draw[scale=1] (-1,2) node {$F''$}; \draw[scale=1] (0,-0.5) node {$\vdots$}; \draw[scale=1] (0,4.5) node {$\vdots$}; \end{scope} \end{tikzpicture} \caption{The bigon trick} \label{fig:bigon:trick} \end{figure} We will now use the \textit{bigon trick} to find another polyhedral description of $(S^3,K_n)$ with many fewer edges. The bigon trick is described in Figure \ref{fig:bigon:trick} (a) to (f). We start at (a), with the two faces $F$ having several edges in common, and a triangle $u$ adjacent to $F$ (note that there is a second face $u$ adjacent to the other $F$ somewhere else). Then we go to (b) by cutting $F$ along a new edge (with double full arrow) into $F'$ and a triangle $v$. The CW-complex described in (b) is the same as the one in (c), where the right part is a $3$-ball whose boundary is cut into the triangles $u$ and $v$ and the bigon $w$. 
The picture in (d) is simply the one from (c) with the ball rotated so that $v$ lies in the back instead of $w$. Then we obtain (e) by gluing the two parts of (d) along the face $v$, and finally (f) by fusing $F'$ and $w$ into a new face $F''$. As a result, we replaced two simple arrows by one longer different (full) arrow and we slided the face $u$ up. \begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black}] \draw[scale=1] (0,-2) node {$(a)$}; \draw[scale=1] (8,-2) node {$(b)$}; \begin{scope}[xshift=0cm,yshift=-1.5cm,scale=0.45] \draw (0,6) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,8) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,10) node[shape=circle,fill=black,scale=0.3] {}; \draw[->,>=latex] (0,4) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,7); \draw (-1,7) .. controls +(0,0.5) and +(-0.5,0) .. (0,8); \draw[scale=2] (-0.5/2,7/2) node {$u$}; \draw[->,>=latex] (4,14) -- (2,9); \draw (2,9) -- (0,4); \draw[scale=2] (0.5/2,3.5/2) node {$u$}; \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (0,4)--(0,0); \draw[postaction={decorate}] (0,4)--(0,6); \draw[postaction={decorate}] (0,8)--(0,6); \draw[postaction={decorate}] (0,10) -- (0,12); \draw[postaction={decorate}] (-8,16) -- (0,12); \draw[postaction={decorate}] (4,14) -- (0,0); \draw[postaction={decorate}] (-8,16) -- (-1,5); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (0,4)--(-1,5); \draw[postaction={decorate}] (0,0)--(-1,5); \draw[postaction={decorate}] (4,14) -- (8,16); \draw[postaction={decorate}] (0,0) -- (8,16); \draw[postaction={decorate}] (4,14) -- (0,12); \end{scope} \draw (0,0) -- (-8,16); \begin{scope}[xshift=-4cm, yshift=8cm, rotate=30, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (4,14) -- (-8,16); \begin{scope}[xshift=-2cm, yshift=15cm, rotate=75, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=blue, line width=0.5mm] (-8,16) -- (8,16); \draw[scale=2] (0,9/2) node {$\vdots$}; \draw[scale=2] (4/2,15/2) node {$m$}; \draw[scale=2] (2.8/2,13.8/2) node {$r$}; \draw[scale=2] (5/2,14/2) node {$s$}; \draw[scale=2] (2/2,11.5/2) node {$D'$}; \draw[scale=2] (-4/2,5/2) node {$m$}; \draw[scale=2] (-1.5/2,4.5/2) node {$r$}; \draw[scale=2] (-0.3/2,3.5/2) node {$s$}; \draw[scale=2] (-4/2,12/2) node {$D'$}; \end{scope} \begin{scope}[xshift=8cm,yshift=-1.5cm,scale=0.45] \draw (0,6) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,8) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,10) node[shape=circle,fill=black,scale=0.3] {}; \draw[->>,>=latex] (4,14) -- (2,12); \draw (2,12) -- (0,10); \draw[scale=2] (1.5/2,12/2) node {$v$}; \draw[->>,>=latex] (0,4) -- (-4,10); \draw (-4,10) -- (-8,16); \draw[scale=2] (-1.8/2,6.1/2) node {$v$}; \draw[->,>=latex] (0,10) -- (-4,13); \draw (-4,13) -- (-8,16); \draw[scale=2] (-0.5/2,11/2) node {$u$}; \draw[->,>=latex] (4,14) -- (2,9); \draw (2,9) -- (0,4); \draw[scale=2] (0.5/2,3.5/2) node {$u$}; \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (0,4)--(0,0); \draw[postaction={decorate}] (0,10) -- (0,12); \draw[postaction={decorate}] (-8,16) -- (0,12); \draw[postaction={decorate}] (4,14) -- (0,0); \draw[postaction={decorate}] (-8,16) -- (-2,6); \end{scope} \draw[->,>=latex] (0,4) -- (0,5); \draw (0,5) -- (0,6); \draw[->,>=latex] (0,8) -- (0,9); \draw (0,9) -- (0,10); \begin{scope}[decoration={ markings, mark=at position 0.5 
with {\arrow{>>}}} ] \draw[postaction={decorate}] (0,4)--(-2,6); \draw[postaction={decorate}] (0,0)--(-2,6); \draw[postaction={decorate}] (4,14) -- (8,16); \draw[postaction={decorate}] (0,0) -- (8,16); \draw[postaction={decorate}] (4,14) -- (0,12); \end{scope} \draw (0,0) -- (-8,16); \begin{scope}[xshift=-4cm, yshift=8cm, rotate=30, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (4,14) -- (-8,16); \begin{scope}[xshift=-2cm, yshift=15cm, rotate=75, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=blue, line width=0.5mm] (-8,16) -- (8,16); \draw[scale=2] (0,7/2) node {$\vdots$}; \draw[scale=2] (4/2,15/2) node {$m$}; \draw[scale=2] (2.8/2,13.8/2) node {$r$}; \draw[scale=2] (5/2,14/2) node {$s$}; \draw[scale=2] (2/2,11/2) node {$G$}; \draw[scale=2] (-4/2,5/2) node {$m$}; \draw[scale=2] (-2.3/2,5.5/2) node {$r$}; \draw[scale=2] (-0.3/2,3.5/2) node {$s$}; \draw[scale=2] (-4/2,12/2) node {$G$}; \end{scope} \end{tikzpicture} \caption{A cellular decomposition of $(S^3,K_n)$ before and after the bigon trick}\label{fig:Htriang:bigon:trick} \end{figure} Let us now go back to our cellular decomposition of $(S^3,K_n)$. We start from Figure \ref{fig:Htriang:polyhedron} (b) and cut $D$ into new faces $u$ and $D'$ as in Figure \ref{fig:Htriang:bigon:trick} (a). Then we apply the bigon trick $p$ times, where $p:=\tfrac{n-3}{2}$, to slide the cell $u$ on the left $D'$, and finally we cut the face obtained from $D'$ a final time into a $p+2$-gon $G$ and a triangle $v$ by adding a double full arrow. See Figure \ref{fig:Htriang:bigon:trick} (b). Note that if $n=3$, i.e. $p=0$, we do not use the bigon trick, and simply denote $D'$ by $v$. In this case, $G$ is empty and the double full arrow should be identified with the simple full arrow. 
\begin{figure}[!h] \centering \begin{tikzpicture} [every path/.style={string ,black}] \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,1) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,3) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,5) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,6) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,7) node[shape=circle,fill=black,scale=0.3] {}; \draw (5,3.5) node[shape=circle,fill=black,scale=0.3] {}; \draw (-5,3.5) node[shape=circle,fill=black,scale=0.3] {}; \draw[scale=1] (0,4) node {$\vdots$}; \draw[scale=1,color=black] (1,1) node {$e_1$}; \draw[scale=1,color=black] (-0.2,1.5) node {$e_1$}; \draw[scale=1,color=black] (1,1.8) node {$e_2$}; \draw[scale=1,color=black] (-0.2,2.4) node {$e_2$}; \draw[scale=1,color=black] (1,5) node {$e_{p-1}$}; \draw[scale=1,color=black] (-1.5,4) node {$e_{p-1}$}; \draw[scale=1,color=black] (1,5.9) node {$e_p$}; \draw[scale=1,color=black] (-3,3) node {$e_p$}; \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=green] (3.5,3) circle (0.2) node{\scriptsize $1$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=green] (-0.3,1.5) circle (0.2) node{\scriptsize $1$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5,color=purple] \draw[color=purple] (3.5,4.5) circle (0.2) node{\scriptsize $2$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5,color=purple] \draw[color=purple] (-1.3,2.8) circle (0.2) node{\scriptsize $2$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=violet] (3.5,5.8) node{\tiny $p-2$}; \node[draw,ellipse,color=violet] (S) at(3.5,5.8) {\ \ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=violet] (-2.5,4) node{\tiny $p-2$}; \node[draw,ellipse,color=violet] (S) at(-2.5,4) {\ \ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=red] (3.5,6.8) node{\tiny $p-1$}; \node[draw,ellipse,color=red] (S) at(3.5,6.8) {\ \ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=red] (-4,5.5) node{\tiny $p-1$}; \node[draw,ellipse,color=red] (S) at(-4,5.5) {\ \ \ \ }; \end{scope} \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate},color=green] (5,3.5)--(0,1); \draw[postaction={decorate},color=purple] (5,3.5)--(0,2); \draw[postaction={decorate},color=violet] (5,3.5)--(0,5); \draw[postaction={decorate},color=red] (5,3.5)--(0,6); \draw[postaction={decorate},color=green] (0,0) .. controls +(-0.6,0.5) and +(-0.5,0) .. (0,2); \draw[postaction={decorate},color=purple] (0,0) .. controls +(-1.2,0.7) and +(-0.7,0) .. (0,3); \draw[postaction={decorate},color=violet] (0,0) .. controls +(-2.5,1.5) and +(-0.7,0) .. (0,6); \draw[postaction={decorate},color=red] (0,0) .. controls +(-4,3) and +(-2,-2) .. 
(0,7); \end{scope} \draw[->>,>=latex] (0,0) -- (-3,2.1); \draw (-3,2.1) -- (-5,3.5); \draw[->,>=latex] (0,7) -- (-3,7-2.1); \draw (-3,7-2.1) -- (-5,3.5); \draw (0,0) -- (3,2.1); \draw[<-,>=latex] (3,2.1) -- (5,3.5); \draw (0,7) -- (3,7-2.1); \draw[<<-,>=latex] (3,7-2.1) -- (5,3.5); \begin{scope}[yshift=0cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \begin{scope}[yshift=1cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \begin{scope}[yshift=2cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \begin{scope}[yshift=5cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \begin{scope}[yshift=6cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \end{tikzpicture} \caption{Decomposing the two faces $G$ in a tower of tetrahedra} \label{fig:GG:tower} \end{figure} Then, if $p\geqslant 1$, we triangulate the two faces $G$ as in Figure \ref{fig:GG:tower}: we add $p-1$ new edges drawn with simple arrows and circled $k$ for $k=1, \ldots, p-1$ (and drawn in different colors in Figure \ref{fig:GG:tower} but not in the following pictures), and $G$ is cut into $p$ triangles $e_1, \ldots, e_p$. This still makes sense if $p=1$: in this case we have $G=e_p=e_1$ and no new edges. Now, by combining Figures \ref{fig:Htriang:bigon:trick} (b) and \ref{fig:GG:tower}, we obtain a decomposition of $S^3$ as a polyhedron with only triangular faces glued to one another, and $K_n$ is still represented by the blue edge after the identifications. In order to harmonize the notation with the small cases ($p=0,1$), we make the following arrow replacements: \begin{itemize} \item full black simple arrow by simple arrow with circled $0$, \item full black double arrow by simple arrow with circled $p$, \item white triangle simple arrow by simple arrow with circled $p+1$. \end{itemize} Moreover, we cut the previous polyhedron of Figures \ref{fig:Htriang:bigon:trick} (b) and \ref{fig:GG:tower} into $p+4$ tetrahedra, introducing new triangular faces $e_{p+1}$ (behind $r,u,v$), $g$ (behind $r,s,v$), $s'$ (completing $m,m,s$), $f_p$ (completing $g,s',u$) and $f_1, \ldots, f_{p-1}$ at each of the $p-1$ ``floors'' of the tower of Figure \ref{fig:GG:tower} (from front to back of the figure). We add the convention $f_0=e_1$ to account for the case $p=0$. We also choose an orientation for the blue edge and thus a sign for the tetrahedron that contains it (this choice will have no influence on the ideal triangulation, though). Finally, we obtain the H-triangulation for $(S^3,K_n)$ described in Figure \ref{fig:H:trig:odd}, for any $p \geqslant 0$ (recalling the convention $f_0=e_1$ if need be).
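As a quick combinatorial sanity check on this construction (not needed in what follows), the cell counts listed right after Figure \ref{fig:H:trig:odd} can be tabulated for small odd $n$ and the Euler characteristic verified to vanish, as it must for a triangulation of $S^3$. Here is a minimal Python sketch, purely illustrative (the range of values of $n$ is arbitrary):
\begin{verbatim}
# Illustrative sanity check: cell counts of the H-triangulation Y_n of (S^3, K_n),
# n odd, as listed after Figure "fig:H:trig:odd".
for n in range(3, 21, 2):                  # arbitrary sample of odd values of n
    p = (n - 3) // 2
    vertices, edges = 1, p + 5             # edges: (n + 7) / 2
    faces, tetrahedra = 2 * p + 8, p + 4   # faces: n + 5, tetrahedra: (n + 5) / 2
    euler = vertices - edges + faces - tetrahedra
    assert euler == 0                      # the Euler characteristic of S^3 is 0
    print(n, p, vertices, edges, faces, tetrahedra)
\end{verbatim}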
\begin{figure}[!h] \begin{tikzpicture} \begin{scope}[xshift=1cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_1$} ; \draw (0,-0.6) node{$e_2$} ; \draw (-0.5,0.3) node{$f_1$} ; \draw (0.5,0.3) node{$e_1$} ; \draw (0,-1.4) node{\large $T_1$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $1$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) circle (0.15) node{\scriptsize $2$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $1$}; \end{scope} \end{scope} \draw (3.5,0) node{$\ldots$} ; \draw (8.5,0) node{$\ldots$} ; \begin{scope}[xshift=6cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_k$} ; \draw (0,-0.7) node{$e_{k+1}$} ; \draw (-0.5,0.3) node{$f_k$} ; \draw (0.5,0.3) node{$f_{k-1}$} ; \draw (0,-1.4) node{\large $T_k$} ; \draw (0,-1.7) node{\tiny $(2\leqslant k \leqslant p-1)$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $k$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $k-1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) node{\tiny $k+1$}; \node[draw,ellipse] (S) at(-0.6,-0.6) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $k$}; \end{scope} \end{scope} \begin{scope}[xshift=11cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) 
node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_p$} ; \draw (0,-0.7) node{$e_{p+1}$} ; \draw (-0.5,0.3) node{$f_p$} ; \draw (0.5,0.3) node{$f_{p-1}$} ; \draw (0,-1.4) node{\large $T_p$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $p-1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-0.6,-0.6) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $p$}; \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (1,1) node{$r$} ; \draw (0,-0.6) node{$v$} ; \draw (-0.5,0.3) node{$g$} ; \draw (0.5,0.3) node{$s$} ; \draw (0,-1.4) node{\large $U$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow d=black}}] (0,0)--(1.732/2,-0.5); \draw[->] (0,0)--(-1.732*0.3,-1*0.3); \draw (-1.732*0.3,-1*0.3)--(-1.732/2,-1/2); \draw[color=black][-<](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->>](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) circle (0.15) node{\scriptsize $p$}; \end{scope} \end{scope} \begin{scope}[xshift=4cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (0.7,1) node{$g$} ; \draw (0,-0.6) node{$s'$} ; \draw (-0.5,0.3) node{$u$} ; \draw (0.5,0.3) node{$f_p$} ; \draw (0,-1.4) node{\large $V$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(-1.732/2,-0.5); \draw[color=black,<-](1.732/2*0.6,-0.5*0.6) -- (0,0); \draw[color=black](1.732/4,-0.25) -- (1.732/2,-0.5); \draw[color=black](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black,->] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); 
\draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.2,-1.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \end{scope} \begin{scope}[xshift=8cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (1,1) node{$u$} ; \draw (0,-0.6) node{$r$} ; \draw (-0.5,0.3) node{$e_{p+1}$} ; \draw (0.5,0.3) node{$v$} ; \draw (0,-1.4) node{\large $W$} ; \draw[->](0,0)--(0,0.6); \draw(0,0.6)--(0,1); \path [draw=black,postaction={on each segment={mid arrow d=black}}] (0,0)--(1.732/2,-0.5); \draw[color=black,<-](-1.732/2*0.6,-0.5*0.6) -- (0,0); \draw[color=black](-1.732/4,-0.25) -- (-1.732/2,-0.5); \draw[color=black][-<](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $p$}; \end{scope} \end{scope} \begin{scope}[xshift=12cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$m$} ; \draw (0,-0.6) node{$m$} ; \draw (-0.5,0.3) node{$s$} ; \draw (0.5,0.3) node{$s'$} ; \draw (0,-1.4) node{\large $Z$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \draw[color=black,<-](1.732/2*0.6,-0.5*0.6) -- (0,0); \draw[color=black](1.732/4,-0.25) -- (1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow d=black}}] (0,0)--(-1.732/2,-0.5); \draw[very thick,color=blue][->](1.732/2,-0.5) arc (-30:-90:1); \draw[very thick,color=blue] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black,->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->>](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.6,-0.7) {\ \ \ }; \end{scope} \end{scope} \end{tikzpicture} \caption{The H-triangulation $Y_n$ for $(S^3,K_n)$, $n$ odd, $n \geqslant 3$, with $p=\frac{n-3}{2}$} \label{fig:H:trig:odd} \end{figure} In the H-triangulation of Figure \ref{fig:H:trig:odd} there are \begin{itemize} \item $1$ common vertex, \item $p+5 = \frac{n+7}{2}$ edges (simple arrow 
$\overrightarrow{e_s}$, double arrow $\overrightarrow{e_d}$, blue simple arrow $\overrightarrow{K_n}$, and the simple arrows $\overrightarrow{e_0}, \ldots, \overrightarrow{e_{p+1}}$ indexed by $0, \ldots p+1$ in circles), \item $2p+8 = n+5$ faces ($e_1, \ldots, e_{p+1}, f_1, \ldots, f_{p},g, m,r,s,s',u,v $), \item $p+4 = \frac{n+5}{2}$ tetrahedra ($T_1, \ldots, T_{p}, U, V, W, Z$) . \end{itemize} We are now ready to obtain an ideal triangulation of $S^3 \setminus K_n$. From the H-triangulation of $(S^3,K_n)$ of Figure \ref{fig:H:trig:odd}, let us collapse the whole tetrahedron $Z$ into a triangle: this transforms the blue edge (corresponding to $K_n$) into a point, collapses the two faces $m$, and identifies the faces $s$ and $s'$ in a new face also called $s$, and the double arrow edge to the arrow with circled $p+1$. Hence we get an ideal triangulation of the knot complement $S^3 \setminus K_n$, detailed in Figure \ref{fig:id:trig:odd}. \begin{figure}[!h] \begin{tikzpicture} \begin{scope}[xshift=1cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_1$} ; \draw (0,-0.6) node{$e_2$} ; \draw (-0.5,0.3) node{$f_1$} ; \draw (0.5,0.3) node{$e_1$} ; \draw (0,-1.4) node{\large $T_1$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $1$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) circle (0.15) node{\scriptsize $2$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $1$}; \end{scope} \end{scope} \draw (3.5,0) node{$\ldots$} ; \draw (8.5,0) node{$\ldots$} ; \begin{scope}[xshift=6cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_k$} ; \draw (0,-0.7) node{$e_{k+1}$} ; \draw (-0.5,0.3) node{$f_k$} ; \draw (0.5,0.3) node{$f_{k-1}$} ; \draw (0,-1.4) node{\large $T_k$} ; \draw (0,-1.7) node{\tiny $(2\leqslant k \leqslant p-1)$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); 
\begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $k$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $k-1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) node{\tiny $k+1$}; \node[draw,ellipse] (S) at(-0.6,-0.6) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $k$}; \end{scope} \end{scope} \begin{scope}[xshift=11cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_p$} ; \draw (0,-0.7) node{$e_{p+1}$} ; \draw (-0.5,0.3) node{$f_p$} ; \draw (0.5,0.3) node{$f_{p-1}$} ; \draw (0,-1.4) node{\large $T_p$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $p-1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-0.6,-0.6) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $p$}; \end{scope} \end{scope} \begin{scope}[xshift=1cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (1,1) node{$r$} ; \draw (0,-0.6) node{$v$} ; \draw (-0.5,0.3) node{$g$} ; \draw (0.5,0.3) node{$s$} ; \draw (0,-1.4) node{\large $U$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->] (0,0)--(-1.732*0.3,-1*0.3); \draw (-1.732*0.3,-1*0.3)--(-1.732/2,-1/2); \draw[color=black][-<](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-1.5,1) {\ \ \ }; \end{scope} 
\begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) circle (0.15) node{\scriptsize $p$}; \end{scope} \end{scope} \begin{scope}[xshift=6cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (0.7,1) node{$g$} ; \draw (0,-0.6) node{$s$} ; \draw (-0.5,0.3) node{$u$} ; \draw (0.5,0.3) node{$f_p$} ; \draw (0,-1.4) node{\large $V$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(-1.732/2,-0.5); \draw[color=black,<-](1.732/2*0.6,-0.5*0.6) -- (0,0); \draw[color=black](1.732/4,-0.25) -- (1.732/2,-0.5); \draw[color=black](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black,->] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.2,-1.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \end{scope} \begin{scope}[xshift=11cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (1,1) node{$u$} ; \draw (0,-0.6) node{$r$} ; \draw (-0.5,0.3) node{$e_{p+1}$} ; \draw (0.5,0.3) node{$v$} ; \draw (0,-1.4) node{\large $W$} ; \draw[->](0,0)--(0,0.6); \draw(0,0.6)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[color=black,<-](-1.732/2*0.6,-0.5*0.6) -- (0,0); \draw[color=black](-1.732/4,-0.25) -- (-1.732/2,-0.5); \draw[color=black][-<](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $p$}; \end{scope} \end{scope} \end{tikzpicture} \caption{The ideal triangulation $X_n$ for $S^3\setminus K_n$, $n$ odd, $n \geqslant 3$, with $p=\frac{n-3}{2}$} 
\label{fig:id:trig:odd} \end{figure} In Figure \ref{fig:id:trig:odd} there are \begin{itemize} \item $1$ common vertex, \item $p+3 = \frac{n+3}{2}$ edges (simple arrow $\overrightarrow{e_s}$ and the simple arrows $\overrightarrow{e_0}, \ldots, \overrightarrow{e_{p+1}}$ indexed by $0, \ldots, p+1$ in circles), \item $2p+6 = n+3$ faces ($e_1, \ldots, e_{p+1}, f_1, \ldots, f_{p}, g,r,s,u,v $), \item $p+3 = \frac{n+3}{2}$ tetrahedra ($T_1, \ldots, T_{p}, U, V, W$). \end{itemize} \subsection{Proof of Theorem \ref{thm:trig}} We can now conclude with the proof of Theorem \ref{thm:trig}. \begin{proof}[Proof of Theorem \ref{thm:trig}] The triangulations of Figures \ref{fig:H:trig:odd} and \ref{fig:id:trig:odd} correspond to the common ``comb representation'' of Figure \ref{fig:trig:odd}. Similarly, the triangulations of Figures \ref{fig:H:trig:even} and \ref{fig:id:trig:even} (constructed in Section \ref{sub:even:trig}) correspond to the common ``comb representation'' of Figure \ref{fig:trig:even}. \end{proof} \section{Angle structures and geometricity (odd case)}\label{sec:geom} In this section, $n$ will be an odd integer greater than or equal to $3$. \subsection{Geometricity of the ideal triangulations} Here we will compute the balanced angle relations for the ideal triangulations $X_n$ and their spaces of angle structures $\mathcal{A}_{X_n}$. We will then prove that the $X_n$ are \textit{geometric}. \begin{theorem}\label{thm:geometric} For every odd $n \geqslant 3$, the ideal triangulation $X_n$ of the $n$-th twist knot complement $S^3 \setminus K_n$ is geometric. \end{theorem} To prove Theorem \ref{thm:geometric}, we follow Futer-Gu\'eritaud \cite{FG}: we first prove that the space of angle structures $\mathcal{A}_{X_n}$ is non-empty (Lemma \ref{lem:non:empty}); then we prove by contradiction that the volume functional cannot attain its maximum on the boundary $\overline{\mathcal{A}_{X_n}} \setminus \mathcal{A}_{X_n}$ (Lemma \ref{lem:interior}). For the remainder of this section, $n$ will be a fixed odd integer, $n \geqslant 7$. Recall that $p=\frac{n-3}{2}$. The cases $n=3, 5$ (i.e. $p=0, 1$) are similar to and simpler than the general case $n \geqslant 7$ treated below, and will be discussed at the end of this section (Remark \ref{rem:p1}). Recall that we denoted by $\overrightarrow{e_0}, \ldots, \overrightarrow{e_{p+1}}, \overrightarrow{e_s} \in (X_n)^1$ the $p+3$ edges in $X_n$, respectively represented in Figure \ref{fig:id:trig:odd} by the arrows with circled $0$, \ldots, circled $p+1$ and by the simple arrow.
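Note, as a quick consistency check on the counts above, that the single vertex, the $p+3$ edges, the $2p+6$ faces and the $p+3$ tetrahedra of $X_n$ give $$1-(p+3)+(2p+6)-(p+3)=1,$$ as expected: the pseudo-manifold underlying $X_n$ is obtained from the knot exterior (whose Euler characteristic is $0$) by collapsing the boundary torus to the single ideal vertex.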
For $\alpha=(a_1,b_1,c_1,\ldots,a_p,b_p,c_p,a_U,b_U,c_U,a_V,b_V,c_V,a_W,b_W,c_W) \in \mathcal{S}_{X_n}$ a shape structure on $X_n$, we compute the weights of each edge: \begin{itemize} \item $\omega_s(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_s})= 2 a_U+b_V+c_V+a_W+b_W $ \item $\omega_0(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_0})= 2 a_1 + c_1 + 2 a_2 + \ldots + 2 a_p + a_V+c_W $ \item $\omega_1(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_1})= 2b_1+c_2 $ \\ \item $\omega_k(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_k})= c_{k-1}+2b_k+c_{k+1} $ \ \ (for $2\leqslant k \leqslant p-1$) \\ \vspace*{-2mm} \item $\omega_p(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_p})= c_{p-1}+2b_p+b_U+b_V+a_W$ \item $\omega_{p+1}(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_{p+1}})= c_p+b_U+2c_U+a_V+c_V+b_W+c_W $ \end{itemize} The space of angle structures $\mathcal{A}_{X_n}$ is made of shape structures $\alpha \in \mathcal{S}_{X_n}$ satisfying $\omega_j(\alpha)=2\pi$ for all $j\in\{s,0,\ldots,p+1\}$. The sum of all these equations says that all the angles add up to $(p+3)\pi$, which is true in any shape structure, therefore we can drop $\omega_0(\alpha)$ as redundant. Using the properties of shape structures, $\mathcal{A}_{X_n}$ is thus defined by the $p+2$ following equations on $\alpha$: \begin{itemize} \item $E_s(\alpha): \ 2 a_U =a_V+c_W $ \item $E_1(\alpha): \ 2b_1+c_2 = 2 \pi$ \\ \vspace*{-2mm} \item $E_k(\alpha): \ c_{k-1}+2b_k+c_{k+1} = 2 \pi $ \ \ (for $2\leqslant k \leqslant p-1$) \\ \vspace*{-2mm} \item $E_p(\alpha): \ c_{p-1}+2b_p+ (b_U+b_V+a_W)=2\pi$ \item $E_{p+1}(\alpha): \ 3c_p + (a_U+a_V+c_W) + 3(c_U+c_V+b_W) = 3\pi~;$ \end{itemize} the last line was obtained as $3B_{p+1}+2 B_s - 3F_U - 2F_V - 2F_W$, where $F_j$ is the relationship $a_j+b_j+c_j=\pi$ and $B_j$ is the relationship $\omega_{j}(\alpha) = 2 \pi$. In other words, $$\mathcal{A}_{X_n} = \{ \alpha \in \mathcal{S}_{X_n} \ | \ \forall j \in \{s,1,\ldots,p+1\}, \ E_j(\alpha)\}.$$ \begin{lemma}\label{lem:non:empty} The set $\mathcal{A}_{X_n}$ is non-empty. \end{lemma} \begin{proof} For small $\epsilon>0$, define: $$ \begin{pmatrix}a_j\\b_j\\c_j \end{pmatrix}:= \begin{pmatrix}\epsilon\\ \pi - \epsilon(j^2+1) \\ \epsilon j^2 \end{pmatrix} \text{for } 1\leqslant j \leqslant p-1 , \ \begin{pmatrix}a_p\\b_p\\c_p \end{pmatrix}:= \begin{pmatrix} \pi/2 - \epsilon(p^2+2p-1)/2 \\ \pi/2 - \epsilon(p^2-2p+1)/2 \\ \epsilon p^2 \end{pmatrix}, $$ $$ \begin{pmatrix}a_U\\b_U\\c_U \end{pmatrix} = \begin{pmatrix}a_V\\b_V\\c_V \end{pmatrix} = \begin{pmatrix}c_W\\a_W\\b_W \end{pmatrix}:= \begin{pmatrix} \pi/2 + \epsilon p^2/2 \\ \pi/3 \\ \pi/6 - \epsilon p^2/2 \end{pmatrix}. $$ By direct computation, we can check that this $\alpha$ is a shape structure (the angles are in $(0,\pi)$ if $\epsilon$ is small enough), and that the equations $E_j(\alpha)$ are satisfied for $j\in\{s,1,\ldots,p+1\}$. \end{proof} We will say that a tetrahedron $T$ of a triangulation $X$ endowed with an extended shape structure $\alpha \in \overline{\mathcal{S}_X}$ is \textit{flat for $\alpha$} if one of the three angles of $T$ is zero, and \textit{taut for $\alpha$} if two angles are zero and the third is $\pi$. In both cases, $T$ has a volume equal to zero. \begin{lemma}\label{lem:flat:taut} Suppose $\alpha \in \overline{\mathcal{A}_{X_n}} \setminus \mathcal{A}_{X_n}$ is such that the volume functional on $\overline{\mathcal{A}_{X_n}}$ is maximal at $\alpha$. 
If an angle of $\alpha$ equals $0$, then the other two angles for the same tetrahedron are $0$ and~$\pi$. In other words, if a tetrahedron is flat for $\alpha$, then it is taut for $\alpha$. \end{lemma} \begin{proof} We refer to \cite[Proposition 7.1]{Gf} for the proof. \end{proof} Next, we claim that among the volume maximizers, there is one such that $(a_U,b_U,c_U)=(a_V,b_V,c_V)=(c_W,a_W,b_W)$. The involution $(a_V, b_V, c_V) \leftrightarrow (c_W,a_W,b_W)$ preserves all equations $E_j(\alpha)$, so by concavity of the volume function, there is a maximizer such that $(a_V, b_V, c_V)=(c_W,a_W,b_W)$. By $E_s(\alpha)$ this implies $a_U=a_V=c_W$. The order-3 substitution of variables $$(a_U, b_U, c_U) \rightarrow (a_V, b_V, c_V) \rightarrow (c_W, a_W, b_W) \rightarrow (a_U, b_U, c_U)$$ then clearly leaves $E_p$ and $E_{p+1}$ unchanged, so by concavity we may average out and find a maximizer such that $U,V,W$ have the same angles, as desired. These identifications make $E_s(\alpha)$ redundant. Moreover, dropping the angles of $V$ and $W$ as variables, we may now rewrite the system of constraints as \begin{itemize} \item $E_1 : \ 2b_1+c_2 = 2 \pi$ \vspace{2mm} \item $E_k : \ c_{k-1}+2b_k+c_{k+1} = 2 \pi $ \quad (for $2\leqslant k \leqslant p-1$) \vspace{2mm} \item $E'_p : \ c_{p-1}+2b_p + 3b_U = 2 \pi$ \item $E'_{p+1} : \ c_p + a_U + 3c_U = \pi$ \quad (not $2\pi$!). \end{itemize} \begin{lemma}\label{lem:interior} Suppose that the volume functional on $\overline{\mathcal{A}_{X_n}}$ is maximal at $\alpha$. Then $\alpha$ cannot be on the boundary $\overline{\mathcal{A}_{X_n}} \setminus \mathcal{A}_{X_n}$, and is necessarily in the interior $\mathcal{A}_{X_n}$. \end{lemma} \begin{proof} First, the tetrahedron $T_p$ is not flat, i.e.\ not taut. Indeed, on one hand $c_p=\pi$ would by $E'_{p+1}$ entail $a_U=c_U=0$, hence $b_U=\pi$, incompatible with $E'_p$. On the other hand, suppose $c_p=0$, then the non-negative sequence $(0, c_1, \dots, c_p)$ is convex, because $E_k$ can be rewritten $c_{k-1} - 2c_k + c_{k+1} = 2 a_k \geq 0$ (agreeing that ``$c_0$'' stands for $0$). Hence $c_1=\dots=c_p=0$, and $b_p\in\{0,\pi \}$ by Lemma~\ref{lem:flat:taut}. If $b_p=0$ then $(E'_p, E'_{p+1})$ yield $(a_U, b_U, c_U)=(0,2\pi/3, \pi/3)$. If $b_p=\pi$ they yield $(a_U, b_U, c_U)=(\pi,0,0)$. In either case, all tetrahedra are flat so the volume vanishes and cannot be maximal: this contradiction shows $c_p>0$. Next, we show that $U$ is not flat. We cannot have $c_U=\pi$ or $b_U=\pi$, by $E'_{p+1}$ and $E'_p$. But $a_U=\pi$ is also impossible, since by $E'_{p+1}$ it would imply $c_p=0$, ruled out above. We can see by induction that $b_1,\dots, b_{p-1}>0$: the initialisation is given by $E_1$, written as $b_1=\pi-c_2/2\geq \pi/2$. For the induction step, suppose $b_{k-1}>0$ for some $1< k \leq p-1$: then $c_{k-1}<\pi$, hence $E_k$ implies $b_k>0$. Finally, $b_1,\dots, b_{p-1}<\pi$: we show this by \emph{descending} induction. Initialisation: by $E_{p-1}$, we have $b_{p-1}\leq \pi-c_p/2 < \pi$ since $T_p$ is not flat. For the induction step, suppose $b_{k+1}<\pi$ for some $1\leq k < p-1$: then $0< b_{k+1}<\pi$ by the previous induction, hence $c_{k+1}>0$ by Lemma~\ref{lem:flat:taut}, hence $E_k$ implies $b_k<\pi$. \end{proof} \begin{remark}[Cases $p=0,1$]\label{rem:p1} The above discussion is valid for $p\geq 2$. If $p=1$, we have only the weights $\omega_s$, $\omega_{p+1}$ and $\omega_p$, the latter taking the form $2b_p+b_U+b_V+a_W$ (i.e.\ the variable ``$c_{p-1}$'' disappears from equation $E'_p$). 
The argument is otherwise unchanged --- the inductions in the proof of Lemma~\ref{lem:interior} being empty. If $p=0$, we find only one equation $E'_{p+1}: \ a_U+3c_U=\pi$ (i.e.\ the variable ``$c_p$'' disappears). The volume maximizer $(a_U, b_U, c_U)$ on the segment from $(\pi,0,0)$ to $(0,2\pi/3, \pi/3)$ yields the complete hyperbolic metric. \end{remark} \begin{proof}[Proof of Theorem \ref{thm:geometric}] In the case $n\geqslant 7$, we have proven in Lemma \ref{lem:non:empty} that $\mathcal{A}_{X_n}$ is non- empty, thus the volume functional $\mathcal{V}\colon \overline{\mathcal{A}_{X_n}}\to \mathbb{R}$ admits a maximum at a certain point $\alpha \in \overline{\mathcal{A} _{X_n}}$ as a continuous function on a non-empty compact set. We proved in Lemma \ref{lem:interior} that $\alpha \notin \overline{\mathcal{A}_{X_n}} \setminus \mathcal{A}_{X_n}$, therefore $\alpha \in \mathcal{A}_{X_n}$. It follows from Theorem \ref{thm:casson:rivin} that $X_n$ is geometric. For the cases $n=3$ and $n=5$, we follow the same reasoning, replacing Lemma~\ref{lem:interior} with Remark~\ref{rem:p1}. \end{proof} \subsection{The cusp triangulation} \label{sub:cusp:trig} \begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black}] \begin{scope}[scale=0.65] \draw[color=blue] (11-21,0.5) node {$3_U$}; \draw (10.5-21,4) node {$v$}; \draw (11.7-21,5) node {$s$}; \draw (-10.5,-1.7) node {$r$}; \draw[color=brown] (-11.7,-1.7) node {$a$}; \draw[color=brown] (-9.4,-1.7) node {$b$}; \draw[color=brown] (11.8-21,10) node {$c$}; \draw[color=blue] (-10.3,9) node {$3_W$}; \draw (10.5-21,6) node {$v$}; \draw (10.5-21,11.7) node {$r$}; \draw (-11.7,4) node {$u$}; \draw[color=brown] (-11.8,1) node {$b$}; \draw[color=brown] (9.3-21,11.7) node {$a$}; \draw[color=brown] (11.7-21,11.7) node {$c$}; \draw[color=blue] (-7.9,3) node {$3_V$}; \draw (-7.7,-0.8) node {$g$}; \draw (-8.7,6) node {$s$}; \draw (-7.6,4.8) node {$f_p$}; \draw[color=brown] (-8.8,-1.5) node {$a$}; \draw[color=brown] (-6.3,0.3) node {$b$}; \draw[color=brown] (-8.8,10) node {$c$}; \draw[color=blue] (-5,6.5) node {$3_p$}; \draw (-7,6) node {$f_p$}; \draw (-4,10) node {$e_{p+1}$}; \draw (-3,5) node {$e_p$}; \draw[color=brown] (-0.4,8.8) node {$a$}; \draw[color=brown] (-5.9,0.6) node {$b$}; \draw[color=brown] (-8.7,11.6) node {$c$}; \draw[color=blue] (-2,11) node { $2_W$}; \draw (-3.5,10.5) node { $e_{p+1}$}; \draw (-0.3,10.5) node { $u$}; \draw (-4.5,11.7) node { $r$}; \draw[color=brown] (-0.3,11.7) node {$a$}; \draw[color=brown] (-7.2,11.7) node {$b$}; \draw[color=brown] (-0.3,9.3) node {$c$}; \draw[color=blue] (-6,-1.3) node {$2_U$}; \draw (-6.8,-0.8) node {$g$}; \draw (-5,-1.7) node {$r$}; \draw (-3.5,-1.2) node {$v$}; \draw[color=brown] (-2,-1.7) node {$a$}; \draw[color=brown] (-6,-0.3) node {$b$}; \draw[color=brown] (-8,-1.7) node {$c$}; \draw[color=blue] (-1.5,-0.5) node {$1_W$}; \draw (-2.7,-0.8) node { $v$}; \draw (-2.6,0.1) node { $e_{p+1}$}; \draw (-0.3,-0.5) node { $u$}; \draw[color=brown] (-5.1,-0.1) node {$a$}; \draw[color=brown] (-0.3,-1.6) node {$b$}; \draw[color=brown] (-0.3,0.6) node {$c$}; \draw[color=blue] (11,0.5) node {$2_V$}; \draw (10.5,6) node {$g$}; \draw (11.7,5) node {$u$}; \draw (10.5,11.7) node {$s$}; \draw[color=brown] (9.3,11.7) node {$b$}; \draw[color=brown] (11.8,10) node {$c$}; \draw[color=brown] (11.7,11.7) node {$a$}; \draw[color=blue] (10.5,9.5) node {$1_U$}; \draw (10.5,4) node {$g$}; \draw (21-11.7,4) node {$r$}; \draw (21-10.5,-1.7) node {$s$}; \draw[color=brown] (21-11.8,1) node {$c$}; 
\draw[color=brown] (21-11.7,-1.7) node {$a$}; \draw[color=brown] (21-9.4,-1.7) node {$b$}; \draw[color=blue] (6,10+1.2) node {$0_U$}; \draw (7,11) node {$v$}; \draw (5,11.7) node {$s$}; \draw (3.5,11.2) node {$g$}; \draw[color=brown] (2,11.7) node {$a$}; \draw[color=brown] (6,10.3) node {$b$}; \draw[color=brown] (8,11.7) node {$c$}; \draw[color=blue] (8,7) node {$0_W$}; \draw (7.7,10.8) node {$v$}; \draw (8.7,4) node {$r$}; \draw (7.8,10-4.8) node {$e_{p+1}$}; \draw[color=brown] (8.8,11.5) node {$c$}; \draw[color=brown] (6.3,10-0.3) node {$a$}; \draw[color=brown] (8.8,0) node {$b$}; \draw[color=blue] (5,10-6.5) node {$0_p$}; \draw (6.9,10-6) node {$e_{p+1}$}; \draw (4,0) node {$f_{p}$}; \draw (3.4,5) node {$f_{p-1}$}; \draw[color=brown] (0.4,10-8.8) node {$a$}; \draw[color=brown] (5.9,10-0.6) node {$b$}; \draw[color=brown] (8.7,10-11.6) node {$c$}; \draw[color=blue] (1.7,-0.8) node {$0_{V}$}; \draw (0.3,-0.5) node { $u$}; \draw (2.5,-0.3) node { $f_p$}; \draw (4.5,-1.8) node { $s$}; \draw[color=brown] (0.2,0.7) node { $a$}; \draw[color=brown] (0.2,-1.7) node { $b$}; \draw[color=brown] (7.7,-1.8) node { $c$}; \draw[color=blue] (1.5,10.6) node {$1_{V}$}; \draw (0.2,10.5) node { $u$}; \draw (2,9.7) node { $f_p$}; \draw (3,10.7) node { $g$}; \draw[color=brown] (0.2,9.4) node {$a$}; \draw[color=brown] (5,10.1) node {$b$}; \draw[color=brown] (0.2,11.7) node {$c$}; \draw (0,-2)--(10,-2)--(12,-2)--(12,12)--(9,12)--(0,12)--(-9,12)--(-12,12)--(-12,-2)--(-9,-2)--(0,-2); \draw (0,-2)--(0,1)--(9,-2)--(6,10)--(9,12)--(9,-2); \draw (9,-2)--(12,12); \draw (6,10)--(0,12)--(0,9)--(-9,12)--(-6,0)--(0,-2); \draw (-6,0)--(-9,-2)--(-9,12); \draw (-12,-2)--(-9,12); \draw[color=black] (-6,0)--(0,9)--(6,10)--(0,1)--(-6,0); \draw[color=black] (-6,0)--(-3.6,2)--(0,1)--(-2.4,3)--(-1.2,4)--(0,1)--(0,9)--(-1.2,4); \draw[color=black] (-3.6,2)--(0,9)--(-2.4,3); \draw[color=black] (0,9)--(1.2,6)--(0,1)--(2.4,7)--(0,9)--(3.6,8)--(0,1); \draw[color=black] (1.2,6)--(2.4,7); \draw[color=black] (3.6,8)--(6,10); \draw[color=violet,style=dashed,very thick] (10,-2)--(10,6); \draw[color=violet,style=dashed, very thick,<-] (10,6)--(10,12); \draw[color=violet] (9.6,7) node {\scriptsize $m_{X_n}$}; \draw[color=teal,style=dashed, very thick,->] (-12,5.5)--(-11,5.5+3.25); \draw[color=teal,style=dashed, very thick] (-11,5.5+3.25)--(-10,12); \draw[color=teal,style=dashed, very thick,->] (-10,-2)--(-9,-1)--(-5,-1); \draw[color=teal,style=dashed, very thick] (-5,-1)--(0,-1)--(7.5,-2); \draw[color=teal,style=dashed, very thick,->] (7.5,12)--(7.5+0.9,12-1.3) ; \draw[color=teal,style=dashed, very thick] (7.5+0.9,12-1.3)--(12,5.5) ; \draw[color=teal] (-11,10) node {\scriptsize $l_{X_n}$}; \draw[color=blue] (3.3,8.5) node {$1_p$}; \draw (4.5,9.1) node {$e_p$}; \draw (3.4,9.3) node {$f_p$}; \draw (2.1,8.8) node {$f_{p-1}$}; \draw[color=brown] (1,8.9) node {$a$}; \draw[color=blue] (4,7.7) node {\tiny $0_{p-1}$}; \draw (4.5,8.45) node {\tiny $e_p$}; \draw (3,6.5) node {\tiny $f_{p-2}$}; \draw (4,7.2) node {\tiny $f_{p-1}$}; \draw[color=brown] (1.8,4) node {\tiny $a$}; \draw[color=blue] (-3.9,2.3) node {\tiny $3_{p-1}$}; \draw (-3,4) node {\tiny $e_p$}; \draw (-3.45,3) node {\tiny $e_{p-1}$}; \draw (-4.35,1.7) node {\tiny $f_{p-1}$}; \draw[color=brown] (-1.2,6.95) node {\tiny $a$}; \draw[color=blue] (-3.4,1.2) node {$2_{p}$}; \draw (-2,1.4) node {\tiny $e_p$}; \draw (-3,0.65) node {\tiny $e_{p+1}$}; \draw (-4.2,0.9) node {\tiny $f_{p-1}$}; \draw[color=brown] (-1,1.05) node {\tiny $a$}; \draw[color=blue] (-0.7,4) node {$2_{1}$}; \draw (-0.5,5.9) 
node {\tiny $e_1$}; \draw (-0.3,4.7) node {\tiny $e_1$}; \draw (-0.5,3) node {\tiny $e_2$}; \draw[color=brown] (-0.2,2) node {\tiny $a$}; \draw[color=blue] (-1.4,4.5) node {$3_1$}; \draw (-1,5.7) node {\tiny $e_1$}; \draw (-1.35,5) node {\tiny $e_2$}; \draw (-1.65,3.9) node {\tiny $f_{1}$}; \draw[color=brown] (-0.65,7) node {\tiny $a$}; \draw[color=blue] (-1.7,2.9) node {$2_2$}; \draw (-1.3,2.3) node {\tiny $e_3$}; \draw (-1,2.8) node {\tiny $e_2$}; \draw (-1.5,3.45) node {\tiny $f_{1}$}; \draw[color=brown] (-0.6,1.8) node {\tiny $a$}; \draw[color=blue] (0.5,6) node {$1_{1}$}; \draw (0.25,4.8) node {\tiny $e_1$}; \draw (0.5,4) node {\tiny $e_1$}; \draw (0.5,7) node {\tiny $f_1$}; \draw[color=brown] (0.15,8.1) node {\tiny $a$}; \draw[color=blue] (1.8,7.1) node {$1_{2}$}; \draw (1.6,6.6) node {\tiny $e_2$}; \draw (1.3,7.6) node {\tiny $f_2$}; \draw (1.1,7) node {\tiny $f_1$}; \draw[color=brown] (0.5,8.3) node {\tiny $a$}; \draw[color=blue] (1.45,5.5) node {$0_1$}; \draw (1.8,6.2) node {\tiny $e_2$}; \draw (1.1,4.5) node {\tiny $e_1$}; \draw (1.4,5) node {\tiny $f_1$}; \draw[color=brown] (0.65,3) node {\tiny $a$}; \draw (3-0.12,7.5-0.1) node[shape=circle,fill=black,scale=0.2] {}; \draw (3,7.5) node[shape=circle,fill=black,scale=0.2] {}; \draw (3+0.12,7.5+0.1) node[shape=circle,fill=black,scale=0.2] {}; \draw (-3-0.12,2.5-0.1) node[shape=circle,fill=black,scale=0.2] {}; \draw (-3,2.5) node[shape=circle,fill=black,scale=0.2] {}; \draw (-3+0.12,2.5+0.1) node[shape=circle,fill=black,scale=0.2] {}; \draw[color=red, very thick,->] (-12,12)--(-11,12); \draw[color=red, very thick] (-11,12)--(-9,12); \draw[color=red] (-10.7,12.5) node {(i)}; \draw[color=red, very thick,->] (-9,12)--(-9,4); \draw[color=red, very thick] (-9,4)--(-9,-2); \draw[color=red] (-8.4,4) node {(ii)}; \draw[color=red, very thick,->] (-9,-2)--(-4,-2); \draw[color=red, very thick] (-4,-2)--(0,-2); \draw[color=red] (-4,-2.5) node {(iii)}; \draw[color=red, very thick,->] (0,12)--(4,12); \draw[color=red, very thick] (4,12)--(9,12); \draw[color=red] (4,12.5) node {(iv)}; \draw[color=red, very thick,->] (9,12)--(9,6); \draw[color=red, very thick] (9,6)--(9,-2); \draw[color=red] (8.4,6) node {(v)}; \draw[color=red, very thick,->] (9,-2)--(11,-2); \draw[color=red, very thick] (11,-2)--(12,-2); \draw[color=red] (10.7,-2.5) node {(vi)}; \end{scope} \end{tikzpicture} \caption{Triangulation of the boundary torus for the truncation of $X_n$, $n$ odd, with angles (brown), meridian curve $m_{X_n}$ (violet, dashed), longitude curve $l_{X_n}$ (green, dashed) and preferred longitude curve (i)$\cup \ldots \cup$(vi) (red).}\label{fig:trig:cusp:odd} \end{figure} If we truncate the ideal triangulation $X_n$ of Figure \ref{fig:id:trig:odd} by removing a small neighborhood of each vertex, then we obtain a cellular decomposition by compact truncated tetrahedra of the knot exterior $S^3 \setminus \nu(K_n)$ (where $\nu(K)$ is an open tubular neighborhood of $K$). This induces a triangulation on the boundary torus $\partial \nu(K_n)$, where each triangle comes from a pre-quotient vertex of a tetrahedron of $X$. See Figure \ref{fig:trig:cusp:odd} for the full description of the triangulation of this torus. 
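As a quick consistency check (not needed in the sequel), one can count the cells of this induced triangulation of $\partial \nu(K_n)$: each of the $p+3$ tetrahedra of $X_n$ contributes $4$ triangles (one per truncated vertex), each of the $2p+6$ faces contributes $3$ edges (one per truncated corner), and each of the $p+3$ edges contributes $2$ vertices (one near each endpoint), so that $$2(p+3)-3(2p+6)+4(p+3)=0,$$ as expected for a triangulation of the torus.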
The triangles are labeled (in blue) with the names of the corresponding truncated vertices (written $k_j$ for the $k$-th vertex in the $j$-th tetrahedron), the edges are labeled (in black) with the names of the truncated faces they are part of, and the angles $a,b,c$ at each corner of a triangle (in brown) come from the corresponding truncated edges in $X_n$. Note that we omitted the indices on $a,b,c$ for readability; the angles $a,b,c$ in the triangle $k_j$ are of course the coordinates $a_j,b_j,c_j$. Moreover, for some small faces we only indicated the brown $a$ angle for readability; the $b$ and $c$ angles follow clockwise (since all the tetrahedra concerned have positive sign). We drew three particular curves in Figure \ref{fig:trig:cusp:odd}: $m_{X_n}$ in violet and dashed, $l_{X_n}$ in green and dashed, and finally the concatenation (i) $\cup \ldots \cup$ (vi) in red. These curves can be seen as generators of the first homology group of the torus. We call $m_{X_n}$ a \textit{meridian curve} since it actually comes from the projection to $\partial \nu(K_n)$ of a meridian curve in $S^3 \setminus K_n$, the one circling the knot and going through faces $s$ and $E$ on the upper left of Figure \ref{fig:diagram:htriang}, to be exact (we encourage the motivated reader to check this fact by following the curve through the successive pictures from Figure \ref{fig:diagram:htriang} to \ref{fig:id:trig:odd}). Similarly, $l_{X_n}$ and (i) $\cup \ldots \cup$ (vi) are two distinct \textit{longitude curves}, and (i) $\cup \ldots \cup$ (vi) corresponds to a \textit{preferred longitude} of the knot $K_n$, i.e. a longitude with zero linking number with the knot. This last fact can be checked in Figure \ref{fig:longitude:odd}: on the bottom of the figure, the sub-curves (i) to (vi) are drawn on the truncated tetrahedron $U$; on the top of the figure, the corresponding full longitude curve (in red) is drawn in the exterior of the knot (in blue) before the knot is collapsed to a point (compare with Figure \ref{fig:diagram:htriang}). We check that in each square on the left of the figure, the sum of the signs of crossings between blue and red strands is zero (the signs are marked in green circled $+$ and $-$), and thus the red longitude curve has zero linking number with the knot, i.e. is a preferred longitude. \begin{figure} \includegraphics[scale=1.7]{OddLongitude.pdf} \caption{A preferred longitude (i)$\cup \ldots \cup$(vi) (in red) for the odd twist knot $K_n$, seen in $S^3 \setminus K_n$ (top) and on the truncated tetrahedron $U$ (bottom).}\label{fig:longitude:odd} \end{figure} To the curves $m_{X_n}$ and $l_{X_n}$ are associated combinations of angles (the \textit{angular holonomies}) $$m_{X_n}(\alpha):= H^\mathbb{R}(m_{X_n}) = a_U-a_V \ \ \text{and} \ \ l_{X_n}(\alpha):=H^\mathbb{R}(l_{X_n}) = 2(c_V-b_W),$$ following the convention that when the curve crosses a triangle, the angle that the curve separates from the other two is counted positively if it lies on the left of the curve, and negatively if it lies on the right. Note that this convention cannot be rigorously applied to the red curve {(i) $\cup \ldots \cup$ (vi)} in Figure \ref{fig:trig:cusp:odd}, since it lies on edges and vertices.
Nevertheless, one can see in Figure \ref{fig:trig:cusp:odd} that in the homology group of the boundary torus, we have the relation $$ \mathrm{(i)} \cup \ldots \cup \mathrm{(vi)} = l_{X_n} + 2 m_{X_n}.$$ \subsection{The complex gluing equations}\label{sub:complete:odd} Here seems to be an appropriate place to list the complex versions of the balancing and completeness equations for $X_n$, which will be useful in Section \ref{sec:vol:conj}. For a complex shape structure $\widetilde{\mathbf{z}}=(z_1,\ldots,z_p,z_U,z_V,z_W) \in (\mathbb{R}+i\mathbb{R}_{>0})^{p+3}$, its complex weight functions are: \begin{itemize} \item $\omega^{\mathbb{C}}_s(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_s})= 2\mathrm{Log}(z_U) + \mathrm{Log}(z'_V) + \mathrm{Log}(z''_V) + \mathrm{Log}(z_W) + \mathrm{Log}(z'_W) $ \item $\omega^{\mathbb{C}}_0(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_0})= 2\mathrm{Log}(z_1) + \mathrm{Log}(z'_1) + 2\mathrm{Log}(z_2) + \cdots + 2\mathrm{Log}(z_p) + \mathrm{Log}(z_V) + \mathrm{Log}(z''_W) $ \item $\omega^{\mathbb{C}}_1(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_1})= 2\mathrm{Log}(z''_1) + \mathrm{Log}(z'_2) $ \\ \vspace*{-2mm} \item $\omega^{\mathbb{C}}_k(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_k})= \mathrm{Log}(z'_{k-1}) + 2\mathrm{Log}(z''_k) + \mathrm{Log}(z'_{k+1}) $ \ \ (for $2\leqslant k \leqslant p-1$) \\ \vspace*{-2mm} \item $\omega^{\mathbb{C}}_p(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_p})= \mathrm{Log}(z'_{p-1}) + 2\mathrm{Log}(z''_p) + \mathrm{Log}(z'_U) + \mathrm{Log}(z'_V) + \mathrm{Log}(z_W)$ \item $\omega^{\mathbb{C}} _{p+1}(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_{p+1}})= \mathrm{Log}(z'_p) + \mathrm{Log}(z'_U) + 2\mathrm{Log}(z''_U) + \mathrm{Log}(z_V) + \mathrm{Log}(z''_V) + \mathrm{Log}(z'_W) + \mathrm{Log}(z''_W) $ \end{itemize} It follows from Theorem \ref{thm:geometric} that there exists exactly one complex angle structure $\widetilde{\mathbf{z^0}}=(z_1^0,\ldots,z_p^0,z_U^0,z_V^0,z_W^0)\in (\mathbb{R}+i\mathbb{R}_{>0})^{p+3}$ corresponding to the complete hyperbolic metric. This $\widetilde{\mathbf{z^0}}$ is the only $\widetilde{\mathbf{z}} \in (\mathbb{R}+i\mathbb{R}_{>0})^{p+3}$ satisfying $$ \omega^{\mathbb{C}}_s(\widetilde{\mathbf{z}}) = \omega^{\mathbb{C}}_0(\widetilde{\mathbf{z}}) = \ldots = \omega^{\mathbb{C}}_{p+1}(\widetilde{\mathbf{z}}) = 2 i \pi$$ as well as the complex completeness equation $$\mathrm{Log}(z_U)-\mathrm{Log}(z_V)=0$$ coming from the meridian curve $m_{X_n}$. 
These conditions are equivalent to the following system $\mathcal{E}^{co}_{X_n}(\widetilde{\mathbf{z}})$ of equations on $\widetilde{\mathbf{z}}$: \begin{itemize} \item $\mathcal{E}_{X_n,0}(\widetilde{\mathbf{z}}) \colon \mathrm{Log}(z'_1) + 2 \mathrm{Log}(z_1)+\cdots + 2\mathrm{Log}(z_p)+2\mathrm{Log}(z_U) = 2i\pi$ \item $\mathcal{E}_{X_n,1}(\widetilde{\mathbf{z}}) \colon 2\mathrm{Log}(z''_1)+\mathrm{Log}(z'_2)=2i\pi$\\ \vspace*{-2mm} \item $\mathcal{E}_{X_n,k}(\widetilde{\mathbf{z}}) \colon \mathrm{Log}(z'_{k-1})+2\mathrm{Log}(z''_k)+\mathrm{Log}(z'_{k+1})=2i\pi$ \ \ (for $2 \leqslant k \leqslant p-1$)\\ \vspace*{-2mm} \item $\mathcal{E}_{X_n,p+1}^{co}(\widetilde{\mathbf{z}}) \colon \mathrm{Log}(z'_{p}) +2 \mathrm{Log}(z''_{U})-\mathrm{Log}(z_{W})=0$ \item $\mathcal{E}_{X_n,s}^{co}(\widetilde{\mathbf{z}}) \colon \mathrm{Log}(z''_{W}) -\mathrm{Log}(z_{U})=0$ \item $z_V=z_U$ \end{itemize} Indeed, notice that the equation $\omega^{\mathbb{C}}_p(\widetilde{\mathbf{z}})=2i\pi$ is redundant with the other complex balancing equations. Note furthermore that the variable $z_V$ only appears in the equation $z_V=z_U$, which is why we will allow a slight abuse of notation to use the equations $$\mathcal{E}_{X_n,0}(\mathbf{z}), \ldots, \mathcal{E}_{X_n,p-1}(\mathbf{z}), \mathcal{E}^{co}_{X_n,p+1}(\mathbf{z}), \mathcal{E}^{co}_{X_n,s}(\mathbf{z})$$ also for a variable $\mathbf{z}=(z_1,\ldots,z_p,z_U,z_W) \in (\mathbb{R}+i\mathbb{R}_{>0})^{p+2}$ without the coordinate $z_V$ (see Lemma \ref{lem:grad:thurston}). \section{Partition function for the ideal triangulations (odd case)}\label{sec:part:odd} \begin{notation} From now on, we will denote by $\stackrel{\star}{=}$ equality up to taking the complex modulus. \end{notation} In this section, $n$ will be an odd integer greater than or equal to $3$, and $p=\frac{n-3}{2}$. We will compute the partition functions of the Teichm\"uller TQFT for the ideal triangulations $X_n$ of the twist knot complements $S^3\setminus K_n$ constructed in Section \ref{sec:trig}, and we will prove that they can be expressed in a simple way using a one-variable function independent of the angle structure, as well as only two linear combinations of angles, which are two independent angular holonomies in the cusp link triangulation. This yields a slightly different version of the first statement in the Andersen-Kashaev volume conjecture of \cite[Conjecture 1 (1)]{AK}. Note that our partition functions are computed only for the specific ideal triangulations $X_n$. In order to generalise Theorem \ref{thm:part:func} to any ideal triangulation of a twist knot complement, one would need further properties of invariance under change of triangulation (more general than the ones discussed in \cite{AK}). A version for the even case is proved in Section \ref{sub:even:tqft} (see Theorem \ref{thm:even:part:func}). \begin{theorem}\label{thm:part:func} Let $n\geqslant 3$ be an odd integer and $p=\frac{n-3}{2}$. Consider the ideal triangulation $X_n$ of $S^3\setminus K_n$ described in Figure \ref{fig:id:trig:odd}.
Then for all angle structures $\alpha=(a_1,\ldots,c_W) \in \mathcal{A}_{X_n}$ and all $\hbar>0$, we have: \begin{equation*} \mathcal{Z}_{\hbar}(X_n,\alpha) \stackrel{\star}{=} \int_{\mathbb{R}+i \frac{\mu_{X_n}(\alpha) }{2\pi \sqrt{\hbar}} } J_{X_n}(\hbar,x) e^{\frac{1}{2 \sqrt{\hbar}} x \lambda_{X_n}(\alpha)} dx, \end{equation*} with \begin{itemize} \item the degree one angle polynomial $\mu_{X_n}\colon\alpha\mapsto a_U- a_V$, \item the degree one angle polynomial $\lambda_{X_n}\colon\alpha\mapsto 2(a_U-a_V+c_V-b_W)$, \item the map $(\hbar,x) \mapsto$ \begin{equation*} J_{X_n}(\hbar,x)=\int_{\mathcal{Y}'} d\mathbf{y'} \ e^{2 i \pi \left (\mathbf{y'}^T Q_n \mathbf{y'} + x(x- y'_U-y'_W)\right )} e^{ \frac{1}{\sqrt{\hbar}} \left (\mathbf{y'}^T \mathcal{W}_n - \pi x\right )} \dfrac{ \Phi_\mathsf{b}\left (y'_U\right ) \Phi_\mathsf{b}\left (y'_U+x\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) } , \end{equation*} where $\mathcal{Y}'=\mathcal{Y}'(\hbar,\alpha) = \prod_{k=1}^p\left (\mathbb{R} - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_k)\right ) \times \prod_{l=U,W} \left (\mathbb{R} + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_l)\right ),$ \begin{equation*} \mathbf{y'}=\begin{bmatrix} y'_1 \\ \vdots \\ y'_p \\ y'_U \\y'_W \end{bmatrix}, \quad \mathcal{W}_n=\begin{bmatrix}-2p\pi \\ \vdots \\ -2 \pi \left ( k p - \frac{k(k-1)}{2}\right ) \\ \vdots \\ -p(p+1)\pi \\ (p^2+p+1)\pi \\ \pi\end{bmatrix} \quad \text{ and } \quad Q_n=\begin{bmatrix} 1 & 1 & \cdots & 1 & -1 & 0 \\ 1 & 2 & \cdots & 2 & -2 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 1 & 2 & \cdots & p & -p & 0 \\ -1 & -2 & \cdots & -p & p & \frac{1}{2} \\ 0 & 0 & \cdots & 0 & \frac{1}{2} & 0 \end{bmatrix}. \end{equation*} \end{itemize} \end{theorem} The reader may notice that indices corresponding to $V$ are missing in the integration variables. This comes from the change of variables $x= y'_V-y'_U$, which makes $x$ replace the variable $y'_V$. Simply speaking, we chose to make $V$ disappear rather than $U$, because $V$ appeared a lot less than $U$ in the defining gluing equations (see end of Section \ref{sec:geom}). \begin{remark} Note that, if you fix $\hbar>0$ and $x \in \mathbb{R} + i\left (-\frac{1}{2 \sqrt{\hbar}},\frac{1}{2 \sqrt{\hbar}}\right )$, the integration contour $\mathcal{Y}'$ in the definition of $J_{X_n}(\hbar,x)$ depends a priori on the angle structure $\alpha$; however, since the integrand in $J_{X_n}(\hbar,x)$ is a holomorphic function of the variables in $\mathbf{y'}$ on a neighborhood of $\mathcal{Y}'$ in $\mathbb{C}^{p+2}$, it follows from the Bochner-Martinelli formula (that generalises the Cauchy theorem, see \cite{Kr}) and the fast decay properties of this integrand at infinity that $\mathcal{Y}'$ could be replaced with a different contour. In this sense, $J_{X_n}(\hbar,x)$ is independent of the angle structure $\alpha$. Nevertheless, picking the particular contour $\mathcal{Y}'=\mathcal{Y}'(\hbar,\alpha)$ with the complete structure $\alpha=\alpha^0$ will help us prove the volume conjecture in Section \ref{sec:vol:conj}. 
\end{remark} \begin{remark} The quantities $\mu_{X_n}(\alpha)$ and $\lambda_{X_n}(\alpha)$ in Theorem \ref{thm:part:func} satisfy the following relations with the angular holonomies corresponding to the meridian and longitude curves $m_{X_n}(\alpha), l_{X_n}(\alpha)$ from Section \ref{sub:cusp:trig}: $$ \mu_{X_n}(\alpha) = m_{X_n}(\alpha) \ \ \text{and} \ \ \lambda_{X_n}(\alpha) = l_{X_n}(\alpha) + 2 m_{X_n}(\alpha).$$ Hence, $\lambda_{X_n}(\alpha)$ is the angular holonomy of a curve on $\partial \nu (K_n)$ that is equal in homology to the curve (i) $\cup \ldots \cup$ (vi) (of Figures \ref{fig:trig:cusp:odd} and \ref{fig:longitude:odd}), thus $\lambda_{X_n}(\alpha)$ comes from a preferred longitude of the knot, as expected in Conjecture \ref{conj:vol:BAGPN} (1). Similarly, $\mu_{X_n}(\alpha)$ is associated to a meridian of the knot. \end{remark} We will need two lemmas to prove Theorem \ref{thm:part:func}. \begin{lemma}\label{lem:kin:odd} Let $n\geqslant 3$ be an odd integer and $p=\frac{n-3}{2}$. For the ideal triangulation $X_n$ of $S^3\setminus K_n$ described in Figure \ref{fig:id:trig:odd}, the kinematical kernel is $\mathcal{K}_{X_n}(\mathbf{\widetilde{t}})= \exp\left (2 i \pi \mathbf{\widetilde{t}}^T \widetilde{Q}_n \mathbf{\widetilde{t}} \right ),$ where $\mathbf{\widetilde{t}} = (t_1, \ldots, t_p, t_U, t_V, t_W)^T \in \mathbb{R}^{X_n^{3}}$ and $\widetilde{Q}_n$ is the following symmetric matrix with half-integer coefficients: $$\widetilde{Q}_n=\kbordermatrix{ \mbox{} &t_1 &t_2 &\cdots & t_{p-1} & t_p & \omit\vrule & t_U & t_V & t_W \\ t_1 & 1 & 1 & \cdots & 1 & 1 & \omit\vrule & -1 & 0 & 0 \\ t_2 & 1 & 2 & \cdots & 2 & 2 & \omit\vrule & -2 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \omit\vrule & \vdots & \vdots & \vdots \\ t_{p-1} & 1 & 2 & \cdots & p-1 & p-1 & \omit\vrule & -(p-1) & 0 & 0 \\ t_p & 1 & 2 & \cdots & p-1 & p & \omit\vrule & -p & 0 & 0 \\ \cline{1-1} \cline{2-10} t_U & -1 & -2 & \cdots & -(p-1) & -p & \omit\vrule & p+2 & -3/2 & 1 \\ t_V & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & -3/2 & 1 & -1/2 \\ t_W & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & 1 & -1/2 & 0 }. $$ \end{lemma} \begin{proof} Let $n \geqslant 3$ be an odd integer and $p=\frac{n-3}{2}$. We will denote $$\mathbf{\widetilde{t}} = (\mathsf{t}(T_1), \ldots, \mathsf{t}(W))^T= (t_1, \ldots, t_p, t_U, t_V, t_W)^T \in \mathbb{R}^{X_n^{3}}$$ a vector whose coordinates are associated to the tetrahedra ($t_j$ for $T_j$). The generic vector in $\mathbb{R}^{X_n^{2}}$ corresponding to the face variables will be denoted $$\mathbf{x}=(e_1, \ldots,e_p,e_{p+1}, f_1, \ldots, f_p,v,r,s,g,u)^T \in \mathbb{R}^{X_n^{2}}.$$ By definition, the kinematical kernel is: $$\mathcal{K}_{X_n}\left (\mathbf{\widetilde{t}}\right ) = \int_{\mathbf{x} \in \mathbb{R}^{X_n^{2}}} d\mathbf{x} \prod_{T \in X_n^3} e^{2 i \pi \varepsilon(T) x_0(T) \mathsf{t}(T)} \delta( x_0(T)- x_1(T)+ x_2(T)) \delta( x_2(T)- x_3(T)+ \mathsf{t}(T)). 
$$ Following Lemma \ref{lem:dirac} we compute from Figure \ref{fig:id:trig:odd} that: $$ \mathcal{K}_{X_n}\left (\mathbf{\widetilde{t}}\right ) = \int_{\mathbf{x} \in \mathbb{R}^{X_n^{2}}} d\mathbf{x} \int_{\mathbf{w} \in \mathbb{R}^{2(p+3)}} d\mathbf{w} \ e^{ 2 i \pi \mathbf{\widetilde{t}}^T R \mathbf{x}} e^{ -2 i \pi \mathbf{w}^T A \mathbf{x}} e^{ -2 i \pi \mathbf{w}^T B \mathbf{\widetilde{t}}}, $$ where $\mathbf{w}=(w_1, \ldots,w_W, w'_1, \ldots, w'_W)^T \in \mathbb{R}^{2(p+3)}$ and the matrices $R,A,B$ are given by: $$ R=\kbordermatrix{ \mbox{} & e_1 & \dots & e_p &e_{p+1} & \omit\vrule &f_1 & \ldots & f_p &\omit\vrule & v & r & s & g & u \\ t_1 & 1 & &\push{\low{0}} & \omit\vrule & & & &\omit\vrule & & & & & \\ \vdots & &\ddots & & & \omit\vrule & & 0 & &\omit\vrule & & & 0 & & \\ t_{p} &\push{0} & 1 & & \omit\vrule & & & &\omit\vrule & & & & & \\ \cline{1-1} \cline{2-15} t_U & & & & & \omit\vrule & & & &\omit\vrule &0 &-1 &0 &0 &0 \\ t_V & &\push{0} & & \omit\vrule & & 0 & &\omit\vrule &0 &0 &0 &-1 &0 \\ t_W & & & & & \omit\vrule & & & &\omit\vrule &0 &0 &0 &0 &-1 }, $$ $$ A=\kbordermatrix{ \mbox{} & e_1 & e_2 & \dots &e_p & e_{p+1} & \omit\vrule &f_1 &f_{2} & \ldots & f_p &\omit\vrule & v & r & s & g & u \\ w_1 & 1 & -1 & & & & \omit\vrule & 1 & & & &\omit\vrule & & & & & \\ \vdots & & \ddots & \ddots & \push{0} & \omit\vrule & &\ddots &\push{0} &\omit\vrule & & & \low{0} & & \\ \vdots & \push{0} & \ddots &\ddots & & \omit\vrule & \push{0} &\ddots & &\omit\vrule & & & & & \\ w_{p} & & & & 1 & -1 & \omit\vrule & & & &1 &\omit\vrule & & & & & \\ \cline{1-1} \cline{2-17} w_{U} & & & & & & \omit\vrule & & & &0 &\omit\vrule &-1 &1 &1 &0 &0 \\ w_{V} & & & 0 & & & \omit\vrule & & & &1 &\omit\vrule &0 &0 &-1 &1 &0 \\ w_{W} & & & & & & \omit\vrule & & & &0 &\omit\vrule &1 &-1 &0 &0 &1 \\ \cline{1-1} \cline{2-17} w'_1 & -1 & & & & & \omit\vrule & 1 & & & &\omit\vrule & & & & & \\ \vdots & & & & & & \omit\vrule & -1 & \ddots & \push{0} &\omit\vrule & & & \low{0} & & \\ \vdots & & & & & & \omit\vrule & & \ddots & \ddots & &\omit\vrule & & & & & \\ w'_{p} & & & & & & \omit\vrule & \push{0} &-1 &1 &\omit\vrule & & & & & \\ \cline{1-1} \cline{2-17} w'_{U} & & & & & & \omit\vrule & & & &0 &\omit\vrule &0 &0 &1 &-1 &0 \\ w'_{V} & & & & & & \omit\vrule & & & &1 &\omit\vrule &0 &0 &0 &0 &-1 \\ w'_{W} & & & & & -1 & \omit\vrule & & & &0 &\omit\vrule &1 &0 &0 &0 &0 }, $$ $$ B=\kbordermatrix{ \mbox{} & t_1 & \dots & t_{p} & \omit\vrule & t_{U} & t_{V} & t_{W} \\ w_1 & & & & & & & \\ \vdots & & & & & & & \\ w_{p} & & & \multicolumn{3}{c}{\low{0}} & & \\ \cline{1-1} w_{U} & & & & & & & \\ w_{V} & & & & & & & \\ w_{W} & & & & & & & \\ \cline{1-1} \cline{2-8} w'_1 & 1 & & & & & & \\ \vdots & &\ddots & & & & 0 & \\ w'_{p} & & & \multicolumn{3}{c}{\low{\ddots}} & & \\ \cline{1-1} w'_{U} & & & & & & & \\ w'_{V} & & 0 & & & & \ddots & \\ w'_{W} & & & & & & & 1 }. 
$$ Careful computation yields that $\det(A)=1$ and that the inverse $A^{-1}$ is equal to $$ A^{-1}=\kbordermatrix{ \mbox{} & w_{1} & w_{2} & \ldots & w_{p-1} & w_{p} & \omit\vrule & w_{U} & w_{V} & w_{W} & \omit\vrule & w'_{1} & w'_{2} & \ldots & w'_{p-1} & w'_{p} & \omit\vrule & w'_{U} & w'_{V} & w'_{W} \\ e_{1} & 0 & & \cdots & & 0 & \omit\vrule & 0 & 1 & 0 & \omit\vrule & -1 & -1 & \push{\cdots} & -1 & \omit\vrule & 1 & 0 & 0 \\ e_{2} & -1 & 0 & & & & \omit\vrule & 0 & 2 & 0 & \omit\vrule & -1 & -2 & \push{\cdots} & -2 & \omit\vrule & 2 & 0 & 0 \\ \low{\vdots} & -1 & -1 & \ddots & & \vdots & \omit\vrule & & & & \omit\vrule & & & \ddots & & \vdots & \omit\vrule & & & \\ & \vdots & & \ddots & 0 & 0 & \omit\vrule & \vdots & \vdots & \vdots & \omit\vrule & \vdots & \vdots & &\text{\tiny {\(1-p\)}} & \text{\tiny {\(1-p\)}} & \omit\vrule & \vdots & \vdots & \vdots \\ e_{p} & & & & -1 & 0 & \omit\vrule & & & & \omit\vrule & & & & \text{\tiny {\(1-p\)}} & -p & \omit\vrule & & & \\ e_{p+1} & -1 & & \cdots & & -1 & \omit\vrule & 0 & p+1 & 0 & \omit\vrule & -1 & -2 &\cdots & \text{\tiny {\(1-p\)}} & -p & \omit\vrule & p+1 & 0 & 0 \\ \cline{1-1} \cline{2-20} f_{1} & & & & & & \omit\vrule & 0 & 1 & 0 & \omit\vrule & 0 & -1 & \push{\cdots} & -1 & \omit\vrule & 1 & 0 & 0 \\ f_{2} & & & & & & \omit\vrule & & 1 & & \omit\vrule & 0 &0 & \ddots & & \low{\vdots} & \omit\vrule & 1 & & \\ \vdots & & & 0 & & & \omit\vrule & \vdots & \vdots & \vdots & \omit\vrule & \vdots & & \ddots & -1 & -1 & \omit\vrule & \vdots & \vdots & \vdots \\ f_{p-1} & & & & & & \omit\vrule & & & & \omit\vrule & 0 & & & 0 & -1 & \omit\vrule & & & \\ f_{p} & & & & & & \omit\vrule & 0 & 1 & 0 & \omit\vrule & 0 & & \cdots & & 0 & \omit\vrule & 1 & 0 & 0 \\ \cline{1-1} \cline{2-20} v & -1 & & \cdots & & -1 & \omit\vrule & 0 & p+1 & 0 & \omit\vrule & -1 & -2 & \push{\cdots} & -p & \omit\vrule & p+1 & 0 & 1 \\ r & -1 & & \cdots & & -1 & \omit\vrule & 0 & p+2 & -1 & \omit\vrule & -1 & -2 & \push{\cdots} & -p & \omit\vrule & p+2 & -1 & 1 \\ s & & & & & & \omit\vrule & 1 & -1 & 1 & \omit\vrule & & & & & & \omit\vrule & -1 & 1 & 0 \\ g & & & 0 & & & \omit\vrule & 1 & -1 & 1 & \omit\vrule & & & 0 & & & \omit\vrule & -2 & 1 & 0 \\ u & & & & & & \omit\vrule & 0 & 1 & 0 & \omit\vrule & & & & & & \omit\vrule & 1 & -1 & 0 }. $$ Hence, following Lemma \ref{lem:dirac}, we have $$ \mathcal{K}_{X_n}\left (\mathbf{\widetilde{t}}\right ) = \frac{1}{|\det(A)|} e^{ 2 i \pi \mathbf{\widetilde{t}}^T (-R A^{-1} B) \mathbf{\widetilde{t}}}= e^{ 2 i \pi \mathbf{\widetilde{t}}^T (-R A^{-1} B) \mathbf{\widetilde{t}}}.$$ The lemma finally follows from the identity $2 \widetilde{Q}_n = (-R A^{-1} B) + (-R A^{-1} B)^T$, where $\widetilde{Q}_n$ is defined in the statement of the lemma. \end{proof} The following lemma relates the symmetric matrix $\widetilde{Q}_n$ to the gluing equations. \begin{lemma}\label{lem:2QGamma+C} Let $n\geqslant 3$ be an odd integer and $p=\frac{n-3}{2}$. Let $\alpha = (a_1,b_1,c_1,\ldots,a_W,b_W,c_W) \in \mathcal{S}_{X_n}$ denote a shape structure. 
If we denote $\widetilde{Q}_n$ the symmetric matrix from Lemma \ref{lem:kin:odd}, $\widetilde{C}(\alpha) = (c_1,\ldots,c_W)^T$, and $\widetilde{\Gamma}(\alpha) := (a_1-\pi,\ldots, a_p-\pi,\pi-a_U,\pi-a_V,\pi-a_W)^T$, then (indexing entries by $k\in\{1,\ldots,p\}$ and $U,V,W$) we have the vector equality $ 2\widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha) =$ $$ \renewcommand{\kbldelim}{(} \renewcommand{\kbrdelim}{)} \kbordermatrix{ \mbox{} & \\ k=1 & \vdots\\ \vdots & \hspace{6mm} k(\omega_{s}(\alpha) -2(p+2) \pi) + \sum_{j=1}^{k}j \omega_{k-j}(\alpha) \\ k=p & \vdots \\ \cline{1-1} \cline{2-2} & \omega_{p+1}(\alpha)- \omega_{s}(\alpha) - \left ( p(\omega_{s}(\alpha) -2(p+2) \pi) + \sum_{j=1}^{p}j \omega_{p-j}(\alpha) \right ) + 2 \pi - \frac{1}{2} \lambda_{X_n}(\alpha) \\ & \frac{1}{2}\lambda_{X_n}(\alpha) + \omega_{s}(\alpha) - 3 \pi \\ & 3 \pi - \omega_{s}(\alpha) }, $$ where $\lambda_{X_n}(\alpha)=2(a_U-a_V+c_V-b_W)$. In particular, for $\alpha \in \mathcal{A}_{X_n}$ an angle structure, the vector of angles $$ 2\widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha) = \: \renewcommand{\kbldelim}{(} \renewcommand{\kbrdelim}{)} \kbordermatrix{ \mbox{} & \\ k=1 & \vdots\\ \vdots & -2 \pi \left ( k p - \dfrac{k(k-1)}{2}\right ) \\ k=p & \vdots \\ \cline{1-1} \cline{2-2} & (p^2+p+2)\pi - \frac{1}{2}\lambda_{X_n}(\alpha) \\ & \frac{1}{2}\lambda_{X_n}(\alpha) - \pi \\ & \pi } $$ only depends on the linear combination $\lambda_{X_n}(\alpha)$. \end{lemma} \begin{proof} The lemma follows from direct computations. \end{proof} We can now proceed with the proof of Theorem \ref{thm:part:func}. \begin{proof}[Proof of Theorem \ref{thm:part:func}] Let $n \geqslant 3$ be an odd integer and $p=\frac{n-3}{2}$. We want to compute the partition function associated to $X_n$ and prove that it is of the desired form. We know the form of the kinematical kernel from Lemma \ref{lem:kin:odd}. Let us now compute the dynamical content. Let $\alpha=(a_1,b_1,c_1,\ldots,a_W,b_W,c_W) \in \mathcal{A}_{X_n}$, $\hbar>0$ and $\mathbf{\widetilde{t}} = (t_1, \ldots, t_p, t_U, t_V, t_W)^T \in \mathbb{R}^{X_n^{3}}$. By definition, the dynamical content $\mathcal{D}_{\hbar,X_n}(\mathbf{\widetilde{t}},\alpha)$ is equal to: $$e^{\frac{1}{\sqrt{\hbar}} \widetilde{C}(\alpha)^T \mathbf{\widetilde{t}}} \dfrac{ \Phi_\mathsf{b}\left (t_U + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_U)\right ) \Phi_\mathsf{b}\left (t_V + \frac{i}{2 \pi \sqrt{\hbar}}(\pi-a_V)\right ) \Phi_\mathsf{b}\left (t_W + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_W)\right ) }{ \Phi_\mathsf{b}\left (t_1 - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_1)\right ) \cdots \Phi_\mathsf{b}\left (t_p - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_p)\right ) },$$ where $\widetilde{C}(\alpha) = (c_1, \ldots, c_p,c_U,c_V, c_W)^T$ as in the statement of Lemma \ref{lem:2QGamma+C}. Now we can compute the partition function of the Teichm\"uller TQFT. By definition: $$\mathcal{Z}_{\hbar}(X_n,\alpha)= \int_{\mathbf{\widetilde{t}}\in\mathbb{R}^{X_n^{3}}} d\mathbf{\widetilde{t}} \mathcal{K}_{X_n}(\mathbf{\widetilde{t}}) \mathcal{D}_{\hbar,X_n}(\mathbf{\widetilde{t}},\alpha).$$ We do the following change of variables: \begin{itemize} \item $y'_k = t_k - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_k)$ for $1 \leqslant k \leqslant p$, \item $y'_l = t_l + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_l)$ for $l\in\{U,V,W\}$, \end{itemize} and we denote $\mathbf{\widetilde{y}'}=\left (y'_1, \ldots, y'_{p}, y'_U, y'_V, y'_W\right )^T$. 
We also denote $$\widetilde{\mathcal{Y}}'_{\hbar,\alpha} := \prod_{k=1}^p\left (\mathbb{R} - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_k)\right ) \times \prod_{l=U,V,W} \left (\mathbb{R} + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_l)\right ),$$ the subset of $\mathbb{C}^{p+3}$ on which the variables in $\mathbf{\widetilde{y}'}$ will reside. Finally we denote: \begin{align*} \widetilde{\Gamma}(\alpha) := \frac{2 \pi \sqrt{\hbar}}{i}(\mathbf{\widetilde{y}'}-\mathbf{\widetilde{t}}) = (a_1-\pi,\ldots, a_p-\pi,\pi-a_U,\pi-a_V,\pi-a_W)^T. \end{align*} as in the statement of Lemma \ref{lem:2QGamma+C}. We can now compute: \begin{align*} &\mathcal{Z}_{\hbar}(X_n,\alpha) = \int_{\mathbf{\widetilde{t}}\in\mathbb{R}^{X_n^{3}}} d\mathbf{\widetilde{t}} \mathcal{K}_{X_n}(\mathbf{\widetilde{t}}) \mathcal{D}_{\hbar,X_n}(\mathbf{\widetilde{t}},\alpha) \\ &= \int_{\mathbf{\widetilde{y}'}\in \widetilde{\mathcal{Y}}'_{\hbar,\alpha}} d\mathbf{\widetilde{y}'} \mathcal{K}_{X_n}\left (\mathbf{\widetilde{y}'}-\frac{i}{2 \pi \sqrt{\hbar}} \widetilde{\Gamma}(\alpha) \right ) \mathcal{D}_{\hbar,X_n}\left (\mathbf{\widetilde{y}'}-\frac{i}{2 \pi \sqrt{\hbar}} \widetilde{\Gamma}(\alpha),\alpha\right ) \\ &= \int_{\mathbf{\widetilde{y}'} \in \widetilde{\mathcal{Y}}'_{\hbar,\alpha}} \hspace*{-0.3cm} d\mathbf{\widetilde{y}'} e^{ 2 i \pi \mathbf{\widetilde{y}}^{\prime T} \widetilde{Q}_n \mathbf{\widetilde{y}'} +\frac{2}{\sqrt{\hbar}} \widetilde{\Gamma}(\alpha)^T \widetilde{Q}_n \mathbf{\widetilde{y}'} - \frac{i}{2 \pi \hbar} \widetilde{\Gamma}(\alpha)^T \widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \frac{1}{\sqrt{\hbar}} \widetilde{C}(\alpha)^T \mathbf{\widetilde{y}'} - \frac{i}{2 \pi \hbar} \widetilde{C}(\alpha)^T \widetilde{\Gamma}(\alpha) } \frac{ \Phi_\mathsf{b}\left (y'_U\right ) \Phi_\mathsf{b}\left (y'_V\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) } \\ &\stackrel{\star}{=} \int_{\mathbf{\widetilde{y}'} \in \widetilde{\mathcal{Y}}'_{\hbar,\alpha}} d\mathbf{\widetilde{y}'} e^{ 2 i \pi \mathbf{\widetilde{y}}^{\prime T} \widetilde{Q}_n \mathbf{\widetilde{y}'} +\frac{2}{\sqrt{\hbar}} \widetilde{\Gamma}(\alpha)^T \widetilde{Q}_n \mathbf{\widetilde{y}'} + \frac{1}{\sqrt{\hbar}} \widetilde{C}(\alpha)^T \mathbf{\widetilde{y}'} } \dfrac{ \Phi_\mathsf{b}\left (y'_U\right ) \Phi_\mathsf{b}\left (y'_V\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) } \\ &= \int_{\mathbf{\widetilde{y}'} \in \widetilde{\mathcal{Y}}'_{\hbar,\alpha}} d\mathbf{\widetilde{y}'} e^{ 2 i \pi \mathbf{\widetilde{y}}^{\prime T} \widetilde{Q}_n \mathbf{\widetilde{y}'} + \frac{1}{\sqrt{\hbar}} \widetilde{\mathcal{W}}(\alpha)^T \mathbf{\widetilde{y}'} } \dfrac{ \Phi_\mathsf{b}\left (y'_U\right ) \Phi_\mathsf{b}\left (y'_V\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) }, \end{align*} where $\widetilde{\mathcal{W}}(\alpha):= 2 \widetilde{Q}_n \widetilde{\Gamma}(\alpha)+\widetilde{C}(\alpha)$. 
Now, from Lemma \ref{lem:2QGamma+C}, we have $$\widetilde{\mathcal{W}}(\alpha) = \begin{pmatrix}-2p\pi\\ \vdots \\ -2 \pi \left ( k p - \dfrac{k(k-1)}{2}\right ) \\ \vdots \\ -p(p+1)\pi \\ (p^2+p+2)\pi - \frac{1}{2}\lambda_{X_n}(\alpha) \\ \frac{1}{2}\lambda_{X_n}(\alpha) - \pi \\ \pi \end{pmatrix}.$$ We define a new variable $x:= y'_V-y'_U$ living in the set $$\mathcal{Y}'^0_{\hbar,\alpha}:=\mathbb{R} + \frac{i}{2 \pi \sqrt{\hbar}} (a_U-a_V),$$ and we also define $ \mathbf{y'}$ (respectively $\mathcal{Y}' _{\hbar,\alpha}$) exactly like $ \widetilde{\mathbf{y}'}$ (respectively $ \widetilde{\mathcal{Y}}'_{\hbar,\alpha}$) but with the second-to-last coordinate (corresponding to the tetrahedron $V$) taken out. We finally define \begin{equation} \label{eqn:form:W:and:Q} \mathcal{W}_{n}= \begin{bmatrix}\mathcal{W}_{n,1} \\ \vdots \\ \mathcal{W}_{n,k} \\ \vdots \\ \mathcal{W}_{n,p} \\ \mathcal{W}_{n,U} \\ \mathcal{W}_{n,W} \end{bmatrix} := \begin{bmatrix}-2p\pi \\ \vdots \\ -2 \pi \left ( k p - \frac{k(k-1)}{2}\right ) \\ \vdots \\ -p(p+1)\pi \\ (p^2+p+1)\pi \\ \pi\end{bmatrix} \qquad \text{ and } \qquad Q_n:=\begin{bmatrix} 1 & 1 & \cdots & 1 & -1 & 0 \\ 1 & 2 & \cdots & 2 & -2 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 1 & 2 & \cdots & p & -p & 0 \\ -1 & -2 & \cdots & -p & p & \frac{1}{2} \\ 0 & 0 & \cdots & 0 & \frac{1}{2} & 0 \end{bmatrix}. \end{equation} Notice that $Q_n$ is obtained from $\widetilde{Q}_n$ by the following operations: \begin{itemize} \item add the $V$-row to the $U$-row, \item add the $V$-column to the $U$-column, \item delete the $V$-row and the $V$-column, \end{itemize} and $\mathcal{W}_{n}$ is obtained from $\widetilde{\mathcal{W}}(\alpha)$ by the same operations on rows. We can now use the substitution $y'_V = y'_U+x$ to compute: \begin{align*} 2 i \pi \widetilde{\mathbf{y}}^{\prime T} \widetilde{Q}_n \widetilde{\mathbf{y}'} &= 2 i \pi \left ( (\mathbf{y'}^T Q_n \mathbf{y'} - p {y'_U}^2 - y'_U y'_W) + (p+2){y'_U}^2 - 3 y'_U y'_V + 2 y'_U y'_W + {y'_V}^2 - y'_V y'_W \right ) \\ &= 2 i \pi \left (\mathbf{y'}^T Q_n \mathbf{y'} -xy'_U -x y'_W+x^2\right ), \end{align*} and $\frac{1}{\sqrt{\hbar}} \widetilde{\mathcal{W}}(\alpha)^T \widetilde{\mathbf{y}'} = \frac{1}{\sqrt{\hbar}} \left (\mathcal{W}_n^T \mathbf{y'} +x (\frac{1}{2}\lambda_{X_n}(\alpha)-\pi)\right )$, thus \begin{align*} &\mathcal{Z}_{\hbar}(X_n,\alpha) \stackrel{\star}{=} \int_{\mathbf{\widetilde{y}'} \in \widetilde{\mathcal{Y}}'_{\hbar,\alpha}} d\mathbf{\widetilde{y}'} e^{ 2 i \pi \mathbf{\widetilde{y}}^{\prime T} \widetilde{Q}_n \mathbf{\widetilde{y}'} + \frac{1}{\sqrt{\hbar}} \widetilde{\mathcal{W}}(\alpha)^T \mathbf{\widetilde{y}'} } \dfrac{ \Phi_\mathsf{b}\left (y'_U\right ) \Phi_\mathsf{b}\left (y'_V\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) } \\ &\stackrel{\star}{=} \int dx d\mathbf{y'} e^{2 i \pi \left (\mathbf{y'}^T Q_n \mathbf{y'}+x(x-y'_U -y'_W)\right )+ \frac{1}{\sqrt{\hbar}} \left (\mathcal{W}_n^T \mathbf{y'} +x (\frac{1}{2}\lambda_{X_n}(\alpha)-\pi)\right ) } \dfrac{ \Phi_\mathsf{b}\left (y'_U\right ) \Phi_\mathsf{b}\left (y'_U+x\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) } , \end{align*} where the variables $(\mathbf{y'},x)$ in the last integral lie in $\mathcal{Y}'_{\hbar,\alpha} \times \mathcal{Y}'^0_{\hbar,\alpha}$. 
Finally we obtain that \begin{equation*} \mathcal{Z}_{\hbar}(X_n,\alpha) \stackrel{\star}{=} \int_{x \in \mathbb{R} + \frac{i}{2 \pi \sqrt{\hbar}} \mu_{X_n}(\alpha)} J_{X_n}(\hbar,x)e^{\frac{1}{2 \sqrt{\hbar}} x \lambda_{X_n}(\alpha)} dx, \end{equation*} where \begin{equation*} J_{X_n}(\hbar,x)=\int_{\mathcal{Y}'} d\mathbf{y'} \ e^{2 i \pi \left (\mathbf{y'}^T Q_n \mathbf{y'} + x(x- y'_U-y'_W)\right )} e^{ \frac{1}{\sqrt{\hbar}} \left (\mathbf{y'}^T \mathcal{W}_n - \pi x\right )} \dfrac{ \Phi_\mathsf{b}\left (y'_U\right ) \Phi_\mathsf{b}\left (y'_U+x\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) } \end{equation*} and $\mu_{X_n}(\alpha)=a_U-a_V$, which concludes the proof. \end{proof} We conclude this section with a slight rephrasing of Theorem \ref{thm:part:func}, in the following Corollary \ref{cor:part:func}. Although the expression in Theorem \ref{thm:part:func} was the closest to the statement of \cite[Conjecture 1 (1)]{AK}, we find that the following re-formulation has additional benefits: the integration multi-contour is now independent of $\hbar$ and the integrand is closer to the form $e^{\frac{1}{2 \pi \hbar} S(\mathbf{y})}$ that we need in order to apply the saddle point method (see Theorem \ref{thm:SPM}, where $\lambda \to \infty$ should be thought of as $2 \pi \hbar \to 0^+$). \begin{corollary}\label{cor:part:func} Let $n\geqslant 3$ be an odd integer and $p=\frac{n-3}{2}$. Consider the ideal triangulation $X_n$ of $S^3\setminus K_n$ from Figure \ref{fig:id:trig:odd}. Then for all angle structures $\alpha \in \mathcal{A}_{X_n}$ and all $\hbar>0$, we have: \begin{equation*} \mathcal{Z}_{\hbar}(X_n,\alpha) \stackrel{\star}{=} \int_{\mathbb{R}+i \mu_{X_n}(\alpha) } \mathfrak{J}_{X_n}(\hbar,\mathsf{x}) e^{\frac{1}{4 \pi \hbar} \mathsf{x} \lambda_{X_n}(\alpha)} d\mathsf{x}, \end{equation*} with the map \begin{equation*} \mathfrak{J}_{X_n}\colon(\hbar,\mathsf{x})\mapsto \left ( \frac{1}{2 \pi \sqrt{\hbar}} \right )^{p+3} \int_{\mathcal{Y}_\alpha} d\mathbf{y} \ e^{\frac { i \mathbf{y}^T Q_n \mathbf{y} + i \mathsf{x}(\mathsf{x}- y_U-y_W) + \mathbf{y}^T \mathcal{W}_n - \pi \mathsf{x} } {2 \pi \hbar} } \dfrac{ \Phi_\mathsf{b}\left ( \frac{y_U}{2 \pi \sqrt{\hbar}} \right ) \Phi_\mathsf{b}\left ( \frac{y_U+\mathsf{x}}{2 \pi \sqrt{\hbar}} \right ) \Phi_\mathsf{b}\left ( \frac{y_W}{2 \pi \sqrt{\hbar}} \right ) }{ \Phi_\mathsf{b}\left (\frac{y_1}{2 \pi \sqrt{\hbar}}\right ) \cdots \Phi_\mathsf{b}\left (\frac{y_p}{2 \pi \sqrt{\hbar}}\right ) } , \end{equation*} where $\mu_{X_n},\lambda_{X_n}, \mathcal{W}_n, Q_n$ are the same as in Theorem \ref{thm:part:func}, and $$\mathcal{Y}_\alpha = \prod_{k=1}^p\left (\mathbb{R} - i (\pi - a_k)\right ) \times \prod_{l=U,W} \left (\mathbb{R} + i (\pi - a_l)\right ).$$ \end{corollary} \begin{proof} We start from the expressions in Theorem \ref{thm:part:func}, and, with $\hbar >0$ fixed, we do the change of variables $y_j = \frac{y'_j}{2\pi \sqrt{\hbar}}$ for $j \in \{1, \ldots, p,U,W\}$ and $\mathsf{x} = \frac{x}{2\pi \sqrt{\hbar}}$. \end{proof} \section{Partition function for the H-triangulations (odd case)}\label{sec:part:H:odd} As stated in the introduction, this section is not essential for understanding the proof of the volume conjecture in Section \ref{sec:vol:conj}, and thus may be skipped at first read. 
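\begin{remark}
The matrices $Q_n$ and $\mathcal{W}_n$ of Theorem \ref{thm:part:func}, which also govern the function $J_{X_n}$ appearing in Theorem \ref{thm:part:func:Htrig:odd} below, are easy to generate programmatically, which can be convenient for numerical experiments. The following minimal Python sketch is given purely as an illustration and is not used anywhere in the proofs; the ordering of the indices, $1,\ldots,p,U,W$, is the one of Theorem \ref{thm:part:func}.
\begin{verbatim}
# The matrices Q_n and W_n of Theorem thm:part:func, for any p >= 1.
# Illustrative sketch only; rows/columns 0..p-1 correspond to T_1..T_p,
# row/column p to U, and row/column p+1 to W.
import numpy as np

def Q_and_W(p):
    Q = np.zeros((p + 2, p + 2))
    for j in range(1, p + 1):
        for k in range(1, p + 1):
            Q[j - 1, k - 1] = min(j, k)
        Q[j - 1, p] = Q[p, j - 1] = -j
    Q[p, p] = p
    Q[p, p + 1] = Q[p + 1, p] = 0.5
    W = np.array([-2 * np.pi * (k * p - k * (k - 1) / 2) for k in range(1, p + 1)]
                 + [(p**2 + p + 1) * np.pi, np.pi])
    return Q, W

# Example: p = 2, i.e. the twist knot K_7.
Q, W = Q_and_W(2)
\end{verbatim}
\end{remark}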
However similar this section looks to the previous Section \ref{sec:part:odd}, subtle differences remain in the equations and calculations, and details should thus be read carefully. Before stating Theorem \ref{thm:part:func:Htrig:odd}, we compute the weights on each edge of the H-triangulation $Y_n$ given in Figure \ref{fig:H:trig:odd} (for $n \geqslant 3$ odd). Recall that we denoted $\overrightarrow{e_0}, \ldots, \overrightarrow{e_{p+1}}, \overrightarrow{e_s}, \overrightarrow{e_d}, \overrightarrow{K_n} \in (Y_n)^1$ the $p+5$ edges in $Y_n$ respectively represented in Figure \ref{fig:H:trig:odd} by arrows with circled $0$, \ldots, circled $p+1$, simple arrow, double arrow and blue simple arrow. For $\alpha=(a_1,b_1,c_1,\ldots,a_p,b_p,c_p,a_U,b_U,c_U,a_V,b_V,c_V,a_W,b_W,c_W,a_Z,b_Z,c_Z) \in \mathcal{S}_{Y_n}$ a shape structure on $Y_n$, the weights of each edge are given by: \begin{itemize} \item $\widehat{\omega}_s(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_s})= 2 a_U+b_V+c_V+a_W+b_W+a_Z $ \item $\widehat{\omega}_d(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_d})= b_U+c_U+c_W+b_Z+c_Z $ \item $\omega_0(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_0})= 2 a_1 + c_1 + 2 a_2 + \ldots + 2 a_p + a_V+c_W $ \item $\omega_1(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_1})= 2b_1+c_2 $ \\ \vspace*{-2mm} \item $\omega_k(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_k})= c_{k-1}+2b_k+c_{k+1} $ \ \ (for $2\leqslant k \leqslant p-1$) \\ \vspace*{-2mm} \item $\omega_p(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_p})= c_{p-1}+2b_p+b_U+b_V+a_W$ \item $\widehat{\omega}_{p+1}(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_{p+1}})= c_p+c_U+a_V+c_V+b_W+b_Z+c_Z $ \item $\widehat{\omega}_{\overrightarrow{K_n}}(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{K_n})= a_Z $ \end{itemize} Note that some of these weights have the same value as the ones for $X_n$ listed in Section \ref{sec:geom} (and are thus also denoted $\omega_j(\alpha)$), and some are specific to $Y_n$ (and are written with a hat). We can now compute the partition function of the Teichm\"uller TQFT for the H-triangulations $Y_n$, and prove the following theorem. We will denote $\mathcal{S}_{Y_n \setminus Z}$ the space of shape structures on every tetrahedron of $Y_n$ except for $Z$. \begin{theorem}\label{thm:part:func:Htrig:odd} Let $n \geqslant 3$ be an odd integer, $p=\frac{n-3}{2}$ and $Y_n$ the one-vertex H-triangulation of the pair $(S^3,K_n)$ from Figure \ref{fig:H:trig:odd}. Then for every $\hbar>0$ and for every $\tau\in \mathcal{S}_{Y_n \setminus Z} \times \overline{\mathcal{S}_Z}$ such that $\omega_{Y_n,\tau}$ vanishes on $\overrightarrow{K_n}$ and is equal to $2\pi$ on every other edge, one has \begin{equation*} \underset{\tiny \begin{matrix}\alpha \to \tau \\ \alpha \in \mathcal{S}_{Y_n} \end{matrix}}{\lim} \Phi_{\mathsf{b}}\left( \frac{\pi-\omega_{Y_n,\alpha}\left (\overrightarrow{K_n}\right )}{2\pi i \sqrt{\hbar}} \right) \mathcal{Z}_{\hbar}(Y_n,\alpha) \stackrel{\star}{=} J_{X_n}(\hbar,0), \end{equation*} where $J_{X_n}$ is defined in Theorem \ref{thm:part:func}. \end{theorem} Before proving Theorem \ref{thm:part:func:Htrig:odd}, let us mention a useful result: the fact that $\Phi_\mathsf{b}$ is bounded on compact horizontal bands. \begin{lemma}\label{lem:PhiB:bounded} Let $\hbar>0$ and $\delta \in (0,\pi/2)$. Then $\displaystyle M_{\delta,\hbar} := \underset{z \in \mathbb{R}+i[\delta,\pi-\delta]}{\max} |\Phi_\mathsf{b}(z)|$ is finite. \end{lemma} \begin{proof} Let $\hbar>0$ and $\delta \in (0,\pi/2)$. 
By contradiction, let us assume that $ M_{\delta,\hbar} = \infty$. Then there exists a sequence $(z_n)_{n\in \mathbb{N}} \in \left (\mathbb{R}+i[\delta,\pi-\delta]\right )^\mathbb{N}$ such that $|\Phi_\mathsf{b}(z_n)| \underset{n \to \infty}{\to} \infty$. If $(\Re(z_n))_{n\in \mathbb{N}}$ is bounded, then $(z_n)_{n\in \mathbb{N}}$ lives in a compact set, on which the continuous function $|\Phi_\mathsf{b}|$ is bounded, a contradiction. If $(\Re(z_n))_{n\in \mathbb{N}}$ admits a subsequence going to $-\infty$ (resp. $\infty$), then the image of this subsequence by $| \Phi_\mathsf{b} |$ would still tend to $\infty$, which contradicts Proposition \ref{prop:quant:dilog} (4). \end{proof} \begin{proof}[Proof of Theorem \ref{thm:part:func:Htrig:odd}] Let $n \geqslant 3$ be an odd integer and $p=\frac{n-3}{2}$. The proof will consist of three steps: computing the partition function $\mathcal{Z}_{\hbar}(Y_n,\alpha)$, applying the dominated convergence theorem as $\alpha \to \tau$, and finally retrieving the value $J_{X_n}(\hbar,0)$ at $\alpha =\tau$. \textit{Step 1. Computing the partition function $\mathcal{Z}_{\hbar}(Y_n,\alpha)$.} As in the proof of Theorem \ref{thm:part:func}, we start by computing the kinematical kernel. We denote by \[ \widehat{\mathbf{t}}=(t_1,\ldots,t_{p-1},t_p,t_U,t_V,t_W,t_Z) \in \mathbb{R}^{Y_n^3} \] the vector whose coordinates are associated to the tetrahedra ($t_j$ for $T_j$). The generic vector in $\mathbb{R}^{Y_n^2}$ corresponding to the face variables will be denoted \[ \widehat{\mathbf{x}}=(e_1,\ldots,e_{p+1},f_1,\ldots,f_p,v,r,s,s',g,u,m) \in \mathbb{R}^{Y_n^2}. \] By definition, the kinematical kernel is: $$\mathcal{K}_{Y_n}\left (\mathbf{\widehat{t}}\right ) = \int_{\widehat{\mathbf{x}} \in \mathbb{R}^{Y_n^{2}}} d\widehat{\mathbf{x}} \prod_{T \in Y_n^3} e^{2 i \pi \varepsilon(T) x_0(T) \mathsf{t}(T)} \delta( x_0(T)- x_1(T)+ x_2(T)) \delta( x_2(T)- x_3(T)+ \mathsf{t}(T)).
$$ Following Lemma \ref{lem:dirac}, we compute from Figure \ref{fig:H:trig:odd} that: $$ \mathcal{K}_{Y_n}\left (\mathbf{\widehat{t}}\right ) = \int_{\widehat{\mathbf{x}} \in \mathbb{R}^{Y_n^{2}}} d\widehat{\mathbf{x}} \int_{\widehat{\mathbf{w}} \in \mathbb{R}^{2 (p+4)}} d\widehat{\mathbf{w}} \ e^{ 2 i \pi \mathbf{\widehat{t}}^T \widehat{S} \widehat{\mathbf{x}}} e^{ -2 i \pi \widehat{\mathbf{w}}^T \widehat{H} \widehat{\mathbf{x}}} e^{ -2 i \pi \widehat{\mathbf{w}}^T \widehat{D} \mathbf{\widehat{t}}}, $$ where the matrices $\widehat{S}, \widehat{H}, \widehat{D}$ are given by: $$ \widehat{S}=\kbordermatrix{ \mbox{} & e_1 & \dots & e_p &e_{p+1} & \omit\vrule &f_1 & \ldots & f_p &\omit\vrule & v & r & s & s' & g & u & m \\ t_1 & 1 & &\push{\low{0}} & \omit\vrule & & & &\omit\vrule & & & & & & & \\ \vdots & &\ddots & & & \omit\vrule & & 0 & &\omit\vrule & & & & 0 & & & \\ t_{p} &\push{0} & 1 & & \omit\vrule & & & &\omit\vrule & & & & & & & \\ \cline{1-1} \cline{2-17} t_U & & & & & \omit\vrule & & & &\omit\vrule &0 &-1 &0 & 0 & 0 &0 & 0 \\ t_V & &\push{0} & & \omit\vrule & & 0 & &\omit\vrule &0 &0 &0 & 0 & -1 & 0 & 0 \\ t_W & & & & & \omit\vrule & & & &\omit\vrule &0 &0 &0 & 0 & 0 & -1 & 0 \\ t_Z & & & & & \omit\vrule & & & &\omit\vrule &0 &0 &0 & 0 & 0 & 0 & 1 },$$ $$ \widehat{H}=\kbordermatrix{ \mbox{} & e_1 & e_2 & \dots &e_p & e_{p+1} & \omit\vrule &f_1 &f_{2} & \ldots & f_p &\omit\vrule & v & r & s & s' & g & u & m \\ w_1 & 1 & -1 & & & & \omit\vrule & 1 & & & &\omit\vrule & & & & & & & \\ \vdots & & \ddots & \ddots & \push{0} & \omit\vrule & &\ddots &\push{0} &\omit\vrule & & & & \low{0} & & & \\ \vdots & \push{0} & \ddots &\ddots & & \omit\vrule & \push{0} &\ddots & &\omit\vrule & & & & & & & \\ w_{p} & & & & 1 & -1 & \omit\vrule & & & &1 &\omit\vrule & & & & & & & \\ \cline{1-1} \cline{2-19} w_{U} & & & & & & \omit\vrule & & & & &\omit\vrule & -1 & 1 & 1 & 0 & 0 & 0 & 0 \\ w_{V} & & & \low{0} & & & \omit\vrule & & & &1 &\omit\vrule & 0 & 0 & 0 & -1 & 1 & 0 & 0 \\ w_{W} & & & & & & \omit\vrule & & & & &\omit\vrule & 1 & -1 & 0 & 0 & 0 & 1 & 0 \\ w_{Z} & & & & & & \omit\vrule & & & & &\omit\vrule & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \cline{1-1} \cline{2-19} w'_1 & -1 & & & & & \omit\vrule & 1 & & & &\omit\vrule & & & & & & & \\ \vdots & & & & & & \omit\vrule & -1 & \ddots & \push{0} &\omit\vrule & & & & \low{0} & & & \\ \vdots & & & & & & \omit\vrule & & \ddots & \ddots & &\omit\vrule & & & & & & & \\ w'_{p} & & & & & & \omit\vrule & \push{0} &-1 &1 &\omit\vrule & & & & & & & \\ \cline{1-1} \cline{2-19} w'_{U} & & & & & & \omit\vrule & & & & &\omit\vrule & 0 & 0 & 1 & 0 & -1 & 0 & 0 \\ w'_{V} & & & & & & \omit\vrule & & & &1 &\omit\vrule & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ w'_{W} & & & & & -1 & \omit\vrule & & & & &\omit\vrule & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ w'_{Z} & & & & & & \omit\vrule & & & & &\omit\vrule & 0 & 0 & 1 & -1 & 0 & 0 & 0 },$$ $$ \widehat{D}=\kbordermatrix{ \mbox{} & t_1 & \dots & t_{p} & \omit\vrule & t_{U} & t_{V} & t_{W} & t_{Z} \\ w_1 & & & & & & & & \\ \vdots & & & & & & & & \\ w_{p} & & & & & & & \\ \cline{1-1} w_{U} & & & & & 0 & & & \\ w_{V} & & & & & & & & \\ w_{W} & & & & & & & & \\ w_{Z} & & & & & & & & \\ \cline{1-1} \cline{2-9} w'_1 & 1 & & & & & & & \\ \vdots & &\ddots & & & & & 0 & \\ w'_{p} & & & & & & & & \\ \cline{1-1} w'_{U} & & & & &\ddots & & & \\ w'_{V} & & & & & & & & \\ w'_{W} & & 0 & & & & &\ddots & \\ w'_{Z} & & & & & & & & 1 }. 
$$ Let us define $S$ the submatrix of $\widehat{S}$ without the $m$-column, $H$ the submatrix of $\widehat{H}$ without the $m$-column and the $w_V$-row, $R_V$ this very $w_V$-row of $\widehat{H}$, $D$ the submatrix of $\widehat{D}$ without the $w_V$-row, $\mathbf{x}$ the subvector of $\widehat{\mathbf{x}}$ without the variable $m$ and $\mathbf{w}$ the subvector of $\widehat{\mathbf{w}}$ without the variable $w_V$. Finally let us denote $f_{\widehat{\mathbf{t}},w_V}(\mathbf{x}):=e^{2i \pi (\widehat{\mathbf{t}}^T S - w_V R_V)\mathbf{x}}$. We remark that $H$ is invertible (whereas $\widehat{H}$ was not) and $\det(H)=-1$. Hence, by using multi-dimensional Fourier transform and the integral definition of the Dirac delta function, we compute: \begin{align*} &\mathcal{K}_{Y_n}\left (\mathbf{\widehat{t}}\right ) = \int_{\widehat{\mathbf{x}} \in \mathbb{R}^{Y_n^{2}}} d\widehat{\mathbf{x}} \int_{\widehat{\mathbf{w}} \in \mathbb{R}^{2 (p+4)}} d\widehat{\mathbf{w}} \ e^{ 2 i \pi \mathbf{\widehat{t}}^T \widehat{S} \widehat{\mathbf{x}}} e^{ -2 i \pi \widehat{\mathbf{w}}^T \widehat{H} \widehat{\mathbf{x}}} e^{ -2 i \pi \widehat{\mathbf{w}}^T \widehat{D} \mathbf{\widehat{t}}}\\ &= \int_{m \in \mathbb{R}} d m \int_{w_V \in \mathbb{R}} d w_V \int_{\mathbf{x} \in \mathbb{R}^{2p+7}} d\mathbf{x} \int_{\mathbf{w} \in \mathbb{R}^{2p+7}} d\mathbf{w} \ e^{ 2 i \pi t_Z m} e^{ -2 i \pi w_V R_V \mathbf{x}} e^{ 2 i \pi \mathbf{\widehat{t}}^T S \mathbf{x}} e^{ -2 i \pi \mathbf{w}^T H \mathbf{x}} e^{ -2 i \pi \mathbf{w}^T D \mathbf{\widehat{t}}}\\ &=\int_{m \in \mathbb{R}} d m \ e^{ 2 i \pi t_Z m} \int_{w_V \in \mathbb{R}} d w_V \int_{\mathbf{w} \in \mathbb{R}^{2p+7}} d\mathbf{w} \ e^{ -2 i \pi \mathbf{w}^T D \mathbf{\widehat{t}}} \int_{\mathbf{x} \in \mathbb{R}^{2p+7}} d\mathbf{x} \ f_{\widehat{\mathbf{t}},w_V}(\mathbf{x}) e^{ -2 i \pi \mathbf{w}^T H \mathbf{x}}\\ &= \delta(-t_Z) \int_{w_V \in \mathbb{R}} d w_V \int_{\mathbf{w} \in \mathbb{R}^{2p+7}} d\mathbf{w} \ e^{ -2 i \pi \mathbf{w}^T D \mathbf{\widehat{t}}} \ \mathcal{F}\left (f_{\widehat{\mathbf{t}},w_V}\right ) (H^T \mathbf{w})\\ &= \delta(-t_Z) \int_{w_V \in \mathbb{R}} d w_V \ \frac{1}{| \det(H)|} \mathcal{F}\left (\mathcal{F}\left (f_{\widehat{\mathbf{t}},w_V}\right )\right ) (H^{-1} D \mathbf{\widehat{t}}) \\ &= \delta(-t_Z) \int_{w_V \in \mathbb{R}} d w_V \ f_{\widehat{\mathbf{t}},w_V} (-H^{-1} D \mathbf{\widehat{t}})\\ &= \delta(-t_Z) \int_{w_V \in \mathbb{R}} d w_V \ e^{2i \pi (\widehat{\mathbf{t}}^T S - w_V R_V) (-H^{-1} D \mathbf{\widehat{t}})}\\ &= \delta(-t_Z) e^{2i \pi \widehat{\mathbf{t}}^T (-S H^{-1} D) \mathbf{\widehat{t}}} \int_{w_V \in \mathbb{R}} d w_V \ e^{-2i \pi w_V (-R_V H^{-1} D \mathbf{\widehat{t}})}\\ &= \delta(-t_Z) e^{2i \pi \widehat{\mathbf{t}}^T (-S H^{-1} D) \mathbf{\widehat{t}}} \delta (-R_V H^{-1} D \mathbf{\widehat{t}}). 
\end{align*} We can now compute $H^{-1}=$ $$ \kbordermatrix{ \mbox{} & w_{1} & w_{2} & \ldots & w_{p-1} & w_{p} & \omit\vrule & w_{U} & w_{W} & w_{Z} & \omit\vrule & w'_{1} & w'_{2} & \ldots & w'_{p-1} & w'_{p} & \omit\vrule & w'_{U} & w'_{V} & w'_{W} & w'_{Z} \\ e_{1} & 0 & & \cdots & & 0 & \omit\vrule & 1 & 1 & -1 & \omit\vrule & -1 & -1 & \push{\cdots} & -1 & \omit\vrule & 0 & 1 & 0 & 0 \\ e_{2} & -1 & 0 & & & & \omit\vrule & 2 & 2 & -2 & \omit\vrule & -1 & -2 & \push{\cdots} & -2 & \omit\vrule & 0 & 2 & 0 & 0 \\ \low{\vdots} & -1 & -1 & \ddots & & \vdots & \omit\vrule & & & & \omit\vrule & & & \ddots & & \vdots & \omit\vrule & & & & \\ & \vdots & & \ddots & 0 & 0 & \omit\vrule & \vdots & \vdots & \vdots & \omit\vrule & \vdots & \vdots & &\text{\tiny {\(1-p\)}} & \text{\tiny {\(1-p\)}} & \omit\vrule & \vdots & \vdots & \vdots & \vdots \\ e_{p} & & & & -1 & 0 & \omit\vrule & & & & \omit\vrule & & & & \text{\tiny {\(1-p\)}} & -p & \omit\vrule & & & & \\ e_{p+1} & -1 & & \cdots & & -1 & \omit\vrule & \text{\tiny {\(p+1\)}} & \text{\tiny {\(p+1\)}} & \text{\tiny {\(-p-1\)}} & \omit\vrule & -1 & -2 &\cdots & \text{\tiny {\(1-p\)}} & -p & \omit\vrule & 0 & \text{\tiny {\(p+1\)}} & 0 & 0 \\ \cline{1-1} \cline{2-21} f_{1} & & & & & & \omit\vrule & 1 & 1 & -1 & \omit\vrule & 0 & -1 & \push{\cdots} & -1 & \omit\vrule & 0 & 1 & 0 & 0 \\ f_{2} & & & & & & \omit\vrule & & & & \omit\vrule & 0 &0 & \ddots & & \low{\vdots} & \omit\vrule & & & & \\ \vdots & & & 0 & & & \omit\vrule & \vdots & \vdots & \vdots & \omit\vrule & \vdots & & \ddots & -1 & -1 & \omit\vrule & \vdots & \vdots & \vdots & \vdots \\ f_{p-1} & & & & & & \omit\vrule & & & & \omit\vrule & 0 & & & 0 & -1 & \omit\vrule & & & & \\ f_{p} & & & & & & \omit\vrule & 1 & 1 & -1 & \omit\vrule & 0 & & \cdots & & 0 & \omit\vrule & 0 & 1 & 0 & 0 \\ \cline{1-1} \cline{2-21} v & -1 & & \cdots & & -1 & \omit\vrule & \text{\tiny {\(p+1\)}} & \text{\tiny {\(p+1\)}} &\text{\tiny {\(-p-1\)}} & \omit\vrule & -1 & -2 & \push{\cdots} & -p & \omit\vrule & 0 & \text{\tiny {\(p+1\)}} & 1 & 0 \\ r & -1 & & \cdots & & -1 & \omit\vrule & \text{\tiny {\(p+2\)}} & \text{\tiny {\(p+1\)}} &\text{\tiny {\(-p-2\)}} & \omit\vrule & -1 & -2 & \push{\cdots} & -p & \omit\vrule & 0 & \text{\tiny {\(p+1\)}} & 1 & 0 \\ s & & & & & & \omit\vrule & 0 & 0 & 1 & \omit\vrule & & & & & & \omit\vrule & 0 & 0 & 0 & 0 \\ s' & & & \low{0} & & & \omit\vrule & 0 & 0 & 1 & \omit\vrule & & & \low{0} & & & \omit\vrule & 0 & 0 & 0 & -1 \\ g & & & & & & \omit\vrule & 0 & 0 & 1 & \omit\vrule & & & & & & \omit\vrule & -1 & 0 & 0 & 0 \\ u & & & & & & \omit\vrule & 1 & 1 & -1 & \omit\vrule & & & & & & \omit\vrule & 0 & 0 & 0 & 0 },$$ and thus compute that $-R_V H^{-1} D \mathbf{\widehat{t}}=t_U-t_V-t_Z$ and $$-S H^{-1} D= \kbordermatrix{ \mbox{} &t_1 &t_2 & \cdots & t_{p-1} & t_p & \omit\vrule & t_U & t_V & t_W &t_Z \\ t_1 & 1 & 1 & \cdots & 1 & 1 & \omit\vrule & 0 & -1 & 0 & 0 \\ t_2 & 1 & 2 & \cdots & 2 & 2 & \omit\vrule & 0 & -2 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \omit\vrule & \vdots & \vdots & \vdots & \vdots \\ t_{p-1} & 1 & 2 & \cdots & p-1 & p-1 & \omit\vrule & 0 & -(p-1) & 0 & 0 \\ t_p & 1 & 2 & \cdots & p-1 & p & \omit\vrule & 0 & -p & 0 & 0 \\ \cline{1-1} \cline{2-11} t_U & -1 & -2 & \cdots & -(p-1) & -p & \omit\vrule & 0 & p+1 & 1 & 0 \\ t_V & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & -1 & 0 & 0 & 0 \\ t_W & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & 0 & 0 & 0 & 0 \\ t_Z & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & 0 & 0 & 0 & 0 }.$$ Since $\widehat{\mathbf{t}}^T (-S 
H^{-1} D) \mathbf{\widehat{t}} = \mathbf{t}^T Q_n \mathbf{t} + (t_V-t_U)(t_1+\ldots+p t_p-pt_U)$, where $\mathbf{t}=(t_1,\ldots,t_p,t_U,t_W)$ and $Q_n$ is defined in Theorem \ref{thm:part:func}, we conclude that the kinematical kernel can be written as \[ \mathcal{K}_{Y_n}(\mathbf{\widehat{t}})= e^{2 i \pi \left( \mathbf{t}^T Q_n \mathbf{t} +(t_V - t_U)(t_1 + \cdots + p t_p - p t_U) \right)} \delta(t_Z) \delta(t_U - t_V - t_Z). \] We now compute the dynamical content. We denote $\alpha=(a_1,b_1,c_1,\ldots,a_W,b_W,c_W,a_Z,b_Z,c_Z)$ a general vector in $\mathcal{S}_{Y_n}$. Let $\hbar>0$. The dynamical content $\mathcal{D}_{\hbar,Y_n}(\mathbf{\widehat{t}},\alpha)$ is equal to: \[ e^{\frac{1}{\sqrt{\hbar}} \widehat{C}(\alpha)^T \mathbf{\widehat{t}}} \dfrac{ \Phi_\mathsf{b}\left (t_U + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_U)\right ) \Phi_\mathsf{b}\left (t_V + \frac{i}{2 \pi \sqrt{\hbar}}(\pi-a_V)\right ) \Phi_\mathsf{b}\left (t_W + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_W)\right ) }{ \Phi_\mathsf{b}\left (t_1 - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_1)\right ) \cdots \Phi_\mathsf{b}\left (t_p - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_p)\right ) \Phi_\mathsf{b}\left (t_Z - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_Z)\right ) }, \] where $\widehat{C}(\alpha) = (c_1, \ldots, c_p,c_U,c_V, c_W, c_Z)^T$. Let us come back to the computation of the partition function of the Teichm\"uller TQFT. By definition, \[ \mathcal{Z}_{\hbar}(Y_n,\alpha)= \int_{\mathbf{\widehat{t}}\in\mathbb{R}^{Y_n^{3}}} d\mathbf{\widehat{t}} \mathcal{K}_{Y_n}(\mathbf{\widehat{t}}) \mathcal{D}_{\hbar,Y_n}(\mathbf{\widehat{t}},\alpha). \] We begin by integrating over the variables $t_V$ and $t_Z$, which consists in removing the two Dirac delta functions $\delta(t_Z)$ and $\delta(t_U - t_V - t_Z)$ in the kinematical kernel and replacing $t_Z$ by $0$ and $t_V$ by $t_U$ in the other terms. Therefore, we have $$ \Phi_{\mathsf{b}}\left( \frac{\pi-a_Z}{2\pi i \sqrt{\hbar}} \right) \mathcal{Z}_{\hbar}(Y_n,\alpha) = \int_{\mathbf{t}\in\mathbb{R}^{p+2}} d\mathbf{t} e^{2 i \pi \mathbf{t}^T Q_n\mathbf{t}} e^{\frac{1}{\sqrt{\hbar}} (c_1 t_1 + \cdots + c_p t_p + (c_U + c_V)t_U + c_W t_W)} \Pi(\mathbf{t},\alpha,\hbar),$$ where $\mathbf{t} =(t_1, \ldots,t_p,t_U,t_W)$ and $$\Pi(\mathbf{t},\alpha,\hbar) := \frac{ \Phi_\mathsf{b}\left (t_U + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_U)\right ) \Phi_\mathsf{b}\left (t_U + \frac{i}{2 \pi \sqrt{\hbar}}(\pi-a_V)\right ) \Phi_\mathsf{b}\left (t_W + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_W)\right ) }{ \Phi_\mathsf{b}\left (t_1 - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_1)\right ) \cdots \Phi_\mathsf{b}\left (t_p - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_p)\right ) }. $$ \textit{Step 2. Applying the dominated convergence theorem for $\alpha \to \tau$.} For the rest of the proof, let $$\tau = (a^\tau_1,b^\tau_1,c^\tau_1,\ldots,a^\tau_Z,b^\tau_Z,c^\tau_Z) \in \mathcal{S}_{Y_n \setminus Z} \times \overline{\mathcal{S}_Z}$$ be such that $\omega_j(\tau) = 2 \pi$ for all $j \in \{0,1, \ldots, p-1,p\}$, $\widehat{\omega}_j(\tau) = 2 \pi$ for all $j \in \{s,d,p+1\}$ and $\widehat{\omega}_{\overrightarrow{K_n}}(\tau)=a^\tau_Z=0$. Let $\delta>0$ such that there exists a neighborhood $\mathfrak{U}$ of $\tau$ in $\mathcal{S}_{Y_n \setminus Z} \times \overline{\mathcal{S}_Z}$ such that for each $\alpha \in \mathfrak{U} \cap \mathcal{S}_{Y_n}$ the $3p+9$ first coordinates $a_1, \ldots, c_W$ of $\alpha$ live in $(\delta,\pi-\delta)$. 
Then for all $\alpha \in \mathfrak{U}\cap \mathcal{S}_{Y_n}$, for any $j \in \{1, \ldots,p,U,V,W\}$, and for any $t \in \mathbb{R}$, we have $$\left |e^{\frac{1}{ \sqrt{\hbar}} c_j t} \Phi_\mathsf{b}\left (t \pm \frac{i}{ 2 \pi \sqrt{\hbar}}(b_j+c_j)\right )^{\pm 1}\right | \leqslant M_{\delta,\hbar} \ e^{-\frac{1}{\sqrt{\hbar}} \delta |t| }.$$ Indeed, this is immediate for $t \leqslant 0$ by Lemma \ref{lem:PhiB:bounded} and the fact that $c_j>\delta$. For $t \geqslant 0$, one uses the fact that $b_j > \delta$ together with Proposition \ref{prop:quant:dilog} (1) and (2) to remark that: $$ \left |\Phi_\mathsf{b}\left (t + \frac{i}{ 2 \pi \sqrt{\hbar}}(b_j+c_j)\right )\right | = \left |\Phi_\mathsf{b}\left (-t + \frac{i}{ 2 \pi \sqrt{\hbar}}(b_j+c_j)\right )\right | \left |e^{i \pi \left ( \frac{i}{ 2 \pi \sqrt{\hbar}}(b_j+c_j)\right )^2}\right | \leqslant M_{\delta,\hbar} e^{-\frac{1}{\sqrt{\hbar}} (b_j+c_j) t}.$$ Consequently, the previous integrand is dominated uniformly over $\mathfrak{U}\cap \mathcal{S}_{Y_n}$, i.e. $$\left |e^{2 i \pi \mathbf{t}^T Q_n\mathbf{t}} e^{\frac{1}{\sqrt{\hbar}} (c_1 t_1 + \cdots + c_p t_p + (c_U + c_V)t_U + c_W t_W)} \Pi(\mathbf{t},\alpha,\hbar)\right | \leqslant \left (M_{\delta,\hbar}\right )^{p+3} e^{-\frac{1}{\sqrt{\hbar}} \delta \left ( |t_1|+\ldots+|t_p|+2|t_U|+|t_W| \right )} $$ for all $\alpha \in \mathfrak{U}\cap \mathcal{S}_{Y_n}$ and for all $\mathbf{t} \in \mathbb{R}^{p+2}$. Since the right-hand side of this inequality is integrable over $\mathbb{R}^{p+2}$, we can then apply the dominated convergence theorem to conclude that $\Phi_{\mathsf{b}}\left( \frac{\pi-a_Z}{2\pi i \sqrt{\hbar}} \right) \mathcal{Z}_{\hbar}(Y_n,\alpha) $ tends to $$ \int_{\mathbf{t}\in\mathbb{R}^{p+2}} d\mathbf{t} e^{2 i \pi \mathbf{t}^T Q_n\mathbf{t}} e^{\frac{1}{\sqrt{\hbar}} (c^\tau_1 t_1 + \cdots + c^\tau_p t_p + (c^\tau_U + c^\tau_V)t_U + c^\tau_W t_W)} \Pi(\mathbf{t},\tau,\hbar)$$ as $\alpha \to \tau$ with $\alpha \in \mathcal{S}_{Y_n}$ (recall that $c_j^\tau$ denotes the $c_j$ coordinate of $\tau$). \textit{Step 3. Retrieving the value $J_{X_n}(\hbar,0)$ at $\alpha =\tau$.} Let us now prove that $$ \int_{\mathbf{t}\in\mathbb{R}^{p+2}} d\mathbf{t} e^{2 i \pi \mathbf{t}^T Q_n\mathbf{t}} e^{\frac{1}{\sqrt{\hbar}} (c^\tau_1 t_1 + \cdots + c^\tau_p t_p + (c^\tau_U + c^\tau_V)t_U + c^\tau_W t_W)} \Pi(\mathbf{t},\tau,\hbar) = J_{X_n}(\hbar,0).$$ We first do the following change of variables: \begin{itemize} \item $y'_k = t_k - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a^\tau_k)$ for $1 \leqslant k \leqslant p$, \item $y'_l = t_l + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a^\tau_l)$ for $l\in\{U,W\}$, \end{itemize} and we denote $\mathbf{y'}=\left (y'_1, \ldots, y'_{p}, y'_U, y'_W\right )^T$. Note that the term $\Phi_\mathsf{b}\left (t_U + \frac{i}{2 \pi \sqrt{\hbar}}(\pi-a^\tau_V)\right )$ will become $\Phi_\mathsf{b}\left (y'_U + \frac{i}{2 \pi \sqrt{\hbar}}(a^\tau_U-a^\tau_V)\right )= \Phi_\mathsf{b}\left (y'_U \right ),$ since $a^\tau_U-a^\tau_V = (\widehat{\omega}_{s}(\tau)- 2\pi)+(\widehat{\omega}_{d}(\tau)- 2\pi) = 0$. We also denote $$\mathcal{Y}'_{\hbar,\tau} := \prod_{k=1}^p\left (\mathbb{R} - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a^\tau_k)\right ) \times \prod_{l=U,W} \left (\mathbb{R} + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a^\tau_l)\right ),$$ the subset of $\mathbb{C}^{p+2}$ on which the variables in $\mathbf{y'}$ reside.
By a computation similar to the one in the proof of Theorem \ref{thm:part:func}, we obtain \begin{align*} &\int_{\mathbf{t}\in\mathbb{R}^{p+2}} d\mathbf{t} e^{2 i \pi \mathbf{t}^T Q_n\mathbf{t}} e^{\frac{1}{\sqrt{\hbar}} (c^\tau_1 t_1 + \cdots + c^\tau_p t_p + (c^\tau_U + c^\tau_V) t_U + c^\tau_W t_W)} \Pi(\mathbf{t},\tau,\hbar)\\ &\stackrel{\star}{=} \int_{\mathbf{y'} \in \mathcal{Y}'_{\hbar,\tau}} d\mathbf{y'} e^{ 2 i \pi \mathbf{y}^{\prime T} Q_n \mathbf{y'} + \frac{1}{\sqrt{\hbar}} \mathcal{W}(\tau)^T \mathbf{y'} } \dfrac{ \Phi_\mathsf{b}\left (y'_U\right ) \Phi_\mathsf{b}\left (y'_U \right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) }, \end{align*} where for any $\alpha \in \mathcal{S}_{Y_n \setminus Z}$, $\mathcal{W}(\alpha)$ is defined as $$\mathcal{W}(\alpha):= 2 Q_n \Gamma(\alpha)+C(\alpha)+(0,\ldots,0,c_V,0)^T,$$ following the definitions of $\Gamma(\alpha)$ and $C(\alpha)$ in the proof of Theorem \ref{thm:part:func}. Hence, in view of the value of $J_{X_n}(\hbar,0)$, it only remains to prove that $\mathcal{W}(\tau) = \mathcal{W}_n$. Let us denote by $\Lambda: (u_1,\ldots,u_p,u_U,u_V,u_W) \mapsto (u_1,\ldots,u_p,u_U,u_W)$ the process of forgetting the second-to-last coordinate. Then obviously $C(\alpha) = \Lambda (\widetilde{C}(\alpha))$. Recall from Lemma \ref{lem:2QGamma+C} that $\widetilde{\mathcal{W}}(\alpha) = 2 \widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha)$ depends almost exclusively on the edge weights of the angles in $X_n$. Thus, a direct calculation shows that for any $\alpha \in \mathcal{S}_{Y_n \setminus Z}$, we have \begin{equation*} \label{eqn:v:Alpha:Odd} \mathcal{W}(\alpha) = \Lambda(\widetilde{\mathcal{W}}(\alpha)) + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ c_V - 4(\pi-a_U)+3(\pi-a_V)-(\pi-a_W) \\ a_U-a_V \end{bmatrix}. \end{equation*} Now, if we specify $\alpha=\tau$, then the weights $\omega_{X_n,j}(\alpha)$ appearing in $\Lambda(\widetilde{\mathcal{W}}(\alpha))$ will all be equal to $2\pi$, since $\omega_s(\tau) =\widehat{\omega}_s(\tau)-\widehat{\omega}_{\overrightarrow{K_n}}(\tau) = 2 \pi$ and $$\omega_{p+1}(\tau) =\widehat{\omega}_d(\tau)+\widehat{\omega}_{p+1}(\tau) - 2\left (\pi-\widehat{\omega}_{\overrightarrow{K_n}}(\tau)\right ) = 2\pi.$$ Hence $$\mathcal{W}(\tau)= \mathcal{W}_{n} + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \pi - \frac{1}{2}\lambda_{X_n}(\tau)+ c^\tau_V - 4(\pi-a^\tau_U)+3(\pi-a^\tau_V)-(\pi-a^\tau_W) \\ a^\tau_U-a^\tau_V \end{bmatrix}. $$ Recall that $a^\tau_U-a^\tau_V=0$, and remark finally that \begin{align*} &\pi - \frac{1}{2}\lambda_{X_n}(\tau)+ c^\tau_V - 4(\pi-a^\tau_U)+3(\pi-a^\tau_V)-(\pi-a^\tau_W) \\ &= 3a^\tau_U-2a^\tau_V+a^\tau_W+b^\tau_W-\pi\\ &= 2(a^\tau_U-a^\tau_V)+(a^\tau_U-c^\tau_W)\\ &= -(\widehat{\omega}_{d}(\tau)- 2\pi) -\widehat{\omega}_{\overrightarrow{K_n}}(\tau)=0. \end{align*} Hence $\mathcal{W}(\tau) = \mathcal{W}_n$ and the theorem is proven. \end{proof} \section{Proving the volume conjecture (odd case)}\label{sec:vol:conj} We now arrive at the final and most technical part of this paper, namely the proof of the volume conjecture using detailed analytical methods. We advise the reader to be familiar with the proofs and notations of Section \ref{sec:part:odd} before reading this section. Having read Section \ref{sec:part:H:odd} is not as essential, but can nevertheless help in understanding some arguments in the first three subsections below.
The main result is as follows: \begin{theorem} \label{thm:vol:conj} Let $n$ be an odd integer greater than or equal to $3$. Let $J_{X_n}$ and $\mathfrak{J}_{X_n}$ be the functions defined in Theorem \ref{thm:part:func} and Corollary \ref{cor:part:func}. Then we have: $$ \lim_{\hbar \to 0^+} 2\pi \hbar \log \vert J_{X_n}(\hbar,0) \vert = \lim_{\hbar \to 0^+} 2\pi \hbar \log \vert \mathfrak{J}_{X_n}(\hbar,0) \vert = -\emph{Vol}(S^3\backslash K_n).$$ In other words, the Teichm\"uller TQFT volume conjecture of Andersen-Kashaev is proved for the infinite family of odd twist knots. \end{theorem} The proof of Theorem \ref{thm:vol:conj} will be split into several lemmas. The general idea is to translate the expressions in Theorem \ref{thm:vol:conj} into asymptotics of the form of Theorem \ref{thm:SPM}, and to check one by one that the assumptions of Theorem \ref{thm:SPM} are satisfied, i.e. that we are allowed to apply the saddle point method. Technical analytical lemmas are required for the asymptotics and error bounds, notably due to the fact that we work with \textit{unbounded} integration contours. More precisely, here is an overview of Section \ref{sec:vol:conj}: \begin{itemize} \item \underline{Sections \ref{sub:S:U}, \ref{sub:ReS:Yalpha} and \ref{sub:ReS:Y0}:} For the ``classical'' potential $S$, we check the prerequisites for the saddle point method, notably that $\Re(S)$ attains a maximum of $-\mathrm{Vol}(S^3\setminus K_n)$ at the complete angle structure (from Lemma \ref{lem:complex:sym} to Lemma \ref{lem:-vol}). This part refers to Thurston's gluing equations and the properties of the classical dilogarithm. \item \underline{Section \ref{sub:asym:Y0}:} We apply the saddle point method to the classical potential $S$ on a compact integration contour (Proposition \ref{prop:compact:contour:S:SPM}) and we then deduce asymptotics when the contour is unbounded (Lemma \ref{lem:unbounded:contour} and Proposition \ref{prop:all:contour:S}). This part is where the analytical arguments start. \item \underline{Section \ref{sub:asym:PhiB}:} We compare the classical and quantum dilogarithms $\mathrm{Li}_2$ and $\Phi_{\mathsf{b}}$ in the asymptotic regime $\mathsf{b} \to 0^+$ (Lemmas \ref{lem:parity}, \ref{lem:unif:bound}, \ref{lem:unif:bound:neg}) and deduce asymptotics for the quantum potential $S_\mathsf{b}$ (Proposition \ref{prop:all:contour:Sb}). This part, and Lemma \ref{lem:unif:bound} in particular, contains the heart of the proof, and needs several new analytical arguments to establish uniform bounds on an unbounded integration contour. \item \underline{Section \ref{sub:asym:hbar}:} In order to get back to the functions $J_{X_n}$ and $\mathfrak{J}_{X_n}$ of Theorem \ref{thm:vol:conj}, we compare the two previous potentials with a second quantum potential $S'_\mathsf{b}$ related to $J_{X_n}$ (Remark \ref{rem:J':S'b}) and we deduce the corresponding asymptotics for $S'_\mathsf{b}$ (Lemma \ref{lem:unif:bound:hbar} and Proposition \ref{prop:all:contour:S'b}). This part uses analytical arguments similar to those of the previous one, and is needed because of the particular construction of the Teichm\"uller TQFT partition function and the subtle difference between $\frac{1}{\mathsf{b}^2}$ and $\frac{1}{\hbar}$. \item \underline{Section \ref{sub:conjvol:conclusion}:} We conclude with the (now short) proof of Theorem \ref{thm:vol:conj} and we offer comments on how our techniques could be re-used in future work. \end{itemize} Let us finish this introduction by establishing some notations.
For the remainder of this section, $n$ will be an odd integer greater than or equal to $3$ and $p=\frac{n-3}{2}$. Let us now recall and define some notations: \begin{itemize} \item We denote the following product of open ``horizontal bands'' in $\mathbb{C}$ by $$\mathcal{U}:= \prod_{k=1}^p\left (\mathbb{R} + i (-\pi,0) \right ) \times \prod_{l=U,W} \left (\mathbb{R} + i (0,\pi)\right ),$$ an open subset of $\mathbb{C}^{p+2}$. \item For any angle structure $\alpha = (a_1, \ldots, c_W) \in \mathcal{A}_{X_n}$, we denote $$\mathcal{Y}_\alpha := \prod_{k=1}^p\left (\mathbb{R} - i (\pi - a_k)\right ) \times \prod_{l=U,W} \left (\mathbb{R} + i (\pi - a_l)\right ),$$ an affine real plane of real dimension $p+2$ in $\mathbb{C}^{p+2}$, contained in the band $\mathcal{U}$. \item For the complete angle structure $\alpha^0 = (a^0_1, \ldots, c^0_W) \in \mathcal{A}_{X_n}$ (which exists because of Theorem \ref{thm:geometric}), we denote $$\mathcal{Y}^0 := \mathcal{Y}_{\alpha^0}.$$ \item We define the potential function $S\colon \mathcal{U} \to \mathbb{C}$, a holomorphic function of $p+2$ complex variables, by: $$S(\mathbf{y}) = i \mathbf{y}^T Q_n \mathbf{y} + \mathbf{y}^T \mathcal{W}_n + i \mathrm{Li}_2\left (-e^{y_1}\right ) + \cdots + i \mathrm{Li}_2\left (-e^{y_p}\right ) - 2 i \mathrm{Li}_2\left (-e^{y_U}\right ) - i \mathrm{Li}_2\left (-e^{y_W}\right ), $$ where $Q_n$ and $\mathcal{W}_n$ are as in Theorem \ref{thm:part:func}. \end{itemize} \subsection{Properties of the potential function $S$ on the open band $\mathcal{U}$}\label{sub:S:U} The following lemma will be very useful to prove the invertibility of the holomorphic hessian of the potential $S$. \begin{lemma}\label{lem:complex:sym} Let $m\geqslant 1$ be an integer, and let $S_1, S_2 \in M_m(\mathbb{R})$ be such that $S_1$ is symmetric positive definite and $S_2$ is symmetric. Then the complex symmetric matrix $S_1 + i S_2$ is invertible. \end{lemma} \begin{proof} Let $v \in \mathbb{C}^m$ be such that $(S_1 + i S_2) v = 0$. Let us prove that $v=0$. Since $S_1$ and $S_2$ are real symmetric, the numbers $\overline{v}^T S_1 v$ and $\overline{v}^T S_2 v$ are real, as values of Hermitian forms. Now, since $(S_1 + i S_2) v = 0$, we have $$0 = \overline{v}^T (S_1 + i S_2) v = \overline{v}^T S_1 v +i \, \overline{v}^T S_2 v,$$ thus, by taking the real part, we get $0=\overline{v}^T S_1 v$, which implies $v=0$: indeed, writing $v=a+ib$ with $a,b \in \mathbb{R}^m$, we have $\overline{v}^T S_1 v = a^T S_1 a + b^T S_1 b$, which is positive unless $a=b=0$ since $S_1$ is positive definite. \end{proof} We can now prove that the holomorphic hessian is non-degenerate at each point. \begin{lemma}\label{lem:hess} For every $\mathbf{y}\in \mathcal{U}$, the holomorphic hessian of $S$ is given by: $$ \mathrm{Hess}(S)(\mathbf{y}) = \left (\dfrac{\partial^2 S}{\partial y_j \partial y_k}\right )_{j,k\in\{1,\ldots,p,U,W\}} (\mathbf{y}) = 2 i Q_n + i \begin{pmatrix} \frac{-1}{1+e^{-y_1}} & \ & 0 &0 &0 \\ \ & \ddots & \ & \vdots & \vdots \\ 0 & \ & \frac{-1}{1+e^{-y_p}} &0 &0 \\ 0 & \cdots & 0 &\frac{2}{1+e^{-y_U}} &0 \\ 0 & \cdots & 0 &0 &\frac{1}{1+e^{-y_W}} \end{pmatrix} .$$ Furthermore, $\mathrm{Hess}(S)(\mathbf{y})$ has non-zero determinant for every $\mathbf{y}\in \mathcal{U}$. \end{lemma} \begin{proof} The first part follows from the double differentiation of $S$ and the fact that $$\dfrac{\partial \mathrm{Li}_2(-e^y)}{\partial y} = - \mathrm{Log}(1+e^y)$$ for $y \in \mathbb{R} \pm i(0,\pi)$ (note that $y \in \mathbb{R} \pm i(0,\pi)$ implies $-e^y \in \mathbb{C} \setminus \mathbb{R}$). Let us prove the second part. Let $\mathbf{y}\in \mathcal{U}$.
Then $\Im(\mathrm{Hess}(S)(\mathbf{y}))$ is a symmetric matrix (as the sum of $Q_n$ and a diagonal matrix), and $$ \Re(\mathrm{Hess}(S)(\mathbf{y}))= \begin{pmatrix} -\Im\left (\frac{-1}{1+e^{-y_1}}\right ) & \ & 0 &0 &0 \\ \ & \ddots & \ & \vdots & \vdots \\ 0 & \ & -\Im\left (\frac{-1}{1+e^{-y_p}}\right ) &0 &0 \\ 0 & \cdots & 0 &-\Im\left (\frac{2}{1+e^{-y_U}}\right ) &0 \\ 0 & \cdots & 0 &0 &-\Im\left (\frac{1}{1+e^{-y_W}}\right ) \end{pmatrix}$$ is diagonal with negative coefficients (because $\Im(y_1), \ldots, \Im(y_p) \in(-\pi,0)$ and $\Im(y_U),\Im(y_W)\in (0,\pi)$). Hence it follows from Lemma \ref{lem:complex:sym} that $\mathrm{Hess}(S)(\mathbf{y})$ is invertible for every $\mathbf{y}\in \mathcal{U}$. \end{proof} The following lemma establishes an equivalence between critical points of the potential $S$ and complex shape structures that solve the balancing and completeness equations. \begin{lemma}\label{lem:grad:thurston} Let us consider the diffeomorphism $$\psi := \left (\prod_{T \in \{T_1,\ldots,T_p,U,W\} } \psi_T \right )\colon (\mathbb{R}+i\mathbb{R}_{>0})^{p+2} \to \mathcal{U},$$ where $\psi_T$ was defined in Section \ref{sub:thurston}. Then $\psi$ induces a bijective mapping between\\ $\{\mathbf{y} \in \mathcal{U}; \nabla S(\mathbf{y}) = 0\}$ and $$\left \{\mathbf{z}=(z_1,\ldots,z_p,z_U,z_W) \in (\mathbb{R}+i\mathbb{R}_{>0})^{p+2} | \mathcal{E}_{X_n,0}(\mathbf{z}) \wedge \ldots \wedge \mathcal{E}_{X_n,p-1}(\mathbf{z}) \wedge \mathcal{E}^{co}_{X_n,p+1}(\mathbf{z}) \wedge \mathcal{E}^{co}_{X_n,s}(\mathbf{z}) \right \},$$ where the equations $\mathcal{E}_{X_n,0}(\mathbf{z}), \ldots , \mathcal{E}_{X_n,p-1}(\mathbf{z}), \mathcal{E}^{co}_{X_n,p+1}(\mathbf{z}), \mathcal{E}^{co}_{X_n,s}(\mathbf{z})$ were defined at the end of Section \ref{sec:geom}. In particular, $S$ admits only one critical point $\mathbf{y^0}$ on $\mathcal{U}$, corresponding to the complete hyperbolic structure $\mathbf{z^0}$ on the geometric ideal triangulation $X_n$ (adding $z^0_V$ equal to $z^0_U$). \end{lemma} \begin{proof} First we compute, for every $\mathbf{y} \in \mathcal{U}$, $$ \nabla S(\mathbf{y}) = \begin{pmatrix} \partial_1 S(\mathbf{y})\\ \vdots\\ \partial_p S(\mathbf{y})\\ \partial_U S(\mathbf{y})\\ \partial_W S(\mathbf{y}) \end{pmatrix} = 2 i Q_n \mathbf{y} + \mathcal{W}_n + i \begin{pmatrix} -\mathrm{Log} (1+e^{y_1})\\ \vdots\\ -\mathrm{Log} (1+e^{y_p})\\ 2\mathrm{Log} (1+e^{y_U})\\ \mathrm{Log} (1+e^{y_W}) \end{pmatrix}. 
$$ Then, we define a lower triangular matrix $A=\kbordermatrix{ \mbox{} &y_1 &y_2 &y_3 &\cdots & y_p & \omit\vrule & y_U & y_W \\ y_1 & 1 & & & & & \omit\vrule & & \\ y_2 &-2 & 1 & & &0 & \omit\vrule & & \\ y_3 &1 & -2 & 1 & & & \omit\vrule & & \\ \vdots & & \ddots & \ddots & \ddots & & \omit\vrule & & \\ y_p & & & 1 & -2 & 1 & \omit\vrule &0 &0 \\ \cline{1-1} \cline{1-9} y_U & & & & & 1 & \omit\vrule & 1 & 0 \\ y_W & &0 & & &0 & \omit\vrule & 0 &1 } \in GL_{p+2}(\mathbb{Z})$, and we compute $$ A \cdot \nabla S(\mathbf{y}) = \begin{pmatrix} 2i(y_1+\cdots+y_p-y_U)-2\pi p -i \mathrm{Log} (1+e^{y_1})\\ -2i y_1 + 2 \pi +2 i \mathrm{Log} (1+e^{y_1}) - i \mathrm{Log} (1+e^{y_2}) \\ 2\pi -i \mathrm{Log} (1+e^{y_1}) +2 i \mathrm{Log} (1+e^{y_2}) - 2i y_2 -i \mathrm{Log} (1+e^{y_3})\\ \vdots\\ 2\pi -i \mathrm{Log} (1+e^{y_{k-1}}) +2 i \mathrm{Log} (1+e^{y_k}) - 2i y_k -i \mathrm{Log} (1+e^{y_{k+1}})\\ \vdots\\ 2\pi -i \mathrm{Log} (1+e^{y_{p-2}}) +2 i \mathrm{Log} (1+e^{y_{p-1}}) - 2i y_{p-1} -i \mathrm{Log} (1+e^{y_p})\\ \pi -i \mathrm{Log} (1+e^{y_p}) +2 i \mathrm{Log} (1+e^{y_U})+i y_W\\ \pi + i y_U +i\mathrm{Log}(1+e^{y_W}) \end{pmatrix}. $$ For $1 \leqslant k \leqslant p$, by denoting $y_k=\psi_{T_k}(z_k)$, we have $$ \mathrm{Log}(z_k) = y_k + i \pi, \ \mathrm{Log}(z'_k) = -\mathrm{Log}(1+e^{y_k}), \ \mathrm{Log}(z''_k) = \mathrm{Log}(1+e^{-y_k}),$$ and for $l = {U,W}$, by denoting $y_l=\psi_{T_l}(z_l)$, we have $$ \mathrm{Log}(z_l) = -y_l + i \pi, \ \mathrm{Log}(z'_l) = -\mathrm{Log}(1+e^{-y_l}), \ \mathrm{Log}(z''_l) = \mathrm{Log}(1+e^{y_l}).$$ Hence we compute, for all $\mathbf{z} \in (\mathbb{R}+i\mathbb{R}_{>0})^{p+2}$, $$ A \cdot (\nabla S)(\psi(\mathbf{z})) = i \begin{pmatrix} \mathrm{Log}(z'_1) + 2 \mathrm{Log}(z_1)+\cdots + 2\mathrm{Log}(z_p)+2\mathrm{Log}(z_U)-2i\pi\\ 2\mathrm{Log}(z''_1)+\mathrm{Log}(z'_2)-2i\pi \\ \mathrm{Log}(z'_{1})+2\mathrm{Log}(z''_2)+\mathrm{Log}(z'_{3})-2i\pi \\ \vdots\\ \mathrm{Log}(z'_{k-1})+2\mathrm{Log}(z''_k)+\mathrm{Log}(z'_{k+1})-2i\pi \\ \vdots\\ \mathrm{Log}(z'_{p-2})+2\mathrm{Log}(z''_{p-1})+\mathrm{Log}(z'_{p})-2i\pi \\ \mathrm{Log}(z'_{p}) +2 \mathrm{Log}(z''_{U})-\mathrm{Log}(z_{W}) \\ \mathrm{Log}(z''_{W}) -\mathrm{Log}(z_{U}) \end{pmatrix}. $$ This last vector is zero if and only if one has $$\mathcal{E}_{X_n,0}(\mathbf{z}) \wedge \ldots \wedge \mathcal{E}_{X_n,p-1}(\mathbf{z}) \wedge \mathcal{E}^{co}_{X_n,p+1}(\mathbf{z}) \wedge \mathcal{E}^{co}_{X_n,s}(\mathbf{z}).$$ Since $A$ is invertible, we thus have $$ \mathbf{z} \in (\mathbb{R}+i\mathbb{R}_{>0})^{p+2} \text{ and } \mathcal{E}_{X_n,0}(\mathbf{z}) \wedge \ldots \wedge \mathcal{E}_{X_n,p-1}(\mathbf{z}) \wedge \mathcal{E}^{co}_{X_n,p+1}(\mathbf{z}) \wedge \mathcal{E}^{co}_{X_n,s}(\mathbf{z})$$ $$\Updownarrow$$ $$\psi(\mathbf{z}) \in \mathcal{U} \text{ and } (\nabla S)(\psi(\mathbf{z})) = 0.$$ \end{proof} Let us now consider the multi-contour $$\mathcal{Y}^0= \mathcal{Y}_{\alpha^0} = \prod_{k=1}^p\left (\mathbb{R} - i (\pi - a^0_k)\right ) \times \prod_{l=U,W} \left (\mathbb{R} + i (\pi - a^0_l)\right ),$$ where $\alpha^0 \in \mathcal{A}_{X_n}$ is the complete hyperbolic angle structure corresponding to the complete hyperbolic complex shape structure $\mathbf{z^0}$. Notice that $\mathbf{y^0} \in \mathcal{Y}^0 \subset \mathcal{U}$. 
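As a side remark, Lemma \ref{lem:complex:sym}, which underlies the non-degeneracy of $\mathrm{Hess}(S)$ (Lemma \ref{lem:hess}) and hence the applicability of the saddle point method on $\mathcal{Y}^0$, is easy to test numerically. The following minimal Python sketch (with randomly generated matrices, an illustration only) is not part of any proof.
\begin{verbatim}
# Numerical sanity check of Lemma lem:complex:sym (illustration only):
# for S_1 real symmetric positive definite and S_2 real symmetric,
# the complex symmetric matrix S_1 + i S_2 is invertible.
import numpy as np

rng = np.random.default_rng(0)
m = 6
for _ in range(1000):
    A = rng.standard_normal((m, m))
    S1 = A @ A.T + np.eye(m)          # symmetric positive definite
    B = rng.standard_normal((m, m))
    S2 = (B + B.T) / 2                # symmetric
    # smallest singular value stays away from 0, i.e. S1 + i S2 is invertible
    assert np.linalg.svd(S1 + 1j * S2, compute_uv=False)[-1] > 1e-10
\end{verbatim}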
We will parametrise $\mathbf{y} \in \mathcal{Y}^0$ as $$\mathbf{y}= \begin{pmatrix} y_1 \\ \vdots \\ y_W \end{pmatrix} = \begin{pmatrix} x_1 + i d^0_1 \\ \vdots \\ x_W + i d^0_W \end{pmatrix} = \mathbf{x}+ i \mathbf{d^0}, $$ where $d^0_k = -(\pi - a^0_k) <0$ for $k=1, \ldots, p$ and $d^0_l = \pi - a^0_l>0$ for $l=U,W$. For the scrupulous readers, this means that $\mathbf{d^0}$ is a new notation for $\Gamma(\alpha^0)$, where $\Gamma(\alpha)$ was defined in Section \ref{sec:part:odd}. Notice that $\mathcal{Y}^0 = \mathbb{R}^{p+2} + i \mathbf{d^0} \subset \mathbb{C}^{p+2}$ is an $\mathbb{R}$-affine subspace of $\mathbb{C}^{p+2}$. \subsection{Concavity of $\Re S$ on each contour $\mathcal{Y}_{\alpha}$} \label{sub:ReS:Yalpha} Now we focus on the behaviour of the real part $\Re S$ of the classical potential, on each horizontal contour $\mathcal{Y}_{\alpha}$. \begin{lemma}\label{lem:concave} For any $\alpha \in \mathcal{A}_{X_n}$, the function $\Re S \colon \mathcal{Y}_{\alpha} \to \mathbb{R}$ is strictly concave on $\mathcal{Y}_{\alpha}$. \end{lemma} \begin{proof} Let $\alpha \in \mathcal{A}_{X_n}$. Since $\Re S \colon \mathcal{Y}_{\alpha} \to \mathbb{R}$ is twice continuously differentiable (as a function on $p+2$ real variables), we only need to check that its (real) hessian matrix $\left (\Re S\vert_{\mathcal{Y}_{\alpha}}\right )''$ is negative definite on every point $\mathbf{x}+ i \mathbf{d} \in \mathcal{Y}_{\alpha}$. Now, since this real hessian is equal to the real part of the holomorphic hessian of $S$, it follows from Lemma \ref{lem:hess} that for all $\mathbf{x} \in \mathbb{R}^{p+2}$, this real hessian is: \begin{align*} &\left (\Re S\vert_{\mathcal{Y}_{\alpha}}\right )''(\mathbf{x}+ i \mathbf{d})= \Re(\mathrm{Hess}(S)(\mathbf{x}+ i \mathbf{d}))\\ &= \begin{pmatrix} -\Im\left (\frac{-1}{1+e^{-x_1-i d_1}}\right ) & \ & 0 &0 &0 \\ \ & \ddots & \ & \vdots & \vdots \\ 0 & \ & -\Im\left (\frac{-1}{1+e^{-x_p-i d_p}}\right ) &0 &0 \\ 0 & \cdots & 0 &-\Im\left (\frac{2}{1+e^{-x_U-i d_U}}\right ) &0 \\ 0 & \cdots & 0 &0 &-\Im\left (\frac{1}{1+e^{-x_W-i d_W}}\right ) \end{pmatrix}, \end{align*} which is diagonal with negative coefficients, since $d_1,\ldots,d_p \in (-\pi,0)$ and $d_U,d_W\in (0,\pi)$. In particular $\left (\Re S\vert_{\mathcal{Y}_{\alpha}}\right )''$ is negative definite everywhere, thus $\Re S\vert_{\mathcal{Y}_{\alpha}}$ is strictly concave. \end{proof} \subsection{Properties of $\Re S$ on the complete contour $\mathcal{Y}^0$}\label{sub:ReS:Y0} On the complete contour $\mathcal{Y}^0$, the function $\Re S$ is not only strictly concave but also admits a strict global maximum, at the complete structure $\mathbf{y^0}$. \begin{lemma}\label{lem:maximum} The function $\Re S \colon \mathcal{Y}^0 \to \mathbb{R}$ admits a strict global maximum on $\mathbf{y^0} \in \mathcal{Y}^0$. \end{lemma} \begin{proof} Since the holomorphic gradient of $S\colon \mathcal{U}\to \mathbb{C}$ vanishes on $\mathbf{y^0}$ by Lemma \ref{lem:grad:thurston}, the (real) gradient of $\Re S\vert_{\mathcal{Y}^0}$ (which is the real part of the holomorphic gradient of $S$) then vanishes as well on $\mathbf{y^0}$, thus $\mathbf{y^0}$ is a critical point of $\Re S\vert_{\mathcal{Y}^0}$. Besides, $\Re S\vert_{\mathcal{Y}^0}$ is strictly concave by Lemma \ref{lem:concave}, thus $\mathbf{y^0}$ is a global maximum of $\Re S\vert_{\mathcal{Y}^0}$. 
\end{proof} Before computing the value $\Re S (\mathbf{y^0})$, we establish a useful formula for the potential $S$: \begin{lemma}\label{lem:rewriteS} The function $S\colon \mathcal{U} \to \mathbb{C} $ can be re-written \begin{multline*} S(\mathbf{y}) = i \mathrm{Li}_2\left (-e^{y_1}\right ) + \cdots + i \mathrm{Li}_2\left (-e^{y_p}\right ) + 2 i \mathrm{Li}_2\left (-e^{-y_U}\right ) + i \mathrm{Li}_2\left (-e^{-y_W}\right ) \\ + i \mathbf{y}^T Q_n \mathbf{y} + i y_U^2 + i \frac{y_W^2}{2} + \mathbf{y}^T \mathcal{W}_n + i \frac{\pi^2}{2}. \end{multline*} \end{lemma} \begin{proof} We first recall the well-known formula for the dilogarithm (see Proposition \ref{prop:dilog} (1)): $$ \forall z \in \mathbb{C} \setminus [1,+\infty), \ \mathrm{Li}_2\left (\frac{1}{z}\right ) = - \mathrm{Li}_2(z) - \frac{\pi^2}{6} - \frac{1}{2}\mathrm{Log}(-z)^2. $$ We then apply this formula for $z=-e^{y_l}$ for $l\in\{U,W\}$ to conclude the proof. \end{proof} We can now use this formula to prove that the hyperbolic volume appears at the complete structure $\mathbf{y^0}$, in the following lemma. \begin{lemma}\label{lem:-vol} We have $$ \Re(S)(\mathbf{y^0}) = - \mathrm{Vol}(S^3 \setminus K_n).$$ \end{lemma} \begin{proof} From Lemma \ref{lem:rewriteS}, for all $\mathbf{y} \in \mathcal{U}$ we have \begin{multline*} S(\mathbf{y}) = i \mathrm{Li}_2\left (-e^{y_1}\right ) + \cdots + i \mathrm{Li}_2\left (-e^{y_p}\right ) + 2 i \mathrm{Li}_2\left (-e^{-y_U}\right ) + i \mathrm{Li}_2\left (-e^{-y_W}\right ) \\ + i \mathbf{y}^T Q_n \mathbf{y} + i y_U^2 + i \frac{y_W^2}{2} + \mathbf{y}^T \mathcal{W}_n + i \frac{\pi^2}{2}, \end{multline*} thus \begin{multline*} \Re(S)(\mathbf{y}) = - \Im\left ( \mathrm{Li}_2\left (-e^{y_1}\right )\right ) - \cdots - \Im\left ( \mathrm{Li}_2\left (-e^{y_p}\right )\right ) - 2 \Im\left ( \mathrm{Li}_2\left (-e^{-y_U}\right )\right ) - \Im\left ( \mathrm{Li}_2\left (-e^{-y_W}\right )\right ) \\ -\Im\left ( \mathbf{y}^T Q_n \mathbf{y} + y_U^2 + \frac{y_W^2}{2}\right ) + \Re\left ( \mathbf{y}^T \mathcal{W}_n\right ). \end{multline*} Recall that for $z \in \mathbb{R} + i \mathbb{R}_{>0}$, the ideal hyperbolic tetrahedron of complex shape $z$ has hyperbolic volume $D(z) =\Im(\mathrm{Li}_2(z)) +\arg(1-z) \log|z|$ (where $D$ is the Bloch-Wigner function). Note that for $z=z_k = -e^{y_k}$ (with $1 \leqslant k \leqslant p$), we have $\arg(1-z) \log|z| = - c_k x_k$ and for $z=z_l = -e^{-y_l}$ (with $l\in\{U,W\}$), we have $\arg(1-z) \log|z| = b_l x_l$. Thus we have for $\mathbf{y} \in \mathcal{U}$: \begin{multline*} \Re(S)(\mathbf{y}) = - D(z_1) - \cdots - D(z_p) - 2 D(z_U) - D(z_W) -c_1 x_1 - \cdots -c_p x_p + 2b_U x_U + b_W x_W\\ -2 \mathbf{x}^T Q_n \mathbf{d} - 2 d_U x_U - d_W x_W + \mathbf{x}^T \mathcal{W}_n. \end{multline*} Recall that $\mathbf{z^0}$ is the complex shape structure corresponding to the complete hyperbolic structure on the ideal triangulation $X_n$ where $z^0_U$ is the complex shape of both tetrahedra $U$ and $V$ (because of the completeness equation $z_U=z_V$). Thus \begin{align*} - \mathrm{Vol}(S^3 \setminus K_n) &= - D(z_1^0) - \cdots - D(z_p^0) - D(z_U^0) - D(z_V^0) - D(z_W^0) \\ &= - D(z_1^0) - \cdots - D(z_p^0) - 2 D(z_U^0) - D(z_W^0). \end{align*} Hence we only need to prove that $(\mathbf{x^0})^T \cdot \mathcal{T} = 0$, where $$ \mathcal{T} := \begin{pmatrix} -c_1^0 \\ \vdots \\ -c_p^0 \\ 2 b_U^0 \\ b_W^0 \end{pmatrix} + \mathcal{W}_n -2 Q_n \mathbf{d^0} + \begin{pmatrix} 0 \\ \vdots \\ 0 \\ -2 d_U^0 \\ -d_W^0 \end{pmatrix}. 
$$ Since $d_l^0 = \pi - a_l^0 = b_l^0 +c_l^0$ for $l=U,W$, we have $\mathcal{T} = - \begin{pmatrix} c_1^0 \\ \vdots \\ c_p^0 \\ 2 c_U^0 \\ c_W^0 \end{pmatrix} + \mathcal{W}_n -2 Q_n \mathbf{d^0}.$ It then follows from the definitions of $\mathcal{W}, \mathcal{W}_n, \widetilde{\Gamma}, \widetilde{C}, \mathbf{d^0}$ and their connections established in Sections \ref{sec:part:odd} and \ref{sec:part:H:odd} that $\mathcal{T}=0$. More precisely, define for instance $$\tau^0 := \alpha^0 \oplus (0,0,\pi) \in \mathcal{S}_{Y_n \setminus Z} \times \overline{\mathcal{S}_Z},$$ which satisfies the assumptions on $\tau$ in Theorem \ref{thm:part:func:Htrig:odd} (as can be checked by computing the weights listed at the beginning of Section \ref{sec:part:H:odd}). Then recall from the end of the proof of Theorem \ref{thm:part:func:Htrig:odd} and the fact that $(a^0_U,b^0_U,c^0_U)=(a^0_V,b^0_V,c^0_V)$ that $$\mathcal{W}_n = \mathcal{W}(\tau^0):= 2 Q_n \Gamma(\tau^0)+C(\tau^0)+(0,\ldots,0,c^{\tau^0}_V,0)^T = 2 Q_n \mathbf{d^0} + (c_1^0, \ldots, c_p^0,2 c_U^0, c_W^0)^T, $$ and thus $\mathcal{T}=0$. Readers who skipped Section \ref{sec:part:H:odd} can instead use the identity $\widetilde{\mathcal{W}}(\alpha)= 2 \widetilde{Q}_n \widetilde{\Gamma}(\alpha)+\widetilde{C}(\alpha)$ at the end of Section \ref{sec:part:odd} to arrive at the same conclusion. \end{proof} \subsection{Asymptotics of integrals on $\mathcal{Y}^0$} \label{sub:asym:Y0} For the remainder of the section, let $r_0>0$ and let $\gamma=\{ \mathbf{y}\in \mathcal{Y}^0 \ \vert \ \parallel \mathbf{y}-\mathbf{y^0} \parallel \ \leqslant r_0 \}$ be a $(p+2)$-dimensional ball inside $\mathcal{Y}^0$ containing $\mathbf{y^0}$. We start with asymptotics of an integral on this compact contour $\gamma$. \begin{proposition}\label{prop:compact:contour:S:SPM} There exists a constant $\rho \in \mathbb{C}^*$ such that, as $\lambda \to \infty$, $$ \int_{\gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} = \rho \lambda^{-\frac{p+2}{2}} \exp\left (\lambda S(\mathbf{y^0})\right ) \left ( 1 + o_{\lambda \to \infty}\left (1\right ) \right ). $$ In particular, $$\dfrac{1}{\lambda} \log \left \vert \int_{\gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} \right \vert \underset{\lambda \to \infty}{\longrightarrow} \Re S(\mathbf{y^0}) = - \mathrm{Vol}(S^3 \setminus K_n). $$ \end{proposition} \begin{proof} We apply the saddle point method as in Theorem \ref{thm:SPM}, with $m=p+2$, $\gamma^m=\gamma$, $z=\mathbf{y}$, $z^0=\mathbf{y^0}$, $D=\mathcal{U}$, $f=1$ and $S$ as defined at the beginning of this section. Let us check the technical requirements: \begin{itemize} \item $\mathbf{y^0}$ is an interior point of $\gamma$ by construction. \item $\max_\gamma \Re S$ is attained only at $\mathbf{y^0}$ by Lemma \ref{lem:maximum}. \item $\nabla S (\mathbf{y^0})=0$ by Lemma \ref{lem:grad:thurston}. \item $\det \mathrm{Hess}(S)(\mathbf{y^0}) \neq 0$ by Lemma \ref{lem:hess}. \end{itemize} Thus the first statement follows from Theorem \ref{thm:SPM}, with $\rho := \dfrac{(2\pi)^{\frac{p+2}{2}}}{\sqrt{\det \mathrm{Hess}(S)(\mathbf{y^0})}} \in \mathbb{C}^*$. The second statement then follows from a direct computation and Lemma \ref{lem:-vol}. \end{proof} Now we compute an upper bound on the remainder term, i.e. the integral on $\mathcal{Y}^0 \setminus \gamma$, the whole unbounded contour minus the compact ball.
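Before doing so, let us illustrate the saddle point estimate of Proposition \ref{prop:compact:contour:S:SPM} numerically. The Python sketch below uses a simplified one-dimensional quadratic potential (an assumption made purely for illustration, not the actual potential $S$), whose real part is strictly concave on the real contour and attains its maximum at the interior critical point $y^0$; one observes both the predicted prefactor and the convergence of $\frac{1}{\lambda}\log\vert\int\vert$ to $\Re S(y^0)$.
\begin{verbatim}
# Toy illustration of the saddle point method (not the actual potential S).
import numpy as np

alpha = 1.0 + 0.7j                    # Re(alpha) > 0: Re S strictly concave
y0 = 0.3                              # critical point, interior to the contour
S = lambda y: -0.5 * alpha * (y - y0) ** 2     # S(y0) = 0

y = np.linspace(-1.0, 1.0, 200001)    # compact contour gamma containing y0
for lam in [10.0, 100.0, 1000.0]:
    I = np.trapz(np.exp(lam * S(y)), y)
    predicted = np.sqrt(2 * np.pi / (lam * alpha))   # rho * lam^{-1/2}
    print(lam, abs(I / predicted), np.log(abs(I)) / lam)  # -> 1 and Re S(y0) = 0
\end{verbatim}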
\begin{lemma}\label{lem:unbounded:contour} There exist constants $A,B>0$ such that for all $\lambda > A$, $$ \left \vert \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} \right \vert \leqslant B e^{\lambda M}, $$ where $M := \max_{\partial \gamma} \Re S$. \end{lemma} \begin{proof} First we apply a change of variables to $(p+2)$-dimensional spherical coordinates centered at $\mathbf{y^0}$, $$\mathbf{y} \in \mathcal{Y}^0 \setminus \gamma \Longleftrightarrow r \overrightarrow{e} \in (r_0,\infty) \times \mathbb{S}^{p+1},$$ where we write $S(r \overrightarrow{e})$ for $S(\mathbf{y^0}+r \overrightarrow{e})$ to lighten notation; this yields: $$ \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} = \int_{\mathbb{S}^{p+1}} d\mathrm{vol}_{\mathbb{S}^{p+1}} \int_{r_0}^\infty r^{p+1} e^{\lambda S(r \overrightarrow{e})} dr $$ for all $\lambda>0$. Consequently, we have for all $\lambda>0$: $$ \left \vert \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} \right \vert \leqslant \mathrm{vol}(\mathbb{S}^{p+1}) \sup_{\overrightarrow{e} \in \mathbb{S}^{p+1}} \int_{r_0}^\infty r^{p+1} e^{\lambda \Re(S)(r \overrightarrow{e})} dr.$$ Let us fix $\overrightarrow{e} \in \mathbb{S}^{p+1}$ and denote by $f=f_{\overrightarrow{e}}:= (r \mapsto \Re(S)(r \overrightarrow{e}))$ the restriction of $\Re(S)$ to the ray $(r_0,\infty)\overrightarrow{e}$. Let $\lambda>0$. Let us find an upper bound on $ \int_{r_0}^\infty r^{p+1} e^{\lambda f(r)} dr$. Since $\Re(S)$ is strictly concave by Lemma \ref{lem:concave} and $f$ is its restriction to a convex set, $f$ is strictly concave as well on $(r_0,+\infty)$ (and even on $[0,+\infty)$). Now let us consider the slope function $N\colon [r_0,+\infty) \to \mathbb{R}$ defined by $N(r) := \dfrac{f(r)- f(r_0)}{r-r_0}$ for $r>r_0$ and $N(r_0):=f'(r_0)$. The function $N$ is $C^1$ and satisfies $N'(r) = \frac{f'(r)-N(r)}{r-r_0}$ for $r>r_0$. Now, since $f$ is strictly concave, we have $f'(r)< N(r)$ for any $r \in (r_0,\infty)$, thus $N$ is decreasing on this same interval. Hence $$ \int_{r_0}^\infty r^{p+1} e^{\lambda f(r)} dr = e^{\lambda f(r_0)} \int_{r_0}^\infty r^{p+1} e^{\lambda N(r)(r-r_0)} dr \leqslant e^{\lambda f(r_0)} \int_{r_0}^\infty r^{p+1} e^{\lambda N(r_0)(r-r_0)} dr. $$ Note that $N(r_0)=f'(r_0)<0$ by Lemmas \ref{lem:concave} and \ref{lem:maximum}. Using integration by parts, we can prove by induction that $$ \int_{r_0}^\infty r^{p+1} e^{\lambda N(r_0)(r-r_0)} dr = \frac{1}{(\lambda N(r_0))^{p+2}} \sum_{k=0}^{p+1} (-1)^{p+2-k} \frac{(p+1)!}{k!}(\lambda N(r_0))^k r_0^k. $$ Moreover, $N(r_0)=f'(r_0)= \langle (\nabla \Re(S))(r_0 \overrightarrow{e}) ; \overrightarrow{e} \rangle$, and since $S$ is holomorphic, we conclude that $( \overrightarrow{e} \mapsto N(r_0) = f_{\overrightarrow{e}}'(r_0))$ is a continuous map from $\mathbb{S}^{p+1}$ to $\mathbb{R}_{<0}$. Hence there exist $m_1, m_2 >0$ such that $0 < m_1 \leqslant |N(r_0)| \leqslant m_2$ for all vectors $\overrightarrow{e} \in \mathbb{S}^{p+1}$. We thus conclude that for all $\lambda>\frac{1}{m_1 r_0}$, we have the (somewhat suboptimal) upper bound: \begin{align*} \int_{r_0}^\infty r^{p+1} e^{\lambda f(r)} dr & \leqslant e^{\lambda f(r_0)} \frac{1}{(\lambda N(r_0))^{p+2}} \sum_{k=0}^{p+1} (-1)^{p+2-k} \frac{(p+1)!}{k!}(\lambda N(r_0))^k r_0^k \\ & \leqslant e^{\lambda f(r_0)} \left \vert \frac{1}{(\lambda N(r_0))^{p+2}} \sum_{k=0}^{p+1} (-1)^{p+2-k} \frac{(p+1)!}{k!}(\lambda N(r_0))^k r_0^k \right \vert \\ & \leqslant e^{\lambda f(r_0)} \frac{1}{\vert\lambda N(r_0)\vert^{p+2}} \sum_{k=0}^{p+1} (p+1)! \ \vert\lambda N(r_0) r_0\vert^k \\ & \leqslant e^{\lambda f(r_0)} \frac{(p+2)! \ \vert\lambda N(r_0) r_0\vert^{p+2}}{\vert\lambda N(r_0)\vert^{p+2}} = (p+2)! \ r_0^{p+2} e^{\lambda f(r_0)}. \\ \end{align*} Now, since $\int_{r_0}^\infty r^{p+1} e^{\lambda f_{\overrightarrow{e}}(r)} dr \leqslant C e^{\lambda f_{\overrightarrow{e}}(r_0)}$ for all $\lambda>\frac{1}{m_1 r_0}$, for all $\overrightarrow{e} \in \mathbb{S}^{p+1}$ and with the constant $C>0$ independent of $\lambda$ and $\overrightarrow{e}$, we can finally conclude that: $$ \left \vert \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} \right \vert \leqslant \mathrm{vol}(\mathbb{S}^{p+1}) \sup_{\overrightarrow{e} \in \mathbb{S}^{p+1}} \int_{r_0}^\infty r^{p+1} e^{\lambda \Re(S)(r \overrightarrow{e})} dr \leqslant C \mathrm{vol}(\mathbb{S}^{p+1}) e^{\lambda M} $$ for all $\lambda>\frac{1}{m_1 r_0}$, where $M= \max_{\partial \gamma} \Re S$. This concludes the proof, by putting $A:= \frac{1}{m_1 r_0}$ and $B:=C \mathrm{vol}(\mathbb{S}^{p+1})$. \end{proof} Finally we obtain the asymptotics for the integral on the whole contour $\mathcal{Y}^0$: \begin{proposition}\label{prop:all:contour:S} For the same constant $\rho \in \mathbb{C}^*$ as in Proposition \ref{prop:compact:contour:S:SPM}, we have, as $\lambda \to \infty$, $$ \int_{\mathcal{Y}^0} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} = \rho \lambda^{-\frac{p+2}{2}} \exp\left (\lambda S(\mathbf{y^0})\right ) \left ( 1 + o_{\lambda \to \infty}\left (1\right ) \right ). $$ In particular, $$\dfrac{1}{\lambda} \log \left \vert \int_{\mathcal{Y}^0} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} \right \vert \underset{\lambda \to \infty}{\longrightarrow} \Re S(\mathbf{y^0}) = - \mathrm{Vol}(S^3 \setminus K_n). $$ \end{proposition} \begin{proof} As for Proposition \ref{prop:compact:contour:S:SPM}, the second statement immediately follows from the first one. Let us prove the first statement. From Lemma \ref{lem:unbounded:contour}, for all $\lambda>A$, we have $\left \vert \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} \right \vert \leqslant B e^{\lambda M}$. Then, since $M < \Re(S)(\mathbf{y^0})$ by Lemmas \ref{lem:concave} and \ref{lem:maximum}, we have $$ \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} = o_{\lambda \to \infty}\left ( \lambda^{-\frac{p+2}{2}} \exp\left (\lambda S(\mathbf{y^0})\right ) \right ) .$$ The first statement then follows from Proposition \ref{prop:compact:contour:S:SPM} and the equality $$\int_{\mathcal{Y}^0} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} = \int_{\gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})} + \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\lambda S(\mathbf{y})}.$$ \end{proof} \subsection{Extending the asymptotics to the quantum dilogarithm}\label{sub:asym:PhiB} Let us now introduce some new notation: \begin{itemize} \item We let $R$ denote any positive number in $(0,\pi)$, for example $\pi/2$. Its exact value will not be relevant. \item We denote $I_R^+ := (R,\infty)$ and $I^-_R := (-\infty,-R)$, we let $\Lambda_R$ denote the closed upper half-circle of radius $R$ in the complex plane, and we set $\Omega_R := I_R^- \cup \Lambda_R \cup I^+_R$. Remark that we can replace the contour $\mathbb{R} + i 0^+$ with $\Omega_R$ in the definition of $\Phi_\mathsf{b}$, by the Cauchy theorem. \item For $\delta>0$, we define the product of closed ``horizontal bands'' in $\mathbb{C}$ $$\mathcal{U}_{\delta}:= \prod_{k=1}^p\left (\mathbb{R} + i [-\pi+\delta,-\delta] \right ) \times \prod_{l=U,W} \left (\mathbb{R} + i [\delta,\pi-\delta]\right ),$$ a closed subset of $\mathcal{U}$.
\item For $\mathsf{b}>0$, we define a new potential function $S_{\mathsf{b}}\colon \mathcal{U} \to \mathbb{C}$, a holomorphic function in $p+2$ complex variables, by: $$S_{\mathsf{b}}(\mathbf{y}) = i \mathbf{y}^T Q_n \mathbf{y} + \mathbf{y}^T \mathcal{W}_n + 2 \pi \mathsf{b}^2 \ \mathrm{Log}\left ( \dfrac{ \Phi_\mathsf{b}\left ( \frac{y_U}{2 \pi \mathsf{b}} \right )^2 \Phi_\mathsf{b}\left ( \frac{y_W}{2 \pi \mathsf{b}} \right ) }{ \Phi_\mathsf{b}\left (\frac{y_1}{2 \pi \mathsf{b}}\right ) \cdots \Phi_\mathsf{b}\left (\frac{y_p}{2 \pi \mathsf{b}}\right ) } \right ), $$ where $Q_n$ and $\mathcal{W}_n$ are as in Theorem \ref{thm:part:func}. \end{itemize} The following lemma establishes a ``parity property'' for the difference between classical and quantum dilogarithms on the horizontal band $\mathbb{R} + i (0,\pi)$. \begin{lemma}\label{lem:parity} For all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R} + i (0,\pi)$, $$ \Re\left ( \mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{-\overline{y}}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^{-\overline{y}}) \right ) \right ) = \Re\left ( \mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{y}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^y) \right ) \right ) .$$ \end{lemma} \begin{proof} Let $\mathsf{b} \in (0,1)$ and $y \in \mathbb{R} + i(0,\pi)$. From the fact that $\mathrm{Li}_2$ is real-analytic and Proposition \ref{prop:dilog} (1) applied to $z=-e^y$, we have \begin{align*} \overline{\exp\left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^{-\overline{y}}) \right )} &= \exp\left ( \frac{i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^{-y}) \right ) \\ &= \exp\left ( \frac{i}{2 \pi \mathsf{b}^2}\left ( -\mathrm{Li}_2(-e^y) - \frac{\pi^2}{6} - \frac{y^2}{2} \right ) \right ) \\ &= \exp\left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^{y}) \right ) \exp\left ( \frac{-i \pi}{12 \mathsf{b}^2} \right ) \exp\left ( \frac{-i y^2}{4 \pi \mathsf{b}^2} \right ). \end{align*} Moreover, from Proposition \ref{prop:quant:dilog} (1) and (2), we have $$ \overline{\Phi_\mathsf{b}\left ( \frac{-\overline{y}}{2 \pi \mathsf{b}} \right )} = \dfrac{1}{\Phi_\mathsf{b}\left ( \frac{-y}{2 \pi \mathsf{b}} \right )} = \Phi_\mathsf{b}\left ( \frac{y}{2 \pi \mathsf{b}} \right ) \exp \left ( -i\frac{\pi}{12}(\mathsf{b}^2 + \mathsf{b}^{-2})\right ) \exp\left (-i \pi \left (\frac{y}{2 \pi \mathsf{b}}\right )^2\right ). $$ Therefore $$ \overline{ \mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{-\overline{y}}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^{-\overline{y}}) \right ) } = \mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{y}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^y) \right ) - \frac{i \pi}{12} \mathsf{b}^2,$$ and the statement follows by taking real parts. \end{proof} As a consequence, we can bound uniformly the difference between classical and quantum dilogarithms on compact horizontal bands above the horizontal axis.
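Before stating this bound, we record a quick numerical check (an illustration only, not used in the proofs) of the dilogarithm inversion formula of Proposition \ref{prop:dilog} (1), in the form used in the proofs of Lemmas \ref{lem:rewriteS} and \ref{lem:parity}, namely $\mathrm{Li}_2(-e^{-y}) = -\mathrm{Li}_2(-e^{y}) - \frac{\pi^2}{6} - \frac{y^2}{2}$ on the band $\mathbb{R}+i(0,\pi)$.
\begin{verbatim}
# Numerical check of the dilogarithm inversion formula (illustration only).
import mpmath as mp

mp.mp.dps = 30
for y in [mp.mpc(-1.3, 0.4), mp.mpc(0.7, 2.5), mp.mpc(2.0, 1.0)]:
    lhs = mp.polylog(2, -mp.exp(-y))
    rhs = -mp.polylog(2, -mp.exp(y)) - mp.pi**2 / 6 - y**2 / 2
    print(abs(lhs - rhs))   # vanishes up to numerical precision
\end{verbatim}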
\begin{lemma}\label{lem:unif:bound} For all $\delta>0$, there exists a constant $B_{\delta}>0$ such that for all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R} + i [\delta,\pi-\delta]$, $$ \left \vert \Re\left ( \mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{y}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^y) \right ) \right ) \right \vert \leqslant B_{\delta} \mathsf{b}^2 .$$ Moreover, $B_{\delta}$ is of the form $B_\delta = C/\delta + C'$ with $C,C'>0$. \end{lemma} The proof of Lemma \ref{lem:unif:bound} is quite lengthy, but relies on relatively classical calculus arguments. The key points are the fact that $\Im(y)$ is uniformly upper bounded by a quantity \textit{strictly smaller} than $\pi$, and that we can restrict ourselves to $y \in (-\infty,0] + i [\delta,\pi-\delta]$ (thanks to Lemma \ref{lem:parity}), which implies that $\Re(y)$ is uniformly upper bounded by $0$. The necessity of this last remark stems from the fact that the state variable $y$ must be integrated on a contour with \textit{unbounded real part} in the definition of the Teichm\"uller TQFT, whereas the contour is usually bounded when studying the volume conjecture for the colored Jones polynomials. Compare with \cite[Lemma 3]{AH}. The parity trick of Lemma \ref{lem:parity} and its application to an unbounded contour are the main technical novelties compared with the methods of \cite{AH}. \begin{proof} Let $\delta >0$. In the following proof, $y = x+ i d$ will denote a generic element in $(-\infty,0] + i [\delta,\pi-\delta]$, with $x \in (-\infty,0], d \in [\delta,\pi-\delta]$. We remark that we only need to prove the statement for $y \in (-\infty,0] + i [\delta,\pi-\delta]$, thanks to Lemma \ref{lem:parity}. We first compute, for any $\mathsf{b} \in (0,1)$ and $y \in \mathbb{R} + i [\delta,\pi-\delta]$: \begin{align*} \mathrm{Log} \ \Phi_\mathsf{b} \left ( \frac{y}{2 \pi \mathsf{b}} \right ) &= \int_{w \in \Omega_{R \mathsf{b}}} \dfrac{\exp\left (-i \frac{y w}{\pi \mathsf{b}}\right ) dw}{4 w\sinh(\mathsf{b} w) \sinh({\mathsf{b}}^{-1}w)} \\ &= \int_{v \in \Omega_{R}} \dfrac{\exp\left (-i \frac{y v}{\pi}\right ) dv}{4 v\sinh(\mathsf{b}^2 v) \sinh(v)} \\ &= \dfrac{1}{\mathsf{b}^2} \int_{v \in \Omega_{R}} \dfrac{\exp\left (-i \frac{y v}{\pi}\right )}{4 v^2 \sinh(v)} \dfrac{(v \mathsf{b}^2)}{\sinh(v \mathsf{b}^2) } dv, \end{align*} where the first equality comes from the definition of $\Phi_\mathsf{b}$ (choosing the integration contour $\Omega_{R \mathsf{b}}$), the second one comes from the change of variables $ v=\frac{w}{\mathsf{b}}$ and the last one is a simple re-writing. Next, we remark that there exists a constant $\sigma_R > 0$ such that $\vert (\frac{v}{\sinh(v)})'' \vert \leq \sigma_R$ for all $v \in \mathbb{R} \cup D_R$, where $D_R$ is the upper half disk of radius $R$. Indeed, note first that on $\mathbb{R} \cup D_R$ the function $\sinh$ vanishes only at $v=0$ (since $R<\pi$), and that the singularity of $\frac{v}{\sinh(v)}$ there is removable. Then a quick computation yields $\left (\frac{v}{\sinh(v)}\right )'' = \dfrac{v(1+\cosh(v)^2)-2 \sinh(v)\cosh(v)}{\sinh(v)^3}$, which is well-defined and continuous on $(\mathbb{R} \cup D_R)\setminus\{0\}$, has a limit of $-1/3$ at $v=0$ and tends to $0$ as $v \to \pm \infty$ in $\mathbb{R}$. The boundedness on $\mathbb{R} \cup D_R$ follows.
Now, it follows from Taylor's theorem that for every $\mathsf{b} \in (0,1)$ and every $v \in \Omega_R$, $$ \dfrac{(v \mathsf{b}^2)}{\sinh(v \mathsf{b}^2)} = 1 + (v \mathsf{b}^2)^2 \epsilon(v\mathsf{b}^2),$$ where $\epsilon(v\mathsf{b}^2) := \int_0^1 (1-t)\left (\frac{z}{\sinh(z)}\right )''(v \mathsf{b}^2 t) \ dt$. It then follows from the previous paragraph that $\vert \epsilon(v\mathsf{b}^2) \vert \leqslant \sigma_R$ for every $\mathsf{b} \in (0,1)$ and every $v \in \Omega_R$. Recall from Proposition \ref{prop:dilog} (2) that for all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R} + i [\delta,\pi-\delta]$, $$\dfrac{1}{\mathsf{b}^2} \int_{v \in \Omega_{R}} \dfrac{\exp\left (-i \frac{y v}{\pi}\right )}{4 v^2 \sinh(v)} dv = \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^y). $$ Therefore we can write for all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R} + i [\delta,\pi-\delta]$: \begin{align*} \mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{y}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^y) \right ) &= \dfrac{1}{\mathsf{b}^2} \int_{v \in \Omega_{R}} \dfrac{\exp\left (-i \frac{y v}{\pi}\right )}{4 v^2 \sinh(v)} \left (\dfrac{(v \mathsf{b}^2)}{\sinh(v \mathsf{b}^2) }-1\right ) dv \\ &= \dfrac{1}{\mathsf{b}^2} \int_{v \in \Omega_{R}} \dfrac{\exp\left (-i \frac{y v}{\pi}\right )}{4 v^2 \sinh(v)} (v \mathsf{b}^2)^2 \epsilon(v\mathsf{b}^2) dv \\ &= \mathsf{b}^2 \int_{v \in \Omega_{R}} \epsilon(v\mathsf{b}^2) \dfrac{\exp\left (-i \frac{y v}{\pi}\right )}{4 \sinh(v)} dv. \end{align*} Now it suffices to prove that the quantity $$\Re\left ( \int_{v \in \Omega_{R}} \epsilon(v\mathsf{b}^2) \dfrac{\exp\left (-i \frac{y v}{\pi}\right )}{4 \sinh(v)} dv\right )$$ is uniformly bounded on $y \in (-\infty,0] + i [\delta,\pi-\delta], \mathsf{b} \in (0,1)$. We will split this integral into three parts and prove that each part is uniformly bounded in this way. Firstly, on the contour $I^+_R$, we have for all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R} + i [\delta,\pi-\delta]$: \begin{align*} \left \vert \Re\left ( \int_{v \in I^+_{R}} \epsilon(v\mathsf{b}^2) \dfrac{\exp\left (-i \frac{y v}{\pi}\right )} {4 \sinh(v)} dv \right ) \right \vert &\leqslant \left \vert \int_{v \in I^+_{R}} \epsilon(v\mathsf{b}^2) \dfrac{\exp\left (-i \frac{y v}{\pi}\right )} {4 \sinh(v)} dv \right \vert \\ &\leqslant \int_{R}^\infty \vert \epsilon(v\mathsf{b}^2) \vert \dfrac{\left \vert \exp\left (-i \frac{y v}{\pi}\right ) \right \vert}{4 \sinh(v)} dv \\ &\leqslant \frac{\sigma_R}{4} \int_{R}^\infty \dfrac{\exp\left ( \frac{\Im(y) v}{\pi} \right )}{\sinh(v)} dv \\ &\leqslant \frac{\sigma_R}{4} \int_{R}^\infty \dfrac{\exp\left ( \frac{(\pi-\delta) v}{\pi} \right )}{ \frac{1-e^{-2R}}{2} e^v } dv \\ &= \dfrac{\pi \sigma_R e^{-\frac{\delta R}{\pi}}}{2 \delta (1-e^{-2R})}, \end{align*} where in the last inequality we used the fact that $\frac{1-e^{-2R}}{2} e^v \leqslant \sinh(v)$ for all $v\geqslant R$. 
Secondly, on the contour $I^-_R$, we have similarly for all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R} + i [\delta,\pi-\delta]$: \begin{align*} \left \vert \Re\left ( \int_{v \in I^-_{R}} \epsilon(v\mathsf{b}^2) \dfrac{\exp\left (-i \frac{y v}{\pi}\right )} {4 \sinh(v)} dv \right ) \right \vert &\leqslant \left \vert \int_{v \in I^-_{R}} \epsilon(v\mathsf{b}^2) \dfrac{\exp\left (-i \frac{y v}{\pi}\right )} {4 \sinh(v)} dv \right \vert \\ &\leqslant \int_{-\infty}^{-R} \vert \epsilon(v\mathsf{b}^2) \vert \dfrac{\left \vert \exp\left (-i \frac{y v}{\pi}\right ) \right \vert}{4 \vert \sinh(v) \vert} dv \\ &= \int_{R}^{\infty} \vert \epsilon(-v\mathsf{b}^2) \vert \dfrac{\left \vert \exp\left (i \frac{y v}{\pi}\right ) \right \vert}{4 \sinh(v) } dv \\ &\leqslant \frac{\sigma_R}{4} \int_{R}^\infty \dfrac{\exp\left ( \frac{- \Im(y) v}{\pi} \right )}{\sinh(v)} dv \\ &\leqslant \frac{\sigma_R}{4} \int_{R}^\infty \dfrac{1}{ \frac{1-e^{-2R}}{2} e^v } dv \\ &= \dfrac{\sigma_R e^{-R}}{2 (1-e^{-2R})} = \dfrac{\sigma_R}{4 \sinh(R)}. \end{align*} Finally, to obtain the bound on the contour $\Lambda_R$, we will need the assumption that $y \in (-\infty,0] + i [\delta,\pi-\delta]$, since the upper bound will depend on $\Re(y)$. Moreover, we will use the fact that since $\vert \sinh \vert$ is a continuous nonzero function on the contour $\Lambda_R$, it is lower bounded by a constant $s_R>0$ on this contour. We then obtain, for all $\mathsf{b} \in (0,1)$ and all $y \in (-\infty,0] + i [\delta,\pi-\delta]$: \begin{align*} \left \vert \Re\left ( \int_{v \in \Lambda_R} \epsilon(v\mathsf{b}^2) \dfrac{\exp\left (-i \frac{y v}{\pi}\right )} {4 \sinh(v)} dv \right ) \right \vert &\leqslant \left \vert \int_{v \in \Lambda_R} \epsilon(v\mathsf{b}^2) \dfrac{\exp\left (-i \frac{y v}{\pi}\right )} {4 \sinh(v)} dv \right \vert \\ &\leqslant \int_{v \in \Lambda_R} \vert \epsilon(v\mathsf{b}^2) \vert \dfrac{\left \vert \exp\left (-i \frac{y v}{\pi}\right ) \right \vert}{4 \vert \sinh(v) \vert} dv \\ &\leqslant \frac{\sigma_R}{4 s_R} \int_{v \in \Lambda_R} \exp\left ( \Re\left ( -i \frac{y v}{\pi} \right ) \right ) dv \\ &= \frac{\sigma_R}{4 s_R} \int_{v \in \Lambda_R} \exp\left ( \frac{\Re(y) \Im(v) + \Im(y) \Re(v)}{\pi} \right ) dv \\ &\leqslant \frac{\sigma_R}{4 s_R} (\pi R) \exp\left ( \dfrac{0 + (\pi-\delta) R}{\pi} \right ) \leqslant \dfrac{\sigma_R \pi R e^R}{4 s_R}, \end{align*} where the fourth inequality is due to the fact that $\Re(y) \leqslant 0$, $\Im(v) \geqslant 0$, $ 0 < \Im(y) \leqslant \pi - \delta$ and $ \Re(v) \leqslant R$. The lemma follows, by taking for example the constant $$ B_\delta:= \dfrac{\pi \sigma_R e^{-\frac{\delta R}{\pi}}}{2 \delta (1-e^{-2R})} + \dfrac{\sigma_R}{4 \sinh(R)} + \dfrac{\sigma_R \pi R e^R}{4 s_R} .$$ \end{proof} The following lemma is simply a variant of Lemma \ref{lem:unif:bound} for compact horizontal bands with negative imaginary part.
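Before stating it, we record a rough numerical sanity check (an illustration only, not used in the proofs) of Lemma \ref{lem:unif:bound} itself: the quantum dilogarithm is evaluated from the contour-integral formula recalled at the beginning of the proof above, and compared with the classical dilogarithm term. In the sketch below the half-circle of $\Omega_{R\mathsf{b}}$ is replaced by a polyline at height $\pi\mathsf{b}/2$ (strictly between $0$ and the first poles at $\pm i\pi\mathsf{b}$), an implementation choice of ours which is legitimate by the Cauchy theorem.
\begin{verbatim}
# Rough numerical check of Lemma lem:unif:bound (illustration only).
import mpmath as mp

mp.mp.dps = 25

def log_phi_b(z, b):
    # Log Phi_b(z) = int exp(-2 i z w) / (4 w sinh(b w) sinh(w/b)) dw,
    # integrated here over a polyline passing above 0 at height pi*b/2.
    f = lambda w: mp.exp(-2j * z * w) / (4 * w * mp.sinh(b * w) * mp.sinh(w / b))
    h = mp.pi * b / 2
    return mp.quad(f, [-mp.inf, -1, -1 + 1j * h, 1 + 1j * h, 1, mp.inf])

y = mp.mpc(-1.5, 1.0)                 # a point of R + i[delta, pi - delta]
for b in [mp.mpf('0.3'), mp.mpf('0.2'), mp.mpf('0.1')]:
    diff = mp.re(log_phi_b(y / (2 * mp.pi * b), b)
                 + 1j / (2 * mp.pi * b**2) * mp.polylog(2, -mp.exp(y)))
    print(b, diff, diff / b**2)       # last column should stay bounded
\end{verbatim}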
\begin{lemma}\label{lem:unif:bound:neg} For all $\delta>0$, there exists a constant $B_{\delta}>0$ (the same as in Lemma \ref{lem:unif:bound}) such that for all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R} - i [\delta,\pi-\delta]$, $$ \left \vert \Re\left ( \mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{y}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^y) \right ) \right ) \right \vert \leqslant B_{\delta} \mathsf{b}^2 .$$ \end{lemma} \begin{proof} The result follows immediately from the fact that $\mathrm{Li}_2 (\overline{\cdot}) = \overline{\mathrm{Li}_2(\cdot)}$, Proposition \ref{prop:quant:dilog} (2) and Lemma \ref{lem:unif:bound}. \end{proof} The following Proposition \ref{prop:all:contour:Sb} will not actually be used in the proof of Theorem \ref{thm:vol:conj}, but fits naturally in the current discussion. \begin{proposition}\label{prop:all:contour:Sb} For some constant $\rho' \in \mathbb{C}^*$, we have, as $\mathsf{b} \to 0^+$, \begin{align*} \int_{\mathcal{Y}^0} d\mathbf{y} e^{\frac{1}{2 \pi \mathsf{b}^2}S_\mathsf{b}(\mathbf{y})} &= \int_{\mathcal{Y}^0} d\mathbf{y} \ e^{\frac { i \mathbf{y}^T Q_n \mathbf{y} + \mathbf{y}^T \mathcal{W}_n } {2 \pi \mathsf{b}^2} } \dfrac{ \Phi_\mathsf{b}\left ( \frac{y_U}{2 \pi \mathsf{b}} \right )^2 \Phi_\mathsf{b}\left ( \frac{y_W}{2 \pi \mathsf{b}} \right ) }{ \Phi_\mathsf{b}\left (\frac{y_1}{2 \pi \mathsf{b}}\right ) \cdots \Phi_\mathsf{b}\left (\frac{y_p}{2 \pi \mathsf{b}}\right ) } \\ &= e^{\frac{1}{2 \pi \mathsf{b}^2} S(\mathbf{y^0})} \left ( \rho' \mathsf{b}^{p+2} \left ( 1 + o_{\mathsf{b} \to 0^+}\left (1\right ) \right ) + \mathcal{O}_{\mathsf{b} \to 0^+}(1) \right ). \end{align*} In particular, $$2 \pi \mathsf{b}^2 \log \left \vert \int_{\mathcal{Y}^0} d\mathbf{y} \ e^{\frac{1}{2 \pi \mathsf{b}^2}S_\mathsf{b}(\mathbf{y})} \right \vert \underset{\mathsf{b} \to 0^+}{\longrightarrow} \Re S(\mathbf{y^0}) = - \mathrm{Vol}(S^3 \setminus K_n). $$ \end{proposition} \begin{proof} The second statement follows from the first one from the fact that the behaviour of $$\left ( \rho' \mathsf{b}^{p+2} \left ( 1 + o_{\mathsf{b} \to 0^+}\left (1\right ) \right ) + \mathcal{O}_{\mathsf{b} \to 0^+}(1) \right )$$ is polynomial in $\mathsf{b}$ as $\mathsf{b} \to 0^+$. To prove the first statement, we will split the integral on $\mathcal{Y}^0$ into two parts, one on the compact contour $\gamma$ from before and the other on the unbounded contour $\mathcal{Y}^0\setminus \gamma$. First we notice that there exists a $\delta>0$ such that for all $\mathbf{y}=(y_1, \ldots,y_p,y_U,y_W)$ in $\mathcal{Y}^0$, $\Im(y_1), \ldots \Im(y_p) \in [-(\pi-\delta),-\delta]$ and $\Im(y_U),\Im(y_W) \in [\delta,\pi-\delta]$. 
From Lemmas \ref{lem:unif:bound} and \ref{lem:unif:bound:neg}, if we denote $(\eta_1, \ldots, \eta_p,\eta_U,\eta_W) := (-1, \ldots,-1,2,1)$, it then follows that: \begin{align*} \left \vert \Re\left ( \frac{1}{2 \pi \mathsf{b}^2}S_\mathsf{b}(\mathbf{y}) - \frac{1}{2 \pi \mathsf{b}^2}S(\mathbf{y}) \right ) \right \vert &= \left \vert \Re\left ( \sum_{j=1}^W \eta_j \left (\mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{y_j}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^{y_j}) \right ) \right ) \right ) \right \vert \\ &\leqslant \sum_{j=1}^W \vert \eta_j \vert \left \vert \Re\left ( \left (\mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{y_j}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^{y_j}) \right ) \right ) \right ) \right \vert \\ & \leqslant (p+3)B_{\delta} \mathsf{b}^2. \end{align*} Let us now focus on the compact contour $\gamma$ and prove that $$ \int_{\gamma} d\mathbf{y} \ e^{\frac{1}{2 \pi \mathsf{b}^2}S_\mathsf{b}(\mathbf{y})} = e^{\frac{1}{2 \pi \mathsf{b}^2} S(\mathbf{y^0})} \left ( \rho' \mathsf{b}^{p+2} \left ( 1 + o_{\mathsf{b} \to 0^+}\left (1\right ) \right ) + \mathcal{O}_{\mathsf{b} \to 0^+}(1) \right ).$$ From Proposition \ref{prop:compact:contour:S:SPM}, by identifying $\lambda = \frac{1}{2\pi \mathsf{b}^2}$ and $\rho' := \rho (2\pi)^{\frac{p+2}{2}}$ it suffices to prove that $$ \int_{\gamma} d\mathbf{y} \ e^{\frac{1}{2 \pi \mathsf{b}^2}S(\mathbf{y})} \left ( e^{\frac{1}{2 \pi \mathsf{b}^2}(S_\mathsf{b}(\mathbf{y})-S(\mathbf{y}))} -1 \right ) = e^{\frac{1}{2 \pi \mathsf{b}^2} S(\mathbf{y^0})} \mathcal{O}_{\mathsf{b} \to 0^+}(1) .$$ This last equality follows from the upper bound $(p+3)B_{\delta} \mathsf{b}^2$ of the previous paragraph, the compactness of $\gamma$, and Lemma \ref{lem:maximum}. Finally, let us prove that on the unbounded contour, we have $$ \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\frac{1}{2 \pi \mathsf{b}^2}S_\mathsf{b}(\mathbf{y})} = e^{\frac{1}{2 \pi \mathsf{b}^2} S(\mathbf{y^0})} \mathcal{O}_{\mathsf{b} \to 0^+}(1).$$ Let $A,B$ be the constants from Lemma \ref{lem:unbounded:contour}. From the proof of Lemma \ref{lem:unbounded:contour}, we have that for all $\mathsf{b} < (2 \pi A)^{-1/2}$: $$ \int_{\mathcal{Y}^0\setminus \gamma} d\mathbf{y} \ e^{\frac{1}{2 \pi \mathsf{b}^2} \Re(S) (\mathbf{y})} \leqslant B e^{\frac{1}{2 \pi \mathsf{b}^2} M}.$$ Moreover, for all $\mathsf{b} \in (0,1)$ and $\mathbf{y} \in \mathcal{Y}^0 \setminus \gamma$, we have $e^{\frac{1}{2 \pi \mathsf{b}^2}\Re\left (S_\mathsf{b}(\mathbf{y})- S(\mathbf{y})\right )} \leqslant e^{(p+3)B_{\delta} \mathsf{b}^2}.$\\ Let us denote $\upsilon := \frac{\Re(S)(\mathbf{y^0})- M}{2}$. 
Thus, for all $\mathsf{b}>0$ smaller than both $(2 \pi A)^{-1/2}$ and $\left ( \dfrac{\upsilon}{2 \pi (p+3) B_{\delta}} \right )^{1/4}$, we have: \begin{align*} \left \vert \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\frac{1}{2 \pi \mathsf{b}^2}S_\mathsf{b}(\mathbf{y})} \right \vert &= \left \vert \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\frac{1}{2 \pi \mathsf{b}^2}S(\mathbf{y})} e^{\frac{1}{2 \pi \mathsf{b}^2}(S_\mathsf{b}(\mathbf{y})-S(\mathbf{y}))} \right \vert \\ & \leqslant \int_{\mathcal{Y}^0 \setminus \gamma} d\mathbf{y} \ e^{\frac{1}{2 \pi \mathsf{b}^2}\Re(S)(\mathbf{y})} e^{\frac{1}{2 \pi \mathsf{b}^2}\Re(S_\mathsf{b}(\mathbf{y})-S(\mathbf{y}))} \\ & \leqslant B e^{\frac{1}{2 \pi \mathsf{b}^2} M} e^{(p+3)B_{\delta} \mathsf{b}^2} \leqslant B e^{\frac{1}{2 \pi \mathsf{b}^2} (M+\upsilon)} \\ &= e^{\frac{1}{2 \pi \mathsf{b}^2} S(\mathbf{y^0})} \mathcal{O}_{\mathsf{b} \to 0^+}(1), \end{align*} which concludes the proof. \end{proof} \subsection{Going from $\mathsf{b}$ to $\hbar$}\label{sub:asym:hbar} Recall that for every $\mathsf{b}>0$, we associate a corresponding parameter $\hbar := \mathsf{b}^2 (1+\mathsf{b}^2)^{-2} >0$. For $\mathsf{b}>0$, we define a new potential function $S'_{\mathsf{b}}\colon \mathcal{U} \to \mathbb{C}$, a holomorphic function in $p+2$ complex variables, by: $$S'_{\mathsf{b}}(\mathbf{y}) = i \mathbf{y}^T Q_n \mathbf{y} + \mathbf{y}^T \mathcal{W}_n + 2 \pi \hbar \ \mathrm{Log}\left ( \dfrac{ \Phi_\mathsf{b}\left ( \frac{y_U}{2 \pi \sqrt{\hbar}} \right )^2 \Phi_\mathsf{b}\left ( \frac{y_W}{2 \pi \sqrt{\hbar}} \right ) }{ \Phi_\mathsf{b}\left (\frac{y_1}{2 \pi \sqrt{\hbar}}\right ) \cdots \Phi_\mathsf{b}\left (\frac{y_p}{2 \pi \sqrt{\hbar}}\right ) } \right ), $$ where $Q_n$ and $\mathcal{W}_n$ are as in Theorem \ref{thm:part:func}. \begin{remark}\label{rem:J':S'b} Notice that $$\vert \mathfrak{J}_{X_n}(\hbar,0) \vert =\left \vert \left (\dfrac{1}{2\pi \sqrt{\hbar}}\right )^{p+3} \int_{\mathcal{Y}^0} d \mathbf{y} \ e^{\frac{1}{2\pi \hbar} S'_{\mathsf{b}}(\mathbf{y})} \right \vert.$$ Indeed, this follows from taking $\tau=\tau^0$ in Theorem \ref{thm:part:func:Htrig:odd}, where $\tau^0$ is defined at the end of the proof of Lemma \ref{lem:-vol}. \end{remark} The following Lemma \ref{lem:unif:bound:hbar} will play a similar role to Lemmas \ref{lem:unif:bound} and \ref{lem:unif:bound:neg}, but its proof is fortunately shorter. \begin{lemma}\label{lem:unif:bound:hbar} For all $\delta \in (0,\frac{\pi}{2})$, there exist constants $c_{\delta}, C_{\delta}>0$ such that for all $\mathsf{b} \in (0,c_{\delta})$ and all $y \in \mathbb{R}+i \left ([-(\pi-\delta),-\delta]\cup[\delta,\pi-\delta]\right )$, we have: $$ \left \vert \Re\left ( \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2\left (-e^{y(1+\mathsf{b}^2)}\right ) \right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} (1+\mathsf{b}^2)^2 \mathrm{Li}_2(-e^y) \right ) \right ) \right \vert \leqslant C_{\delta} .$$ \end{lemma} \begin{proof} Let $\delta \in (0,\frac{\pi}{2}) $. Let us define $c_{\delta} := \sqrt{\dfrac{\delta}{2(\pi-\delta)}}$, so that $(\pi-\delta)(1+c_\delta^2) = \pi-\delta/2$. We consider the function $$(x,d,u,\mathsf{b}) \mapsto \left \vert \mathrm{Log} \left ( 1 + e^{(x+id)(1+u\mathsf{b}^2)}\right )\right \vert,$$ which is well-defined and continuous on $[-1,0]\times [\delta,\pi-\delta]\times[0,1]\times[0,c_{\delta}]$; indeed, since $$d(1+u \mathsf{b}^2) \leqslant (\pi-\delta)(1+c_\delta^2) = \pi-\delta/2 < \pi,$$ the exponential never equals $-1$.
Let us denote by $L_{\delta}>0$ the maximum of this function. Let us define $$\Delta(\mathsf{b},y):= \Im\left ( \mathrm{Li}_2\left (-e^{y(1+\mathsf{b}^2)}\right ) - (1+\mathsf{b}^2)^2 \mathrm{Li}_2(-e^y) \right )$$ for all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R}+i \left ([-(\pi-\delta),-\delta]\cup[\delta,\pi-\delta]\right )$. We first remark a parity property similar to the one of Lemma \ref{lem:parity}. Indeed, it similarly follows from Proposition \ref{prop:dilog} (1) that $\Delta(\mathsf{b},y) = -\Delta(\mathsf{b},-y) = -\Delta(\mathsf{b},\overline{y}) = \Delta(\mathsf{b},-\overline{y})$ for all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R}+i \left ([-(\pi-\delta),-\delta]\cup[\delta,\pi-\delta]\right )$. Thus we can assume that $y \in \mathbb{R}_{\leqslant 0}+i [\delta,\pi-\delta]$ in the remainder of the proof. It then follows from Taylor's theorem that for all $\mathsf{b} \in (0,1)$ and all $y \in \mathbb{R}_{\leqslant 0}+i [\delta,\pi-\delta]$, \begin{align*} \Delta(\mathsf{b},y) &= \Im\left ( \left (\int_{0}^1 \mathrm{Log} \left ( 1 + e^{y(1+u\mathsf{b}^2)}\right ) du\right ) (-y\mathsf{b}^2) - (2\mathsf{b}^2 + \mathsf{b}^4) \mathrm{Li}_2(-e^y) \right ) \\ &= - \mathsf{b}^2 \Im\left ( y \left (\int_{0}^1 \mathrm{Log} \left ( 1 + e^{y(1+u\mathsf{b}^2)}\right ) du\right ) + (2 + \mathsf{b}^2) \mathrm{Li}_2(-e^y) \right ). \end{align*} We will bound $\left \vert \dfrac{\Delta(\mathsf{b},y)}{-\mathsf{b}^2} \right \vert$ separately for $\Re(y) \in [-1,0]$ and then for $\Re(y) \in (-\infty,-1)$. Firstly, we have for all $y \in [-1,0] + i [\delta,\pi-\delta]$ and all $\mathsf{b} \in (0,c_{\delta})$: \begin{align*} \left \vert \dfrac{\Delta(\mathsf{b},y)}{-\mathsf{b}^2} \right \vert & \leqslant \vert y \vert \left (\int_{0}^1 \left \vert \mathrm{Log} \left ( 1 + e^{y(1+u\mathsf{b}^2)}\right ) \right \vert du\right ) + (2 + \mathsf{b}^2) \vert \mathrm{Li}_2(-e^y) \vert \\ & \leqslant \sqrt{1 + (\pi-\delta)^2} L_\delta + 3 L'_\delta, \end{align*} where $L'_\delta$ is the maximum of $(x,d) \mapsto \vert \mathrm{Li}_2(-e^{x+id}) \vert$ on $(-\infty,0]\times[\delta,\pi-\delta]$. Secondly, let $y = x+id \in (-\infty,-1] + i [\delta,\pi-\delta]$ and $\mathsf{b} \in (0,c_{\delta})$. For all $u \in [0,1]$, we have $\left \vert e^{y(1+u\mathsf{b}^2)} \right \vert <1$, therefore (from the triangle inequality on the Taylor expansion): $$ \left \vert \mathrm{Log} \left ( 1 + e^{y(1+u\mathsf{b}^2)} \right ) \right \vert \leqslant - \log\left ( 1- \left \vert e^{y(1+u\mathsf{b}^2)} \right \vert \right ) = \log\left ( 1 + \dfrac{e^{x(1+u\mathsf{b}^2)}}{1-e^{x(1+u\mathsf{b}^2)}} \right ) \leqslant \dfrac{e^{x(1+u\mathsf{b}^2)}}{1-e^{x(1+u\mathsf{b}^2)}} \leqslant \dfrac{e^{x}}{1-e^{x}}, $$ where we used $\log(1+s) \leqslant s$ and the fact that $e^{x(1+u\mathsf{b}^2)} \leqslant e^{x}$ since $x \leqslant -1 < 0$; hence \begin{align*} \left \vert \dfrac{\Delta(\mathsf{b},y)}{-\mathsf{b}^2} \right \vert & \leqslant \vert y \vert \left (\int_{0}^1 \left \vert \mathrm{Log} \left ( 1 + e^{y(1+u\mathsf{b}^2)}\right ) \right \vert du\right ) + (2 + \mathsf{b}^2) \vert \mathrm{Li}_2(-e^y) \vert \\ & \leqslant \sqrt{x^2 + (\pi-\delta)^2} \dfrac{e^{x}}{1-e^{x}} + 3 L'_\delta \\ & \leqslant E_\delta + 3 L'_\delta, \end{align*} where $E_\delta$ is the maximum of the function $ x \in (-\infty,-1] \mapsto \sqrt{x^2 + (\pi-\delta)^2} \dfrac{e^{x}}{1-e^{x}}$. We now conclude the proof by defining $C_{\delta} := \frac{1}{2\pi} \max\{\sqrt{1 + (\pi-\delta)^2} L_\delta + 3 L'_\delta, E_\delta + 3 L'_\delta\}$. \end{proof} We can now state and prove the final piece of the proof of Theorem \ref{thm:vol:conj}.
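Before doing so, here is a small numerical illustration (not part of the proof) of Lemma \ref{lem:unif:bound:hbar}: the quantity $\Delta(\mathsf{b},y)/\mathsf{b}^2$ from the proof above stays bounded as $\mathsf{b} \to 0^+$, here sampled at a few points of the bands (the sample points and precision settings below are arbitrary choices made for illustration).
\begin{verbatim}
# Numerical illustration of Lemma lem:unif:bound:hbar (illustration only).
import mpmath as mp

mp.mp.dps = 25

def delta_over_b2(b, y):
    d = mp.im(mp.polylog(2, -mp.exp(y * (1 + b**2)))
              - (1 + b**2)**2 * mp.polylog(2, -mp.exp(y)))
    return d / b**2

samples = [mp.mpc(-3.0, 0.5), mp.mpc(-0.5, 2.0), mp.mpc(1.0, -1.5)]
for b in [mp.mpf('0.2'), mp.mpf('0.1'), mp.mpf('0.05')]:
    print(b, [mp.nstr(delta_over_b2(b, y), 6) for y in samples])
\end{verbatim}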
\begin{proposition}\label{prop:all:contour:S'b} For the constant $\rho' \in \mathbb{C}^*$ defined in Proposition \ref{prop:all:contour:Sb}, we have, as $\hbar \to 0^+$, \begin{align*} \int_{\mathcal{Y}^0} d\mathbf{y} e^{\frac{1}{2 \pi \hbar}S'_\mathsf{b}(\mathbf{y})} &= \int_{\mathcal{Y}^0} d\mathbf{y} \ e^{\frac { i \mathbf{y}^T Q_n\mathbf{y} + \mathbf{y}^T \mathcal{W}_n } {2 \pi \hbar} } \dfrac{ \Phi_\mathsf{b}\left ( \frac{y_U}{2 \pi \sqrt{\hbar}} \right )^2 \Phi_\mathsf{b}\left ( \frac{y_W}{2 \pi \sqrt{\hbar}} \right ) }{ \Phi_\mathsf{b}\left (\frac{y_1}{2 \pi \sqrt{\hbar}}\right ) \cdots \Phi_\mathsf{b}\left (\frac{y_p}{2 \pi \sqrt{\hbar}}\right ) } \\ &= e^{\frac{1}{2 \pi \hbar} S(\mathbf{y^0})} \left ( \rho' \hbar^{\frac{p+2}{2}} \left ( 1 + o_{\hbar \to 0^+}\left (1\right ) \right ) + \mathcal{O}_{\hbar \to 0^+}(1) \right ). \end{align*} In particular, $$(2 \pi \hbar) \log \left \vert \int_{\mathcal{Y}^0} d\mathbf{y} \ e^{\frac{1}{2 \pi \hbar}S'_\mathsf{b}(\mathbf{y})} \right \vert \underset{\hbar \to 0^+}{\longrightarrow} \Re S(\mathbf{y^0}) = - \mathrm{Vol}(S^3 \setminus K_n). $$ \end{proposition} \begin{proof} The proof will be similar to the one of Proposition \ref{prop:all:contour:Sb} (notably, the second statement follows from the first one in the exact same way), but will need also Lemma \ref{lem:unif:bound:hbar} to bound an extra term. Let us prove the first statement. Let $\delta>0$ such that the absolute value of the imaginary parts of the coordinates of any $\mathbf{y} \in \mathcal{Y}^0$ lie in $[\delta,\pi-\delta]$. Let us again denote $(\eta_1, \ldots, \eta_p,\eta_U,\eta_W) := (-1,\ldots,-1,2,1)$. Then for all $\mathbf{y} \in \mathcal{Y}^0$ and all $\mathsf{b} \in (0,c_{\delta})$, it follows from Lemmas \ref{lem:unif:bound}, \ref{lem:unif:bound:neg} and \ref{lem:unif:bound:hbar} that \begin{align*} &\left \vert \Re\left ( \frac{1}{2 \pi \hbar}S'_\mathsf{b}(\mathbf{y}) - \frac{1}{2 \pi \hbar}S(\mathbf{y}) \right ) \right \vert = \left \vert \Re\left ( \sum_{j=1}^W \eta_j \left (\mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{y_j}{2 \pi \sqrt{\hbar}} \right )\right ) - \left ( \frac{-i}{2 \pi \hbar} \mathrm{Li}_2(-e^{y_j}) \right ) \right ) \right ) \right \vert \\ & \hspace*{1.5cm} \leqslant \sum_{j=1}^W \vert \eta_j \vert \left \vert \Re\left ( \left (\mathrm{Log}\left ( \Phi_\mathsf{b}\left ( \frac{y_j(1+\mathsf{b}^2)}{2 \pi \mathsf{b}} \right )\right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^{y_j(1+\mathsf{b}^2)}) \right ) \right ) \right ) \right \vert \\ & \hspace*{1.5cm} \ \ +\sum_{j=1}^W \vert \eta_j \vert \left \vert \Re\left ( \left ( \frac{-i}{2 \pi \mathsf{b}^2} \mathrm{Li}_2(-e^{y_j(1+\mathsf{b}^2)}) \right ) - \left ( \frac{-i}{2 \pi \mathsf{b}^2}(1+\mathsf{b}^2)^2 \mathrm{Li}_2(-e^{y_j}) \right ) \right ) \right \vert \\ & \hspace*{1.5cm} \leqslant (p+3)\left (B_{\frac{\delta}{2}} \mathsf{b}^2 + C_{\delta}\right ) \leqslant (p+3)\left (B_{\frac{\delta}{2}} + C_{\delta}\right ). 
\end{align*} The remainder of the proof is now the same as for Proposition \ref{prop:all:contour:Sb}, by identifying $\lambda= \frac{1}{2 \pi \hbar}$ and taking $\hbar$ small enough so that the associated $\mathsf{b}$ satisfies $$0 < \mathsf{b} < \min\left \{c_\delta, (2\pi A)^{-1/2}, \left ( \dfrac{\upsilon}{2 \pi (p+3) (B_{\delta/2}+C_{\delta})} \right )^{1/2}\right \}.$$ \end{proof} \subsection{Conclusion and comments}\label{sub:conjvol:conclusion} \begin{proof}[Proof of Theorem \ref{thm:vol:conj}] The second equality follows from Remark \ref{rem:J':S'b} and Proposition \ref{prop:all:contour:S'b}, and the first equality follows from the identity $$J_{X_n}(\hbar,x) = 2 \pi \sqrt{\hbar} \ \mathfrak{J}_{X_n}(\hbar, (2\pi \sqrt{\hbar})x).$$ \end{proof} Some comments are in order. \begin{itemize} \item The various upper bounds we constructed were far from optimal, since we were mostly interested in proving that the \textit{exponential decrease rate} yielded the hyperbolic volume. Anyone interested in computing a more detailed asymptotic expansion of $\mathfrak{J}_{X_n}(\hbar,0)$ (looking for the \textit{complex volume}, the \textit{Reidemeister torsions} or possible deeper terms such as the $n$-loop invariants of \cite{DG}) would probably need to refine the estimates of Lemmas \ref{lem:unbounded:contour}, \ref{lem:unif:bound} and \ref{lem:unif:bound:hbar} to higher order and with sharper precision, as well as carefully study the coefficients appearing in Theorem \ref{thm:SPM}. \item In this theory, the integration variables $y_j$ in $\mathfrak{J}_{X_n}(\hbar,0)$ lie in an \textit{unbounded} part of $\mathbb{C}$, contrary to what happens for Kashaev's invariant or the colored Jones polynomials. This is why uniform bounds such as the ones of Lemmas \ref{lem:unbounded:contour}, \ref{lem:unif:bound} and \ref{lem:unif:bound:hbar} were new technical difficulties that had to be overcome in order to obtain the desired asymptotics. Since these results do not depend on the knot, the triangulation or the potential function $S$ (assuming it has the same general form as here), we hope that they can be of use in further studies of asymptotics of quantum invariants such as the Teichm\"uller TQFT. \end{itemize} \section{The case of even twist knots}\label{sec:appendix} When the twist knot $K_n$ has an even number of crossings, we can prove the same results as for the odd twist knots, which are: \begin{itemize} \item the construction of convenient H-triangulations and ideal triangulations (Section \ref{sub:even:trig}), \item the geometricity of the ideal triangulations (Section \ref{sub:even:geom}), \item the computation of the partition functions of the Teichm\"uller TQFT (Section \ref{sub:even:tqft}), \item the volume conjecture as a consequence of geometricity (Section \ref{sub:even:vol:conj}). \end{itemize} We have tried to provide details only for the parts of the proofs that differ from the case of odd twist knots. As the reader will see, most of these differences lie in explicit values rather than in the general structure of the proofs. As such, we expect that the techniques developed in the previous sections and adapted in this one can be generalised to several other families of knots in $3$-manifolds. \subsection{Construction of triangulations}\label{sub:even:trig} In the rest of this section we consider a twist knot $K_n$ with $n$ even, $n\geqslant 4$ (the case $n=2$ will be treated in Remark \ref{rem:K2}). We proceed as in Section \ref{sec:trig}, and build an H-triangulation of $(S^3,K_n)$ from a diagram of $K_n$.
The first step is described in Figure \ref{fig:diagram:htriang:even}. Note that $D$ is once again an $(n+1)$-gon, and $E$ is an $(n+2)$-gon. \begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black} , every node/.style={transform shape , knot crossing , inner sep=1.5 pt } ] \begin{scope}[scale=0.7] \begin{scope}[dashed,decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (-2,-2)--(-4,-4); \draw[postaction={decorate}] (-7,1)--(-4,-4); \draw[postaction={decorate}] (-2,-2)--(-4,0); \draw[postaction={decorate}] (-7,1)--(-4,0); \draw[postaction={decorate}] (-7,1)--(-2,6); \draw[postaction={decorate}] (-2,2)--(-4,0); \draw[postaction={decorate}] (-2,2)--(-2,6); \draw[postaction={decorate}] (2,2)--(6,3); \draw[postaction={decorate}] (2,2)--(4,0); \draw[postaction={decorate}] (7,0)--(6,3); \draw[postaction={decorate}] (7,0)--(4,0); \draw[postaction={decorate}] (7,0)--(3,-4); \draw[postaction={decorate}] (2,-2)--(3,-4); \draw[postaction={decorate}] (2,-2)--(4,0); \end{scope} \begin{scope}[dashed,decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (-2,2)--(2,2); \draw[postaction={decorate}] (-2,2)--(-1,4); \draw[postaction={decorate}] (1,4)--(2,2); \draw[postaction={decorate}] (1,4)--(-1,4); \draw[postaction={decorate}] (1,4)--(3,6); \draw[postaction={decorate}] (-2,6)--(-1,4); \draw[postaction={decorate}] (-2,6)--(3,6); \end{scope} \draw[style=dashed] (1,4) -- (6,3); \begin{scope}[xshift=3.5cm, yshift=3.5cm, rotate=-100, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=blue, line width=0.5mm] (3,6) -- (6,3); \draw[color=blue, line width=0.5mm] (2,2) -- (7,0); \draw[color=blue, line width=0.5mm] (4,0) -- (3,-4); \draw[color=blue, line width=0.5mm] (-7,1) -- (-2,-2); \draw[color=blue, line width=0.5mm] (-4,0) -- (-2,6); \draw[color=blue, line width=0.5mm] (3,6) -- (-1,4); \draw[color=blue, line width=0.5mm] (-2,2) -- (1,4); \draw[color=blue, line width=0.5mm] (2,-2) -- (3.3,-1.5); \draw[color=blue, line width=0.5mm] (4,-1.25) -- (7,0); \draw[color=blue, line width=0.5mm] (4,0) -- (4.5,0.7); \draw[color=blue, line width=0.5mm] (4.8,1.2) -- (6,3); \draw[color=blue, line width=0.5mm] (-1,4) -- (-0.2,3.5); \draw[color=blue, line width=0.5mm] (0.2,3.2) -- (2,2); \draw[color=blue, line width=0.5mm] (1,4) -- (0.3,4.45); \draw[color=blue, line width=0.5mm] (-0.1,4.7) -- (-2,6); \draw[color=blue, line width=0.5mm] (-7,1) -- (-3.7,1.6); \draw[color=blue, line width=0.5mm] (-3.1,1.75) -- (-2,2); \draw[color=blue, line width=0.5mm] (-4,0) -- (-4,-0.6); \draw[color=blue, line width=0.5mm] (-4,-1.1) -- (-4,-4); \draw[scale=4,color=blue] (0,-3/4) node {$\ldots$}; \draw[scale=2] (0,0) node {$D$}; \draw[scale=2] (3/2,4.5/2) node {$m$}; \draw[scale=2] (2.5/2,3/2) node {$r$}; \draw[scale=2] (-1.5/2,4/2) node {$s$}; \draw[scale=2] (-6/2,4/2) node {$E$}; \end{scope} \end{tikzpicture} \caption{Building an H-triangulation from a diagram of $K_n$} \label{fig:diagram:htriang:even} \end{figure} From Figure \ref{fig:diagram:htriang:even} we go to Figure \ref{fig:boundary:balls:even} and Figure \ref{fig:Htriang:polyhedron:even} exactly as in Section \ref{sec:trig}. 
\begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black} , every node/.style={transform shape , knot crossing , inner sep=1.5 pt } ] \draw[scale=1] (0,-1) node {$(a)$}; \draw[scale=1] (8.2,-1) node {$(b)$}; \begin{scope}[scale=0.7] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (-5,2)--(-4,0); \draw[postaction={decorate}] (-5,2)--(-4,5); \draw[postaction={decorate}] (5,2)--(4,5); \draw[postaction={decorate}] (5,2)--(4,0); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (-4,5)--(0,7); \end{scope} \draw[color=blue, line width=0.5mm] (0,7) -- (4,5); \draw[scale=2] (0,0) node {$\ldots$}; \draw[->] (3,3.5) -- (2,3.5 +1.5/7); \draw (2,3.5 +1.5/7) -- (-4,5); \begin{scope}[xshift=3.5cm, yshift=4.25cm, rotate=-30, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (3,3.5) -- (4,5); \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (3,3.5) -- (5,2); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.7 with {\arrow{>>}}} ] \draw[postaction={decorate}] (3,3.5) -- (0,7); \end{scope} \draw[scale=2] (3/2,3/2) node {$D$}; \draw[scale=2] (3/2,4.2/2) node {$m$}; \draw[scale=2] (2/2,4.1/2) node {$s$}; \draw[scale=2] (3.7/2,3.5/2) node {$r$}; \draw[scale=2] (-5/2,4/2) node {$E$}; \end{scope} \begin{scope}[xshift=8.2cm,scale=0.7] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (-5,2)--(-4,0); \draw[postaction={decorate}] (-5,2)--(-4,5); \draw[postaction={decorate}] (5,2)--(4,5); \draw[postaction={decorate}] (5,2)--(4,0); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (-4,5)--(0,7); \end{scope} \draw[color=blue, line width=0.5mm] (0,7) -- (4,5); \draw[scale=2] (0,0) node {$\ldots$}; \begin{scope}[xshift=0cm, yshift=5cm, rotate=-90, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (-4,5) -- (4,5); \draw[->>] (-5,2) -- (-3.5,2.75); \draw (-3.5,2.75) -- (-2,3.5); \draw[->>] (-4,5) -- (-3,4.25); \draw (-3,4.25) -- (-2,3.5); \draw (4,5) -- (-1,5-1.5*5/6); \draw[<-] (-1,5-1.5*5/6) -- (-2,3.5); \draw[scale=2] (-2/2,3/2) node {$D$}; \draw[scale=2] (-1/2,5.5/2) node {$m$}; \draw[scale=2] (-3/2,3.5/2) node {$s$}; \draw[scale=2] (-1.7/2,4/2) node {$r$}; \draw[scale=2] (-5/2,4/2) node {$E$}; \end{scope} \end{tikzpicture} \caption{Boundaries of $B_+$ and $B_-$}\label{fig:boundary:balls:even} \end{figure} \begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black} , every node/.style={transform shape , knot crossing , inner sep=1.5 pt } ] \draw[scale=1] (0,-2) node {$(a)$}; \draw[scale=1] (8,-2) node {$(b)$}; \begin{scope}[scale=0.7] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (-5,2)--(-4,0); \draw[postaction={decorate}] (-5,2)--(-4,5); \draw[postaction={decorate}] (5,2)--(4,5); \draw[postaction={decorate}] (5,2)--(4,0); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (-4,5)--(0,7); \end{scope} \draw[color=blue, line width=0.5mm] (0,7) -- (4,5); \draw[scale=2] (0,0) node {$\ldots$}; \begin{scope}[xshift=0cm, yshift=5cm, rotate=-90, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (-4,5) -- (4,5); \draw[->>] (-5,2) -- (-3.5,2.75); \draw (-3.5,2.75) -- 
(-2,3.5); \draw[->>] (-4,5) -- (-3,4.25); \draw (-3,4.25) -- (-2,3.5); \draw (4,5) -- (-1,5-1.5*5/6); \draw[<-] (-1,5-1.5*5/6) -- (-2,3.5); \draw[scale=2] (-2/2,3/2) node {$D$}; \draw[scale=2] (-1/2,5.5/2) node {$m$}; \draw[scale=2] (-3/2,3.5/2) node {$s$}; \draw[scale=2] (-1.7/2,4/2) node {$r$}; \begin{scope}[xshift=3.5cm, yshift=4.25cm, rotate=-30, scale=0.2] \draw[color=red] (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \begin{scope}[style=dashed] \draw[color=red][->] (3,3.5) -- (2,3.5 +1.5/7); \draw[color=red] (2,3.5 +1.5/7) -- (-4,5); \draw[color=red] (3,3.5) -- (4,5); \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}][color=red] (3,3.5) -- (5,2); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.7 with {\arrow{>>}}} ] \draw[postaction={decorate}][color=red] (3,3.5) -- (0,7); \end{scope} \draw[scale=2][color=red] (3/2,3/2) node {$D$}; \draw[scale=2][color=red] (3/2,4.2/2) node {$m$}; \draw[scale=2][color=red] (2/2,4.1/2) node {$s$}; \draw[scale=2][color=red] (3.7/2,3.5/2) node {$r$}; \end{scope} \end{scope} \begin{scope}[xshift=8cm,yshift=-1.5cm,scale=0.5] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (0,4)--(0,0); \draw[postaction={decorate}] (0,4)--(0,6); \draw[postaction={decorate}] (0,12) -- (0,10); \draw[postaction={decorate}] (0,12)--(-8,16); \draw[postaction={decorate}] (4,14) -- (0,0); \draw[postaction={decorate}] (-1,5)--(-8,16); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (0,4)--(-1,5); \draw[postaction={decorate}] (0,0)--(-1,5); \draw[postaction={decorate}] (4,14) -- (8,16); \draw[postaction={decorate}] (0,0) -- (8,16); \draw[postaction={decorate}] (4,14) -- (0,12); \end{scope} \draw (0,0) -- (-8,16); \begin{scope}[xshift=-4cm, yshift=8cm, rotate=30, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (4,14) -- (-8,16); \begin{scope}[xshift=-2cm, yshift=15cm, rotate=75, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=blue, line width=0.5mm] (-8,16) -- (8,16); \draw[scale=2] (0,8/2) node {$\vdots$}; \draw[scale=2] (4/2,15/2) node {$m$}; \draw[scale=2] (2.8/2,13.8/2) node {$r$}; \draw[scale=2] (5/2,14/2) node {$s$}; \draw[scale=2] (3/2,12.5/2) node {$D$}; \draw[scale=2] (-4/2,5/2) node {$m$}; \draw[scale=2] (-1.5/2,4.5/2) node {$r$}; \draw[scale=2] (-0.3/2,3.5/2) node {$s$}; \draw[scale=2] (-0.8/2,6/2) node {$D$}; \end{scope} \end{tikzpicture} \caption{A cellular decomposition of $(S^3,K_n)$ as a polyhedron glued to itself}\label{fig:Htriang:polyhedron:even} \end{figure} Then we add a new edge (with simple full arrow) and cut $D$ into $u$ and $D'$ (see Figure \ref{fig:even:Htriang:bigon:trick} (a)), and then we apply the bigon trick $p$ times, where $p:= \frac{n-2}{2}$. We finally obtain the polyhedron in Figure \ref{fig:even:Htriang:bigon:trick} (b). \begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black}] \draw[scale=1] (0,-2) node {$(a)$}; \draw[scale=1] (8,-2) node {$(b)$}; \begin{scope}[xshift=0cm,yshift=-1.5cm,scale=0.45] \draw (0,6) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,8) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,10) node[shape=circle,fill=black,scale=0.3] {}; \draw[->,>=latex] (0,4) .. controls +(-0.5,0) and +(0,-0.5) .. (-1,7); \draw (-1,7) .. controls +(0,0.5) and +(-0.5,0) .. 
(0,8); \draw[scale=2] (-0.5/2,7/2) node {$u$}; \draw[->,>=latex] (4,14) -- (2,9); \draw (2,9) -- (0,4); \draw[scale=2] (0.5/2,3.5/2) node {$u$}; \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (0,4)--(0,0); \draw[postaction={decorate}] (0,4)--(0,6); \draw[postaction={decorate}] (0,8)--(0,6); \draw[postaction={decorate}] (0,12) -- (0,10); \draw[postaction={decorate}] (0,12)--(-8,16); \draw[postaction={decorate}] (4,14) -- (0,0); \draw[postaction={decorate}] (-1,5)--(-8,16); \end{scope} \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (0,4)--(-1,5); \draw[postaction={decorate}] (0,0)--(-1,5); \draw[postaction={decorate}] (4,14) -- (8,16); \draw[postaction={decorate}] (0,0) -- (8,16); \draw[postaction={decorate}] (4,14) -- (0,12); \end{scope} \draw (0,0) -- (-8,16); \begin{scope}[xshift=-4cm, yshift=8cm, rotate=30, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (4,14) -- (-8,16); \begin{scope}[xshift=-2cm, yshift=15cm, rotate=75, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=blue, line width=0.5mm] (-8,16) -- (8,16); \draw[scale=2] (0,9/2) node {$\vdots$}; \draw[scale=2] (4/2,15/2) node {$m$}; \draw[scale=2] (2.8/2,13.8/2) node {$r$}; \draw[scale=2] (5/2,14/2) node {$s$}; \draw[scale=2] (2/2,11.5/2) node {$D'$}; \draw[scale=2] (-4/2,5/2) node {$m$}; \draw[scale=2] (-1.5/2,4.5/2) node {$r$}; \draw[scale=2] (-0.3/2,3.5/2) node {$s$}; \draw[scale=2] (-4/2,12/2) node {$D'$}; \end{scope} \begin{scope}[xshift=8cm,yshift=-1.5cm,scale=0.45] \draw (0,6) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,10) node[shape=circle,fill=black,scale=0.3] {}; \draw[->,>=latex] (0,12) -- (-1,9); \draw (-1,9) -- (-2,6); \draw[scale=2] (-2.5/2,11/2) node {$u$}; \draw[->,>=latex] (4,14) -- (2,9); \draw (2,9) -- (0,4); \draw[scale=2] (0.5/2,3.5/2) node {$u$}; \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (0,4)--(0,0); \draw[postaction={decorate}] (0,12)--(-8,16); \draw[postaction={decorate}] (4,14) -- (0,0); \draw[postaction={decorate}] (-2,6)--(-8,16); \end{scope} \draw[->,>=latex] (0,4) -- (0,5); \draw (0,5) -- (0,6); \draw[->,>=latex] (0,10) -- (0,11); \draw (0,11) -- (0,12); \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (0,4)--(-2,6); \draw[postaction={decorate}] (0,0)--(-2,6); \draw[postaction={decorate}] (4,14) -- (8,16); \draw[postaction={decorate}] (0,0) -- (8,16); \draw[postaction={decorate}] (4,14) -- (0,12); \end{scope} \draw (0,0) -- (-8,16); \begin{scope}[xshift=-4cm, yshift=8cm, rotate=30, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (4,14) -- (-8,16); \begin{scope}[xshift=-2cm, yshift=15cm, rotate=75, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=blue, line width=0.5mm] (-8,16) -- (8,16); \draw[scale=2] (0,8/2) node {$\vdots$}; \draw[scale=2] (4/2,15/2) node {$m$}; \draw[scale=2] (2.8/2,13.8/2) node {$r$}; \draw[scale=2] (5/2,14/2) node {$s$}; \draw[scale=2] (2/2,11/2) node {$G$}; \draw[scale=2] (-4/2,5/2) node {$m$}; \draw[scale=2] (-2.3/2,5.5/2) node {$r$}; \draw[scale=2] (-0.3/2,3.5/2) node {$s$}; \draw[scale=2] (-1/2,6/2) node {$G$}; \end{scope} \end{tikzpicture} \caption{A cellular decomposition of $(S^3,K_n)$ before and after the bigon trick}\label{fig:even:Htriang:bigon:trick} \end{figure} We now chop off the quadrilateral made up of the two 
adjacent faces $G$ (which are $(p+2)$-gons) and we add a new edge (double full arrow) and two new faces $e_{p+1},f_p$. We triangulate the previous quadrilateral as in Figure \ref{fig:GG:tower} and we finally obtain a decomposition of $S^3$ in three polyhedra glued to one another, as described in Figure \ref{fig:even:Htriang:flip:tower}. Note that if $p=1$, then $G=e_1=e_p=f_0=f_{p-1}$ and there is no tower. \begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black}] \begin{scope}[xshift=0cm,yshift=0cm,scale=0.45] \draw[->,>=latex] (0,12) -- (-1,9); \draw (-1,9) -- (-2,6); \draw[scale=2] (-2.5/2,11/2) node {$u$}; \draw[->,>=latex] (4,14) -- (2,9); \draw (2,9) -- (0,4); \draw[scale=2] (0.5/2,3.5/2) node {$u$}; \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \draw[postaction={decorate}] (0,4)--(0,0); \draw[postaction={decorate}] (0,12)--(-8,16); \draw[postaction={decorate}] (4,14) -- (0,0); \draw[postaction={decorate}] (-2,6)--(-8,16); \end{scope} \draw[->>,>=latex] (4,14) -- (1,10); \draw (1,10) -- (-2,6); \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>>}}} ] \draw[postaction={decorate}] (0,4)--(-2,6); \draw[postaction={decorate}] (0,0)--(-2,6); \draw[postaction={decorate}] (4,14) -- (8,16); \draw[postaction={decorate}] (0,0) -- (8,16); \draw[postaction={decorate}] (4,14) -- (0,12); \end{scope} \draw (0,0) -- (-8,16); \begin{scope}[xshift=-4cm, yshift=8cm, rotate=30, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw (4,14) -- (-8,16); \begin{scope}[xshift=-2cm, yshift=15cm, rotate=75, scale=0.2] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=blue, line width=0.5mm] (-8,16) -- (8,16); \draw[scale=2] (4/2,15/2) node {$m$}; \draw[scale=2] (2.8/2,13.8/2) node {$r$}; \draw[scale=2] (5/2,14/2) node {$s$}; \draw[scale=2] (0.5/2,8/2) node {$f_p$}; \draw[scale=2] (-4/2,5/2) node {$m$}; \draw[scale=2] (-2.3/2,5.5/2) node {$r$}; \draw[scale=2] (-0.3/2,3.5/2) node {$s$}; \draw[scale=2] (0.6/2,11/2) node {$e_{p+1}$}; \end{scope} \begin{scope}[xshift=3cm,yshift=-2cm,scale=0.9] \draw[color=red,->] (0,0)--(0,1); \draw[color=red] (0,1)--(0,2); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=red] (0.7,2) node{\tiny $p-1$}; \node[draw,ellipse,color=red] (S) at(0.7,2) {\ \ \ \ }; \end{scope} \draw[->,>=latex] (2,3)--(1,1.5); \draw (1,1.5)--(0,0); \draw[->,>=latex] (0,2)--(-1,2.5); \draw (-1,2.5)--(-2,3); \draw[->>,>=latex] (2,3)--(0,3); \draw (0,3)--(-2,3); \draw[->>] (2,3)--(1,2.5); \draw (1,2.5)--(0,2); \draw[->>] (0,0)--(-1,1.5); \draw (-1,1.5)--(-2,3); \draw[scale=2] (0.8/2,2/2) node{$f_{p-1}$}; \draw[scale=2] (-0.5/2,1.5/2) node{$e_{p}$}; \draw[scale=2] (0/2,2.5/2) node{$e_{p+1}$}; \draw[scale=2] (0/2,3.5/2) node{$f_p$}; \end{scope} \begin{scope}[xshift=7cm,yshift=0cm,scale=1] \draw (0,0) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,1) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,2) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,3) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,5) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,6) node[shape=circle,fill=black,scale=0.3] {}; \draw (0,7) node[shape=circle,fill=black,scale=0.3] {}; \draw (5,3.5) node[shape=circle,fill=black,scale=0.3] {}; \draw[scale=1] (0,4) node {$\vdots$}; \draw[scale=1,color=black] (1,1) node {$e_1$}; \draw[scale=1,color=black] (-0.2,1.5) node {$e_1$}; \draw[scale=1,color=black] (1,1.8) node {$e_2$}; \draw[scale=1,color=black] (-0.2,2.4) node {$e_2$}; 
\draw[scale=1,color=black] (1,5) node {$e_{p-1}$}; \draw[scale=1,color=black] (-1.5,4) node {$e_{p-1}$}; \draw[scale=1,color=black] (1,5.9) node {$e_p$}; \draw[scale=1,color=black] (3,6) node {$f_{p-1}$}; \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=green] (3.5,3) circle (0.2) node{\scriptsize $1$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=green] (-0.3,1.5) circle (0.2) node{\scriptsize $1$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5,color=purple] \draw[color=purple] (3.5,4.5) circle (0.2) node{\scriptsize $2$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5,color=purple] \draw[color=purple] (-1.3,2.8) circle (0.2) node{\scriptsize $2$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=violet] (3.5,5.8) node{\tiny $p-2$}; \node[draw,ellipse,color=violet] (S) at(3.5,5.8) {\ \ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=violet] (-2.5,4) node{\tiny $p-2$}; \node[draw,ellipse,color=violet] (S) at(-2.5,4) {\ \ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=red] (3.5,6.8) node{\tiny $p-1$}; \node[draw,ellipse,color=red] (S) at(3.5,6.8) {\ \ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw[color=red] (-4,5.5) node{\tiny $p-1$}; \node[draw,ellipse,color=red] (S) at(-4,5.5) {\ \ \ \ }; \end{scope} \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate},color=green] (5,3.5)--(0,1); \draw[postaction={decorate},color=purple] (5,3.5)--(0,2); \draw[postaction={decorate},color=violet] (5,3.5)--(0,5); \draw[postaction={decorate},color=red] (5,3.5)--(0,6); \draw[postaction={decorate},color=green] (0,0) .. controls +(-0.6,0.5) and +(-0.5,0) .. (0,2); \draw[postaction={decorate},color=purple] (0,0) .. controls +(-1.2,0.7) and +(-0.7,0) .. (0,3); \draw[postaction={decorate},color=violet] (0,0) .. controls +(-2.5,1.5) and +(-0.7,0) .. (0,6); \draw[postaction={decorate},color=red] (0,0) .. controls +(-4,3) and +(-2,-2) .. (0,7); \end{scope} \draw (0,0) -- (3,2.1); \draw[<-,>=latex] (3,2.1) -- (5,3.5); \draw (0,7) -- (3,7-2.1); \draw[<<-] (3,7-2.1) -- (5,3.5); \begin{scope}[yshift=0cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \begin{scope}[yshift=1cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \begin{scope}[yshift=2cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \begin{scope}[yshift=5cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \begin{scope}[yshift=6cm] \draw[->,>=latex] (0,0) -- (0,0.7); \draw (0,0.6)--(0,1); \end{scope} \end{scope} \end{tikzpicture} \caption{A flip move and a tower of tetrahedra}\label{fig:even:Htriang:flip:tower} \end{figure} We can then decompose the polyhedra in Figure \ref{fig:even:Htriang:flip:tower} into ordered tetrahedra and obtain the H-triangulation of Figure \ref{fig:H:trig:even}. Along the way, in order to harmonize the notation with the small cases ($p=0,1$), we did the following arrow replacements: \begin{itemize} \item full black simple arrow by simple arrow with circled $0$, \item full black double arrow by simple arrow with circled $p+1$, \item double arrow by simple arrow with circled $p$, \item full white arrow by double full white arrow. 
\end{itemize} Moreover, we cut the previous polyehdron into $p+4$ tetrahedra, introducing new triangular faces $v$ (behind $e_{p+1},r,u$), $g$ (behind $f_p,s,u$), $s'$ (completing $m,m,s$), and $f_1, \ldots, f_{p-1}$ at each of the $p-1$ floors of the tower of Figure \ref{fig:even:Htriang:flip:tower}. We add the convention $f_0=e_1$ to account for the case $p=0$. \begin{figure}[!h] \begin{tikzpicture} \begin{scope}[xshift=1cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_1$} ; \draw (0,-0.6) node{$e_2$} ; \draw (-0.5,0.3) node{$f_1$} ; \draw (0.5,0.3) node{$e_1$} ; \draw (0,-1.4) node{\large $T_1$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) circle (0.15) node{\scriptsize $2$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $1$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $1$}; \end{scope} \end{scope} \draw (3.5,0) node{$\ldots$} ; \draw (8.5,0) node{$\ldots$} ; \begin{scope}[xshift=6cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_k$} ; \draw (0,-0.7) node{$e_{k+1}$} ; \draw (-0.5,0.3) node{$f_k$} ; \draw (0.5,0.3) node{$f_{k-1}$} ; \draw (0,-1.4) node{\large $T_k$} ; \draw (0,-1.7) node{\tiny $(2\leqslant k \leqslant p-1)$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $k$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $k-1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) 
node{\tiny $k+1$}; \node[draw,ellipse] (S) at(-0.6,-0.6) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $k$}; \end{scope} \end{scope} \begin{scope}[xshift=11cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_p$} ; \draw (0,-0.7) node{$e_{p+1}$} ; \draw (-0.5,0.3) node{$f_p$} ; \draw (0.5,0.3) node{$f_{p-1}$} ; \draw (0,-1.4) node{\large $T_p$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $p-1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-0.6,-0.6) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $p$}; \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$r$} ; \draw (0,-0.6) node{$v$} ; \draw (-0.5,0.3) node{$s'$} ; \draw (0.5,0.3) node{$g$} ; \draw (0,-1.4) node{\large $U$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \draw[->] (0,0)--(1.732*0.3,-1*0.3); \draw (1.732*0.3,-1*0.3)--(1.732/2,-1/2); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \draw[color=black][->](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black](-1.732/2*0.6,-0.5*0.6) -- (0,0); \draw[color=black](-1.732/4,-0.25) -- (-1.732/2,-0.5); \begin{scope}[xshift=-0.386cm, yshift=-0.223cm, rotate=120, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \begin{scope}[xshift=-0.476cm, yshift=-0.275cm, rotate=120, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=black](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=-0.838cm, yshift=0.544cm, rotate=149, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \begin{scope}[xshift=-0.887cm, yshift=0.461cm, rotate=151, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} 
\end{scope} \begin{scope}[xshift=4cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (1,1) node{$s$} ; \draw (0,-0.6) node{$g$} ; \draw (-0.5,0.3) node{$u$} ; \draw (0.5,0.3) node{$f_p$} ; \draw (0,-1.4) node{\large $V$} ; \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $p$}; \end{scope} \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(-1.732/2,-0.5); \draw[color=black][->](0,0)--(1.732/2*0.6,-0.5*0.6); \draw[color=black](1.732/4,-0.25) -- (1.732/2,-0.5); \draw[color=black](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black][->] (-1.732/2,-0.5) arc (-150:-87:1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \end{scope} \begin{scope}[xshift=8cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (1,1) node{$u$} ; \draw (0,-0.6) node{$v$} ; \draw (-0.5,0.3) node{$e_{p+1}$} ; \draw (0.5,0.3) node{$r$} ; \draw (0,-1.4) node{\large $W$} ; \draw[->](0,0)--(0,0.6); \draw(0,0.6)--(0,1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $p$}; \end{scope} \draw[color=black][->](0,0)--(-1.732/2*0.6,-0.5*0.6); \draw[color=black](-1.732/4,-0.25) -- (-1.732/2,-0.5); \draw[color=black][-<](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \draw[color=black](1.732/2*0.6,-0.5*0.6) -- (0,0); \draw[color=black](1.732/4,-0.25) -- (1.732/2,-0.5); \begin{scope}[xshift=0.386cm, yshift=-0.223cm, rotate=-120, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \begin{scope}[xshift=0.476cm, yshift=-0.275cm, rotate=-120, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \end{scope} \begin{scope}[xshift=12cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$m$} ; \draw (0,-0.6) node{$m$} ; \draw (-0.5,0.3) node{$s$} ; \draw (0.5,0.3) node{$s'$} ; \draw (0,-1.4) node{\large $Z$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black, postaction={on each segment={mid 
arrow =black}}] (0,0)--(-1.732/2,-0.5); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \draw[very thick,color=blue][->](1.732/2,-0.5) arc (-30:-90:1); \draw[very thick,color=blue] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \draw[color=black](1.732/2*0.6,-0.5*0.6) -- (0,0); \draw[color=black](1.732/4,-0.25) -- (1.732/2,-0.5); \begin{scope}[xshift=0.386cm, yshift=-0.223cm, rotate=-120, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \begin{scope}[xshift=0.476cm, yshift=-0.275cm, rotate=-120, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \draw[color=black](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \begin{scope}[xshift=0.838cm, yshift=0.544cm, rotate=-151, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \begin{scope}[xshift=0.887cm, yshift=0.461cm, rotate=-153, scale=0.1] \draw (1,0) -- (0,1) -- (-1,0) -- (1,0); \end{scope} \end{scope} \end{tikzpicture} \caption{An H-triangulation for $(S^3,K_n)$, $n$ even, $n \geqslant 4$, with $p=\frac{n-2}{2}$} \label{fig:H:trig:even} \end{figure} In the H-triangulation of Figure \ref{fig:H:trig:even} there are \begin{itemize} \item $1$ common vertex, \item $p+5 = \frac{n+8}{2}$ edges (simple arrow $\overrightarrow{e_s}$, double white triangle arrow $\overrightarrow{e_d}$, blue simple arrow $\overrightarrow{K_n}$, and the simple arrows $\overrightarrow{e_0}, \ldots, \overrightarrow{e_{p+1}}$ indexed by $0, \ldots p+1$ in circles) \item $2p+8 = n+6$ faces ($e_1, \ldots, e_{p+1}, f_1, \ldots, f_{p},g, m,r,s,s',u,v $), \item $p+4 = \frac{n+6}{2}$ tetrahedra ($T_1, \ldots, T_{p}, U, V, W, Z$) . \end{itemize} Finally, by collapsing the tetrahedron $Z$ (like in the previous section) we obtain the ideal triangulation of $S^3 \setminus K_n$ described in Figure \ref{fig:id:trig:even}. We identified the face $s'$ with $s$ and the white triangle arrow with the arrow circled by $p$. 
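As a quick consistency check on the counts above (not needed in the sequel), note that the Euler characteristic of this cellular decomposition of the closed manifold $S^3$ vanishes, as it should: $$1-(p+5)+(2p+8)-(p+4)=0.$$ Similarly, the ideal triangulation $X_n$ described below has as many edges as tetrahedra (namely $p+3$ of each), as must be the case for an ideal triangulation of a $3$-manifold whose boundary is a single torus.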
\begin{figure}[!h] \begin{tikzpicture} \begin{scope}[xshift=1cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_1$} ; \draw (0,-0.6) node{$e_2$} ; \draw (-0.5,0.3) node{$f_1$} ; \draw (0.5,0.3) node{$e_1$} ; \draw (0,-1.4) node{\large $T_1$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) circle (0.15) node{\scriptsize $2$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $1$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $1$}; \end{scope} \end{scope} \draw (3.5,0) node{$\ldots$} ; \draw (8.5,0) node{$\ldots$} ; \begin{scope}[xshift=6cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_k$} ; \draw (0,-0.7) node{$e_{k+1}$} ; \draw (-0.5,0.3) node{$f_k$} ; \draw (0.5,0.3) node{$f_{k-1}$} ; \draw (0,-1.4) node{\large $T_k$} ; \draw (0,-1.7) node{\tiny $(2\leqslant k \leqslant p-1)$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $k$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $k-1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) node{\tiny $k+1$}; \node[draw,ellipse] (S) at(-0.6,-0.6) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $k$}; \end{scope} \end{scope} \begin{scope}[xshift=11cm,yshift=0cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) 
node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$e_p$} ; \draw (0,-0.7) node{$e_{p+1}$} ; \draw (-0.5,0.3) node{$f_p$} ; \draw (0.5,0.3) node{$f_{p-1}$} ; \draw (0,-1.4) node{\large $T_p$} ; \path [draw=black,postaction={on each segment={mid arrow=black}}] (0,0)--(-1.732/2,-0.5); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(1.732/2,-0.5); \draw[->](1.732/2,-0.5) arc (-30:-90:1); \draw (0,-1) arc (-90:-150:1); \draw[->](0,1) arc (90:30:1); \draw (1.732/2,0.5) arc (30:-30:1); \draw[->](0,1) arc (90:150:1); \draw (-1.732/2,0.5) arc (150:210:1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) node{\tiny $p-1$}; \node[draw,ellipse] (S) at(1.5,1) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-0.6,-0.6) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $p$}; \end{scope} \end{scope} \begin{scope}[xshift=1cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $2$} ; \draw (-1,-0.55) node{\scriptsize $3$} ; \draw (1,1) node{$r$} ; \draw (0,-0.6) node{$v$} ; \draw (-0.5,0.3) node{$s$} ; \draw (0.5,0.3) node{$g$} ; \draw (0,-1.4) node{\large $U$} ; \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \draw[->] (0,0)--(1.732*0.3,-1*0.3); \draw (1.732*0.3,-1*0.3)--(1.732/2,-1/2); \draw[->] (0,0)--(-1.732*0.3,-1*0.3); \draw (-1.732*0.3,-1*0.3)--(-1.732/2,-1/2); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.6) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $p$}; \end{scope} \draw[color=black][->](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \end{scope} \begin{scope}[xshift=6cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (1,1) node{$s$} ; \draw (0,-0.6) node{$g$} ; \draw (-0.5,0.3) node{$u$} ; \draw (0.5,0.3) node{$f_p$} ; \draw (0,-1.4) node{\large $V$} ; \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (1.5,1) circle (0.15) node{\scriptsize $p$}; 
\end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,-1.7) circle (0.15) node{\scriptsize $p$}; \end{scope} \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(-1.732/2,-0.5); \draw[color=black][->](0,0)--(1.732/2*0.6,-0.5*0.6); \draw[color=black](1.732/4,-0.25) -- (1.732/2,-0.5); \draw[color=black](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black][->] (-1.732/2,-0.5) arc (-150:-87:1); \path [draw=black,postaction={on each segment={mid arrow =black}}] (0,0)--(0,1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \end{scope} \begin{scope}[xshift=11cm,yshift=-5cm,rotate=0,scale=1.5] \draw (0,-0.15) node{\scriptsize $0$} ; \draw (0,1.15) node{\scriptsize $1$} ; \draw (1,-0.55) node{\scriptsize $3$} ; \draw (-1,-0.55) node{\scriptsize $2$} ; \draw (1,1) node{$u$} ; \draw (0,-0.6) node{$v$} ; \draw (-0.5,0.3) node{$e_{p+1}$} ; \draw (0.5,0.3) node{$r$} ; \draw (0,-1.4) node{\large $W$} ; \draw[->](0,0)--(0,0.6); \draw(0,0.6)--(0,1); \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.6,-0.6) circle (0.15) node{\scriptsize $p$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-0.6,-0.7) node{\tiny $p+1$}; \node[draw,ellipse] (S) at(-0.6,-0.7) {\ \ \ }; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (-1.5,1) circle (0.15) node{\scriptsize $0$}; \end{scope} \begin{scope}[xshift=0cm,yshift=0cm,rotate=0,scale=1/1.5] \draw (0.2,0.5) circle (0.15) node{\scriptsize $p$}; \end{scope} \draw[color=black][->](0,0)--(-1.732/2*0.6,-0.5*0.6); \draw[color=black](-1.732/4,-0.25) -- (-1.732/2,-0.5); \draw[color=black,<-](1.732/2*0.6,-0.5*0.6) -- (0,0); \draw[color=black](1.732/4,-0.25) -- (1.732/2,-0.5); \draw[color=black][-<](1.732/2,-0.5) arc (-30:-90:1); \draw[color=black] (-1.732/2,-0.5) arc (-150:-87:1); \draw[color=black][->](0,1) arc (90:30:1); \draw[color=black] (1.732/2,0.5) arc (30:-30:1); \draw[color=black][->](0,1) arc (90:150:1); \draw[color=black] (-1.732/2,0.5) arc (150:210:1); \end{scope} \end{tikzpicture} \caption{An ideal triangulation for $S^3 \setminus K_n$, $n$ even, $n \geqslant 4$, with $p=\frac{n-2}{2}$} \label{fig:id:trig:even} \end{figure} In Figure \ref{fig:id:trig:even} there are \begin{itemize} \item $1$ common vertex, \item $p+3 = \frac{n+4}{2}$ edges (simple arrow $\overrightarrow{e_s}$ and the simple arrows $\overrightarrow{e_0}, \ldots, \overrightarrow{e_{p+1}}$ indexed by $0, \ldots p+1$ in circles), \item $2p+6 = n+4$ faces ($e_1, \ldots, e_{p+1}, f_1, \ldots, f_{p}, g,r,s,u,v $), \item $p+3 = \frac{n+4}{2}$ tetrahedra ($T_1, \ldots, T_{p}, U, V, W$). \end{itemize} \begin{remark}\label{rem:K2} When $n=2$, i.e. $p=0$ here, the triangulations of Figures \ref{fig:H:trig:even} and \ref{fig:id:trig:even} are still correct (with the convention $f_0=e_1$), one just needs to stop the previous reasoning at Figure \ref{fig:even:Htriang:bigon:trick} (b) and collapse the bigon $G$ into a segment. In this case, the ideal triangulation $X_2$ of the figure-eight knot complement $S^3 \setminus K_2$ described in Figure \ref{fig:id:trig:even} has three tetrahedra, although it is well-known that this knot complement has Matveev complexity $2$. 
The ideal triangulations of Figures \ref{fig:id:tri:41:complement} and \ref{fig:id:trig:even} are actually related by a Pachner $3-2$ move. \end{remark} \subsection{Gluing equations and proving geometricity}\label{sub:even:geom} \begin{figure}[!h] \centering \begin{tikzpicture}[every path/.style={string ,black}] \begin{scope}[scale=0.7] \draw[color=blue] (-7.5,7.5) node {$3_U$}; \draw (-8,9.6) node {$r$}; \draw (-6.3,6) node {$s$}; \draw (-7.2,4) node {$v$}; \draw[color=brown] (-9.5,9.6) node {$a$}; \draw[color=brown] (-6.3,9.6) node {$b$}; \draw[color=brown] (-6.3,1.5) node {$c$}; \draw[color=blue] (-8.5,3) node {$3_W$}; \draw (-8,0.4) node {$r$}; \draw (-9.8,4.5) node {$u$}; \draw (-8.2,4) node {$v$}; \draw[color=brown] (-9.8,8.5) node {$a$}; \draw[color=brown] (-9.8,0.4) node {$b$}; \draw[color=brown] (-6.4,0.3) node {$c$}; \draw[color=blue] (-4.5,7) node {$3_V$}; \draw (-4.5,8.6) node {$g$}; \draw (-5.7,6) node {$s$}; \draw (-4.6,5) node {$f_p$}; \draw[color=brown] (-5.8,9.5) node {$a$}; \draw[color=brown] (-5.7,1.5) node {$b$}; \draw[color=brown] (-3.5,8) node {$c$}; \draw[color=blue] (-2.6,6.5) node {$3_p$}; \draw (-4.2,4) node {$f_p$}; \draw (-1.6,8) node {$e_{p+1}$}; \draw (-3,5) node {$e_p$}; \draw[color=brown] (-0.7,8.5) node {$a$}; \draw[color=blue] (-0.7,9.05) node {\tiny $2_W$}; \draw (-1.3,8.6) node {\tiny $e_{p+1}$}; \draw (-0.18,9.4) node {\tiny $u$}; \draw (-1.2,9) node {\tiny $v$}; \draw[color=brown] (-0.15,9.65) node {\tiny $a$}; \draw[color=brown] (-0.15,9.1) node {\tiny $b$}; \draw[color=brown] (-2,8.5) node {\tiny $c$}; \draw[color=blue] (-2.7,9) node {$2_U$}; \draw (-4,9) node {$g$}; \draw (-3,9.7) node {$r$}; \draw (-1.7,9.1) node {$v$}; \draw[color=brown] (-0.7,9.8) node {$a$}; \draw[color=brown] (-3,8.4) node {$b$}; \draw[color=brown] (-4.9,9.6) node {$c$}; \draw[color=blue] (-3.9,2.3) node {\tiny $3_{p-1}$}; \draw (-3,4) node {\tiny $e_p$}; \draw (-3.45,3) node {\tiny $e_{p-1}$}; \draw (-4.35,1.7) node {\tiny $f_{p-1}$}; \draw[color=brown] (-1.2,6.95) node {\tiny $a$}; \draw[color=blue] (-3.4,1.2) node {$2_{p}$}; \draw (-2,1.4) node {\tiny $e_p$}; \draw (-3,0.7) node {\tiny $e_{p+1}$}; \draw (-4.2,0.9) node {\tiny $f_{p-1}$}; \draw[color=brown] (-1,1.05) node {\tiny $a$}; \draw[color=blue] (-1,0.4) node {$1_W$}; \draw (-2,0.15) node {\tiny $r$}; \draw (-2.8,0.3) node {\tiny $e_{p+1}$}; \draw (-0.3,0.5) node {\tiny $u$}; \draw[color=brown] (-4.2,0.15) node {\tiny $a$}; \draw[color=brown] (-.2,.2) node {\tiny $b$}; \draw[color=brown] (-.2,.75) node {\tiny $c$}; \draw[color=blue] (-0.7,4) node {$2_{1}$}; \draw (-0.5,5.9) node {\tiny $e_1$}; \draw (-0.3,4.7) node {\tiny $e_1$}; \draw (-0.5,3) node {\tiny $e_2$}; \draw[color=brown] (-0.2,2) node {\tiny $a$}; \draw[color=blue] (-1.4,4.5) node {$3_1$}; \draw (-1,5.7) node {\tiny $e_1$}; \draw (-1.35,5) node {\tiny $e_2$}; \draw (-1.65,3.9) node {\tiny $f_{1}$}; \draw[color=brown] (-0.65,7) node {\tiny $a$}; \draw[color=blue] (-1.7,2.9) node {$2_2$}; \draw (-1.3,2.3) node {\tiny $e_3$}; \draw (-1,2.8) node {\tiny $e_2$}; \draw (-1.5,3.45) node {\tiny $f_{1}$}; \draw[color=brown] (-0.6,1.8) node {\tiny $a$}; \draw[color=blue] (8.5,7.5) node {$2_V$}; \draw (16-8,9.6) node {$s$}; \draw (16-6.3,6) node {$u$}; \draw (16-7.2,4) node {$g$}; \draw[color=brown] (6.5,9.6) node {$a$}; \draw[color=brown] (9.7,1.5) node {$b$}; \draw[color=brown] (9.7,9.6) node {$c$}; \draw[color=blue] (8,1.8) node {$1_U$}; \draw (16-8,0.4) node {$s$}; \draw (16-9.8,4) node {$r$}; \draw (16-8.2,4) node {$g$}; \draw[color=brown] (9.5,0.4) node {$a$}; 
\draw[color=brown] (6.3,.4) node {$b$}; \draw[color=brown] (6.3,8.5) node {$c$}; \draw[color=blue] (5.1,3.2) node {$0_W$}; \draw (5,1.4) node {$v$}; \draw (5.7,4) node {$r$}; \draw (5.3,5) node {$e_{p+1}$}; \draw[color=brown] (5.8,8.5) node {$a$}; \draw[color=brown] (4.4,2.1) node {$b$}; \draw[color=brown] (5.7,.7) node {$c$}; \draw[color=blue] (3.5,1) node {$0_U$}; \draw (4.5,1) node {$v$}; \draw (3,0.3) node {$s$}; \draw (2.5,0.9) node {$g$}; \draw[color=brown] (1,0.2) node {$a$}; \draw[color=brown] (4,1.6) node {$b$}; \draw[color=brown] (5.4,.2) node {$c$}; \draw[color=blue] (3.5,3.5) node {$0_p$}; \draw (4.2,5) node {$e_{p+1}$}; \draw (2,2) node {$f_p$}; \draw (3,4.5) node {$f_{p-1}$}; \draw[color=brown] (1,1.7) node {$a$}; \draw[color=blue] (3.3,8.5) node {$1_p$}; \draw (4.5,9.1) node {$e_p$}; \draw (3.4,9.3) node {$f_p$}; \draw (2.1,8.8) node {$f_{p-1}$}; \draw[color=brown] (1,8.9) node {$a$}; \draw[color=blue] (4,7.7) node {\tiny $0_{p-1}$}; \draw (4.5,8.45) node {\tiny $e_p$}; \draw (3,6.5) node {\tiny $f_{p-2}$}; \draw (4,7.2) node {\tiny $f_{p-1}$}; \draw[color=brown] (1.8,4) node {\tiny $a$}; \draw[color=blue] (0.7,0.8) node {$0_{V}$}; \draw (0.2,0.6) node {\tiny $u$}; \draw (2,1.3) node {\tiny $f_p$}; \draw (1.35,0.85) node {\tiny $g$}; \draw[color=brown] (0.2,0.9) node {\tiny $a$}; \draw[color=brown] (.15,.25) node {\tiny $b$}; \draw[color=brown] (2.6,1.5) node {\tiny $c$}; \draw[color=blue] (1,9.6) node {$1_{V}$}; \draw (0.2,9.5) node {\tiny $u$}; \draw (1.8,9.5) node {\tiny $f_p$}; \draw (2,9.85) node {\tiny $s$}; \draw[color=brown] (0.2,9.2) node {\tiny $a$}; \draw[color=brown] (4,9.85) node {\tiny $b$}; \draw[color=brown] (.2,9.8) node {\tiny $c$}; \draw[color=blue] (0.5,6) node {$1_{1}$}; \draw (0.25,4.8) node {\tiny $e_1$}; \draw (0.5,4) node {\tiny $e_1$}; \draw (0.5,7) node {\tiny $f_1$}; \draw[color=brown] (0.15,8.1) node {\tiny $a$}; \draw[color=blue] (1.8,7.1) node {$1_{2}$}; \draw (1.6,6.6) node {\tiny $e_2$}; \draw (1.3,7.6) node {\tiny $f_2$}; \draw (1.1,7) node {\tiny $f_1$}; \draw[color=brown] (0.5,8.3) node {\tiny $a$}; \draw[color=blue] (1.45,5.5) node {$0_1$}; \draw (1.8,6.2) node {\tiny $e_2$}; \draw (1.1,4.5) node {\tiny $e_1$}; \draw (1.4,5) node {\tiny $f_1$}; \draw[color=brown] (0.65,3) node {\tiny $a$}; \draw (0,0)--(6,0)--(10,0)--(10,10)--(6,10)--(0,10)--(-3,8)--(-6,10)--(-10,10)--(-10,0)--(-6,0)--(0,0)--(4,2)--(6,0)--(6,10)--(10,0); \draw (-10,10)--(-6,0)--(-6,10); \draw (-6,0)--(-3,8)--(0,9)--(0,10); \draw (-6,0)--(0,9)--(6,10)--(4,2)--(0,1)--(0,0)--(-6,0)--(0,1)--(6,10); \draw (-6,0)--(-3.6,2)--(0,1)--(-2.4,3)--(-1.2,4)--(0,1)--(0,9)--(-1.2,4); \draw (-3.6,2)--(0,9)--(-2.4,3); \draw (0,9)--(1.2,6)--(0,1)--(2.4,7)--(0,9)--(3.6,8)--(0,1); \draw (1.2,6)--(2.4,7); \draw (3.6,8)--(6,10); \draw (0,10)--(-6,10); \draw (3-0.12,7.5-0.1) node[shape=circle,fill=black,scale=0.2] {}; \draw (3,7.5) node[shape=circle,fill=black,scale=0.2] {}; \draw (3+0.12,7.5+0.1) node[shape=circle,fill=black,scale=0.2] {}; \draw (-3-0.12,2.5-0.1) node[shape=circle,fill=black,scale=0.2] {}; \draw (-3,2.5) node[shape=circle,fill=black,scale=0.2] {}; \draw (-3+0.12,2.5+0.1) node[shape=circle,fill=black,scale=0.2] {}; \draw[color=violet,style=dashed,->] (7,0)--(7,5); \draw[color=violet,style=dashed, very thick] (7,5)--(7,10); \draw[color=violet] (7.7,1.1) node {\scriptsize $m_{X_n}$}; \draw[color=teal,style=dashed, very thick,->] (-10,4)--(-8.5,2); \draw[color=teal,style=dashed, very thick] (-8.5,2)--(-7,0); \draw[color=teal,style=dashed, very thick,->] (-7,10)--(-6,9)--(-5.5,9); 
\draw[color=teal,style=dashed, very thick] (-5.5,9)--(-5,9.3)--(1.3,9.3)--(1.5,10); \draw[color=teal,style=dashed, very thick,->] (1.5,0)--(2,.4)--(6,1)--(8,2.5) ; \draw[color=teal,style=dashed, very thick] (8,2.5)--(10,4) ; \draw[color=teal] (-8.5,1.5) node {\scriptsize $l_{X_n}$}; \draw[color=red, very thick,->] (-10,0)--(-8,0); \draw[color=red, very thick] (-8,0)--(-6,0); \draw[color=red] (-8,-.5) node {(i)}; \draw[color=red, very thick,->] (-6,0)--(-6,5); \draw[color=red, very thick] (-6,5)--(-6,10); \draw[color=red] (-5.4,5) node {(ii)}; \draw[color=red, very thick,->] (-6,10)--(-3,10); \draw[color=red, very thick] (-3,10)--(0,10); \draw[color=red] (-3,10.5) node {(iii)}; \draw[color=red, very thick,->] (0,0)--(3,0); \draw[color=red, very thick] (3,0)--(6,0); \draw[color=red] (3,-.5) node {(iv)}; \draw[color=red, very thick,->] (6,0)--(6,6); \draw[color=red, very thick] (6,6)--(6,10); \draw[color=red] (5.5,6) node {(v)}; \draw[color=red, very thick,->] (6,10)--(8,10); \draw[color=red, very thick] (8,10)--(10,10); \draw[color=red] (8,10.5) node {(vi)}; \end{scope} \end{tikzpicture} \caption{Triangulation of the boundary torus for the truncation of $X_n$, $n$ even, with angles (brown), meridian curve $m_{X_n}$ (violet, dashed), longitude curve $l_{X_n}$ (green, dashed) and preferred longitude curve (i)$\cup \ldots \cup$(vi) (red).}\label{fig:trig:cusp:even} \end{figure} As in Section \ref{sub:complete:odd}, we constructed in Figure \ref{fig:trig:cusp:even} a triangulation of the boundary torus $\partial \nu(K_n)$ from the datum in Figure \ref{fig:id:trig:even}. Here for the positive tetrahedra $T_1, \ldots, T_p$ we only indicated the brown $a$ angles for readability (the $b$ and $c$ follow clockwise). We also drew a meridian curve $m_{X_n}$ in violet and dashed, a longitude curve $l_{X_n}$ in green and dashed, and a preferred longitude curve (i)$\cup \ldots \cup$(vi) in red (one can check it is indeed a preferred longitude in Figure \ref{fig:longitude:even}). \begin{figure}[!h] \includegraphics[scale=1.5]{EvenLongitude.pdf} \caption{A preferred longitude (i) $\cup \ldots \cup$ (vi) (in red) for the even twist knot $K_n$, seen in $S^3 \setminus K_n$ (left) and on the truncated tetrahedron $U$ (right).}\label{fig:longitude:even} \end{figure} Let us now list the angular and complex weight functions associated to edges of $X_n$. 
For $\alpha=(a_1,b_1,c_1,\ldots,a_p,b_p,c_p,a_U,b_U,c_U,a_V,b_V,c_V,a_W,b_W,c_W) \in \mathcal{S}_{X_n}$ a shape structure on $X_n$, we compute the weights of each edge: \begin{itemize} \item $\omega_s(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_s})= 2 a_U+b_V+c_V+a_W+b_W $ \item $\omega_0(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_0})= 2 a_1 + c_1 + 2 a_2 + \ldots + 2 a_p + a_V+c_W $ \item $\omega_1(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_1})= 2b_1+c_2 $ \\ \vspace*{-2mm} \item $\omega_k(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_k})= c_{k-1}+2b_k+c_{k+1} $ \ \ (for $2\leqslant k \leqslant p-1$) \\ \vspace*{-2mm} \item $\omega_p(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_p})= c_{p-1}+2b_p+b_U+2c_U+a_V+b_V+a_W+c_W$ \item $\omega_{p+1}(\alpha):= \omega_{X_n,\alpha}(\overrightarrow{e_{p+1}})= c_p+b_U+c_V+b_W$ \end{itemize} For a complex shape structure $\widetilde{\mathbf{z}}=(z_1,\ldots,z_p,z_U,z_V,z_W) \in (\mathbb{R}+i\mathbb{R}_{>0})^{p+3}$, its complex weight functions are: \begin{itemize} \item $\omega^{\mathbb{C}}_s(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_s})= 2\mathrm{Log}(z_U) + \mathrm{Log}(z'_V) + \mathrm{Log}(z''_V) + \mathrm{Log}(z_W) + \mathrm{Log}(z'_W) $ \item $\omega^{\mathbb{C}}_0(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_0})= 2\mathrm{Log}(z_1) + \mathrm{Log}(z'_1) + 2\mathrm{Log}(z_2) + \cdots + 2\mathrm{Log}(z_p) + \mathrm{Log}(z_V) + \mathrm{Log}(z''_W) $ \item $\omega^{\mathbb{C}}_1(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_1})= 2\mathrm{Log}(z''_1) + \mathrm{Log}(z'_2) $ \\ \vspace*{-2mm} \item $\omega^{\mathbb{C}}_k(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_k})= \mathrm{Log}(z'_{k-1}) + 2\mathrm{Log}(z''_k) + \mathrm{Log}(z'_{k+1}) $ \ \ (for $2\leqslant k \leqslant p-1$) \\ \vspace*{-2mm} \item $\omega^{\mathbb{C}}_p(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_p})= \mathrm{Log}(z'_{p-1}) + 2\mathrm{Log}(z''_p) + 2\mathrm{Log}(z'_U) + \mathrm{Log}(z''_U) + \mathrm{Log}(z_V) + \mathrm{Log}(z'_V) + \mathrm{Log}(z_W) + \mathrm{Log}(z''_W)$ \item $\omega^{\mathbb{C}} _{p+1}(\widetilde{\mathbf{z}}):= \omega^{\mathbb{C}}_{X_n,\alpha}(\overrightarrow{e_{p+1}})= \mathrm{Log}(z'_p) + \mathrm{Log}(z''_U) + \mathrm{Log}(z''_V) + \mathrm{Log}(z'_W) $ \end{itemize} To the meridian curve $m_{X_n}$ and the longitude curve $l_{X_n}$ are associated angular holonomies $$m_{X_n}(\alpha):=a_V-a_U, \ \ \ l_{X_n}(\alpha):=2(a_W-b_V),$$ and one possible complex completeness equation is once again (from the meridian curve): $$\mathrm{Log}(z_U)-\mathrm{Log}(z_V)=0.$$ Furthermore, one can again see in Figure \ref{fig:trig:cusp:even} that in the homology group of the boundary torus, we have the relation $$ \mathrm{(i)} \cup \ldots \cup \mathrm{(vi)} = l_{X_n} + 2 m_{X_n}.$$ Using properties of shape structures, we see that the balancing conditions are equivalent to the following $p+2$ equations: \begin{itemize} \item $E_s(\alpha): \ 2a_U + b_V + c_V + a_W + b_W = 2\pi$ \item $E_1(\alpha): \ 2b_1+c_2 = 2 \pi$ \\ \vspace*{-2mm} \item $E_k(\alpha): \ c_{k-1}+2b_k+c_{k+1} = 2 \pi $ \quad (for $2\leqslant k \leqslant p-1$) \\ \vspace*{-2mm} \item $E_p(\alpha): \ c_{p-1}+2b_p + b_U + 2c_U + a_V + b_V + a_W + c_W = 2\pi$ \item $E_{p+1} (\alpha): \ c_p + b_U + c_V + b_W = 2\pi$ \end{itemize} The missing $(p+3)$-rd equation, stating that the angles around the vertices of degree $2p+3$ in 
Figure~\ref{fig:trig:cusp:even} add up to $2\pi$, is redundant: summed with all of the above, it becomes simply that the sum of all angles is $(p+3)\pi$. \begin{theorem} \label{thm:appendix:geom:even} $X_n$ is geometric for $n \geq 2$ even. \end{theorem} \begin{proof} We begin by treating the case of $n\geq 6$, i.e.\ $p\geq 2$. First we show that the space of positive angle structures is nonempty. For small enough $\epsilon>0$, the values $$ \begin{pmatrix} a_j \\ b_j \\ c_j \end{pmatrix} := \begin{pmatrix} \epsilon \\ \pi - \epsilon(j^2+1) \\ \epsilon j^2 \end{pmatrix}\text{for } 1\leq j \leq p-1, \quad \begin{pmatrix} a_p \\ b_p \\ c_p \end{pmatrix} := \begin{pmatrix} 3\pi/4 - \epsilon (p^2+2p-1)/2 \\ ~\,\pi/4 - \epsilon (p^2-2p+1)/2 \\ \epsilon p^2 \end{pmatrix}, $$ $$ \begin{pmatrix} a_U \\ b_U \\ c_U \end{pmatrix} = \begin{pmatrix} a_V \\ c_V \\ b_V \end{pmatrix} = \begin{pmatrix} c_W \\ b_W \\ a_W \end{pmatrix} := \begin{pmatrix} \pi/4 + \epsilon p^2/2 \\ 2\pi/3 - \epsilon p^2/3 \\ \pi/12 - \epsilon p^2/6 \end{pmatrix} $$ give a positive solution to $E_s,E_1, \dots, E_{p+1}$. Next, we claim that among the volume maximizers, there is one such that $U,V,W$ have identical angles modulo the permutation used in the formula above. Let $F_j$ denote the constraint $a_j+b_j+c_j=\pi$. The angles of $U,V,W$ appear only in equations $E_s, E_p, E_{p+1}$. These can be rewritten $$\begin{array}{r|l} E_{p+1} & c_p + (b_U+ c_V+b_W ) =2 \pi \\ 3E_p + 2 E_s - (3F_U+2F_V+2F_W) & 3 c_{p-1} + 6 b_p + (a_U + a_V + c_W) + 3 (c_U+b_V+a_W)= 3 \pi\\ E_s - (F_V + F_W) & 2a_U = a_V + c_W. \end{array}$$ The involution $(a_V, b_V, c_V) \leftrightarrow (c_W,a_W,b_W)$ preserves these equations, so by concavity of the volume function, there is a maximizer such that $(a_V, b_V, c_V)=(c_W,a_W,b_W)$. The last of the $3$ equations above then gives $a_U=a_V=c_W$. The order-3 substitution of variables $$(a_U, b_U, c_U) \rightarrow (a_V, c_V, b_V) \rightarrow (c_W, b_W, a_W) \rightarrow (a_U, b_U, c_U)$$ then clearly leaves the other two equations unchanged, so by concavity we may average out and find a maximizer such that $(a_U, b_U, c_U)=(a_V, c_V, b_V)=(c_W, b_W, a_W)$, as desired. These identifications make $E_s$ redundant. Moreover, dropping the angles of $V$ and $W$ as variables, we may now rewrite the system of constraints as \begin{itemize} \item $E_1 : \ 2b_1+c_2 = 2 \pi$ \vspace{2mm} \item $E_k : \ c_{k-1}+2b_k+c_{k+1} = 2 \pi $ \quad (for $2\leqslant k \leqslant p-1$) \vspace{2mm} \item $E'_p : \ c_{p-1}+2b_p + a_U + 3c_U = \pi$ \quad (not $2\pi$!) \item $E'_{p+1} : \ c_p + 3b_U = 2\pi$ \end{itemize} Recall from Lemma \ref{lem:flat:taut} that at a volume maximizer, if $a_j b_j c_j=0$ then $a_j, b_j, c_j$ are $0,0,\pi$ up to order. \begin{lemma} \label{lem:girafe} At a volume maximizer, if $a_k b_k c_k=0$ then $k=p$ and $(a_p, b_p, c_p)=(0,0,\pi)$. \end{lemma} \begin{proof} First, $E'_{p+1}$ gives $b_U=(2\pi-c_p)/3\in[\pi/3, 2\pi/3]$ so the tetrahedron $U$ is nondegenerate. \noindent $\bullet$ Let us show by induction on $1 \leq k \leq p-1$ that $b_k > 0$. By $E_1$ we have $b_1=\pi-c_2/2 \geq \pi/2$, giving the case $k=1$. For the induction step, suppose $2\leq k \leq p-1$ and $b_{k-1}>0$. Then $c_{k-1}<\pi$, which by $E_k$ implies that $b_k>0$. \noindent $\bullet$ Let us now show by \emph{descending} induction on $p-1 \geq k \geq 1$ that $b_k < \pi$. For the initialisation, suppose $(a_{p-1},b_{p-1}, c_{p-1})=(0,\pi,0)$ and aim for a contradiction. 
Recall that $p\geq 2$: by $E_{p-1}$ we have $c_p=0$, hence $b_U=2\pi/3$ by $E'_{p+1}$. But $c_p=0$ also implies $b_p\in \{0,\pi\}$, hence $b_p=0$ by $E'_p$. Together with $c_{p-1}=0$, by $E'_p$ this yields $a_U+3c_U=\pi$. But we showed that $b_U=2\pi/3$, hence $(a_U, b_U, c_U)=(0,2\pi/3, \pi/3)$, a forbidden configuration. This contradiction shows $b_{p-1}<\pi$. For the (downward) induction step, suppose $p-2 \geq k \geq 1$ and $b_{k+1}<\pi$. Actually $0<b_{k+1}<\pi$ (previous bullet-point), hence $0<c_{k+1}$: by $E_k$, this implies $b_k<\pi$. \noindent $\bullet$ It only remains to rule out $c_p=0$. Note that the non-negative sequence $(0, c_1, \dots, c_p)$ is convex, because $E_k$ can be rewritten $c_{k-1} - 2c_k + c_{k+1} = 2 a_k \geq 0$ (agreeing that ``$c_0$'' stands for $0$). But we showed $0<b_{p-1}<\pi$: hence, $c_{p-1}>0$ which entails $c_p\geq \frac{p}{p-1} c_{p-1} > 0$. \end{proof} We can now prove that the volume maximizer has only positive angles. By the above lemma, if not, then we may assume $(a_p, b_p, c_p)=(0,0,\pi)$ and that all other tetrahedra are nondegenerate. We will exhibit a smooth path of deformations of the angles, along which the derivative of the volume is positive. (As a function of the angles, the volume of an ideal tetrahedron is not smooth near the point $(0,0,\pi)$, but it has a well-defined derivative in the direction of any segment.) Using $E_{p-1}, E'_p, E'_{p+1}$, it is straightforward to check that the angles satisfy \begin{equation} \label{zebre} \begin{pmatrix} a_{p-1} & a_{p} & a_U \\ b_{p-1} & b_{p} & b_U \\ c_{p-1} & c_{p} & c_U \end{pmatrix} = \begin{pmatrix} (\pi+c_{p-2}-2c_{p-1})/2 & 0 & (\pi+c_{p-1})/2 \\ (\pi-c_{p-2})/2 & 0 & \pi/3 \\ c_{p-1} & \pi & \pi/6-c_{p-1}/2 \end{pmatrix}. \end{equation} For small $t>0$, the $t$-deformation given by $(a^t_k,b^t_k,c^t_k)=(a_k,b_k,c_k)$ for $1\leq k \leq p-2$ and $$ \begin{pmatrix} a^t_{p-1} & a^t_{p} & a^t_U \\ b^t_{p-1} & b^t_{p} & b^t_U \\ c^t_{p-1} & c^t_{p} & c^t_U \end{pmatrix} = \begin{pmatrix} a_{p-1} & 0 & a_U \\ b_{p-1} & 0 & b_U \\ c_{p-1} & \pi & c_U \end{pmatrix} +t \begin{pmatrix} -1 & 2 & - 1 \\ 1 & 0 & 2/3 \\ 0 & - 2 & 1/3 \end{pmatrix} $$ is still an angle structure, i.e.\ satisfies $E_1,\dots, E_{p-1}, E'_p, E'_{p+1}$. Indeed, each column of the deformation matrix sums to $0$, the derivatives of $E_{p-1}$, $E'_p$ and $E'_{p+1}$ along the path are respectively $2\dot{b}_{p-1}+\dot{c}_p=2-2=0$, $\dot{a}_U+3\dot{c}_U=-1+1=0$ and $\dot{c}_p+3\dot{b}_U=-2+2=0$, while the remaining equations only involve angles that stay constant (in particular $\dot{c}_{p-1}=0$). By definition of the volume functional $\mathcal{V}$ (Section~\ref{sub:volume}), we have for this deformation \begin{equation} \label{eq:slope} \left .\mathrm{exp} \left ( \frac{-\partial \mathcal{V}} {\partial t} \right )\right |_{t=0} = \frac{\sin (b_{p-1})}{\sin (a_{p-1})} \frac{\sin^2(b_U) \sin(c_U)}{\sin^3(a_U)}. \end{equation} Each factor $\sin(\theta)$ appears to the power $\partial \theta/\partial t$, but tripled for $\theta=a_U, b_U, c_U$ because there are 3 isometric copies of the tetrahedron~$U$. The $p$-th tetrahedron stays flat, hence does not contribute volume. The formula for $c_U$ in~\eqref{zebre} gives $0\leq c_{p-1} \leq \pi/3$. We proved in the lemma above that $(0,c_1,\dots, c_p)$ is convex, hence~\eqref{zebre} also yields $a_{p-1} \in [\pi/6, \pi/2]$. Therefore, $$\frac{\sin (b_{p-1})}{\sin (a_{p-1})} \leq \frac {1}{\sin (\pi/6)}=2.$$ On the other hand, still using~\eqref{zebre}, $$\frac{\sin^2(b_U) \sin(c_U)}{\sin^3(a_U)} = \frac{3}{4} \frac{\sin (\pi/6-c_{p-1}/2)}{\sin^3(\pi/2+c_{p-1}/2)} \leq \frac{3}{4} \frac{\sin(\pi/6)}{\sin^3(\pi/2)} = \frac{3}{8}$$ by an easy monotonicity argument for $c_{p-1}$ ranging over $[0,\pi/3]$.
In conclusion,~\eqref{eq:slope} is bounded above by $2\cdot 3/8<1$, hence $(\partial \mathcal{V} / \partial t)_{t=0^+}>0$ as desired. Thus, the volume maximizer is interior to the space of angle structures. By Theorem~\ref{thm:casson:rivin}, this implies Theorem~\ref{thm:appendix:geom:even} for $p\geq 2$. It only remains to discuss $p=0,1$. \noindent $\bullet$ For $p=1$ we find the initial gluing equations $$ \begin{array}{lrcl} E_s : & 2a_U + b_V + c_V + a_W + b_W &=& 2\pi \\ E_1 : & 2b_1 + b_U + 2c_U + a_V + b_V + a_W + c_W &=& 2\pi \\ E_2 : & c_1 + b_U + c_V + b_W &=& 2\pi \end{array} $$ (only the term ``$c_{p-1}$'' has disappeared from $E_1$). Symmetry between $U,V,W$ can be argued as in the $p\geq 2$ case, reducing the above to $$ \begin{array}{lrcl} E'_1 : & 2b_1 + a_U + 3c_U &=& \pi \\ E'_2 : & c_1 + 3b_U &=& 2\pi. \end{array}$$ The tetrahedron $U$ is not flat, as $b_U=(2\pi-c_1)/3\in[\pi/3, 2\pi/3]$. If $c_1=0$ then $b_1\in\{0,\pi\}$ must be $0$ by $E'_1$, hence $(a_U, b_U, c_U)=(0,2\pi/3, \pi/3)$ which is prohibited. If $c_1=\pi$ then $$ \begin{pmatrix} a_1 & a_U \\ b_1 & b_U \\ c_1 & c_U \end{pmatrix} = \begin{pmatrix} 0 & \pi/2 \\ 0 & \pi/3 \\ \pi & \pi/6 \end{pmatrix} \text{ can be perturbed by adding } ~ t \begin{pmatrix} 2 & -1 \\ 0 & 2/3 \\ - 2 & 1/3 \end{pmatrix} $$ (where $0<t\ll 1$) to produce a path of angle structures, yielding as before $$\left . \mathrm{exp} \left ( \frac{-\partial \mathcal{V}} {\partial t} \right ) \right |_{t=0} = \frac{\sin^2(b_U) \sin(c_U)}{\sin^3(a_U)} = \frac{3}{8}<1. $$ \noindent $\bullet$ For $p=0$ it is straightforward to check that $(a_U, b_U, c_U)=(a_V, c_V, b_V)=(c_W, b_W, a_W)=(\pi/6, 2\pi/3, \pi/6)$ yields the complete hyperbolic metric (this is actually the result of a $2\rightarrow 3$ Pachner move on the standard triangulation of the figure eight knot complement into two regular ideal tetrahedra). Theorem~\ref{thm:appendix:geom:even} is proved. \end{proof} \subsection{Computation of the partition functions}\label{sub:even:tqft} The following theorem is the version of Theorem \ref{thm:part:func} for even $n$. Note that here $\mu_{X_n}(\alpha)=-m_{X_n}(\alpha)$ and once again $\lambda_{X_n}(\alpha)=l_{X_n}(\alpha)+2m_{X_n}(\alpha)$ corresponds to a preferred longitude. \begin{theorem}\label{thm:even:part:func} Let $n$ be a positive even integer and $p=\frac{n-2}{2}$. Consider the ideal triangulation $X_n$ of $S^3\setminus K_n$ described in Figure \ref{fig:id:trig:even}. 
Then for all angle structures $\alpha =(a_1,\ldots,c_W)\in \mathcal{A}_{X_n}$ and all $\hbar>0$, we have: \begin{equation*} \mathcal{Z}_{\hbar}(X_n,\alpha) \stackrel{\star}{=} \int_{\mathbb{R}+i \frac{\mu_{X_n}(\alpha) }{2\pi \sqrt{\hbar}} } J_{X_n}(\hbar,x) e^{\frac{1}{2 \sqrt{\hbar}} x \lambda_{X_n}(\alpha)} dx, \end{equation*} with \begin{itemize} \item the degree one angle polynomial $\mu_{X_n}\colon\alpha\mapsto a_U- a_V$, \item the degree one angle polynomial $\lambda_{X_n}\colon\alpha\mapsto 2(a_V-a_U+a_W-b_V)$, \item the map \begin{equation*} J_{X_n}\colon(\hbar,x)\mapsto \int_{\mathcal{Y}'} d\mathbf{y}' \ e^{2 i \pi \mathbf{y'}^T Q_n \mathbf{y'}} e^{2 i \pi x(y'_U- y'_W-x)} e^{ \frac{1}{\sqrt{\hbar}} (\mathbf{y'}^T \mathcal{W}_n - \pi x) } \dfrac{ \Phi_\mathsf{b}\left (x - y'_U \right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right )\Phi_\mathsf{b}\left (y'_U\right ) } , \end{equation*} where $$\mathcal{Y}'=\mathcal{Y}'_{\hbar,\alpha} = \left (\prod_{k=1,\ldots,p,U}\left (\mathbb{R} - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_k)\right ) \right ) \times \left (\mathbb{R} + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_W)\right ),$$ \begin{equation*} \mathbf{y'}=\begin{bmatrix} y'_1 \\ \vdots \\ y'_p \\ y'_U \\y'_W \end{bmatrix}, \quad \mathcal{W}_n=\begin{bmatrix}-2p\pi \\ \vdots \\ -2 \pi \left ( k p - \frac{k(k-1)}{2}\right ) \\ \vdots \\ -p(p+1)\pi \\ -(p^2+p+3)\pi \\ \pi\end{bmatrix} \quad \text{ and } \quad Q_n=\begin{bmatrix} 1 & 1 & \cdots & 1 & 1 & 0 \\ 1 & 2 & \cdots & 2 & 2 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 1 & 2 & \cdots & p & p & 0 \\ 1 & 2 & \cdots & p & p+1 & -\frac{1}{2} \\ 0 & 0 & \cdots & 0 & -\frac{1}{2} & 0 \end{bmatrix}. \end{equation*} \end{itemize} \end{theorem} \begin{proof} Since the computations are very similar to those of the proof of Theorem \ref{thm:part:func} we will not give all the details. Let $n \geq 2$ be an even integer and set $p=\frac{n-2}{2}$. As before, we denote by $\widetilde{\mathbf{t}}=(t_1,\ldots,t_{p-1},t_p,t_U,t_V,t_W)^T \in \mathbb{R}^{X_n^3}$ the vector whose coordinates are associated to the tetrahedra, and by $\mathbf{x}=(e_1,\dots,e_p,e_{p+1},f_1,\ldots, f_p,v,r,s,g,u)^T \in \mathbb{R}^{X_n^2}$ the face variables vector.
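For concreteness, in the lowest case $p=1$ (that is, $n=4$) the matrix $Q_n$ and the vector $\mathcal{W}_n$ appearing in Theorem \ref{thm:even:part:func} reduce to $$ Q_4=\begin{bmatrix} 1 & 1 & 0 \\ 1 & 2 & -\frac{1}{2} \\ 0 & -\frac{1}{2} & 0 \end{bmatrix}, \qquad \mathcal{W}_4=\begin{bmatrix} -2\pi \\ -5\pi \\ \pi \end{bmatrix}, $$ with rows and columns indexed by $(y'_1,y'_U,y'_W)$.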
Like in Lemma \ref{lem:kin:odd}, we compute $ \mathcal{K}_{X_n}\left (\mathbf{\widetilde{t}}\right ) = \frac{1}{|\det(A_e)|} e^{ 2 i \pi \mathbf{\widetilde{t}}^T (-R_e A_e^{-1} B) \mathbf{\widetilde{t}}}$, where $B$ is like in the proof of Lemma \ref{lem:kin:odd}, but $R_e, A_e$ (\textit{e} standing for \textit{even}) are given by $$ R_e=\kbordermatrix{ \mbox{} & e_1 & \dots & e_p &e_{p+1} & \omit\vrule &f_1 & \ldots & f_p &\omit\vrule & v & r & s & g & u \\ t_1 & 1 & &\push{\low{0}} & \omit\vrule & & & &\omit\vrule & & & & & \\ \vdots & &\ddots & & & \omit\vrule & & 0 & &\omit\vrule & & & 0 & & \\ t_{p} &\push{0} & 1 & & \omit\vrule & & & &\omit\vrule & & & & & \\ \cline{1-1} \cline{2-15} t_U & & & & & \omit\vrule & & & &\omit\vrule &0 &1 &0 & 0 & 0 \\ t_V & &\push{0} & & \omit\vrule & & 0 & &\omit\vrule &0 &0 &-1 & 0 & 0 \\ t_W & & & & & \omit\vrule & & & &\omit\vrule &0 &0 &0 & 0 &-1 }, $$ $$ A_e=\kbordermatrix{ \mbox{} & e_1 & e_2 & \dots &e_p & e_{p+1} & \omit\vrule &f_1 &f_{2} & \ldots & f_p &\omit\vrule & v & r & s & g & u \\ w_1 & 1 & -1 & & & & \omit\vrule & 1 & & & &\omit\vrule & & & & & \\ \vdots & & \ddots & \ddots & \push{0} & \omit\vrule & &\ddots &\push{0} &\omit\vrule & & & \low{0} & & \\ \vdots & \push{0} & \ddots &\ddots & & \omit\vrule & \push{0} &\ddots & &\omit\vrule & & & & & \\ w_{p} & & & & 1 & -1 & \omit\vrule & & & &1 &\omit\vrule & & & & & \\ \cline{1-1} \cline{2-17} w_{U} & & & & & & \omit\vrule & & & &0 &\omit\vrule & -1 & 1 & 1 & 0 & 0 \\ w_{V} & & & 0 & & & \omit\vrule & & & &1 &\omit\vrule & 0 & 0 & 1 & -1 & 0 \\ w_{W} & & & & & & \omit\vrule & & & &0 &\omit\vrule & -1 & 1 & 0 & 0 & 1 \\ \cline{1-1} \cline{2-17} w'_1 & -1 & & & & & \omit\vrule & 1 & & & &\omit\vrule & & & & & \\ \vdots & & & & & & \omit\vrule & -1 & \ddots & \push{0} &\omit\vrule & & & \low{0} & & \\ \vdots & & & & & & \omit\vrule & & \ddots & \ddots & &\omit\vrule & & & & & \\ w'_{p} & & & & & & \omit\vrule & \push{0} &-1 &1 &\omit\vrule & & & & & \\ \cline{1-1} \cline{2-17} w'_{U} & & & & & 0 & \omit\vrule & & & &0 &\omit\vrule & 0 & 0 & 1 & -1 & 0 \\ w'_{V} & & & & & 0 & \omit\vrule & & & &1 &\omit\vrule & 0 & 0 & 0 & 0 & -1 \\ w'_{W} & & & & & -1 & \omit\vrule & & & &0 &\omit\vrule & 0 & 1 & 0 & 0 & 0 }.$$ Careful computation yields that $\det(A_e)=-1$ and that $A_e^{-1}$ is equal to $$ A_e^{-1}=\kbordermatrix{ \mbox{} & w_{1} & w_{2} & \ldots & w_{p-1} & w_{p} & \omit\vrule & w_{U} & w_{V} & w_{W} & \omit\vrule & w'_{1} & w'_{2} & \ldots & w'_{p-1} & w'_{p} & \omit\vrule & w'_{U} & w'_{V} & w'_{W} \\ e_{1} & 0 & & \cdots & & 0 & \omit\vrule & 0 & 1 & 0 & \omit\vrule & -1 & -1 & \push{\cdots} & -1 & \omit\vrule & -1 & 0 & 0 \\ e_{2} & -1 & 0 & & & & \omit\vrule & 0 & 2 & 0 & \omit\vrule & -1 & -2 & \push{\cdots} & -2 & \omit\vrule & -2 & 0 & 0 \\ \low{\vdots} & -1 & -1 & \ddots & & \vdots & \omit\vrule & & & & \omit\vrule & & & \ddots & & \vdots & \omit\vrule & & & \\ & \vdots & & \ddots & 0 & 0 & \omit\vrule & \vdots & \vdots & \vdots & \omit\vrule & \vdots & \vdots & &\text{\tiny {\(1-p\)}} & \text{\tiny {\(1-p\)}} & \omit\vrule & \vdots & \vdots & \vdots \\ e_{p} & & & & -1 & 0 & \omit\vrule & & & & \omit\vrule & & & & \text{\tiny {\(1-p\)}} & -p & \omit\vrule & & & \\ e_{p+1} & -1 & & \cdots & & -1 & \omit\vrule & 0 & \text{\tiny {\(p+1\)}} & 0 & \omit\vrule & -1 & -2 &\cdots & \text{\tiny {\(1-p\)}} & -p & \omit\vrule & \text{\tiny {\(-p-1\)}} & 0 & 0 \\ \cline{1-1} \cline{2-20} f_{1} & & & & & & \omit\vrule & 0 & 1 & 0 & \omit\vrule & 0 & -1 & \push{\cdots} & -1 & \omit\vrule & -1 
& 0 & 0 \\ f_{2} & & & & & & \omit\vrule & & 1 & & \omit\vrule & 0 &0 & \ddots & & \low{\vdots} & \omit\vrule & -1 & & \\ \vdots & & & 0 & & & \omit\vrule & \vdots & \vdots & \vdots & \omit\vrule & \vdots & & \ddots & -1 & -1 & \omit\vrule & \vdots & \vdots & \vdots \\ f_{p-1} & & & & & & \omit\vrule & & & & \omit\vrule & 0 & & & 0 & -1 & \omit\vrule & & & \\ f_{p} & & & & & & \omit\vrule & 0 & 1 & 0 & \omit\vrule & 0 & & \cdots & & 0 & \omit\vrule & -1 & 0 & 0 \\ \cline{1-1} \cline{2-20} v & -1 & & \cdots & & -1 & \omit\vrule & 0 & \text{\tiny {\(p+2\)}} & -1 & \omit\vrule & -1 & -2 & \push{\cdots} & -p & \omit\vrule & \text{\tiny {\(-p-2\)}} & -1 & 1 \\ r & -1 & & \cdots & & -1 & \omit\vrule & 0 & \text{\tiny {\(p+1\)}} & 0 & \omit\vrule & -1 & -2 & \push{\cdots} & -p & \omit\vrule & \text{\tiny {\(-p-1\)}} & 0 & 1 \\ s & & & & & & \omit\vrule & 1 & 1 & -1 & \omit\vrule & & & & & & \omit\vrule & -1 & -1 & 0 \\ g & & & 0 & & & \omit\vrule & 1 & 1 & -1 & \omit\vrule & & & 0 & & & \omit\vrule & -2 & -1 & 0 \\ u & & & & & & \omit\vrule & 0 & 1 & 0 & \omit\vrule & & & & & & \omit\vrule & -1 & -1 & 0 }. $$ Hence $\mathcal{K}_{X_n}(\widetilde{\mathbf{t}})= \exp\left (2 i \pi \widetilde{\mathbf{t}}^T \widetilde{Q}_n \widetilde{\mathbf{t}} \right )$, where $$ \widetilde{Q}_n:= \frac{(-R_e A_e^{-1} B)+(-R_e A_e^{-1} B)^T}{2}= \kbordermatrix{ \mbox{} &t_1 &t_2 &\cdots & t_{p-1} & t_p & \omit\vrule & t_U & t_V & t_W \\ t_1 & 1 & 1 & \cdots & 1 & 1 & \omit\vrule & 1 & 0 & 0 \\ t_2 & 1 & 2 & \cdots & 2 & 2 & \omit\vrule & 2 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \omit\vrule & \vdots & \vdots & \vdots \\ t_{p-1} & 1 & 2 & \cdots & p-1 & p-1 & \omit\vrule & p-1 & 0 & 0 \\ t_p & 1 & 2 & \cdots & p-1 & p & \omit\vrule & p & 0 & 0 \\ \cline{1-1} \cline{2-10} t_U & 1 & 2 & \cdots & p-1 & p & \omit\vrule & p+1 & -1/2 & -1 \\ t_V & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & -1/2 & -1 & -1/2 \\ t_W & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & -1 & -1/2 & 0 }. $$ Now, like in Lemma \ref{lem:2QGamma+C}, if we denote $\widetilde{C}(\alpha) = (c_1,\ldots,c_W)^T$, and $\widetilde{\Gamma}(\alpha) := (a_1-\pi,\ldots, a_p-\pi,a_U-\pi,\pi-a_V,\pi-a_W)^T$, then (indexing entries by $k\in\{1,\ldots,p+3\}$) we can compute: $ 2\widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha) =$ $$ \renewcommand{\kbldelim}{(} \renewcommand{\kbrdelim}{)} \kbordermatrix{ \mbox{} & \\ k=1 & \vdots\\ \vdots & \hspace{6mm} k(\omega_{s}(\alpha) -2(p+2) \pi) + \sum_{j=1}^{k}j \omega_{k-j}(\alpha) \\ k=p & \vdots \\ \cline{1-1} \cline{2-2} & \omega_{s}(\alpha)- \omega_{p+1}(\alpha) + \left ( p(\omega_{s}(\alpha) -2(p+2) \pi) + \sum_{j=1}^{p}j \omega_{p-j}(\alpha) \right ) -4 \pi + \frac{1}{2} \lambda_{X_n}(\alpha) \\ & \frac{1}{2}\lambda_{X_n}(\alpha) - \pi \\ & 3 \pi - \omega_{s}(\alpha) }, $$ where $\lambda_{X_n}(\alpha)=2(-a_U+a_V-b_V+a_W)$. Notably we have for all angle structures $\alpha \in \mathcal{A}_{X_n}$: $$ 2\widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha) = \: \renewcommand{\kbldelim}{(} \renewcommand{\kbrdelim}{)} \kbordermatrix{ \mbox{} & \\ k=1 & \vdots\\ \vdots & -2 \pi \left ( k p - \dfrac{k(k-1)}{2}\right ) \\ k=p & \vdots \\ \cline{1-1} \cline{2-2} & -(p^2+p+4)\pi + \frac{1}{2}\lambda_{X_n}(\alpha) \\ & \frac{1}{2}\lambda_{X_n}(\alpha) - \pi \\ & \pi }. $$ The above computations are fairly quick consequences of the similarities between the matrices $\widetilde{Q}_n$ and the weights $\omega_j(\alpha)$ whether $n$ is odd or even. 
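As a quick consistency check, the last entry of $2\widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha)$ can be recovered directly: the row of $\widetilde{Q}_n$ associated to $t_W$ has entries $-1$, $-\frac{1}{2}$, $0$ in the columns $t_U$, $t_V$, $t_W$ and zeroes elsewhere, so this entry equals $$ -2(a_U-\pi)-(\pi-a_V)+c_W = \pi - 2a_U + a_V + c_W = 3\pi - \omega_{s}(\alpha), $$ where the last equality uses $a_V+b_V+c_V=a_W+b_W+c_W=\pi$; it indeed reduces to $\pi$ when $\omega_{s}(\alpha)=2\pi$.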
Denote again $\alpha=(a_1,b_1,c_1,\ldots,a_W,b_W,c_W)$ a general vector of dihedral angles in $\mathcal{A}_{X_n}$. Let $\hbar>0$. Since the tetrahedron $T_U$ is of positive sign here, the dynamical content $\mathcal{D}_{\hbar,X_n}(\widetilde{\mathbf{t}},\alpha)$ thus becomes \[ e^{\frac{1}{\sqrt{\hbar}} \widetilde{C}(\alpha)^T \widetilde{\mathbf{t}}} \dfrac{ \Phi_\mathsf{b}\left (t_V + \frac{i}{2 \pi \sqrt{\hbar}}(\pi-a_V)\right ) \Phi_\mathsf{b}\left (t_W + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_W)\right ) }{ \Phi_\mathsf{b}\left (t_1 - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_1)\right ) \cdots \Phi_\mathsf{b}\left (t_p - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_p)\right ) \Phi_\mathsf{b}\left (t_U - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_U)\right )}.\] According to tetrahedra signs, we do the following change of variables: \begin{itemize} \item $y'_k = t_k - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_k)$ for $k \in \{1,\ldots,p,U\}$, \item $y'_l = t_l + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_l)$ for $l\in\{V,W\}$, \end{itemize} and we define $\widetilde{\mathbf{y'}}:=\left (y'_1, \ldots, y'_{p}, y'_U, y'_V, y'_W\right )^T$. We also denote \[ \mathcal{Y}'_{\hbar, \alpha} := \prod_{k=1, \ldots, p, U}\left (\mathbb{R} - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_k)\right ) \times \prod_{l=V,W} \left (\mathbb{R} + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_l)\right ). \] After computations similar to the ones in the proof of Theorem \ref{thm:part:func}, we obtain: \begin{equation*} \mathcal{Z}_{\hbar}(X_n,\alpha) \stackrel{\star}{=} \int_{\mathbf{\widetilde{y}'} \in \widetilde{\mathcal{Y}}'_{\hbar,\alpha}} d\mathbf{\widetilde{y}'} e^{ 2 i \pi \mathbf{\widetilde{y}}^{\prime T} \widetilde{Q}_n \mathbf{\widetilde{y}'} + \frac{1}{\sqrt{\hbar}} \left ( 2\widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha)\right )^T \mathbf{\widetilde{y}'} } \dfrac{ \Phi_\mathsf{b}\left (y'_V\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) \Phi_\mathsf{b}\left (y'_U\right ) }, \end{equation*} We define a new variable $x:= y'_U + y'_V$ living in the set $$\mathcal{Y}^0_{\hbar,\alpha}=\mathbb{R} + \frac{i}{2 \pi \sqrt{\hbar}} (a_U-a_V),$$ and we also define $ \mathbf{y'}$ (respectively $\mathcal{Y}' _{\hbar,\alpha}$) exactly like $ \widetilde{\mathbf{y}'}$ (respectively $ \widetilde{\mathcal{Y}}'_{\hbar,\alpha}$) but with the second-to-last coordinate (corresponding to $y_V$) taken out. We also define \begin{equation*} \mathcal{W}_{n}= \begin{bmatrix}\mathcal{W}_{n,1} \\ \vdots \\ \mathcal{W}_{n,k} \\ \vdots \\ \mathcal{W}_{n,p} \\ \mathcal{W}_{n,U} \\ \mathcal{W}_{n,W} \end{bmatrix} := \begin{bmatrix}-2p\pi \\ \vdots \\ -2 \pi \left ( k p - \frac{k(k-1)}{2}\right ) \\ \vdots \\ -p(p+1)\pi \\ -(p^2+p+3)\pi \\ \pi\end{bmatrix} \qquad \text{ and } \qquad Q_n=\begin{bmatrix} 1 & 1 & \cdots & 1 & 1 & 0 \\ 1 & 2 & \cdots & 2 & 2 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 1 & 2 & \cdots & p & p & 0 \\ 1 & 2 & \cdots & p & p+1 & -\frac{1}{2} \\ 0 & 0 & \cdots & 0 & -\frac{1}{2} & 0 \end{bmatrix}. \end{equation*} This time, $Q_n$ is obtained from $\widetilde{Q}_n$ by replacing the two rows corresponding to $y_U$ and $y_V$ with their difference (row of $y_U$ minus row of $y_V$), and by replacing the two columns corresponding to $y_U$ and $y_V$ with their difference. 
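For instance, this operation reproduces the entries of $Q_n$ involving $y_U$: $$ (Q_n)_{U,U}=(p+1)-(-\tfrac{1}{2})-(-\tfrac{1}{2})+(-1)=p+1, \qquad (Q_n)_{U,W}=-1-(-\tfrac{1}{2})=-\tfrac{1}{2}, $$ and $(Q_n)_{k,U}=k-0=k$ for $1 \leqslant k \leqslant p$, in agreement with the matrix $Q_n$ of Theorem \ref{thm:even:part:func}.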
We now use the substitution $y'_V = x - y'_U$ and we compute \begin{align*} 2 i \pi \widetilde{\mathbf{y}}^{\prime T} \widetilde{Q}_n \widetilde{\mathbf{y}'} &= 2 i \pi \left ( (\mathbf{y'}^T Q_n \mathbf{y'} - (p+1) {y'_U}^2 + y'_U y'_W) + (p+1){y'_U}^2 - y'_U y'_V - 2 y'_U y'_W - {y'_V}^2 - y'_V y'_W \right ) \\ &= 2 i \pi \left (\mathbf{y'}^T Q_n \mathbf{y'} + x y'_U -x y'_W -x^2\right ), \end{align*} and $\frac{1}{\sqrt{\hbar}} \left ( 2\widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha)\right )^T \widetilde{\mathbf{y}'} = \frac{1}{\sqrt{\hbar}} \left (\mathcal{W}_n^T \mathbf{y'} +x (\frac{1}{2}\lambda_{X_n}(\alpha)-\pi)\right )$, thus \begin{align*} &\mathcal{Z}_{\hbar}(X_n,\alpha) \stackrel{\star}{=} \int_{\mathbf{\widetilde{y}'} \in \widetilde{\mathcal{Y}}'_{\hbar,\alpha}} d\mathbf{\widetilde{y}'} e^{ 2 i \pi \mathbf{\widetilde{y}}^{\prime T} \widetilde{Q}_n \mathbf{\widetilde{y}'} + \frac{1}{\sqrt{\hbar}}\left ( 2\widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha)\right )^T \mathbf{\widetilde{y}'} } \dfrac{ \Phi_\mathsf{b}\left (y'_V\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) \Phi_\mathsf{b}\left (y'_U\right ) } \\ &\stackrel{\star}{=} \int dx d\mathbf{y'} e^{2 i \pi \left (\mathbf{y'}^T Q_n \mathbf{y'}+x(y'_U -y'_W-x)\right )+ \frac{1}{\sqrt{\hbar}} \left (\mathcal{W}_n^T \mathbf{y'} +x (\frac{1}{2}\lambda_{X_n}(\alpha)-\pi)\right ) } \dfrac{ \Phi_\mathsf{b}\left (x-y'_U\right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) \Phi_\mathsf{b}\left (y'_U\right ) } , \end{align*} where the variables $(\mathbf{y'},x)$ in the last integral lie in $\mathcal{Y}'_{\hbar,\alpha} \times \mathcal{Y}^0_{\hbar,\alpha}$. The theorem follows. \end{proof} We now state the counterpart of Corollary \ref{cor:part:func}, which is proven in exactly the same way. \begin{corollary}\label{cor:even:part:func} Let $n$ be a positive even integer, $p=\frac{n-2}{2}$ and $X_n$ the ideal triangulation of $S^3\setminus K_n$ from Figure \ref{fig:id:trig:even}. Then for all angle structures $\alpha \in \mathcal{A}_{X_n}$ and all $\hbar>0$, we have: \begin{equation*} \mathcal{Z}_{\hbar}(X_n,\alpha) \stackrel{\star}{=} \int_{\mathbb{R}+i \mu_{X_n}(\alpha) } \mathfrak{J}_{X_n}(\hbar,\mathsf{x}) e^{\frac{1}{4 \pi \hbar} \mathsf{x} \lambda_{X_n}(\alpha)} d\mathsf{x}, \end{equation*} with the map \begin{equation*} \mathfrak{J}_{X_n}\colon(\hbar,\mathsf{x})\mapsto \left ( \frac{1}{2 \pi \sqrt{\hbar}} \right )^{p+3} \int_{\mathcal{Y}_\alpha} d\mathbf{y} \ e^{\frac { i \mathbf{y}^T Q_n \mathbf{y} + i \mathsf{x}(y_U-y_W-\mathsf{x}) + \mathbf{y}^T \mathcal{W}_n - \pi \mathsf{x} } {2 \pi \hbar} } \dfrac{ \Phi_\mathsf{b}\left ( \frac{\mathsf{x}-y_U}{2 \pi \sqrt{\hbar}} \right ) \Phi_\mathsf{b}\left ( \frac{y_W}{2 \pi \sqrt{\hbar}} \right ) }{ \Phi_\mathsf{b}\left (\frac{y_1}{2 \pi \sqrt{\hbar}}\right ) \cdots \Phi_\mathsf{b}\left (\frac{y_p}{2 \pi \sqrt{\hbar}}\right ) \Phi_\mathsf{b}\left ( \frac{y_U}{2 \pi \sqrt{\hbar}} \right ) } , \end{equation*} where $\mu_{X_n},\lambda_{X_n}, \mathcal{W}_n, Q_n$ are the same as in Theorem \ref{thm:even:part:func}, and $$\mathcal{Y}_\alpha = \left ( \prod_{k=1, \ldots,p,U}\left (\mathbb{R} - i (\pi - a_k)\right ) \right ) \times \left (\mathbb{R} + i (\pi - a_W)\right ).$$ \end{corollary} \begin{proof} Exactly similar to the proof of Corollary \ref{cor:part:func}. 
\end{proof} We finally come to H-triangulations for even twist knots. Again, before stating Theorem \ref{thm:part:func:Htrig:even}, we compute the weights on each edge of the H-triangulation $Y_n$ given in Figure \ref{fig:H:trig:even} (for $n \geqslant 3$ even). We use exactly the same notations as in the odd case. We denote by $\overrightarrow{e_0}, \ldots, \overrightarrow{e_{p+1}}, \overrightarrow{e_s}, \overrightarrow{e_d}, \overrightarrow{K_n} \in (Y_n)^1$ the $p+5$ edges in $Y_n$ respectively represented in Figure \ref{fig:H:trig:even} by arrows with circled $0$, \ldots, circled $p+1$, simple arrow, double arrow and blue simple arrow. For $\alpha=(a_1,b_1,c_1,\ldots,a_p,b_p,c_p,a_U,b_U,c_U,a_V,b_V,c_V,a_W,b_W,c_W,a_Z,b_Z,c_Z) \in \mathcal{S}_{Y_n}$ a shape structure on $Y_n$, the weights of each edge are given by: \begin{itemize} \item $\widehat{\omega}_s(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_s})= 2 a_U+b_V+c_V+a_W+b_W+a_Z $ \item $\widehat{\omega}_d(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_d})= b_U+c_U+c_W+b_Z+c_Z $ \item $\omega_0(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_0})= 2 a_1 + c_1 + 2 a_2 + \ldots + 2 a_p + a_V+c_W $ \item $\omega_1(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_1})= 2b_1+c_2 $ \\ \vspace*{-2mm} \item $\omega_k(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_k})= c_{k-1}+2b_k+c_{k+1} $ \ \ (for $2\leqslant k \leqslant p-1$) \\ \vspace*{-2mm} \item $\widehat{\omega}_p(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_p})= c_{p-1}+2b_p+c_U+a_V+b_V+a_W+b_Z+c_Z$ \item $\omega_{p+1}(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{e_{p+1}})= c_p+b_U+c_V+b_W $ \item $\widehat{\omega}_{\overrightarrow{K_n}}(\alpha):= \omega_{Y_n,\alpha}(\overrightarrow{K_n})= a_Z $ \end{itemize} We can now compute the partition function for the H-triangulations $Y_n$ ($n$ even), and prove the following theorem. As for the odd case, we will denote by $\mathcal{S}_{Y_n \backslash Z}$ the space of shape structures on every tetrahedron of $Y_n$ except for $Z$. \begin{theorem}\label{thm:part:func:Htrig:even} Let $n$ be a positive even integer and $p=\frac{n-2}{2}$. Consider the one-vertex H-triangulation $Y_n$ of the pair $(S^3,K_n)$ described in Figure \ref{fig:H:trig:even}. Then for every $\hbar>0$ and for every $\tau\in \mathcal{S}_{Y_n \setminus Z} \times \overline{\mathcal{S}_Z}$ such that $\omega_{Y_n,\tau}$ vanishes on $\overrightarrow{K_n}$ and is equal to $2\pi$ on every other edge, one has \begin{equation*} \underset{\tiny \begin{matrix}\alpha \to \tau \\ \alpha \in \mathcal{S}_{Y_n}\end{matrix}}{\lim} \Phi_{\mathsf{b}}\left( \frac{\pi-\omega_{Y_n,\alpha}\left (\overrightarrow{K_n}\right )}{2\pi i \sqrt{\hbar}} \right) \mathcal{Z}_{\hbar}(Y_n,\alpha) \stackrel{\star}{=} J_{X_n}(\hbar,0), \end{equation*} where $J_{X_n}$ is defined in Theorem \ref{thm:even:part:func}. \end{theorem} \begin{proof} Let $n$ be an even integer and $p=\frac{n-2}{2}$. The proof is similar to the odd case and will be separated into three steps: computing the partition function $\mathcal{Z}_{\hbar}(Y_n,\alpha)$, applying the dominated convergence theorem for $\alpha \to \tau$ and finally retrieving the value $J_{X_n}(\hbar,0)$ at $\alpha =\tau$. \textit{Step 1. Computing the partition function $\mathcal{Z}_{\hbar}(Y_n,\alpha)$.} Like in the proof of Theorem \ref{thm:even:part:func} we start by computing the kinematical kernel.
We denote $\widehat{\mathbf{t}}=(t_1,\ldots,t_p,t_U,t_V,t_W,t_Z) \in \mathbb{R}^{Y_n^3}$ and $ {\widehat{\mathbf{x}}=(e_1,\ldots,e_p,e_{p+1},f_1,\ldots,f_p,v,r,s,s',g,u,m) \in \mathbb{R}^{Y_n^2}}$. Like in the proof of Theorem \ref{thm:part:func:Htrig:odd}, using Figure \ref{fig:H:trig:even}, we compute $$ \mathcal{K}_{Y_n}\left (\mathbf{\widehat{t}}\right ) = \int_{\widehat{\mathbf{x}} \in \mathbb{R}^{Y_n^{2}}} d\widehat{\mathbf{x}} \int_{\widehat{\mathbf{w}} \in \mathbb{R}^{2 (p+4)}} d\widehat{\mathbf{w}} \ e^{ 2 i \pi \mathbf{\widehat{t}}^T \widehat{S_e} \widehat{\mathbf{x}}} e^{ -2 i \pi \widehat{\mathbf{w}}^T \widehat{H_e} \widehat{\mathbf{x}}} e^{ -2 i \pi \widehat{\mathbf{w}}^T \widehat{D} \mathbf{\widehat{t}}}, $$ where $\widehat{D}$ is like in proof of Theorem \ref{thm:part:func:Htrig:odd}, whereas the matrices $\widehat{S_e}$ and $\widehat{H_e}$ are given by: $$ \widehat{S_e}=\kbordermatrix{ \mbox{} & e_1 & \dots & e_p &e_{p+1} & \omit\vrule &f_1 & \ldots & f_p &\omit\vrule & v & r & s & s' & g & u & m \\ t_1 & 1 & &\push{\low{0}} & \omit\vrule & & & &\omit\vrule & & & & & & & \\ \vdots & &\ddots & & & \omit\vrule & & 0 & &\omit\vrule & & & & 0 & & & \\ t_{p} &\push{0} & 1 & & \omit\vrule & & & &\omit\vrule & & & & & & & \\ \cline{1-1} \cline{2-17} t_U & & & & & \omit\vrule & & & &\omit\vrule &0 &1 &0 & 0 & 0 &0 & 0 \\ t_V & &\push{0} & & \omit\vrule & & 0 & &\omit\vrule &0 &0 &-1 & 0 & 0 & 0 & 0 \\ t_W & & & & & \omit\vrule & & & &\omit\vrule &0 &0 &0 & 0 & 0 & -1 & 0 \\ t_Z & & & & & \omit\vrule & & & &\omit\vrule &0 &0 &0 & 0 & 0 & 0 & 1 }, $$ $$ \widehat{H_e}=\kbordermatrix{ \mbox{} & e_1 & e_2 & \dots &e_p & e_{p+1} & \omit\vrule &f_1 &f_{2} & \ldots & f_p &\omit\vrule & v & r & s & s' & g & u & m \\ w_1 & 1 & -1 & & & & \omit\vrule & 1 & & & &\omit\vrule & & & & & & & \\ \vdots & & \ddots & \ddots & \push{0} & \omit\vrule & &\ddots &\push{0} &\omit\vrule & & & & \low{0} & & & \\ \vdots & \push{0} & \ddots &\ddots & & \omit\vrule & \push{0} &\ddots & &\omit\vrule & & & & & & & \\ w_{p} & & & & 1 & -1 & \omit\vrule & & & &1 &\omit\vrule & & & & & & & \\ \cline{1-1} \cline{2-19} w_{U} & & & & & & \omit\vrule & & & &0 &\omit\vrule & -1 & 1 & 0 & 1 & 0 & 0 & 0 \\ w_{V} & & & \low{0} & & & \omit\vrule & & & &1 &\omit\vrule & 0 & 0 & 1 & 0 & -1 & 0 & 0 \\ w_{W} & & & & & & \omit\vrule & & & &0 &\omit\vrule & -1 & 1 & 0 & 0 & 0 & 1 & 0 \\ w_{Z} & & & & & & \omit\vrule & & & &0 &\omit\vrule & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \cline{1-1} \cline{2-19} w'_1 & -1 & & & & & \omit\vrule & 1 & & & &\omit\vrule & & & & & & & \\ \vdots & & & & & & \omit\vrule & -1 & \ddots & \push{0} &\omit\vrule & & & & \low{0} & & & \\ \vdots & & & & & & \omit\vrule & & \ddots & \ddots & &\omit\vrule & & & & & & & \\ w'_{p} & & & & & & \omit\vrule & \push{0} &-1 &1 &\omit\vrule & & & & & & & \\ \cline{1-1} \cline{2-19} w'_{U} & & & & & 0 & \omit\vrule & & & & 0 &\omit\vrule & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\ w'_{V} & & & & & 0 & \omit\vrule & & & & 1 &\omit\vrule & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ w'_{W} & & & & & -1 & \omit\vrule & & & & 0 &\omit\vrule & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ w'_{Z} & & & & & 0 & \omit\vrule & & & & 0 &\omit\vrule & 0 & 0 & 1 & -1 & 0 & 0 & 0 }.$$ Like in the odd case, let us define $S_e$ the submatrix of $\widehat{S_e}$ without the $m$-column, $H_e$ the submatrix of $\widehat{H_e}$ without the $m$-column and the $w_V$-row, $R_{e,V}$ this very $w_V$-row of $\widehat{H_e}$, $D$ the submatrix of $\widehat{D}$ without the $w_V$-row, $\mathbf{x}$ the subvector of $\widehat{\mathbf{x}}$ without 
the variable $m$ and $\mathbf{w}$ the subvector of $\widehat{\mathbf{w}}$ without the variable $w_V$. We remark that $H_e$ is invertible and $\det(H_e)=-1$. Hence, by using multi-dimensional Fourier transform and the integral definition of the Dirac delta function like in the odd case, we compute $$ \mathcal{K}_{Y_n}\left (\mathbf{\widehat{t}}\right ) = \delta(-t_Z) e^{2i \pi \widehat{\mathbf{t}}^T (-S_e H_e^{-1} D) \mathbf{\widehat{t}}} \delta (-R_{e,V} H_e^{-1} D \mathbf{\widehat{t}}).$$ We can now compute $H_e^{-1}=$ $$ \kbordermatrix{ \mbox{} & w_{1} & w_{2} & \ldots & w_{p-1} & w_{p} & \omit\vrule & w_{U} & w_{W} & w_{Z} & \omit\vrule & w'_{1} & w'_{2} & \ldots & w'_{p-1} & w'_{p} & \omit\vrule & w'_{U} & w'_{V} & w'_{W} & w'_{Z} \\ e_{1} & 0 & & \cdots & & 0 & \omit\vrule & -1 & 1 & 1 & \omit\vrule & -1 & -1 & \push{\cdots} & -1 & \omit\vrule & 0 & 1 & 0 & -1 \\ e_{2} & -1 & 0 & & & & \omit\vrule & -2 & 2 & 2 & \omit\vrule & -1 & -2 & \push{\cdots} & -2 & \omit\vrule & 0 & 2 & 0 & -2 \\ \low{\vdots} & -1 & -1 & \ddots & & \vdots & \omit\vrule & & & & \omit\vrule & & & \ddots & & \vdots & \omit\vrule & & & & \\ & \vdots & & \ddots & 0 & 0 & \omit\vrule & \vdots & \vdots & \vdots & \omit\vrule & \vdots & \vdots & &\text{\tiny {\(1-p\)}} & \text{\tiny {\(1-p\)}} & \omit\vrule & \vdots & \vdots & \vdots & \vdots \\ e_{p} & & & & -1 & 0 & \omit\vrule & & & & \omit\vrule & & & & \text{\tiny {\(1-p\)}} & -p & \omit\vrule & & & & \\ e_{p+1} & -1 & & \cdots & & -1 & \omit\vrule & \text{\tiny {\(-p-1\)}} & \text{\tiny {\(p+1\)}} & \text{\tiny {\(p+1\)}} & \omit\vrule & -1 & -2 &\cdots & \text{\tiny {\(1-p\)}} & -p & \omit\vrule & 0 & \text{\tiny {\(p+1\)}} & 0 & \text{\tiny {\(-p-1\)}} \\ \cline{1-1} \cline{2-21} f_{1} & & & & & & \omit\vrule & -1 & 1 & 1 & \omit\vrule & 0 & -1 & \push{\cdots} & -1 & \omit\vrule & 0 & 1 & 0 & -1 \\ f_{2} & & & & & & \omit\vrule & & & & \omit\vrule & 0 &0 & \ddots & & \low{\vdots} & \omit\vrule & & & & \\ \vdots & & & 0 & & & \omit\vrule & \vdots & \vdots & \vdots & \omit\vrule & \vdots & & \ddots & -1 & -1 & \omit\vrule & \vdots & \vdots & \vdots & \vdots \\ f_{p-1} & & & & & & \omit\vrule & & & & \omit\vrule & 0 & & & 0 & -1 & \omit\vrule & & & & \\ f_{p} & & & & & & \omit\vrule & -1 & 1 & 1 & \omit\vrule & 0 & & \cdots & & 0 & \omit\vrule & 0 & 1 & 0 & -1 \\ \cline{1-1} \cline{2-21} v & -1 & & \cdots & & -1 & \omit\vrule & \text{\tiny {\(-p-2\)}} & \text{\tiny {\(p+1\)}} &\text{\tiny {\(p+2\)}} & \omit\vrule & -1 & -2 & \push{\cdots} & -p & \omit\vrule & 0 & \text{\tiny {\(p+1\)}} & 1 & \text{\tiny {\(-p-2\)}} \\ r & -1 & & \cdots & & -1 & \omit\vrule & \text{\tiny {\(-p-1\)}} & \text{\tiny {\(p+1\)}} &\text{\tiny {\(p+1\)}} & \omit\vrule & -1 & -2 & \push{\cdots} & -p & \omit\vrule & 0 & \text{\tiny {\(p+1\)}} & 1 & \text{\tiny {\(-p-1\)}} \\ s & & & & & & \omit\vrule & 0 & 0 & 1 & \omit\vrule & & & & & & \omit\vrule & 0 & 0 & 0 & 0 \\ s' & & & \low{0} & & & \omit\vrule & 0 & 0 & 1 & \omit\vrule & & & \low{0} & & & \omit\vrule & 0 & 0 & 0 & -1 \\ g & & & & & & \omit\vrule & 0 & 0 & 1 & \omit\vrule & & & & & & \omit\vrule & -1 & 0 & 0 & -1 \\ u & & & & & & \omit\vrule & -1 & 1 & 1 & \omit\vrule & & & & & & \omit\vrule & 0 & 0 & 0 & -1 },$$ and thus find that $-R_{e,V} H_e^{-1} D \mathbf{\widehat{t}}=-t_U-t_V$ and $$-S_e H_e^{-1} D= \kbordermatrix{ \mbox{} &t_1 &t_2 &\cdots & t_{p-1} & t_p & \omit\vrule & t_U & t_V & t_W &t_Z \\ t_1 & 1 & 1 & \cdots & 1 & 1 & \omit\vrule & 0 & -1 & 0 & 1 \\ t_2 & 1 & 2 & \cdots & 2 & 2 & \omit\vrule & 0 & -2 & 0 & 2 \\ \vdots 
& \vdots & \vdots & \ddots & \vdots & \vdots & \omit\vrule & \vdots & \vdots & \vdots & \vdots \\ t_{p-1} & 1 & 2 & \cdots & p-1 & p-1 & \omit\vrule & 0 & -(p-1) & 0 & p-1 \\ t_p & 1 & 2 & \cdots & p-1 & p & \omit\vrule & 0 & -p & 0 & p \\ \cline{1-1} \cline{2-11} t_U & 1 & 2 & \cdots & p-1 & p & \omit\vrule & 0 & -(p+1) & -1 & p+1 \\ t_V & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & 0 & 0 & 0 & 0 \\ t_W & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & 0 & 0 & 0 & -1 \\ t_Z & 0 & 0 & \cdots & 0 & 0 & \omit\vrule & 0 & 0 & 0 & 0 }.$$ Since $$\widehat{\mathbf{t}}^T (-S_e H_e^{-1} D) \mathbf{\widehat{t}} = \mathbf{t}^T Q_n \mathbf{t} + (-t_U-t_V)(t_1+\ldots+p t_p+(p+1)t_U) +t_Z(t_1+\ldots+pt_p+(p+1)t_U-t_W),$$ where $\mathbf{t}=(t_1,\ldots,t_p,t_U,t_W)$ and $Q_n$ is defined in Theorem \ref{thm:even:part:func}, we conclude that the kinematical kernel can be written as \[ \mathcal{K}_{Y_n}(\mathbf{\widehat{t}})= e^{2 i \pi \left( \mathbf{t}^T Q_n \mathbf{t}-t_W t_Z +(t_Z - t_U-t_V)(t_1 + \cdots + p t_p + (p+1) t_U) \right)} \delta(t_Z) \delta(-t_U - t_V). \] We now compute the dynamical content. We denote by $\alpha=(a_1,b_1,c_1,\ldots,a_W,b_W,c_W,a_Z,b_Z,c_Z)$ a general vector in $\mathcal{S}_{Y_n}$. Let $\hbar>0$. The dynamical content $\mathcal{D}_{\hbar,Y_n}(\mathbf{\widehat{t}},\alpha)$ is equal to: \[ e^{\frac{1}{\sqrt{\hbar}} \widehat{C}(\alpha)^T \mathbf{\widehat{t}}} \dfrac{ \Phi_\mathsf{b}\left (t_V + \frac{i}{2 \pi \sqrt{\hbar}}(\pi-a_V)\right ) \Phi_\mathsf{b}\left (t_W + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_W)\right ) }{ \prod_{k=1}^p \Phi_\mathsf{b}\left (t_k - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_k)\right ) \Phi_\mathsf{b}\left (t_U - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_U)\right ) \Phi_\mathsf{b}\left (t_Z - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_Z)\right ) }, \] where $\widehat{C}(\alpha) = (c_1, \ldots, c_p,c_U,c_V, c_W, c_Z)^T$. Let us come back to the computation of the partition function of the Teichm\"uller TQFT. We begin by integrating over the variables $t_V$ and $t_Z$, which amounts to removing the two Dirac delta functions $\delta(-t_Z)$ and $\delta(-t_U -t_V)$ in the kinematical kernel and replacing $t_Z$ by $0$ and $t_V$ by $-t_U$ in the other terms. Therefore, we have $$ \Phi_{\mathsf{b}}\left( \frac{\pi-a_Z}{2\pi i \sqrt{\hbar}} \right) \mathcal{Z}_{\hbar}(Y_n,\alpha) \stackrel{\star}{=} \int_{\mathbf{t}\in\mathbb{R}^{p+2}} d\mathbf{t} \ e^{2 i \pi \mathbf{t}^T Q_n\mathbf{t}} e^{\frac{1}{\sqrt{\hbar}} (c_1 t_1 + \cdots + c_p t_p + (c_U - c_V)t_U + c_W t_W)} \Pi(\mathbf{t},\alpha,\hbar), $$ where $\mathbf{t} =(t_1, \ldots,t_p,t_U,t_W)$ and $$ \Pi(\mathbf{t},\alpha,\hbar) := \frac{ \Phi_\mathsf{b}\left (-t_U + \frac{i}{2 \pi \sqrt{\hbar}}(\pi-a_V)\right ) \Phi_\mathsf{b}\left (t_W + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_W)\right ) }{ \Phi_\mathsf{b}\left (t_1 - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_1)\right ) \cdots \Phi_\mathsf{b}\left (t_p - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_p)\right ) \Phi_\mathsf{b}\left (t_U - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_U)\right ) }. $$ \textit{Step 2. Applying the dominated convergence theorem for $\alpha \to \tau$.} This step is exactly as in the proof of Theorem \ref{thm:part:func:Htrig:odd}.
As for the odd case, for the rest of the proof, let $$\tau = (a^\tau_1,b^\tau_1,c^\tau_1,\ldots,a^\tau_Z,b^\tau_Z,c^\tau_Z) \in \mathcal{S}_{Y_n \setminus Z} \times \overline{\mathcal{S}_Z}$$ be such that $\omega_j(\tau) = 2 \pi$ for all $j \in \{0,1, \ldots, p-1,p+1\}$, $\widehat{\omega}_j(\tau) = 2 \pi$ for all $j \in \{s,d,p\}$ and $\widehat{\omega}_{\overrightarrow{K_n}}(\tau)=a^\tau_Z=0$. \textit{Step 3. Retrieving the value $J_{X_n}(\hbar,0)$ at $\alpha =\tau$.} As in the odd case, we do the following change of variables: \begin{itemize} \item $y'_k = t_k - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_k)$ for $k \in \{1,\ldots,p,U\}$, \item $y'_W = t_W + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a_W)$, \end{itemize} and we denote $\mathbf{y'}=\left (y'_1, \ldots, y'_{p}, y'_U, y'_W\right )^T$. Again $a^\tau_U-a^\tau_V = (\widehat{\omega}_{s}(\tau)- 2\pi)+(\widehat{\omega}_{d}(\tau)- 2\pi) = 0$. We also denote $$ \mathcal{Y}'_{\hbar,\tau} := \prod_{k=1,\ldots,p,U}\left (\mathbb{R} - \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a^\tau_k)\right ) \times \left (\mathbb{R} + \frac{i}{2 \pi \sqrt{\hbar}} (\pi-a^\tau_W)\right ), $$ the subset of $\mathbb{C}^{p+2}$ on which the variables in $\mathbf{y'}$ reside. By a similar computation as in the proof of Theorem \ref{thm:even:part:func}, we obtain \begin{align*} &\int_{\mathbf{t}\in\mathbb{R}^{p+2}} d\mathbf{t} e^{2 i \pi \mathbf{t}^T Q_n\mathbf{t}} e^{\frac{1}{\sqrt{\hbar}} (c^\tau_1 t_1 + \cdots + c^\tau_p t_p + (c^\tau_U - c^\tau_V)t_U + c^\tau_W t_W)} \Pi(\mathbf{t},\tau,\hbar)\\ &\stackrel{\star}{=} \int_{\mathbf{y'} \in \mathcal{Y}'_{\hbar,\tau}} d\mathbf{y'} e^{ 2 i \pi \mathbf{y}^{\prime T} Q_n \mathbf{y'} + \frac{1}{\sqrt{\hbar}} \mathcal{W}(\tau)^T \mathbf{y'} } \dfrac{ \Phi_\mathsf{b}\left (-y'_U \right ) \Phi_\mathsf{b}\left (y'_W\right ) }{ \Phi_\mathsf{b}\left (y'_1\right ) \cdots \Phi_\mathsf{b}\left (y'_p\right ) \Phi_\mathsf{b}\left (y'_U\right ) }, \end{align*} where for any $\alpha \in \mathcal{S}_{Y_n \setminus Z}$, $\mathcal{W}(\alpha)$ is defined as $$\mathcal{W}(\alpha):= 2 Q_n \Gamma(\alpha)+C(\alpha)+(0,\ldots,0,-c_V,0)^T,$$ with $\Gamma(\alpha)=(a_1-\pi,\ldots,a_p-\pi,a_U-\pi, \pi-a_W)^T$ and $C(\alpha)=(c_1,\ldots,c_p,c_U,c_W)$. Hence, in view of the value of $J_{X_n}(\hbar,0)$, it only remains to prove that $\mathcal{W}(\tau) = \mathcal{W}_n$. Let us denote by $\Lambda: (u_1,\ldots,u_p,u_U,u_V,u_W) \mapsto (u_1,\ldots,u_p,u_U,u_W)$ the process of forgetting the second-to-last coordinate. Then obviously $C(\alpha) = \Lambda (\widetilde{C}(\alpha))$. Recall from the proof of Theorem \ref{thm:even:part:func} that $\widetilde{\mathcal{W}}(\alpha) = 2 \widetilde{Q}_n \widetilde{\Gamma}(\alpha) + \widetilde{C}(\alpha)$ depends almost entirely on the edge weights of the angles in $X_n$. Thus, a direct calculation shows that for any $\alpha \in \mathcal{S}_{Y_n \setminus Z}$, we have \begin{equation*} \label{eqn:v:Alpha:Odd} \mathcal{W}(\alpha) = \Lambda(\widetilde{\mathcal{W}}(\alpha)) + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ -c_V +(\pi-a_V)+(\pi-a_W) \\ a_U-a_V \end{bmatrix}.
\end{equation*} Now, if we specify $\alpha=\tau$, then the weights $\omega_{X_n,j}(\alpha)$ appearing in $\Lambda(\widetilde{\mathcal{W}}(\alpha))$ all become $2\pi$, since $\omega_s(\tau) =\widehat{\omega}_s(\tau)-\widehat{\omega}_{\overrightarrow{K_n}}(\tau) = 2 \pi$ and $\omega_{p}(\tau) =\widehat{\omega}_d(\tau)+\widehat{\omega}_{p}(\tau) - 2\left (\pi-\widehat{\omega}_{\overrightarrow{K_n}}(\tau)\right ) = 2\pi.$ Hence $$\mathcal{W}(\tau)= \mathcal{W}_{n} + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \frac{1}{2}\lambda_{X_n}(\tau)- \pi -c^\tau_V +(\pi-a^\tau_V)+(\pi-a^\tau_W) \\ a^\tau_U-a^\tau_V \end{bmatrix}. $$ Finally, since $\frac{1}{2}\lambda_{X_n}(\tau)=a^\tau_V-a^\tau_U+a^\tau_W-b^\tau_V$ and $a^\tau_U-a^\tau_V=0$, we conclude that $\mathcal{W}(\tau) = \mathcal{W}_n$ and the theorem is proven. \end{proof} \subsection{Geometricity implies the volume conjecture} \label{sub:even:vol:conj} In this section we will prove the following theorem, which can be compared with Theorem \ref{thm:vol:conj}. \begin{theorem}\label{thm:even:vol:conj} Let $n$ be a positive even integer, and $J_{X_n}, \mathfrak{J}_{X_n}$ the functions defined in Theorem \ref{thm:even:part:func} and Corollary \ref{cor:even:part:func}. If the ideal triangulation $X_n$ is geometric, then $$ \lim_{\hbar \to 0^+} 2\pi \hbar \log \vert J_{X_n}(\hbar,0) \vert = \lim_{\hbar \to 0^+} 2\pi \hbar \log \vert \mathfrak{J}_{X_n}(\hbar,0) \vert = -\emph{Vol}(S^3\backslash K_n).$$ \end{theorem} The following Corollary \ref{cor:vol:conj:even} is an immediate consequence of Theorem \ref{thm:even:vol:conj} and Theorem \ref{thm:appendix:geom:even}. \begin{corollary}\label{cor:vol:conj:even} The Teichm\"uller TQFT volume conjecture of Andersen-Kashaev is proven for the even twist knots. \end{corollary} \begin{proof}[Proof of Theorem \ref{thm:even:vol:conj}] To prove Theorem \ref{thm:even:vol:conj}, we will follow exactly the same general path as in Section \ref{sec:vol:conj}. For the sake of brevity, we will thus only state the modifications that are due to the fact that $n$ is even instead of odd. For the remainder of the section, let $n$ be a positive even integer such that $X_n$ is geometric. Let us first list the changes in notations: \begin{itemize} \item The open ``multi-band'' is now $\mathcal{U} := \left ( \prod_{k=1, \ldots,p,U}\left (\mathbb{R} + i (-\pi,0)\right ) \right ) \times \left (\mathbb{R} + i (0,\pi)\right ),$ and the closed one $\mathcal{U}_{\delta}$ (for $\delta>0$) is $\mathcal{U}_{\delta}:= \prod_{k=1,\ldots,p,U}\left (\mathbb{R} + i [-\pi+\delta,-\delta] \right ) \times \left (\mathbb{R} + i [\delta,\pi-\delta]\right ).$ \item As said in Corollary \ref{cor:even:part:func}, $\mathcal{Y}_\alpha := \left ( \prod_{k=1, \ldots,p,U}\left (\mathbb{R} - i (\pi - a_k)\right ) \right ) \times \left (\mathbb{R} + i (\pi - a_W)\right ).$ \item The potential function $S\colon \mathcal{U} \to \mathbb{C}$ is now $S := \mathbf{y} \mapsto$ $$i \mathbf{y}^T Q_n \mathbf{y} + \mathbf{y}^T \mathcal{W}_n + i \mathrm{Li}_2\left (-e^{y_1}\right ) + \cdots + i \mathrm{Li}_2\left (-e^{y_p}\right ) + i \mathrm{Li}_2\left (-e^{y_U}\right ) - i \mathrm{Li}_2\left (-e^{-y_U}\right ) - i \mathrm{Li}_2\left (-e^{y_W}\right ).$$ The expressions of its quantum deformations $S_{\mathsf{b}}$ and $S'_{\mathsf{b}}$ (for $\mathsf{b}>0$) should be obvious. \item The vector $\eta$, first appearing in Proposition \ref{prop:all:contour:Sb}, is now $\eta := (-1, \ldots,-1,-2,1)$. 
\end{itemize} We will state and prove several facts, which are variants of statements in Section \ref{sec:vol:conj}. Before all, let us remark that the non-degeneracy of the holomorphic hessian of $S$ (Lemma \ref{lem:hess}) and the strict concavity of $\Re(S)$ (Lemma \ref{lem:concave}) are obtained immediately by arguments and computations similar with the ones in Section \ref{sec:vol:conj}. However, relating the vanishing of $\nabla S$ to Thurston's gluing equations (Lemma \ref{lem:grad:thurston}) needs a little more detail:\\ \textit{Fact 1. The diffeomorphism $\psi$ induces a bijective mapping between $\{\mathbf{y} \in \mathcal{U}; \nabla S(\mathbf{y}) = 0\}$ and $\{\mathbf{z} \in (\mathbb{R}+i\mathbb{R}_{>0})^{p+2}; \mathcal{E}_{X_n}^{co}(\mathbf{z})\}$.} The system $\mathcal{E}^{co}_{X_n}(\mathbf{z})$ of equations (satisfied by the complete hyperbolic structure) is: \begin{itemize} \item $\mathcal{E}_{X_n,0}(\mathbf{z}) \colon \mathrm{Log}(z'_1) + 2 \mathrm{Log}(z_1)+\cdots + 2\mathrm{Log}(z_p)+2\mathrm{Log}(z_U) = 2i\pi$ \item $\mathcal{E}_{X_n,1}(\mathbf{z}) \colon 2\mathrm{Log}(z''_1)+\mathrm{Log}(z'_2)=2i\pi$\\ \vspace*{-2mm} \item $\mathcal{E}_{X_n,k}(\mathbf{z}) \colon \mathrm{Log}(z'_{k-1})+2\mathrm{Log}(z''_k)+\mathrm{Log}(z'_{k+1})=2i\pi$\ \ (for $2\leqslant k \leqslant p-1$)\\ \vspace*{-2mm} \item $\mathcal{E}_{X_n,p+1}^{co}(\mathbf{z}) \colon \mathrm{Log}(z'_{p}) +2 \mathrm{Log}(z''_{U})+\mathrm{Log}(z_{W})=2i\pi$ \item $\mathcal{E}_{X_n,s}^{co}(\mathbf{z}) \colon \mathrm{Log}(z''_{W}) -\mathrm{Log}(z_{U})=0$ \end{itemize} To prove Fact 1, let us first compute, for $\mathbf{y} \in \mathcal{U}$: $$ \nabla S(\mathbf{y}) = 2 i Q_n \mathbf{y} + \mathcal{W}_n + i \begin{pmatrix} -\mathrm{Log} (1+e^{y_1})\\ \vdots\\ -\mathrm{Log} (1+e^{y_p})\\ -\mathrm{Log} (1+e^{y_U})-\mathrm{Log} (1+e^{-y_U})\\ \mathrm{Log} (1+e^{y_W}) \end{pmatrix}.$$ Then, we define the matrix $A=\kbordermatrix{ \mbox{} &y_1 &y_2 &y_3 &\cdots &y_p & \omit\vrule &y_U & y_W \\ y_1 & 1 & & & & & \omit\vrule & & \\ y_2 &-2 & 1 & & &0 & \omit\vrule & & \\ y_3 &1 & -2 & 1 & & & \omit\vrule & & \\ \vdots & & \ddots & \ddots & \ddots & & \omit\vrule & & \\ y_p & & & 1 & -2 & 1 & \omit\vrule &0 &0 \\ \cline{1-1} \cline{2-9} y_U & & & & & -1 & \omit\vrule & 1 & 1 \\ y_W & &0 & & &0 & \omit\vrule & 0 &1} \in GL_{p+2}(\mathbb{Z})$, and we compute $A \cdot \nabla S(\mathbf{y})=$ $$ \begin{pmatrix} 2i(y_1+\cdots+y_p-y_U)-2\pi p -i \mathrm{Log} (1+e^{y_1})\\ -2i y_1 + 2 \pi +2 i \mathrm{Log} (1+e^{y_1}) - i \mathrm{Log} (1+e^{y_2}) \\ - 2i y_2 + 2\pi -i \mathrm{Log} (1+e^{y_1}) +2 i \mathrm{Log} (1+e^{y_2}) -i \mathrm{Log} (1+e^{y_3})\\ \vdots\\ - 2i y_k+2\pi -i \mathrm{Log} (1+e^{y_{k-1}}) +2 i \mathrm{Log} (1+e^{y_k}) -i \mathrm{Log} (1+e^{y_{k+1}})\\ \vdots\\ - 2i y_{p-1}+2\pi -i \mathrm{Log} (1+e^{y_{p-2}}) +2 i \mathrm{Log} (1+e^{y_{p-1}}) -i \mathrm{Log} (1+e^{y_p})\\ i y_U-i y_W -2 \pi -i \mathrm{Log} (1+e^{y_p}) - i \mathrm{Log} (1+e^{y_U})- i \mathrm{Log} (1+e^{-y_U})+ i \mathrm{Log} (1+e^{y_W})\\ - i y_U +i \pi +i\mathrm{Log}(1+e^{y_W}) \end{pmatrix}. 
$$ Hence we compute, for all $\mathbf{z} \in (\mathbb{R}+i\mathbb{R}_{>0})^{p+2}$, $$ A \cdot (\nabla S)(\psi(\mathbf{z})) = i \begin{pmatrix} \mathrm{Log}(z'_1) + 2 \mathrm{Log}(z_1)+\cdots + 2\mathrm{Log}(z_p)+2\mathrm{Log}(z_U)-2i\pi\\ 2\mathrm{Log}(z''_1)+\mathrm{Log}(z'_2)-2i\pi \\ \mathrm{Log}(z'_{1})+2\mathrm{Log}(z''_2)+\mathrm{Log}(z'_{3})-2i\pi \\ \vdots\\ \mathrm{Log}(z'_{k-1})+2\mathrm{Log}(z''_k)+\mathrm{Log}(z'_{k+1})-2i\pi \\ \vdots\\ \mathrm{Log}(z'_{p-2})+2\mathrm{Log}(z''_{p-1})+\mathrm{Log}(z'_{p})-2i\pi \\ -\mathrm{Log}(z'_{p}) -2 \mathrm{Log}(z''_{U})-\mathrm{Log}(z'_{W}) + 2i\pi \\ \mathrm{Log}(z''_{W}) -\mathrm{Log}(z_{U}) \end{pmatrix}, $$ which is zero if and only if the system $\mathcal{E}^{co}_{X_n}(\mathbf{z})$ is satisfied. Fact 1 then follows from the invertibility of $A$. The second fact, a variant of Lemma \ref{lem:rewriteS}, is proven similarly, using Proposition \ref{prop:dilog}:\\ \textit{Fact 2. The function $S\colon \mathcal{U} \to \mathbb{C} $ can be re-written} \begin{multline*} S(\mathbf{y}) = i \mathrm{Li}_2\left (-e^{y_1}\right ) + \cdots + i \mathrm{Li}_2\left (-e^{y_p}\right ) + 2 i \mathrm{Li}_2\left (-e^{y_U}\right ) + i \mathrm{Li}_2\left (-e^{-y_W}\right ) \\ + i \mathbf{y}^T Q_n \mathbf{y} + i \frac{y_U^2}{2} + i \frac{y_W^2}{2} + \mathbf{y}^T \mathcal{W}_n + i \frac{\pi^2}{3}. \end{multline*} Consequently, the fact that $\Re(S)(\mathbf{y^0}) = - \mathrm{Vol}(S^3 \setminus K_n)$ is proven like in the proof of Lemma \ref{lem:-vol}, using the particular form of $S$ stated in Fact 2, and the fact that at the complete angle structure, $ -e^{y^0_U}=z^0_U = z^0_V = -e^{-y^0_V}$ is the complex shape of both tetrahedra $U$ and $V$. The rest of the statements in Section \ref{sec:vol:conj} (Lemma \ref{lem:maximum} and Proposition \ref{prop:compact:contour:S:SPM} to Proposition \ref{prop:all:contour:S'b}) are proven in exactly the same way, using the new notations defined at the beginning of this proof. Notably, we obtain the following asymptotic behaviour for $\mathfrak{J}_{X_n}(\hbar,0)$: $$\mathfrak{J}_{X_n}(\hbar,0) =\left (\dfrac{1}{2\pi \sqrt{\hbar}}\right )^{p+3} e^{\frac{1}{2 \pi \hbar} S(\mathbf{y^0})} \left ( \rho' \hbar^{\frac{p+2}{2}} \left ( 1 + o_{\hbar \to 0^+}\left (1\right ) \right ) + \mathcal{O}_{\hbar \to 0^+}(1) \right ).$$ \end{proof} \color{black}
\section{Introduction} The recent past has seen a new interest in exactly marginal deformations of ${\cal N}=4$ SYM theory preserving ${\cal N}=1$ supersymmetry \cite{LS}, in particular after the supergravity duals of the so--called $\beta$-deformations of ${\cal N}=4$ $SU(N)$ SYM theory were found by Lunin and Maldacena in \cite{LM}. New results now start to emerge also from the field theory side: in \cite{FG,PSZ4,RSS} various properties of composite operators of the deformed theory have been investigated at the perturbative level (see also \cite{NP,MNSS,FRT1,FRT2}). The outcome is that the deformed theory shares some properties with the undeformed ${\cal N}=4$ theory, but new features emerge, such as the finite corrections to the two- and three-point functions of protected operators \cite{PSZ4,RSS}.\\ The chiral ring of the theory was identified in \cite{BL,BJL} for generic values of the deformation parameter. It is given by the operators ${\rm Tr}(\Phi_i^J)$, $i=1,2,3$, and ${\rm Tr}(\Phi_1^J \Phi_2^J \Phi_3^J)$. In \cite{FG,PSZ4} it was shown that the operator ${\rm Tr}(\Phi_1\Phi_2)$ also does not acquire an anomalous dimension. In this paper we will focus on the operators ${\rm Tr}(\Phi_i^{J}\Phi_j )$, $i \neq j$. As opposed to what happens in the undeformed ${\cal N}=4$ case, these operators are not protected and their anomalous dimension was computed at one loop in \cite{FG}. Our interest in these operators is motivated by the fact that in the large $J$ limit they resemble the BMN operators of the ${\cal N}=4$ theory \cite{BMN}. Indeed a perturbative superfield analysis performed at low orders shows that the supergraphs contributing to their anomalous dimension are exactly the same as the ones of the BMN case. Then we apply the derivation of \cite{SZ} to this class of operators and compute their exact anomalous dimension in the planar limit. The consistency between the perturbative supergraph approach and the result obtained by using the method of \cite{SZ} suggests that the one-loop superconformal invariance condition remains valid in the planar limit to all orders of perturbation theory (at least for real values of the deformation parameter $\beta$, which is the case we consider here). We confirm this result by exploiting the formal analogy between $\beta$-deformed and non-commutative field theories. The paper is organized as follows. In Section 2 we present the setup for the perturbative calculation that we perform in Section 3, where we compute the anomalous dimensions of the operators ${\rm Tr}(\Phi_i^{J}\Phi_j )$, $i \neq j$, in the large $N$ limit up to order $g^4$. The contributing graphs are the same ones one would encounter in the calculation of the anomalous dimension of BMN operators. In Section 4 we apply to our operators the procedure introduced in \cite{SZ} and give an all-order result for their anomalous dimensions in the large $J$ and large $N$ limit. Then in Section 5 we prove that the one-loop condition of superconformal invariance remains valid to all orders in the planar limit.
\section{Generalities} We consider the following deformation of the ${\cal N}=4$ SYM theory \begin{eqnarray} S[j,\bar{j}] &=&\int d^8z~ {\rm Tr}\left(e^{-gV} \bar{\Phi}_i e^{gV} \Phi^i\right)+ \frac{1}{2g^2} \int d^6z~ {\rm Tr} W^\a W_\a\nonumber\\ &&+ih \int d^6z~ {\rm Tr}( q ~\Phi_1 \Phi_2 \Phi_3 - \frac{1}{q} \Phi_1 \Phi_3 \Phi_2 ) - i \overline{h} \int d^6\bar{z}~ {\rm Tr} ( \bar{q} ~\bar{\Phi}_1 \bar{\Phi}_3 \bar{\Phi}_2 - \frac{1}{\bar{q}} \bar{\Phi}_1 \bar{\Phi}_2 \bar{\Phi}_3 ) \nonumber\\ &&+\int d^6z~ j {\cal O}+\int d^6\bar{z}~ \bar{j}\bar{{\cal O}} \label{actionYM} \end{eqnarray} where we have set $q\equiv e^{i\pi\b}$ and in the following we choose $\b$ real so that $q\bar{q}=1$. We have added to the classical action source terms for composite chiral operators generically denoted by ${\cal O}$, with $j$ ($\bar{j}$) the corresponding (anti)chiral sources. The superfield strength $W_\a= i\bar{D}^2(e^{-gV}D_\a e^{gV})$ is given in terms of a real prepotential $V$ and $\Phi_{1,2,3}$ contain the six scalars of the original ${\cal N}=4$ SYM theory organized into the ${\bf 3}\times \bf{ \bar 3}$ of $SU(3) \subset SU(4)$. We write $V=V^aT_a$, $\Phi_i=\Phi_i^a T_a$ where $T_a$ are $SU(N)$ matrices in the fundamental representation. In the following we use the notation ${\rm Tr} (T^aT^bT^c\dots)\equiv (abc\dots)$. The $\beta$--deformation breaks the original $SU(4)$ R--symmetry to $U(1)_R$. However, besides the $Z_3$ symmetry associated to cyclic permutations of $(\Phi_1,\Phi_2,\Phi_3)$, two extra non--R--symmetry $U(1)$'s survive. Applying the $a$--maximization procedure \cite{IW} and the conditions of vanishing ABJ anomalies, it turns out that $U(1)_R$ is the one which assigns the same R--charge $\o$ to the three elementary superfields, whereas the charges with respect to the two non--R--symmetries $U(1)_1 \times U(1)_2$ can be chosen to be $(\Phi_1,\Phi_2, \Phi_3) \rightarrow (0,1,-1)$ and $(-1,1,0)$, respectively. As discussed in \cite{FG,PSZ4}, at the quantum level the theory is superconformal invariant (and hence finite) up to two loops if the coupling constants satisfy the following condition (vanishing of the beta functions) \begin{equation} h \bar{h} \left[ 1 - \frac{1}{N^2} \Big|q - \frac{1}{q} \Big|^2 \right]- g^2 = 0 \label{cond} \end{equation} In the large $N$ limit this condition reduces simply to $g^2 = h \bar{h}$, independently of the value of $q$. In \cite{RSS} the three-loop correction to (\ref{cond}) has also been evaluated. Since it turns out to be suppressed for $N \to \infty$, the condition $g^2 = h \bar{h}$ is the correct condition for superconformal invariance in the planar limit up to three loops. At the superconformal fixed point of the theory we compute the anomalous dimensions for the class of non--protected operators \footnote{The choice of $\Phi_1$ and $\Phi_2$ superfields is totally arbitrary and we expect the operators ${\rm Tr }(\Phi_i^J \Phi_k)$, for any $i,k$ with $i \neq k$, to have similar quantum properties. We will comment on this point later on.} \begin{equation} {\cal O}_J = {\rm Tr }(\Phi_1^J \Phi_2) \label{OJ} \end{equation} They are charged under $U(1)_1 \times U(1)_2$ with charges $(1,1-J)$.
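Since $q= e^{i\pi\b}$ with $\b$ real, it is useful to note that \begin{equation} \Big| q - \frac{1}{q} \Big|^2 = \left| 2i \sin (\pi\b) \right|^2 = 4 \sin^2 (\pi\b) \end{equation} so that, for instance, the condition (\ref{cond}) can be written as $h \bar{h} \left[ 1 - \frac{4 \sin^2 (\pi\b)}{N^2} \right] = g^2$; it is through this combination of the deformation parameter that $q$ enters the anomalous dimensions computed below.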
Using the equations of motion from the action (\ref{actionYM}) with $j=\bar{j}=0$ (from now on we neglect factors of $e^{\pm gV}$ since they are not relevant to our purposes) \begin{equation} \bar{D}^2\bar{\Phi}^a_3= -ih\Phi_1^b\Phi_2^c~[q(abc)-\frac{1}{q}(acb)] \label{eom} \end{equation} it is easy to see that \begin{equation} {\cal O}_J = \frac{i}{h[q - \frac{1}{q}]} \bar{D}^2 {\rm Tr}(\Phi_1^{J-1} \bar{\Phi}_3) + \frac{1}{N} {\rm Tr}(\Phi_1^{J-1}) {\rm Tr}(\Phi_1 \Phi_2) \label{relation} \end{equation} As long as $J > 1$, in the large $N$ limit the operator ${\cal O}_J$ becomes a descendant of the primary ${\rm Tr}(\Phi_1^{J-1} \bar{\Phi}_3)$, whereas for finite $N$ the combination ${\cal O}_J - \frac{1}{N} {\rm Tr}(\Phi_1^{J-1}) {\rm Tr}(\Phi_1 \Phi_2)$ is a descendant. The exceptional case $J=1$ corresponds to the chiral primary operator whose protection has been proven perturbatively in \cite{FG,PSZ4}. In the next Sections we will concentrate on the evaluation of the anomalous dimensions for the ${\cal O}_J$ operators. \section{The perturbative calculation} In this Section we compute the anomalous dimension of ${\cal O}_J$ in (\ref{OJ}) perturbatively, up to two loops. For generic values of $J$ we perform the calculation in the large $N$ limit in order to avoid dealing with mixing with multitrace operators. In order to compute anomalous dimensions we evaluate one--point correlators $\langle {\cal O}_J e^{S_{int}} \rangle$, where $S_{int}$ is the sum of the interaction terms in (\ref{actionYM}). Divergent contributions proportional to the operator itself are removed by a multiplicative renormalization which in dimensional regularization reads \begin{equation} {\cal O}_J^{(bare)} \equiv {\cal O}_J \left( 1 + \sum_{k=0}^{\infty} \frac{a_k(\l, q, N )}{\epsilon^k} \right) \equiv Z {\cal O}_J \end{equation} where we have introduced the 't Hooft coupling $\l = \frac{g^2N}{4\pi^2}$. There is no dependence on the $h$ coupling since we are at the superconformal point where $h$ can be expressed in terms of the other couplings through the condition of vanishing beta functions. The anomalous dimension is then given by \begin{equation} \gamma \equiv 2\l \frac{da_1(\l, q, N)}{d\l} \label{andim} \end{equation} Therefore, at any order it is easily read from the simple pole divergence. We perform perturbative calculations in a superspace setup by following closely the procedure used in \cite{PSZ1, PSZ2, PSZ3, PS, PSZ4} (we refer to those papers for conventions and technical details). After the D--algebra, supergraphs are reduced to ordinary Feynman diagrams which we evaluate in momentum space. We work in dimensional regularization, $d = 4-2\epsilon$, and in the minimal subtraction scheme.
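For instance, if the simple pole has the perturbative expansion $a_1(\l,q,N) = \sum_{L \geq 1} c_L(q,N) \, \l^L$, then (\ref{andim}) gives \begin{equation} \gamma = \sum_{L \geq 1} 2L \, c_L(q,N) \, \l^L \end{equation} so that the order $\l$ and order $\l^2$ coefficients of $a_1$ determine the one-- and two--loop anomalous dimensions computed below.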
From the action (\ref{actionYM}) quantized in the Feynman gauge we read the superfield propagators ($z \equiv (x, \th,{\bar{\theta}})$) \begin{eqnarray} && \langle V^a(z_1) V^b(z_2) \rangle = ~~\d^{ab} \frac{1}{(x_1 - x_2)^2} \d^{(4)}(\th_1 - \th_2) \nonumber \\ && \langle \Phi_i^a(z_1) \bar{\Phi}_j^b(z_2) \rangle = ~- \d_{ij}\d^{ab} \frac{1}{(x_1 - x_2)^2} \d^{(4)}(\th_1 - \th_2) \label{prop} \end{eqnarray} and the three--point vertices \begin{eqnarray} && (\Phi \Phi \Phi)_{{\rm vertex}} \rightarrow ~~~~~~ ih \Phi_1^a \Phi_2^b \Phi_3^c \left[ q (abc) - \frac{1}{q} (acb) \right] \nonumber \\ && (\bar{\Phi} \bar{\Phi} \bar{\Phi})_{{\rm vertex}} \rightarrow ~~~ -i\bar{h} \bar{\Phi}_1^a \bar{\Phi}_2^b \bar{\Phi}_3^c \left[ \bar{q} (acb) - \frac{1}{\bar{q}} (abc) \right] \nonumber \\ && (\bar{\Phi} V \Phi)_{{\rm vertex}} \rightarrow ~~~~~~ g \bar{\Phi}_i^a V^b \Phi_i^c ~\left[ (abc) -(acb) \right] \label{vertices} \end{eqnarray} At the lowest order the only contribution to the one--point function for the operator ${\cal O}_J$ is the one given in Fig. 1 where, using the notation introduced in \cite{GMR}, the horizontal bold line indicates the spacetime point where the operator is inserted. \vskip 18pt \noindent \begin{minipage}{\textwidth} \begin{center} \includegraphics[width=0.40\textwidth]{betadef2_1.eps} \end{center} \begin{center} {\small{Figure 1: One--loop contribution to the ${\cal O}_J$ anomalous dimension}} \end{center} \end{minipage} \vskip 20pt The corresponding contribution is proportional to the self--energy integral \begin{equation} I_1 \equiv \int d^n k \frac{1}{k^2 (p-k)^2} \sim \frac{1}{(4\pi)^2} \frac{1}{\epsilon} \label{selfenergy} \end{equation} Evaluating the color factor, the combinatorics and taking into account a minus sign from D--algebra we obtain \begin{equation} {\rm Diagram ~1} ~~~\rightarrow ~~~- \frac{1}{\epsilon} ~ | q - \frac{1}{q} |^2 \frac{|h|^2 N}{(4\pi)^2} \end{equation} Using the one--loop superconformal condition in the planar limit ($g^2 = |h|^2$) and the definition (\ref{andim}) we immediately find the one--loop anomalous dimension \begin{equation} \gamma^{(1)} = \frac12 \Big| q - \frac{1}{q} \Big|^2 \l \label{gamma1} \end{equation} At two loops (order $\l^2$) the diagrammatic contributions are drawn in Fig. 2. \vskip 18pt \noindent \begin{minipage}{\textwidth} \begin{center} \includegraphics[width=0.80\textwidth]{betadef2_2.eps} \end{center} \begin{center} {\small{Figure 2: Two--loop contributions to the ${\cal O}_J$ anomalous dimension}} \end{center} \end{minipage} \vskip 20pt Performing the D--algebra we reduce all the diagrams to ordinary Feynman diagrams containing the loop structure as in Fig. 3. \vskip 18pt \noindent \begin{minipage}{\textwidth} \begin{center} \includegraphics[width=0.50\textwidth]{betadef2_3.eps} \end{center} \begin{center} {\small{Figure 3: The two--loop Feynman diagram}} \end{center} \end{minipage} \vskip 20pt The associated momentum integral is \begin{equation} I_2 \equiv \int d^n k_1 d^n k_2 \frac{1}{k_1^2 (p_1 - k_1 - k_2)^2 k_2^2 (p_1+p_2-k_2)^2} \end{equation} As long as we are only concerned with UV divergences we can safely set one of the external momenta to zero. Thus the graph is easily evaluated being proportional to two nested self--energies. We obtain (in the G-scheme) \begin{equation} I_2 \sim \frac{1}{(4\pi)^4}~ \frac{1}{2\epsilon^2} (1 + 5 \epsilon) \frac{1}{(p^2)^{2\epsilon}} \end{equation} where we have kept only divergent terms. 
Performing the subtraction of the subdivergence we finally have \begin{equation} \left[ I_2 \right]_{sub} \sim \frac{1}{(4\pi)^4} ~\left[ - \frac{1}{2\epsilon^2} + \frac{1}{2\epsilon} \right] \label{I2} \end{equation} Computing the combinatorics, the color factors and taking into account minus signs from the vector propagator, we find that the factors in front of (\ref{I2}) for the various diagrams are \begin{eqnarray} && (2a) ~~~ \rightarrow ~~~ - 2 (q - \frac{1}{q}) (\bar{q} - \frac{1}{\bar{q}}) g^2 |h|^2 N^2 \nonumber \\ && (2b) ~~~ \rightarrow ~~~ ~~ 2 (q - \frac{1}{q}) (\bar{q} - \frac{1}{\bar{q}}) g^2 |h|^2 N^2 \nonumber \\ && (2c) ~~~ \rightarrow ~~~ ~~ 2 (q - \frac{1}{q}) (\bar{q} - \frac{1}{\bar{q}}) g^2 |h|^2 N^2 \nonumber \\ && (2d) ~~~ \rightarrow ~~~ - (q - \frac{1}{q}) (\bar{q} - \frac{1}{\bar{q}}) \left( \frac{q}{\bar{q}} + \frac{\bar{q}}{q} \right)|h|^4 N^2 \end{eqnarray} Summing all the contributions, using the planar superconformal condition $ |h|^2 = g^2$ and the definition (\ref{andim}), we find \begin{equation} \gamma^{(2)} = -\frac18 \Big| q - \frac{1}{q} \Big|^4 \l^2 \label{gamma2} \end{equation} We observe that the diagrams contributing to the anomalous dimensions for our operators are exactly the same as the ones for BMN operators in ${\cal N}=4$ SYM in the planar limit \cite{BMN,GMR}. In fact, up to this order the calculation is exactly the same under the formal identification $|q - \frac{1}{q}|^2 \leftrightarrow -(e^{i\phi} +e^{-i\phi}-2)$, where $\phi$ is the phase of BMN operators \cite{BMN,GMR,SZ}. We expect that the same pattern will persist at any order in perturbation theory. In particular, as in the BMN case, the graphs relevant for the calculation are only the ones where the interactions are close to the ``impurity'' $\Phi_2$: at $L$--loop order the interactions may involve at most the $\Phi_1$ lines which are $L$ steps away from the impurity. As an important consequence, in the large $J$ limit the anomalous dimensions do not grow with $J$. To close this Section we note that the results we have found for the anomalous dimensions of the operators ${\rm Tr}(\Phi_1^J \Phi_2)$ at large $N$ are actually valid for any operator of the form ${\rm Tr}(\Phi_i^J \Phi_k)$ with $i \neq k$. In fact the superpotential is invariant under cyclic permutations of $(\Phi_1, \Phi_2,\Phi_3)$, and in addition it becomes invariant if non--cyclic exchanges of fields are accompanied by \begin{equation} q \rightarrow -\frac{1}{q} \label{q} \end{equation} Since the anomalous dimensions are proportional to powers of the effective coupling $\a \equiv \l \Big| q - \frac{1}{q} \Big|^2$ which is invariant under (\ref{q}) we conclude that the result is valid for any operator of the form ${\rm Tr}(\Phi_i^J \Phi_k)$, $i \neq k$. \section{The exact anomalous dimensions} Motivated by the formal correspondence of the previous calculation with the BMN case, in this Section we are going to compute the {\em exact} anomalous dimensions in the large $N$, large $J$ limit by using the procedure introduced in \cite{SZ} for BMN operators. In the context of $\beta$--deformed theories the same procedure has been applied to the class of BMN operators \cite{NP}. We concentrate on the operator ${\cal O}_{J+1}$ which, as follows from eq.
(\ref{relation}), in the planar limit satisfies \begin{equation} \bar{D}^2 {\cal U}_{J} = - ih [q - \frac{1}{q} ] {\cal O}_{J+1} \label{Oop} \end{equation} where we have defined \begin{equation} {\cal U}_{J} \equiv {\rm Tr} (\Phi^J_1\bar{\Phi}_3) \label{Uop} \end{equation} As already noticed, this shows that the ${\cal O}_{J+1}$ operators are descendants of the ${\cal U}_{J}$ ones. Being part of the same superconformal multiplet they share the renormalization properties, i.e. they will have the same scaling dimension and the same perturbative corrections to their overall normalization. Moreover, since ${\cal U}_{J}$ is not a Konishi-like operator it is not affected by the Konishi anomaly. As discussed in detail in \cite{SZ}, in any ${\cal N}=1$ superconformal field theory the two--point function for a {\em primary} operator ${\cal A}_{s,\bar{s}}$ is fixed \cite{O} and given by ($z \equiv (x, \th, \bar{\theta})$) \begin{eqnarray} &&<{\cal{A}}_{(s,\bar{s})}(z)\bar{{\cal{A}}}_{(s,\bar{s})}(z')> =f_{{\cal{A}}}(g^2,N,h,\bar{h}) \left\{\frac{1}{2}D^\a \bar D^2 D_\a + \frac{w}{4(\Delta_0+\gamma)} [ D^\a,\bar D^{{\dot{\alpha}}} ] i\partial_{\a{\dot{\alpha}}} \right.~~~~~~\nonumber\\ &&~~~~~\nonumber\\ &&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.+ \frac{(\Delta_0+\gamma)^2 + w^2 -2(\Delta_0+\gamma)}{4(\Delta_0+\gamma) (\Delta_0 +\gamma- 1)}\square\right\} \frac{\delta^4(\theta-\theta')}{|x-x'|^{2(\Delta_0+\gamma)}} \label{niceformula} \end{eqnarray} where $\Delta_0 = s + \bar{s}$ is the tree--level dimension of the operator, $\o = s - \bar{s}$ is its R--symmetry charge \footnote{We assume $\o$ not to renormalize. In fact, once the R--symmetry of the elementary fields is fixed by requiring the exact R--symmetry of the superpotential, any composite operator has a fixed charge given by the sum of the charges of its elementary constituents.} and $\gamma$ is the exact anomalous dimension. The relation (\ref{niceformula}) can be straightforwardly applied to our primary operators ${\cal U}_J$. The analysis of the two-point correlator for the ${\cal O}_{J}$'s is somewhat subtler since, as we see from eq. (\ref{Oop}), these chiral operators are not primaries and in principle the relation (\ref{niceformula}) cannot be applied to their correlators. However, as we are going to show, in the large $J$ limit these operators turn out to behave as CPO's and (\ref{niceformula}) can be safely used.\\ To this end we recall that, in general, given a {\em chiral} operator ${\cal A}$, the condition for the operator to be non--protected (anomalous dimension acquired) is equivalent to the condition that its chiral nature is not maintained under superconformal transformations, i.e. $\bar{D}(\d_{\bar{S}} {\cal A}) \sim \{\bar{S},\bar{D}\}{\cal A} \neq 0$ (see for instance \cite{CW}). In fact, writing schematically the superconformal algebra relation for a scalar operator as $\{\bar{S},\bar{D}\} = \Delta - \omega$, we have \begin{equation} \{\bar{S},\bar{D}\} {\cal A} = (\Delta - \omega){\cal A} = \left[ (\Delta_{0} + \gamma) - \o \right] {\cal A} = \gamma {\cal A} \end{equation} where we have used $\Delta = \Delta_{0} +\gamma$ and for a chiral operator $\o = \Delta_{0}$. Therefore if $\gamma \neq 0$, $ \bar{S} {\cal A}$ is not chiral anymore. Vice versa, if $\{\bar{S},\bar{D}\} {\cal A} =0$, then $\bar{S}{\cal A}$ is still chiral and the dimension is protected by the well--known condition $\Delta = \omega$.
An alternative proof goes through the simple observation that the conditions \begin{equation} s+\bar{s}=\Delta_{0}+\gamma \qquad\qquad s-\bar{s}=\Delta_{0} \end{equation} imply \begin{equation} s=\Delta_{0}+\frac{\gamma}{2}\qquad\qquad \bar{s}=\frac{\gamma}{2} \end{equation} The appearance of $\bar{s} \neq 0$ signals the lack of chirality of the quantum operator. We now apply the previous argument to our operators ${\cal O}_J$ to prove that in the large $N$, large $J$ limit the violation of chirality is suppressed and they behave as CPO's. In the limit of large R--symmetry charge $\o = J$ it is more natural to consider \begin{equation} \frac{1}{J} \{\bar{S},\bar{D}\} {\cal O}_J = \left( \frac{\Delta_{0}+\gamma}{J} -1 \right){\cal O}_J = \frac{\gamma}{J} {\cal O}_J \label{Jlarge} \end{equation} As discussed in the previous Section, at any fixed order in perturbation theory the anomalous dimension $\gamma$ does not grow with $J$. It follows that in the large $J$ limit the r.h.s. of eq. (\ref{Jlarge}) is suppressed and the operator behaves as a chiral primary. In particular, in this limit it is consistent to apply eq. (\ref{niceformula}) for the evaluation of its two--point function. Supported by these considerations we can now proceed exactly as in \cite{SZ} and find \begin{eqnarray} &&<\bar{D}^2{\cal{U}}_J(z)D^2\bar{{\cal{U}}}_J(z')>=\nonumber\\ &&~~~~~~\nonumber\\ &&~~~=\frac{N^{J+1}}{(4\pi^2)^{J+1}} f ~\bar{D}^2\left\{\frac{1}{2}D^\a \bar D^2 D_\a + \frac{J-1}{4(J+1+\gamma)} [ D^\a,\bar D^{{\dot{\alpha}}} ] i\partial_{\a{\dot{\alpha}}} \right.~~~~~~\nonumber\\ &&~~~~~\nonumber\\ &&~~~~~\left.+ \frac{(J+1+\gamma)^2 + (J-1)^2 -2(J+1+\gamma)}{4(J+1+\gamma) (J +\gamma)}\square\right\} D^2\frac{\delta^4(\theta-\theta')}{|x-x'|^{2(J+1+\gamma)}}\nonumber\\ &&~~~= \frac{N^{J+1}}{(4\pi^2)^{J+1}} f~ (\gamma^2+2\gamma)\bar{D}^2 D^2\frac{\delta^4(\theta-\theta')} {|x-x'|^{2(J+2+\gamma)}} \label{corrUU} \end{eqnarray} and \begin{equation} <{\cal{O}}_{J+1}(z) \bar{{\cal{O}}}_{J+1}(z')>=\frac{N^{J+2}}{(4\pi^2)^{J+2}} f \bar{D}^2D^2 \frac{\delta^4(\theta-\theta')}{|x-x'|^{2(J+2+\gamma)}} \label{corrOO} \end{equation} where $f$ is the common normalization function not fixed by superconformal invariance. From the relation (\ref{Oop}) the two correlators are related by \begin{equation} \langle\bar{D}^2 {\cal U}_J (z) D^2 \bar{\cal U}_J (z')\rangle ~=~ |h|^2 \Big| q - \frac{1}{q} \Big|^2 \langle {\cal O}_{J+1} (z) \bar{\cal O}_{J+1} (z')\rangle \label{OU} \end{equation} Therefore, inserting in (\ref{OU}) the explicit expressions (\ref{corrUU}) and (\ref{corrOO}) we end up with an algebraic equation \begin{equation} \gamma^2 + 2\gamma = |h|^2 \Big| q - \frac{1}{q} \Big|^2 \frac{N}{4\pi^2} \end{equation} which allows us to find the exact expression for the anomalous dimensions \begin{eqnarray} \gamma &=& -1 + \sqrt{1+ |h|^2 \Big| q - \frac{1}{q} \Big|^2 \frac{N}{4\pi^2}} \nonumber \\ &= & \frac12 |h|^2 \Big| q - \frac{1}{q} \Big|^2 \frac{N}{4\pi^2} ~-~ \frac18 |h|^4 \Big| q - \frac{1}{q} \Big|^4 \frac{N^2}{(4\pi^2)^2} ~+ ~\cdots \label{andimexact} \end{eqnarray} Up to the second order this expression coincides with the perturbative results obtained in the previous Section. We note that our operators ${\cal O}_J$ can be thought of as dual to the 0--modes of the BMN sector considered in \cite{LM,NP,FRT1,FRT2}. Formula (\ref{andimexact}) is in agreement with the results presented in those papers for the spectrum of the 0--modes.
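As a simple cross--check of two algebraic steps above, namely the collapse of the two--loop prefactors into the single combination $|q-\frac{1}{q}|^4 g^4 N^2$ when $q\bar{q}=1$ and $|h|^2=g^2$, and the fact that the expansion of (\ref{andimexact}) reproduces (\ref{gamma1}) and (\ref{gamma2}), one may run the following short \texttt{sympy} sketch. It is added here purely as an illustration: it only tracks the $q$--dependence and the expansion in the effective coupling $\a = \l \big| q - \frac{1}{q} \big|^2$, not the overall normalizations fixed by the momentum integrals (\ref{selfenergy}), (\ref{I2}) and by the definition (\ref{andim}).
\begin{verbatim}
import sympy as sp

# (a) q-algebra of the two-loop prefactors (2a)-(2d) for beta real (q qbar = 1, |h|^2 = g^2)
th, g, N, alpha = sp.symbols('theta g N alpha', positive=True)
q  = sp.exp(sp.I*th)               # q on the unit circle
qb = 1/q                           # qbar = 1/q
A  = (q - 1/q)*(qb - 1/qb)         # (q - 1/q)(qbar - 1/qbar) = |q - 1/q|^2 on the unit circle
total = (-2 + 2 + 2)*A*g**4*N**2 - A*(q/qb + qb/q)*g**4*N**2   # |h|^2 -> g^2 everywhere
print(sp.expand(total - A**2*g**4*N**2))     # 0 : the sum equals |q - 1/q|^4 g^4 N^2

# (b) expansion of the exact result gamma = -1 + sqrt(1 + alpha), alpha = lambda |q - 1/q|^2
gamma = -1 + sp.sqrt(1 + alpha)
print(sp.series(gamma, alpha, 0, 3))         # alpha/2 - alpha**2/8 + O(alpha**3),
                                             # i.e. gamma^(1) + gamma^(2) of the text
\end{verbatim}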
\section{The superconformal condition at large $N$} In the previous Section, exploiting the superconformal invariance of the theory and its equations of motion, we have shown that the exact anomalous dimension for the ${\cal O}_J$ operator can be written as \begin{equation} \gamma = -1 + \sqrt{1+ 2 \gamma^{(1)}} \label{andimexact2} \end{equation} where $\gamma^{(1)}$ is the one--loop anomalous dimension. A direct calculation provides an expression for $\gamma^{(1)}$ proportional to $|h|^2$. As discussed in Section 3, at this order we are allowed to use the planar superconformal condition $|h|^2 =g^2$ to re-express $\gamma^{(1)}$ in terms of $g^2$ only (see eq. (\ref{gamma1})). Now if in (\ref{andimexact2}) we use $\gamma^{(1)}$ given in terms of $g^2$ and expand the square root, we obtain a perturbative formula for $\gamma$ which agrees with the actual perturbative calculation only if the condition $|h|^2 =g^2$ is valid at any order. Motivated by this observation we are led to conjecture that in the large $N$ limit the condition $|h|^2 =g^2$ is indeed the correct condition for superconformal invariance at any order in perturbation theory. Direct confirmations of this conjecture can be found in the literature up to order $g^6$ \cite{FG, PSZ4, RSS}. Now we give an argument to prove that this is true to all orders. We recall that in ${\cal N}=1$ supersymmetric theories the superconformal invariance condition (i.e. vanishing of beta functions) can be expressed as the vanishing of the anomalous dimensions of the elementary superfields \cite{GRS,S,LS,JJ}. Therefore, in order to study superconformal invariance, it is sufficient to focus on the divergent corrections to the propagators of the elementary fields. In the $\b$--deformed theory we consider a generic $L$--loop diagram contributing to the propagator of the $\Phi_i$ superfield. The crucial observation is the following: if we prove that at the planar level, as long as $q\bar{q}=1$, this diagram does not depend on $q$, then we are sure that $|h|^2 =g^2$ is the exact solution of the superconformal invariance equations. In fact, if it is independent of $q$, the corresponding perturbative contribution is the same for any deformed theory, independently of the choice of the $q$--deformation. In particular, it is the same for any deformed theory ($q \neq 1$) and for the undeformed one ($q=1$). Focusing on the undeformed case we can conclude that $|h|^2 =g^2$ is the exact condition for the planar superconformal invariance, since $q=1$ and $|h|^2 =g^2$ bring us back to the ${\cal N}=4$ case which is known to be exactly superconformal. The fact that the perturbative corrections do not depend on $q$ allows us to extend this statement to any deformed theory. To conclude the proof we need to show that the contribution from a generic self--energy planar diagram never depends on $q$. We can focus on diagrams containing only matter vertices because adding vector propagators cannot introduce any $q$-dependence. We exploit the formal analogy between the deformed theory and noncommutative (nc) field theory. As observed in \cite{LM} the deformed potential can be written as \begin{equation} ih \int d^6z ~{\rm Tr}(\Phi_1 \ast [\Phi_2 , \Phi_3]_{\ast}) ~+~ {\rm h.c.} \end{equation} where \begin{equation} f \ast g = e^{i \pi \b Q^{(i)}_f M_{ij} Q^{(j)}_g} f \cdot g ~, \label{star} \end{equation} $Q^{(i)}$, $i=1,2$ being the non--R--symmetry $U(1)_1 \times U(1)_2$ charges and $M$ the antisymmetric matrix $ \left( \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right)$.
When drawing a Feynman diagram we can consider the flow of the charges inside the diagram. Observing that the charges are conserved at any vertex and propagate through the straight lines, we can formally identify them with the ordinary momenta in noncommutative diagrams. A known property of planar diagrams in nc field theory is that the star product phase factors dependent on the loop momenta cancel out (for a proof see \cite{F,IIKK,MSV}) and only an overall phase depending on the external momenta survives. In our case, exploiting the formal identification of charges with momenta, we can use the same arguments to conclude that any planar diagram will have a phase factor from (\ref{star}) depending only on the configuration of the external charges. In the particular case of self--energy diagrams the overall phase is zero since $\Phi_i$ and $\bar{\Phi}_i$ have equal and opposite charges. In other words, any self--energy planar diagram always contains an equal number of $q= \frac{1}{\bar{q}}$ and $\bar{q}= \frac{1}{q}$ vertices. This concludes the proof of the $q$--independence of perturbative self--energy corrections. We conclude that in the $N\rightarrow\infty$ limit the exact condition for superconformal invariance is simply $|h|^2 =g^2$. Therefore, the theory described by the action \begin{eqnarray} && S =\int d^8z~ {\rm Tr}\left(e^{-gV} \bar{\Phi}_i e^{gV} \Phi^i\right)+ \frac{1}{2g^2} \int d^6z~ {\rm Tr} W^\a W_\a \nonumber\\ &&+ig \int d^6z~ {\rm Tr}( e^{i\pi\b} \Phi_1 \Phi_2 \Phi_3 - e^{-i\pi\b} \Phi_1 \Phi_3 \Phi_2 ) + i g \int d^6\bar{z}~ {\rm Tr} ( e^{i\pi\b} \bar{\Phi}_1 \bar{\Phi}_2 \bar{\Phi}_3 - e^{-i\pi\b} \bar{\Phi}_1 \bar{\Phi}_3 \bar{\Phi}_2 ) \nonumber \\ &&~~~ \label{confaction} \end{eqnarray} represents an ${\cal N}=1$ superconformal invariant theory for any real value of $\b$. \section{Conclusions} For the $SU(N)$, $\b$--deformed ${\cal N}=4$ SYM theory we have considered the particular class of operators ${\cal O}_{J} = {\rm Tr}(\Phi_1^J \Phi_2)$. We have computed perturbatively their anomalous dimensions up to two loops. The calculation has been performed in the large $N$ limit in order to avoid mixing with multi-trace operators. Exploiting the techniques introduced in \cite{SZ} for the BMN operators we have evaluated their {\em exact} anomalous dimensions in the large $N$, large $J$ limit. In this limit the exact expression for the anomalous dimension depends on the deformation parameter $q$ only through the combination $|q - \frac{1}{q}|^2$ and it is then invariant under $q \rightarrow -\frac{1}{q}$. Observing that in the $\b$--deformed theory exchanging $q \rightarrow -\frac{1}{q}$ amounts to exchanging $\Phi_i \leftrightarrow \Phi_j$, for any $i=1,2,3$, $i \neq j$, we may conclude that in the ${\cal O}_J$ sector, in the large $N$, large $J$ limit there is an enhancement of the $SU(3)$ symmetry and all the operators of the form ${\rm Tr}(\Phi_i^J \Phi_k)$, $i\neq k$ renormalize in the same way. A comparison between the exact result and the perturbative calculation suggests that the condition $|h|^2 = g^2$, which up to three loops guarantees the superconformal invariance of the theory in the planar limit, is actually sufficient for the exact invariance for $N \to \infty$. Indeed, we have given a direct proof valid at any order in perturbation theory. The main result of our paper is that the action in (\ref{confaction}) is superconformal invariant at the quantum level without additional conditions on the couplings.
In the context of the AdS/CFT correspondence \cite{M,GKP,W} this is the theory whose strong coupling phase is described by the supergravity dual found in \cite{LM}. The ${\cal O}_{J}$ sector of this theory for large $J$ shares many similarities with the BMN sector of the ${\cal N}=4$ theory in the pp--wave limit. This opens the possibility for these operators to be dual to superstring states in some particular sector of the theory. It is interesting to consider the extension of our calculations to the case of $\b$ complex ($q \bar{q} \neq 1$). In this case the condition for superconformal invariance up to three loops in the planar limit becomes \begin{equation} \frac12 |h|^2 \left( q\bar{q} + \frac{1}{q\bar{q}} \right) = g^2 \label{condcomplex} \end{equation} When (\ref{condcomplex}) holds it is easy to see that the perturbative anomalous dimension still coincides with the expansion of the exact result (\ref{andimexact}). As in the case of real $\b$, consistency of the perturbative calculation with the exact result would suggest that the condition (\ref{condcomplex}) should be valid at any order. However, the proof we have presented in Section 5 makes repeated use of the requirement $q\bar{q}=1$ and cannot be immediately extended to the more general case. A different procedure should be found to prove or disprove that the one--loop condition (\ref{condcomplex}) is sufficient to ensure the exact conformal invariance even in the case of complex $\b$. Generalizations of the present results to other deformed theories are presently under investigation and will be reported in \cite{MPPSZ}. \vspace{1.5cm} \section*{Acknowledgements} \noindent This work has been supported in part by INFN, PRIN prot. 2003023852\_008 and the European Commission RTN program MRTN--CT--2004--005104. \newpage
\section{Introduction} \noindent What happens when a quantum system interacts with a classical measuring apparatus? Why is it that the wave function collapses from being in a superposition of the eigenstates of the measured observable, to being in just one of those states, in violation of the linear superposition principle obeyed by the deterministic Schr\"{o}dinger equation? And what is the origin of the Born probability rule? This set of questions is what is commonly known as the {\it quantum measurement problem} \cite{Wheeler-Zurek:1983,Bell:87,Albert:92,Leggett:2002,Leggett:2005,Ghirardi:2005,Maudlin:2011}. Broadly, there are three classes of explanations which have been investigated in detail. The first explanation is to say that collapse never takes place, and to explain the experimental result as a consequence of interaction with the environment, which causes decoherence \cite{Harris:81, Brune:96, Breuer:2000,Joos:03,Schlosshauer:2007, Zeh:70,Caldeira:81,Joos:85,Zurek:03}. This is supplemented with the many-worlds interpretation \cite{Everett:57,DeWitt:73,Kent:1990,Deutsch:1998,Vaidman:2002,Wallace:2003,Putnam:05, Tegmark:2007,Barrett:12,Saunders2010} so that an observer sees only one of the various elements of the diagonalized density matrix. This explanation requires no change to standard quantum theory, except a reinterpretation (many worlds). The second explanation is Bohmian mechanics \cite{Bohm:52,Bohm2:52,Duerr,Holland,Bohmbook, Bub:1997,DGZ} which is a mathematical reformulation of quantum theory, according to which particles move along definite trajectories, but there is a probability distribution in the initial conditions, which in turn reflects in different outcomes in successive repetitions of a quantum measurement. Bohmian mechanics makes the same experimental predictions as standard quantum theory, as far as both theories are understood. The third explanation is that standard quantum theory is an approximation to a stochastic nonlinear quantum theory \cite{Pearle:76,Pearle:79,Pearle:82,Pearle:84,Pearle:89,Pearle:99, Gisin:81,Gisin:84,Gisin:89,Diosi:88a,Diosi:88c,Gisin:95,Weinberg:11,Ghirardi:86}. There is nothing special about quantum measurement; spontaneous collapse of the wave function is an inherent property of the nonlinear theory, but in microscopic systems the collapse occurs so rarely that the theory is effectively indistinguishable from the standard linear Schr\"{o}dinger equation. However, for mesoscopic systems (such as a metal cluster of mass $10^{9}$ amu) and for macroscopic systems (such as the quantum system + measuring apparatus, or classical objects such as a table or a cat), collapse happens so frequently that any superposition breaks down very rapidly and the system gets well localized in position. The experimental predictions of this nonlinear theory are very close to the standard theory in the micro world, but differ from the standard theory in the meso and the macro world. A decisive experiment which chooses between standard quantum theory and stochastic nonlinear theories has not yet been performed, although an extraordinary worldwide effort in this direction is currently in progress \cite{Arndt:2014}. The main reason is that, even if for meso and macro systems the nonlinear effects may be relevant, in this regime the interaction with the environment plays also an important role, which masks the nonlinear effects. 
This is equivalent to saying, and very important to emphasize, that for objects larger than $10^{5}$ amu, quantum theory has not been tested. From $10^{18}$ amu [roughly the scale above which classical mechanics holds], down to $10^{5}$ amu, is an experimentally untested desert which spans some thirteen orders of magnitude! Thus, the way things stand today, decoherence plus many worlds, Bohmian mechanics, and nonlinear quantum theory, are all valid explanations of observed quantum phenomena and the quantum measurement process. Only future investigations can decide as to which (if any) of these explanations is the correct one; and such investigations are of tremendous importance in helping decide the domain of validity of the standard theory \cite{RMP:2012}. One successful formulation\footnote[1]{by successful we mean the model provides a solution for the quantum measurement problem without violating causality, which is a typical problem of deterministic nonlinear modifications to the Schr\"{o}dinger equation. } of stochastic nonlinear quantum theory, known as Continuous Spontaneous Localization [CSL], was proposed in the eighties \cite{Ghirardi2:90}, and has been studied very extensively since then. Simply put, CSL is a modification of the Schr\"{o}dinger equation, to which a stochastic nonlinear part is added, and two new fundamental constants of nature are introduced, a collapse rate $\lambda$, and a localization length scale $r_c$. CSL explains the collapse of the wave function during a measurement, and it explains the Born probability rule. The CSL model is being subjected to stringent experimental tests, and various constraints have also been imposed on its parameters from astrophysical and cosmological observations \cite{RMP:2012}. It is hoped that in the coming decades tests will either verify or rule out this model. Nonetheless, even if CSL were to be experimentally verified, it would still remain a phenomenological model, having been specifically designed to explain collapse of the wave function, and the Born probability rule. Furthermore, the fundamental constants $\lambda$ and $r_c$, as well as their numerical values, have been proposed in an ad hoc manner, so as to be consistent with experimental data. Moreover, the relativistic generalization of CSL is also sought for natural reasons. Significant progress would be made if one could understand why CSL is required in the first place, without taking recourse to the measurement problem, and the Born rule. Considerable effort has been invested in this direction, and some encouraging results have been obtained. One possibility is to consider CSL as a model which can be derived from a more fundamental underlying theory. A noteworthy effort has been due to Adler and collaborators, where quantum theory, and then CSL model [subject to some specific assumptions] are seen as emerging in the statistical limit from a deterministic theory known as Trace Dynamics \cite{Adler:04}. Another possibility is to look for some physical mechanism which could be effectively responsible for a significant modification of quantum theory, on macroscopic scales. One mechanism worth considering, and which has been explored in some detail, is the universally present force of gravity. All objects produce a gravitational field, including quantum objects, although today we do not know exactly how the gravitational field of a quantum object is related to its properties such as mass. 
Since the quantum object does not move on a definite trajectory, the gravitational field produced by it presumably has quantum fluctuations too. The evolution of the object's wave function in such a fluctuating geometry can in principle suffer decoherence and localization, as parts of the wave function that are sufficiently separated in space can lose phase coherence. This principle, or some variation of it, has actually been implemented by a few groups of researchers to show that this effect can cause loss of spatial coherence in macroscopic objects, while having negligible effect on microscopic physics. This phenomenon is commonly referred to as gravity induced decoherence of the wave function. The first work in this direction was carried out by Karolyhazy and collaborators [we will call this the K-model] \cite{Karolyhazi:66,Karolyhazi:86,Karolyhazy:74, Karolyhazy:90, Karolyhazy:95, Karolyhazy:1982,Frenkel:2002,Frenkel:90,Frenkel:95,Frenkel:97}. Subsequently, gravity induced decoherence models were also developed by Di\'osi [we will call this the D-model] \cite{Diosi:87a,Diosi:87,Diosi:89}. While we do not discuss a third development here, mention must be made of the important work of Penrose \cite{Penrose:96} on the effect of self-gravity on quantum evolution, and the subsequent investigations by various researchers on the Schr\"{o}dinger-Newton equation, reviewed for instance in \cite{RMP:2012, Bahrami2014a,Singh:2014}. It is important to stress that in the K-model as well as in the first version of the D-model \cite{Diosi:87a,Diosi:87} the evolution of the state vector is given by a random unitary Schr\"{o}dinger equation, i.e. a Schr\"{o}dinger equation where a stochastic potential describes the fluctuations in the geometry of the spacetime. Therefore, in contrast to the CSL model, these two models do not describe any real collapse of the wave function: they only explain an appearance of decoherence effects due to the presence of the stochastic potential. However, there is an important connection between random unitary Schr\"{o}dinger equations and nonlinear stochastic equations like the one of collapse models. As shown in \cite{Adler:2007}, given a random unitary Schr\"{o}dinger equation, there is always a corresponding nonlinear equation (which has exactly the same form as the collapse model equations), which leads to the same master equation. Therefore, even though the dynamics for the state vectors in these models are very different, as far as averaged quantities (derivable from the master equation) are concerned, the two dynamics lead to exactly the same predictions. As an example, in the case of the D-model, the corresponding collapse equation was proposed in \cite{Diosi:89}. Therefore, even though the K-model and D-model we consider here do not describe any real collapse of the wave function, they can still be used to get information about relevant quantities, like the typical length and time scales over which coherence effects are suppressed, which would also be the same for the corresponding gravity induced collapse equations. While both the models study gravity induced decoherence, and have some common features, they arrive at results which are quantitatively different at times. The purpose of the present paper is to compare and contrast the K-model and the D-model, and to understand why their results differ quantitatively.
The K-model begins by asking to what precision a length $s=cT$ in flat spacetime can be measured using a quantum probe which obeys the uncertainty principle. Karolyhazy shows that there will be a minimum uncertainty $\Delta s$ in the inferred length which is given by the relation \begin{equation} {\Delta s}^3 \sim l_p^2 \; s, \label{kuncertain} \end{equation} where $l_p =\sqrt{\hbar G/c^3}$ is the Planck length. This uncertainty is accounted for by hypothesizing that there coexist in spacetime a family of curved metrics $(g_{\mu\nu})_{\beta}$, each of which yields a corresponding value $s_{\beta}$ for the measured length. The family of metrics is chosen in such a way that they average to $s=cT$ with a variance \begin{equation} \Delta s^2 = \langle(s-s_\beta)^2\rangle \label{variance} \end{equation} (here $\langle...\rangle$ denotes the average over the metrics) which yields a $\Delta s$ which matches with the uncertainty given by (\ref{kuncertain}). As we will recall in the next section, this matching requires an appropriate choice for the family of metrics. Given such a family of metrics, one studies the propagation of an initial wavefunction $\Psi_0$ for an object of mass $m$ and size $R$ (assuming a spherical shape), in different metrics $(g_{\mu\nu})_{\beta}$. Inevitably, the wave function $\Psi_\beta$ at a later time will belong to a set $\{\Psi_\beta\}$ whose elements will differ from each other in their spatial dependence. In particular, the phase separation between two spatial points acquires a variance when averaged over the family, and a decoherence length scale $a_c$ (to which the wave function gets localized) is defined as the spatial separation $a_c$ over which the phase uncertainty becomes of the order of $\pi$. The decoherence time is given by $\tau_c \approx ma_c^2/\hbar$. Of great interest is the calculated dependence of the coherence length on the mass $m$ and the size $R$ of the object. The results of the K-model are as follows \cite{Frenkel:2002}. For an extended object of size $R$ the localization length is calculated from a generic form of the phase variance for a system of particles. There are two interesting cases: one for $R \gg a_c$ and another for $R \ll a_c$. For a micro-object of linear size $R\ll a_c$, the expression for the coherence length virtually reduces to that of an elementary particle of mass $m$ and the coherence length $a_c$ over which decoherence effects become relevant is given by \begin{equation} a_c \approx \frac{\hbar^2}{G}\; \frac{1}{m^3} = \left(\frac{L}{l_p}\right)^{2} L; \qquad L= \frac{\hbar}{mc}. \label{micro} \end{equation} While for $R \gg a_c$, the critical length can be expressed as, \begin{equation} a_c \approx \left(\frac{\hbar^2}{G}\right)^{1/3}\; \frac{R^{2/3}}{m} = \left(\frac{R}{l_p}\right)^{2/3} L. \label{macro} \end{equation} [Putting $a_c=R$ in Eqn. (\ref{macro}), the two expressions coincide. Thus $a_c=R$ denotes the transition from macro-regime to micro-regime.] Summarizing this, the following important inferences can be drawn: \begin{eqnarray} a_c\gg R &\implies& \frac{\hbar^2}{G} \gg m^3 R \qquad: micro-region,\\ \nonumber\\ a_c \approx R &\implies& \frac{\hbar^2}{G} \approx m^3 R \qquad: transition-region,\\ \nonumber\\ a_c\ll R &\implies& \frac{\hbar^2}{G} \ll m^3 R \qquad: macro-region. \end{eqnarray} Some estimates are of interest. From Eqn. (\ref{micro}) it can be estimated that for a proton \begin{equation} a_c \approx 10^{25} \ {\rm cm}, \qquad \tau_c \approx 10^{53} \ {\rm sec}. 
\label{proton} \end{equation} What this means is that while according to quantum theory, an initial wave function for the proton would continue to spread indefinitely and forever, gravity induced decoherence causes its loss of coherence, after an enormous time of $10^{53}$ sec, to a very large length scale $10^{25}$ cm. Given these length and time scales, we are of course completely justified in thinking of the proton as a quantum mechanical object. Contrast this with a macroscopic ball of radius $R$ = 1 cm, having density 1 gm/cm$^3$, for which we get \begin{equation} a_c \approx 10^{-16} \ {\rm cm}, \qquad \tau_c \approx 10^{-4} \ {\rm sec}. \label{ball} \end{equation} Furthermore, the transition from the micro to the macro domain occurs for $a_c = R$, which for a density of 1 gm/cm$^3$, works out from (\ref{macro}) to be \begin{equation} a_{tr} \approx 10^{-5} \ {\rm cm}, \qquad \tau_{tr} \approx 10^{3} \ {\rm sec}, \qquad m_{tr} \approx 10^{-14} \ {\rm gm}. \label{ball2} \end{equation} Notice that the coherence length $a_{tr}$ for transition matches with the favoured value for $r_c$ in the collapse models. The transition mass corresponds to about $10^{10}$ amu, which is still about five orders of magnitude higher than the largest masses (about $10^{5}$ amu) for which quantum position superposition has been observed through interferometry. The similarity between CSL and gravity induced localization has been discussed for instance in \cite{RMP:2012}. It is important though to emphasize that while CSL explains localization, as well as realization of a specific random outcome upon measurement in accordance with the Born rule, gravity models only explain localization [without selection of one outcome]. In this sense, these models should perhaps better be called models of gravity induced decoherence, instead of gravity induced collapse. The next model of gravity induced collapse was developed by Di\'osi, and in a spirit somewhat similar to the K-model, the work begins by asking to what accuracy a Newtonian gravitational field ${\bf g}$ can be measured by a quantum probe obeying the uncertainty principle. It is shown that the uncertainty $\delta {\bf g}$ in the measured field, averaged over a spacetime volume $VT$, is bounded by \begin{equation} (\delta\tilde{{\bf g}})^2 \geq G \hbar/VT\; . \label{dbound} \end{equation} This result is in spirit similar to the Karolyhazy bound (\ref{kuncertain}) mentioned above, and later in this paper we will discuss the relation between these two bounds. Di\'osi models this uncertainty by introducing a classical stochastic potential, whose two point correlation reflects this bound. The stochastic potential is then introduced as a source potential in the Schr\"{o}dinger equation, making the evolution of the wave function stochastic. A deterministic Markovian master equation can be deduced for the density matrix, and the stochastic potential is responsible for decoherence of the density matrix. The decoherence time $\tau_c$ and the localization length $a_c$ (related as before by $\tau_c = ma_c^2/\hbar$) can be calculated. For a spherical object of mass $m$ and size $R$ the localization length is given in two limiting cases by \begin{eqnarray} a_{c}&\sim& (\hbar^2/Gm^3)^{1/4}R^{3/4}, \quad {\rm if} \quad Rm^3 \gg \hbar^2/G,\nonumber\\ &\sim& (\hbar^2/Gm^3)^{1/2}R^{1/2}, \quad {\rm if} \quad Rm^3 \ll \hbar^2/G\; . 
\label{modelcoh} \end{eqnarray} The result in the first line, which is for the macro limit, should be compared with the corresponding K-model result, given by Eqn. (\ref{macro}). The two results are different, and one would like to understand the reasons for the difference. In the subsequent sections we compare the two models and show that while the two spacetime bounds (\ref{kuncertain}) and (\ref{dbound}) are equivalent, they do not imply a unique two-point noise correlation for the assumed stochastic potential. While the D-model assumes white noise, the noise correlation in the K-model is not white, and the corresponding master equation is non-Markovian. This confirms the earlier finding of \cite{DL1989} about the different noise in the two models; however, unlike \cite{DL1989}, we suggest that the minimal spacetime bounds in the two models are equivalent, and that the bound does not determine the noise correlation. Quantitatively too, the two models show a major difference, even though in both models the micro to macro transition takes place at the same value of the coherence length: $a_{tr} = R = \hbar^2/Gm^3$. In the D-model, for a proton, assumed to have a classical radius $R\approx 10^{-13}$ cm, the localization length and the decoherence time are found to be $10^{6}$ cm and $10^{15}$ sec respectively, much smaller than the corresponding numbers ($10^{25}$ cm and $10^{53}$ sec) for the K-model. In the macro limit, the D-model gives a localization length $10^{-12}$ cm for $R=1$ cm and a density of 1 gm/cm$^3$, which is larger than the K-model value by four orders of magnitude. Strangely enough, this corresponds to a decoherence time of about $10^3$ sec, which is unreasonably high, in contrast to the more plausible value $10^{-4}$ sec yielded by the K-model. Considering that these numerical values are now of interest to experimentalists in the field, it is highly desirable to try and make unique model predictions which do not differ by many orders of magnitude. It should be emphasised that both in the K-model and in the D-model, the classical stochastic potential is {\it postulated} by way of an assumption, to represent the quantum space-time uncertainty. Consequently, strictly speaking, the two-point correlation functions associated with the corresponding stochastic fields are also postulates, albeit ones that are motivated by certain physical and mathematical choices made in the respective models. The plan of this paper is as follows. In Section II we briefly review the K-model, and argue that one can think of the family of metrics as a stochastic potential. We then reconfirm the result of \cite{DL1989} on the two point noise correlation for this potential, and show that it corresponds to non-white noise. We also write down the non-Markovian master equation for this model. In Section III we recall the D-model, and show how one can think of the uncertainty bounds of the two models, Eqns. (\ref{kuncertain}) and (\ref{dbound}), as being equivalent to each other. We also argue that the methods used by the two models for calculating the decoherence time are equivalent. In Section IV we highlight that the spacetime uncertainty bound does not uniquely determine the noise correlation, and some additional criterion will have to be sought to obtain a unique prediction for the localization length.
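Since the numbers quoted in this Introduction are used repeatedly in what follows, it may be useful to note that all of them can be reproduced, at the order--of--magnitude level, with a few lines of code. The following Python sketch (in CGS units; the heuristic formulas do not fix order--one prefactors, so only the powers of ten should be compared) evaluates Eqns. (\ref{micro}), (\ref{macro}) and (\ref{modelcoh}) together with $\tau_c \approx m a_c^2/\hbar$ for a proton and for a ball of radius 1 cm and density 1 gm/cm$^3$, and reproduces the estimates (\ref{proton}), (\ref{ball}), (\ref{ball2}) and the D-model values quoted above.
\begin{verbatim}
import math

hbar = 1.055e-27          # erg s
G    = 6.674e-8           # cm^3 g^-1 s^-2

def tau(m, a):            # decoherence time  tau_c ~ m a_c^2 / hbar
    return m * a**2 / hbar

# ---- K-model ----
m_p = 1.673e-24                                   # proton; micro regime a_c ~ hbar^2/(G m^3)
a_Kp = hbar**2 / (G * m_p**3)
print("K proton    :", a_Kp, tau(m_p, a_Kp))      # ~1e25 cm, ~1e52-53 s

R, rho = 1.0, 1.0                                 # ball of radius 1 cm, density 1 g/cm^3
m_b = 4.0/3.0 * math.pi * R**3 * rho
a_Kb = (hbar**2 / G)**(1.0/3.0) * R**(2.0/3.0) / m_b     # macro regime
print("K ball      :", a_Kb, tau(m_b, a_Kb))      # ~1e-16 cm, ~1e-4 s

R_tr = (hbar**2 / G / (4.0/3.0*math.pi*rho)**3)**0.1     # transition: a_c = R, m = 4/3 pi rho R^3
m_tr = 4.0/3.0 * math.pi * rho * R_tr**3
print("K transition:", R_tr, m_tr, tau(m_tr, R_tr))      # ~1e-5 cm, ~1e-14 g, ~1e3 s

# ---- D-model ----
R_p = 1e-13                                       # proton 'classical radius'; micro branch
a_Dp = (hbar**2 / (G*m_p**3))**0.5 * R_p**0.5
print("D proton    :", a_Dp, tau(m_p, a_Dp))      # ~1e6 cm, ~1e15 s

a_Db = (hbar**2 / (G*m_b**3))**0.25 * R**0.75     # macro branch, same 1 cm ball
print("D ball      :", a_Db, tau(m_b, a_Db))      # ~1e-12 cm, ~1e3 s
\end{verbatim}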
\section{A brief review of the K-model, and some new results} In the first part of this section we report some of the most important properties of the K-model, following the derivation given in \cite{Karolyhazi:86,Karolyhazi:66}. As noted above, the bound (\ref{kuncertain}) is modeled by introducing a family of metrics $(g_{\mu\nu})_{\beta}$ which are very close to the Minkowski metric. $\beta=0$ labels the Minkowski metric. The proper length $s=cT$ between two world-points ${\bf x_1}$ and ${\bf x_2}$ is defined as the mean of the lengths $s_\beta$ as measured in different members of the family \begin{equation} s=cT=\langle {s}_\beta \rangle \label{avlength} \end{equation} and the uncertainty $\Delta s$ in the length of the line segment is given by Eqn. (\ref{variance}). Assuming the particle motion to be nonrelativistic, only the departure of the $g_{00}$ metric component from its Minkowski value is of interest, and one introduces the perturbation \begin{equation} (g_{00})_{\beta}({\bf x},t) = 1 + \gamma_{\beta}({\bf x},t)\; . \label{defbeta} \end{equation} Now, the idea is to select the set $\gamma_\beta$ in such a way that the length of the world line \begin{equation} s_{\beta}=\int dt\left[g_{\mu\nu}^{\beta}\frac{dx^{\mu}}{dt}\frac{dx^{\nu}}{dt}\right]^{1/2} \end{equation} averages to (\ref{avlength}), and the uncertainty in length as defined by Eqn. (\ref{variance}) matches with the bound (\ref{kuncertain}). For this purpose, the K-model assumes (since spacetime is free apart from matter particles) that the $\gamma_\beta$ satisfy the wave equation \begin{equation} \Box \gamma_{\beta}({\bf x},t) = 0\; . \label{box} \end{equation} In Section IV we will point out that this is not a unique choice for $\gamma_\beta$ and other choices can also yield (\ref{kuncertain}). To proceed further, it is convenient to make a Fourier expansion of the $\gamma_{\beta}$ satisfying (\ref{box}) with $\omega=|{\bf k}|c$ \begin{equation} \gamma_{\beta}({\bf x},t)=\frac{1}{\sqrt{l^{3}}}\sum_{{\bf k}}\left[c_{\beta}({\bf k}) e^{i({\bf k}\cdot{\bf x}-\omega t)}+c.c\right]\; . \label{fourier} \end{equation} Also, the K-model assumes $c_{\beta}({\bf k})=f(k) e^{i\alpha_{\beta}({\bf k})}$, where $\alpha$ is a random phase such that \begin{equation} \langle c_{\beta}({\bf k})\rangle =0, \qquad \langle c_{\beta}^{2}({\bf k})\rangle =0, \qquad \langle c_{\beta}({\bf k})c^{*}_{\beta}({\bf k'})\rangle = \delta_{{\bf k},{\bf k'}}(f(k))^{2}. \label{cond} \end{equation} This is a simplifying assumption of the model; namely that the $c_{\beta}({\bf k})$ are independent stochastic variables with zero mean, and a Gaussian probability distribution (see Eqn. 4 of \cite{Karolyhazy:90}). It is then shown that in order to recover (\ref{kuncertain}) the function $f(k)$ is given by \begin{equation} f(k)=l_p^{2/3}k^{-5/6}. \end{equation} Having determined the family $\gamma_\beta$ the next task is to determine the evolution of a given initial wave function $\psi_0$, and find out how the evolution depends on $\gamma_{\beta}$. Different metrics will result in different evolution, thus leading to a family of wave functions $\psi_\beta$, all of which in fact describe the same system. Decoherence results when there is a significant difference in the evolution as $\beta$ is varied: this is quantified as follows. 
We start from the Klein-Gordon equation \begin{equation} \frac{1}{\sqrt{-g_{\beta}}}\frac{\partial}{\partial x^{\mu}}(\sqrt{-g_{\beta}}g^{\mu\nu}_{\beta}\frac{\partial \phi}{\partial x^{\nu}})- \left(\frac{mc}{\hbar}\right)^2\phi=0,\nonumber \end{equation} and take its nonrelativistic limit to arrive at the Schr\"{o}dinger equation \begin{equation} i\hbar\frac{\partial}{\partial t}\psi_{\beta}=\left(-\frac{\hbar^2}{2m}\triangledown^2+V_{\beta}\right)\psi_{\beta} \label{nonrel} \end{equation} where the perturbing potential $V_\beta$ is given by $$V_{\beta}({\bf x},t)=\frac{mc^2\gamma_{\beta}({\bf x},t)}{2}.$$ The non-relativistic limit has been arrived at by first substituting the metric form\\ $diag(g_{00}, -1, -1, -1)$ in the Klein-Gordon equation and using the perturbative expansion (\ref{defbeta}) for $g_{00}$. Then, as is conventionally done, the state $\phi$ in the Klein-Gordon equation is written as $\phi\equiv e^{iS}$ and the function $S$ expanded as a power series in $c^2$: $S = c^2 S_0 + S_1 + c^{-2} S_2 + ..$. Substitution of this expansion in the Klein-Gordon equation, and comparison of terms at different orders in $c^2$, yields the non-relativistic Schr\"{o}dinger equation at order $c^0$ after the identification $\psi_\beta\equiv e^{iS_1/\hbar}$. A more detailed discussion can be found for instance in \cite{Kiefer1991}. An important remark is in order with regard to Eqn. (\ref{nonrel}). Since this equation is being treated as the non-relativistic limit of a relativistic system, semiclassical Einstein equations imply that in principle one ought to consider a contribution to the potential from self-gravity, of the form $\nabla^2 V_{self} \propto |\psi|^2$. The latter self-interaction is precisely what is considered in the Schr\"{o}dinger-Newton [SN] equation; however the SN equation does not consider the effect of spacetime uncertainty that is being studied in the K-model / D-model. In a sense the SN equation is complementary to the present study, although it has its own limitations \cite{Bahrami2014a}, and in particular does not incorporate quantum fluctuations of the mean self-gravity. In our view a complete treatment should simultaneously include both self-gravity and the effects of intrinsic spacetime uncertainty. To the best of our knowledge this has not been done, and we hope to investigate this in the future. Generalization of the above non-relativistic equation to the many-particle case is achieved by replacing the potential $V_\beta$ by \begin{eqnarray} U_{\beta}(\{{\bf X}\},t)=\sum_i\frac{m_ic^2\gamma_{\beta}({\bf x}_i,t)}{2}, \label{multi-potnal} \end{eqnarray} where $\{{\bf X}\}$ labels a point in configuration space: ${\{{\bf X}\}}=({\bf x_1}, {\bf x_2}, ....{\bf x_N})$. Then the Schr\"{o}dinger equation becomes: \begin{equation} i\hbar\frac{\partial}{\partial t}\Psi_{\beta}(\{{\bf X}\},t)=\left(H+U_{\beta}(\{{\bf X}\},t)\right)\Psi_{\beta}. \label{multi} \end{equation} To realize decoherence one starts with an initial wave function $\Psi_{0}(\{{\bf X}\},0)$, the same for all the metrics $\{g^{\mu\nu}_{\beta}\}$. After evolution, the various $\Psi_{\beta}(\{{\bf X}\},t)$ will differ from one another. It can be shown that, to a good approximation \cite{Karolyhazy:90} \begin{equation} \Psi_{\beta}(\{{\bf X}\},t)\approx\Psi_{0}(\{{\bf X}\},t)e^{i\phi_{\beta}(\{{\bf X}\},t)}, \end{equation} with \begin{equation} \phi_{\beta}(\{{\bf X}\},t)=-\frac{1}{\hbar}\int_{0}^{t}dt' U_{\beta}(\{{\bf X}\},t').
\end{equation} We fix an $\{{\bf X_1}\}$ and an $\{{\bf X_2}\}$, and calculate the difference in phase between these two points in configuration space for different $\beta$. The answer will depend on $\beta$ and on time. The root mean square spread in the phase (average is over $\beta$) $$\langle[\phi_{\beta}(\{{\bf X_1}\},t)-\phi_{\beta}(\{{\bf X_2}\},t)]^{2}\rangle^{1/2}$$ can be estimated as a function of $\{{\bf X_1}\}$, $\{{\bf X_2}\}$ and time $t$. The uncertainty in the relative phase depends only on the separation between the two points in configuration space, and for a sufficiently large separation $a_c$ can reach the value $\pi$ after some time. When that happens, decoherence and localization is said to occur, and the aforementioned results (\ref{micro}) and (\ref{macro}) are shown to hold. [The phase correlations are assumed to be Gaussian, so that the two-point function carries the entire information about the correlations.] In our paper, we will attempt to recast the analysis of the K-model in a manner which might be regarded as more conventional, and which facilitates comparison with the D-model. Thus, there is nothing which really prevents us from thinking of the family $\gamma_{\beta}$ as a stochastic potential with zero mean, and whose two point correlation is such that when a length $s=cT$ is measured in the presence of such a potential, it exhibits an uncertainty given by (\ref{kuncertain}). In order to avoid possible divergences due to taking a discrete distribution of point-like particles, we rewrite Eqn. (\ref{multi}) considering a system with mass density given by a smooth function $f({\bf x})$, \begin{equation} i\hbar\frac{\partial}{\partial t}\psi_{\beta}({\bf x}, t)=\left(H+\frac{c^2}{2}\int d^3 x^\prime \; f({\bf x}^\prime-{\bf x}) \; \gamma_{\beta} ({\bf x}^\prime,t)\right)\psi_{\beta}({\bf x}, t), \label{multi2} \end{equation} where $\gamma_{\beta}$ is now a stochastic potential, and the wave function is also a stochastic quantity, which represents the family $\psi_{\beta}$. We now compute the two point correlation (assuming a Gaussian probability distribution) for the stochastic potential $\gamma_{\beta}$ in the K-model. By means of the Fourier expansion (\ref{fourier}) we can write, \begin{equation} \corr{\gamma_{\beta}({\bf x},t)}{\gamma_{\beta}({\bf x'},t')}=\frac{1}{l^{3}}\corr{\sum_{{\bf k}}\sum_{{\bf k'}}\left[c_{\beta}({\bf k}) e^{i({\bf k}\cdot{\bf x}-\omega t)}+c.c\right]}{\left[c_{\beta}({\bf k'}) e^{i({\bf k'}\cdot{\bf x'}-\omega' t')}+c.c\right]} \end{equation} and using relations (\ref{cond}) we obtain \begin{equation} \corr{\gamma_{\beta}({\bf x},t)}{\gamma_{\beta}({\bf x'},t')}=\frac{1}{l^3}\sum_{{\bf k}}\left[f^2(k) e^{i{\bf k}\cdot({\bf x}-{\bf x'})}e^{-i\omega(t-t')}+f^2(k) e^{-i{\bf k}\cdot({\bf x}-{\bf x'})}e^{i\omega(t-t')}\right]. \end{equation} In the limit $l \to \infty$ we can write \begin{equation} \corr{\gamma_{\beta}({\bf x},t)}{\gamma_{\beta}({\bf x'},t')}=\frac{1}{(2\pi)^3}\int d{\bf k}\left[f^2(k) e^{i{\bf k}\cdot({\bf x}-{\bf x'})}e^{-i\omega(t-t')}+f^2(k) e^{-i{\bf k}\cdot({\bf x}-{\bf x'})}e^{i\omega(t-t')}\right] \end{equation} which, introducing $r=\vert{\bf x}-{\bf x'}\vert$ and $\tau=t-t'$ becomes \begin{equation} \corr{\gamma_{\beta}({\bf x},t)}{\gamma_{\beta}({\bf x'},t')}=\frac{1}{(2\pi)^3}\int d{\bf k}\left[f^2(k) e^{ikr\cos\theta}e^{-i\omega\tau}+f^2(k) e^{-ikr\cos\theta}e^{i\omega\tau}\right]. 
\end{equation} Let us first calculate the first term: \begin{eqnarray} I_1&=&\frac{1}{(2\pi)^2}\int^{\infty}_{0}\int^{\pi}_{0}k^2 dk f^{2}(k)\sin\theta d\theta e^{ikr\cos\theta}e^{-i\omega\tau}\\ \nonumber\\ &=&\frac{l_p^{4/3}}{(2\pi)^2}\frac{2}{r}\int^{\infty}_{0}k^{-2/3} \sin(kr)e^{-ikc\tau} dk. \nonumber \end{eqnarray} Similarly, the second term gives, \begin{equation} I_2=\frac{l_p^{4/3}}{(2\pi)^2}\frac{2}{r}\int^{\infty}_{0}k^{-2/3} \sin(kr)e^{ikc\tau} dk \end{equation} and, adding these two terms, we finally get: \begin{equation} \corr{\gamma_{\beta}({\bf x},t)}{\gamma_{\beta}({\bf x'},t')}=\frac{l_p^{4/3}}{(2\pi)^2}\frac{4}{r}\int^{\infty}_{0}k^{-2/3} \sin(kr)\cos(kc\tau) dk. \end{equation} Upon integration, the final form of two point correlation is, \begin{equation} \corr{\gamma_{\beta}({\bf x},t)}{\gamma_{\beta}({\bf x'},t')}= \frac{l_{p}^{4/3}}{4\pi^{2}r}\Gamma\left(1/3\right)\left[\frac{1}{(r+c|\tau|)^{\frac{1}{3}}}+\frac{\textrm{sign}(r-c|\tau|)}{|r-c|\tau||^{\frac{1}{3}}}\right]. \label{kcorr} \end{equation} This result was first reported in \cite{DL1989}, and this is evidently not white noise, and is the feature responsible for the difference in the results obtained for localization length in the K-model and the D-model. However, unlike what \cite{DL1989} seems to suggest, we will demonstrate in the next section that the uncertainty bounds (\ref{kuncertain}) and (\ref{dbound}) are equivalent. Furthermore, the phase variance method used in the K-model to determine the decoherence time will be shown to be equivalent to the more conventional method used in the D-model (i.e. studying the master equation for the density matrix). Thus these are not the reasons why the two models arrive at different results. We can also write down the non-Markovian master equation corresponding to this non-white noise. Rewriting (\ref{multi2}) without projecting to the position basis we have \begin{equation} i\hbar\frac{\partial}{\partial t}\left|\psi\left(t\right)\right\rangle =\left[H+\frac{c^{2}}{2}\int d^3 x' f\left({\bf x'}- \hat{{\bf q}}\right)\gamma_{\beta}\left({\bf x'},t\right)\right]\left|\psi\left(t\right)\right\rangle. \label{eq:single gen-2} \end{equation} It has been shown in \cite{Adler:2007}, that when the noise can be treated as a perturbation, the corresponding master equation to the lowest perturbative order in the noise is \begin{equation} \frac{d\rho\left(t\right)}{dt}=-\frac{i}{\hbar}\left[H,\rho\left(t\right)\right]-\left(\frac{c^{2}}{2\hbar}\right)^{2}\int d^3 x d^3 x'\int_{0}^{t}dt' \corr{\gamma_{\beta}({\bf x},t)}{\gamma_{\beta}({\bf x'},t')} \left[f\left({\bf x}\right),\left[f\left({\bf x}'\left(t'-t\right)\right),\rho\left(t\right)\right]\right], \end{equation} with $f\left({\bf x}\right)=f\left({\bf x'}-\hat{{\bf q}}\right)$, $\corr{\gamma_{\beta}({\bf x},t)}{\gamma_{\beta}({\bf x'},t')}$ given in Eq. (\ref{kcorr}) and ${\bf x'}\left(t'-t\right)$ the position operator in interaction picture evolved up to the time $t'-t$. The calculation of decoherence time from this master equation is by no means straightforward nor obvious, and the phase variance method used in the K-model is decidedly far simpler. As we show in the next section, for the D-model, the phase variance method is equivalent to the Markovian master equation, in so far as the calculation of the decoherence time is concerned. 
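Before moving on we note that the closed form (\ref{kcorr}) is easy to check numerically: it follows from the elementary integral $\int_0^\infty k^{-2/3}\sin(ka)\,dk=\Gamma(1/3)\sin(\pi/6)\,a^{-1/3}$, the factor $\cos(kc\tau)$ merely splitting $\sin(kr)\cos(kc\tau)$ into two such terms. A minimal sketch using the \texttt{mpmath} oscillatory quadrature routine (an illustration only; the slow algebraic decay of the integrand is handled by \texttt{quadosc}):
\begin{verbatim}
import mpmath as mp

# int_0^infty k^(-2/3) sin(k a) dk  =  Gamma(1/3) sin(pi/6) / a^(1/3)
for a in (mp.mpf('0.5'), mp.mpf('1'), mp.mpf('3')):
    numeric = mp.quadosc(lambda k: k**(-mp.mpf(2)/3) * mp.sin(k*a),
                         [0, mp.inf], period=2*mp.pi/a)
    closed  = mp.gamma(mp.mpf(1)/3) * mp.sin(mp.pi/6) / a**(mp.mpf(1)/3)
    print(a, numeric, closed)
\end{verbatim}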
We conjecture that the same equivalence between the phase variance method and the master equation holds if the Markovian master equation is replaced by the above non--Markovian equation of the K-model, and that this latter equation also yields decoherence in position. However, we do not have a proof for this, or a derivation of the phase variance method from this master equation, and we hope to address these questions in a future study. \section{A brief review of the D-model, and comparison with the K-model} Di\'osi and Luk\'acs \cite{Diosi:87a} work in the framework of Newtonian Quantum Gravity [NQG], where $G$ and $\hbar$ appear in the analysis, but $c$ does not. They ask: if a quantum probe is used to measure a classical gravitational field $\bf g$, what is the maximum accuracy with which ${\bf g}$ can be measured? They assume that in a realistic measurement only an average $\bf \tilde{g}$ of ${\bf g}$ \begin{equation} {\bf \tilde{g}}({\bf x}, t) = \frac{1}{VT} \int {\bf g}({\bf x^\prime}, t^\prime) d^{3}x^\prime dt^\prime, \qquad |{\bf x}-{\bf x^\prime}| < R, \qquad |t^\prime -t| < \frac{T}{2} \end{equation} over a space and time interval can be measured. The volume $V=4\pi R^3/3$ and the time-interval $T$ are properties of the probe, assumed to be a spherical object with linear extent $R$. The quantum probe, assumed to be a mass $M$ with wave-packet of initial extent $R$, picks up a momentum ${\bf P}=M{\bf \tilde{g}}T$ during the measurement time $T$. However there is a quantum uncertainty $\delta P \sim \hbar / R$ in the classical value of $P$, showing that there is an inaccuracy in the measurement of ${\bf \tilde{g}}$ given by $\delta ({\bf \tilde{g}}) \sim \hbar / MRT$. This inaccuracy can be decreased by increasing $M$, but the mass $M$ produces its own gravitational field, which disturbs the field being measured, and has an intrinsic uncertainty $\delta {\bf g}\sim GM/R^2$ because of the spread of the wave-packet. Consequently the optimal choice for $M$ is $M\sim \sqrt{\hbar R/GT}$ and the final minimal uncertainty in the measurement of the gravitational field is \begin{equation} \delta(\tilde{\bf g}) \sim \sqrt{G\hbar/VT}. \label{minimal} \end{equation} This minimal uncertainty appears to have a universal character, and could be mathematically modeled, for the sake of further application, as a stochastic contribution ${\bf g}_{st}({\bf x}, t)$ (having zero mean) to the classical field ${\bf g}_{cl}({\bf x}, t)$ \begin{equation} {\bf g}({\bf x}, t) = {\bf g}_{cl}({\bf x}, t) + {\bf g}_{st}({\bf x}, t)\; . \end{equation} The minimal bound (\ref{minimal}) can be recovered using this stochastic field, provided that, after spacetime averaging, it satisfies \begin{equation} \langle(\tilde{{\bf g}}_{st})^2\rangle \sim \frac{G\hbar}{VT}\; . \label{mean} \end{equation} The spacetime average that appears on the left hand side can be written explicitly, so that \begin{equation} \langle{\bf \tilde{g}}_{st}^2\rangle = \frac{1}{V^2 T^2}\int \int d^3 x \; d^3 x^\prime \; dt \; dt^\prime \langle{\bf g}_{st}({\bf x}, t) {\bf g}_{st} ({\bf x^\prime}, t^\prime)\rangle\; . \label{avg} \end{equation} Now, as is easily verified, a possible two-point correlation on the right hand side which will satisfy this relation is white-noise: \begin{equation} \langle{\bf g}_{st}({\bf x}, t) {\bf g}_{st} ({\bf x^\prime}, t^\prime)\rangle = {\bf I} G\hbar \delta({\bf x} - {\bf x^\prime}) \delta (t - t^\prime) \end{equation} The form of the correlation leads to the cancellation of a factor $VT$ in (\ref{avg}), which is what is desired.
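The optimization leading to the bound (\ref{minimal}) can also be made explicit with a short symbolic computation: minimizing the sum of the quantum kick uncertainty $\hbar/MRT$ and the probe's own gravitational disturbance $GM/R^2$ over the probe mass $M$ returns $M \sim \sqrt{\hbar R/GT}$ and a minimal uncertainty of order $\sqrt{G\hbar/R^3T} \sim \sqrt{G\hbar/VT}$. The following \texttt{sympy} sketch reproduces only this scaling argument (order--one factors are of course not meaningful here):
\begin{verbatim}
import sympy as sp

hbar, G, R, T, M = sp.symbols('hbar G R T M', positive=True)

# quantum kick on the probe + gravitational disturbance produced by the probe itself
delta_g = hbar/(M*R*T) + G*M/R**2

M_opt = sp.solve(sp.Eq(sp.diff(delta_g, M), 0), M)[0]
print(sp.simplify(M_opt))                        # sqrt(hbar R / (G T))
print(sp.simplify(delta_g.subs(M, M_opt)))       # proportional to sqrt(G hbar / (R^3 T))
\end{verbatim}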
Starting from the white--noise correlation above, it can be shown that the two point correlation for the gravitational potential $\phi$ defined by ${\bf g}=-\nabla\phi$ is given by \begin{equation} \langle\phi({\bf x}, t)\phi({\bf x}^\prime,t^\prime)\rangle - \langle\phi({\bf x}, t)\rangle\langle\phi({\bf x}^\prime,t^\prime)\rangle \sim \frac{G\hbar}{|{\bf x}-{\bf x}^\prime|}\;\delta(t-t^{\prime}), \label{phicorr} \end{equation} where $\langle\phi({\bf x},t)\rangle=\phi_{cl}({\bf x},t)$ and $\phi_{cl}({\bf x},t)$ satisfies the Poisson equation \begin{equation} \nabla^2\phi_{cl}({\bf x},t) = 4\pi G\rho({\bf x},t). \end{equation} As before, the probability distribution for the stochastic potential is assumed to be Gaussian. From here on, the analysis proceeds in a straightforward manner: one uses the correlation function for the stochastic potential, and then constructs a Schr\"{o}dinger equation for evolution of an object in this stochastic potential. From there one can construct a Lindblad master equation, from which the decoherence effects of the fluctuating gravitational potential can be deduced. This leads to Di\'osi's results mentioned above in Eqn. (\ref{modelcoh}). Now, we notice that Eqns. (\ref{mean}) and (\ref{avg}) do not uniquely imply that the noise is white. For instance, suppose that in Eqn. (\ref{avg}) the correlation on the right hand side has a form \begin{equation} \langle{\bf g}_{st}({\bf x}, t) {\bf g}_{st} ({\bf x^{\prime}}, t^{\prime})\rangle = {\bf I} G\hbar F({\bf x} - {\bf x^\prime}) G(t - t^{\prime}) \end{equation} with $F$ and $G$ functions other than delta-functions. Now it is obvious, by defining new coordinates, say ${\bf p} = {\bf x} - {\bf x'}$, ${\bf q} = {\bf x} + {\bf x'}$, $r=t-t'$, $s=t+t'$, that the right hand side of Eqn. (\ref{avg}) can be written as \begin{equation} \frac{1}{VT} \int \; |\det J|^{-1} \; d^3 p \; d r \; F({\bf p}) G(r). \end{equation} Here $J$ is the Jacobian of the transformation. This form can in principle yield (\ref{mean}) with a suitable choice of the functions $F$ and $G$ (other than delta functions), since a factor $1/VT$ has again cancelled out. Thus it seems to us that the minimal bound (\ref{mean}) can also be achieved by noise which is not white, and this would make a difference in the final quantitative conclusions one draws about decoherence and how it depends on the mass and size of the decohering object. [For the sake of completeness we recall that the master equation for the D-model has an intrinsic divergence. To regularise this divergence a cut-off was first proposed at the nucleon scale \cite{Diosi:89}; unfortunately such a cut-off leads to excessive heating inconsistent with observations. It was then proposed to raise this cut-off to a much higher value $r_c$, the length scale of the CSL model \cite{Ghirardi3:90}. While this solves the heating problem, it is difficult to physically justify the inclusion of the scale $r_c$ in the D-model, one of whose motivations was to give a parameter free description of decoherence and collapse. In a recent paper \cite{Bassi2014}, the issue of this divergence has been discussed in detail, and the authors also consider the possibility of avoiding overheating by introducing dissipation in the dynamics, instead of introducing a high cut-off.]
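To illustrate the remark made above that the bound (\ref{mean}) does not force a delta correlation, consider (in the time sector; the spatial factor works analogously) a normalized correlation of finite width $\tau_c$, for instance $G(t-t') = e^{-|t-t'|/\tau_c}/2\tau_c$. Its double integral over $[0,T]^2$ equals $T-\tau_c(1-e^{-T/\tau_c})\approx T$ for $T\gg\tau_c$, so one factor of $T$ in (\ref{avg}) still cancels, exactly as for white noise. A small numerical sketch (plain \texttt{numpy}; a crude Riemann sum is sufficient for the illustration):
\begin{verbatim}
import numpy as np

tau_c = 1.0

def double_integral(T, n=2000):
    t  = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    K  = np.exp(-np.abs(t[:, None] - t[None, :]) / tau_c) / (2.0 * tau_c)
    return K.sum() * dt * dt          # int_0^T int_0^T G(t-t') dt dt'

for T in (5.0, 50.0, 500.0):
    # numeric estimate vs closed form T - tau_c (1 - exp(-T/tau_c));  both -> T for T >> tau_c
    print(T, double_integral(T), T - tau_c*(1.0 - np.exp(-T/tau_c)))
\end{verbatim}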
In principle, it is possible to consider an idealized quantum probe whose spread is small enough that the gravitational field may be assumed uniform over the extent of the probe and over the duration of the measurement. In such a case, the analysis leading up to Eqn. (\ref{minimal}) can be repeated, but now without referring to spacetime averaging, and the minimal uncertainty will be given by \begin{equation} \delta({\bf g}) \sim \sqrt{G\hbar/VT}, \label{minimal2} \end{equation} where no reference to averaging is made. It is possible to relate this bound to the spacetime bound in the K-model, as we will see in a moment. Spacetime averaging is of course essential in the D-model, for the purpose of deducing a white noise correlation, as is evident from Eqn. (\ref{avg}). If spacetime averaging is not done, and we accept the bound (\ref{minimal2}), then this can be related to the minimal bound in the K-model, once we think of Newtonian gravity as an approximation to general relativity. The bound (\ref{minimal2}) is equivalent to an uncertainty in the measured gravitational potential, given by \begin{equation} \delta(\phi) \sim \sqrt{G\hbar/RT}\; . \label{Dvariance} \end{equation} If the mean value of $\phi$ is zero, then $\delta (\phi)$ is of the order of the perturbing potential which distorts a Minkowski spacetime background. Now, if one attempts to measure a length $s=cT$, an uncertainty in its measurement will be induced by $\delta (\phi)$, as follows: \begin{equation} s^\prime = \sqrt{g_{00}} cT = \sqrt{1+ \frac{2\phi}{c^2}} cT \sim cT + \sqrt{\frac{G\hbar}{c^4 RT}}cT , \end{equation} so that \begin{equation} (\Delta s)^2 \equiv (s'- cT)^2 \sim l_p^2 \frac{s}{R}, \label{s1eqn} \end{equation} which with the assumption $R=\Delta s$ becomes the Karolyhazy relation ${\Delta s}^3 = l_p^2 \; s$. [Equating $R$ and $\Delta s$ is justified because we are looking for the minimum length uncertainty; a smaller value of $R$ would increase $\Delta s$ and a larger value of $R$ would imply that the actual uncertainty is $R$ and not $\Delta s$.] Furthermore, it can be shown that the averaged potential in the D-model also implies the K-model spacetime bound, provided the white noise correlation is assumed. We calculate the stochastic world line length similar to $s_{\beta}$ in the K-model, but now with a spacetime averaged potential $\tilde{\phi}=\frac{1}{VT}\int\int \phi({\bf x},t) \,d^{3}x \,dt$ : \begin{equation} s = \int_{0}^{T} \sqrt{1+\frac{2\tilde{\phi}}{c^{2}}} c \,dt. \end{equation} Note that here $\tilde{\phi}$ is a stochastic variable and the line element $s$ thus obtained is also stochastic. Hence, \begin{equation} s-s_{avrg} \simeq c\int_{0}^{T}\frac{\tilde{\phi}}{c^{2}} \,dt , \end{equation} where $s_{avrg}=cT$. Next we calculate $\Delta s^{2} =\langle (s-s_{avrg})^{2}\rangle$: $$\Delta s^{2}=\frac{1}{c^{2}}\corr{\int_{0}^{T}\tilde{\phi} \,dt'}{\int_{0}^{T}\tilde{\phi} \,dt''}.$$ As $\tilde{\phi}$ has already been averaged over the measuring time $T$, it depends only weakly on $t$ and can be taken to remain almost constant within the time $T$. Using this, we can directly integrate over time to get $$\Delta s^{2}= \frac{T^{2}}{c^{2}}\langle\tilde{\phi}^{2}\rangle \; .$$ Now, we explicitly calculate $\langle\tilde{\phi}^{2}\rangle $, $$ \langle\tilde{\phi}^{2}\rangle = \frac{1}{V^{2}T^{2}}\corr{\int\int \phi({\bf x},t) \,d^{3}x \,dt}{\int\int \phi({\bf x'},t') \,d^{3}x' \,dt'}\; . 
$$ Using the two point correlation function (\ref{phicorr}) we get, $$ \langle\tilde{\phi}^{2}\rangle = \frac{\hbar GT}{V^{2}T^{2}}\int\int \frac{1}{\vert\vec{x}-\vec{x'}\vert} \,d^{3}x \,d^{3}x'\,. $$ We now calculate the double integral by first denoting $\vec{x}\equiv (x_{1}, x_{2}, x_{3}) $ and $ \vec{x'}\equiv (x'_{1}, x'_{2}, x'_{3}) $ and make a coordinate transformation as follows: $$ s= x_{1}-x'_{1}, \quad p=x_{2} -x'_{2}, \quad q= x_{3}-x'_{3}, \quad l = x_{1} + x'_{1}, \quad m = x_{2} + x'_{2}, \quad n = x_{3} + x'_{3}. $$ This allows us to write the integral as: $$\frac{1}{\vert\det J \vert}\int\int\frac{1}{\sqrt{s^{2}+p^{2}+q^{2}}} \,ds \,dp \,dq \,dl \,dm \,dn = \frac{1}{\vert\det J \vert}\int\int\frac{1}{r} \,d^{3}r \,dl \,dm \,dn.$$ To find the limits of integration, we note that since the dimension of the probe is $R$, we can say, $$ x_{1},x_{2}, x_{3}, x'_{1}, x'_{2}, x'_{3} : -R/2 \to R/2, \qquad r : 0 \to R $$ and, evaluating the integral, we get \begin{equation} \int\int \frac{1}{\vert\vec{x}-\vec{x'}\vert} \,d^{3}x \,d^{3}x' \simeq R^{5}. \end{equation} This is of the order $V^{2}/R $, and substituting this result back in the original equation, we get \begin{equation} \Delta s^{2}=\frac{ l_p^{2}}{R} s, \label{seqn} \end{equation} where $s=cT$. Now $R \approx \Delta s $ reproduces the K-model bound, \begin{equation} \Delta s^{3}= l_p^{2} s\; . \end{equation} We thus see that the minimal spacetime bounds in the D-model and in the K-model are essentially equivalent, and the difference in their final results is coming about because in one model the noise is white, and in the other it is not. We have seen above that in the K-model the decoherence time is estimated by setting the phase variance to be of the order $\pi^2$. On the other hand, in the D-model, decoherence time is estimated from the master equation. We show below that in the D-model, the phase variance method gives the same result for decoherence time, as the master equation. This suggests that the phase variance method could well be sufficiently general, and equivalent to the master equation method, for a non-Markovian equation as well. The phase of a wavefunction moving in a potential $U_{\beta}({\bf x},t)$ after time T is, $$\delta_{\beta}({\bf x},T)= -\frac{1}{\hbar}\int_{0}^{T}U_{\beta}({\bf x},t)\,dt\; .$$ In Di\'osi's model, the potential for a given configuration ${\bf X}$ is given as follows \cite{Diosi:87}: \begin{equation} U({\bf X},t)=\int_{vol} \phi({\bf x},t)f({\bf x}\vert {\bf X})\,d^{3}x \end{equation} where $f({\bf x}\vert {\bf X})$ denotes the mass density function at a point ${\bf x}$ for the configuration ${\bf X}$.\\ Thus phases accumulated at time $t$ at configuration ${\bf X}$ is \begin{eqnarray} \delta({\bf X},t)&=&-\frac{1}{\hbar}\int_{0}^{t}\int_{vol}\phi({\bf x'},t')f({\bf x'}\vert {\bf X})\, d^{3}x' \,dt'\, \end{eqnarray} We now evaluate the variance $\langle[\delta({\bf X},t)-\delta({\bf X'},t)]^{2}\rangle$. We can write $$ \langle[\delta({\bf X},t)-\delta({\bf X'},t)]^{2}\rangle = \frac{1}{\hbar^{2}}\left\langle\left[\int_{0}^{t}\int_{vol}\phi({\bf x'},t')f({\bf x'}\vert {\bf X})\, d^{3}x' \,dt' - \int_{0}^{t}\int_{vol}\phi({\bf x''},t'')f({\bf x''}\vert {\bf X'})\, d^{3}x'' \,dt''\right]^{2} \right\rangle\,,$$ calculate term by term and take the stochastic average using Eqn. (\ref{phicorr}). 
The square modulus of the first term is \begin{eqnarray} \corr{\int_{0}^{T}\int_{vol}\phi({\bf x'},t')f({\bf x'}\vert {\bf X})}{\int_{0}^{T}\int_{vol}\phi({\bf x},t'')f({\bf x}\vert {\bf X})\, d^{3}x \,d^{3}x' \,dt' \,dt''} \nonumber\\ = \frac{GT}{\hbar}\int_{vol}\int_{vol}f({\bf x'}\vert {\bf X})f({\bf x}\vert {\bf X})\frac{1}{\vert {\bf x}-{\bf x'}\vert}\,d^{3}x \,d^{3}x'. \end{eqnarray} Similarly evaluating the square modulus of the second term and the cross term and summing those, we finally get, \begin{equation} \langle[\delta({\bf X},T)-\delta({\bf X'},T)]^{2}\rangle= \frac{GT}{\hbar}\int\int \,d^{3}x \,d^{3}x' \frac{[f({\bf x}\vert {\bf X})-f({\bf x}\vert {\bf X'})][f({\bf x'}\vert {\bf X})-f({\bf x'}\vert {\bf X'})]}{\vert {\bf x}-{\bf x'} \vert}\; . \end{equation} For decoherence, we need $\langle[\delta({\bf X},T)-\delta({\bf X'},T)]^{2}\rangle \approx \pi ^{2}$. This gives us a decay time scale, \begin{equation} \tau_{d}^{-1} = \frac{G}{\pi^{2}\hbar}\int\int \,d^{3}x \,d^{3}x' \frac{[f({\bf x}\vert {\bf X})-f({\bf x}\vert {\bf X'})][f({\bf x'}\vert {\bf X})-f({\bf x'}\vert {\bf X'})]}{\vert {\bf x}-{\bf x'} \vert}. \end{equation} This is same as the decay time obtained by Di\'osi using the master equation apart from some constant factors. Thus use of master equation or phase variance method gives similar result for the decay time. This suggests that the phase variance method is possibly a more general one which can give decay time for both white and non-white noise. \section{Spacetime uncertainty bound and the noise correlation} We have seen that in K-model and in D-model, the correlation functions of the potentials are completely different. Let us now try to find a general form of such potentials which would satisfy the following bound: \begin{equation} \Delta s^3 \sim l_p^2s. \end{equation} The most general form of the potential is, \begin{equation} \phi({\bf x},t)=K F_{st}({\bf x},t), \end{equation} where $K$ is a constant with a suitable combination of $G$, $c$ and $\hbar$. Now we will use this form to calculate the uncertainty in the length of a line element. Then we have, $$ s'=\int^T_0 \sqrt{g_{00}}cdt = \int^T_0 \sqrt{1+K F_{st}({\bf x},t)}cdt \approx c\int^T_0 \Big(1+\frac{1}{2}KF_{st}({\bf x},t)\Big)dt.$$ If we write $s=cT$, then \begin{equation}\label{s'-s} (s'-s)^2=\frac{K^2 c^2}{4}\Big(\int^T_0F_{st}({\bf x},t)dt\Big)^2\; . \end{equation} The uncertainty in the measurement of the length is obtained, as in the K-model, by averaging Eqn. (\ref{s'-s}): $\Delta s^2=\langle(s'-s)^2\rangle$. We assume the correlation function of $F_{st}({\bf x},t)$ to be separable in space and time, i.e. \begin{equation} \langle F_{st}({\bf x},t)F_{st}({\bf x}',t') \rangle=P({\bf x},{\bf x}')g(t,t'), \end{equation} which leads to: \begin{equation} \Delta s^2=\frac{K^2 c^2}{4}P({\bf x},{\bf x})\int_0^T\int_0^T g(t,t') \,dt \,dt'\; . \end{equation} The linear size of the object under consideration is $R$. At this point we impose the following three relations: $$ \Delta s^3 \sim l_{p}^2 s \qquad R \sim \Delta s \qquad s \sim cT .$$ For the above class of solutions, where we assumed that the noise correlation is separable in space and time coordinates, to obtain the Karolyhazy uncertainty relation, $P({\bf x},{\bf x})$ must be independent of ${\bf x}$ (e.g. 
$P({\bf x},{\bf x'})$ might depend on $|{\bf x}-{\bf x'}|) $, but it can be a function of $R$ and $g(t,t')$ can have a number of solutions such that the two point correlation function is given by: \begin{equation}\label{g(t,t')} g(t,t') =T^m t^{n_1} t^{'n_2}. \end{equation} We note that the form of the correlation function suggested above is neither the most general nor motivated from symmetries. However a large number of correlation functions can be constructed from the form given above and the linear combinations thereof. In general, the correlation function can take other forms also (e.g. a function of $(t-t')$ as we have already shown in the previous section). We now show below how different solutions lead to the same K-bound. Using the two point correlation in Eqn. (\ref{g(t,t')}), we find, $$ \Delta s^2=\frac{K^2 c^2}{4}P({\bf x},{\bf x})\frac{T^{m+n_1+n_2+2}}{(n_1+1)(n_2+1)}.$$ The constants can be adjusted to reproduce the K-bound from the above equation for different choices of $\{m,n_1,n_2\}$. Below we illustrate some examples for a simplified version where $n_1=n_2=n$. \begin{enumerate} \item[1)] $P({\bf x},{\bf x})=\frac{1}{R}$. In this case, we have, $$\Delta s^2 \propto \frac{1}{R} T^{m+2n+2}.$$ Now, since $R \sim \Delta s $ and $ s \propto T $, we can write, from the above equation, $$ \Delta s^3 \propto s^{m+2n+2}$$ and so, we must have, $m+2n+2=1$ which gives $m/2+n=-1/2 $. For this, we find, \begin{equation} \langle \phi^2({\bf x},t) \rangle =\frac{K^{2}}{R}T^m t^{2n}. \end{equation} All such $\phi({\bf x},t)$, for which $\frac{m}{2}+n=-\frac{1}{2}$, are possible solutions. Note that $m=-1 , n=0$ gives the form $\langle \phi^2 \rangle = \frac{G\hbar}{RT}$ which we had already predicted as a possible form of Di\'osi stochastic potential in Eqn. (\ref{Dvariance}). \item[2)] $P({\bf x},{\bf x})=1$. We get $$ \Delta s^2 \propto s^{m+2n+2}.$$ Since, according to K-bound, it must be $$\Delta s^2 \propto s^{2/3},$$ we get the condition: $$m+2n+2=\frac{2}{3}.$$ In general, we can say that if $P({\bf x},{\bf x})$ has a form $P({\bf x},{\bf x})=R^{2j}$ where $j$ is real, then the condition it must satisfy is \begin{equation} 1-j=\frac{3}{2}(m+2n+2). \end{equation} \end{enumerate} Thus, we see that for different choices of $j$, $m$ and $n$ we get different potentials all satisfying the Karolyhazy uncertainty relation. This has been shown for separable forms only. In general, the solution can be non-separable also, as in the K-model, where $\gamma({\bf x},t)$ cannot be separated in space and time coordinates. We conclude that, given the uncertainty in measurement or the space-time bound, the form of the potential cannot be uniquely determined. There is a whole class of solutions as discussed which lead to the same bound. It seems that the $\gamma$ in K-model and $\phi$ in Di\'osi's model are two special cases which simplify the mathematical treatment, but they are not unique choices. We are not suggesting that the above examples given by us are necessarily that of physically realistic noise, but rather that more work needs to be done to uniquely determine the gravitational noise correlation. It is important to compare our analysis with that of Di\'osi and Luk\'acs [DL] \cite{DL1989} who in their Eqn. (8) propose that the fundamental geodesic uncertainty relation is \begin{equation} \Delta s^2 \approx l_p^2 \frac{s}{R}, \label{DL} \end{equation} which is the same as our Eqns. (\ref{s1eqn}) and (\ref{seqn}) but different from the Karolyhazy relation (\ref{kuncertain}). 
To our understanding, DL suggest that (\ref{DL}), rather than (\ref{kuncertain}), is the fundamental relation. Through their analysis leading up to their Eqn. (11), which is \begin{equation} (\Phi)_{R,T}\approx\sqrt{\hbar G/RT}, \end{equation} they relate (\ref{DL}) to the metric uncertainty in their Eqn. (11), which is the same as our Eqn. (\ref{Dvariance}). On the other hand, as we have argued below Eqn. (\ref{s1eqn}), one should set $R\sim \Delta s$ for optimal minimal uncertainty, in which case Eqn. (\ref{DL}) above becomes the same as the Karolyhazy uncertainty bound (\ref{kuncertain}). In (\ref{DL}) above, it appears that increasing $R$ decreases $\Delta s$, but if $R>\Delta s$ this does not seem physically reasonable, since the uncertainty would be bounded from below by the probe size $R$. Hence $R\sim \Delta s$ seems optimal. Thus it seems to us that the main difference between the work of DL and our work is that whereas DL suggest the spacetime bounds in the two models are different, we have argued that the two bounds are actually equivalent to each other. Also, as we have attempted to demonstrate, the bound does not by itself favor white noise over colored noise, nor the other way around. It would appear that this issue is open for further examination. \section{Summary and concluding remarks} The K-model proposes a minimal spacetime uncertainty bound, namely the accuracy with which a length interval can be measured by a quantum probe. This bound is realized via a hypothesized coexisting family of metrics. The propagation of the wave function in this kind of a hazy spacetime is shown to lead to loss of coherence, which becomes relevant for macroscopic objects. We showed that this family of metrics can equivalently be interpreted as a stochastic potential, whose two-point noise correlation can be worked out, and shown to be non-white noise. The Schr\"{o}dinger evolution takes place in the presence of this stochastic potential, and the master equation for the density matrix is non-Markovian. The D-model also proposes an, apparently different, spacetime bound, i.e. the accuracy with which the gravitational field averaged over a spacetime region can be measured by a quantum probe. This uncertainty in the gravitational field can be modeled by a stochastic potential, which is assumed to have a white noise correlation. The Schr\"{o}dinger evolution of the wave function takes place in this stochastic potential, making the wave function stochastic. A Markovian master equation for the density matrix is set up, and once again decoherence in position basis for large objects is demonstrated. The quantitative estimates for the decoherence time and localization length are however different from those of the K-model. We showed that the spacetime uncertainty bounds in the two models are essentially equivalent to each other. We then argued that the difference in the quantitative results of the two models is due to the assumed nature of the noise - white in one case, and coloured in the other case. We also argued that the spacetime bound does not uniquely predict the noise correlation, and many choices are possible, each of which is likely to give different results for the decoherence time scale. White noise may be the simplest choice, but there seems to be no physical reason why gravitational effects must conform to white noise. Thus it would appear that additional criteria, apart from the minimal bound, are essential to precisely define a model of gravity induced decoherence. 
Nonetheless, it can be said that the role of gravity in decoherence is fundamentally suggested, and further investigation of this problem is highly desirable. \vskip\bigskipamount \noindent {\bf Acknowledgements:} The authors are grateful to Lajos Di\'osi for bringing Ref. \cite{DL1989} to their attention, and for helpful correspondence. The work of TPS is supported by Grant \# 39530 from the John Templeton Foundation. SD acknowledges support from NANOQUESTFIT, the COST Action MP1006 and INFN, Italy. SD and KL acknowledge the hospitality of the Tata Institute of Fundamental Research (Mumbai) where part of this work has been done. TPS would like to thank Aniket Agrawal for collaboration during the early stages of this work. \vskip\bigskipamount \vskip\bigskipamount \centerline{\bf REFERENCES}
\section{Introduction.} Many inverse problems are known to be severely ill-posed. This makes it extremely difficult to design reliable reconstruction algorithms and dramatically restricts their resolution in practice. However, in some cases, it has been observed numerically that the stability increases with respect to some parameter such as the wave number (or energy). Ill-posedness occurs at the stage of the continuation of solutions of partial differential equations from the observation set toward an obstacle. Several rigorous justifications of the increasing stability phenomena in the Cauchy (or continuation) problem in different settings were obtained by Isakov {\it et al.} \cite{HI, I07, IK, ASI10}. These justifications are in the form of conditional stability estimates which become nearly Lipschitz as the wave number $k$ gets large. For increasing stability for the Schr\"odinger potential from the Dirichlet-to-Neumann map we refer to \cite{I11}, \cite{IN}, \cite{INUW}, and \cite{IW}. As an important example of an (at least generically and locally) well-posed inverse scattering problem we mention the inverse backscattering problem \cite{ER}. We consider a solution $u$ to the Helmholtz equation \begin{equation} \label{H} (\Delta + k^2)u=0\;\mbox{in}\;{\mathbb R}^3\setminus \bar D, \end{equation} satisfying the Sommerfeld radiation condition \begin{equation} \label{Som} \lim r(\partial_r u -iku)(x) = 0 \;\mbox{as}\;r\rightarrow\infty. \end{equation} Here $r=|x|$. As is well known \cite{CK}, \cite{LP}, \cite{T}, the relations \eqref{H}, \eqref{Som} imply that $$ u(x)= \frac{e^{ikr}}{r} A(\sigma)+ O(r^{-2}), $$ where $\sigma= r^{-1} x$ and $A(\sigma)$ is the so-called scattering amplitude (or pattern). We are interested in the recovery of $u$ from $A$. It is known that $A$ is a (real) analytic function on the unit sphere. Uniqueness of $u$ follows from the well-known Rellich theorem. Our main goal is to study stability of this recovery. In this paper we will denote by $B_R$ the ball $\{x: |x|<R\}$ in ${\mathbb R}^3$. To state our results we use the complete orthonormal basis in $L^2(\partial B_1)$ formed of spherical harmonics $Y_n^m(\sigma), n=0,1,2,..., m=-n,...,0,...,n$. Let $A\in L^2(\partial B_1)$ and let $a_{m,n}$ be the coefficients of the expansion of $A$ with respect to $Y_n^m$, i.e., $A = \sum_{n,m} a_{m,n} Y_n^m$. For brevity, we introduce $$ Y_n(\cdot ;A) = a_n^{-1} \sum_{m=-n,...,n}a_{m,n}Y_n^m, \quad a_n=(\sum_{m=-n,...,n} |a_{m,n}|^2)^{\frac{1}{2}}, $$ and then \begin{equation} \label{exp} A(\sigma)= \sum_{n=0}^{\infty} a_n Y_n(\sigma;A). \end{equation} Observe that $\|A\|_{(0)}^2=\sum_{n=0}^{\infty} |a_n|^2$. As known \cite{CK}, \cite{T}, \begin{equation} \label{expu} u(x)= \sum_{n=0}^{\infty} u_n(r) Y_n(\sigma;A),\; \mbox{where}\; u_n(r) = ki a_n h_n^{(1)}(kr). \end{equation} For a function $u$ with the expansion \eqref{expu} we will use the following (natural) Sobolev norm \begin{equation} \label{norm} \| u\|^2_{(l)}(\partial B_R)= R^2 \sum_{m=0}^l \sum_{n=0}^{\infty} (\frac{n}{R})^{2m} |u_n(R)|^2. \end{equation} We let $\varepsilon_1^2=\sum_{n=0}^N |a_n|^2, \varepsilon_2^2=\sum_{n=N+1}^{\infty} |a_n|^2$. Now we state our main results, where we choose $N=[\sqrt{kR}]$ and $E=-\log \varepsilon_2$. Here $[a]$ is the integer part of $a$. \begin{theorem} \label{Th1} Assume that $2\leq kR$. 
Then we have the following stability estimates \begin{equation} \label{T1} \lVert u \rVert^2_{(0)}(\partial B_R) \leq \frac{2 e^2}{\pi} \varepsilon_1^2 + \frac{2}{\pi}e^{\frac{2}{R}} \varepsilon_2+ R^2\frac{M_1^2}{E+k}, \end{equation} and \begin{equation} \label{T2} \lVert u \rVert^2_{(0)}(\partial B_R) \leq \frac{2 e^2}{\pi} \varepsilon_1^2 + \sqrt{\frac{2R}{\pi k}}e^{\frac{1}{R}} M_1 \varepsilon_2^{\frac{1}{2}}+ R^2\frac{M_1^2}{E+k} , \end{equation} where $M_1=\|u\|_{(1)}(\partial B_R)$. \end{theorem} \begin{theorem} \label{Th2} Assume that $2\leq kR$. Then we have the following stability estimate: $$ \lVert \partial_r u \rVert^2_{(0)}(\partial B_R) \leq $$ \begin{equation} \label{T1der} \frac{ e^2}{\pi}(3+\sqrt{5}) k^2 \varepsilon_1^2 + k^2 e^{\frac{2}{R}}\varepsilon_2+ R^2\frac{M_2^2}{E+k-2\sqrt{E+k}+1} , \end{equation} where $M_2=\|\partial_r u\|_{(1)}(\partial B_R)$. \end{theorem} From the estimates \eqref{T1}, \eqref{T2} it is clear that the stability is nearly of Lipschitz type when $k$ is large. Indeed, the second and third terms on the right side of \eqref{T2} go to zero as powers of $k$, which quantifies the increasing stability. We observe that the bounds \eqref{T1}, \eqref{T2}, \eqref{T1der} are so-called conditional stability estimates: they guarantee stability under a priori constraints on higher norms of solutions. Due to the ill-posedness of the recovery of $u$ from $A$ (\cite{B}, \cite{CK}, \cite{I}, \cite{T}), stability estimates are impossible without such constraints. Known stability estimates for $u$ from its scattering amplitude \cite{B}, \cite{I}, \cite{T} are of logarithmic type, contain unknown constants, and do not indicate increasing stability for larger $k$. Our proofs use the well-known expression \eqref{expu} of $u$ via the expansion \eqref{exp} of the scattering amplitude as a series in spherical harmonics. The crucial step consists of explicit upper bounds for the Hankel functions $h_n^{(1)}(t)$ given by Lemmas 2.1, 2.3 with surprisingly short and elementary proofs. The theory of Bessel and Hankel functions abounds with basic but hard open questions (about sharp maxima, zeros, etc.) \cite{W}. While some bounds (similar to Lemma 2.2) and the asymptotic behaviour of these functions are well known, explicit bounds when $0<t<n$ are only partially available and the constants in these bounds are not explicit \cite{BRV}, Lemma 1, p. 364. Some refined properties of Bessel functions were used by F. John \cite{J} to find a crucial example showing growing instability for the continuation of solutions to the Helmholtz equation from the unit disk onto its complement in the plane. In \cite{IK}, by using energy integrals for the Bessel equation, we demonstrated increasing stability of the continuation for John's example in the low frequency zone (which grows with $k$). The paper is organized as follows. In Section~2 we will obtain some explicit bounds on the Hankel functions $h_n^{(1)}$. In Section~3 we present proofs of Theorem~\ref{Th1} and of Theorem~\ref{Th2}. In Section~4 we give applications to the increasing stability of the inverse obstacle scattering problem linearized around a sphere. Finally we discuss challenging open problems and possible further developments. \section{Some bounds of Hankel functions} We will use that \begin{equation} \label{h} h_n^{(1)}(t) = \sqrt{\frac{2}{\pi}}i^n \frac{e^{i t}}{t} \sum_{m=0}^n(-1)^n\frac{(n+m)!}{m! (n-m)!}\frac{i^m}{(2t)^m}, \end{equation} provided $0<t$. This is a well-known representation of the Hankel function given for example in \cite{JEL}, p. 
142, \cite{T}, p. 205, \cite{W}, p. 53. To prove the main results, we need elementary but crucial lemmas. \begin{lemma} \label{low} If $n^2<t$, then \begin{equation} \label{lowb} |h_n^{(1)}(t)|<\frac{\sqrt{2}e}{\sqrt{\pi}t}. \end{equation} \end{lemma} \begin{proof} Using \eqref{h} and the triangle inequality we obtain $$ |h_n^{(1)}(t)| \leq \frac{\sqrt{2}}{\sqrt{\pi} t} \sum_{m=0}^n \frac{(n+m)!}{m! (n-m)!}\frac{1}{(2t)^m} \leq $$ $$ \frac{\sqrt{2}}{\sqrt{\pi} t} (1+\sum_{m=1}^n \frac{(n-m)!}{(n-m)!} \frac{1}{m!} \frac{(n-m+1)...n(n+1)...(n+m)}{n^m (2n)^m}), $$ where we used the assumption that $n^2<t$. Since $m \leq n$, we have $(n-m+1)...n\leq n^m$ and $(n+1)...(n+m)\leq (2n)^m$, so continuing the bounds of $|h_n^{(1)}|$ we obtain $$ |h_n^{(1)}(t)| \leq \frac{\sqrt{2}}{\sqrt{\pi} t} \sum_{m=0}^n \frac{1}{m!} \leq \frac{\sqrt{2} e}{\sqrt{\pi}t}. $$ \end{proof} \begin{lemma} \label{global} If $0<t$, then \begin{equation} \label{globalb} |h_n^{(1)}(t)| < \frac{\sqrt{2}}{\sqrt{\pi}t} (1+\frac{n}{t})^n. \end{equation} \end{lemma} \begin{proof} Again using \eqref{h} and the triangle inequality we obtain $$ |h_n^{(1)}(t)| \leq \frac{\sqrt{2}}{\sqrt{\pi} t} \sum_{m=0}^n \frac{n!}{m! (n-m)!}(n+1)...(n+m) \frac{1}{(2t)^m} \leq $$ $$ \frac{\sqrt{2}}{\sqrt{\pi} t} \sum_{m=0}^n \frac{n!}{m!(n-m)!} (\frac{n}{t})^m \leq \frac{\sqrt{2}}{\sqrt{\pi} t} (1+ \frac{n}{t})^n , $$ due to the binomial formula. \end{proof} Now we similarly obtain bounds for derivatives of the Hankel functions. \begin{lemma} \label{lowder} If $n^2<t$, then \begin{equation} \label{lowb'} |\partial_t h_n^{(1)}(t)|<\frac{\sqrt{2}e}{\sqrt{\pi}} \frac{\sqrt{t^2+1}+1}{t^2}. \end{equation} \end{lemma} \begin{proof} Differentiating \eqref{h} we obtain $$ \partial_t h_n^{(1)}(t)= \frac{\sqrt{2}}{\sqrt{\pi} } \frac{e^{it}}{t} (\frac{it-1}{t} \sum_{m=0}^n (-1)^n\frac{(n+m)!}{m! (n-m)!} \frac{i^m}{(2t)^m} - $$ \begin{equation} \label{h'} \sum_{m=1}^n (-1)^n\frac{(n+m)!}{m!(n-m)!} \frac{i^m}{(2t)^m}\frac{m}{t}). \end{equation} Since $|it-1|=\sqrt{t^2+1}$, as in the proof of Lemma 2.1 we obtain \begin{equation} \label{first} |\frac{it-1}{t}\sum_{m=0}^n (-1)^n \frac{(n+m)!}{m! (n-m)!}\frac{i^m}{(2t)^m}| \leq \frac{\sqrt{t^2+1}}{t} e. \end{equation} For the second sum on the right side of \eqref{h'} we have $$ |\sum_{m=1}^n (-1)^n\frac{(n+m)!}{m!(n-m)!} \frac{i^m}{(2t)^m}\frac{m}{t}| \leq $$ \begin{equation} \label{second} \sum_{m=1}^n \frac{(n-m)!}{(n-m)!} \frac{1}{(m-1)!} \frac{(n-m+1)...n(n+1)...(n+m)}{n^m (2n)^m}\frac{1}{t} \leq \frac{e}{t}, \end{equation} where we used the assumption that $n^2<t$ and again followed the argument in Lemma 2.1. From \eqref{h'}, by the triangle inequality and using \eqref{first} and \eqref{second}, we obtain $$ |\partial_t h_n^{(1)}(t)| \leq \frac{\sqrt{2}}{\sqrt{\pi}}(\frac{\sqrt{t^2+1}}{t^2} e+ \frac{1}{t^2} e) $$ and complete the proof of \eqref{lowb'}. \end{proof} \begin{lemma} \label{globalder} If $0<t$, then \begin{equation} \label{globalb'} |\partial_t h_n^{(1)}(t)| \leq \frac{\sqrt{2}}{\sqrt{\pi}t} (\frac{\sqrt{t^2+1}}{t}+\frac{n}{t}) (1+\frac{n}{t})^n. \end{equation} \end{lemma} \begin{proof} As in the proof of Lemma 2.3 we will bound the two terms in \eqref{h'}. We have \begin{equation} \label{firstglobal} |\frac{it-1}{t}\sum_{m=0}^n (-1)^n \frac{(n+m)!}{m! (n-m)!}\frac{i^m}{(2t)^m}| \leq \frac{\sqrt{t^2+1}}{t} (1+\frac{n}{t})^n, \end{equation} by repeating the proof of Lemma 2.2. 
For the second sum on the right side of \eqref{h'} by the triangle inequality we have $$ |\sum_{m=1}^n (-1)^n\frac{(n+m)!}{m!(n-m)!} \frac{i^m}{(2t)^m}\frac{m}{t}| \leq $$ \begin{equation} \label{secondglobal} \frac{1}{t}\sum_{m=1}^n \frac{n!}{m!(n-m)!} (n+1)...(n+m)\frac{n}{(2t)^m} \leq \frac{n}{t} (1+\frac{n}{t})^n, \end{equation} where we again followed the argument in Lemma 2.1. Combining \eqref{h'}, \eqref{firstglobal}, and \eqref{secondglobal} we obtain $$ |\partial_t h_n^{(1)}(t)| \leq \frac{\sqrt{2}}{\sqrt{\pi}t} (\frac{\sqrt{t^2+1}}{t} (1+\frac{n}{t})^n+ \frac{n}{t}(1+\frac{n}{t})^{n}), $$ and complete the proof of \eqref{globalb'}. \end{proof} \section{Proof of main results} For an ($L^2$-)function $u$ on the sphere $\partial B_R$ we have the orthonormal expansion $$ u(x)= \sum_{n=0}^{\infty} u_n Y_n(\sigma;u) $$ and introduce the low frequency projector $$ P_N u(x)= \sum_{n=0}^{N} u_n Y_n(\sigma;u). $$ \begin{lemma} Assume that $2\leq kR$. Then we have the following stability estimate: \begin{equation} \label{lowbound} \lVert P_N u \rVert_{(0)}(\partial B_R) \leq \frac{\sqrt{2}}{\sqrt{\pi}} e \varepsilon_1. \end{equation} \end{lemma} This result follows from \eqref{lowb} and \eqref{norm}. It shows Lipschitz stability of the low frequency part $P_N u$ from the low frequency part of $A$. Since we choose $N=[\sqrt{kR}]$, this part approximates $u$ well when $k$ is large. Now we give a proof of Theorem 1.1. \begin{proof} Using the representation \eqref{expu} we obtain $$ \|u\|^2_{(0)}(\partial B_R)= \int_{\partial B_R}|u|^2(x) d\Gamma(x)= $$ $$ k^2\int_{\partial B_R}|\sum_{n=0}^{\infty} i^n a_n h_n^{(1)}(kR)Y_n(\sigma)|^2 d\Gamma(x) = $$ $$ k^2 R^2 \sum_{n=0}^{\infty} |a_n|^2 |h_n^{(1)}(kR)|^2, $$ due to the orthonormality of the system $Y_n$ on the unit sphere. We let $N_1=[\sqrt{E+k}]$ and consider two cases: 1) $N+1\leq N_1$ and 2) $N_1\leq N$. In case 1) we split the last sum into three terms obtaining $$ \|u\|^2_{(0)}(\partial B_R) = k^2 R^2 ( \sum_{n=0}^{N} |a_n|^2 |h_n^{(1)}(kR)|^2 + $$ \begin{equation} \label{split} \sum_{n=N+1}^{N_1} |a_n|^2 |h_n^{(1)}(kR)|^2 + \sum_{n=N_1+1}^{\infty} |a_n|^2 |h_n^{(1)}(kR)|^2). \end{equation} We have $$ (1+\frac{N_1}{kR})^{2N_1}\varepsilon_2^2= e^{2N_1 \log(1+\frac{N_1}{kR})-2E}\leq $$ \begin{equation} \label{N1} e^{2\frac{N_1^2}{kR}-2E}\leq e^{-E+\frac{2}{R}}= e^{\frac{2}{R}}\varepsilon_2, \end{equation} where we used that $\log(1+x)<x$ when $0<x$. Observe that $$ k^2 R^2\sum_{n=N_1+1}^{\infty} |a_n|^2 |h_n^{(1)}(kR)|^2\leq $$ \begin{equation} \label{N1+1} \frac{1}{(N_1+1)^2}k^2 R^2\sum_{n=N_1+1}^{\infty} n^2 |a_n|^2 |h_n^{(1)}(kR)|^2 \leq R^2\frac{M_1^2}{(N_1+1)^2}. \end{equation} Finally, from \eqref{split} by using \eqref{lowb}, \eqref{globalb}, \eqref{N1}, and \eqref{N1+1} we obtain $$ \|u\|^2_{(0)}(\partial B_R) \leq $$ $$ k^2R^2\frac{2e^2}{\pi (kR)^2} \varepsilon_1^2 + k^2R^2 e^{\frac{2}{R}} \frac{2}{\pi k^2 R^2}\varepsilon_2 + R^2\frac{M_1^2}{(N_1+1)^2} $$ which gives \eqref{T1} because of the choice of $N_1$. In case 2) instead of \eqref{split} we write $$ \|u\|^2_{(0)}(\partial B_R) = $$ \begin{equation} \label{split1} k^2 R^2 ( \sum_{n=0}^{N} |a_n|^2 |h_n^{(1)}(kR)|^2 + \sum_{n=N+1}^{\infty} |a_n|^2 |h_n^{(1)}(kR)|^2). 
\end{equation} As in \eqref{N1+1} we have \begin{equation} \label{N+1} k^2\sum_{n=N+1}^{\infty} |a_n|^2 |h_n^{(1)}(kR)|^2\leq \frac{M_1^2}{(N+1)^2}. \end{equation} Similarly to case 1), from \eqref{split1} by using \eqref{low} and \eqref{N+1} we obtain $$ \|u\|^2_{(0)}(\partial B_R) \leq k^2R^2\frac{2e^2}{\pi (kR)^2} \varepsilon_1^2 + R^2\frac{M_1^2}{(N+1)^2} \leq $$ $$ \frac{2e^2}{\pi} \varepsilon_1^2 + R^2\frac{M_1^2}{(N_1+1)^2} \leq \frac{2e^2}{\pi} \varepsilon_1^2 + R^2\frac{M_1^2}{E+k} $$ because in case 2) $N_1\leq N$ and $E+k \leq(N_1+1)^2$. So again \eqref{T1} follows. To prove \eqref{T2} we consider the same two cases: 1) $N+1\leq N_1$ and 2) $N_1\leq N$. In case 1), as in the previous proof, we have the equality \eqref{split}. Now we bound the second term on its right side in a different way: $$ k^2 R^2\sum_{n=N+1}^{N_1} |a_n|^2 |h_n^{(1)}(kR)|^2 = \sum_{n=N+1}^{N_1} \frac{kR}{n}|a_n||h_n^{(1)}(kR)| \cdot kRn|a_n||h_n^{(1)}(kR)| \leq $$ \begin{equation} \label{CS} R(\sum_{n=N+1}^{N_1} \frac{k^2R^2}{n^2}|a_n|^2|h_n^{(1)}(kR)|^2)^{\frac{1}{2}} (\sum_{n=N+1}^{N_1} k^2n^2|a_n|^2|h_n^{(1)}(kR)|^2)^{\frac{1}{2}}, \end{equation} where we use the Cauchy-Schwarz inequality. Bounding the first factor on the right side of \eqref{CS} via \eqref{globalb} and the second factor from the definition of the Sobolev norm, we obtain $$ k^2 R^2\sum_{n=N+1}^{N_1} |a_n|^2 |h_n^{(1)}(kR)|^2 \leq $$ $$ \frac{\sqrt{2}}{\sqrt{\pi}(N+1)}(1+\frac{N_1}{kR})^{N_1} R (\sum_{n=N+1}^{N_1} |a_n|^2)^{\frac{1}{2}} M_1 \leq $$ \begin{equation} \label{CS1} \frac{\sqrt{2}R}{\sqrt{\pi}(N+1)}(1+\frac{N_1}{kR})^{N_1} \varepsilon_2 M_1 \leq \sqrt{\frac{2}{\pi}}R \frac{1}{\sqrt{kR}} e^{\frac{1}{R}} M_1 \varepsilon_2^{\frac{1}{2}}, \end{equation} where we used \eqref{N1}. From \eqref{split} by using \eqref{CS1} we derive that $$ \|u\|^2_{(0)}(\partial B_R) \leq $$ $$ k^2R^2\frac{2e^2}{\pi (kR)^2} \varepsilon_1^2 + \sqrt{\frac{2R}{\pi}} \frac{1}{\sqrt{k}} e^{\frac{1}{R}} M_1 \varepsilon_2^{\frac{1}{2}} + R^2\frac{M_1^2}{(N_1+1)^2} $$ which as above produces \eqref{T2}. Case 2) is considered exactly as in the proof of \eqref{T1}. \end{proof} Now we similarly prove Theorem 1.2. \begin{proof} Now we choose $N_1=[\sqrt{E+k}]-1$ and again consider two cases: 1) $N+1\leq N_1$ and 2) $N_1\leq N$. In case 1), similarly to \eqref{split}, $$ \|\partial_ru\|^2_{(0)}(\partial B_R) = k^4 R^2 ( \sum_{n=0}^{N} |a_n|^2 |\partial_th_n^{(1)}(kR)|^2 + $$ \begin{equation} \label{splitder} \sum_{n=N+1}^{N_1} |a_n|^2 |\partial_t h_n^{(1)}(kR)|^2 + \sum_{n=N_1+1}^{\infty} |a_n|^2 |\partial_t h_n^{(1)}(kR)|^2). \end{equation} Using Lemma 2.4 and the obvious inequality $\frac{t^2+1}{t^2}\leq\frac{5}{4}$, provided $2\leq t$, we obtain $$ |\partial_t h_n^{(1)}(kR)|^2 \leq \frac{2}{\pi k^2 R^2} \frac{5}{4} (1+\frac{n}{kR})^{2(n+1)}\leq \frac{1}{ k^2 R^2} (1+\frac{n}{kR})^{2(n+1)}. $$ Similarly to \eqref{N1}, \begin{equation} \label{N11} (1+\frac{N_1}{kR})^{2(N_1+1)}\varepsilon_2^2 \leq e^{2\frac{N_1(N_1+1)}{kR}-2E}\leq e^{-E+\frac{2}{R}}= e^{\frac{2}{R}}\varepsilon_2, \end{equation} because $N_1=[\sqrt{E+k}]-1$. As in \eqref{N1+1}, \begin{equation} \label{N1+2} k^4 R^2\sum_{n=N_1+1}^{\infty} |a_n|^2 |\partial_t h_n^{(1)}(kR)|^2\leq R^2\frac{M_2^2}{(N_1+1)^2}. 
\end{equation} Hence $$ \|\partial_ru\|^2_{(0)}(\partial B_R) \leq \frac{e^2}{\pi}(3+\sqrt{5}) k^2\varepsilon_1^2+ e^{\frac{2}{R}}k^2\varepsilon_2+ R^2\frac{M_2^2}{(N_1+1)^2}. $$ Case 2) is considered exactly as in Theorem 1.1, by splitting into two terms instead of three in \eqref{splitder}. \end{proof} \section{Application to linearized inverse obstacle scattering} We consider a solution $u_0$ of the simplest scattering problem \begin{equation} \label{uH} \Delta u_0 + k^2 u_0=0 \;\mbox{in}\; {\mathbb R}^3 \setminus \bar D_0, \end{equation} with the Dirichlet boundary condition (soft obstacle) \begin{equation} \label{ub} u_0 = 1 \;\mbox{on}\; \partial D_0 \end{equation} and with the Sommerfeld radiation condition \eqref{Som} for $u_0$. Here $D_0$ is a bounded domain with $C^2$-boundary and with connected complement of $\bar D_0$. More important in applications is the hard obstacle problem where the Dirichlet boundary condition \eqref{ub} is replaced with the Neumann condition \begin{equation} \label{ub1} \partial_{\nu}u_1 = 1 \;\mbox{on}\; \partial D_0 \end{equation} for the solution $u_1$ to the Helmholtz equation \eqref{uH} with the radiation condition \eqref{Som}. We will consider the obstacle $D_0= B_R$ in ${\mathbb R}^3$. Then the scattering problem \eqref{uH}, \eqref{ub} has the explicit solution \begin{equation} \label{u0} u_0(x)= \frac{R}{e^{ikR}} \frac{e^{ikr}}{r} , \end{equation} and the hard scattering problem has the solution \begin{equation} \label{u1} u_1(x)= \frac{R^2}{(ikR-1)e^{ikR}} \frac{e^{ikr}}{r} . \end{equation} Observe that $u_0, u_1$ can be viewed as incident spherical waves. Let $D =\{x: r<R+d(\sigma)\}$ where $d$ is a function on $\partial B_1$ with small norm in $C^2$. It is known \cite{H} that the solution $u$ to the scattering problem \eqref{uH}, \eqref{ub} with $D_0$ replaced by $D$ is $u_0+v_0+...$, with the scattering amplitude $A_0+A(v_0)+...$, where $A_0$ is the scattering amplitude of $u_0$ and $A(v_0)$ is the scattering amplitude of the solution $v_0$ to the following scattering problem \begin{equation} \label{vH} \Delta v_0 + k^2 v_0=0 \;\mbox{in}\; {\mathbb R}^3 \setminus \bar D_0, \end{equation} \begin{equation} \label{vb} v_0 = -d \partial_r u_0 \;\mbox{on}\; \partial D_0 \end{equation} with the Sommerfeld condition \eqref{Som} for $v_0$. The term $...$ has norm bounded by $C\|d\|_0^2$. The linearized hard obstacle problem is similarly the following scattering problem \begin{equation} \label{v1H} \Delta v_1 + k^2 v_1=0 \;\mbox{in}\; {\mathbb R}^3 \setminus \bar D_0, \end{equation} \begin{equation} \label{v1b} \partial_r v_1 = k^2 u_1 d\;\mbox{on}\; \partial D_0 \end{equation} with the Sommerfeld condition \eqref{Som} for $v_1$. Unique solvability of the direct scattering problems in Sobolev and H\"older spaces is well known \cite{CK}, \cite{I}, \cite{T}. For example, for any $d\in H^1(\partial D_0)$ there is a unique radiating solution $v_0\in H^1(B_{\rho}\setminus \bar B_R)$ (for any $\rho>R$) to the linearized direct scattering problem \eqref{vH}, \eqref{vb}. The linearized inverse obstacle scattering problem is to find $D$ (or, equivalently, $d$) from $A(v_0)$ or $A(v_1)$. 
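The explicit formulas \eqref{u0} and \eqref{u1} are elementary to verify. The following short SymPy sketch (our addition, not part of the argument) checks that both functions satisfy the radial Helmholtz equation and the boundary conditions \eqref{ub}, \eqref{ub1} at $r=R$; the identification $\partial_\nu=\partial_r$ on $\partial B_R$ is assumed.
\begin{verbatim}
# SymPy check of the explicit solutions (u0) and (u1); illustrative only.
import sympy as sp

r, R, k = sp.symbols('r R k', positive=True)

u0 = (R/sp.exp(sp.I*k*R)) * sp.exp(sp.I*k*r)/r
u1 = (R**2/((sp.I*k*R - 1)*sp.exp(sp.I*k*R))) * sp.exp(sp.I*k*r)/r

def helmholtz(u):
    # For a radial function the Laplacian is u'' + (2/r) u'.
    return sp.diff(u, r, 2) + 2/r*sp.diff(u, r) + k**2*u

print(sp.simplify(helmholtz(u0)), sp.simplify(helmholtz(u1)))  # 0 0
print(sp.simplify(u0.subs(r, R)))                  # 1, condition (ub)
print(sp.simplify(sp.diff(u1, r).subs(r, R)))      # 1, condition (ub1)
\end{verbatim}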
\begin{corollary} For a solution $d$ of the inverse soft obstacle problem we have $$ \|d\|^2_{(0)}(\partial B_R)\leq \frac{R^2}{k^2R^2+1}(\frac{2 e^2}{\pi} \varepsilon_1^2 + \frac{2}{\pi}e^{\frac{2}{R}} \varepsilon_2)+ R^2\frac{\|d\|^2_{(1)}(\partial B_R)}{E+k} $$ and $$ \|d\|^2_{(0)}(\partial B_R)\leq \frac{R^2}{k^2R^2+1}(\frac{2 e^2}{\pi} \varepsilon_1^2 + \sqrt{\frac{2(k^2R^2+1)}{\pi k R}}e^{\frac{1}{R}} \|d\|_{(1)}(\partial B_R) \varepsilon_2^{\frac{1}{2}})+ R^2\frac{\|d\|^2_{(1)}(\partial B_R)}{E+k} $$ where $\varepsilon_1, \varepsilon_2, E$ are defined in Theorem 1.1 with $A$ replaced by $A(v_0)$. \end{corollary} \begin{proof} From \eqref{u0} by elementary calculations $$ \partial_r u_0 (R\sigma) = \frac{R}{e^{ikR}} \frac{ik e^{ikR}R -e^{ikR}}{R^2} = \frac{ikR-1}{R}. $$ Hence $|\partial_r u_0|^2=\frac{k^2R^2+1}{R^2}$ and $$ \|v_0\|^2_{(1)}(\partial B_R) = \frac{k^2R^2+1}{R^2}\|d\|^2_{(1)}(\partial B_R). $$ So this corollary follows from Theorem 1.1. \end{proof} \begin{corollary} For a solution $d$ of the inverse hard obstacle problem we have $$ \|d\|^2_{(0)}(\partial B_R)\leq $$ $$\frac{k^2R^2+1}{k^2 R^2} (\frac{e^2}{\pi}(3+\sqrt{5}) \varepsilon_1^2 + e^{\frac{2}{R}}\varepsilon_2)+ R^2\frac{\|d\|_{(1)}^2(\partial B_R)}{E+k-2\sqrt{E+k}+1} , $$ where $\varepsilon_1, \varepsilon_2, E$ are defined in Theorem 1.2 with $A$ replaced by $A(v_1)$. \end{corollary} \begin{proof} Observe that, according to \eqref{u1}, \eqref{v1b}, $$ \|\partial_r v_1\|^2_{(1)}(\partial B_R)= k^4\frac{R^2}{k^2R^2+1}\|d\|^2_{(1)}(\partial B_R). $$ Now from \eqref{v1b}, \eqref{u1} and Theorem 1.2 we have $$ \| d\|_{(0)}^2(\partial B_R)\leq $$ $$ \frac{k^2R^2+1}{k^2R^2}( \frac{e^2}{\pi}(3+\sqrt{5}) \varepsilon_1^2 + e^{\frac{2}{R}}\varepsilon_2)+ R^2\frac{\|d\|^2_{(1)}(\partial B_R)}{E+k-2\sqrt{E+k}+1} $$ and Corollary 4.2 follows. \end{proof} \section{Conclusion} We think that increasing stability is an important feature which leads to higher resolution of numerical algorithms. It is important to collect numerical evidence of this phenomenon. We tried to obtain the most explicit forms of stability estimates to make them useful in particular for the numerical solution of inverse scattering problems. It is important to expand the Lipschitz stability zone, i.e., to replace the condition $n^2<t$ of Lemma 2.1 by the most natural condition $ n <\theta k$ with some $\theta<1$. Given numerous previous efforts, this seems to be a hard problem. The results of this paper most likely imply similar increasing stability estimates when $B(0,R)$ is replaced by a strictly convex domain $D$. Indeed, one can represent the complement of such $D$ as the union of the family of the exteriors of spheres whose radii and centers are contained in a bounded set and use the bounds \eqref{T1}, \eqref{T2} for these spheres. For general convex obstacles we do not expect such explicit and simple bounds. Much more challenging is to show increasing stability for soft and hard (convex) obstacles. It is not clear even how to handle linearized problems near a sphere when the incident wave is the traditional $e^{ik\xi\cdot x}$ with $|\xi|=1$. While the solution $u_0$ of the unperturbed soft scattering problem is well known \cite{CK}, \cite{T}, it is difficult to control zeros of its normal derivative on the boundary and hence to use \eqref{vb}. \bibliographystyle{amsalpha}
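In the spirit of the call above for collecting numerical evidence, the bounds of Section~2 are easy to test numerically. The sketch below is our addition: it evaluates $h_n^{(1)}$ directly from the finite sum \eqref{h} and checks the estimates of Lemma 2.1 and Lemma 2.2 on a small sample of values; since only moduli enter, the phase factors in \eqref{h} are irrelevant, and the sampled grid is an arbitrary choice of ours.
\begin{verbatim}
# Sanity check of Lemmas 2.1 and 2.2 using the representation (h).
import cmath
from math import factorial, sqrt, pi, e

def h1(n, t):
    """h_n^{(1)}(t) computed from the finite sum in (h)."""
    s = sum((-1)**n * factorial(n+m)/(factorial(m)*factorial(n-m))
            * (1j**m)/(2*t)**m for m in range(n+1))
    return sqrt(2/pi) * (1j**n) * cmath.exp(1j*t)/t * s

for n in range(8):
    for t in (n*n + 1, 2*n*n + 5, 50.0, 200.0):
        val = abs(h1(n, t))
        assert val <= sqrt(2/pi)/t * (1 + n/t)**n + 1e-12      # Lemma 2.2
        if n*n < t:
            assert val <= sqrt(2)*e/(sqrt(pi)*t) + 1e-12        # Lemma 2.1
print("Lemmas 2.1 and 2.2 hold on the sampled values")
\end{verbatim}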
\section{Introduction} Let $G=(V,E)$ be a simple undirected graph with vertex set $V=V(G)=\{v_1,v_2,\dots,v_n\}$ and edge set $E=E(G)$. The {\it adjacency matrix} of $G$ is defined as the matrix $A(G)=[a_{ij}]$ of order $n$, where $a_{ij}=1$ if $v_{i}$ is adjacent to $v_{j}$, and $a_{ij}=0$ otherwise. The {\it degree matrix} of $G$ is defined by $D(G)={\rm diag}\{d(v_1),d(v_2),\dots,d(v_n)\}$, where $d(v_i)$ is the degree of the vertex $v_i$. The matrix $Q(G)=D(G)+A(G)$ is called the {\it signless Laplacian matrix} (or {\it $Q$-matrix}) of $G$. It is known that $Q$ is nonnegative, symmetric and positive semidefinite. So its eigenvalues are all nonnegative real numbers and can be arranged as: $q_1(G)\geq q_2(G)\geq \cdots \geq q_n(G)\geq 0.$ We simply call the eigenvalues of $Q(G)$ the {\it $Q$-eigenvalues} of the graph $G$, and refer the readers to \cite{cve1, cve2, cve3, cve4, cve5} for surveys on this topic. The least $Q$-eigenvalue $q_n(G)$ is denoted by $q_{\min}(G)$, and the eigenvectors corresponding to $q_{\min}(G)$ are called the {\it first $Q$-eigenvectors} of $G$. If $G$ is connected, then $q_{\min}(G)=0$ if and only if $G$ is bipartite. So, the connected non-bipartite graphs are considered here. The very early work on the least $Q$-eigenvalue can be found in \cite{des}, where the author discusses the relationship between the least $Q$-eigenvalue and the bipartiteness of graphs. Cardoso et al. \cite{car} and Fan et al. \cite{fan} investigate the least $Q$-eigenvalue of non-bipartite unicyclic graphs. Liu et al. \cite{liu} give some bounds for the clique number and independence number of graphs in terms of the least $Q$-eigenvalue. Lima et al. \cite{lima} survey the known results and present some new ones for the least $Q$-eigenvalue. Our research group \cite{wang} investigates how the least $Q$-eigenvalue of a graph changes by relocating a bipartite branch from one vertex to another vertex, and minimizes the least $Q$-eigenvalue among the connected graphs of fixed order which contain a given non-bipartite graph as an induced subgraph. A graph is called {\it minimizing} (or {\it maximizing}) in a class of graphs if its least $Q$-eigenvalue attains the minimum (or maximum) among all graphs in the class. Denote by $\mathscr{G}_n^k$ the set of connected non-bipartite graphs of order $n$ with $k$ pendant vertices. In this paper we determine the unique minimizing graph and the maximizing graph in $\mathscr{G}_n^k$, and hence provide a lower bound and an upper bound for the least $Q$-eigenvalue of a graph in terms of the number of pendant vertices. \section{Preliminaries} We first introduce some notation. We use $C_n$, $P_n$, $K_n$ to denote the cycle, the path, and the complete graph on $n$ vertices, respectively. We also use $Pv_1v_2\cdots v_n$ to denote a path on vertices $v_1,v_2,\ldots, v_n$ with edges $v_iv_{i+1}$ for $i=1,2,\ldots,n-1$. Let $G$ be a graph. The graph $G$ is called {\it trivial} if it contains only one vertex; otherwise, it is called {\it nontrivial}. The graph $G$ is called {\it unicyclic} if it is connected and has the same number of vertices and edges (or, equivalently, $G$ contains exactly one cycle). The {\it girth} of $G$ is the minimum of the lengths of all cycles in $G$. A {\it pendant vertex} of $G$ is a vertex of degree $1$. A path $Pv_0v_1\cdots v_{t-1}v_t$ in $G$ is called a {\it pendant path} if $d(v_1)=d(v_2)=\cdots=d(v_{t-1})=2$ and $d(v_{t})=1$. If $t=1$, then $v_0 v_1$ is a pendant edge of $G$. 
In particular, if $d(v_0)\geq 3$, we say $P$ is a {\it maximal pendant path}. Let $x=(x_1, x_2, \dots, x_n)^T $ be a column vector in $\mathbb{R}^n$, and let $G$ be a graph on vertices $V(G)=\{v_1,v_2, \dots, v_n\}$. The vector $x$ can be viewed as a function defined on $V(G)$, that is, any vertex $v_i$ is given by the value $x_i=:x{(v_i)}$. Thus the quadratic form $x^TQx$ can be written as $$x^TQx=\sum\limits_{uv\in E(G)}[x(u)+x(v)]^2. \eqno(2.1)$$ One can find that $q$ is a $Q$-eigenvalue of $G$ corresponding to an eigenvector $x$ if and only if $x \neq 0$ and $$[q-d(v)]x(v) =\sum_{u \in N_G(v)}x(u), \hbox{~ for each~} v\in V(G),\eqno(2.2)$$ where $N_G(v)$ denotes the neighborhood of the vertex $v$. In addition, for an arbitrary unit vector $x \in \mathbb{R}^n$, $$q_{\min}(G)\leq x^TQ(G)x,\eqno(2.3)$$ with equality if and only if $x$ is a first $Q$-eigenvector of $G$. Let $G_1$ and $G_2$ be two vertex-disjoint graphs, and let $v \in G_1$, $u \in G_2$. The {\it coalescence} of $G_1$ and $G_2$ with respect to $v$ and $u$, denoted by $G_1(v) \diamond G_2(u)$, is obtained from $G_1$, $G_2$ by identifying $v$ with $u$ and forming a new vertex. Let $G$ be a connected graph, and let $v$ be a cut vertex of $G$. Then $G$ can be expressed in the form $G=H(v)\diamond F(v)$, where $H$ and $F$ are subgraphs of $G$ both containing $v$. Here we call $H$ (or $F$) a {\it branch of $G$ with root $v$}. With respect to a vector $x$ defined on $G$, the branch $H$ is called {\it zero} if $x(v)=0$ for all $v \in V(H)$; otherwise $H$ is called {\it nonzero}. Let $G=G_1(v_2) \diamond G_2(u)$, $G^*=G_1(v_1) \diamond G_2(u)$, where $v_1$ and $v_2$ are two distinct vertices of $G_1$ and $u$ is a vertex of $G_2$. We say $G^*$ is obtained from $G$ by {\it relocating $G_2$ from $v_2$ to $v_1$}. In \cite{wang} the authors give some properties of the first $Q$-eigenvectors, and discuss how the least $Q$-eigenvalue of a graph changes when relocating a bipartite branch from one vertex to another vertex; see the following results. \begin{lemma} {\em\cite{wang}} \label{branch} Let $H$ be a bipartite branch of a connected graph $G$ with root $u$. Let $x$ be a first $Q$-eigenvector of $G$.\\ {\em (1)} If $x(u)=0$, then $H$ is a zero branch of $G$ with respect to $x$. \\ {\em (2)} If $x(u)\neq 0$, then $x(p) \neq 0$ for every vertex $p$ of $H$. Furthermore, for every vertex $p$ of $H$, $x(p)x(u)$ is positive or negative, depending on whether $p$ is or is not in the same part of bipartite graph $H$ as $u$; consequently, $x(p)x(q)<0$ for each edge $pq \in E(H)$. \end{lemma} \begin{lemma} {\em\cite{wang}} \label{tree} Let $G$ be a connected non-bipartite graph, and let $x$ be a first $Q$-eigenvector of $G$. Let $T$ be a tree with root $u$, which is a nonzero branch with respect to $x$. Then $|x(q)|< |x(p)|$ whenever $p,q$ are vertices of $T$ such that $q$ lies on the unique path from $u$ to $p$. \end{lemma} \begin{lemma} {\em\cite{wang}} \label{perturb} Let $G_1$ be a connected graph containing at least two vertices $v_1, v_2$, let $G_2$ be a connected bipartite graph containing a vertex $u$. Let $G=G_1(v_2)\cdot G_2(u)$ and $G^{\ast}=G_1(v_1)\cdot G_2(u)$. If there exists a first $Q$-eigenvector of $G$ such that $|x{(v_1)}|\geq{|x{(v_2)}|}$, then, $$ q_{\min}(G^{\ast})\leq{ q_{\min}(G)}$$ \noindent{with equality only if $|x{(v_1)}|=|x{(v_2)}|$ and $d_{G_2(u)}x{(u)}=-{\sum\limits_{v\in N_{G_2}(u)}{x(v)}}$}. 
\end{lemma} \begin{lemma} {\em\cite{wang}} \label{perpath} Let $G_1$ be a connected non-bipartite graph containing two vertices $v_1,v_2$, and let $P$ be a nontrivial path with $u$ as an end vertex. Let $G=G_1(v_2) \diamond P(u)$ and let $G^*=G_1(v_1) \diamond P(u)$. If there exists a first $Q$-eigenvector $x$ of $G$ such that $|x(v_1)|>|x(v_2)|$ or $|x(v_1)|=|x(v_2)|>0$, then $$q_{\min}(G^*) <q_{\min}(G).$$ \end{lemma} \section{Minimizing the least $Q$-eigenvalue among all graphs in $\mathscr{G}_n^k$} Let $\mathscr{U}_n^k(g)$ denote the set of unicyclic graphs of order $n$ with odd girth $g$ and $k \ge 1$ pendant vertices. Denote by $U_n^k(g;l; l_1, l_2, \ldots, l_k) \in \mathscr{U}_n^k(g)$ the graph of order $n$ obtained by coalescing $P_l$ with a cycle $C_g$ by identifying one of its end vertices with some vertex of $C_g$, and also coalescing this $P_l$ with each of the paths $P_{l_i}$ ($i=1, 2, \ldots, k$) by identifying its other end vertex with one of the end vertices of $P_{l_i}$, where $l \ge 1$, $l_i \ge 2$ for $i=1, 2, \ldots, k$, and $g+l+\sum_{i=1}^k l_i=n+k+1.$ If $l_1=l_2=\cdots=l_k=2$, $U_n^k(g; l; l_1, l_2, \ldots, l_k)$ is simply denoted by $U_n^k(g)$; see Fig. 3.1. In this section, we first show that $U_n^k(g)$ is the unique minimizing graph in $\mathscr{U}_n^k(g)$, and then investigate some properties of the least $Q$-eigenvalue and the corresponding eigenvectors of $U_n^k(g)$. By the eigenvalue interlacing property (see Lemma \ref{interlace} below), the problem of determining the minimizing graph in $\mathscr{G}_n^k$ can be transformed to that of determining the minimizing graph in $\mathscr{U}_n^k(g)$. \begin{theorem} \label{umin} Among all graphs in $\mathscr{U}_n^k(g)$, $U_n^k(g)$ is the unique minimizing graph. \end{theorem} {\it Proof:} Let $G$ be a minimizing graph in $\mathscr{U}_n^k(g)$, and let $C_g$ be the unique cycle of $G$ on vertices $v_1, v_2, \ldots, v_g$. The graph $G$ can be considered as one obtained from $C_g$ by identifying each $v_i$ with one vertex of some tree $T_i$ of order $n_i$ for each $i=1,2, \ldots, g,$ where $\sum_{i=1}^g n_i=n$. Note that some trees $T_i$ may be trivial, i.e., $n_i=1$. Let $x$ be a unit first $Q$-eigenvector of $G$. First, there exists at least one $i$, $1 \leq i \leq g$, such that $x(v_i)\neq 0$; otherwise, by Lemma \ref{branch}(1), each $T_i$, $1 \leq i \leq g$, is a zero branch of $G$ with respect to $x$, and it follows that $x$ is the zero vector, which is a contradiction. We also assert that each nontrivial tree $T_j$ is a nonzero branch with respect to $x$. Otherwise, there exists a nontrivial tree $T_j$ attached at $v_j$, $1 \leq j \leq g$, such that $x(v_j) = 0$. By Lemma \ref{perturb}, relocating the tree $T_j$ from $v_j$ to $v_i$ for some $i$ for which $x(v_i)\neq 0$, we obtain a graph in $\mathscr{U}_n^k(g)$ with smaller least $Q$-eigenvalue, a contradiction. Next, we contend that all maximal pendant paths are attached at the same vertex. Otherwise, there exist two maximal pendant paths, say $P$ and $P'$, attached at $p$ and $p'$, respectively. Without loss of generality, assume $|x(p)| \geq |x(p')| >0$. Note that $d(p') \geq 3$ by the definition of a maximal pendant path. Then by Lemma \ref{perpath}, we will arrive at a new graph still in $\mathscr{U}_n^k(g)$ but with smaller least $Q$-eigenvalue by relocating $P'$ from $p'$ to $p$. So $G$ is obtained from $C_g$ by attaching one path at some vertex of $C_g$ if $k=1$ (i.e. 
$G=U_n^k(g; n-g; 2)$), or $G=U_n^k(g; l; l_1, l_2,\ldots, l_k)$ if $k \ge 2$ for some positive integers $l \ge 1$ and $l_i \ge 2 \;(i=1,2, \ldots,k)$ satisfying $g+l+\sum_{i=1}^k l_i=n+k+1$. To complete the proof, we only need to consider the case of $k \ge 2$ and prove that $l_1=\ldots=l_k=2$. If not, say $l_i \geq 3$. Denote by $P_{l_i}=Pu_1u_2 \cdots u_{l_i}$, where $u_1$ is the common end vertex of other $k-1$ maximal pendant paths, $d(u_1) = k+1, d(u_2)=\cdots=d(u_{l_i-1})=2, d(u_{l_i})=1.$ By Lemma \ref{tree} and above discussion, $0<|x(u_1)|< |x(u_{l_i-1})|$. Relocating some $P_{l_j}$ other than $P_{l_i}$ from $u_1$ to $u_{l_i-1}$, by Lemma \ref{perpath} we would arrive at a new graph in $\mathscr{U}_n^k(g)$ with smaller least $Q$-eigenvalue, a contradiction. \hfill $\blacksquare$ \begin{coro} \label{usimple} The least $Q$-eigenvalue of $U_n^k(g)$ has multiplicity one. \end{coro} {\it Proof:} Let $C_g$ be the unique cycle of $U_n^k(g)$, and let $v$ be the (unique) vertex lying on $C_g$ with degree greater than $2$. From the proof of Theorem \ref{umin}, the value of $v$ given by any first $Q$-eigenvector of $U_n^k(g)$ is nonzero. Assume to the contrary, $x$ and $y$ are two linear independent first $Q$-eigenvectors of $U_n^k(g)$. There exists a nonzero linear combination of $x$ and $y$ such that its value at $v$ equals zero, which yields a contradiction. \hfill $\blacksquare$ \begin{center} \vspace{2mm} \includegraphics[scale=0.65]{fig31.eps}\\ \vspace{2mm} \small Fig. 3.1. The graph $U_n^k(g)$ \end{center} \begin{lemma} \label{uvector} Let $U_n^k(g)$ be the graph with some vertices labeled as in Fig. 3.1, where $v_1,v_2,\ldots,v_g$ are the vertices of the unique cycle $C_g$ labeled in anticlockwise way. Let $x$ be a first $Q$-eigenvector of $U_n^k(g)$. Then\\ {\em (1)} $x(v_i)=x(v_{g-i})$ for $i=1,2, \ldots, \frac{g-1}{2}.$\\ {\em (2)} $x(v_{\frac{g-1}{2}})x(v_{\frac{g+1}{2}}) >0$, and $x(v_i)x(v_{i+1})<0$ for other edges $v_iv_{i+1}$ of $U_n^k(g)$ except $v_{\frac{g-1}{2}}v_{\frac{g+1}{2}}$.\\ {\em (3)} $|x(v_g)| > |x(v_1)|>|x(v_2)| >\ldots > |x(v_{\frac{g-1}{2}})|>0.$ \end{lemma} {\it Proof: } From the proof of Theorem \ref{umin}, the tree attached at $v_g$ is a nonzero branch with respect to $x$, and by Lemma \ref{branch}(2) each edge $uv$ of the tree holds $x(u)x(v)<0$. So it suffices to consider those edges on the cycle. Observe that there exists an automorphism $\psi$ such that $\psi(v_i)=\psi(v_{g-i})$ for $i=1, 2, \ldots, \frac{g-1}{2}$, and $\psi$ preserves other vertices. Define a vector $x_{\psi}$ by $x_{\psi}(v)=x(\psi(v))$ for each vertex $v$ of $U_n^k(g)$. Then $x_{\psi}$ is also a unit first $Q$-eigenvector of $U_n^k(g)$. Noting that $q_{\min}(U_n^k(g))$ is simple and $x_{\psi}(v_g)=x(v_g)\neq 0$, so $x_{\psi}=x$, that is $x(v_i)=x(v_{g-i})$ for $i=1,2, \ldots, \frac{g-1}{2}.$ Since $x(v_{\frac{g-1}{2}})=x(v_{\frac{g+1}{2}})$, we have $x(v_{\frac{g-1}{2}})x(v_{\frac{g+1}{2}}) \geq 0$. If $x(v_{\frac{g-1}{2}})=x(v_{\frac{g+1}{2}})=0$, by considering the eigenvector equation (2.2) of $x$ at $v_{\frac{g-1}{2}}$, we have $x(v_{\frac{g-3}{2}})=0$. Repeating the above discussion, we finally obtain $x(v_g)=0$, a contradiction. Next, we claim that $x(u)x(v) \leq 0$ for any edge $uv$ on the cycle $C_g$ other than $v_{\frac{g-1}{2}}v_{\frac{g+1}{2}}$. Assume $pq$ is an edge on $C_g$ such that $x(p)x(q)>0$. 
Partition the vertices of the tree $U_n^k(g)-v_{\frac{g-1}{2}}v_{\frac{g+1}{2}}$ into two parts $S_1$ and $S_2$ such that every edge joins a vertex of one part to a vertex of the other part. Note that $v_{\frac{g-1}{2}},v_{\frac{g+1}{2}}$ lie in the same part, and $p,q$ lie in different parts. Define $\widetilde{x}$ on $U_n^k(g)-v_{\frac{g-1}{2}}v_{\frac{g+1}{2}}$ such that $\widetilde{x}(v)=|x(v)|$ if $v \in S_1$, and $\widetilde{x}(v)=-|x(v)|$ if $v \in S_2$. Then $\widetilde{x}^TQ(U_n^k(g))\widetilde{x}<x^TQ(U_n^k(g))x$, which yields a contradiction. The remaining part of assertion (2) will be proved after showing the last assertion. To prove the last assertion, we start with $|x(v_g)| > |x(v_1)|$. If not, relocating the pendant tree from $v_g$ to $v_1$, we can obtain a graph $G'$ with $q_{\min}(G') \leq q_{\min}(U_n^k(g))$ by Lemma \ref{perturb}. Noting that $G'$ is isomorphic to $U_n^k(g)$, $q_{\min}(G') = q_{\min}(U_n^k(g))$. Also by Lemma \ref{perturb}, equality occurs only if $x(v_g)=-x(w)$, where $w$ is the neighbor of $v_g$ in the pendant tree. This contradicts Lemma \ref{tree}. By induction, assume that $|x(v_{i-1})|>|x(v_i)|$ for $1 \leq i \leq \frac{g-1}{2}-1$, where $v_0:=v_g$. By the eigenvector equation (2.2) of $x$ at $v_i$, $$[q_{\min}(U_n^k(g))-2]x(v_i) =x(v_{i+1})+x(v_{i-1}).$$ Note that $x(v_{i-1})x(v_i) \le 0, x(v_i)x(v_{i+1}) \leq 0$ by what we have proved, and $0<q_{\min}(U_n^k(g))<1$ (see \cite{das}). By the induction hypothesis, $|x(v_{i})|>|x(v_{i+1})|$, and assertion (3) follows. By assertion (3) we can now deduce assertion (2). \hfill $\blacksquare$ \begin{coro} \label{unonzero} Let $x$ be a first $Q$-eigenvector of $U_n^k(g)$. Then $x$ contains no zero entries. \end{coro} Denote by $\alpha_n^k(g)$ the minimum of the least $Q$-eigenvalues of graphs in $\mathscr{U}_n^k(g)$, that is, the least $Q$-eigenvalue of $U_n^k(g)$. \begin{lemma} \label{umonotone} $\alpha_n^k(g)$ is strictly increasing with respect to $k \geq 1$ and odd $ g \geq 3 $, respectively. \end{lemma} {\it Proof:} Let $U_n^k(g)$ have some vertices labeled as in Fig. 3.1. Let $x$ be a first $Q$-eigenvector of $U_n^k(g)$. Suppose $k \geq 2$. Replacing the edge $uu_2$ by $u_1u_2$, we arrive at a new graph $\overline{G} \in \mathscr{U}_n^{k-1}(g)$, which satisfies $q_{\min}(\overline{G}) < q_{\min}(U_n^k(g))$ by Lemma \ref{perturb} as $|x(u)| < |x(u_1)|$. So, by Theorem \ref{umin} we have $$\alpha_n^{k-1}(g) \le q_{\min}(\overline{G}) < q_{\min}(U_n^k(g))=\alpha_n^k(g).$$ Next we prove the second result. Suppose $g \geq 5$. Replacing the edge $v_{g-2}v_{g-1}$ by the edge $v_{g-2}v_1$, we obtain a new graph $\widetilde{G} \in \mathscr{U}_n^{k+1}(g-2)$ whose least $Q$-eigenvalue is not greater than $\alpha_n^k(g)$ as $x(v_1)=x(v_{g-1})$ by Lemma \ref{uvector}. Then $$\alpha_n^k(g-2) < \alpha_n^{k+1}(g-2) \leq q_{\min}(\widetilde{G}) \leq \alpha_n^k(g).$$ The result follows. \hfill $\blacksquare$ \begin{lemma} {\em\cite{car}} \label{interlace} Let $G$ be a graph of order $n$ containing an edge $e$. Then $$q_1(G) \geq q_1(G-e) \geq q_2(G) \geq q_2(G-e) \geq \ldots \geq q_n(G) \geq q_n(G-e).$$ \end{lemma} Now we arrive at the main result of this section. \begin{theorem} \label{mineig} Among all graphs in $\mathscr{G}_n^k$, $U_n^k(3)$ is the unique minimizing graph. \end{theorem} {\it Proof:} Let $G$ be a minimizing graph in $\mathscr{G}_n^k$. Then $G$ contains at least an induced odd cycle, say $C_g$. 
Let $G'$ be a connected unicyclic spanning subgraph of $G$, which contains $C_g$ as the unique cycle and contains all pendant edges of $G$. Thus $G' \in \mathscr{U}_n^{k'}(g)$, where $k' \ge k$. By Lemma 3.6 and Lemma 3.5, $$q_{\min}(U_n^k(3))=\alpha_n^k(3) \le \alpha_n^{k}(g) \leq \alpha_n^{k'}(g) \leq q_{\min}(G') \leq q_{\min}(G).\eqno(3.1)$$ As $G$ is a minimizing graph in $\mathscr{G}_n^k$, all inequalities in (3.1) hold as equalities, which implies $k'=k, g=3$ and $G'=U_n^k(3)$ by Lemma 3.5 and Theorem \ref{umin}, and also $q_{\min}(G)=q_{\min}(U_n^k(3))$. Now we return to the original graph $G$, which is obtained from $G'=U_n^k(3)$ possibly by adding some edges. Suppose $E(G)\setminus E(U_n^k(3)) \neq \emptyset$. Recalling the definition of $G'$ and $U_n^k(3)$, the set $E(G)\setminus E(U_n^k(3))$ consists of some edges joining the vertices of $C_3$ and the vertices of $P_l$ or some edges within the vertices of $P_l$. So, for each edge $uv \in E(G)\setminus E(U_n^k(3))$, if $x$ is a first $Q$-eigenvector of $U_n^k(3)$, then $x(u)+x(v) \ne 0$ by Lemma \ref{uvector}(3) and Lemma \ref{tree}. Let $x$ be a unit first $Q$-eigenvector of $G$. Then \begin{eqnarray*} q_{\min}(G)&=&\sum_{uv \in E(G)}[x(u)+x(v)]^2\\ &=&\sum_{uv \in E(U_n^k(3))}[x(u)+x(v)]^2 + \sum_{uv \in E(G)\setminus E(U_n^k(3))}[x(u)+x(v)]^2\\ &\geq&\sum_{uv \in E(U_n^k(3))}[x(u)+x(v)]^2 \geq q_{\min}(U_n^k(3)). \end{eqnarray*} Since $q_{\min}(G)=q_{\min}(G')$, $x$ is also a first $Q$-eigenvector of $U_n^k(3)$, and for each edge $uv \in E(G)\setminus E(U_n^k(3))$, $x(u)+x(v) = 0$, which yields a contradiction. The result follows. \hfill $\blacksquare$ By Theorem \ref{mineig} and Lemma \ref{umonotone}, we have the following result. \begin{coro} \label{eigpen} Let $G$ be a connected graph of order $n$ which contains pendant vertices. Then $q_{\min}(G) \ge q_{\min}(U_n^1(3))$ with equality if and only if $G=U_n^1(3)$. If, in addition, $G$ contains $k$ pendant vertices, then $q_{\min}(G) \ge q_{\min}(U_n^k(3))$ with equality if and only if $G=U_n^k(3)$. \end{coro} Cardoso et al. \cite{car} determine the unique minimizing graph among non-bipartite connected graphs, namely the graph $U_n^1(3)$. By Lemma \ref{interlace}, the minimizing graph is unicyclic. If we know that the (unicyclic) minimizing graph contains pendant vertices, then we can also determine this minimizing graph by Corollary \ref{eigpen}. \section{Maximizing the least $Q$-eigenvalue among all graphs in $\mathscr{G}_n^k$} Let $\n=(\n_1,\n_2, \ldots, \n_{n-k}) \in \mathbb{N}^{n-k}$ be a nonnegative integer sequence arranged in non-increasing order, where $\n_1+\n_2+ \cdots+ \n_{n-k}=k$. In this section, all nonnegative integer sequences have the same form as $\n$. Denote by $K(\n)$ the graph obtained from $K_{n-k}$ on vertices $v_1,v_2,\ldots,v_{n-k}$ by attaching $\n_i$ pendant edges to $v_i$ for $i=1,2, \ldots, n-k$, respectively. By Lemma 3.6, the maximizing graph in $\mathscr{G}_n^k$ is achieved by $K(\n)$ for some $\n \in \mathbb{N}^{n-k}$. \begin{lemma} \label{maxvec} Let $x$ be a first $Q$-eigenvector of $K(\n)$. If $\n_i > \n_j$, then $|x(v_i)| \ge |x(v_j)|$. \end{lemma} {\it Proof:} Assume to the contrary that $|x(v_i)| < |x(v_j)|$. Relocating $\n_i-\n_j$ pendant edges from $v_i$ to $v_j$, we arrive at a new graph $\widetilde{G}$ with $q_{\min}(\widetilde{G}) < q_{\min}(K(\n))$ by Lemma \ref{perturb}.
But $\widetilde{G}$ is isomorphic to $K(\n)$ so that $q_{\min}(\widetilde{G}) = q_{\min}(K(\n))$, a contradiction.\hfill $\blacksquare$ Recalling the notation of majorization, if $\n=(\n_1, \n_2, \ldots, \n_r)$ and $\m=(\m_1,\m_2, \ldots,\m_r)$ are two nonnegative integer sequences arranged in non-increasing order, then $\n$ {\it majorizes} $\m$, denoted by $\n \succeq \m$, if, for $1 \le k \le r-1$, $$\sum_{i=1}^k\n_i \geq \sum_{i=1}^k \m_i {\rm~~and~~} \sum_{i=1}^r\n_i=\sum_{i=1}^r\m_i.$$ If $\n \succeq \m$ and $\n \ne \m$, we will denote $\n \succneqq \m$. \begin{lemma} \label{maj} Let $\n=(\n_1, \n_2, \ldots, \n_{n-k})$ be a nonnegative integer sequence arranged in non-increasing order, where $\n_1+\n_2+ \cdots+ \n_{n-k}=k$. If $\n_1-\n_{n-k} \ge 2$, there exists a nonnegative integer sequence $\m \in \mathbb{N}^{n-k}$ such that $ \n \succneqq \m$ and $q_{\min}(K(\n)) \le q_{\min}(K(\m)).$ \end{lemma} {\it Proof:} Suppose $K(\n)$ has the vertices labeled as at the beginning of this section. Relocating a pendant edge from $v_1$ to $v_{n-k}$, we will arrive at a new graph $G$ isomorphic to $K(\m)$ for some $\m\in \mathbb{N}^{n-k}$. Surely $\n \succneqq \m$. Let $x$ be a first $Q$-eigenvector of $G$. By Lemma \ref{maxvec}, $|x(v_1)| \ge |x(v_{n-k})|$ as $\n_1-1 \ge \n_{n-k}+1$. (If $\n_1-1 = \n_{n-k}+1$ and $|x(v_1)| < |x(v_{n-k})|$, we may interchange the labeling of $v_1,v_{n-k}$ to make the above inequality hold.) Now relocating a pendant edge from $v_{n-k}$ to $v_1$, we go back to the original graph $K(\n)$. By Lemma \ref{perturb}, $q_{\min}(K(\n)) \le q_{\min}(G)=q_{\min}(K(\m))$. The result holds.\hfill $\blacksquare$ By repeatedly using Lemma \ref{maj}, we get the following result. \begin{theorem}\label{max} The maximizing graph in $\mathscr{G}_n^k$ can be achieved by $K(\n)$, where $\n=(\n_1,\n_2, \ldots, \n_{n-k})$, $\sum_{i=1}^{n-k}\n_i=k$, and $|\n_i-\n_j|\leq 1$ for all $i,j=1,2,\ldots,n-k$. \end{theorem} \begin{coro} \label{upb} Let $G$ be a connected graph containing pendant vertices. Then $$q_{\min}(G) \le \frac{n-1+\frac{1}{n-1}-\sqrt{n^2-6n+11+\frac{1}{(n-1)^2}}}{2}.$$ If, in addition, $G$ contains $k \ge 1$ pendant vertices, then $$q_{\min}(G) \le \frac{n-k+\frac{k}{n-k}-\sqrt{(n-k-2)^2+2k+\frac{k^2}{(n-k)^2}}}{2}.$$ \end{coro} {\it Proof:} Assume $G \in \mathscr{G}_n^k$ for some $k \ge 1$. By Theorem \ref{max}, $q_{\min}(G) \le q_{\min}(K(\n))$, where $\n$ has the prescribed property in Theorem \ref{max}. Let $t:=\lceil k/(n-k) \rceil$, and let $B$ be the principal submatrix of $Q(K(\n))$ indexed by the vertex with degree $t+n-k-1$ and $t$ pendant vertices adjacent to it. By the eigenvalue interlacing property of symmetric matrices, \begin{align*} q_{\min}(K(\n)) \le q_{\min}(B)&=\frac{n-k+t-\sqrt{(n-k+t)^2-4(n-k-1)}}{2}\\ &\le \frac{n-k+\frac{k}{n-k}-\sqrt{(n-k+\frac{k}{n-k})^2-4(n-k-1)}}{2}\\ &=\frac{n-k+\frac{k}{n-k}-\sqrt{(n-k-2)^2+2k+\frac{k^2}{(n-k)^2}}}{2}, \end{align*} where $q_{\min}(B)$ is the least eigenvalue of $B$. Noting that the function $f(k):=\frac{n-k+\frac{k}{n-k}-\sqrt{(n-k-2)^2+2k+\frac{k^2}{(n-k)^2}}}{2}$ is strictly decreasing with respect to $k$, we get the first result of this corollary. \hfill $\blacksquare$ We now give a remark on Lemma \ref{maxvec}, Lemma \ref{maj} and Theorem \ref{max}. Consider the graphs $K(2,2,2,0)$ and $K(2,2,1,1)$ in Fig. 4.1. Using the software {\sc Mathematica}, we find they have the same least $Q$-eigenvalues, both being $(5-\sqrt{17})/2$ with multiplicity $2$.
So, the inequality in Lemma \ref{maj} may hold as an equality; and the maximizing graph in $\mathscr{G}_n^k$ may not be unique. The two linearly independent first $Q$-eigenvectors of $K(2,2,2,0)$ are listed below: {\small $$x=\left(\frac{\sqrt{17}-3}{2}, 0, \frac{3-\sqrt{17}}{2}, 0, -1, -1, 0, 0, 1,1\right), \ \ y=\left(\frac{\sqrt{17}-3}{2}, \frac{3-\sqrt{17}}{2}, 0,0, -1, -1, 1, 1, 0,0\right).$$} We find that $|x(v_1)|>|x(v_2)|=|x(v_4)|$ even though $\n_1=\n_2>\n_4$. So, in Lemma \ref{maxvec}, if $\n_i > \n_j$ we cannot say $|x(v_i)| > |x(v_j)|$, and if $\n_i=\n_j$ we also cannot say $|x(v_i)| = |x(v_j)|$. \begin{center} \vspace{2mm} \includegraphics[scale=0.65]{fig41.eps}\\ \vspace{2mm} \small Fig. 4.1. The graphs $K(2,2,2,0)$ (left) and $K(2,2,1,1)$ (right) \end{center} Finally we give a remark on some upper bounds of the least $Q$-eigenvalue of a graph $G$ in terms of the minimum degree $\delta(G)$. Liu and Liu \cite{liul} observe that $q_{\min}(G) \le \delta(G)$. Das \cite{das} shows that $q_{\min}(G) < \delta(G)$. Lima et al. \cite{lima} improve the bound to $$q_{\min}(G) \le \frac{n-1+\delta(G)-\sqrt{(n-1-\delta(G))^2+4}}{2}<\delta(G).$$ If the graph $G$ contains pendant vertices, i.e. $\delta(G)=1$, then the above bound is $$\frac{n-\sqrt{n^2-4n+8}}{2} > \frac{n-1+\frac{1}{n-1}-\sqrt{n^2-6n+11+\frac{1}{(n-1)^2}}}{2}.$$ So we give a sharper upper bound for the least $Q$-eigenvalue of a graph if the graph contains pendant vertices.
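As a quick cross-check of the numerical remark above, the least $Q$-eigenvalues of $K(2,2,2,0)$ and $K(2,2,1,1)$ can also be recomputed outside {\sc Mathematica}. The following short Python sketch (assuming only \textsc{NumPy}; the helper name is ours) builds the signless Laplacian $Q=D+A$ of $K(\n)$ directly and prints its least eigenvalue:
\begin{verbatim}
import numpy as np

def signless_laplacian_K(pendants):
    # K(pendants): complete graph on len(pendants) vertices v_1,...,v_{n-k},
    # with pendants[i] pendant edges attached to v_i; returns Q = D + A.
    m = len(pendants)
    n = m + sum(pendants)
    A = np.zeros((n, n))
    A[:m, :m] = 1 - np.eye(m)           # the complete graph K_{n-k}
    idx = m
    for i, p in enumerate(pendants):    # attach the pendant vertices
        for _ in range(p):
            A[i, idx] = A[idx, i] = 1
            idx += 1
    return np.diag(A.sum(axis=1)) + A

for seq in [(2, 2, 2, 0), (2, 2, 1, 1)]:
    print(seq, np.linalg.eigvalsh(signless_laplacian_K(seq)).min())
print("(5 - sqrt(17))/2 =", (5 - np.sqrt(17)) / 2)
\end{verbatim}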
\section{Introduction} \label{sec:intro} Galaxies at $z > 1$ typically have velocity dispersions greater than nearby galaxies \citep{Kassin2012,Wisnioski2015,Johnson2018,Ubler2019}. While observations of galaxies at $z > 1$ reveal a significant proportion of galaxies with velocity dispersions in the range 50 -- 100 km s$^{-1}${} \citep[e.g.][]{Genzel2006,Law2007,ForsterSchreiber2009,Law2009,Epinat2010,Jones2010,LemoineBusserolle2010}, nearby galaxies typically have velocity dispersions of < 50 km s$^{-1}${} \citep{Epinat2008,Moiseev2015,Varidel2016,Yu2019}. Although this has been observed, the process by which galaxies settle to lower velocity dispersions across epochs is not well understood. Another important observation is that galaxies at all epochs exhibit velocity dispersions that are greater than expected by the thermal contribution of the gas alone. In the case of ionised gas measured using the H$\alpha$ emission line, the characteristic temperature of 10$^4$ K corresponds to an expected velocity dispersion of $\sim$9 km s$^{-1}${} \citep{Glazebrook2013}. Galaxies have velocity dispersions > 9 km s$^{-1}${} at all epochs. Studies suggest that turbulent motions above the thermal contribution dissipate on timescales of the order of the flow crossing time \citep{MacLow1998,Stone1998,MacLow1999}. The crossing time for a galaxy with Toomre stability \citep{Toomre1964} of $Q \sim 1$ will be of order the dynamical time, which is typically $\mathcal{O}$(100 Myr) \citep{Krumholz2018}. If the turbulent motions are on the scale of Giant Molecular Clouds (GMCs), it will decay on $\mathcal{O}$(< 10 Myr). Therefore, we should rarely see galaxies with velocity dispersions greater than the thermal contribution, unless there is an ongoing driving mechanism to sustain the observed gas turbulence. Numerous energetic sources have been proposed to contribute to the non-thermal turbulence observed in galaxies. These drivers can typically be split into star-formation feedback processes \citep{Norman1996,MacLow2004,Krumholz2009,Murray2010}, gravitational transport of gas onto \citep{Elmegreen2010,Hopkins2013} or through \citep{Krumholz2010} the disc, dynamical drivers such as shear and differential rotations across the disc \citep{Federrath2016,Federrath2017IAUS}, or interactions between galaxy components \citep[e.g.][]{Dobbs2007,Dekel2009a,Ceverino2010,Aumer2010,Oliva-Altamirano2018}. In this paper, we will be focusing primarily on differentiating star-formation feedback processes and gravitational transport of gas through the disc due to the clear predictions that have been made in the integrated star-formation rate (SFR) and global velocity dispersion ($\sigma_v$) plane \citep{Krumholz2016,Krumholz2018}. Star-formation feedback is thought to be dominated by the energy imparted by supernovae \citep{Norman1996,MacLow2004}. However, other drivers such as stellar winds, expansion of \ion{H}{ii} regions \citep{Chu1994,Matzner2002}, and radiation pressure in high density star clusters \mbox{\citep{Krumholz2009,Murray2010}} will also inject momentum into the interstellar medium. Observational evidence for star-formation feedback as the primary driver of gas turbulence has been argued by observing that SFR is correlated with $\sigma_v$. The SFR -- $\sigma_v$ correlation has been shown both within a single sample at constant redshift \citep{Green2010,Green2014,Moiseev2015,Yu2019} and by combining multiple samples across epochs \citep{Green2010,Green2014}. 
Assuming that star-formation feedback processes are a significant driver of the turbulence, it would be natural to expect a relation between local star-formation rate surface density ($\Sigma_\text{SFR}$) and local velocity dispersion. There are conflicting results in the literature regarding the relationship between these local quantities. Some studies have found a significant relationship \citep{Lehnert2009,Lehnert2013}, whereas others have found the localised relationship to be weak \citep{Genzel2011,Varidel2016,Zhou2017,Ubler2019}. Furthermore, the physical mechanism for an energetic source to account for velocity dispersions due to star-formation feedback of several tens of km s$^{-1}${} is not well established. Constructing equilibrium solutions between gravitational infall of the disc \mbox{supported} by outward pressure solely by supernovae leads to $\sigma_v \lesssim 25$ km s$^{-1}${} with little variation as a function of SFR \citep{Ostriker2011,Krumholz2018}. An alternative approach that can \mbox{account} for increased turbulence is to assume that the star-formation efficiency per free-fall time ($\epsilon_\text{ff}${}) changes as a function of galaxy properties, thus changing the energetic input from star-formation feedback processes \citep{Faucher-Giguere2013}. However, \mbox{numerous} observations suggest that $\epsilon_\text{ff}${} is approximately constant across a wide range of galaxy properties \citep{Krumholz2007,Krumholz2012,Federrath2013,Salim2015,Krumholz2019}. An alternative set of driving mechanisms are due to gravitational effects. This includes the initial gravitationally unstable formation of the disc \citep{Aumer2010}, that can account for short-lived supersonic turbulence on the order of the disc formation time, $\mathcal{O}$(100 Myr). It is thought that the supersonic turbulence that is initially set at disc formation can be maintained by the gravitational transport of gas through the disc \citep{Krumholz2010}. \citet{Krumholz2016} also argued that the gravitational transport model predicts an increase in velocity dispersion at increased SFR that is more consistent with the data than models assuming star-formation feedback processes. A further complication involved in inferring the ongoing drivers of turbulence across epochs is the effects of the spectral and spatial resolution on the observed velocity dispersion. The spectral resolution broadens the observed emission line often on order of the intrinsic velocity dispersion. This is typically accounted for by convolving the modelled emission line profile by the known Line-Spread Function (LSF) while fitting to the data \citep[e.g.][]{ForsterSchreiber2009,Davies2011,Green2014,Varidel2019}. This is a reasonable approximation as long as the model assumptions regarding the LSF are well known. The spatial resolution is more difficult to account for as it acts to blur the emission line flux spatially per spectral slice. The observed velocity dispersion is then a complex function of the intrinsic flux distribution, line of sight (LoS) velocity profile, and LoS velocity dispersion profile. This effect is usually referred to as beam smearing. In general, beam smearing acts to increase the observed velocity dispersion particularly where the velocity gradient is steepest \citep{Davies2011,Glazebrook2013}, and in detail can result in spurious substructure in the velocity dispersion profile \citep{Varidel2019}. 
Furthermore, beam smearing could result in spurious correlations such as the SFR -- $\sigma_v$ correlation, as SFR is related to the mass which shapes the gravitational potential, and thus increases the velocity gradient at the centre of galaxies with higher SFR. Similarly, the width of the Point-Spread Function (PSF) relative to the galaxy size increases for increasing $z$, thus resulting in higher observed velocity dispersions if beam smearing is not corrected for appropriately. The SFR -- $\sigma_v$ relation has been used to distinguish between the different energetic sources of turbulence \citep{Krumholz2016,Krumholz2018}. However, comparisons between theoretical models and observations have typically been performed by combining several studies with different redshift ranges and beam smearing corrections. In this paper, we improve comparisons of the observed velocity dispersion to theoretical models by studying a sample of nearby galaxies using a single technique to mitigate the effects of beam smearing. The data encompasses a wide range of \mbox{SFR $\in$ [10$^{-3}$, 10$^2$] M$_\odot$ yr$^{-1}${}} of local galaxies at $z \lesssim 0.1$. The combined sample is comprised of observations from the SAMI Galaxy Survey Data Release Two \citep[SAMI Galaxy Survey DR2,][]{Croom2012,Scott2018} and the DYNAMO survey \citep{Green2014}. We use a consistent disc-fitting routine referred to as \textsc{Blobby3D}{} \citep{Varidel2019}, for all the galaxy gas kinematic modelling in this paper. \textsc{Blobby3D}{} is a disc fitting code that constructs a regularly rotating thin-disc galaxy model in 3D (position -- position -- wavelength space) that is then convolved by the PSF and LSF prior to comparing the model to the data. In that way it can account for the effect of beam smearing when inferring the velocity dispersion of the galaxy. The outline of this paper is as follows. In Section \ref{sec:data} we describe the SAMI Galaxy Survey and DYNAMO surveys, as well as our sample selection criteria. In Section \ref{sec:methods} we outline the methods used to measure the key gas kinematic properties. In Section \ref{sec:results}, we will discuss our results. In Section \ref{sec:vdisp_drivers} we compare our results to theoretical models of the drivers for turbulence. We summarise our conclusions in Section \ref{sec:conclusions}. Throughout this paper we assume the concordance cosmology \citep[$\Omega_\Lambda$ = 0.7, $\Omega_m$ = 0.3, $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$;][]{Hinshaw2009} and a \citet{Chabrier2003} Initial Mass Function (IMF). \section{Data selection} \label{sec:data} \subsection{The SAMI Galaxy Survey} \label{subsec:SAMI} The SAMI Galaxy Survey was conducted with the Sydney-AAO Multi-object Integral field Spectrograph \citep[SAMI,][]{Croom2012}. SAMI was mounted at the Anglo-Australian Telescope (AAT), that provided a 1 degree diameter Field-of-View (FoV). SAMI used 13 fused fibre bundles, known as Hexabundles \citep{Bland-Hawthorn2011,Bryant2014}, with a 75\% fill factor. Each bundle contains 61 fibres of 1.6$''$ diameter, resulting in an approximately 15$''$ diameter FoV. The IFUs as well as 26 sky fibres were attached to pre-drilled plates using magnetic connectors. SAMI fibres were fed to the double-beam AAOmega spectrograph \citep{Sharp2006}. 
The 580V grating at 3750--5750 \AA{} provides a resolution of \mbox{$R = 1808$} (\mbox{$\sigma = 70.4$ km s$^{-1}$} at 4800 \AA{}) and the 1000R grating from \mbox{6300--7400 \AA{}} provides a resolution of $R = 4304$ (\mbox{$\sigma = 29.6$ km s$^{-1}$} at 6850 \AA{}) \citep{Scott2018}. During the survey, observations of over 3000 galaxies were obtained. Target selection for the SAMI Galaxy Survey is described in \citet{Bryant2015}. The redshift range for the observed galaxies was $0.004 < z < 0.113$, with a stellar mass range of $7.5 < \log(M_*/M_\odot) < 11.6$. The Full-Width Half-Maximum (FWHM) of the seeing distribution was $1.10'' < \text{FWHM}_\text{PSF} < 3.27''$. Relevant data used for the analysis in this paper are from the SAMI Galaxy Survey DR2 \citep{Scott2018}. This includes the aperture spectra, emission line products \citep{Green2018}, data cubes \citep{Sharp2015}, and input catalogue \citep{Bryant2015}. \subsection{Sample selection from the SAMI Galaxy Survey} \label{subsec:Sample} Our aim was to select galaxies on the star-forming main sequence within the SAMI Galaxy Survey. As such, we applied the following selection cuts to the sample from the SAMI Galaxy Survey DR2 \citep{Scott2018}. Star-forming galaxies are selected by applying a cutoff integrated H$\alpha$ equivalent width of $EW > 3$ \AA{} \citep{CidFernandes2011}. The equivalent width is calculated as the total H$\alpha$ flux compared to the total continuum flux across the SAMI FoV. The continuum flux in the region around H$\alpha$ is estimated by calculating the mean continuum in the wavelength range [6500, 6540] \AA{}. The integrated H$\alpha$ flux estimates are sourced from the SAMI Galaxy Survey DR2 emission line data products. We remove galaxies with ionised emission from non star-forming sources such as Active Galactic Nuclei (AGN) and Low-Ionisation Nuclear Emission-line Regions (LINERs). To implement this, we remove galaxies where the AGN classification criterion proposed by \citet{Kauffmann2003} is met, \begin{equation} \log ( [\text{\ion{O}{iii}}] / \text{H} \beta ) > \frac{0.61}{\log([\text{\ion{N}{ii}}]/\text{H}\alpha) - 0.05} + 1.3. \label{eq:kauffmanagn} \end{equation} [\ion{O}{iii}] and [\ion{N}{ii}] represent the emission line fluxes at 5007 \AA{} and 6583 \AA{}, respectively. The line fluxes are estimated for the central region of the galaxy, where AGN and LINER contamination should be greatest, using the 1.4$''$ aperture spectra from the SAMI Galaxy Survey DR2. We retain galaxies that are face-on up to $e = 1 - b/a = 0.5$ (0$^{\circ}$ < $i$ < 60$^{\circ}$, assuming a thin disc). We avoid galaxies observed at high inclination as the intrinsic velocity dispersion is more difficult to constrain due to beam smearing. In addition, galaxies are optically thick, such that edge-on observations limit the ability to observe the integrated LoS from the entire galaxy. Furthermore, a thin disc model is assumed in \textsc{Blobby3D}{}, such that the galaxies will not be well modelled when observed close to edge-on. We apply the following signal-to-noise cut on the spaxels in the data. We first apply a mask to spaxels with H$\alpha$ flux signal-to-noise < 3. Spatially resolved H$\alpha$ flux and its error are obtained from the SAMI Galaxy Survey DR2 pipeline. We then construct groups of unmasked spaxels that are adjacent and meet the signal-to-noise criterion. The largest unmasked group is retained, whereas the remaining spaxels are masked. We retain galaxies that have at least 300 unmasked spaxels.
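As an illustration only, this signal-to-noise masking step could be sketched as follows, assuming 2D maps of the H$\alpha$ flux and its error; the function and parameter names are ours and do not refer to the SAMI Galaxy Survey pipeline.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def snr_mask(halpha_flux, halpha_err, snr_min=3.0, min_spaxels=300):
    # Spaxels that meet the H-alpha signal-to-noise criterion.
    good = (halpha_flux / halpha_err) >= snr_min
    # Group adjacent unmasked spaxels into connected regions.
    labels, nlabels = ndimage.label(good)
    if nlabels == 0:
        return None
    # Retain only the largest connected group of spaxels.
    sizes = ndimage.sum(good, labels, index=np.arange(1, nlabels + 1))
    mask = labels == (np.argmax(sizes) + 1)
    # Reject the galaxy if fewer than min_spaxels spaxels remain.
    return mask if mask.sum() >= min_spaxels else None
\end{verbatim}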
The above masking routine only finds the largest group of spaxels, which in principle could reject clumpy flux profiles. In practice, the effect of removing H$\alpha$ clumps originating from the galaxy was negligible. Instead, it primarily removed spurious spaxels that were reported to have high signal-to-noise, yet by eye did not appear to be legitimate detections of flux originating from the galaxy. We also remove mergers or galaxies with clearly disturbed gas kinematics from our final sample. Potential mergers were determined by eye from observations of the gas kinematic maps. 9 galaxies were removed from our final sample due to this criteria. There are 1523 galaxies in the SAMI Galaxy Survey DR2 where all of the above diagnostic criteria are measurable. 342 galaxies remain once our criteria is applied. Figure \ref{fig:sample} shows that we are selecting galaxies along the star-forming main sequence. We see a clear bimodal distribution in the log equivalent width, where we have selected those galaxies with $EW$ > 3 \AA{}. The equivalent width cut removes massive galaxies that are typically passive, which can be seen when plotting the equivalent width compared to M$_*$ and $R_e$. There are a limited number of galaxies in our sample with 3 \AA{} < $EW$ $\lesssim$ 10 \AA{} as many of those galaxies are removed due to being classified as LINER/AGN or having < 300 spaxels that meet our signal-to-noise masking criteria. \begin{figure*} \begin{center} \includegraphics[ width=\textwidth, keepaspectratio=true, trim=10mm 20mm 20mm 10mm, clip=true]{2_DataSelection/SamplevsSAMI.pdf} \\ \end{center} \caption{Galaxy parameters for our sample of 342 galaxies (red) selected from the total SAMI Galaxy Survey (grey). We show the marginalised (diagonal) and conditional (off-diagonal) distributions for the stellar mass (log$_{10}$(M$^*$/M$_\odot$)), effective radius (log$_{10}(R_e$/asec)), ellipticity ($e = 1 - b/a$), H$\alpha$ equivalent width (log$_{10}$($EW$/\AA{})), and NSNGT3. NSNGT3 corresponds to the number of spaxels that meet our signal-to-noise masking criteria. We select a sample of star-forming galaxies from the SAMI Galaxy Survey with inclination and signal-to-noise cuts that can be adequately modelled using \textsc{Blobby3D}{}.} \label{fig:sample} \end{figure*} Removing highly inclined galaxies results in a large cut to our sample, but does not bias our sample along any galaxy properties. Also, the selection of galaxies with at least 300 unmasked spaxels does remove galaxies with $R_e \lesssim 1''$, but there are very few of these galaxies in the underlying SAMI Galaxy Survey DR2 sample. \subsection{DYNAMO sample} \label{subsec:dynamo} The DYnamics of Newly Assembled Massive Objects \citep[DYNAMO,][]{Green2014} survey consists of a sample of star-forming galaxies in the local Universe ($z \lesssim 0.1$). These galaxies were classified as star-forming in the MPA-JHU Value Added Catalog from the Sloan Digital Sky Survey \citep[SDSS,][]{York2000}. The galaxies comprising the DYNAMO survey were chosen primarily based on H$\alpha$ luminosity. The aim was to include both high H$\alpha$ luminious galaxies, that are rare in the local Universe, as well as a sample of typical galaxies in the local Universe. The resulting galaxy sample ranged \mbox{SFR $\in [1, 100]$ M$_\odot$ yr$^{-1}${}}. The data for the DYNAMO samples was obtained via observations using the 3.9 m Anglo-Australian Telescope (AAT) and the ANU 2.3 m Telescope at Siding Spring Observatory. 
The AAT was equipped with the SPIRAL Integral-Field Unit (IFU) with the AAOmega Spectrograph \citep{Sharp2006}. SPIRAL is an array of $32 \times 16$ square, 0.7$''$ lenslets with a contiguous integral field of $22.4'' \times 11.2''$. The 1700I grating was used on the red spectrograph, providing a nominal resolving power of $R \sim 12000$. The ANU 2.3 m Telescope was equipped with the Wide-Field Spectrograph \citep[WiFeS,][]{Dopita2007}. WiFeS has a $25'' \times 38''$ FoV with either $1.0'' \times 0.5''$ or $1.0'' \times 1.0''$ spaxels. The I7000 grating was chosen for the red arm, which has a $6893 - 9120$ \AA{} wavelength range with a spectral resolving power of $R \sim 7000$. A total of 67 galaxies comprised the original DYNAMO sample. We remove galaxies observed at $i > 60^\circ$, where $i$ has been measured using the SDSS photometric pipeline using an exponential disc fit to the $r$-band. We apply the same masking criteria as described for the galaxies from the SAMI Galaxy Survey. We also remove galaxies with fewer than 30 unmasked spaxels. 41 galaxies were retained from the original DYNAMO sample. \section{Methods} \label{sec:methods} \subsection{Modelling the gas disc kinematics} \label{subsec:method_vdisp} We use \textsc{Blobby3D}{} \citep{Varidel2019} to infer the intrinsic gas kinematics for the observed galaxies. \textsc{Blobby3D}{} is a forward-fitting disc modelling procedure. It assumes that the gas lies in a regularly rotating thin disc. The prior for the spatial gas distribution within the disc allows for clumpy gas profiles using a hierarchical Gaussian mixture-model. The model is constructed in 3D (position -- position -- wavelength space) and then convolved in accordance with the PSF and instrumental broadening by the LSF. The convolved model is then compared to the observed data cube. The advantage of \textsc{Blobby3D}{} is that it is capable of performing inference for the spatial gas distribution, including substructure, plus the gas kinematics simultaneously. This is important as the effect of beam smearing is a function of the spatial gas distribution being blurred per spectral slice. As such, the observed gas kinematics is a complex function of the intrinsic spatial gas distribution, the velocity profile, and the velocity dispersion plus instrumental broadening and beam smearing. For example, \citet{Varidel2019} found that it is possible to observe spurious substructure in the gas kinematics in a symmetric regularly rotating disc with an asymmetric spatial gas distribution plus beam smearing. Previous testing of \textsc{Blobby3D}{} has found that it is well optimised to infer the intrinsic velocity dispersion of galaxies \citep{Varidel2019}. \textsc{Blobby3D}{} was compared to an alternative forward-fitting methodology known as \textsc{$^\mathrm{3D}$Barolo}{} \citep{DiTeodoro2015}. It was also compared to other heuristic modelling approaches that have been used in the literature, such as estimating the velocity dispersion in the outskirts of the galaxy \citep[e.g.][]{Zhou2017}, correcting the observed velocity dispersion as a function of the velocity gradient \citep[e.g.][]{Varidel2016}, and subtracting the velocity gradient in quadrature from the observed velocity dispersion \citep[e.g.][]{Oliva-Altamirano2018}. \textsc{Blobby3D}{} was found to infer the intrinsic velocity dispersion more accurately than these alternative methods, particularly for galaxies where the PSF or velocity gradient was greatest.
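To make the contrast with these heuristics concrete, the simplest of them (subtracting a local velocity gradient estimate in quadrature from the observed velocity dispersion map) could be sketched as below; this is purely illustrative, follows the velocity gradient definition of Equation \ref{eq:vgrad}, and is not the method adopted in this work.
\begin{verbatim}
import numpy as np

def vgrad_quadrature_correction(v_map, sigma_obs_map):
    # Local velocity gradient from adjacent spaxels, then subtracted
    # in quadrature from the observed velocity dispersion map.
    vgrad = np.full_like(v_map, np.nan)
    vgrad[1:-1, 1:-1] = np.sqrt(
        (v_map[2:, 1:-1] - v_map[:-2, 1:-1]) ** 2
        + (v_map[1:-1, 2:] - v_map[1:-1, :-2]) ** 2)
    sigma_sq = sigma_obs_map ** 2 - vgrad ** 2
    return np.sqrt(np.clip(sigma_sq, 0.0, None))
\end{verbatim}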
The parameterisation for \textsc{Blobby3D}{} is set within the Bayesian framework. The joint prior distribution for the parameters, hyperparameters, and data were defined in \citet{Varidel2019}. We only make minor changes to the priors that were previously proposed. We outline the motivation for changing some of the prior distributions below. The joint prior distribution used for this work performs inferences for the H$\alpha$ flux plus the [\ion{N}{ii}]/H$\alpha$ emission flux ratio for each spatial Gaussian flux profile (often referred to as a `blob' in \textsc{Blobby3D}{}). The gas kinematics have been assumed to be consistent across the different gas components. Therefore, the inferences for the kinematics are constrained using extra information from the [\ion{N}{ii}] emission lines at 6548.1 \AA{} and 6583.1 \AA{}. The ratio of the flux between the [\ion{N}{ii}] emission lines is assumed to be $F_{6583.1} / F_{6548.1} = 3$. To simplify the inference for the velocity dispersion, we assume a constant velocity dispersion across the disc ($\sigma_{v, 0}$). We assume no radial gradient as the results for some galaxies returned large positive gradients when using the prior suggested by \citet{Varidel2019}. The large spatial gradients in velocity dispersion after convolution appeared to be over-fitting for wider-tailed non-Gaussian emission line profiles. Therefore, we removed the velocity dispersion gradient from the inference in order to robustly infer the constant velocity dispersion component for the large sample of galaxies that were studied in this work. We have also widened the bounds for our priors for the systemic velocity ($v_\text{sys}$) and the asymptotic velocity ($v_c$) in order to model a larger set of galaxies than was performed by \citet{Varidel2019}. Our new priors are, \begin{align} v_\text{sys} & \sim \text{Cauchy}(0, 30 \text{ km s}^{-1})T(-300 \text{ km s}^{-1}, 300 \text{ km s}^{-1}), \\ v_c & \sim \text{Loguniform}(1 \text{ km s}^{-1}, 1000 \text{ km s}^{-1}). \end{align} Where $T(a, b)$ represents the distribution being truncated to the interval [$a, b$]. \subsubsection{Mitigating the effects of beam smearing} \label{subsubsec:b3d_smear} The effect of beam smearing by the PSF is accounted for in \textsc{Blobby3D}{} by convolving the underlying model constructed by the PSF, prior to calculating the likelihood function. The PSF profile assumed in \textsc{Blobby3D}{} is a superposition of 2D concentric circular Gaussian profiles. Therefore, the PSF needs to first be modelled assuming this flux profile. The SAMI Galaxy Survey pipeline provides estimates for the PSF by fitting a profile to a star that was observed simultaneously with the galaxy. We have used the Moffat profile estimates, where the PSF is described as, \begin{equation} p(r) = \frac{\beta - 1}{\pi \alpha^2} \bigg(1 + \frac{r^2}{\alpha^2}\bigg)^{-\beta}. \end{equation} $\alpha$ is the FWHM and $\beta$ is a shape parameter that controls the tails of the Moffat profile. To refactor the Moffat profile parameters into a set of concentric Gaussians, we construct the 1D Moffat profile, then fit it with two 1D Gaussians. Two Gaussians were enough to adequately model the PSF profile. The estimated Gaussian parameters are then passed to \textsc{Blobby3D}{}. For the DYNAMO sample, the FWHM of the PSF was measured during observations. As such, we assumed a 2D circular Gaussian profile to be representative of the PSF for the DYNAMO sample. 
Thus, the underlying model in \textsc{Blobby3D}{} was convolved with a Gaussian profile prior to comparing the model to the data for our galaxies from the DYNAMO survey. \subsubsection{Continuum substraction} \label{subsubsec:mask} \textsc{Blobby3D}{} requires the data to be continuum subtracted. For galaxies from the SAMI Galaxy Survey, we use the continuum models made available in the SAMI Galaxy Survey DR2 pipeline. The full description for the continuum modelling routine is described in \citet{Owers2019}. We estimate the continuum for the galaxies from the DYNAMO survey using a 300 bin moving median filter as also implemented by \citet{Green2014}. It is possible for the continuum modelling to introduce systematics in the resulting continuum subtracted data cube. These systematics may not be well accounted for in the \textsc{Blobby3D}{} approach. We make the assumption that the stellar continuum will be adequately modelled in regions of high H$\alpha$ signal-to-noise. This is a significant motivation for implementing the H$\alpha$ signal-to-noise masking outlined in Section \ref{subsec:Sample}. \subsubsection{Posterior optimisation} \label{subsubsec:posterioroptimisation} We use \textsc{DNest4}{} \citep{Brewer2011DNEST,Brewer2018DNest4} to get a point estimate of the maxima for the posterior PDF. \textsc{DNest4}{} is a sampling algorithm based on nested sampling \citep{Skilling2004}, where the new levels are constructed by exploring a weighted mixture of the previous levels. Exploration of the levels is performed using a Metropolis Markov Chain Monte Carlo (MCMC). The multi-level exploration allows \textsc{DNest4}{} to be significantly more robust to local maxima compared to typical nested sampling, allowing for the exploration of high parameter spaces and multi-modal posterior distributions. Estimated values throughout this paper are of the maximum posterior PDF value in the chain sampled using \textsc{DNest4}{}. \subsection{Global velocity dispersion} \label{subsec:globalvdisp} \subsubsection{Beam smearing corrections} \label{subsubsec:vdispcorr} Assuming that \textsc{Blobby3D}{} accurately corrects for beam smearing, there should be no residual correlation between the PSF profile parameters and the inferred intrinsic velocity dispersion ($\sigma_{v, 0}$). The distribution of $\sigma_{v, 0}$ is consistent with our expectations for a beam smearing corrected sample. Figure \ref{fig:psf_vdisp} shows a comparison between the PSF Moffat profile parameters and $\sigma_{v, 0}$ for our sample from the SAMI Galaxy Survey. For both $\alpha$ and $\beta$, zero remains inside the 68\% shortest credible intervals for the Spearman-rank correlation coefficients. \begin{figure} \begin{center} \includegraphics[ width=0.5\textwidth, keepaspectratio=true, trim=17mm 9mm 0mm 16mm, clip=true]{3_Inferences/psf_vdisp.pdf} \\ \end{center} \caption{ Comparing the PSF Moffat profile parameters $\alpha$ and $\beta$ to the inferred global velocity dispersion for galaxies in our sample from the SAMI Galaxy Survey. We also show the PDF of the Spearman-rank correlation coefficients estimated using 10${^4}$ bootstrap samples (bottom). $\rho = 0$ lies within the 68\% shortest credible intervals suggesting that $\sigma_{v, 0}$ is adequately corrected for beam smearing.} \label{fig:psf_vdisp} \end{figure} For galaxies from the DYNAMO survey, the Spearman-rank correlation coefficient is estimated as $\rho(\text{FWHM}, \sigma_v) = 0.10^{+0.17}_{-0.17}$. 
As zero remains within the 68\% confidence interval, this result is also consistent with a beam smearing corrected sample. We also compare $\sigma_{v, 0}$ to an estimate of the velocity dispersion that was not corrected for beam smearing ($\sigma_{v, \text{uncorrected}}$). The uncorrected estimator is calculated as the arithmetic mean velocity dispersion across the FoV, when fitting a single Gaussian component to each spaxel. Spaxels with H$\alpha$ signal-to-noise < 3 are masked in this process to eliminate the effects of poorly constrained spaxels on the final estimate. Estimates for $\sigma_{v, 0}$ are significantly lower than $\sigma_{v, \text{uncorrected}}$ (see Figure \ref{fig:vdisp_correction}). Using the sample of galaxies from the SAMI Galaxy Survey, typical corrections were $\Delta \sigma_v = -5.3^{+4.0}_{-7.0}$ km s$^{-1}${} and $\Delta \sigma_v / \sigma_{v, 0} = -0.20^{+0.14}_{-0.18}$, where $\Delta \sigma_v = \sigma_{v, 0} - \sigma_{v, \text{uncorrected}}$. The typical beam smearing corrections are consistent with the results found by \citet{Varidel2019} on a sample of 20 star-forming galaxies in the SAMI Galaxy Survey using \textsc{Blobby3D}{}. \begin{figure} \begin{center} \includegraphics[ width=0.5\textwidth, keepaspectratio=true, trim=2mm 6mm 8mm 15mm, clip=true]{3_Inferences/vdisp_corrections.pdf} \\ \end{center} \caption{$\sigma_{v, 0}$ estimated using \textsc{Blobby3D}{} compared to the arithmetic mean of the single-component fits per spaxel ($\sigma_{v, \text{uncorrected}}$) to each galaxy from the SAMI Galaxy Survey sample. Estimates for the velocity dispersion are typically lower using \textsc{Blobby3D}{} as it mitigates the effects of beam smearing.} \label{fig:vdisp_correction} \end{figure} All estimated values have $\sigma_{v, 0} > \sigma_{v, \text{thermal}}$ = 9 km s$^{-1}${}. $\sigma_{v, \text{thermal}}$ is the typical emission line width expected for a \ion{H}{ii} region at $\sim 10^4$ K \citep{Glazebrook2013}. As such, $\sigma_{v, \text{thermal}}$ sets a physically motivated lower bound. \subsubsection{Considerations of the effects of the LSF on the velocity dispersion estimates} \label{subsubsec:lsf} The SAMI instrument has the spectral resolution of \mbox{$\sigma_{\text{LSF}}${} = 0.68 \AA{}} ($\sigma_{v, \text{LSF}}$ = 29.6 km s$^{-1}${}) in the red arm. For reference, we show the 1-$\sigma_{v, \text{LSF}}${} and 1/2-$\sigma_{v, \text{LSF}}${} on Figure \ref{fig:vdisp_correction}. 89\% of our galaxies have estimated intrinsic velocity dispersions less than $\sigma_{v, \text{LSF}}${} and 4.6\% of our sample were estimated to have intrinsic velocity dispersion less than $\sigma_{v, \text{LSF}}${}/2. We correct for the LSF by convolving the emission line by a Gaussian profile with $\sigma_{v, \text{LSF}}$ during the fitting procedure in \textsc{Blobby3D}{}. This procedure assumes that the observed emission line is a convolution of two Gaussians. Therefore, the estimated velocity dispersion can be affected by non-Gaussianities in the shape of the LSF, particularly when the velocity dispersion is significantly less than the width of the LSF. However, deviations of the SAMI LSF from a Gaussian profile are minor \citep{vandeSande2017}. Also 95.4\% of our sample were estimated to be $\sigma_{v, 0} >$ $\sigma_{v, \text{LSF}}${}/2, as such the effects of minor systematic differences of the LSF from a Gaussian profile is unlikely to have significant effects on our inferences. Similarly, the effect of variations in the LSF FWHM are minor for the SAMI Galaxy Survey. 
The LSF FWHM varied at the $\sim$5\% level as a function of fibre, time, and wavelength during the SAMI Galaxy Survey \citep{Scott2018}. For the velocity dispersion values that we estimate, this should result in uncertainties on the level of \mbox{$\Delta \sigma_{v}$ $\sim$ 1 km s$^{-1}${}}. As such, the variation of the LSF FWHM is not expected to have any significant effect on the conclusions drawn in this paper. \subsubsection{Estimating the vertical velocity dispersion} \label{subsubsec:sigmavz} Our disc modelling approach calculates a global estimate for the intrinsic line-of-sight (LoS) velocity dispersion ($\sigma_{v, 0} \equiv \sigma_{v, \text{LoS}}$). Most studies using IFS observations report $\sigma_{v, \text{LoS}}${}. However, $\sigma_{v, \text{LoS}}$ is a mixture of the radial ($\sigma_{v, R}$), azimuthal ($\sigma_{v, \phi}$), and vertical ($\sigma_{v, z}$) velocity dispersion components. At any point in the sky, $\sigma_{v, \text{LoS}}${} is given by \citep[e.g. Equation 27a, ][]{Cappellari2019}, \begin{equation} \sigma^2_{v, \text{LoS}} = \big( \sigma^2_{v, R} \sin^2 \phi + \sigma^2_{v, \phi} \cos^2 \phi \big) \sin^2 i + \sigma^2_{v, z} \cos^2 i. \label{eq:sigmavlosdecomp} \end{equation} Observed $\sigma_{v, \text{LoS}}${} is the luminosity-weighted integral along the LoS. To calculate the average velocity dispersion, we make the following approximations. We assume that the flux is constant across a thin disc with finite radial extent. We also assume spatially constant velocity dispersion components and that $\sigma^2_{v, \perp} \equiv \sigma^2_{v, R} \approx \sigma^2_{v, \phi}$; then the average LoS velocity dispersion is given by, \begin{equation} \bar\sigma^2_{v, \text{LoS}} = \sigma^2_{v, \perp} \sin^2 i + \sigma^2_{v, z} \cos^2 i. \label{eq:sigmavlosdecompint} \end{equation} Setting $\gamma^2 = \sigma^2_{v, z} / \sigma^2_{v, \perp}$ and rearranging, we obtain \begin{equation} \sigma_{v, \text{LoS}} = \sigma_{v, z} \sqrt{\sin^2 i / \gamma^2 + \cos^2 i}. \label{eq:sigmavzsolve} \end{equation} The above model predicts changing $\sigma_{v, \text{LoS}}${} as a function of $i$ if $\gamma \neq 1$. For $\gamma < 1$, $\sigma_{v, \text{LoS}}${} increases with increasing $i$, whereas $\sigma_{v, \text{LoS}}${} decreases with $i$ when $\gamma > 1$. To estimate $\gamma$ we assume that $\sigma_{v, \text{LoS}}${} follows a lognormal distribution with median given by Equation \ref{eq:sigmavzsolve} (with $\sigma_{v, z}$ set to a population value $\sigma_{v, z, 0}$) and log variance $\tau^2$. The generating function for a single data point $\sigma_{v, \text{LoS}, j}$ is then, \begin{equation} p(\sigma_{v, \text{LoS}, j} | \sigma_{v, z, 0}, \tau^2, \gamma) \sim \text{lognormal}(\sigma_{v, z, 0} \sqrt{\sin^2 i / \gamma^2 + \cos^2 i}, \tau^2). \label{eq:evdisp_like} \end{equation} We assume the following priors, \begin{align} &p(\sigma_{v, z, 0}) \sim \text{loguniform}(1, 100) \\ &p(\tau) \sim \text{loguniform}(10^{-3}, 1) \\ &p(\gamma) \sim \text{loguniform}(0.1, 10). \label{eq:evdisp_prior} \end{align} The posterior distribution is then given by, \begin{multline} p(\sigma_{v, z, 0}, \tau, \gamma | \mathbf{D}) \propto p(\sigma_{v, z, 0}) p(\tau) p(\gamma) \prod^N_{j=1} p(\sigma_{v, \text{LoS}, j} | \sigma_{v, z, 0}, \tau^2, \gamma). \label{eq:evdisp_posterior} \end{multline} The above formulation assumes independence of the prior distribution between $\sigma_{v, z, 0}$, $\tau$, $\gamma$, as well as all $\sigma_{v, \text{LoS}, j}$. The above posterior distribution can now be sampled using typical techniques. We used \textsc{emcee} to sample the posterior distribution \citep{Foreman-Mackey2013}.
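For concreteness, a minimal sketch of this sampling step is given below, assuming per-galaxy arrays of the measured $\sigma_{v, \text{LoS}}$ (in km s$^{-1}$) and inclinations (in radians); variable names, walker numbers, and chain lengths are illustrative only and do not reproduce our actual run.
\begin{verbatim}
import numpy as np
import emcee
from scipy import stats

def log_prob(theta, sigma_los, inc):
    # theta = log10(sigma_vz0), log10(tau), log10(gamma); flat priors in
    # log space correspond to the loguniform priors quoted above.
    log_s0, log_tau, log_gamma = theta
    if not (0.0 <= log_s0 <= 2.0 and -3.0 <= log_tau <= 0.0
            and -1.0 <= log_gamma <= 1.0):
        return -np.inf
    s0, tau, gamma = 10.0 ** log_s0, 10.0 ** log_tau, 10.0 ** log_gamma
    # Lognormal likelihood with median set by the inclination model.
    median = s0 * np.sqrt(np.sin(inc) ** 2 / gamma ** 2 + np.cos(inc) ** 2)
    return stats.lognorm(s=tau, scale=median).logpdf(sigma_los).sum()

# sigma_los, inc : 1-D arrays of the per-galaxy measurements (not shown here).
ndim, nwalkers = 3, 32
p0 = np.random.uniform([1.0, -1.0, -0.2], [1.5, -0.5, 0.2],
                       size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(sigma_los, inc))
sampler.run_mcmc(p0, 5000)
\end{verbatim}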
We estimate $\gamma = 0.80^{+0.06}_{-0.05}$ as shown in Figure \ref{fig:evdisp_samples}, suggesting that the vertical velocity dispersion is less than the averaged azimuthal and radial components. This analysis was consistent with other approaches that we applied. For example, the bootstrapped Spearman-rank correlation coefficient distribution between the inclination and $\sigma_{v, \text{LoS}}${} was $\rho(i, \sigma_{v, \text{LoS}}) = 0.18^{+0.05}_{-0.05}$, where the uncertainties for the Sperman-rank correlation coefficient is estimated as the 68\% shortest credible interval after bootstrap resampling. We also performed the above analysis using uniform priors for $\sigma_{v, z, 0}$ and $\gamma$ with the same ranges, yet we still find $\gamma = 0.80^{+0.06}_{-0.06}$. \begin{figure} \begin{center} \includegraphics[ width=0.5\textwidth, keepaspectratio=true, trim=0mm 4mm 0mm 2mm, clip=true]{3_Inferences/inc_vdisp_corner.pdf} \\ \end{center} \caption{Corner plot \citep{Foreman-Mackey2016} showing the marginalised (diagonal) and joint (off-diagonal) posterior distributions for the parameter estimation for the inclination dependence model. There is evidence for a dependence of $\sigma_{v, \text{LoS}}${} on inclination for our sample of galaxies from the SAMI Galaxy Survey. This suggests that the vertical velocity dispersion ($\sigma_{v, z}${}) is less than the averaged azimuthal and radial velocity dispersion ($\sigma_{v, \perp}$).} \label{fig:evdisp_samples} \end{figure} Previous studies have suggested that $\sigma_{v, z}/\sigma_{v, R} \sim 0.6$ \citep[Section 1.2.2, ][]{Glazebrook2013} for stars. Mean \ion{H}{i} gas velocity dispersion was reported up to $\sim 3$ times higher for galaxies observed at $i > 60^\circ$ compared to $i < 60^\circ$ by \citet{Leroy2008}, also suggesting that the contribution of $\sigma_{v, R}$ and $\sigma_{v, \phi}$ dominates. Studies of gas kinematics have typically not reported or found evidence that $\sigma_{v, z}$ is related to the inclination. For example, studies of high-$z$ in the KMOS3D Survey have found no significant correlation between the axis ratio $q = b/a$ and $\sigma_{v, \text{LoS}}${} \citep{Wisnioski2015,Ubler2019}. However, such a relation may be difficult to identify in high-$z$ galaxies with lower signal-to-noise and spatial resolution. We estimate the vertical velocity dispersion ($\sigma_{v, z}$) for individual galaxies by inverting Equation \ref{eq:sigmavzsolve} and using $\gamma$ = 0.8. We estimated the Spearman-rank correlation between the inclination and $\sigma_{v, z}${} to be $\rho(i, \sigma_{v, z}) = 0.00^{+0.05}_{-0.05}$ after performing the correction per galaxy, suggesting that our analysis appropriately removed the correlation as a function of the inclination angle. \begin{figure} \begin{center} \includegraphics[ width=0.5\textwidth, keepaspectratio=true, trim=17mm 9mm 0mm 16mm, clip=true]{3_Inferences/inc_vdisp.pdf} \\ \end{center} \caption{The relationship between the inclination ($i$) and inferred velocity dispersion estimates. We also show the PDF of the Spearman-rank correlation coefficients using bootstrap resampling (bottom). There is evidence for a weak positive correlation between the LoS velocity dispersion $\sigma_{v, \text{LoS}}${} and $i$. Whereas the distribution for the vertical velocity dispersion after applying a correction factor yields no relation with $i$.} \label{fig:globalvdisp} \end{figure} Converting from $\sigma_{v, \text{LoS}}${} to $\sigma_{v, z}${} adjusts the reported values by a couple of km s$^{-1}${}. 
The marginalised distributions yield \mbox{$\sigma_{v, \text{LoS}}${} = 21.1$^{+3.9}_{-5.2}$ km s$^{-1}${}} and \mbox{$\sigma_{v, z}${} = 18.8$^{+3.4}_{-4.8}$ km s$^{-1}${}} (see Figure \ref{fig:vdisp_hist}). Typical differences are \mbox{$\sigma_{v, \text{LoS}} - \sigma_{v, z} = 2.4^{+0.9}_{_-1.3}$ km s$^{-1}${}}, with the greatest correction being \mbox{$\sigma_{v, \text{LoS}} - \sigma_{v, z} = 7.9$ km s$^{-1}${}}. \begin{figure} \begin{center} \includegraphics[ width=0.5\textwidth, keepaspectratio=true, trim=5mm 8mm 8mm 18mm, clip=true]{3_Inferences/vdisp_hist.pdf} \\ \end{center} \caption{The distribution of the LoS ($\sigma_{v, \text{LoS}}${}, blue) and vertical ($\sigma_{v, z}${}, red) velocity dispersion for our sample of galaxies from the SAMI Galaxy Survey. The estimated vertical velocity dispersion is adjusted down with respect to $\sigma_{v, \text{LoS}}${} by a couple of km s$^{-1}${} in accordance with the inclination correction described in Section \ref{subsubsec:sigmavz}.} \label{fig:vdisp_hist} \end{figure} For the remainder of this paper, we will report the values of $\sigma_{v, z}${}. The subsequent analysis and results do not change qualitatively whether we use $\sigma_{v, z}${} or $\sigma_{v, \text{LoS}}${}, but $\sigma_{v, z}${} is preferred as it is an estimator free from effects from the viewing angle. It is also more appropriate to compare $\sigma_{v, z}${} to theoretical models, as they are typically framed with respect to $\sigma_{v, z}${}. We report both values in Appendix \ref{appendix:parameters}. We have not applied the inclination correction for galaxies observed in the DYNAMO survey. This is due to finding no significant relation with $\rho(i, \sigma_{v, \text{los}}) = -0.09^{+0.15}_{-0.15}$ for our galaxies from the DYNAMO survey. This suggests that there is no inclination effect to correct for within this sample. It may be that the sample from the DYNAMO survey is too small to infer the inclination effect. In this case, we choose not to apply the inclination effect found from the SAMI Galaxy Survey, as it is still possible that the inferred effect is methodological rather than physical across all galaxies. \subsection{Circular velocity estimates} \label{subsec:vcirc} \textsc{Blobby3D}{} estimates the LoS velocity profile using the empirical model proposed by \citet{Courteau1997}, \begin{equation} v(r) = v_\text{c} \frac{(1 + r_t/r)^\beta}{(1 + (r_t/r)^\gamma)^{1/\gamma}} \sin(i) \cos(\theta) + v_{\text{sys}}. \label{eq:vprof} \end{equation} Where $v_c$ is the asymptotic velocity and $v_\text{sys}$ is the systemic velocity. $r$ is defined by the distance to the kinematic centre. $r_t$ is the turnover radius. $\beta$ is a shape parameter that controls the gradient for $r > r_t$, where the velocity gradient increases for $\beta < 0$, and decreases when $\beta > 0$. $\gamma$ is a shape parameter that controls how sharply the velocity profile turns over. $i$ is the inclination of the galaxy. Then $\theta$ is the polar angle in the plane of the disc. We intend to estimate the circular velocity from our inferred parameters. While $v_c$ is a natural choice, it is difficult to get a strong constraint on $v_c$ across our complete sample due to the FoV for the SAMI Galaxy Survey typically extending out to $\sim$1.5 $R_e$. Instead, we estimate the absolute circular velocity at 2.2 $R_e$ denoted as $v_{2.2}$ following \citep{Bloom2017a}. For low values of $i$, small differences in the estimated $i$ can result in large difference of $v_{2.2}$. 
Therefore, for low values of $i$, incorrect estimates for the observed ellipticity can result in large changes in our estimates for the inclination. As such, we restrict our calculated values for $v_{2.2}$ to galaxies in the range $i \in [30^\circ, 60^\circ]$ ($e \in [0.13, 0.5]$ assuming a thin disc). Similarly, galaxies with $R_e < 3.0''$ tended to have very large scatter on their $v_{2.2}$. At these limits, the spatial resolution of our observations are likely playing a role in increasing the scatter in the rotational velocity estimates. 230 galaxies meet the above inclination and $R_e$ criteria. We only reference $v_{2.2}$ for galaxies that meet that inclination for the remainder of this paper. \subsection{Integrated star-formation rates} \label{subsec:sfr} We used the best fit SFRs from the GAMA Survey \citep{Gunawardhana2013,Davies2016,Driver2018}. The SFRs are estimated using full spectral energy distribution (SED) fitting of 21 bands of photometry across the UV, optical, and far infrared ranges with \textsc{MAGPHYS} \citep{daCunha2008}. \textsc{MAGPHYS} fits the observed photometry using a library that includes stellar spectral and dust emission profiles. In this way, the SFRs are corrected for dust emission. These estimates for the SFR were used instead of the SAMI H$\alpha$ luminosity maps as there are known aperture affects given the limited FoV of the SAMI instrument \citep[Appendix A,][]{Medling2018}. For the galaxies from the DYNAMO survey, we used the SFR values reported by \citet{Green2014}. SFRs were estimated using the H$\alpha$ luminosity estimated from their observations. The SFR estimate includes a dust correction using the Balmer decrement from the ratio between their measured H$\alpha$ and H$\beta$ measurements. The SFR was then calculated using the dust-corrected H$\alpha$ luminosity maps that were converted to SFR maps using the \citet{Kennicutt1994} conversion assuming a \citet{Chabrier2003} IMF. \subsection{Integrated \ion{H}{i} gas measurements} \label{subsec:totalhi} Follow-up 21 cm observations of SAMI galaxies were obtained as part of the SAMI-HI survey, carried out with the Arecibo radio telescope (Catinella et al. in prep.). Observations and data reduction were analogous to those of the xGASS survey \citep{Catinella2018}, with the only difference that these were not gas fraction-limited observations. We observed each galaxy until detected, but moved to another target if there was no hint of \ion{H}{i} signal within the first 20-25 minutes of on-source integration. \ion{H}{i} emission-line spectra were obtained for 153 galaxies with these dedicated follow-up observations; on-source integration times ranged between 2 and 50 minutes, with an average of 15 minutes. Together with an additional 143 good HI detections (i.e., classified as detection code `1') in the Arecibo Legacy Fast ALFA \citep[ALFALFA][]{Giovanelli2005,Haynes2018} survey, SAMI-\ion{H}{i} includes global \ion{H}{i} spectra for 296 SAMI galaxies from the SAMI Galaxy Survey catalogue. 95 galaxies overlap with our sample selection from the SAMI Galaxy Survey. \section{Results} \label{sec:results} \subsection{Low gas velocity dispersion in the SAMI Galaxy Survey} \label{subsec:vdisp_sami} We find vertical velocity dispersions lower than previously reported for studies of the gas kinematics in the SAMI Galaxy Survey. The median vertical velocity dispersion is \mbox{$\sigma_{v, z}${} = 18.8 km s$^{-1}${}} for our sample as shown in Figure \ref{fig:vdisp_hist}. 
The 68\% shortest credible interval is \mbox{[14.1, 22.1] km s$^{-1}${}} and the 95\% shortest credible interval is \mbox{[11.4, 30.0] km s$^{-1}${}}. The maximum inferred vertical velocity dispersion for a single galaxy is $\sigma_{v, z}${} = 51 km s$^{-1}${}. We now compare this to two other studies of the gas kinematics of galaxies from the SAMI Galaxy Survey by \citet{Zhou2017} and \citet{Johnson2018}. Analysing 8 star-forming galaxies in the SAMI Galaxy Survey, \citet{Zhou2017} found that 7 out of 8 galaxies had \mbox{$\sigma_\text{gas} \in [20, 31]$ km s$^{-1}${}}. Their remaining galaxy (GAMA 508421) was reported as \mbox{$\sigma_\text{gas} = 87 \pm 44$ km s$^{-1}${}}. GAMA 508421 exhibits a high circular velocity in the outskirts of the SAMI FoV \mbox{($v \sim 130$ km s$^{-1}${})} and a clear centralised peak in velocity dispersion that is typical of beam smearing affected galaxies. Our estimate for GAMA 508421 is \mbox{$\sigma_{v, z}${} = 22 km s$^{-1}${}}. As such, we suspect that the reported velocity dispersion for GAMA 508421 is greater than its intrinsic velocity dispersion. The discrepancy between \citet{Zhou2017} and our estimates, particularly with GAMA 508421, is most likely due to the different beam smearing corrections. \citet{Zhou2017} report the flux-weighted mean velocity dispersion using spaxels where \mbox{$\sigma_v > 2 v_\text{grad}$}. $v_\text{grad}$ is an estimate for the local velocity gradient using adjacent spaxels defined as \citep{Varidel2016}, \begin{equation} v_\text{grad}(x, y) = \sqrt{(v(x + 1) - v(x-1))^2 + (v(y + 1) - v(y - 1))^2}. \label{eq:vgrad} \end{equation} See Section 5.1.1 by \citet{Varidel2019} for a revised calculation of the velocity gradient using a finite-difference scheme. The approach used by \citet{Zhou2017} usually removes the centre of the galaxies, where the velocity gradient is steepest. This approach results in a significant downward correction compared to the uncorrected velocity dispersion estimates. However, the outskirts of galaxies can still be affected by beam smearing. Also, it is possible that the centre of the galaxy may be affected by beam smearing, yet not reach the \mbox{$\sigma_v > 2 v_\text{grad}$} criterion, which is likely to have occurred in the case of GAMA 508421. The approach of \citet{Zhou2017} was also shown previously to over-estimate the intrinsic velocity dispersion in toy models \citep[Section 5.1.1,][]{Varidel2019}. Another study of a sample of 274 star-forming galaxies from the SAMI Galaxy Survey was performed by \citet{Johnson2018}. They removed galaxies with \mbox{$M_* > 8 \times 10^{10}$ M$_\odot$} and S\'ersic index of $n > 2$. They also removed galaxies that they deemed to be spatially unresolved or to have kinematic uncertainties greater than 30\%. While they do not provide summary statistics for their inferred velocity dispersion values from the SAMI Galaxy Survey, their plots show a typical range of \mbox{$\sigma_0 \in [20, 60]$ km s$^{-1}${}}, plus one galaxy at \mbox{$\sigma_0 \sim 90$ km s$^{-1}${}}. This is slightly above our range of velocity dispersions. \begin{figure*} \begin{center} \includegraphics[ width=\textwidth, keepaspectratio=true, trim=15mm 15mm 15mm 15mm, clip=true]{4_Results/vdisp_corr.pdf} \\ \end{center} \caption{Comparing global intrinsic vertical velocity dispersion ($\sigma_{v, z}${}) to global properties for galaxies from the SAMI Galaxy Survey.
We show the relation of $\sigma_{v, z}${} with measures of mass (top), star-formation rate (middle), and rotational velocity (bottom), respectively. Red points indicate the galaxies with observed integrated \ion{H}{i} masses. The Spearman-rank correlation coefficients are shown at the top of each plot, with brackets indicating the correlation coefficient for galaxies with measured \ion{H}{i} masses. The uncertainties for the Spearman-rank correlation coefficients are estimated as the 68\% shortest credible interval from 10$^4$ bootstrapped samples. We find significant positive correlations with measures of mass, star-formation rate, and rotational velocity. The greatest positive correlation we find is with star-formation rate surface density ($\Sigma_\text{SFR}${}).} \label{fig:globaltrends} \end{figure*} To estimate the intrinsic velocity dispersion, \citet{Johnson2018} calculated the median velocity dispersion across the kinematic maps or at the outskirts of their galaxies. They then applied a further correction to their estimated velocity dispersions using a lookup table of toy galaxies that were constructed with beam smearing effects. The slight difference between our studies may be driven solely by their choice of using a single FWHM estimate for the PSF rather than the Moffat profile used in this paper. Also, their estimator may exhibit increased scatter because it is affected by low signal-to-noise spaxels in the outskirts of the galaxies. \subsection{Correlation of global velocity dispersion and integrated star-formation rate} \label{subsec:vdisp_sfr} Correlation analysis between the global velocity dispersion and several global galaxy properties from the SAMI Galaxy Survey reveals that $\sigma_{v, z}${} has the greatest positive correlation with star-formation rate measures (Figure \ref{fig:globaltrends}). We estimate the Spearman-rank correlation between the SFR and $\sigma_{v, z}${} to be $\rho$(SFR, $\sigma_{v, z}${}) = $0.44^{+0.05}_{-0.05}$. We control for several factors in order to investigate this relationship further. The correlation between $\sigma_{v, z}${} and star-formation rate increases when accounting for the galaxy size. To do this, we estimate the average star-formation rate surface density, $\Sigma_\text{SFR}${} = SFR/$\pi{} R_e^2$ where $R_e$ is the effective radius. The Spearman-rank correlation is then $\rho$($\Sigma_\text{SFR}${}, $\sigma_{v, z}${}) = $0.54^{+0.04}_{-0.04}$. Velocity dispersion is expected to increase with star-formation rate surface density assuming that star-formation feedback processes are acting as a driver of turbulence \citep[e.g.][]{Ostriker2011,Faucher-Giguere2013}. As such, this does provide support that star-formation feedback processes are acting as a driver of turbulence within this sample of galaxies. Figure \ref{fig:globaltrends} also shows a positive correlation between $\sigma_{v, z}${} and integrated stellar mass (M$_*$), \ion{H}{i} gas mass (M$_\text{\ion{H}{i}}$), as well as the sum of M$_*$ and M$_\text{\ion{H}{i}}$. Interestingly, there is a suggestion that M$_\text{\ion{H}{i}}$ is slightly more correlated than M$_*$ with $\sigma_{v, z}${}, although the uncertainties are wide enough that we cannot confirm that is the case. SFR is well known to be correlated with M$_*$, which adds a further complication in determining the relation between $\sigma_{v, z}${} and SFR. To account for the SFR -- M$_*$ relation, we calculated the specific star-formation rate (sSFR = SFR/M$_*$) and $\Delta$MS. 
$\Delta$MS is calculated as the log difference between the SFR and the star-forming main sequence relation as proposed by \citet{Renzini2015}. We find that the correlation between $\sigma_{v, z}${} and star-formation rate decreased after accounting for stellar mass. This suggests that the relation between $\sigma_{v, z}${} and star-formation rate is a combination of both SFR and stellar mass related quantities. Despite the correlation between $\sigma_{v, z}${} and star-formation rate estimators, the absolute change in $\sigma_{v, z}${} as a function of SFR remains slight across the dynamic range of SFR $\in [10^{-3}, 10]$ M$_\odot$ yr$^{-1}${}. We report the change in velocity dispersion in 5 SFR bins in Table \ref{tab:surveys_sfr_vdisp}. The change in mean velocity dispersion between the end bins from SFR = 0.029 M$_\odot$ yr$^{-1}${} to SFR = 2.4 M$_\odot$ yr$^{-1}${} is only 6.41 km s$^{-1}${}. A similarly shallow gradient was found by \citet{Johnson2018} using data from the SAMI Galaxy Survey. \begin{table*} \caption{Comparing summary statistics of the vertical velocity dispersion in other samples compared to those in this work. Each sample was split into 5 bins of equal percentile widths. We show the mean ($\bar\sigma_{v, z}$), standard deviation ($\Delta$$\sigma_{v, z}${}), the standard error ($\Delta \bar\sigma_{v,z}$), median (med($\sigma_{v, z}${})), and bootstrap resampled standard deviation of the median ($\Delta$med($\sigma_{v, z}${})). The groups of galaxies are as follows: Low-$z$ (H$\alpha$) \citep{Epinat2008,Moiseev2015}, \ion{H}{i} surveys where 15 km s$^{-1}${} has been added in-quadrature \citep{Leroy2008,Walter2008,Ianjamasimanana2012,Stilp2013}, high-$z$ analogues from \citet{Varidel2016} plus the re-analysed galaxies from the DYNAMO survey, plus high-$z$ (H$\alpha$) \citep{Johnson2018,Cresci2009,Wisnioski2011,Epinat2009,Jones2010,DiTeodoro2016}. 
} \begin{center} \begin{tabular}{lccccccc} \hline Group & Bin & SFR (M$_\odot$ yr$^{-1}${}) & $\bar\sigma_{v, z}$ (km s$^{-1}${}) & $\Delta$$\sigma_{v, z}${} (km s$^{-1}${}) & $\Delta \bar\sigma_{v,z}$ (km s$^{-1}${}) & med($\sigma_{v, z}${}) (km s$^{-1}${}) & $\Delta$med($\sigma_{v, z}${}) (km s$^{-1}${}) \\ \hline SAMI (H$\alpha$) & 1 & 0.029 & 17.12 & 3.21 & 0.39 & 17.13 & 0.29 \\ & 2 & 0.11 & 18.54 & 3.99 & 0.49 & 18.31 & 0.41 \\ & 3 & 0.25 & 18.79 & 4.34 & 0.53 & 18.52 & 0.43 \\ & 4 & 0.57 & 21.07 & 6.47 & 0.79 & 19.72 & 0.71 \\ & 5 & 2.4 & 23.54 & 5.35 & 0.65 & 23.54 & 0.64 \\ \hline Low-$z$ (H$\alpha$) & 1 & 0.0047 & 19.46 & 2.89 & 0.43 & 18.84 & 0.72 \\ & 2 & 0.046 & 20.77 & 4.33 & 0.65 & 19.21 & 0.41 \\ & 3 & 0.18 & 20.57 & 3.86 & 0.58 & 19.21 & 0.6 \\ & 4 & 0.37 & 21.66 & 4.55 & 0.68 & 19.85 & 0.44 \\ & 5 & 1.0 & 23.5 & 7.0 & 1.0 & 21.21 & 0.81 \\ \hline Low-$z$ (H\textsc{i}) & 1 & 0.0014 & 16.95 & 0.55 & 0.18 & 16.86 & 0.15 \\ & 2 & 0.005 & 17.39 & 0.64 & 0.20 & 17.44 & 0.25 \\ & 3 & 0.066 & 18.65 & 2.98 & 0.99 & 17.81 & 0.6 \\ & 4 & 0.58 & 19.18 & 1.36 & 0.43 & 18.78 & 0.57 \\ & 5 & 2.2 & 20.82 & 2.58 & 0.82 & 19.9 & 1.4 \\ \hline High-$z$ & 1 & 0.96 & 27.0 & 3.2 & 1.1 & 26.23 & 0.94 \\ Analogues (H$\alpha$) & 2 & 3.2 & 39.4 & 12.6 & 4.4 & 40.0 & 4.9 \\ & 3 & 9.1 & 40.7 & 14.3 & 5.0 & 41.2 & 7.8 \\ & 4 & 17 & 43.0 & 15.2 & 5.4 & 42.9 & 7.6 \\ & 5 & 27 & 55.9 & 15.6 & 5.2 & 54.8 & 5.4 \\ \hline High-$z$ (H$\alpha$) & 1 & 3.4 & 44.0 & 20.5 & 1.6 & 39.8 & 1.9 \\ & 2 & 6.4 & 45.8 & 18.2 & 1.5 & 43.1 & 1.2 \\ & 3 & 10 & 44.3 & 20.3 & 1.6 & 42.8 & 3.2 \\ & 4 & 20 & 48.3 & 20.2 & 1.6 & 45.0 & 1.5 \\ & 5 & 82 & 53.2 & 20.0 & 1.6 & 51.0 & 2.6 \\ \hline \end{tabular} \label{tab:surveys_sfr_vdisp} \end{center} \end{table*} Galaxies are often kinematically classified as either rotationally or turbulence dominated by comparing the ratio of rotational and random velocities ($v/\sigma$). In a similar vain to such analysis, we also investigated the relation between $\sigma_{v, z}${} and rotational velocity. $\sigma_{v, z}${} is shown compared to the rotational velocity measures using \textsc{Blobby3D}{} ($v_{2.2}$) as outlined in Section \ref{subsec:vcirc} and using the Tully-Fisher relation \citep[$v_{2.2, \text{tf}}$,][]{Bloom2017b}. We find a positive correlation between $\sigma_{v, z}${} and the rotational velocity estimators. This is to be expected as rotational velocity is also correlated with stellar mass. To control for that effect, we calculated the ratio between $v_{2.2}$ and $v_{2.2, \text{tf}}$. We then find a negative correlation between $\sigma_{v, z}${} and $v_{2.2} / v_{2.2, \text{tf}}$. As such, we observe that galaxies exhibit greater rotation than their mass predicts when $\sigma_{v, z}${} is lesser, and lesser rotation when $\sigma_{v, z}${} is greater. \subsection{Comparisons with other surveys} \label{subsec:surveys} In this section we aim to describe our results from the SAMI Galaxy Survey in the context of other studies. In Table \ref{tab:surveys_sfr_vdisp} and Figure \ref{fig:sfr_vdisp_compar} we show comparisons of velocity dispersion compared to SFR. 
The data are shown in four groups of galaxies: low-$z$ measured using H$\alpha$ \citep{Epinat2008,Moiseev2015}, low-$z$ measured using \ion{H}{i} \citep{Leroy2008,Walter2008,Ianjamasimanana2012,Stilp2013}, high-$z$ analogues from \citet{Varidel2016} plus the galaxies that we re-analysed from the DYNAMO sample, and high-$z$ galaxies at $z \gtrsim 1$ \citep{Johnson2018,Cresci2009,Wisnioski2011,Epinat2009,Law2009,Jones2010,DiTeodoro2016}. Table \ref{tab:sami_compare} also outlines qualitative ranges for the galaxy parameters for galaxies at low-$z$ measured using the H$\alpha$ emission line, including other studies of the SAMI and DYNAMO samples. \begin{figure*} \begin{center} \includegraphics[ width=\textwidth, keepaspectratio=true, trim=3mm 8mm 5mm 10mm, clip=true]{4_Results/sfr_vdisp_compare.pdf} \\ \end{center} \caption{Comparison of the SFR -- velocity dispersion ($\sigma_{v}${}) relation with other surveys in the literature. The sets of galaxies that constitute each subplot are the same as outlined in Table \ref{tab:surveys_sfr_vdisp}. We find that the SFR -- $\sigma_{v}${} relation increases slightly across the range SFR $\in [10^{-3}, 1]$ M$_\odot$ yr$^{-1}${}, then turns up significantly at SFR $\gtrsim$ 1 M$_\odot$ yr$^{-1}${}. This relation is approximately consistent across all surveys. } \label{fig:sfr_vdisp_compar} \end{figure*} The comparative data sets have been measured using both ionised and neutral gas. For ionised gas, there are two additional contributions to the velocity dispersion. One is the thermal broadening of \mbox{$\sigma_\text{thermal} \sim 9$ km s$^{-1}${}}, corresponding to the typical temperature of an \ion{H}{ii} region. There is also a contribution from the expansion speed of the \ion{H}{ii} region. Studies of the expansion speed reveal \mbox{$\sigma_\text{expand} \sim 10$ km s$^{-1}${}} for small regions, up to \mbox{$\sigma_\text{expand} \sim 13 - 17$ km s$^{-1}${}} for larger regions \citep{Chu1994}. Given the contributions of $\sigma_\text{thermal}$ and $\sigma_\text{expand}$ to the observed ionised gas kinematics, we perform several adjustments to the comparative velocity dispersion estimates. For ionised gas estimates, we remove any corrections for the additional contributions. For \ion{H}{i} studies, we assume a nominal contribution due to these effects of 15 km s$^{-1}${}, which we add in quadrature to the published velocity dispersion estimates. We note that in other studies, 15 km s$^{-1}${} has been subtracted in quadrature from the ionised gas measurements for comparisons between different studies. We prefer to add the nominal contribution to the \ion{H}{i} estimates, as 15\% of our galaxies have \mbox{$\sigma_{v, z}${} < 15 km s$^{-1}${}}, for which a quadrature subtraction would be ill-defined. \subsubsection{Comparison with surveys at low-$z$} \label{subsubsec:lowz_surveys} The SAMI Galaxy Survey has similar selection criteria to the Mapping Nearby Galaxies at Apache Point Observatory \citep[MaNGA,][]{Bundy2015} survey in terms of fundamental galaxy properties (see Table \ref{tab:sami_compare}). Our data have similar ranges in redshifts, stellar mass, and SFR. As such, we would naively expect the gas turbulence within our sample to be similar to the MaNGA survey estimates. 
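As a point of reference for the magnitudes quoted above, the following minimal sketch (our illustration in Python, not part of the original analysis) evaluates the thermal broadening of hydrogen at an assumed \ion{H}{ii}-region temperature of $10^4$ K, and applies the nominal 15 km s$^{-1}${} quadrature adjustment used for the \ion{H}{i} samples; the temperature is an assumption on our part, while the 15 km s$^{-1}${} term is the nominal contribution stated above.
\begin{verbatim}
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant [J / K]
M_H = 1.6735575e-27  # mass of a hydrogen atom [kg]

def sigma_thermal(T=1.0e4):
    """1D thermal broadening of hydrogen at temperature T [K], in km/s."""
    return np.sqrt(K_B * T / M_H) / 1.0e3

def add_nominal_contribution(sigma_hi, extra=15.0):
    """Add the nominal thermal + expansion contribution [km/s] in
    quadrature to a published HI velocity dispersion."""
    return np.hypot(sigma_hi, extra)

print(sigma_thermal())               # ~9.1 km/s for T ~ 1e4 K
print(add_nominal_contribution(11))  # e.g. 11 km/s (HI) -> ~18.6 km/s
\end{verbatim}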
\begin{table*} \caption{Qualitative ranges of galaxy parameters for low-$z$ samples in the literature, where gas kinematics were estimated using the H$\alpha$ emission line.} \begin{center} \begin{tabular}{lcccc} \hline Sample & $z$ & log$_{10}$(M$_*$/M$_\odot$) & log$_{10}$(SFR / M$_\odot$ yr$^{-1}${}) & $\sigma_v$ (km s$^{-1}${}) \\ \hline SAMI (this work) & [0.005, 0.08] & [7.5, 11] & [-3, 1] & [10, 60] \\ SAMI \citep{Johnson2018} & $< 0.1$ & [7.5, 11] & [-3, 1] & [20, 90] \\ SAMI \citep{Zhou2017} & $< 0.1$ & [9.8, 10.8] & - & [20, 90] \\ DYNAMO (this work) & [0.06, 0.15] & [9, 11] & [-1, 2] & [10, 80] \\ DYNAMO \citep{Green2014} & [0.06, 0.15] & [9, 11] & [-1, 2] & [10, 90] \\ GHASP \citep{Epinat2008} & $\sim 0.01$ & - & [-3, 1] & [15, 30] \\ \citet{Moiseev2015} & < 90 Mpc & - & [-3, 1] & [15, 40] \\ \citet{Varidel2016} & [0.01, 0.04] & [10.5, 11] & [1, 1.6] & [20, 50] \\ MaNGA \citep{Yu2019} & [0.01, 0.15] & [8.5, 11.5] & [-2, 1] & [10, 130] \\ \hline \end{tabular} \label{tab:sami_compare} \end{center} \end{table*} We find systematically lower velocity dispersions than those estimated by \citet{Yu2019}. They estimated mean velocity dispersions of \mbox{$\sigma \in [20, 50]$ km s$^{-1}${}} across various galaxy property ranges \citep[Figure 6,][]{Yu2019}. Specifically for SFR vs. velocity dispersion they found mean \mbox{$\sigma \in [30, 50]$ km s$^{-1}${}} across 4 bins in the range \mbox{SFR $\in [10^{-2}, 10]$ M$_\odot$ yr$^{-1}${}}. Whereas we estimate mean $\bar\sigma_{v, z} \in [17, 24]$ km s$^{-1}${} across 5 bins of \mbox{SFR $\in [10^{-3}, 10]$}. \citet{Yu2019} also reported galaxies with velocity dispersion of \mbox{$\sigma_v \gtrsim 50$ km s$^{-1}${}} up to \mbox{$\sigma_v \sim 130$ km s$^{-1}${}}. This is similar to $\sigma_v$ estimates for galaxies at high redshift (see high-$z$ galaxies, Table \ref{tab:surveys_sfr_vdisp}). However, we see very little evidence for a significant fraction of galaxies with \mbox{$\sigma_v \gtrsim 50$ km s$^{-1}${}}. The spectral resolution of \mbox{$\sigma_{\text{LSF}}${} $\in [50, 80]$ km s$^{-1}${}} \citep{Bundy2015,Yu2019} may be an issue for MaNGA. The variability in the MaNGA spectral resolution could correspond to a large scatter in their estimated velocity dispersion, that may explain their upper limit of $\sigma_v \sim 100$ km s$^{-1}${}. We also show that the velocity dispersion is significantly less than their spectral resolution, thus their assumptions regarding the LSF will be important. Instead, our results are closer to the velocity dispersion estimates found in the Gassendi HAlpha survey of SPirals \citep[GHASP,][]{Epinat2008}, where their galaxies overlap in SFR. We can see in \mbox{Figure \ref{fig:sfr_vdisp_compar}} that our samples match well with the work of \citet{Epinat2008} both in terms of mean velocity dispersion and gradient as a function of SFR. We only disagree slightly in terms of the intrinsic scatter, which could be sample selection, methodology, or signal-to-noise dependent. We highlight that \citet{Epinat2008} estimated their velocity dispersion using the residuals in spatially resolved mean velocity compared to a rotational velocity model. As such, their measurements are fundamentally different and should not be affected by $\sigma_\text{thermal}$ and $\sigma_\text{expand}$. So we added \mbox{15 km s$^{-1}${}} in quadrature to their published velocity dispersion estimates for comparison purposes. 
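Returning to the MaNGA comparison above, where \mbox{$\sigma_{\text{LSF}}${} $\in [50, 80]$ km s$^{-1}${}} is several times the dispersions we infer, the following sketch (our illustration with assumed values, not measurements from either survey) propagates a modest error in the adopted LSF width through a simple quadrature subtraction, taking $\sigma_{v} = 20$ km s$^{-1}${} and a 70 km s$^{-1}${} LSF as representative numbers.
\begin{verbatim}
import numpy as np

def recovered_sigma(sigma_obs, sigma_lsf_assumed):
    """Intrinsic dispersion from quadrature subtraction of the assumed LSF [km/s]."""
    diff = sigma_obs**2 - sigma_lsf_assumed**2
    return np.sqrt(diff) if diff > 0 else np.nan  # undefined if the assumed LSF is too broad

sigma_true, sigma_lsf = 20.0, 70.0           # assumed illustrative values [km/s]
sigma_obs = np.hypot(sigma_true, sigma_lsf)  # ~72.8 km/s observed line width

for err in (-5.0, 0.0, 5.0):                 # mis-estimate of the LSF width [km/s]
    print(err, recovered_sigma(sigma_obs, sigma_lsf + err))
# -5 km/s -> ~33 km/s, 0 -> 20 km/s, +5 km/s -> undefined
\end{verbatim}
A mis-estimate of only 5 km s$^{-1}${} in a 70 km s$^{-1}${} LSF therefore moves the recovered dispersion from 20 km s$^{-1}${} to either $\sim$33 km s$^{-1}${} or an undefined value, which is one way in which a variable spectral resolution could translate into a large scatter in the inferred $\sigma_v$.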
Our results are also qualitatively similar to those published by \citet{Moiseev2015}, who studied a sample of nearby dwarf galaxies. Their results agree with the higher end of our velocity dispersion estimates, although there is still an offset in the mean velocity dispersion. We note that \citet{Moiseev2015} do not explicitly correct for beam smearing, but due to studying nearby galaxies at < 90 Mpc, the effects of beam smearing should be minimal. Combining the results of \citet{Moiseev2015} and \citet{Epinat2008}, we find differences of the mean and median velocity dispersion estimates compared to our sample of \mbox{$\sim 1-3$ km s$^{-1}${}} (see Table \ref{tab:surveys_sfr_vdisp}), where our results were systematically lower. The difference of $\sim 2$ km s$^{-1}${} could be explained due to calculating $\sigma_{v, z}${} rather than $\sigma_{v, \text{LoS}}${}, which resulted in a downward shift in our velocity dispersion estimates by \mbox{$\sim 2$ km s$^{-1}${}} as described in Section \ref{subsubsec:sigmavz}. We find little difference in the intrinsic scatter between our sample and the combined samples of \citet{Moiseev2015} and \citet{Epinat2008}. Calculating the 1-sigma standard deviation for the sample ($\Delta$$\sigma_{v, z}${}), sample mean ($\Delta\bar\sigma_{v, z}$), and median ($\Delta$med($\sigma_{v, z}${})), we find that all variance estimates were of similar magnitude (see Table \ref{tab:surveys_sfr_vdisp}). As such, we conclude that our results are approximately consistent with the analyses of \citet{Moiseev2015} and \citet{Epinat2008} at low-$z$ using ionised gas, albeit with different selection and methodologies in inferring the intrinsic velocity dispersion. The only exception in inferred velocity dispersions at low-$z$ using the ionised gas is the results of \citet{Yu2019} using MaNGA data where we estimate systematically lower $\sigma_v$. Comparisons to the \ion{H}{i} observations suggest that we get the same approximately flat SFR -- $\sigma_{v}${} relation across the range SFR $\in [10^{-3}, 10]$ M$_\odot$ yr$^{-1}${}. While there are only slight differences between the mean velocity dispersion of \mbox{$\sim 1-4$ km s$^{-1}${}} across varying SFR ranges, it is important to reiterate that the \ion{H}{i} results have \mbox{15 km s$^{-1}${}} added in quadrature, which is the typical difference between \ion{H}{i} and H$\alpha$ estimates for the velocity dispersion. The varying contributions of $\sigma_\text{thermal}$ and $\sigma_\text{expand}$ may cause a larger scatter than the neutral hydrogen estimates. \subsubsection{Comparisons with surveys at high-$z$ and high-$z$ analogues} \label{subsubsec:highz_surveys} We now compare our results to those at high-$z$ and high-$z$ analogues. The data sets included are from the DYNAMO survey, which we have re-analysed using \textsc{Blobby3D}{}. We also include the beam-smearing corrected estimates denoted as $\sigma_{\text{m}, \text{uni}, v_g=0}$ from \citet[][]{Varidel2016}. These samples are of galaxies at low-$z$ with SFR $\gtrsim$ 1 M$_\odot$ yr$^{-1}${}, that are similar to galaxies at high-$z$ (see Table \ref{tab:surveys_sfr_vdisp}). As such, high-$z$ analogues are likely to have similar properties to our galaxy sample at similar SFR. Our re-analysis of the galaxies from the DYNAMO survey find results consistent with \citet{Green2014}. The difference between our results and those of \citet{Green2014} are \mbox{$\sigma_{v, z} - \sigma_{v, \text{green}} = 0.0^{+4.9}_{-6.5}$ km s$^{-1}${}}. 
Follow-up studies of galaxies from the DYNAMO survey have also found similar results, including re-analyses using alternative beam smearing corrections \citep{Bekiaris2016} and observations using adaptive optics \citep{Oliva-Altamirano2018}. There is a slight increase in $\sigma_v$ when comparing SAMI with the high-$z$ analogues at overlapping SFR. We estimate \mbox{$\bar\sigma_{v, \text{SAMI}} = 23.54\pm0.65$ km s$^{-1}${}} at \mbox{SFR $\sim 2.4$ M$_\odot$ yr$^{-1}${}}, compared to \mbox{$\bar\sigma_{v, \text{HzA}} = 27.0\pm1.1$ km s$^{-1}${}} at \mbox{SFR $\sim 1$ M$_\odot$ yr$^{-1}${}} and \mbox{$\bar\sigma_{v, \text{HzA}} = 39.4\pm4.4$ km s$^{-1}${}} at \mbox{SFR $\sim 3.2$ M$_\odot$ yr$^{-1}${}} for the high-$z$ analogues (see Table \ref{tab:surveys_sfr_vdisp}). The highest velocity dispersions are primarily from the DYNAMO survey. We note that while \textsc{Blobby3D}{} was applied to both samples, the PSF for DYNAMO was assumed to be a Gaussian profile compared to a Moffat profile for the SAMI Galaxy Survey. This may result in an increased beam smearing correction in the SAMI Galaxy Survey compared to the DYNAMO survey. Also, the inclination correction was only applied to SAMI, which resulted in a $\sim$ 2 km s$^{-1}${} subtraction from the initially inferred velocity dispersion from \textsc{Blobby3D}{}. As such, a difference of $\sim$ 10 km s$^{-1}${} may not be significant given the limitations of comparing the two samples. The high-$z$ analogues extend the trend of increasing $\sigma_{v}${} with SFR (see Figure \ref{fig:sfr_vdisp_compar}). This trend starts to increase within the sample from the SAMI Galaxy Survey at SFR $\gtrsim 1$ M$_\odot$ yr$^{-1}${}. Expanding the star-formation rate range up to SFR $\sim$ 100 M$_\odot$ yr$^{-1}${} using the high-$z$ analogues, we see that the trend steepens dramatically, with $\sigma_v$ up to 80 km s$^{-1}${} in the range SFR $\in$ [10, 100] M$_\odot$ yr$^{-1}${}, which is qualitatively consistent with samples at high-$z$. The high-$z$ galaxies exhibit a wide range of \mbox{$\sigma_v \in [10, 150]$ km s$^{-1}${}}. Some of this extent is likely to be driven by lower signal-to-noise at higher redshift. Furthermore, systematic biases such as beam smearing effects, which act to increase $\sigma_v$, will be greater due to the lower spatial resolution. Nevertheless, the high-$z$ galaxies exhibit similar $\sigma_v$ to the high-$z$ analogues when studied as a group. The high-$z$ galaxies also exhibit a trend of increasing velocity dispersion as a function of SFR. There is a change from \mbox{$\sigma_v$ $\sim$ 40 km s$^{-1}${}} to \mbox{$\sim$ 50 km s$^{-1}${}} for \mbox{SFR of 3 to 82 M$_\odot$ yr$^{-1}${}} (see Table \ref{tab:surveys_sfr_vdisp}). We estimated the correlation to be \mbox{$\rho(\text{SFR}, \sigma_v) = 0.17^{+0.03}_{-0.04}$}. This is a weaker correlation between SFR and $\sigma_v$ than observed in low-$z$ galaxies. The weaker correlation is likely linked to the increased scatter for observations of galaxies at high-$z$. The increase in scatter may be driven by signal-to-noise, beam smearing effects due to lower spatial resolution, or a change in the physical drivers of gas turbulence at high-$z$. There is evidence for increased $\sigma_v$ at high-$z$ compared to the high-$z$ analogues at similar SFRs. In Table \ref{tab:surveys_sfr_vdisp}, we show binned estimators across the range SFR $\in [3, 30]$ M$_\odot$ yr$^{-1}${} for these two samples. $\sigma_v$ is $\sim$5 km s$^{-1}${} higher at similar SFRs for the high-$z$ galaxies compared to the high-$z$ analogues. 
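For reference, the Spearman-rank coefficients and uncertainties quoted throughout this section (e.g. $\rho$(SFR, $\sigma_{v, z}${}) $= 0.44^{+0.05}_{-0.05}$) were estimated from 10$^4$ bootstrap resamples with 68\% shortest credible intervals (see the caption of Figure \ref{fig:globaltrends}). A minimal sketch of that procedure is given below; the array names \texttt{sfr} and \texttt{sigma\_vz} are placeholders for the measured quantities.
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def shortest_interval(samples, frac=0.68):
    """Shortest interval containing a fraction `frac` of the samples."""
    s = np.sort(samples)
    n = int(np.ceil(frac * len(s)))
    widths = s[n - 1:] - s[:len(s) - n + 1]
    i = np.argmin(widths)
    return s[i], s[i + n - 1]

def bootstrap_spearman(x, y, n_boot=10000, seed=0):
    """Spearman rho and its 68% shortest credible interval from bootstrap resampling."""
    rng = np.random.default_rng(seed)
    rho = spearmanr(x, y).correlation
    boot = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        boot[k] = spearmanr(x[idx], y[idx]).correlation
    return rho, shortest_interval(boot)

# rho, (lo, hi) = bootstrap_spearman(sfr, sigma_vz)
\end{verbatim}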
\section{The drivers of turbulence within low-$z$ galaxies} \label{sec:vdisp_drivers} Turbulence in the Interstellar Medium (ISM) is expected to dissipate on the order of the disc crossing time \citep{MacLow1998,Stone1998}. Thus, an ongoing energy source is required to maintain supersonic gas turbulence across epochs. Two proposed drivers are star-formation feedback processes and gravity-driven turbulence. \subsection{Star formation feedback driven turbulence} \label{subsec:sffeedback} Star-formation feedback processes inject momentum into the ISM through several mechanisms. These mechanisms include supernovae, stellar winds, expanding \ion{H}{ii} regions, and radiation pressure from highly dense star clusters. Therefore, it has been proposed that star-formation feedback processes could provide an ongoing source of energy for the supersonic turbulence in the ISM. Observational studies have routinely found a positive correlation between global $\sigma_v$ and SFR, which has been used as evidence to support star-formation feedback processes as a driver of turbulence \citep{Green2010,Green2014,Moiseev2015,Johnson2018,Ubler2019,Yu2019}. In Section \ref{subsec:vdisp_sfr} we showed that this correlation exists in our sample of galaxies. We also showed that this correlation extends to higher SFR when connecting our sample to other galaxy surveys. The relationship between SFR and $\sigma_v$ has also been considered in theoretical and computational studies. Typically, the energy contribution from supernovae is considered to dominate, and it has therefore been the primary focus of most of these studies. The momentum injection per unit mass of stars formed is often assumed to be on the order of \mbox{$\langle p_*/m_* \rangle${} = 3000 km s$^{-1}${}}. Incorporating this momentum injection into theoretical models amounts to assuming that the rate of momentum injection is proportional to the star-formation rate surface density, thus \mbox{$\dot{P}$ $\propto$ $\langle p_*/m_* \rangle${} $\Sigma_\text{SFR}$} \citep[e.g.][]{Ostriker2011,Faucher-Giguere2013,Krumholz2018}. Therefore, we expect the velocity dispersion to be positively correlated with star-formation rate surface density if star-formation feedback processes are playing a role in driving turbulence in the ISM. We showed in Section \ref{subsec:vdisp_sfr} that $\sigma_{v, z}${} has a strong positive correlation with the galaxy-averaged star-formation rate surface density. This is consistent with other analyses of the star-formation rate surface density and velocity dispersion \citep[e.g.][]{Lehnert2009,Yu2019,Ubler2019}. In some cases, this has been used as evidence for star-formation feedback processes acting as a primary driver of turbulence \citep{Lehnert2009,Lehnert2013}. If star-formation feedback processes are acting as a driver of turbulence, we should also expect the localised $\Sigma_\text{SFR}${} and $\sigma_{v}${} to be correlated; some analyses have found such a relation \citep{Lehnert2009,Lehnert2013}, while other studies have found a weak or statistically insignificant relation between these localised properties \citep{Genzel2011,Varidel2016,Zhou2017,Ubler2019}. Another approach to comparing the observed velocity dispersion with the star-formation rate is a bottom-up approach, whereby $\Sigma_\text{SFR}${} is modelled on the local scale and then integrated across the disc to estimate the SFR. 
To estimate $\Sigma_\text{SFR}${} as a function of galaxy properties, it is first noted that the star-formation rate surface density depends on the star-forming molecular fraction ($f_\text{sf}$) of the gas surface density ($\Sigma_\text{gas}$), which is converted to stars with a star-formation efficiency per free-fall time ($\epsilon_\text{ff}${}). Following \citet{Krumholz2018}, this can be written as, \begin{equation} \Sigma_\text{SFR} = \frac{\epsilon_\text{ff}}{t_\text{ff}} f_\text{sf} \Sigma_\text{gas}, \end{equation} where the remaining undefined quantity is the free-fall time ($t_\text{ff}$). This can then be incorporated into models to make predictions for the velocity dispersion. One approach is to assume that the star-formation law is retained on the subgalactic scale. This assumes that $\epsilon_\text{ff}${} is approximately constant across the galaxy, which is broadly in agreement with the literature \citep{Krumholz2007,Krumholz2012,Federrath2013,Salim2015,Krumholz2019}. While some studies have found evidence for varying $\epsilon_\text{ff}${} as a function of galaxy properties \citep{Hirota2018,Utomo2018}, the results and implications for the value of $\epsilon_\text{ff}${} remain in dispute. Furthermore, studies using the above approximation have found that \mbox{$\sigma_{v, z} \lesssim 25$ km s$^{-1}${}}, with little variation of $\sigma_{v, z}${} as a function of star-formation rate \citep{Ostriker2011,Krumholz2018}. As noted in the above samples, there is a large population of galaxies with \mbox{$\sigma_{v, z}${} $\gtrsim$ 25 km s$^{-1}${}}, particularly at high redshifts. As such, it is unlikely that this model is able to explain the full range of observed $\sigma_{v, z}${}. Furthermore, such models allow for the variation of the Toomre $Q$ stability parameter, which leads to disagreements with observations. Hereafter, we will use the `No Transport, Fixed $\epsilon_\text{ff}${}' model constructed by \citet{Krumholz2018} as representative of such models. Another approach is to assume that $\epsilon_\text{ff}${} can vary as a function of galaxy properties. One such approach was developed by \citet{Faucher-Giguere2013}, which assumes that the Toomre stability criterion $Q$ self-regulates to 1. In their model, when $Q < 1$ the rate of constructing giant molecular clouds (GMCs) increases, thus increasing the star-formation efficiency and driving $Q$ upwards to 1. When $Q > 1$, the rate of GMC construction is limited and thus star-formation slows, leading to $Q$ decreasing to 1. The \citet{Faucher-Giguere2013} model predicts that $\epsilon_\text{ff}${} increases with the molecular gas content of the galaxy, leading to a correlation between SFR and velocity dispersion, thus potentially providing an explanation for the SFR -- $\sigma_{v}${} relation. Hereafter, we will refer to this model as `No Transport, Fixed $Q$' and use the analytical model proposed by \citet{Krumholz2018} for comparison in the following sections. \subsection{Gravity driven turbulence} \label{subsec:gravturbulence} An alternative to star-formation feedback processes is the driving of turbulence by gravitational mechanisms. In such models, the gravitational potential energy of the gas is converted to kinetic energy, thus driving the turbulence in the ISM. This can occur via several mechanisms: accretion onto the disc, accretion through the disc, gravitational instabilities in the disc, or gravitational interactions between components of the disc. 
During the initial formation of the disc, there is evidence that accretion onto the disc can cause high levels of gas turbulence. However, this can only be sustained on the order of the accretion time \citep[][]{Aumer2010,Elmegreen2010}. After initial disc formation, the effect of accretion onto the disc is unlikely to make a significant contribution to the gas turbulence \citep{Hopkins2013}. Instead, it has been shown that the supersonic turbulence initially set in the ISM during galaxy formation will quickly approach a steady-state solution \citep{Krumholz2010}. Such a steady-state solution can be found where the sole driving force is the accretion of gas through the disc, balanced by the loss of turbulence primarily through shocks. This yields prescriptions for radial models of the gas surface density and $\sigma_{v, z}$. Making simplifying assumptions whereby the entire ISM is assumed to be a single star-forming region, and integrating the models over the radial extent of the disc, they derive a relationship that simplifies to SFR $\propto$ $\sigma_{v, z}${}, assuming other disc parameters are constant. The above model is an instantaneous steady-state solution that is a function of the gas accretion rate and energy loss at the time. As the gas accretion rate has decreased over epochs, this model predicts lower gas turbulence in the ISM of galaxies at low-$z$. In Section \ref{subsubsec:highz_surveys} we highlighted that velocity dispersions were $\sim$ 5 km s$^{-1}${} higher in the high-$z$ sample compared to the high-$z$ analogues sample at similar SFR. This is consistent with the velocity dispersion decreasing as a function of decreasing gas accretion rate over time. Numerous other studies have also found that gas turbulence increases as a function of $z$ \citep[][]{Kassin2012,Wisnioski2015,Johnson2018,Ubler2019}. \subsection{Combining star-formation feedback and gravity driven turbulence} \label{subsec:sff+grav} \citet{Krumholz2018} recently pointed out that star-formation feedback processes can be added as an extra source of energy to the transport equation derived in \citet{Krumholz2010}. Similar to the previously mentioned models for star-formation feedback processes, they only consider the contribution of supernovae to the gas turbulence. Their full `Transport + Feedback' model gives a SFR -- $\sigma_{v, z}${} relation of the form, \begin{multline} \text{SFR} = \frac{2}{1 + \beta} \frac{\phi_a f_\text{sf}}{\pi G Q} f_{g, Q} v_c^2 \sigma_{v, z} \\ \times \max \bigg[ \sqrt{ \frac{2 (1 + \beta)}{3 f_\text{g,P}}} \phi_\text{mp} \frac{8 \epsilon_\text{ff} f_{g, Q}}{Q}, \frac{t_{\text{orb}, \text{out}}}{t_\text{sf, max}} \bigg] \label{eq:krumholz2018eq60} \end{multline} Here $f_\text{sf}$ is the fraction of the gas in the molecular star-forming phase. $f_{g, P}$ is the fractional contribution of the gas to the self-gravitating pressure at the mid-plane. $f_{g, Q}$ is the fractional gas contribution to the Toomre-$Q$ parameter. $\beta$ describes the slope of the rotation curve ($\beta = d \ln v_c / d \ln r$). $t_\text{sf, max}$ corresponds to the maximum star-formation timescale. $t_\text{orb, out}$ corresponds to the orbital period at the edge of the star-forming dominated disc. $\phi_a$ is a constant that accounts for an offset due to observing global rather than local properties, with $\phi_a = 1$ for local galaxies. $\phi_\text{mp}= 1.4$ corresponds to the assumed ratio of total pressure to turbulent pressure at the mid-plane. 
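To make the scalings in the above relation concrete, the following minimal sketch (our illustration, not code from \citet{Krumholz2018}) evaluates the `Transport + Feedback' relation for parameters representative of a local spiral (the `local spiral' column of Table \ref{tab:krumholz_params}); $\phi_\text{mp} = 1.4$ is quoted above, while the values of $\epsilon_\text{ff}$, $Q$, and $t_\text{sf, max}$ are not stated here and are fiducial assumptions on our part.
\begin{verbatim}
import numpy as np

G = 4.30091e-6                     # gravitational constant [kpc (km/s)^2 / Msun]
KMS_PER_KPC_TO_INV_YR = 1.0227e-9  # (km/s)/kpc expressed in 1/yr

def sfr_transport_feedback(sigma_vz, f_sf=0.5, v_c=220.0, t_orb=200.0,
                           beta=0.0, f_gQ=0.5, f_gP=0.5, phi_a=1.0,
                           phi_mp=1.4, eps_ff=0.015, Q=1.0, t_sf_max=2000.0):
    """SFR [Msun/yr] as a function of sigma_vz [km/s] from the
    `Transport + Feedback' relation quoted above.

    f_sf, v_c [km/s], t_orb [Myr], beta, f_gQ = f_gP and phi_a follow the
    `local spiral' parameters; phi_mp = 1.4 is quoted in the text;
    eps_ff, Q and t_sf_max [Myr] are fiducial values assumed here.
    """
    bracket = max(np.sqrt(2.0 * (1.0 + beta) / (3.0 * f_gP)) * phi_mp
                  * 8.0 * eps_ff * f_gQ / Q,
                  t_orb / t_sf_max)
    sfr = (2.0 / (1.0 + beta)) * (phi_a * f_sf / (np.pi * G * Q)) \
          * f_gQ * v_c**2 * sigma_vz * bracket  # [Msun (km/s) / kpc]
    return sfr * KMS_PER_KPC_TO_INV_YR          # [Msun / yr]

print(sfr_transport_feedback(20.0))  # a few Msun/yr for sigma_vz ~ 20 km/s
\end{verbatim}
Under these assumptions, $\sigma_{v, z}${} $\sim 20$ km s$^{-1}${} corresponds to an SFR of a few M$_\odot$ yr$^{-1}${}, which sits in the floor region of the model tracks discussed below.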
This model results in a SFR -- $\sigma_{v, z}${} relation with a floor at \mbox{15 km s$^{-1}${} $\lesssim$ $\sigma_{v, z}${} $\lesssim$ 25 km s$^{-1}${}} (including the expansion and thermal contributions) for the lower SFR region, thus reproducing gas turbulence that is consistent with the `No Transport, Fixed $\epsilon_\text{ff}${}' model. The SFR -- $\sigma_{v, z}${} relation then transitions to SFR $\propto$ $\sigma_{v, z}${} for higher SFR, consistent with the `No Feedback' model. Another important contribution of \citet{Krumholz2018} is that after deriving the transport equation, they can use it to find the steady state solutions making various assumptions. The above model assumes that there is a contribution of star-formation driven turbulence ($\sigma_{v, \text{sf}}$) to the total turbulence ($\sigma_{v, z}${}), where \begin{multline} \sigma_{v, \text{sf}} = \frac {4 f_\text{sf} \epsilon_\text{ff} \langle p_*/ m_* \rangle} {\sqrt{3 f_{g, P}} \pi \eta\phi_\text{mp} \phi_Q \phi_\text{nt}^{3/2}} \\ \times \max \bigg[ 1, \sqrt{\frac{3 f_{g, P}}{8(1 + \beta)}} \frac{Q_\text{min} \phi_\text{mp}}{ 4 f_{g, Q} \epsilon_\text{ff}} \frac{t_\text{orb}}{t_\text{sf, max}} \bigg]. \label{eq:sigmasf} \end{multline} Here $\eta = 1.5$ is a scaling parameter for the dissipation rate. $\phi_\text{mp} = 1.4$ is the ratio of total to turbulent pressure at the midplane. $\phi_Q = 2$ is the gas to stellar $Q$ plus one. By setting $\sigma_{v, \text{sf}} = 0$, \citet{Krumholz2018} derive the `No Feedback' model. In that case, the disc must remain stable, such that $Q = 1$. \citet{Krumholz2018} derive the `No Transport, Fixed $\epsilon_\text{ff}${}' model by setting $\sigma_{v, z} = \sigma_{v, \text{sf}}$. In that case, the contribution is purely driven by the balance between gravitational collapse and star-formation driven by supernovae outwards. The model is similar to the model proposed by \citet{Ostriker2011}. The `No Transport, Fixed $Q$' model, is derived by revisiting their transport equation and looking for solutions where $Q$ is set as a constant. They derive a slightly different relation given by, \begin{equation} \text{SFR} = \frac{4 \eta \sqrt{ \phi_\text{mp} \phi_\text{nt}^3 } \phi_Q}{G Q^2} \bigg( \frac{p_*}{m_*} \bigg)^{-1} \frac{f_{g, Q}^2}{f_{g, P}} v_c^2 \sigma_{v, z}^2. \label{eq:ntfq} \end{equation} The formulation of different drivers using the same theoretical backing allows for a relatively easy comparison between the observations and different model assumptions. \subsection{Comparison with theoretical model tracks} \label{subsubsec:krumholztracks} We now compare our observations to the theoretical models described above. We compare our data to the \citet{Krumholz2018} theoretical model tracks for various galaxy groups; low-$z$ dwarfs, low-$z$ spirals, and high-$z$ galaxies. For each galaxy group we use the set of parameters suggested by \citet{Krumholz2018}, which are shown in Table \ref{tab:krumholz_params}. To account for the thermal and expansion contributions to the velocity dispersion of the \ion{H}{ii} regions, 15 km s$^{-1}${} was added in quadrature to the theoretical models. 
\begin{table} \caption{Parameter values for the \citet{Krumholz2018} theoretical model tracks used in Figure \ref{fig:vdispallkrumholz2018}.} \begin{center} \begin{tabular}{lccc} \hline Parameter & Local dwarf & Local spiral & High-$z$ \\ \hline $f_\text{sf}$ & 0.2 & 0.5 & 1.0 \\ $v_c$ (km s$^{-1}${}) & 100 & 220 & 200 \\ $t_\text{orb}$ (Myr) & 100 & 200 & 200 \\ $\beta$ & 0.5 & 0.0 & 0.0 \\ $f_{g, Q} = f_{g, P}$ & 0.9 & 0.5 & 0.7 \\ $\phi_a$ & 1 & 1 & 3 \\ SFR$_\text{min}$ (M$_\odot$ yr$^{-1}${}) & - & - & 1 \\ SFR$_\text{max}$ (M$_\odot$ yr$^{-1}${}) & 0.5 & 50 & - \\ \hline \end{tabular} \label{tab:krumholz_params} \end{center} \end{table} We find the best agreement between our data and the `Transport + Feedback' model (Figure \ref{fig:vdispallkrumholz2018}). The lower end of the SFR -- $\sigma_{v, z}${} relation in the range SFR $\in$ [10$^{-3}$, 1] M$_\odot$ yr$^{-1}${} is explained by the floor of the `Transport + Feedback' model tracks, which is driven by star-formation feedback processes. Importantly, the slight increase in $\sigma_{v, z}${} can be explained by a change in galaxy properties across the dynamic range of SFR. The upturn in the SFR -- $\sigma_{v, z}${} relation at SFR $\gtrsim$ 1 M$_\odot$ yr$^{-1}${} is also consistent with the `Transport + Feedback' model tracks. This is in contrast to the alternative models, which cannot account for the relation across the full dynamic range of SFR. \begin{figure*} \begin{center} \includegraphics[ width=\textwidth, keepaspectratio=true, trim=5mm 13mm 5mm 10mm, clip=true]{5_Drivers/krumholz_allmodels.pdf} \\ \end{center} \caption{Comparison of the intrinsic vertical velocity dispersion with the theoretical models proposed by \citet{Krumholz2018}. From left to right, we show the `Transport + Feedback', `No Feedback', `No Transport, Fixed $\epsilon_\text{ff}${}', and `No Transport, Fixed $Q$' models. The individual tracks use a set of parameters (see Table \ref{tab:krumholz_params}) that represent typical galaxies for each galaxy type. We find that our observations are the most consistent with the `Transport + Feedback' model.} \label{fig:vdispallkrumholz2018} \end{figure*} The `No Feedback' model is able to reproduce the upturn in the SFR -- $\sigma_{v, z}${} relation, but it cannot account for the lower end of the relation. At the lower end of the relation, this model assumes that $\sigma_{v, z}${} approaches the thermal and expansion contributions alone. We observed that most of our galaxies lie above the assumed \mbox{$\sigma_{v, z}${} > 15 km s$^{-1}${}} contributions from the thermal and expansion broadening. Furthermore, there is a positive correlation of $\sigma_{v, z}${} with SFR even at SFR $\lesssim$ 10 M$_\odot$ yr$^{-1}${} that the `No Feedback' model does not appear to account for. Although the `Transport + Feedback' model therefore appears to be the better model, we note that it is difficult to distinguish between the `No Feedback' and `Transport + Feedback' models, as the thermal and expansion broadening contribution is not well known. The `No Transport, Fixed $\epsilon_\text{ff}${}' model accounts well for the lower end of the SFR -- $\sigma_{v, z}${} relation in our sample. However, it predicts very little evolution in $\sigma_{v, z}${} across galaxy properties for low-$z$ galaxies. This is in contrast to the observations, which do appear to show an upturn in $\sigma_{v, z}${} for increasing SFR. 
This suggests that there must be an additional energetic input to the `No Transport, Fixed $\epsilon_\text{ff}${}' model to account for the increase in $\sigma_{v, z}${} with SFR. The `No Transport, Fixed $Q$' model provides an alternative SFR -- $\sigma_{v, z}${} relation (SFR $\propto \sigma_{v, z}^2$). The upturn in the theoretical relation qualitatively matches the observed upturn. However, the model tracks are lower than the observed $\sigma_{v, z}${}. Similar to the `No Feedback' model, increasing the thermal and expansion contributions to $\sigma_{v, z}${} would result in better agreement. The `No Transport, Fixed $Q$' model cannot account for the increased scatter in $\sigma_{v, z}${} with increasing SFR, as it predicts very little variation in $\sigma_{v, z}${} across most of our dynamic range of SFR. To distinguish between the `Transport + Feedback' and `No Transport, Fixed $Q$' models, we also compare the theoretical model tracks while varying the circular velocity (see Figure \ref{fig:vcirckrumholz2018}). We see generally good agreement between the `Transport + Feedback' model tracks and the observed velocity dispersion. The upturn in the velocity dispersion occurs approximately at the expected circular velocity. To quantify the differences, we calculate the relative residuals between the data and the models. To do this, we used the `local spiral' tracks for SFR < 10 M$_\odot$ yr$^{-1}${} and a model with intermediate parameters between the `local spiral' and `high-$z$' models ($f_\text{sf} = 0.8$, $t_\text{orb} = 200$ Myr, $\beta = 0$, $f_{g, Q} = f_{g, P} = 0.6$, $\phi_a = 2$) for \mbox{SFR $\geq$ 10 M$_\odot$ yr$^{-1}${}}. The relative residuals between the model tracks and data reveal $\Delta\sigma_{v,z}/\sigma_{v,z} = -0.02 \pm 0.32$ for the `Transport + Feedback' model compared to $\Delta\sigma_{v,z}/\sigma_{v,z} = 0.29 \pm 0.42$ for the `No Transport, Fixed $Q$' model. In particular, the relative residuals for the `No Transport, Fixed $Q$' model increase to $\Delta\sigma_{v,z}/\sigma_{v,z} = 1.16 \pm 0.52$ for SFR > 10 M$_\odot$ yr$^{-1}${}. This suggests that the `Transport + Feedback' model provides a better fit to the data than the `No Transport, Fixed $Q$' model. \begin{figure*} \begin{center} \includegraphics[ width=\textwidth, keepaspectratio=true, trim=5mm 1mm 5mm 4mm, clip=true]{5_Drivers/krumholz_vmax.pdf} \\ \end{center} \caption{Comparison of the velocity dispersions for the total galaxy sample to the `Transport + Feedback' (left) and `No Transport, Fixed $Q$' (right) models proposed by \citet{Krumholz2018}. The top two panels show the data compared to the model tracks, where the data and model tracks are colour coded by $v_{2.2, \text{tf}}$. For all other input parameters to the model tracks, the solid lines use the `local spiral' values. The dashed lines use intermediate values between the `local spiral' and `high-$z$' models; $f_\text{sf} = 0.8$, $t_\text{orb} = 200$ Myr, $\beta = 0$, $f_{g, Q} = f_{g, P} = 0.6$, $\phi_a = 2$. See Table \ref{tab:krumholz_params} for the `local spiral' and `high-$z$' parameters. The bottom two panels show the relative residuals, where $\Delta \sigma_{v, z} = \sigma_{v, z} - \sigma_{v, z, \text{model}}$. We use the models represented by the solid lines for SFR < 10 M$_\odot$ yr$^{-1}${} and the dashed lines for SFR $\geq$ 10 M$_\odot$ yr$^{-1}${}. We also show the mean and standard deviation of the relative residuals for each model. 
Both theoretical models predict an increase in $\sigma_{v, z}$ as a function of SFR; however, the `Transport + Feedback' model provides a better fit as a function of circular velocity ($v_{2.2, \text{tf}}$). } \label{fig:vcirckrumholz2018} \end{figure*} For galaxies at SFR $\gtrsim 10$ M$_\odot$ yr$^{-1}${}, we require a transition to values more representative of the high-$z$ galaxy model tracks, with higher $f_\text{sf}$, $f_{g, Q}$, and $f_{g, P}$, to explain the SFR -- $\sigma_{v, z}${} relation. This is not surprising given that those galaxies were selected from the DYNAMO sample. Many of these galaxies exhibit similar properties to those of high-$z$ galaxies \citep{Green2014,Fisher2017}, including increased molecular gas fractions \citep{Fisher2014}. A similar conclusion was reached by \citet{Ubler2019} when comparing the `Transport + Feedback' model tracks as a function of circular velocity for high-$z$ galaxies. They found that $\sim$60\% of their galaxies could be explained by varying the circular velocity alone. Increasing the molecular gas fraction ($f_\text{sf}$) and the gas gravitational contribution at the mid-plane ($f_{g, Q}$, $f_{g, P}$) also shifts the base $\sigma_{v, z}${} by a few km s$^{-1}${}. As galaxies shift to higher $f_\text{sf}$, $f_{g, Q}$, and $f_{g, P}$ as a function of SFR, this provides a mechanism to explain the increase in $\sigma_{v, z}${} seen in the SAMI Galaxy Survey (see Section \ref{subsubsec:vdispcorr}). In comparison, the `No Transport, Fixed $Q$' model predicts an increase in $\sigma_{v, z}$ as a function of SFR at a slower rate than the `Transport + Feedback' model. Comparing the model tracks when varying the circular velocity and gas properties, we find that values of $\sigma_{v, z} \gtrsim 30$ km s$^{-1}${} are not predicted unless we assume a much lower circular velocity ($v_{2.2, \text{tf}} \lesssim 50$ km s$^{-1}${}) than expected given the stellar masses of the galaxies. Increasing the molecular gas content and the gas gravitational contribution at the mid-plane, as in the high-$z$ galaxies, only shifts the model tracks to higher SFR. The above analysis suggests that the `Transport + Feedback' model provides better agreement with the data than the models dominated by star-formation feedback processes. This does not completely rule out star-formation feedback processes as the primary driver; instead, it may suggest that the assumed momentum injection due to star-formation feedback is too low. The assumed energy source is purely from isolated supernovae, with a momentum injection per unit mass of stars formed of $\langle p_*/m_* \rangle${} = 3000 km s$^{-1}${}. However, $\langle p_*/m_* \rangle${} may be significantly higher if other sources are incorporated. For example, \citet{Gentry2017} argue that $\langle p_*/m_* \rangle${} could be up to an order of magnitude higher when incorporating the effects of clustered supernovae. As such, further studies will be required to understand the energetic sources of star-formation feedback processes to incorporate in these models. As a further caveat to the above analysis, we note that the theoretical models assume that we are observing the star-forming molecular gas, rather than the ionised gas. The differences between the kinematics of the star-forming molecular gas and the ionised gas have not been fully characterised. For example, there is evidence that ionised gas may have systematically lower rotation and higher velocity dispersions compared to the molecular gas \citep{Levy2018}. 
However, there is limited research into these differences at this time, as such we make the assumption that these differences are minimal. Further research into the differences in molecular gas and ionised gas kinematics will be required. \subsection{Comparing the correlation analysis to the theoretical models} The above theoretical models (Equations \ref{eq:krumholz2018eq60} and \ref{eq:ntfq}) suggest that SFR $\propto v^2_c$, all else being set equal. Thus, we should expect a strong inverse relationship between $\sigma_{v, z}${} and $v_c$. In Figure \ref{fig:globaltrends} we showed that there is a negative correlation between velocity dispersion and rotational velocity after accounting for the stellar mass contribution. We are forced to control for the stellar mass using the Tully-Fisher relation as both $\sigma_{v, z}${} and $v_c$ increase for increasing stellar mass. As such, the rotational velocity is a significant factor in prescribing the intrinsic turbulence within the galaxy. This is consistent with the theoretical models of \citet{Krumholz2018}. However, the relationship between the turbulence and rotational velocity does not distinguish between star-formation feedback or gravitational driven mechanisms of turbulence. The proposed models also suggest a dependence of the SFR -- $\sigma_{v, z}${} relation on the mid-plane gas fraction ($f_{g, P}$), the mid-plane gas contribution to the toomre-$Q$ parameter ($f_{g, Q}$), and on the molecular to neutral gas fraction ($f_\text{sf}$). \citet{Krumholz2018} also showed that galaxy turbulence driven solely by star-formation feedback has the relation SFR $\propto \sigma_v f_{g, Q}^2 / f_{g, P}$ whereas solely driven by gravitational mechanisms has SFR $\propto \sigma_v f_{g, Q}^2$. The contribution of the gas content to the velocity dispersion is difficult to determine in our sample. We have measurements of the integrated \ion{H}{i} mass for 95 galaxies in our sample from the SAMI Galaxy Survey. We showed a slight negative but still consistent with zero correlation between the total \ion{H}{i} gas fraction ($f_{g}$) and $\sigma_{v, z}${} in Section \ref{subsubsec:vdispcorr}. A negative correlation between integrated \ion{H}{i} mass and $\sigma_{v, z}${} could be due to the expected negative correlation expected between $\sigma_{v, z}${} and $f_{g, Q}$ in the `Transport + Feedback' model. However, it could also be a result of increasing molecular gas fraction ($f_\text{sf}$) for increasing SFR and M$_*$ that are also positively correlated with $\sigma_{v, z}${}. We also note that the integrated \ion{H}{i} measurements are not the ideal measurement as we cannot determine the mid-plane \ion{H}{i} gas content within each galaxy. To accurately determine the relation between $\sigma_{v, z}${} and the gas content of the galaxy, we expect that resolved measurements of the \ion{H}{i} and \textsc{H$_2$}{} masses are required. In that way, we would be able to more precisely determine the mid-plane gravitational contribution of the galaxy gas content. We note that recent work by \citet[][]{Sun2020} has begun to shed light on the mid-plane gas contributions to the observed turbulence, although further studies will be required. \section{Conclusions} \label{sec:conclusions} We studied the intrinsic kinematic properties of the ionised gas in 383 low-$z$ star-forming galaxies. 342 galaxies were obtained from the SAMI Galaxy Survey DR2 plus another 41 were from the DYNAMO survey. 
The total galaxy sample spans a wide range of galaxy properties with \mbox{SFR $\in [10^{-3}, 10^2]$ M$_\odot$ yr$^{-1}${}}. The intrinsic gas kinematics were estimated using \textsc{Blobby3D}{}. \textsc{Blobby3D}{} is a flexible galaxy modelling approach that assumes that the galaxy is regularly rotating with spatially clumpy ionised gas distributions. In order to mitigate the effects of beam smearing and instrumental broadening, a convolution by the PSF and LSF on the underlying model is performed prior to calculating the likelihood function. We also performed a minor inclination correction for the sample from the SAMI Galaxy Survey to estimate the intrinsic vertical velocity dispersion ($\sigma_{v, z}${}) as described in Section \ref{subsubsec:vdispcorr}. The sample of galaxies from the SAMI Galaxy Survey is a representation of typical galaxies at $z \lesssim 0.1$. As such, we only used that galaxy sample to determine the typical gas kinematics in galaxies at $z \lesssim 0.1$. We find the following: \begin{itemize} \item Low velocity dispersions of \mbox{$\sigma_{v, z}${} $\in [14.1, 22.1]$ km s$^{-1}${}} for the 68\% shortest credible interval. This is $\sim 10$ km s$^{-1}${} lower than previous studies of the SAMI Galaxy Survey. The difference in results is likely driven by our beam smearing correction technique using \textsc{Blobby3D}{}, compared to the heuristic approaches applied by \citet{Zhou2017} and \citet{Johnson2018}. We also find little evidence for a significant population of galaxies with \mbox{$\sigma_{v, z}${} $\gtrsim 50$ km s$^{-1}${}} as found by \citet{Yu2019} in a sample of galaxies of similar galaxy properties from the MaNGA Survey. In contrast, our velocity dispersions are approximately consistent with other studies of nearby galaxies \citep{Moiseev2015,Epinat2008}. \item There is a significant positive correlation between $\sigma_{v, z}${} and star-formation rate measures. The greatest correlation was with $\Sigma_\text{SFR}${}. Although, the correlation is significant, the average $\sigma_{v, z}${} only increased by $\sim$ 6 km s$^{-1}${} for a dynamic range of \mbox{SFR $\in [10^{-3}, 10]$ M$_\odot$ yr$^{-1}${}}. \item We also find positive correlations of $\sigma_{v, z}${} with integrated stellar and \ion{H}{i} gas mass as well as absolute rotational velocity. \item After controlling for stellar mass, there is a negative correlation between $\sigma_{v, z}${} and rotational velocity. This is consistent with theoretical models proposed by \citet{Krumholz2018} for both star-formation feedback processes and gravitational driving mechanisms of turbulence. \item We find a weak, but still consistent with zero, negative trend between $\sigma_{v, z}${} and the integrated \ion{H}{i} gas fraction. Theoretical models have suggested that there should be a relation between the gravitational contributions of the gas at the mid-plane and $\sigma_{v, z}${}. We suspect that the signal between gas fraction and $\sigma_{v, z}${} is lost when using the integrated \ion{H}{i} mass. Accurately determining the gravitational contributions of both \ion{H}{i} and H$_2$ at the mid-plane is likely required to observe the proposed relations. \end{itemize} The combined SAMI Galaxy Survey and DYNAMO data sets span a wide range of SFR, allowing for improved comparisons to the theoretical models proposed by \citet{Krumholz2018}. The SFR -- $\sigma_{v, z}${} relation for our sample of galaxies is the most consistent with the `Transport + Feedback' model proposed by \citet{Krumholz2018}. 
We find that the SFR -- $\sigma_{v, z}${} relation can be approximately explained by a transition of increasing circular velocity and molecular gas at higher SFR. \section*{Acknowledgements} The SAMI Galaxy Survey is based on observations made at the Anglo-Australian Telescope. The Sydney-AAO Multi-object Integral field spectrograph (SAMI) was developed jointly by the University of Sydney and the Australian Astronomical Observatory. The SAMI input catalogue is based on data taken from the Sloan Digital Sky Survey, the GAMA Survey and the VST ATLAS Survey. The SAMI Galaxy Survey is supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013, the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020, and other participating institutions. The SAMI Galaxy Survey website is \url{http://sami-survey.org/}. The authors acknowledge the University of Sydney HPC service at The University of Sydney for providing HPC and database resources that have contributed to the research results reported within this paper. URL: \url{http://sydney.edu.au/research_support/} DBF and KG acknowledge support from the Australian Research Council Discovery Program grant DP160102235. DBF acknowledges support from Australian Research Council Future Fellowship FT170100376. LC is the recipient of an Australian Research Council Future Fellowship (FT180100066) funded by the Australian Government. MRK acknowledges support from Australian Research Council Future Fellowship FT180100375, and from a Humboldt Research Award from the Alexander von Humboldt Foundation. JJB acknowledges support of an Australian Research Council Future Fellowship (FT180100231). CF acknowledges funding provided by the Australian Research Council (Discovery Projects DP170100603 and Future Fellowship FT180100495), and the Australia-Germany Joint Research Cooperation Scheme (UA-DAAD). BG is the recipient of an Australian Research Council Future Fellowship (FT140101202). MSO acknowledges the funding support from the Australian Research Council through a Future Fellowship (FT140100255). JvdS is funded under JBH's ARC Laureate Fellowship (FL140100278). \bibliographystyle{mnras}
\section{Introduction} Anisotropic materials have properties that vary with respect to different spatial directions. Such a feature is preferred in many applications, for instance, when the intended use is to carry loading that requires different stiffness and strength in different directions. While many materials might have the required stiffness and strength, an anisotropic material might display a higher strength-to-weight ratio along preferential directions \cite{Cowin2007_tissue}. Bone tissue, wood, nacre, and muscles are all examples of anisotropic materials \cite{Cowin2007_tissue,song_processing_2018,gao_materials_2003}. These examples from nature follow Neumann's principle~\cite{newnham2005properties}, which states that to achieve anisotropic responses, the material microstructures should possess less geometric symmetry. Following advances in manufacturing, researchers have been able to mimic natural materials by creating mechanical metamaterials with engineered subscale microstructures, offering a variety of special and unusual properties~\cite{schwaiger_extreme_2019,sanders_optimal_2021,injeti_metamaterials_2019,Haghpanah2016,fleck_micro-architectured_2010,zhang_hierarchical_2021,meza_strong_2014,hussein_dynamics_2014}, such as negative thermal expansion \cite{wang_lightweight_2016, cabras2019micro}, negative Poisson’s ratio \cite{Bertoldi2017,morvaridi2021hierarchical,PradeepLiu2019}, vanishing shear modulus \cite{kadic_practicability_2012}, and shear-normal coupling \cite{frenzel_three-dimensional_2017,lipton_handedness_2018}. However, most existing designs of mechanical metamaterials have properties that are limited to either isotropic or orthotropic symmetries. The full spectrum of anisotropic responses is yet to be explored. For instance, it is unclear how common properties defined under isotropic or orthotropic symmetry could be generalized, and how traditionally independent properties would couple with each other, in systems with fewer or no symmetries. Among different types of unit cell symmetries for a periodic system, the triclinic symmetry is the one that yields fully anisotropic properties \cite{Cowin2007_tissue,zheng_description_1994,podesta_symmetry_2019}. The triclinic symmetry describes a periodic system whose primitive vectors are of unequal length, and the angles between these vectors are all different and need not include 90$^\circ$. Owing to their rich design space, origami structures have been a major source of inspiration for creating metamaterial microstructures with various symmetry types \cite{Filipov2015a,Zhai2018,Mukhopadhyay2020,pratapa_reprogrammable_2021,liu_bio-inspired_2021,sadoc_geometrical_1999,Waitukaitis2015,Miyazawa2021}. In the literature, some tubular origami-based metamaterials have been created to achieve triclinic symmetry \cite{Overvelde2017}. However, the disadvantage of tubular designs is that their unit cell geometry and configuration space are typically intricate, involving several parameters. Consequently, the energy landscapes of their tessellations are usually difficult to program \cite{IniguezRabago2019}, which is critical for generating reprogrammability. Conversely, in this work, we introduce a simple and effective origami pattern composed of degree-4 unit cells (consisting of four tilted panels and four corresponding creases), which is assembled into a class of triclinic mechanical metamaterials displaying reprogrammable defects, with neither rotational nor reflective symmetry. 
The aforementioned origami, named the Trimorph pattern, can be continuously folded into three distinct modes along the kinematic path and two flat-folded states, allowing the metamaterial unit cell to reconfigure itself and hence significantly change all the Bravais lattice parameters of the triclinic crystal family (three angles and three lengths). Consequently, the elastic properties of the metamaterial are tunably anisotropic, leading to an unusual Poisson's effect and shear-normal coupling in the changing triclinic frame. By tuning the fold energy parameters, we show that the unit cell has three stable states, each residing in a different mode. Zooming out from the unit cell to 1D, 2D, and 3D assemblies, we show that the resultant metamaterial can switch reversibly among different frustrated states, causing an initially homogeneous system to have intended inhomogeneity, as shown in Fig.~\ref{fig:intro}. As the first report of this triclinic metamaterial, we mainly focus on the behavior of the Trimorph unit cell and the resulting 2D tessellations. However, 3D assemblies are possible by stacking the 2D tessellations, as shown in Figs.~\ref{fig:intro}H and~\ref{fig:intro}I, whose mechanical behavior is largely inherited from their 2D parents. In summary, we investigate the Trimorph pattern through mathematical analyses, numerical simulations, and experimental validation, including both rigid and non-rigid behaviors. We propose a theory to quantify the Poisson's effect in the changing triclinic frame through the lattice Poisson's ratio. To quantify the unusual Poisson's effect experimentally, we establish both a manufacturing technique for this non-developable pattern and an experimental device named the \textit{Saint-Venant setup}. According to the Saint-Venant principle \cite{Timoshenko1951}, extra zones near the boundary of a tested sample must be excluded when evaluating the properties of the material, which leads to a need for large enough samples in conventional mechanical testing to ensure a uniform deformation in the central portion of the sample. We demonstrate that the \textit{Saint-Venant setup} alleviates the influence of unwanted boundary effects, leading to precise and reliable measurements on relatively small samples that represent the physics of the parent periodic system~\cite{misseroni2022experimental}. We further observe that the Trimorph metamaterial displays equal but opposite Poisson's ratios under stretching and bending by our generalized lattice-based definition (this was previously observed in standard origami metamaterials only when their lattice and principal Poisson's ratios coincide, i.e. under strict orthotropic symmetry conditions~\cite{PradeepLiu2019, Schenk2013a, Wei2013}). We discover the existence of line and point defects in the multistable Trimorph-based metamaterial, and study their scaling effect, which is relevant for practical applications. We identify that the point defect causes significant frustration of the metamaterial. As both the line and point defects are recoverable, we can control the location of the defects in a piece of metamaterial, and thus reprogram its frustrated state(s). As the aforementioned manufacturing technique allows precise control of the properties of each folding hinge, we are able to observe and demonstrate the defects on physical samples extracted from periodic systems. 
\begin{figure}[!ht] \centering \includegraphics[width=0.8\linewidth]{Fig-1_3x3_tesselation23.png} \caption{Trimorph origami-based triclinic metamaterials. (\textbf{A}) A piece of metamaterial based on 2D tessellation of the Trimorph origami. (\textbf{B-G}) Different self-stressed stable configurations of the metamaterial shown in A. (\textbf{H}) A 3D metamaterial assembly obtained by stacking the 2D metamaterial. (\textbf{I}) A different stable configuration of the 3D metamaterial, in analogy to state (\textbf{D}) of the 2D metamaterial. Scale bar: 20mm.} \label{fig:intro} \end{figure} \section{Triclinic Configuration Space} To understand the mechanical behaviour of the triclinic metamaterial, we start by examining the geometry of the Trimorph origami. A Trimorph unit cell consists of four rhombus panels, as shown in Figs.~\ref{fig:geom}A and~\ref{fig:geom}B. We denote the vertices as $O_1$ to $O_9$, the folding angles as $\gamma_1$ to $\gamma_4$, and the two angles between opposite creases as $\phi$ and $\psi$. The four panels are characterized by angles $\alpha$, $\delta$, and uniform side length $a$. Compared to the well-known Miura-ori and eggbox patterns \cite{Schenk2013a,Wei2013,nassar2017curvature}, the Trimorph pattern distinguishes itself by having a triclinic symmetry, which means that the bounding box of a Trimorph unit cell is composed of non-orthogonal faces, as shown in Fig.~\ref{fig:geom}C. Taking the parallelogram $O_1 O_7 O_9 O_3$ as a base, if $O_1 O_7$ is placed along the $x$-direction, $O_7 O_9$ is not parallel to the $y$-axis. The folding kinematics of a Trimorph unit cell is described by an implicit function of the opposite crease angles $\phi$ and $\psi$: \begin{align}\label{eq:fphipsi} f(\phi,\psi) = 4\cos^2\phi\cos^2\psi-4(\cos^2\phi+\cos^2\psi) + 16 \xi_1 (\cos\phi+\cos\psi) - 8 \xi_2 \cos\phi\cos\psi - \xi_3 = 0. \end{align} The coefficients are given by: \begin{align} \xi_1 &= \cos^2\alpha\cos\delta, \\ \xi_2 &= (\cos2\alpha+\cos^2\delta), \\ \xi_3 &= \sin^22\delta + \cos^2\delta(4+8\cos2\alpha). \end{align} Clearly, $f(\phi,\psi) = f(\psi,\phi)$, which reflects the algebraically symmetric roles of $\phi$ and $\psi$, as plotted in Fig.~\ref{fig:geom}D. \begin{figure}[!ht] \centering \includegraphics[width=0.8\linewidth]{Fig-2-bimorph.png} \caption{Geometry of the Trimorph unit cell. (\textbf{A}) Schematic with notation of vertices, panel angles, and folding angles. (\textbf{B}) Sketch of a Trimorph unit cell in the Cartesian frame. (\textbf{C}) The triclinic bounding box of the Trimorph unit cell. (\textbf{D}) The kinematic path that shows all configurations during folding. The colors of the panels in the insets consistently follow the color code in A and B. (\textbf{E-F}) Variations of the kinematic path due to change of the defining angles of the Trimorph pattern, i.e., $\alpha$ and $\delta$. (\textbf{G}) Relationships between the folding angles: $\gamma_1$ vs. $\gamma_2$ and $\gamma_3$ vs. $\gamma_4$. (\textbf{H}) The triclinic lattice angle $\eta_1$ vs. folding angle $\gamma_3$. (\textbf{I}) $\eta_2$ vs. $\gamma_3$ and $\eta_3$ vs. $\gamma_3$.} \label{fig:geom} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.75\linewidth]{Fig_Trimode_Table.pdf} \caption{Spherical polygon and Gauss map representations of the three characteristic modes of the Trimorph unit cell. 
This figure connects the math of spherical trigonometry and the programmable states of matter.} \label{fig:table} \end{figure} Different ranges of $\phi$ and $\psi$ lead to three modes of the Trimorph unit cell, which are: Miura mode - type I, eggbox mode, and Miura mode - type II. The eggbox mode has four mountain folds (inset (4) in Fig.~\ref{fig:geom}D). The two Miura modes have three mountain folds and one valley fold, similar to the well-known Miura-ori pattern. The two Miura modes differ: in type I, $O_5 O_6$ is a valley fold with $\pi < \gamma_3 < 2 \pi$ (insets (1), (2) in Fig. \ref{fig:geom}D), whereas in type II, $O_5 O_8$ is a valley fold with $\pi < \gamma_4 < 2 \pi$ (insets (6), (7) in Fig.~\ref{fig:geom}D; also, see Fig.~\ref{fig:table}). The three modes are topologically different in terms of their Gauss maps, as shown in Fig.~\ref{fig:table}. While the eggbox mode projects a convex spherical quadrilateral, the two Miura modes project spherical bow-ties in two different orientations. The two transition states between the three modes have degenerate creases (either $O_5 O_6$, or $O_5 O_8$) that become flat (insets (3), (5) in Fig.~\ref{fig:geom}D). The Trimorph unit cell has two flat-folded states, as shown by the insets (1), (7) in Fig.~\ref{fig:geom}D, with distinct orders of folded panels. Varying the values of design variables $\alpha$ and $\delta$, we obtain different shapes of the implicit function $f(\phi,\psi)$ (Figs.~\ref{fig:geom}E-F). When $\alpha = 90 ^ \circ$, the Trimorph pattern becomes the Barreto Mars pattern \cite{evans_rigidly_nodate} with the eggbox mode vanishing; when $\delta = 0 ^ \circ$, the Trimorph pattern degenerates to the standard eggbox pattern with the two Miura modes vanishing. These are particular cases obtained from the intrinsic geometric parametrization of the pattern. The folding angles can be derived using spherical trigonometry from $\phi$ and $\psi$ (Supporting Information). Their mutual relationships are plotted in Fig.~\ref{fig:geom}G. To describe the folding kinematics of a Trimorph unit cell, both $\phi$ and $\psi$ are needed, because using only one of the two leads to ambiguous situations. Therefore, we typically use $\gamma_3$ (or $\gamma_4$) to parametrize the kinematic path, because throughout the range of folding, the angle $\gamma_3$ (or $\gamma_4$) has a unique value for each configuration. In each mode, the Trimorph unit cell displays a distinct folding motion, which leads to different mechanical properties of the tessellated metamaterial, such as the signs of the Poisson's ratio and the shear-normal coupling coefficient. Therefore, we can regard each mode as the fundamental structure of different material phases. The triclinic bounding box of a Trimorph unit cell is characterized by the three angles: $\eta_1$, $\eta_2$, $\eta_3$, as shown in Fig.~\ref{fig:geom}C. The value of $\eta_1$, the projected angle onto the $xy$-plane, as a function of $\gamma_3$ is plotted in Fig.~\ref{fig:geom}H. For most of the folding range, $\eta_1$ stays close to $90 ^ \circ$, especially in the eggbox mode and when $\delta$ is small. Hence, it can be difficult to notice this non-orthogonality on physical models. Similarly, the variations of angles $\eta_2$ and $\eta_3$ are plotted in Fig.~\ref{fig:geom}I. They play important roles when we tessellate the pattern in three dimensions. Unlike $\eta_1$, the other two triclinic angles often deviate significantly from $90 ^ \circ$. 
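As an aside for readers who wish to trace the kinematic path numerically: since Eq.~(\ref{eq:fphipsi}) is quadratic in $\cos\psi$ once $\cos\phi$ is fixed, each admissible $\phi$ yields at most two branches of $\psi$. The short Python sketch below is purely illustrative (it is not part of our analysis codes); the function names and the sample values $\alpha=70^\circ$, $\delta=30^\circ$ are our own choices for demonstration.

\begin{verbatim}
import numpy as np

def xi_coeffs(alpha, delta):
    # Coefficients of the implicit relation f(phi, psi) = 0; angles in radians.
    xi1 = np.cos(alpha)**2 * np.cos(delta)
    xi2 = np.cos(2*alpha) + np.cos(delta)**2
    xi3 = np.sin(2*delta)**2 + np.cos(delta)**2 * (4 + 8*np.cos(2*alpha))
    return xi1, xi2, xi3

def psi_branches(phi, alpha, delta):
    # With u = cos(phi) and v = cos(psi), f(phi, psi) = 0 reads
    #   (4u^2 - 4) v^2 + (16 xi1 - 8 xi2 u) v + (16 xi1 u - 4u^2 - xi3) = 0,
    # a quadratic in v; keep only roots with |v| <= 1.
    xi1, xi2, xi3 = xi_coeffs(alpha, delta)
    u = np.cos(phi)
    a, b, c = 4*u**2 - 4, 16*xi1 - 8*xi2*u, 16*xi1*u - 4*u**2 - xi3
    disc = b**2 - 4*a*c
    if disc < 0:
        return []
    roots = [(-b + s*np.sqrt(disc)) / (2*a) for s in (1.0, -1.0)]
    return [float(np.arccos(v)) for v in roots if -1.0 <= v <= 1.0]

# Example sweep (alpha and delta are arbitrary sample values, not a specific design):
alpha, delta = np.radians(70.0), np.radians(30.0)
path = [(phi, psi) for phi in np.linspace(0.05, np.pi - 0.05, 400)
        for psi in psi_branches(phi, alpha, delta)]
\end{verbatim}

Sweeping $\phi$ in this way should trace out the closed kinematic curve of Fig.~\ref{fig:geom}D, with the two branches meeting at the transition states.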
\section{Results} \subsection{Geometric Mechanics of the 2D Assembly} As the system folds across various modes, its properties vary significantly at each folded state. The geometrically dependent mechanics of the Trimorph metamaterial can be captured through the linearized response at an arbitrary folded state. In this work, we mainly discuss two geometry-induced mechanical properties: (1) the in-plane stretching and out-of-plane bending responses of the Trimorph metamaterial, which are characterized by the corresponding Poisson's ratios; (2) the shear-normal coupling effect, which is characterized by the shear coupling coefficient, defined as the ratio of the shear strain to a normal strain. \begin{figure*}[!ht] \centering \includegraphics[width=0.90\linewidth]{Fig_Exp_setups.pdf} \caption{The experimental setup for characterizing the mechanical properties of the Trimorph assembly. (\textbf{A}) Photo and zoom-in details of the \textit{Saint-Venant setup}. (\textbf{B}-\textbf{C}) Design of the \textit{Saint-Venant setup} in lateral and top views, respectively. (\textbf{D}) A snapshot of a sample under testing in the \textit{Saint-Venant setup}. Scale bar: 20mm. (\textbf{E}-\textbf{H}) The photo, design, and sample under testing of the \textit{basic setup}, which is often used in conventional mechanical testing. The non-uniform transverse deformation caused by the \textit{basic setup} reduces the accuracy of the experimental measurements and of the resulting Poisson's ratio. Scale bar: 20mm.} \label{fig:mech_setup} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=0.90\linewidth]{Fig_Exp_new_setup.pdf} \caption{Geometric mechanics of the Trimorph origami-based assembly (2D). (\textbf{A}-\textbf{B}) The lattice Poisson's ratio (LPR) $\nu_{WL}$ vs. average unit cell length $W$, measured in tension and compression tests, respectively. The same sample is tested three times, and the results are shown by different markers. The evaluated coefficients of determination $R^2=0.984\pm0.007$ (tension) and $R^2=0.982\pm0.003$ (compression) indicate an excellent agreement between theory and experiments. (\textbf{C}-\textbf{D}) The shear-normal coupling coefficient $\zeta$ vs. average unit cell length $W$. In this case, $R^2=0.812\pm0.059$ (tension) and $R^2=0.78\pm0.070$ (compression). (\textbf{E}-\textbf{F}) Nonlinear mechanics behavior through load-displacement diagram. The displacement is defined as the total extension of the entire sample, as illustrated in the insets. Here, $R^2=0.90\pm0.021$ (tension) and $R^2=0.92\pm0.020$ (compression).} \label{fig:mech_data} \end{figure*} We consider a 2D tessellation of the Trimorph unit cell with lattice vectors $\mathbf{W}$ ($O_1O_7$) and $\mathbf{L}$ ($O_7O_9$). Uniform folding of all the unit cells in a tessellation results in in-plane strains of the Trimorph metamaterial. Typically, for isotropic or orthotropic materials, such deformation is characterized by Poisson's ratio, which can be defined as the negative ratio of instantaneous infinitesimal strains along two orthogonal directions \cite{Wei2013,PradeepLiu2019}. For the triclinic Trimorph metamaterial, we define a lattice Poisson's ratio (LPR) to characterize its in-plane deformation, which is defined as the negative ratio of the normal, or extensional, strains along the two lattice directions (i.e. the $\mathbf{L}$ and $\mathbf{W}$ directions). 
Mathematically, this ratio relates the relative differential changes of the angles $\phi$ and $\psi$, and is given by \begin{equation} \label{eq:PR1} \nu_{WL} =-\frac{\varepsilon_L}{\varepsilon_W}=- \frac{\mathrm{d} L / L}{\mathrm{d} W / W} = -\frac{\tan(\psi/2)}{\tan(\phi/2)}\bigg( \frac{\mathrm{d}\phi}{\mathrm{d}\psi}\bigg) \,. \end{equation} Due to the single-degree-of-freedom nature of the system, we have $\nu_{LW}=1/\nu_{WL}$. As can be noted from Fig.~\ref{fig:geom}D, the slope of the curve (given by the ratio $\mathrm{d}\phi/\mathrm{d}\psi$) is negative for the eggbox mode and positive for the two Miura modes. Therefore, from Eqn.~(\ref{eq:PR1}), the stretching Poisson's ratio is positive for the eggbox mode and negative for the Miura modes. By traversing through the complete kinematic path, the Trimorph pattern takes on values for the lattice Poisson's ratio from the entire set of real numbers, hence displaying reversible auxeticity. Taking the total differential of Eqn.~(\ref{eq:fphipsi}), a closed-form expression for the in-plane stretching Poisson's ratio can be derived as (Supporting Information): \begin{equation} \label{eq:PR2} \nu_{WL} =\frac{\sin^2(\psi/2)}{\sin^2(\phi/2)}\bigg[\frac{\xi_2\cos\phi-2\xi_1+\sin^2\phi\cos\psi}{\xi_2\cos\psi-2\xi_1+\sin^2\psi\cos\phi}\bigg] \,. \end{equation} This expression indicates that the Poisson's ratio for the Trimorph metamaterial is purely a geometric quantity depending only on $\alpha$, $\delta$, and the folded state, independent of the length scale as well as the constituent material of the system. Contrasting the in-plane stretching behaviour, out-of-plane bending of the Trimorph metamaterial requires the panels to undergo non-rigid deformation that simultaneously induces curvatures along the lattice directions. The geometry of the unit cell that corresponds to bending of the system is obtained by imposing quasi-periodicity and frame constraints (Supporting Information). The out-of-plane deformation response is then characterized by the bending-induced lattice Poisson's ratio, which is defined as the negative of the ratio of normal curvatures along the $\mathbf{W}$ and $\mathbf{L}$ directions in the bent configuration. For conventional continuum materials, the stretching-induced and bending-induced Poisson's ratios yield the same values \cite{timoshenko1982theory}. However, in line with a few studies on origami metamaterials in recent years \cite{Wei2013,nassar2017curvature,PradeepLiu2019}, we also find that the Trimorph metamaterial satisfies the property that the Poisson's ratios in bending and stretching are equal in magnitude but opposite in sign. Since the primitive vectors of the triclinic metamaterial are non-orthogonal, the Poisson effects discussed above deviate from the conventional definition of Poisson's ratios. To address this aspect, we also study the conventional Poisson's ratios along principal directions. Specifically, we define the stretching Poisson's ratio as the negative of the ratio of principal strains, and the bending Poisson's ratio as the negative of the ratio of principal curvatures, which result in evaluations measured along orthogonal directions. We find that the Poisson's ratios defined along the principal directions and the lattice directions are almost the same. Interestingly, however, the principal Poisson's ratios in bending and stretching are not exactly equal and opposite (Supporting Information). 
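For completeness, Eqn.~(\ref{eq:PR2}) can be evaluated directly at any folded state. The following minimal Python sketch is illustrative only (it is not part of our analysis codes); it assumes that the pair $(\phi,\psi)$ already lies on the kinematic path of Eqn.~(\ref{eq:fphipsi}) and that all angles are given in radians.

\begin{verbatim}
import numpy as np

def nu_WL(phi, psi, alpha, delta):
    # In-plane stretching lattice Poisson's ratio, closed form of eq:PR2.
    xi1 = np.cos(alpha)**2 * np.cos(delta)
    xi2 = np.cos(2*alpha) + np.cos(delta)**2
    num = xi2*np.cos(phi) - 2*xi1 + np.sin(phi)**2 * np.cos(psi)
    den = xi2*np.cos(psi) - 2*xi1 + np.sin(psi)**2 * np.cos(phi)
    return (np.sin(psi/2)**2 / np.sin(phi/2)**2) * (num / den)
\end{verbatim}

By the sign flip discussed above, the bending-induced lattice Poisson's ratio at the same folded state is simply the negative of this value, and $\nu_{LW}$ follows as the reciprocal.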
An interesting byproduct of the non-orthogonal primitive vectors is the shear-normal coupling effect, which relates the shear strain with normal strains. Such an effect is useful in some mechanical devices, where the metamaterial is used to transform forces and motions, as a scale-free alternative to traditional mechanisms \cite{frenzel_three-dimensional_2017,lipton_handedness_2018}. A coupling coefficient $\zeta$ is defined to characterize this effect. Denoting $\varepsilon_{WL}$ as the half shear strain induced by normal strain $\varepsilon_W$, we obtain (Supporting Information): \begin{equation} \label{eq:shearnormal} \zeta = -\frac{2 \varepsilon_{WL}}{\varepsilon_W} = 2 \cot{\eta_1} \,, \end{equation} with \begin{equation} \cos{\eta_1} = \frac{\cos{\alpha} (\cos{\delta} - 1)}{2 \sin{(\phi/2)} \sin{(\psi/2)}} . \end{equation} In the eggbox mode, $\zeta$ stays close to zero, implying a nearly orthotropic symmetry of the Trimorph metamaterial. To verify the reversible auxeticity and shear-normal coupling of the Trimorph metamaterial, we perform uniaxial tension and compression tests on a physical prototype composed of $7\times4$ unit cells, and track the deformations of a sub-region as shown in Fig.~\ref{fig:mech_setup}. For such experiments, we create a new experimental setup, the \textit{Saint-Venant setup} (Fig. \ref{fig:mech_setup}A-D), to alleviate the influence of artificial boundary effects in the traditional setup (Fig. \ref{fig:mech_setup}E-H) that lead to inaccurate measurements (Fig. S12). Compared to the traditional setup \cite{Liu2020}, where the sample is clamped by two smooth plates, in the \textit{Saint-Venant setup}, the sample is constrained by a linear slide system that comprises several sliders inserted into a rail, namely the \textit{Saint-Venant fixture}, which allows for a completely free sample deployment. By eliminating the negative impact of the dog-bone shape on the measurement of Poisson's ratio, the \textit{Saint-Venant fixture} notably improves the agreement between experiments and theory, as plotted in Fig. \ref{fig:mech_data}. In summary, the \textit{Saint-Venant setup} permits the testing of relatively small samples, which are reliable in the sense of representing a true periodic system without violating the underlying theoretical hypothesis. According to Fig.~\ref{fig:mech_data}, the experimentally measured lattice Poisson's ratio (LPR) and coupling coefficient match the theoretically predicted values, under both tensile and compression testing conditions. We successfully observed the transition of the lattice Poisson's ratio from positive to negative on the changing triclinic frame. To assess the quality of our fabrication method, we also report the load-displacement curve of the sample, which agrees with the theoretically predicted curve based on the rigid origami assumption. The derivation of the theoretical curve is elaborated upon in the Supporting Information. To assess the theoretical formulae (Poisson's ratios, shear-coupling coefficient, load vs. displacement) in predicting the observed data, we have computed the mean coefficient of determination $R^2$ and its standard deviation for all the experiments reported in Fig.~\ref{fig:mech_data} (see Supporting Information section B4 for the details). A coefficient of determination $R^2$ equal to 1 indicates the limit case of perfect agreement between theory and experiments. 
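The theoretical curves entering these comparisons are purely geometric; for instance, the coupling coefficient of Eqn.~(\ref{eq:shearnormal}) can be evaluated at any folded state with a few lines of Python. The sketch below is illustrative only (not our analysis code), and again assumes a configuration $(\phi,\psi)$ on the kinematic path with all angles in radians.

\begin{verbatim}
import numpy as np

def shear_normal_coupling(phi, psi, alpha, delta):
    # zeta = 2*cot(eta_1), with cos(eta_1) given by the expression
    # following eq:shearnormal in the text.
    cos_eta1 = np.cos(alpha) * (np.cos(delta) - 1.0) / (2*np.sin(phi/2)*np.sin(psi/2))
    eta1 = np.arccos(cos_eta1)
    return 2.0 / np.tan(eta1)
\end{verbatim}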
For all cases, the values of $R^2$ indicate a good match between our theory and the experiments, as listed in the caption of Fig. \ref{fig:mech_data}. Video recordings of the experiments performed with the \textit{Saint-Venant} setup and the \textit{basic setup} are provided as Movies S1 and S4, respectively. \subsection{Reprogrammable Frustration} \begin{figure*}[!hp] \centering \includegraphics[width=0.9\linewidth]{Fig-4-compos2.png} \caption{Multistability and reprogrammable frustration of the Trimorph origami. (\textbf{A}) Emergence of tristability. The energy contour has three tangent points with the kinematic path, which indicates three local minima of stored energy. (\textbf{B}) The elastic energy as a function of folding angle $\gamma_3$. The dashed line shows the result from numerical simulation. (\textbf{C}) Photos of the three stable states of a physical model of the Trimorph unit cell. (\textbf{D}) Representative states of a 2D Trimorph assemblage (paper model). The dashed boxes highlight the rows and columns that are ``defected''. Configuration (1*) is the homogeneous state, which marks the ground energy state of the tessellation; Configuration (2*) has one ``line defect''. Configuration (3*) is a frustrated state with two intersecting ``line defects'' and a ``point defect'' at the intersection; Configuration (4*) is another frustrated state with the ``point defect'' at a different location. Configurations (5*) and (6*) are different frustrated states derived from state (4*), each with two ``point defects''. (\textbf{E}) Mechanics setup for numerical simulation of the snapping transitions from state (1*) to (2*), and then to (3*). The dots and arrows show the degrees of freedom that are being traced in the corresponding diagrams. (\textbf{F}) Force vs. displacement curves in the transition from (1*) to (2*), and (2*) to (3*). Notice that the displacement in each diagram is measured on a different degree of freedom, and the inset on the second diagram shows an instance of snap-back. The forces and displacements are normalized. (\textbf{G}) The variation of elastic energy stored in the assemblage during the transition processes. The symbols $E_F$, $E_B$, $E_S$ denote the stored elastic energy caused by folding, bending, and stretching, respectively. (\textbf{H}) The changes of $\gamma_1$ and $\gamma_2$ of the star-marked unit cell in E, i.e. the ``point defect,'' during the transition processes, compared to the kinematic path of a rigid origami Trimorph unit cell. Scale bar: 20mm.} \label{fig:tristable} \end{figure*} The intrinsic geometry of the Trimorph origami allows for the realization of multistability. We model the stored energy $E_V$ of a Trimorph origami unit cell with torsional springs in the folding hinges as: \begin{equation} E_V = \frac{1}{2}\sum_{i=1}^{4} K_{F,i} (\gamma_i - \bar{\gamma_i}) ^ 2\,, \end{equation} where \{$K_{F,i}$\} are the rotational stiffnesses and \{$\bar{\gamma_i}$\} are the rest angles. This is a theoretical model that follows the rigid origami assumption, which assumes that the origami panels do not deform. When \{$\bar{\gamma_i}$\} do not reside on the rigid folding kinematic path (Fig.~\ref{fig:geom}G), we observe multiple minima of stored energy on the kinematic path \cite{Waitukaitis2015}. We design a tristable case for the Trimorph unit cell~\cite{li_theory_2020}, so that there is one local energy minimum in each of the three folding modes. 
The merit of having each stable state in a different folding mode is that the topological difference between modes leads to significantly different mechanical properties, and thus we can reprogram the properties of the resultant metamaterial by mechanical snapping. To simplify the design and manufacturing, we assign both $O_5 O_6$ ($\gamma_3$) and $O_5 O_8$ ($\gamma_4$) hinges to be free of rotational stiffness (i.e. $K_{F,3} = K_{F,4} = 0$). In addition, we restrict hinges $O_5 O_4$ ($\gamma_1$) and $O_5 O_2$ ($\gamma_2$) to have the same rotational stiffness (i.e. $K_{F,1} = K_{F,2}$), so that the energy contour on the $\gamma_1$ vs. $\gamma_2$ diagram is circular. Normally, such a strong simplification would not allow multistability to appear. However, the special folding kinematics of the Trimorph origami makes it possible. Examining the kinematic path of $\gamma_1$ and $\gamma_2$, as shown in Fig.~\ref{fig:tristable}A, we can assign $\bar{\gamma_1}$ and $\bar{\gamma_2}$ at a central point such that the circular energy contour is tangent to the kinematic path at three points. Due to the symmetry of the kinematic path, $(\bar{\gamma_1}, \bar{\gamma_2})$ must reside on the symmetry axis of the path. Therefore, the two energy minima (1') and (3') become symmetric, lying within Miura mode type-I and Miura mode type-II, respectively. The other energy minimum (2') in the eggbox mode occurs at the special configuration where $\gamma_1 = \gamma_2$. The change of stored energy in the system is plotted in Fig.~\ref{fig:tristable}B with respect to $\gamma_3$. We note that the tristable unit cell is in a self-stressed state, such that the system never rests at a zero-energy state, which can be seen from the non-zero base energy in Fig. \ref{fig:tristable}B. We can clearly identify three local minima, at the configurations (1'), (2'), and (3'), which are the tangent points in Fig.~\ref{fig:tristable}A. The peaks of energy occur at configurations (4'), (5'), (6'), and (7'), among which, (5') and (6'), and (4') and (7'), share the same stored energy ($E_V$). We stress that although configurations (4') and (7') are represented at the same point on the kinematic path, they are not the same, as the vertex is flat-folded with different panel orderings. To study the transition from one stable configuration to another, we conduct nonlinear structural analyses using the bar-and-hinge model (Movie S3), and consider non-rigid deformations of the panels, i.e. non-rigid origami \cite{Liu2017}. The numerical implementation is detailed in the Methods and Materials. In the numerical simulation, we apply force to push the Trimorph unit cell from the stable configuration (2') to (1'). Because of symmetry, we only perform the simulation for the (2') to (1') transition. The stored energy during the snap-through process agrees well with the analytical curve, as shown in Fig.~\ref{fig:tristable}B. Overall, the non-rigid numerical model is slightly more compliant than the theoretical rigid origami model. To validate our theory, we fabricate physical models (Movie S2). We first make a unit cell comprising four rigid panels joined together by four hinges, two free and two elastic, as shown in Fig.~\ref{fig:tristable}C. Details about the fabrication are elaborated upon in the Methods section and the Supporting Information. We observed three stable configurations with the physical model, two Miura modes and one eggbox mode (Fig. \ref{fig:tristable}C and Movie S2). 
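To make the energy landscape of this simplified design concrete, note that with $K_{F,3}=K_{F,4}=0$ and $K_{F,1}=K_{F,2}=K_F$ the stored energy reduces to $E_V = \tfrac{1}{2}K_F\big[(\gamma_1-\bar{\gamma_1})^2+(\gamma_2-\bar{\gamma_2})^2\big]$, i.e. circular contours in the $(\gamma_1,\gamma_2)$ plane centred at the rest angles. The minimal Python sketch below shows how one could scan this energy along a sampled kinematic path; the sampled path, the rest angles, and the stiffness value are placeholders for illustration, not the fabricated design.

\begin{verbatim}
import numpy as np

def fold_energy(gamma1, gamma2, gamma1_bar, gamma2_bar, K_F=1.0):
    # Simplified Trimorph fold energy: only gamma_1 and gamma_2 carry stiffness.
    return 0.5 * K_F * ((gamma1 - gamma1_bar)**2 + (gamma2 - gamma2_bar)**2)

def local_minima(path, gamma1_bar, gamma2_bar, K_F=1.0):
    # path: sequence of (gamma1, gamma2) pairs ordered along the kinematic path.
    # Returns indices of interior local minima of the stored energy; three such
    # minima signal the tristable design.
    E = np.array([fold_energy(g1, g2, gamma1_bar, gamma2_bar, K_F)
                  for g1, g2 in path])
    return [i for i in range(1, len(E) - 1) if E[i] < E[i-1] and E[i] < E[i+1]]
\end{verbatim}

With rest angles placed on the symmetry axis of the path, such a scan should return three interior minima, one per folding mode.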
When the tristable unit cell is tessellated into a 2D assemblage, the resultant metamaterial displays multiple stable states, as shown in Fig.~\ref{fig:intro}. In the 2D tessellation, each row (a strip of unit cells along the $x$-direction) can transition between the eggbox mode and Miura mode type I, or each column (a strip of unit cells along the $y$-direction) can transition between the eggbox mode and Miura mode type II. This morphing behaviour leads to lines of irregular vertices in the tessellation, resembling a line defect from a crystallographic point of view. The Miura mode changes the primitive vectors of the metamaterial such that the regions in eggbox mode on both sides of a Miura mode strip do not share the same base plane anymore (Fig.~S6). This phenomenon also exists robustly for non-rigid origami. We display six out of many possible stable states in Fig. \ref{fig:tristable}D. Assuming rigid origami, as we have shown in the analysis of the unit cell configuration space, a unit cell cannot transition between the two Miura modes without passing through the eggbox mode. Therefore, if one row of unit cells is in Miura mode type-I, and one column of unit cells is in Miura mode type-II, their intersecting unit cell must be within these two modes at the same time, which is forbidden. However, if we consider compliant panels, ``line defects'' in rows and columns can occur simultaneously, as demonstrated by configurations (3*) to (6*). This is possible by having an intersection unit cell that involves not only energy trapped in the folding creases, but also in bent and stretched panels. That is why we need a paper-made model to show this scenario, and cannot do the same with the plastic model, which behaves nearly as rigid origami. The intersection unit cell is almost crushed and overlaid onto another unit cell, analogous to an interstitial point defect in crystals. To understand the formation of the ``point defect,'' we perform nonlinear structural analyses (Movie S3). We first simulate the process of forming a ``line defect'' in a row (x-direction), i.e., transitioning from configuration (1*) to (2*) (Fig.~\ref{fig:tristable}E). Then, based on the configuration (2*), we fold one column to its corresponding Miura mode, i.e. transitioning from configuration (2*) to (3*) (Fig.~\ref{fig:tristable}E). As shown in Fig.~\ref{fig:tristable}F, both processes display snap-through behaviour. Examining the stored energy in the system during the entire process from (1*) to (2*) to (3*), we observe from Fig.~\ref{fig:tristable}G that configuration (3*) stores significantly more energy than (1*) and (2*). This is mainly caused by the non-rigid origami deformation of the intersection unit cell, where the ``point defect'' occurs. Fig.~\ref{fig:tristable}H suggests that this unit cell is forced to deviate from its normal kinematic path into a state that significantly deforms the panels, comprising both bending and stretching (Fig.~\ref{fig:tristable}G) deformations. As shown in Fig.~\ref{fig:tristable}D, the frustration can be reprogrammed into different states. We perform extra numerical simulations to study the scaling effect of the line and point defects. In addition to the Trimorph pattern consisting of $5 \times 5$ unit cells in Fig. \ref{fig:tristable}, we have added simulations on $3 \times 3$ and $4 \times 4$ patterns. We observe that the line defects exist (without external forces) for all sample sizes, regardless of the number of unit cells. 
This is because the line defect is a linear combination of natural stable states of the unit cells. However, in our numerical study, the point defect does not appear for $3 \times 3$ and $4 \times 4$ patterns. At the point defect, the unit cell is forced into a highly deformed, frustrated state \cite{sadoc_geometrical_1999} that is not a natural stable state, storing a notable amount of elastic energy. Hence, it can only maintain its local high-energy state owing to the kinematic constraints from surrounding unit cells in a tessellation. The effectiveness of such kinematic constraints is a function of the number of unit cells in the corresponding line defects radiating from the point defect. When the constraints from surrounding unit cells are not strong enough, the point defect cannot sustain itself without external forces; it is an unfavourable frustrated state. To unfold each point defect, the unfolding order must exactly reverse the folding order. For example, if a point defect is formed by first folding a line defect in the $x$-direction and then another in the $y$-direction, this point defect can only be unfolded by first resolving the $y$-direction line defect and then the $x$-direction one. This is because the folding order of the unit cell at the point defect becomes different for different forming sequences, as seen from states (1) and (7) of Fig. \ref{fig:geom}D. Due to contact of panels, there is no feasible path to transition from (1) to (7) or vice versa, unless the pattern is unfolded through the entire folding range. In other words, the point defects can lock the pattern if one tries to resolve them in the wrong order. Instead of taking this phenomenon as a disadvantage, we believe that it may become useful for encoding hysteresis information, as mechanical memory for applications in mechanical logic/computing devices \cite{Yasuda2021_natpersp}. \section{Conclusion} The Bravais lattices (in general) and the triclinic system (in particular) offer great freedom to create origami-based architected programmable metamaterials. Owing to the folding of the origami, the resultant metamaterial can change the six lattice parameters of its triclinic geometry. This change of lattice symmetry leads to coupled normal strains and shear strain. We have demonstrated how origami can be exploited to create anisotropic and inhomogeneous metamaterials, which have properties that are functions of space, orientation, and folding state, resulting in highly tunable responses. By tailoring local folding energies, we create a metamaterial that has multiple stable states with distinct configurations, which allows encoding of various phases of matter (see Fig. \ref{fig:table}). Through geometric frustration, it transitions from an initially homogeneous tessellation to different inhomogeneous assemblages. These phenomena are verified experimentally with a standardized manufacturing procedure, showing great potential for engineering applications. Beyond the elastostatic properties considered in this paper, there are other aspects of this triclinic metamaterial system worthy of investigation. For example, material failure behaviour such as fracture patterns, elastodynamic properties such as bandgaps and wave speed, and multi-physical responses such as stimuli-responsive actuation, could be addressed in future investigations. 
\section{Experimental Section} \threesubsection{Sample Fabrication}\\ Different types of unit cells were designed i) to create the multistable 2D tessellation shown in Fig.~\ref{fig:intro}(A-G), ii) to carry out the Poisson's ratio experiments reported in Fig.~\ref{fig:mech_setup}, and iii) to realize the 3D metamaterials depicted in Fig.~\ref{fig:intro}(H,I). The multistable unit cells comprise four rigid panels, milled with a CNC milling machine from a 2 mm thick Polycarbonate sheet, joined together by four hinges: two elastic (realized by cutting a silicone rubber solid) and two free (milled from a Polypropylene sheet). The unit cells composing the 2D tessellation and the 3D metamaterial were obtained by milling a 1 mm thick Polypropylene sheet. They consist of a single piece of Polypropylene folded from its flat configuration and closed with just one bond. Please see details in the Supporting Information. The paper model reported in Fig.~S6 is made with Canson Mi-Teintes paper (Canson SAS, France), and we use a Silhouette CAMEO machine (Silhouette America Inc., Utah) to cut the perforated patterns. \threesubsection{Mechanical Characterization}\\ The reversible auxeticity of the 2D tessellation was verified using the experimental setup reported in Fig.~\ref{fig:mech_setup}A. The compression/tension experiments were performed by imposing a constant speed of 1.5 mm/s at one end of the sample with a $\mu$-strain testing machine. Four black markers (1 mm in diameter), located along the sides of a rectangular region in the middle of the sample (Fig.~\ref{fig:mech_setup}A), were used to determine the Poisson's ratio of the tessellation. The displacements of each marker were determined by a post-processing analysis of the records of the experiments. The imposed speed was carefully chosen, combining the need to ensure the quasi-static condition and the requirement to reduce the stick-and-slip phenomena between the sample and the testing Teflon platform. In particular, a higher speed would have affected the measurements with spurious inertial contributions. Please see details in the Supporting Information. \threesubsection{Numerical Simulations}\\ The numerical simulations are performed using the MERLIN software \cite{Liu2018}. The software implements the bar-and-hinge model for discretization of origami structures. We adopt the N5B8 model \cite{Filipov2017}, which discretizes each quadrilateral panel into four triangles and represents the origami behavior by bars and torsional springs, capturing three essential deformation modes: folding, panel bending, and stretching. The elastic energy stored in the bars and hinges composes the system elastic energy. The quasi-static response of the structure is then obtained by finding the stationary states of the system energy, using the Modified Generalized Displacement Control Method. It has been shown by experiments that the accuracy of the bar-and-hinge model is surprisingly good. In this work, we take the folding stiffness parameter $K_F$ to be 1/10 of the bending stiffness parameter $K_B$, which represents a typical non-rigid origami. Other input information, such as the detailed boundary conditions for the simulations in this paper, can be read from the input files to the MERLIN software (version 2), shared in the Supporting Information. 
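MERLIN itself is a MATLAB package, and we do not reproduce its implementation here. Purely as an illustration of the energy bookkeeping behind a bar-and-hinge discretization, the Python sketch below assembles stretching and torsional-spring energies from hypothetical connectivity arrays (\texttt{bars}, \texttt{hinges}) with simple quadratic springs; the actual constitutive relations and solver used in MERLIN differ.

\begin{verbatim}
import numpy as np

def bar_energy(nodes, bars, k_bar, rest_len):
    # nodes: (N, 3) array of coordinates.  Stretching energy 0.5*k*(L - L0)^2.
    E = 0.0
    for (i, j), k, L0 in zip(bars, k_bar, rest_len):
        L = np.linalg.norm(nodes[j] - nodes[i])
        E += 0.5 * k * (L - L0)**2
    return E

def dihedral(nodes, i, j, k, l):
    # Signed dihedral angle about edge (j, k), between triangles (i,j,k) and (j,k,l).
    b1, b2, b3 = nodes[j] - nodes[i], nodes[k] - nodes[j], nodes[l] - nodes[k]
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    return np.arctan2(np.dot(np.cross(n1, n2), b2 / np.linalg.norm(b2)),
                      np.dot(n1, n2))

def hinge_energy(nodes, hinges, k_hinge, rest_angle):
    # Torsional-spring energy 0.5*k*(theta - theta0)^2 for both folding creases
    # and panel-bending hinges; their stiffnesses (e.g. K_F = K_B/10) enter
    # through the k_hinge array.
    E = 0.0
    for (i, j, k, l), kh, th0 in zip(hinges, k_hinge, rest_angle):
        E += 0.5 * kh * (dihedral(nodes, i, j, k, l) - th0)**2
    return E

def total_energy(nodes, bars, k_bar, rest_len, hinges, k_hinge, rest_angle):
    return (bar_energy(nodes, bars, k_bar, rest_len)
            + hinge_energy(nodes, hinges, k_hinge, rest_angle))
\end{verbatim}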
\medskip \textbf{Supporting Information} \par Supporting Information is available from the Wiley Online Library or from the author. \medskip \textbf{Acknowledgements} \par The authors thank the support from the US National Science Foundation (NSF) through grant no. 1538830. K.L. acknowledges the support from Peking University College of Engineering. P.P.P. acknowledges the support from the Indian Institute of Technology Madras through the seed grant and the Science \& Engineering Research Board (SERB) of the Department of Science \& Technology, Government of India, through award SRG/2019/000999. D.M. is supported by the European Commission under the H2020 FET Open (``Boheme'') grant No. 863179 and by the ERC-ADG-2021-101052956-BEYOND. T.T. is supported by Japan Science and Technology Agency PRESTO JPMJPR1927. \medskip \bibliographystyle{MSP}
Due to its rich design space, origami structures have been a major source of inspiration for creating metamaterial microstructures with various symmetry types \cite{Filipov2015a,Zhai2018,Mukhopadhyay2020,pratapa_reprogrammable_2021,liu_bio-inspired_2021,sadoc_geometrical_1999,Waitukaitis2015,Miyazawa2021}. In the literature, some tubular origami-based metamaterials have been created to achieve triclinic symmetry \cite{Overvelde2017}. However, the disadvantage of tubular designs is that their unit cell geometry and configuration space are typically intricate, involving several parameters. Consequently, the energy landscapes of their tessellations are usually difficult to program \cite{IniguezRabago2019}, which is critical for generating reprogrammability. Conversely, in this work, we introduce a simple and effective origami pattern composed of degree-4 unit cells (consisting of four tilted panels and four corresponding creases), which is assembled into a class of triclinic mechanical metamaterials displaying reprogrammable defects, with neither rotational nor reflective symmetry. The aforementioned origami, named the Trimorph pattern, can be continuously folded into three distinct modes along the kinematic path and two flat-folded states, allowing the metamaterial unit cell to reconfigure itself and hence significantly change all the Bravais lattice parameters of the triclinic crystal family (three angles and three lengths). Consequently, the elastic properties of the metamaterial are tunably anisotropic, leading to unusual Poisson's effect and shear-normal coupling in the changing triclinic frame. By tuning the fold energy parameters, we can show that the unit cell has three stable states, each residing in a different mode. Zooming out from the unit cell to 1D, 2D, and 3D assemblies, we show that the resultant metamaterial can switch reversibly among different frustrated states, causing an initially homogeneous system to have intended inhomogeneity, as shown in Fig.~\ref{fig:intro}. As the first report of this triclinic metamaterial, we would mainly focus on the behavior of the Trimorph unit cell and resulting 2D tessellations. However, 3D assemblies are possible by stacking the 2D tessellations, as shown in Figs.~\ref{fig:intro}H and~\ref{fig:intro}I, whose mechanical behavior is largely inherited from their 2D parents. In summary, we investigate the Trimorph pattern through mathematical analyses, numerical simulations, and experimental validation, including both rigid and non-rigid behaviors. We propose a theory to quantify the Poisson's effect in the changing triclinic frame through the lattice Poisson's ratio. To quantify the unusual Poisson's effect experimentally, we establish both a manufacturing technique for this non-developable pattern, and an experimental device named the \textit{Saint-Venant setup}. According to the Saint-Venant principle \cite{Timoshenko1951}, extra zones near the boundary of a tested sample must be excluded when evaluating the properties of the material, which leads to a need for large enough samples in conventional mechanical testing to ensure a uniform deformation in the central portion of the sample. We demonstrate that the \textit{Saint-Venant setup} alleviates the influence of unwanted boundary effects, leading to precise and reliable measurements on relatively small samples that represent the physics of the parent periodic system~\cite{misseroni2022experimental}. 
We further observe that the Trimorph metamaterial displays equal but opposite Poisson's ratio under stretching and bending by our generalized lattice-based definition (this was previously observed in standard origami metamaterials only when their lattice and principal Poisson's ratios coincide, i.e. under strict orthotropic symmetry conditions~\cite{PradeepLiu2019, Schenk2013a, Wei2013}). We discover the existence of line and point defects in the multistable Trimorph based metamaterial, and study their scaling effect, which is relevant for actual applications. We identify that the point defect causes significant frustration of the metamaterial. As both the line and point defects are recoverable, we can control the location of the defects in a piece of metamaterial, and thus reprogram its frustrated state(s). As the aforementioned manufacturing technique allows precise control of the properties of each folding hinge, we are able to observe and demonstrate the defects on physical samples extracted from periodic systems. \begin{figure}[!ht] \centering \includegraphics[width=0.8\linewidth]{Fig-1_3x3_tesselation23.png} \caption{Trimorph origami-based triclinic metamaterials. (\textbf{A}) A piece of metamaterial based on 2D tessellation of the Trimorph origami. (\textbf{B-G}) Different self-stressed stable configurations of the metamaterial shown in A. (\textbf{H}) A 3D metamaterial assembly obtained by stacking the 2D metamaterial. (\textbf{I}) A different stable configuration of the 3D metamaterial, in analogy to state (\textbf{D}) of the 2D metamaterial. Scale bar: 20mm.} \label{fig:intro} \end{figure} \section{Triclinic Configuration Space} To understand the mechanical behaviour of the triclinic metamaterial, we start by examining the geometry of the Trimorph origami. A Trimorph unit cell consists of four rhombus panels, as shown in Figs.~\ref{fig:geom}A and~\ref{fig:geom}B. We denote the vertices as $O_1$ to $O_9$, the folding angles as $\gamma_1$ to $\gamma_4$, and the two angles between opposite creases as $\phi$ and $\psi$. The four panels are characterized by angles $\alpha$, $\delta$, and uniform side length $a$. Compared to the well-known Miura-ori and eggbox patterns \cite{Schenk2013a,Wei2013,nassar2017curvature}, the Trimorph pattern distinguishes itself by having a triclinic symmetry, which means that the bounding box of a Trimorph unit cell is composed of non-orthogonal faces, as shown in Fig.~\ref{fig:geom}C. Taking the parallelogram $O_1 O_7 O_9 O_3$ as a base, if $O_1 O_7$ is placed along the $x$-direction, $O_7 O_9$ is not parallel to the $y$-axis. The folding kinematics of a Trimorph unit cell is described by an implicit function of the opposite crease angles $\phi$ and $\psi$: \begin{align}\label{eq:fphipsi} f(\phi,\psi) = 4\cos^2\phi\cos^2\psi-4(\cos^2\phi+\cos^2\psi) + 16 \xi_1 (\cos\phi+\cos\psi) - 8 \xi_2 \cos\phi\cos\psi - \xi_3 = 0. \end{align} The coefficients are given by: \begin{align} \xi_1 &= \cos^2\alpha\cos\delta, \\ \xi_2 &= (\cos2\alpha+\cos^2\delta), \\ \xi_3 &= \sin^22\delta + \cos^2\delta(4+8\cos2\alpha). \end{align} Clearly, $f(\phi,\psi) = f(\psi,\phi)$, which reflects an algebraically symmetric role of $\phi$ and $\psi$, as plotted in Fig.~\ref{fig:geom}D. \begin{figure}[!ht] \centering \includegraphics[width=0.8\linewidth]{Fig-2-bimorph.png} \caption{Geometry of the Trimorph unit cell. (\textbf{A}) Schematic with notation of vertices, panel angles, and folding angles. (\textbf{B}) Sketch of a Trimoprh unit cell in the Cartesian frame. 
(\textbf{C}) The triclinic bounding box of the Trimorph unit cell. (\textbf{D}) The kinematic path that shows all configurations during folding. The colors of the panels in the insets consistently follows the color code in A and B. (\textbf{E-F}) Variations of the kinematic path due to change of the defining angles of the Trimorph pattern, i.e., $\alpha$ and $\delta$. (\textbf{G}) Relationships between the folding angles: $\gamma_1$ vs. $\gamma_2$ and $\gamma_3$ vs. $\gamma_4$. (\textbf{H}) The triclinic lattice angle $\eta_1$ vs. folding angle $\gamma_3$. (\textbf{I}) $\eta_2$ vs. $\gamma_3$ and $\eta_3$ vs. $\gamma_3$.} \label{fig:geom} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.75\linewidth]{Fig_Trimode_Table.pdf} \caption{Spherical polygon and Gauss map representations of the three characteristic modes of the Trimorph unit cell. This figure connects the math of spherical trigonometry and the programmable states of matter.} \label{fig:table} \end{figure} Different ranges of $\phi$ and $\psi$ lead to three modes of the Trimorph unit cell, which are: Miura mode - type I, eggbox mode, and Miura mode - type II. The eggbox mode has four mountain folds (inset (4) in Fig.~\ref{fig:geom}D). The two Miura modes have three mountain folds and one valley fold, similar to the well-known Miura-ori pattern. The two Miura modes are different as in type I, $O_5 O_6$ is a valley fold with $\pi < \gamma_3 < 2 \pi$ (insets (1), (2) in Fig. \ref{fig:geom}D); while in type II, $O_5 O_8$ is a valley fold with $\pi < \gamma_4 < 2 \pi$ (insets (6), (7) in Fig.~\ref{fig:geom}D; also, see Fig.~\ref{fig:table}). The three modes are topologically different in terms of their Gauss maps, as shown in Fig.~\ref{fig:table}. While the eggbox mode projects a convex spherical quadrilateral, the two Miura modes project spherical bow-ties in two different orientations. The two transition states between the three modes have degenerate creases (either $O_5 O_6$, or $O_5 O_8$) that become flat (insets (3), (5) in Fig.~\ref{fig:geom}D). The Trimorph unit cell has two flat folded states, as shown by the insets (1), (7) in Fig.~\ref{fig:geom}D, with distinct orders of folded panels. Varying the values of design variables $\alpha$ and $\delta$, we obtain different shapes of the implicit function $f(\phi,\psi)$ (Figs.~\ref{fig:geom}E-F). When $\alpha = 90 ^ \circ$, the Trimorph pattern becomes the Barreto Mars pattern \cite{evans_rigidly_nodate} with the eggbox mode vanishing; When $\delta = 0 ^ \circ$, the Trimorph pattern degenerates to the standard eggbox pattern with the two Miura modes vanishing. These are particular cases obtained from the intrinsic geometric parametrization of the pattern. The folding angles can be derived using spherical trigonometry from $\phi$ and $\psi$ (Supporting Information). Their mutual relationships are plotted in Fig.~\ref{fig:geom}G. To describe the folding kinematics of a Trimorph unit cell, both $\phi$ and $\psi$ are needed, because only using either one of the two leads to ambiguous situations. Therefore, we typically use $\gamma_3$ (or $\gamma_4$) to parametrize the kinematic path, because throughout the range of folding, the angle $\gamma_3$ (or $\gamma_4$) has a unique value for each configuration. In each mode, the Trimorph unit cell display distinct folding motion, which leads to different mechanical properties of the tessellated metamaterial, such as the sign of Poisson's ratio and shear-normal coupling coefficient. 
Therefore, we can regard each mode as the fundamental structure of different material phases. The triclinic bounding box of a Trimorph unit cell is characterized by the three angles: $\eta_1$, $\eta_2$, $\eta_3$, as shown in Fig.~\ref{fig:geom}C. The value of $\eta_1$, the projected angle onto the $xy$-plane, as a function of $\gamma_3$ is plotted in Fig.~\ref{fig:geom}H. For most range of folding, $\eta_1$ stays close to $90 ^ \circ$, especially in the eggbox mode and when $\delta$ is small. Hence, it can be difficult to notice this non-orthogonality on physical models. Similarly, the variation of angles $\eta_2$ and $\eta_3$ are plotted in Fig.~\ref{fig:geom}I. They play important roles when we tessellate the pattern in three dimensions. Unlike $\eta_1$, the other two triclinic angles often deviate significantly from $90 ^ \circ$. \section{Results} \subsection{Geometric Mechanics of the 2D Assembly} As the system folds across various modes, its properties vary significantly at each folded state. The geometrically dependent mechanics of the Trimorph metamaterial can be captured through the linearized response at an arbitrary folded state. In this work, we mainly discuss two geometry induced mechanical properties: (1) the in-plane stretching and out-of-plane bending responses of the Trimorph metamaterial that are characterized by the corresponding Poisson's ratios; (2) the shear-normal coupling effect that is characterized by the shear coupling coefficient defined as the ratio of the shear strain to a normal strain. \begin{figure*}[!ht] \centering \includegraphics[width=0.90\linewidth]{Fig_Exp_setups.pdf} \caption{The experimental setup for characterizing the mechanical properties of the Trimorph assembly. (\textbf{A}) Photo and zoom-in details of the \textit{Saint-Venant setup}. (\textbf{B}-\textbf{C}) Design of the \textit{Saint-Venant setup} in lateral and top views, respectively. (\textbf{D}) A snapshot of a sample under testing in the \textit{Saint-Venant setup}. Scale bar: 20mm. (\textbf{E}-\textbf{H}) The photo, design, and sample under testing of the \textit{basic setup}, which is often used in conventional mechanical testing. The non-uniform transverse deformation caused by the \textit{basic setup} reduces the accuracy of the experimental measurements and resulting Poisson's ratio. Scale bar: 20mm. } \label{fig:mech_setup} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=0.90\linewidth]{Fig_Exp_new_setup.pdf} \caption{Geometric mechanics of the Trimorph origami-based assembly (2D). (\textbf{A}-\textbf{B}) The lattice Poisson's ratio (LPR) $\nu_{WL}$ vs. average unit cell length $W$, measured in tension and compression tests, respectively. The same sample is tested three times, and the results are shown by different markers. The evaluated coefficients of determination $R^2=0.984\pm0.007$ (tension) and $R^2=0.982\pm0.003$ (compression) indicate an excellent agreement between theory and experiments. (\textbf{C}-\textbf{D}) The shear-normal coupling coefficient $\zeta$ vs. average unit cell length $W$. In this case, $R^2=0.812\pm0.059$ (tension) and $R^2=0.78\pm0.070$ (compression). (\textbf{E}-\textbf{F}) Nonlinear mechanics behavior through load-displacement diagram. The displacement is defined as the total extension of the entire sample, as illustrated in the insets. 
Here, $R^2=0.90\pm0.021$ (tension) and $R^2=0.92\pm0.020$ (compression).} \label{fig:mech_data} \end{figure*} We consider a 2D tessellation of the Trimorph unit cell with lattice vectors $\mathbf{W}$ ($O_1O_7$) and $\mathbf{L}$ ($O_7O_9$). Uniform folding of all the unit cells in a tessellation results in in-plane strains of the Trimorph metamaterial. Typically, for isotropic or orthotropic materials, such deformation is characterized by Poisson's ratio, which can be defined as the negative ratio of instantaneous infinitesimal strains along two orthogonal directions \cite{Wei2013,PradeepLiu2019}. For the triclinic Trimorph metamaterial, we define a lattice Poisson's ratio (LPR) to characterize its in-plane deformation, which is defined as the negative ratio of the normal, or extensional, strains along the two lattice directions (i.e. the $\mathbf{L}$ and $\mathbf{W}$ directions). Mathematically, this ratio relates the relative differential change of the angles $\phi$ and $\psi$, and is given by \begin{equation} \label{eq:PR1} \nu_{WL} =-\frac{\varepsilon_L}{\varepsilon_W}=- \frac{\mathrm{d} L / L}{\mathrm{d} W / W} = -\frac{\tan(\psi/2)}{\tan(\phi/2)}\bigg( \frac{\mathrm{d}\phi}{\mathrm{d}\psi}\bigg) \,. \end{equation} Due to the single degree of freedom nature of the system, we have $\nu_{LW}=1/\nu_{WL}$. As can be noted from Fig.~\ref{fig:geom}D, the slope of the curve (given by the ratio $\mathrm{d}\phi/\mathrm{d}\psi$) is negative for the eggbox mode and positive for the two Miura modes. Therefore, from Eqn.~(\ref{eq:PR1}), the stretching Poisson's ratio is positive for the eggbox mode and negative for the Miura modes. By traversing through the complete kinematic path, the Trimorph pattern takes on values for the lattice Poisson's ratio from the entire set of real numbers, hence displaying reversible auxeticity. Taking a total differentiation of Eqn.~(\ref{eq:fphipsi}), a closed-form expression for the in-plane stretching Poisson's ratio can be derived as (Supporting Information): \begin{equation} \label{eq:PR2} \nu_{WL} =\frac{\sin^2(\psi/2)}{\sin^2(\phi/2)}\bigg[\frac{\xi_2\cos\phi-2\xi_1+\sin^2\phi\cos\psi}{\xi_2\cos\psi-2\xi_1+\sin^2\psi\cos\phi}\bigg] \,. \end{equation} This expression indicates that the Poisson's ratio for the Trimorph metamaterial is purely a geometric quantity depending only on $\alpha$, $\delta$, and the folded state, independent of the length scale as well as the constituent material of the system. Contrasting the in-plane stretching behaviour, out-of-plane bending of the Trimorph metamaterial requires the panels to undergo non-rigid deformation, that simultaneously induces curvatures along the lattice directions. The geometry of the unit cell that corresponds to bending of the system is obtained by imposing quasi-periodicity and frame constraints (Supporting Information). The out-of-plane deformation response is then characterized by the bending-induced lattice Poisson's ratio, which is defined as the negative of the ratio of normal curvatures along the $\mathbf{W}$ and $\mathbf{L}$ directions in the bent configuration. For conventional continuum material, the stretching-induced and bending-induced Poisson's ratio yield same values \cite{timoshenko1982theory}. However, in line with a few studies on origami metamaterials in recent years \cite{Wei2013,nassar2017curvature,PradeepLiu2019}, we also find that the Trimorph metamaterial satisfies the property that the Poisson's ratio in bending and stretching are equal in magnitude but opposite in sign. 
Since the primitive vectors of the triclinic metamaterial are non-orthogonal, the Poisson effects discussed above deviate from the conventional definition of Poisson's ratios. To address this aspect, we also study the conventional Poisson's ratios along principal directions. Specifically, we define the stretching Poisson's ratio as the negative of the ratio of principal strains, and the bending Poisson's ratio as the negative of the ratio of principal curvatures, so that both quantities are measured along orthogonal directions. We find that the Poisson's ratios defined along the principal directions and the lattice directions are almost the same. Interestingly, however, the principal Poisson's ratios in bending and stretching are not exactly equal and opposite (Supporting Information). An interesting byproduct of the non-orthogonal primitive vectors is the shear-normal coupling effect, which relates the shear strain with normal strains. Such an effect is useful in some mechanical devices, where the metamaterial is used to transform forces and motions, as a scale-free alternative to traditional mechanisms \cite{frenzel_three-dimensional_2017,lipton_handedness_2018}. A coupling coefficient $\zeta$ is defined to characterize this effect. Denoting by $\varepsilon_{WL}$ the half shear strain induced by the normal strain $\varepsilon_W$, we obtain (Supporting Information): \begin{equation} \label{eq:shearnormal} \zeta = -\frac{2 \varepsilon_{WL}}{\varepsilon_W} = 2 \cot{\eta_1} \,, \end{equation} with \begin{equation} \cos{\eta_1} = \frac{\cos{\alpha} (\cos{\delta} - 1)}{2 \sin{(\phi/2)} \sin{(\psi/2)}} . \end{equation} In the eggbox mode, $\zeta$ stays close to zero, implying a nearly orthotropic symmetry of the Trimorph metamaterial. To verify the reversible auxeticity and shear-normal coupling of the Trimorph metamaterial, we perform uniaxial tension and compression tests on a physical prototype composed of $7\times4$ unit cells, and track the deformations of a sub-region as shown in Fig.~\ref{fig:mech_setup}. For such experiments, we create a new experimental setup, the \textit{Saint-Venant setup} (Fig. \ref{fig:mech_setup}A-D), to alleviate the influence of artificial boundary effects in the traditional setup (Fig. \ref{fig:mech_setup}E-H) that lead to inaccurate measurements (Fig. S12). Compared to the traditional setup \cite{Liu2020}, where the sample is clamped by two smooth plates, in the \textit{Saint-Venant setup} the sample is constrained by a linear slide system that comprises several sliders inserted into a rail, namely the \textit{Saint-Venant fixture}, which allows for completely free sample deployment. By eliminating the negative impact of the dog-bone shape on the measurement of Poisson's ratio, the \textit{Saint-Venant fixture} notably improves the agreement between experiments and theory, as plotted in Fig. \ref{fig:mech_data}. In summary, the \textit{Saint-Venant setup} permits the testing of relatively small samples, which are reliable in the sense of representing a true periodic system without violating the underlying theoretical hypothesis. According to Fig.~\ref{fig:mech_data}, the experimentally measured lattice Poisson's ratio (LPR) and coupling coefficient match the theoretically predicted values, under both tensile and compression testing conditions. We successfully observe the transition of the lattice Poisson's ratio from positive to negative on the changing triclinic frame.
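A minimal Python sketch of Eqn.~(\ref{eq:shearnormal}) and of the expression for $\cos\eta_1$ is given below; it simply evaluates the shear coupling coefficient from the panel angles $\alpha$, $\delta$ and the folded state $(\phi,\psi)$, and is intended only as an illustration of how $\zeta$ follows from the geometry (it is not the post-processing code used for the experimental data).

\begin{verbatim}
import numpy as np

def shear_coupling(alpha, delta, phi, psi):
    """Shear-normal coupling coefficient zeta = 2*cot(eta_1),
    with cos(eta_1) = cos(alpha)*(cos(delta) - 1) / (2*sin(phi/2)*sin(psi/2)).
    All angles in radians."""
    cos_eta1 = np.cos(alpha) * (np.cos(delta) - 1.0) / (
        2.0 * np.sin(phi / 2) * np.sin(psi / 2))
    eta1 = np.arccos(cos_eta1)
    return 2.0 / np.tan(eta1)

# For small delta, cos(delta) - 1 ~ 0, so eta_1 ~ 90 degrees and zeta ~ 0,
# consistent with the nearly orthotropic behaviour noted for the eggbox mode.
\end{verbatim}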
To assess the quality of our fabrication method, we also report the load-displacement curve of the sample, which agrees with the theoretically predicted curve based on the rigid origami assumption. The derivation of the theoretical curve is elaborated upon in the Supporting Information. To assess the theoretical formulae (Poisson's ratios, shear coupling coefficient, load vs. displacement) in predicting the observed data, we have computed the mean coefficient of determination $R^2$ and its standard deviation for all the experiments reported in Fig.~\ref{fig:mech_data} (see Supporting Information section B4 for the details). A coefficient of determination $R^2$ equal to 1 indicates the limit case of perfect agreement between theory and experiments. For all cases, the values of $R^2$ indicate a good match between our theory and the experiments, as listed in the caption of Fig. \ref{fig:mech_data}. Video recordings of the experiments performed with the \textit{Saint-Venant} setup and the \textit{basic setup} are provided as Movies S1 and S4, respectively. \subsection{Reprogrammable Frustration} \begin{figure*}[!hp] \centering \includegraphics[width=0.9\linewidth]{Fig-4-compos2.png} \caption{Multistability and reprogrammable frustration of the Trimorph origami. (\textbf{A}) Emergence of tristability. The energy contour has three tangent points with the kinematic path, which indicates three local minima of stored energy. (\textbf{B}) The elastic energy as a function of folding angle $\gamma_3$. The dashed line shows the result from numerical simulation. (\textbf{C}) Photos of the three stable states of a physical model of the Trimorph unit cell. (\textbf{D}) Representative states of a 2D Trimorph assemblage (paper model). The dashed boxes highlight the rows and columns that are ``defected''. Configuration (1*) is the homogeneous state, which marks the ground energy state of the tessellation. Configuration (2*) has one ``line defect''. Configuration (3*) is a frustrated state with two intersecting ``line defects'' and a ``point defect'' at the intersection; Configuration (4*) is another frustrated state with the ``point defect'' at a different location. Configurations (5*) and (6*) are different frustrated states derived from state (4*), each with two ``point defects''. (\textbf{E}) Mechanics setup for numerical simulation of the snapping transitions from state (1*) to (2*), and then to (3*). The dots and arrows show the degrees of freedom that are being traced in the corresponding diagrams. (\textbf{F}) Force vs. displacement curves in the transition from (1*) to (2*), and (2*) to (3*). Notice that the displacement in each diagram is measured on a different degree of freedom, and the inset on the second diagram shows an instance of snap-back. The forces and displacements are normalized. (\textbf{G}) The variation of elastic energy stored in the assemblage during the transition processes. The symbols $E_F$, $E_B$, $E_S$ denote the stored elastic energy caused by folding, bending, and stretching, respectively. (\textbf{H}) The changes of $\gamma_1$ and $\gamma_2$ of the star-marked unit cell in E, i.e. the ``point defect,'' during the transition processes, compared to the kinematic path of a rigid origami Trimorph unit cell. Scale bar: 20mm.} \label{fig:tristable} \end{figure*} The intrinsic geometry of the Trimorph origami allows for the realization of multistability.
We model the stored energy $E_V$ of a Trimorph origami unit cell with torsional springs in the folding hinges as: \begin{equation} E_V = \frac{1}{2}\sum_{i=1}^{4} K_{F,i} (\gamma_i - \bar{\gamma_i}) ^ 2\,, \end{equation} where \{$K_{F,i}$\} are the rotational stiffnesses and \{$\bar{\gamma_i}$\} are the rest angles. This theoretical model follows the rigid origami assumption, i.e. the origami panels do not deform. When \{$\bar{\gamma_i}$\} do not reside on the rigid folding kinematic path (Fig.~\ref{fig:geom}G), we observe multiple minima of stored energy on the kinematic path \cite{Waitukaitis2015}. We design a tristable case for the Trimorph unit cell~\cite{li_theory_2020}, so that there is one local energy minimum in each of the three folding modes. The merit of having each stable state in a different folding mode is that the topological difference between modes leads to significantly different mechanical properties, and thus we can reprogram the properties of the resultant metamaterial by mechanical snapping. To simplify the design and manufacturing, we assign both $O_5 O_6$ ($\gamma_3$) and $O_5 O_8$ ($\gamma_4$) hinges to be free of rotational stiffness (i.e. $K_{F,3} = K_{F,4} = 0$). In addition, we restrict hinges $O_5 O_4$ ($\gamma_1$) and $O_5 O_2$ ($\gamma_2$) to have the same rotational stiffness (i.e. $K_{F,1} = K_{F,2}$), so that the energy contour on the $\gamma_1$ vs. $\gamma_2$ diagram is circular. Normally, such a strong simplification would not allow multistability to appear. However, the special folding kinematics of the Trimorph origami makes it possible. Examining the kinematic path of $\gamma_1$ and $\gamma_2$, as shown in Fig.~\ref{fig:tristable}A, we can assign $\bar{\gamma_1}$ and $\bar{\gamma_2}$ at a central point such that the circular energy contour is tangent to the kinematic path at three points. Due to the symmetry of the kinematic path, $(\bar{\gamma_1}, \bar{\gamma_2})$ must reside on the symmetry axis of the path. Therefore, the two energy minima (1') and (3') become symmetric, lying within Miura mode type-I and Miura mode type-II, respectively. The other energy minimum (2'), in the eggbox mode, occurs at the special configuration where $\gamma_1 = \gamma_2$. The change of stored energy in the system is plotted in Fig.~\ref{fig:tristable}B with respect to $\gamma_3$. We note that the tristable unit cell is in a self-stressed state, such that the system never rests at a zero-energy state, which can be seen from the non-zero base energy in Fig. \ref{fig:tristable}B. We can clearly identify three local minima, at the configurations (1'), (2'), and (3'), which are the tangent points in Fig.~\ref{fig:tristable}A. The peaks of energy occur at configurations (4'), (5'), (6'), and (7'), among which (5') and (6'), as well as (4') and (7'), share the same stored energy ($E_V$). We stress that although configurations (4') and (7') are represented at the same point on the kinematic path, they are not the same, because the vertex is flat-folded in different orderings. To study the transition from one stable configuration to another, we conduct nonlinear structural analyses using the bar-and-hinge model (Movie S3), and consider non-rigid deformations of the panels, i.e. non-rigid origami \cite{Liu2017}. The numerical implementation is detailed in the Methods and Materials. In the numerical simulation, we apply a force to push the Trimorph unit cell from the stable configuration (2') to (1'); because of the symmetry between (1') and (3'), we only simulate the (2') to (1') transition.
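To make the analytical energy landscape concrete, the following minimal Python sketch evaluates $E_V$ along a sampled rigid-folding path and locates its local minima. It is only an illustrative sketch, not the MERLIN model used for the simulations: the path itself (the admissible quadruples $(\gamma_1,\gamma_2,\gamma_3,\gamma_4)$) is assumed to be supplied from the unit-cell kinematics discussed above, and the stiffness and rest-angle values are placeholders mirroring the simplifications $K_{F,3}=K_{F,4}=0$ and $K_{F,1}=K_{F,2}$.

\begin{verbatim}
import numpy as np

def stored_energy(gammas, K_F, gamma_rest):
    """E_V = 1/2 * sum_i K_F[i] * (gamma_i - gamma_rest[i])**2 for one folded state."""
    gammas = np.asarray(gammas, dtype=float)
    return 0.5 * np.sum(K_F * (gammas - gamma_rest) ** 2)

def local_minima_along_path(path, K_F, gamma_rest):
    """Return (indices of local minima, E_V values) along a sampled kinematic path.

    path : (n_states, 4) array of (gamma_1, ..., gamma_4), assumed to be sampled
           along the rigid-folding kinematic path of the unit cell."""
    E = np.array([stored_energy(g, K_F, gamma_rest) for g in path])
    minima = [i for i in range(1, len(E) - 1) if E[i] < E[i - 1] and E[i] < E[i + 1]]
    return minima, E

# Placeholder parameters mirroring the simplified tristable design:
# free gamma_3, gamma_4 hinges and equal stiffness on gamma_1, gamma_2.
K_F = np.array([1.0, 1.0, 0.0, 0.0])          # hypothetical stiffness values
gamma_rest = np.array([2.0, 2.0, 0.0, 0.0])   # hypothetical rest angles (rad)
\end{verbatim}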
The stored energy during the snap-through process agrees well with the analytical curve, as shown in Fig.~\ref{fig:tristable}B. Overall, the non-rigid numerical model is slightly more compliant than the theoretical rigid origami model. To validate our theory, we fabricate physical models (Movie S2). We first make a unit cell comprising four rigid panels joined together by four hinges, two free and two elastic, as shown in Fig.~\ref{fig:tristable}C. Details about the fabrication are elaborated upon in the Methods section and the Supporting Information. We observe three stable configurations with the physical model: two Miura modes and one eggbox mode (Fig. \ref{fig:tristable}C and Movie S2). When the tristable unit cell is tessellated into a 2D assemblage, the resultant metamaterial displays multiple stable states, as shown in Fig.~\ref{fig:intro}. In the 2D tessellation, each row (a strip of unit cells along the $x$-direction) can transition between the eggbox mode and Miura mode type I, or each column (a strip of unit cells along the $y$-direction) can transition between the eggbox mode and Miura mode type II. This morphing behaviour leads to lines of irregular vertices in the tessellation, resembling a line defect from a crystallographic point of view. The Miura mode changes the primitive vectors of the metamaterial such that the regions in eggbox mode on both sides of a Miura mode strip do not share the same base plane anymore (Fig.~S6). This phenomenon persists robustly also for non-rigid origami. We display six out of many possible stable states in Fig. \ref{fig:tristable}D. Assuming rigid origami, as we have shown in the analysis of the unit cell configuration space, a unit cell cannot pass from one Miura mode to the other without going through the eggbox mode. Therefore, if one row of unit cells is in Miura mode type-I and one column of unit cells is in Miura mode type-II, their intersecting unit cell would have to be in both modes at the same time, which is forbidden. However, if we consider compliant panels, ``line defects'' in rows and columns can occur simultaneously, as demonstrated by configurations (3*) to (6*). This is possible because the intersection unit cell traps energy not only in the folding creases, but also in bent and stretched panels. That is why we need a paper model to show this scenario, and cannot do the same with the plastic model, which behaves as nearly rigid origami. The intersection unit cell is almost crushed and overlaid onto another unit cell, analogous to an interstitial point defect in crystals. To understand the formation of the ``point defect,'' we perform nonlinear structural analyses (Movie S3). We first simulate the process of forming a ``line defect'' in a row (x-direction), i.e., transitioning from configuration (1*) to (2*) (Fig.~\ref{fig:tristable}E). Then, starting from configuration (2*), we fold one column to its corresponding Miura mode, i.e. transitioning from configuration (2*) to (3*) (Fig.~\ref{fig:tristable}E). As shown in Fig.~\ref{fig:tristable}F, both processes display snap-through behaviour. Examining the stored energy in the system during the entire process from (1*) to (2*) to (3*), we observe from Fig.~\ref{fig:tristable}G that configuration (3*) stores significantly more energy than (1*) and (2*). This is mainly caused by the non-rigid origami deformation of the intersection unit cell, where the ``point defect'' happens.
Fig.~\ref{fig:tristable}H suggests that this unit cell is forced to deviate from its normal kinematic path into a state that significantly deforms the panels, involving both bending and stretching deformations (Fig.~\ref{fig:tristable}G). As shown in Fig.~\ref{fig:tristable}D, the frustration can be reprogrammed into different states. We perform extra numerical simulations to study the scaling effect of the line and point defects. In addition to the Trimorph pattern consisting of $5 \times 5$ unit cells in Fig. \ref{fig:tristable}, we have added simulations on $3 \times 3$ and $4 \times 4$ patterns. We observe that the line defects exist (without external forces) for all sample sizes, regardless of the number of unit cells. This is because the line defect is a linear combination of natural stable states of the unit cells. However, in our numerical study, the point defect does not appear for $3 \times 3$ and $4 \times 4$ patterns. At the point defect, the unit cell is forced into a highly deformed, frustrated state \cite{sadoc_geometrical_1999} that is not a natural stable state, storing a notable amount of elastic energy. Hence, it can maintain its local high-energy state only owing to the kinematic constraints from surrounding unit cells in a tessellation. The effectiveness of such kinematic constraints is a function of the number of unit cells in the corresponding line defects radiating from the point defect. When the constraints from surrounding unit cells are not strong enough, the point defect cannot sustain itself without external forces: it is an unfavourable frustrated state. To unfold each point defect, the unfolding order must exactly reverse the folding order. For example, if a point defect is formed by first folding a line defect in the $x$-direction and then another in the $y$-direction, this point defect can only be unfolded by first resolving the $y$-direction line defect and then the $x$-direction. This is because the folding order of the unit cell at the point defect becomes different for different forming sequences, as seen from states (1) and (7) of Fig. \ref{fig:geom}D. Due to panel contact, there is no feasible path to transition from (1) to (7) or vice versa, unless the pattern is unfolded through the entire folding range. In other words, the point defects can lock the pattern if one tries to resolve them in the wrong order. Instead of taking this phenomenon as a disadvantage, we believe that it may become useful for encoding hysteresis information, as mechanical memory for applications in mechanical logic/computing devices \cite{Yasuda2021_natpersp}. \section{Conclusion} The Bravais lattices (in general) and the triclinic system (in particular) offer great freedom to create origami-based architected programmable metamaterials. Owing to the folding of the origami, the resultant metamaterial can change the six lattice parameters of its triclinic geometry. This change of lattice symmetry leads to coupled normal and shear strains. We have demonstrated how origami can be exploited to create anisotropic and inhomogeneous metamaterials, which have properties that are functions of space, orientation, and folding state, resulting in highly tunable responses. By tailoring local folding energies, we create a metamaterial that has multiple stable states with distinct configurations, which allows encoding of various phases of matter (see Fig. \ref{fig:table}).
Consequently, it transitions from an initially homogeneous tessellation to different inhomogeneous assemblages as a result of geometric frustration. These phenomena are verified experimentally with a standardized manufacturing procedure, showing great potential for engineering applications. Beyond the elastostatic properties considered in this paper, there are other aspects of this triclinic metamaterial system worthy of investigation. For example, material failure behaviour such as fracture patterns, elastodynamic properties such as bandgaps and wave speed, and multi-physical responses such as stimuli-responsive actuation, could be addressed in future investigations. \section{Experimental Section} \threesubsection{Sample Fabrication}\\ Different types of unit cells were designed i) to create the multistable 2D tessellation shown in Fig.~\ref{fig:intro}(A-G), ii) to carry out the Poisson's ratio experiments reported in Fig.~\ref{fig:mech_setup}, and iii) to realize the 3D metamaterials depicted in Fig.~\ref{fig:intro}(H,I). The multistable unit cells comprise 4 rigid panels, milled with a CNC milling machine from a 2 mm thick Polycarbonate sheet, joined together by 4 hinges: 2 elastic (realized by cutting a silicone rubber solid) and 2 free (milled from a Polypropylene sheet). The unit cells composing the 2D tessellation and the 3D metamaterial were obtained by milling a 1 mm thick Polypropylene sheet. They consist of a single piece of Polypropylene folded from its flat configuration and closed with just one bond. Please see details in the Supporting Information. The paper model reported in Fig.~S6 is made with Canson Mi-Teintes paper (Canson SAS, France), and we use a Silhouette CAMEO machine (Silhouette America Inc., Utah) to cut the perforated patterns. \threesubsection{Mechanical Characterization}\\ The reversible auxeticity of the 2D tessellation was verified using the experimental setup reported in Fig.~\ref{fig:mech_setup}A. The compression/tension experiments were performed by imposing a constant speed of 1.5 mm/s at one end of the sample with a $\mu$-strain testing machine. Four black markers (1 mm in diameter), located along the sides of a rectangular region in the middle of the sample (Fig.~\ref{fig:mech_setup}A), were used to determine the Poisson's ratio of the tessellation. The displacements of each marker were determined by a post-processing analysis of the recordings of the experiments. The testing speed was carefully chosen, combining the need to ensure quasi-static conditions and the requirement to reduce stick-slip phenomena between the sample and the testing Teflon platform. In particular, a higher speed would have affected the measurements with spurious inertia contributions. Please see details in the Supporting Information. \threesubsection{Numerical Simulations}\\ The numerical simulations are performed using the MERLIN software \cite{Liu2018}. The software implements the bar-and-hinge model for discretization of origami structures. We adopt the N5B8 model \cite{Filipov2017}, which discretizes each quadrilateral panel into four triangles and represents the origami behavior by bars and torsional springs, capturing three essential deformation modes: folding, panel bending, and stretching. The elastic energy stored in the bars and hinges composes the elastic energy of the system.
The quasi-static response of the structure is then obtained by finding the stationary states of the system energy, using the Modified Generalized Displacement Control Method. Experiments have shown that the accuracy of the bar-and-hinge model is surprisingly good. In this work, we take the folding stiffness parameter $K_F$ to be 1/10 of the bending stiffness parameter $K_B$, which represents a typical non-rigid origami. Other input information, such as the detailed boundary conditions for the simulations in this paper, can be read from the input files to the MERLIN software (version 2), shared in the Supporting Information. \medskip \textbf{Supporting Information} \par Supporting Information is available from the Wiley Online Library or from the author. \medskip \textbf{Acknowledgements} \par The authors acknowledge the support from the US National Science Foundation (NSF) through grant no. 1538830. K.L. acknowledges the support from Peking University College of Engineering. P.P.P. acknowledges the support from the Indian Institute of Technology Madras through the seed grant and the Science \& Engineering Research Board (SERB) of the Department of Science \& Technology, Government of India, through award SRG/2019/000999. D.M. is supported by the European Commission under the H2020 FET Open (``Boheme'') grant No. 863179 and by the ERC-ADG-2021-101052956-BEYOND. T.T. is supported by Japan Science and Technology Agency PRESTO JPMJPR1927. \medskip \bibliographystyle{MSP}
\section{Introduction - the Cantor set and regular star-polygonal attractors} A classic example of a strictly self-similar fractal that can be constructed by an Iterated Function System (IFS) is the Cantor Set \cite{Mandelbrot_1982}. Let us have the interval $E=[-1,1]$ and the contracting maps $S_1,S_2:$ $\mathbb{R}\to \mathbb{R}$, $S_1(x)=x/3-2/3$, $S_2(x)=x/3+2/3$, where $x\in E$. Also $S^k:$ $S(S^{k-1}(E))=S^k(E)$, $S^0(E)=E$, where $S(E)=S_1(E)\cup S_2(E)$. Thus, if we iterate the map $S$ infinitely many times, the result is the well-known Cantor Set; see figure \ref{fr1}. This iteration procedure can be generalised by the following theorem \cite{Falconer_1990}: \begin{Th}\label{th1} If we have $S_1,...,S_N:$ $|S_i(x)-S_i(y)|\leq c_i|x-y|$, $c_i<1$, then $\exists$ unique non-empty set $F:$ $F=\cup_{i=1}^{N}S_i(F)$, hence invariant for the map $S$, and $F=\cap_{k=1}^{\infty}S^k(E)$ \end{Th} \begin{figure}[hbp] \begin{center} \begin{picture}(140,70)(0,0) \put(0,0){ \put(0,0){\includegraphics{fpic1.pdf}} } \end{picture} \end{center} \vspace{0.0cm} \caption{Sketch of the repeated actions of the maps $S_1$ and $S_2$ on the interval $E$ that result in the Cantor Set, where the left and right arrows represent $S_1$ and $S_2$, respectively. On the right hand side of the intervals, the corresponding iterations of the map $S$ and the number of the intervals of the particular iteration are shown.}\label{fr1} \end{figure} Using a polygon as an initial set for fractal generation is a well-known technique, since the most famous strictly self-similar fractal examples, the Cantor set and the Sierpinski triangle, consist of infinitely many line-segments and triangles, respectively. In the present paper the number of the vertices of the polygon will be increased to an arbitrary number $n\in\mathbb{N}, n\geq 2$. Thus, the fractal can consist of infinitely many pentagrams, hexagons, etc. Furthermore, the building blocks will be regular $\{n/m\}$ star-polygons \cite{Coxeter_1947}, where $n\geqslant 2$ and $m\leqslant n/2$, $n\in\mathbb{N}$, and $m\in\mathbb{N}$. For our purpose we will take the unit circle and divide it into $n$ equal arcs. For example, in the case of the pentagram, we have $m=2$, which gives the $\{5/2\}$-polygon. Once we choose an $\{n/m\}$-polygon, it can be scaled by a factor of $P\in(0,1)$ with respect to each of the vertices of the polygon. This will produce $n$ new polygons similar to the initial one but scaled down by a factor of $P$. Now, if we repeat the procedure for each one of these new polygons, another $n^2$ polygons will be created that will be $P^2$ times smaller than the initial $\{n/m\}$-polygon inscribed in the unit circle. If $P$ is chosen carefully, after infinitely many contractions the result will be a strictly self-similar fractal composed of non-intersecting polygons. Thus, at the $i$-th contraction the $n$ defined contracting maps produce $n^i$ polygons, and by theorem \ref{th1}, when $i\rightarrow\infty$, we obtain infinitely many points of attraction. These points of attraction specify the attractor of the iteration procedure. This polygonal attractor is a fractal produced by the infinite contractions ($n^i$, where $i\rightarrow\infty$) of the initial polygon and it is self-similar, i.e. it is composed of infinitely many polygons similar to the initial one. The present study is focused on non-self-intersecting fractals where the scaled copies of the initial polygons osculate with each other.
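As a concrete illustration of theorem \ref{th1}, the following minimal Python sketch applies the two Cantor-set maps $S_1$ and $S_2$ to the interval $E=[-1,1]$ a finite number of times, representing $S^k(E)$ as a list of intervals. It is only an illustrative sketch of the construction in figure \ref{fr1}, not the code used for the figures in this paper.

\begin{verbatim}
def cantor_iterate(k):
    """Return the 2**k intervals of S^k(E) for E = [-1, 1],
    where S1(x) = x/3 - 2/3 and S2(x) = x/3 + 2/3."""
    intervals = [(-1.0, 1.0)]                            # S^0(E) = E
    for _ in range(k):
        new = []
        for a, b in intervals:
            new.append((a / 3 - 2 / 3, b / 3 - 2 / 3))   # image under S1
            new.append((a / 3 + 2 / 3, b / 3 + 2 / 3))   # image under S2
        intervals = new
    return intervals

# e.g. cantor_iterate(2) gives the 4 intervals of length 2/9
# appearing at the second iteration of the Cantor set construction.
\end{verbatim}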
This osculation requirement restricts the admissible values of $P$ when a Sierpinski $n$-gon \cite{Dennis_1995} is constructed for arbitrary $n$, and a formula for $P$ is derived. This important ratio is reported in two other places, \cite{Dennis_1995} and \cite{Zuhair_2009}, where in the latter the proof is omitted. Moreover, neither manuscript proves that the Hausdorff dimension of the non-self-intersecting $\infty$-gon is 1. In the present paper, an original derivation of the equation for the scaling ratio $P$ is presented. It is done in a very detailed way using simple geometric laws, which makes the result accessible even to high-school students. Also, the Hausdorff dimension of the $\infty$-gon is shown to be 1. Furthermore, universal constructions for $n$-flakes are proposed for the cases when $n$ is even or odd, and the Hausdorff dimension of the $\infty$-flake is proved to be 2. To this end, formulas for the scaling ratio and the rotation of the central polygon of the $n$-flake are derived, which to the knowledge of the author have not been reported previously. Finally, it is shown that different initial polygons may result in an identical attractor when an IFS iteration scheme is applied, and the main parameters that define the shape of the Sierpinski $n$-gon and the polyflake attractors are identified. The paper is organized as follows: in section \ref{sec2} the parameters $P$ and $m$ are introduced and an important equation for the ratio $P=P(n,m)$ is derived. In section \ref{sec3} a condition for $m$ is obtained that ensures no self-intersection of the studied class of fractals. Then, two techniques for imaging IFS attractors are introduced and several Sierpinski $n$-gons are computed together with their dimensions. In section \ref{sec4} the possibility of an additional scaling map that scales down towards the centre of the polygon is taken into account. Different constructions for the cases when $n$ is odd or even are proposed so that the resulting $n$-flake is non-self-intersecting for arbitrary $n$. Also, a few interesting examples are given and the Hausdorff dimension of the $\infty$-flake is computed. In section \ref{sec5} some well-known fractals are shown to be special cases of the fractal generation scheme presented here. It is also explained why identical attractors may originate from different star-polygons and how we can exploit this feature. \newpage \section{The parameters $P$ and $m$}\label{sec2} An important result of the present paper will be explained in this section. Here the scaling parameter $P$ will be deduced from $n$ and $m$. Therefore, $P=P(n,m)$ is a specific scaling factor for the chosen initial $\{n/m\}$-polygon, where $P$ does not depend on the diameter of the circumscribed circle. In figure \ref{fr2} a sketch of an $\{n/3\}$ star-polygon is shown, where $m=3$. Here, $A_i$ for $i=1,...,n$ are the vertices of the $\{n/3\}$ star-polygon, $O_a$ is the centre of the circumscribed circle $\cal{S_{\text{a}}}$, $M$ is the intersection point of the secants $A_1A_4$ and $A_3A_6$, $H$ is the orthogonal projection of $O_a$ on $A_1A_4$, and $L$ is the orthogonal projection of $O_a$ on $A_3A_4$. Our purpose will be to find the ratio $P=\displaystyle\frac{MA_4}{A_1A_4}$ because $MA_4$ will be a line segment of the star-polygon resulting from the scaling of the initial polygon with respect to the point $A_4$ by a factor of $P$.
\begin{figure}[t] \begin{center} \begin{picture}(140,70)(0,0) \put(0,0){ \put(0,0){\includegraphics{fpic2a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{Sketches of \{n/3\} and \{n/m\} star-polygons inscribed in $\cal{S_{\text{a}}}$ and $\cal{S_{\text{b}}}$, respectively.}\label{fr2} \end{figure} The initial polygon is circumscribed by $\cal{S_{\text{a}}}$ with centre $O_a$, thus $\measuredangle{A_3O_aA_4}=2\pi/n \Rightarrow \measuredangle{LO_aA_4}=\pi/n$ because $A_3O_a=A_4O_a$. Also, $\measuredangle{A_1O_aA_3}=4\pi/n \Rightarrow \measuredangle{A_1A_4A_3}=2\pi/n$ and $\measuredangle{A_1O_aH}=3\pi/n$, since $\cal{S_{\text{a}}}$ with centre at $O_a$ circumscribes $A_1$, $A_3$ and $A_4$. Now we can express $A_1A_4$ and $A_4M$ in terms of the radius $r$ of $\cal{S_{\text{a}}}$, $r=O_aA_i$ for $i=1,...,n$: $A_1A_4=2r\sin(3\pi/n)$. In order to deduce $A_4M$ we will first find $A_4L$. Thus, $A_4L=r\sin(\pi/n) \Rightarrow A_4M=\displaystyle\frac{A_4L}{\cos(2\pi/n)}=\displaystyle\frac{r\sin(\pi/n)}{\cos(2\pi/n)}$. Now we can substitute the values for $A_1A_4$ and $MA_4$ in order to find $P=\displaystyle\frac{MA_4}{A_1A_4}=\displaystyle\frac{\sin(\pi/n)}{2\cos(2\pi/n)\sin(3\pi/n)}$. We have just found $P(n,3)$; now let us do the same computations for any $1\leq m \leq n/2$. In figure \ref{fr2}(b) the sketch of an $\{n/m\}$ star-polygon is shown, where $A_i$ for $i=1,...,n$ are the vertices of the star-polygon and $O_b$ is the centre of the circumscribed circle $\cal{S_{\text{b}}}$ with radius $r$, $M$ is the intersection point of the secants $A_1A_{m+1}$ and $A_{m}A_{2m}$, $H$ is the orthogonal projection of $O_b$ on $A_1A_{m+1}$ and $L$ is the orthogonal projection of $O_b$ on $A_{m}A_{m+1}$. Our purpose will be to find the ratio $P=\displaystyle\frac{MA_{m+1}}{A_1A_{m+1}}$. Thus, $\measuredangle{LO_bA_{m+1}}=\pi/n$ because $A_{m}O_b=A_{m+1}O_b$, $\measuredangle{A_1O_bA_{m}}=(2m-2)\pi/n \Rightarrow \measuredangle{A_1A_{m+1}A_{m}}=(m-1)\pi/n$ and $\measuredangle{A_1O_bH}=m\pi/n$, since $\cal{S_{\text{b}}}$ with centre at $O_b$ circumscribes $A_1$, $A_{m}$ and $A_{m+1}$. Now we can deduce $A_1A_{m+1}$ and $A_{m+1}M$ by the radius $r$ of $\cal{S_{\text{b}}}$: $A_1A_{m+1}=2r\sin(m\pi/n)$. In order to deduce $A_{m+1}M$ we will first find $A_{m+1}L$. Thus, $A_{m+1}L=r\sin(\pi/n) \Rightarrow A_{m+1}M=\displaystyle\frac{A_{m+1}L}{\cos((m-1)\pi/n)}=\displaystyle\frac{r\sin(\pi/n)}{\cos((m-1)\pi/n)}$. Now we can substitute the values for $A_1A_{m+1}$ and $MA_{m+1}$ in order to find:\\ \footnotesize\begin{equation}\label{eq1} P=\displaystyle\frac{MA_{m+1}}{A_1A_{m+1}}=\displaystyle\frac{\sin(\pi/n)}{2\cos((m-1)\pi/n)\sin(m\pi/n)} \end{equation} \normalsize \begin{figure}[t] \begin{center} \begin{picture}(140,75)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic3.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS generated fractal sets made of points that lie on \{9/2\} and \{9/3\} star-polygons in subpanels (a) and (b), respectively.}\label{fr3} \end{figure} \section{Generation of fractals by using IFS}\label{sec3} The values for $P$ obtained in the previous section will be used here for the computation of self-similar fractals that are IFS attractors. These attractors will be derived by the random walk/orbit method or so-called chaos game \cite{Devaney_1990,Peitgen_2004}. First, we define a matrix that specifies how many points along the unit circle are taken into account ($n$) and what the contraction $P(n,m)$ towards those points is.
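A minimal Python sketch (offered here only as an illustration, in place of the MATLAB generator referenced in the Appendix) of how such a matrix of contracting maps can be assembled for a given $\{n/m\}$ star-polygon is shown below: each map is stored as the scaling ratio $P(n,m)$ of equation (\ref{eq1}) together with its attracting vertex on the unit circle. The convention that the generator applies $x' = v + P\,(x - v)$ to a random point $x$, and the placement of the first vertex at angle $\pi/2$, are assumptions made to match the numbers listed in Table \ref{T1}.

\begin{verbatim}
import numpy as np

def scaling_ratio(n, m):
    """Scaling ratio P(n, m) derived in the previous section."""
    return np.sin(np.pi / n) / (2 * np.cos((m - 1) * np.pi / n) * np.sin(m * np.pi / n))

def ifs_maps(n, m):
    """One row per contracting map: [P, vx, vy], with the n attracting
    vertices placed on the unit circle starting from angle pi/2."""
    P = scaling_ratio(n, m)
    angles = np.pi / 2 + 2 * np.pi * np.arange(n) / n
    return np.column_stack([np.full(n, P), np.cos(angles), np.sin(angles)])

def chaos_game(n, m, n_points=100000, seed=0):
    """Random-walk (chaos game) points on the attractor, assuming
    each map acts on a point x as x -> v + P*(x - v)."""
    rng = np.random.default_rng(seed)
    maps = ifs_maps(n, m)
    x = np.zeros(2)
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        P, vx, vy = maps[rng.integers(n)]
        v = np.array([vx, vy])
        x = v + P * (x - v)
        pts[i] = x
    return pts

# scaling_ratio(9, 2) -> 0.2831..., scaling_ratio(9, 3) -> 0.2578...,
# the contraction values appearing in the two example matrices below.
\end{verbatim}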
For example, the matrices for the fractals in figures \ref{fr3}(a) and \ref{fr3}(b) are below; see Table \ref{T1}. \begin{table} \caption{The two matrices $M_{\{9/2\}}$ and $M_{\{9/3\}}$ where each of them defines nine contracting maps (every two rows define a map), needed for the random IFS procedure, resulting in the \{9/2\} and \{9/3\} fractal attractors shown in figure \ref{fr3}(a) and \ref{fr3}(b).}\label{T1} \begin{center} \footnotesize \begin{tabular}{|p{1cm}|p{1cm}|p{1.5cm}||p{1cm}|p{1cm}|p{1.5cm}|} \hline \multicolumn{3}{|c||}{$M_{\{9/2\}}$} & \multicolumn{3}{|c|}{$M_{\{9/3\}}$} \\ \hline 0.2831 & 0 & 0.8660254 & 0.2578 & 0 & 0.8660254\\ \hline 0 & 0.2831 & -0.5 & 0 & 0.2578 & -0.5\\ \hline 0.2831 & 0 & 0.9848078 & 0.2578 & 0 & 0.9848078\\ \hline 0 & 0.2831 & 0.1736482 & 0 & 0.2578 & 0.1736482\\ \hline 0.2831 & 0 & 0.6427876 & 0.2578 & 0 & 0.6427876\\ \hline 0 & 0.2831 & 0.7660444 & 0 & 0.2578 & 0.7660444\\ \hline 0.2831 & 0 & 0 & 0.2578 & 0 & 0\\ \hline 0 & 0.2831 & 1 & 0 & 0.2578 & 1\\ \hline 0.2831 & 0 & -0.642788 & 0.2578 & 0 & -0.642788\\ \hline 0 & 0.2831 & 0.7660444 & 0 & 0.2578 & 0.7660444\\ \hline 0.2831 & 0 & -0.984808 & 0.2578 & 0 & -0.984808\\ \hline 0 & 0.2831 & 0.1736482 & 0 & 0.2578 & 0.1736482\\ \hline 0.2831 & 0 & -0.866025 & 0.2578 & 0 & -0.866025\\ \hline 0 & 0.2831 & -0.5 & 0 & 0.2578 & -0.5\\ \hline 0.2831 & 0 & -0.34202 & 0.2578 & 0 & -0.34202\\ \hline 0 & 0.2831 & -0.939693 & 0 & 0.2578 & -0.939693\\ \hline 0.2831 & 0 & 0.3420201 & 0.2578 & 0 & 0.3420201\\ \hline 0 & 0.2831 & -0.939693 & 0 & 0.2578 & -0.939693\\ \hline \end{tabular} \normalsize \end{center} \end{table} \vspace{0.4cm} This matrix is then plugged into the random generator, where the number of points that we want to map over the IFS attractor is specified; see the Appendix section \ref{app} for the MATLAB code. \subsection{Condition for non-self-intersection} In figure \ref{fr3} two examples of star-polygon fractals with initial $\{9/2\}$- and $\{9/3\}$-polygons clarify why the parameter $m$ in the ratio $\displaystyle\frac{MA_{m+1}}{A_1A_{m+1}}$ is important when non-self-intersecting fractals are desired. We would like to prevent self-intersection of the resulting sets; thus, we state the following theorem. \begin{Th}\label{V1} If we have a strictly self-similar fractal set obtained as an attractor of IFS, where $n$-attracting points lie on $S^1$, so that they are the vertices of a $\{n/m\}$ star-polygon, and where the attraction towards these points is $P=P(n,m)$ given by equation (\ref{eq1}), then this fractal set is non-self-intersecting if and only if $m\in[n/4,n/4+1]$, which uniquely defines $P$ for a given $n$. \end{Th} \begin{proof} For clarity, one must look at figure \ref{fr4}, where the scaled-down polygon towards $A_{m+1}$, self-similar to the original one, is shown in red. Although it has $9$ vertices, it must be considered as an $\{n/m\}$ star-polygon, because we will only use geometrical properties that are independent of $n$ and $m$.
For this purpose we must find the following angles: $\measuredangle{A'MA_{m+1}}$ and $\measuredangle{O_bMA_{m+1}}.$ \begin{figure} \begin{center} \begin{picture}(140,80)(0,0) \put(0,0){ \put(0,0){\includegraphics{fpic4a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{Sketch of \{n/m\} star-polygon inscribed in $\cal{S_{\text{b}}}$ together with the scaled-down polygon towards the $A_{m+1}$ vertex (in red) and its mirror image across the line $O_bL$ (in blue), which is equivalent to the scaled-down polygon towards the $A_{m}$ vertex.}\label{fr4} \end{figure} We already found that $\measuredangle{LA_{m+1}M}=(m-1)\pi/n$ and therefore\\ $\measuredangle{MA_{m+1}O_{b}}=\measuredangle{O_bA_{m+1}A_{2m+1}}=\pi/2-\pi/n-(m-1)\pi/n$ $\Rightarrow$ \\ $\Rightarrow$ $\measuredangle{LA_{m+1}A_{2m+1}}=2(\pi/2-\pi/n-(m-1)\pi/n)+(m-1)\pi/n=\pi-2\pi/n-(m-1)\pi/n$ $\Rightarrow$\\ $\Rightarrow$ $\measuredangle{LA_{m+1}A_{2m+1}}=\displaystyle\frac{\pi}{n}(n-m-1)$. On the other hand, due to the symmetry of the scaled polygon (in red), $\measuredangle{LA_{m+1}A_{2m+1}}=\measuredangle{A_{m+1}MA'}$ since $\measuredangle{A_{m+1}MN}=\measuredangle{MA_{m+1}A_{2m+1}}$ and $\measuredangle{NMA'}=\measuredangle{MA_{m+1}L}$. $\measuredangle{O_bMA_{m+1}}=\pi/2+(m-1)\pi/n$ as an exterior angle for $\triangle MLA_{m+1}$. Now, note that the scaled polygon towards the $A_{m+1}$ vertex (in red) reflected across the $O_bL$ line segment will result in the scaled polygon towards the $A_{m}$ vertex (in blue); see also figure \ref{fr2}. This is true because $O_bL$ represents an axis of symmetry for the initial polygon and it contains the point $M$, which is common to both scaled polygons towards the $A_{m}$ and $A_{m+1}$ vertices. Now we can deduce that these two scaled polygons will not intersect with each other if the vertices $A'$ and $A''$ stay together with $A_{m+1}$ on the same side with respect to the $O_bL$ axis. Then, $\measuredangle{O_bMA_{m+1}}\geq\measuredangle{A_{m+1}MA'}$ and $\measuredangle{LMA_{m+1}}\geq\measuredangle{A_{m+1}MA''}$ $\Rightarrow$\\ \vspace{-0.4cm} \footnotesize \begin{align}\label{eq2} \measuredangle{O_bMA_{m+1}}\geq\measuredangle{A_{m+1}MA'} ~~~&~~~ \measuredangle{LMA_{m+1}}\geq\measuredangle{A_{m+1}MA''}\\ \pi/2+(m-1)\pi/n\geq\pi-2\pi/n-(m-1)\pi/n ~~~&~~~ \pi/2-(m-1)\pi/n\geq (m-1)\pi/n \nonumber \\ (4m-4)/2n\geq (n-4)/2n ~~~&~~~ 1/2\geq 2(m-1)/n \nonumber \\ m\geq n/4 ~~~&~~~ n/4+1 \geq m \nonumber \end{align} \normalsize Thus, if the resulting fractal does not self-intersect, then $n/4\leq m\leq n/4+1$, which uniquely defines $m$ except when $n$ is divisible by $4$. At the same time, if the strict inequalities of equations (\ref{eq2}) hold, this ensures that both of the vertices $A'$ and $A''$ stay on the right-hand side of the $O_bL$ line segment (see figure \ref{fr4}) $\Rightarrow$ their mirror images with respect to $O_bL$ will stay on the left-hand side of $O_bL$. Otherwise, if $A'$ and $A''$ were to cross the $O_bL$ segment, ending up on its left-hand side, the two polygons would intersect with each other, and this intersection would be repeated everywhere since the resulting fractal is strictly self-similar. Finally, if $A'$ or $A''$ lies on $O_bL$ and the two scaled polygons have a common side $MA'$ or $MA''$, then one of the inequalities (\ref{eq2}) must hold with equality.
\footnotesize \begin{align}\label{eq3} P=\frac{\sin(\frac{\pi}{n})}{2\cos(\frac{(m-1)\pi}{n})\sin(\frac{m\pi}{n})} \nonumber \\ P=P_1(n,m)~ \text{for} ~n=4v~ \text{and} ~m=v ~~&~~ P=P_2(n,m)~ \text{for} ~n=4v~ \text{and} ~m=v+1 \nonumber \\ P_1=\frac{\sin(\frac{\pi}{4v})}{2\cos(\frac{(v-1)\pi}{4v})\sin(\frac{v\pi}{4v})} ~~&~~ P_2=\frac{\sin(\frac{\pi}{4v})}{2\cos(\frac{v\pi}{4v})\sin(\frac{(v+1)\pi}{4v})} \nonumber \\ P_1=\frac{\sin(\frac{\pi}{4v})}{2\cos(\frac{(v-1)\pi}{4v})\sin(\frac{\pi}{4})} ~~&~~ P_2=\frac{\sin(\frac{\pi}{4v})}{2\cos(\frac{\pi}{4})\sin(\frac{(v+1)\pi}{4v})} \nonumber \\ P_1=\frac{\sin(\frac{\pi}{4v})}{\sqrt{2}\cos(\frac{(v-1)\pi}{4v})} ~~&~~ P_2=\frac{\sin(\frac{\pi}{4v})}{\sqrt{2}\sin(\frac{(v+1)\pi}{4v})} \nonumber \\ P_1=\frac{\sin(\frac{\pi}{4v})}{\sqrt{2}\cos(\frac{v\pi}{4v})\cos(\frac{\pi}{4v})+\sqrt{2}\sin(\frac{v\pi}{4v})\sin(\frac{\pi}{4v})} ~~&~~ P_2=\frac{\sin(\frac{\pi}{4v})}{\sqrt{2}\sin(\frac{v\pi}{4v})\cos(\frac{\pi}{4v})+\sqrt{2}\cos(\frac{v\pi}{4v})\sin(\frac{\pi}{4v})} \nonumber \\ P_1=\frac{\sin(\frac{\pi}{4v})}{\cos(\frac{\pi}{4v})+\sin(\frac{\pi}{4v})} ~~&~~ P_2=\frac{\sin(\frac{\pi}{4v})}{\cos(\frac{\pi}{4v})+\sin(\frac{\pi}{4v})} \end{align} \normalsize Therefore, $n$ must be divisible by $4$, because $m\in\mathbb{N}$. Thus, when $n$ is divisible by $4$ and the scaling ratio is $P=P(n,m)$ with $m$ as in equations (\ref{eq2}), each of the scaled polygons has two common vertices with both of the adjacent scaled polygons. The case when $n$ is divisible by $4$ and $m\in[n/4,n/4+1]$ implies that equation (\ref{eq1}) produces two candidate values for $P$. We will compute those values $P(4v,v)$ and $P(4v,v+1)$, where $n=4v$ for some $v>0, v\in\mathbb{N}$; see equations (\ref{eq3}). Equations (\ref{eq3}) clearly show that $P(n,n/4)=P(n,n/4+1)$ when $4$ divides $n$, which ensures the unique definition of $P(n,m)$ when the resulting fractals are non-self-intersecting. \end{proof} \begin{Cor} The attractors of the IFS with $n$ attracting points that are the vertices of the $\{n/m\}$ star-polygons with $m=n/4$ and $m=n/4+1$, and whose scaling ratios are $P(n,n/4)$ and $P(n,n/4+1)$, respectively, are identical and composed of star-polygons that have two vertices in common with each of the adjacent polygons. \end{Cor} \begin{Rem} Note that theorem \ref{V1} is stated for the specific type of self-similar fractal sets obtained using IFS where the attracting points are the vertices of a $\{n/m\}$ star-polygon and $P$ is defined by $n$ and $m$ as given in equation (\ref{eq1}). Thus, it does not exclude other kinds of non-self-intersecting star-polygonal fractal sets constructed in a different fashion. \end{Rem} \subsection{Fractal dimensions}\label{dimensions} Theorem \ref{V1} allows us to define $F_{\{n/m\}}$ as the IFS attractor produced from an initial equilateral $\{n/m\}$-polygon with $n$ contracting maps that scale towards the $n$ vertices of the initial $\{n/m\}$-polygon with ratio $P(n,m)$. We ensured that the obtained self-similar fractal set $F_{\{n/m\}}$ is non-self-intersecting, which allows us to compute its Hausdorff dimension $dim_HF_{\{n/m\}}$ \cite{Falconer_1990} by solving the following equation: \begin{equation}\label{eq4} \sum_{i=1}^Nc_i^{dim_HF_{\{n/m\}}}=1 \end{equation} where $N$ indicates the number of similarity maps $S_i$ (see Theorem \ref{th1}) and $0<c_i<1$ are the scaling ratios of the similarities.
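When all $N$ maps share the same ratio $P$, equation (\ref{eq4}) gives $dim_HF_{\{n/m\}}=-ln(N)/ln(P)$ directly; for the flakes with a centre map considered later, the ratios differ and equation (\ref{eq4}) can be solved numerically. The following minimal Python sketch solves it by bisection, assuming only that all ratios lie in $(0,1)$; it is offered purely as an illustration of the dimension computation.

\begin{verbatim}
import numpy as np

def hausdorff_dimension(ratios, tol=1e-12):
    """Solve sum_i c_i**s = 1 for s by bisection,
    where ratios is an iterable of contraction ratios c_i in (0, 1)."""
    c = np.asarray(ratios, dtype=float)
    f = lambda s: np.sum(c ** s) - 1.0    # strictly decreasing in s
    lo, hi = 0.0, 1.0
    while f(hi) > 0:                      # expand until the root is bracketed
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Equal ratios reproduce -ln(N)/ln(P), e.g.
# hausdorff_dimension(9 * [0.2578]) ~ 1.6208, the dimension of the {9/3} attractor.
\end{verbatim}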
Since $P(2,1)=1/2$, $dim_HF_{\{2/1\}}=-ln(2)/ln(P(2,1))=1$, which means that the attractor $F_{\{2/1\}}$ is the initial line segment that connects the two vertices; one can also think of it as the Cantor set construction with scaling ratio $1/2$. For the $F_{\{9/3\}}$ in figure \ref{fr3}(b) we have $9P(9,3)^{dim_HF_{\{9/3\}}}=1$, which leads to $dim_HF_{\{9/3\}}=-ln(9)/ln(P(9,3))\approx1.6207585335597825$. The following figures \ref{fr5}, \ref{fr6}, \ref{fr7} and \ref{fr8} show the attractors of $F_{\{n/m\}}$ for $n=3,4,5,6,7,8,10,11,12,13,14,15,16,24$ and $m\in[n/4,n/4+1]$. The dimensions of the presented fractals are computed in the examples that follow each figure. One can recognize well-known fractals in the cases $n=3,4,5,6$, but the other examples are less well known because a special ratio is needed for their construction. \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic5.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS generated fractal sets made of points that lie in $F_{\{3/1\}}$, $F_{\{4/2\}}$, $F_{\{5/2\}}$ and $F_{\{6/2\}}$ in subpanels (a), (b), (c) and (d), respectively.}\label{fr5} \end{figure} \begin{Ex}\label{ex1} We will compute the Hausdorff dimensions of the attractors shown in figure \ref{fr5}: \begin{itemize}[noitemsep, topsep=0pt] \item Figure \ref{fr5}(a) $dim_HF_{\{3/1\}}=-ln(3)/ln(P(3,1))\approx1.5849625007211563$ \item Figure \ref{fr5}(b) $dim_HF_{\{4/2\}}=-ln(4)/ln(P(4,2))=2$ \item Figure \ref{fr5}(c) $dim_HF_{\{5/2\}}=-ln(5)/ln(P(5,2))\approx1.6722759381845547$ \item Figure \ref{fr5}(d) $dim_HF_{\{6/2\}}=-ln(6)/ln(P(6,2))\approx1.6309297535714573$ \end{itemize} \end{Ex} \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic6.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS generated fractal sets made of points that lie in $F_{\{7/2\}}$, $F_{\{8/3\}}$, $F_{\{10/3\}}$ and $F_{\{11/3\}}$ in subpanels (a), (b), (c) and (d), respectively.
The IFS attractor of $F_{\{9/3\}}$ is shown in figure \ref{fr3}(b).}\label{fr6} \end{figure} \begin{Ex}\label{ex2} We will compute the Hausdorff dimensions of the attractors shown in figure \ref{fr6}: \begin{itemize}[noitemsep, topsep=0pt] \item Figure \ref{fr6}(a) $dim_HF_{\{7/2\}}=-ln(7)/ln(P(7,2))\approx1.6522616056918107$ \item Figure \ref{fr6}(b) $dim_HF_{\{8/3\}}=-ln(8)/ln(P(8,3))\approx1.6934291475411138$ \item Figure \ref{fr6}(c) $dim_HF_{\{10/3\}}=-ln(10)/ln(P(10,3))\approx1.5949906555938886$ \item Figure \ref{fr6}(d) $dim_HF_{\{11/3\}}=-ln(11)/ln(P(11,3))\approx1.5911325154416658$ \end{itemize} \end{Ex} \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic7.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS generated fractal sets made of points that lie in $F_{\{12/4\}}$, $F_{\{13/4\}}$, $F_{\{14/4\}}$ and $F_{\{15/4\}}$ in subpanels (a), (b), (c) and (d), respectively.}\label{fr7} \end{figure} \begin{Ex}\label{ex3} We will compute the Hausdorff dimensions of the attractors shown in figure \ref{fr7}: \begin{itemize}[noitemsep, topsep=0pt] \item Figure \ref{fr7}(a) $dim_HF_{\{12/4\}}=-ln(12)/ln(P(12,4))\approx1.598670034685813$ \item Figure \ref{fr7}(b) $dim_HF_{\{13/4\}}=-ln(13)/ln(P(13,4))\approx1.5653005271788485$ \item Figure \ref{fr7}(c) $dim_HF_{\{14/4\}}=-ln(14)/ln(P(14,4))\approx1.5490615012592472$ \item Figure \ref{fr7}(d) $dim_HF_{\{15/4\}}=-ln(15)/ln(P(15,4))\approx1.5430579163288531$ \end{itemize} \end{Ex} \newpage \begin{figure}[t] \begin{center} \begin{picture}(140,80)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic8.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS generated fractal sets made of points that lie in $F_{\{16/5\}}$ and $F_{\{24/7\}}$ in subpanels (a) and (b), respectively.}\label{fr8} \end{figure} \begin{Ex}\label{ex4} We will compute the Hausdorff dimensions of the attractors shown in figure \ref{fr8}: \begin{itemize}[noitemsep, topsep=0pt] \item Figure \ref{fr8}(a) $dim_HF_{\{16/5\}}=-ln(16)/ln(P(16,5))\approx1.5434949184823248$ \item Figure \ref{fr8}(b) $dim_HF_{\{24/7\}}=-ln(24)/ln(P(24,7))\approx1.4772930562556852$ \end{itemize} \end{Ex} Also, the $dim_HF_{\{n/m\}}$ for $n\in[17,50]$ and $m\in[n/4,n/4+1]$ are as follows: $\{dim_HF_{\{17/5\}},dim_HF_{\{18/5\}}, ... , dim_HF_{\{50/13\}}\}\approx\{1.5238, 1.5126, 1.5071, 1.5056, 1.4924,\\ 1.4841, 1.4794, 1.4773, 1.4677, 1.4613, 1.4573, 1.4551, 1.4478, 1.4426, 1.4391, 1.437, 1.4312, 1.4269,\\ 1.4239, 1.422, 1.4172, 1.4136, 1.4109, 1.4091, 1.4051, 1.402, 1.3997, 1.398, 1.3946, 1.3919, 1.3898,\\ 1.3883, 1.3853, 1.3829\}$ Finally, for $n=1e+308$, $dim_HF_{\{1e+308/2.5e+307\}}\approx1.001622$. 
\begin{Th}\label{D1} As $n$ goes to infinity, $dim_HF_{\{n/m\}}$ approaches $1$ \end{Th} \begin{proof} Let $s=dim_HF_{\{n/m\}}$; then from $\displaystyle{\lim_{n \rightarrow \infty} P =\lim_{n \rightarrow \infty} \frac{\sin(\pi/n)}{2\cos((m-1)\pi/n)\sin(m\pi/n)}=}$\\ $\displaystyle{=\lim_{n \rightarrow \infty} \frac{\sin(\pi/n)}{2\cos(\pi/4)\sin(\pi/4)} = \lim_{n \rightarrow \infty} \sin(\pi/n) = \lim_{n \rightarrow \infty} \pi/n}$ and $nP^s=1$ we can deduce $s$.\\ Thus, $\displaystyle{\lim_{n \rightarrow \infty} s = \lim_{n \rightarrow \infty} \frac{\ln(n)}{\ln(n/\pi)}=\infty/\infty}$, hence, by L'H\^{o}pital's rule, $\displaystyle{\lim_{n \rightarrow \infty} s = \lim_{n \rightarrow \infty} \frac{\frac{\partial\ln(n)}{\partial n}}{\frac{\partial \ln(n/\pi)}{\partial n}}=1}$ \end{proof} As $F_{\{n/m\}}$ is inscribed in the same circle in which the initial \{n/m\}-polygon is inscribed, a corollary of Theorem \ref{D1} is that, as $n \rightarrow \infty$, $F_{\{n/m\}}$ comes arbitrarily close to the circle in which the initial \{n/m\}-polygon is inscribed. \subsection{Exact drawing of the IFS iterations} All the figures above were drawn by using a random walk generator that draws points which lie in the IFS attractor \cite{Devaney_1990,Falconer_1990}. Another way of showing the attractor is by plotting a large enough iteration of the IFS (the 3rd or 4th is usually enough), in which multiple scaled-down copies of the initial polygon are imaged. In figure \ref{fr9} an example of this plotting approach is shown: in panel (b) the fourth iteration of the \{5/2\}-polygon looks like figure \ref{fr5}(c), where the same attractor is produced by the random walk technique. In the next section we will use the latter technique more often for the sake of clarity of the concepts presented. \begin{figure}[t] \begin{center} \begin{picture}(140,80)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic9.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{5/2\} star-polygon where the second and the fourth iterations are shown in panels (a) and (b), respectively.}\label{fr9} \end{figure} \section{The centre of the circle as an additional point of attraction}\label{sec4} If we add the centre of the circle as another attracting point that the random generator takes into account, then we can produce non-self-intersecting fractal sets that cover a great amount of the area that is bounded by the unit circle. This result is due to the fact that the centre point adds to the IFS attractor (the invariant set) one more scaled copy of the initial star-polygon; hence we need an additional contracting map, which we will call $S_c$. Moreover, if the scaling factor of the central point is carefully computed, one can exploit a number of different features of the star-polygons. Here we will give a few introductory examples. Let us consider an initial \{3/1\}-polygon, so that $P(3,1)=1/2$, and let us add a central map $S_c$ with the same ratio $1/2$ and rotation $\pi/3$ to the set of maps $\{S_1,S_2,S_3\}$. This will result in a triangle-shaped attractor with Hausdorff dimension $dim_HF_{\{3/1\}}[L^1,\pi/3]=-ln(4)/ln(P(3,1))=ln(4)/ln(2)=2$. Thus, the only difference from the attractor shown in figure \ref{fr5}(b) will be the triangular shape. We will present the IFS of the $\{5/2\}$ (see figure \ref{fr10}, Example \ref{ex5}) with an attracting centre, where the similarity map corresponding to the centre point has the same scaling factor $P(5,2)$ as the maps that correspond to the vertices of the initial $\{5/2\}$-polygon.
In figure \ref{fr11} another IFS is shown and its dimension is computed in Example \ref{ex6}. This fractal has a centre map that not only scales, but also rotates the initial polygon by an angle $\pi/5$ while keeping the ratio $P(5,2)$. Let us also see the IFS of the $\{6/2\}$ with an attracting centre, where the similarity map corresponding to the centre point has a scaling factor $P(6,2)$ and involves no rotation, presented in figure \ref{fr12} and Example \ref{ex7}. From the dimensions computed in section \ref{dimensions} we can deduce that, as $n$ grows, the scaling ratio $P(n,m)$ with $m\in[n/4,n/4+1]$ monotonically decreases and the resulting fractals shrink in dimension. Thus, if we want to increase $n$ but keep attractors with a reasonably high dimension, we can no longer use the same ratio for the centre scaling map as in the cases $n=5$ and $n=6$. Therefore, we will define different rules for the scaling ratio of the centre map $S_c$ for any $n$, depending on whether it is odd or even and whether $S_c$ includes a rotation such as $\pi/n$. \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic10.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{5/2\} star-polygon where the central map $S_c$ has no rotation and uses the ratio $P(5,2)$. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr10} \end{figure} \begin{Ex}\label{ex5} We will compute the Hausdorff dimension of the attractor shown in figure \ref{fr10}(d): $dim_HF_{\{5/2\}}[L^1,0]=-ln(6)/ln(P(5,2))\approx1.8617$ \end{Ex} \begin{figure}[hpb] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic11.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{5/2\} star-polygon where the central map $S_c$ has rotation $\pi/5$ and uses the ratio $P(5,2)$. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr11} \end{figure} \begin{Ex}\label{ex6} We will compute the Hausdorff dimension of the attractor shown in figure \ref{fr11}(d): $dim_HF_{\{5/2\}}[L^1,\pi/5]=dim_HF_{\{5/2\}}[L^1,0]=-ln(6)/ln(P(5,2))\approx1.8617$ \end{Ex} \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic12a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{6/2\} star-polygon where the central map $S_c$ has no rotation and uses the ratio $P(6,2)$. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr12} \end{figure} \begin{Ex}\label{ex7} We will compute the Hausdorff dimension of the attractor shown in figure \ref{fr12}(d): $dim_HF_{\{6/2\}}[L^0,0]=-ln(7)/ln(P(6,2))\approx1.7712$ \end{Ex} \newpage \begin{figure}[t] \begin{center} \begin{picture}(140,70)(0,0) \put(0,0){ \put(0,0){\includegraphics{fpic13.pdf}} } \end{picture} \end{center} \vspace{0.0cm} \caption{Sketch of a $\{7/2\}$ polygon IFS with a centre map after the first iteration.
The construction used for computing the ratio of the centre map is shown.}\label{fr13} \end{figure} In order to clearly show how the ratios of $S_c$ are deduced, figure \ref{fr13} sketches part of the IFS of the $\{7/2\}$-polygon after a single iteration. There are two centre polygons: one rotated by the angle $\pi/7$ (the one with a vertex at $M^1$) and another rotated by an angle to be computed later (with vertex $L^1$). Thus, the contraction map that scales the original $\{7/2\}$-polygon towards the point $O$ and maps it onto one of these two polygons includes a rotation as well. Using the sketch in figure \ref{fr13} we will show that some of the points $M^i$ and $L^i$ can be used as vertices of a central polygon that corresponds to a central map $S_c$ suitable for a non-self-intersecting polygonal IFS. Let us construct the points $M^1, M^3, M^5, ..., M^{2i+1}$ lying on the line that crosses the line-segment $O^1O$ at angle $\pi/7$. This line is intersected by the line segments that start from the vertices $L^1, L^3$ and $L^5$ and are perpendicular to the line-segment $O^1O$, resulting in the points $M^1, M^3$ and $M^5$. Now we will define six different ratios $P_c$ for the centre map $S_c$: $OM^1/OA$, $OM^3/OA$, $OM^5/OA$ and $OL^1/OA$, $OL^3/OA$, $OL^5/OA$. All of them are defined by the angles $\measuredangle{OO^1L^l}=l\pi/n$ where in general $l=2i+1$ for $i\geq0, i\in\mathbb{Z}$ when the polygons are odd-sided and $l=2i$ when the polygons are even-sided. For $l=1,3,5$, $ON=OA-O^1A-O^1L^l\cos(l\pi/7)$ and $OM^l=ON/\cos(\pi/7)$, thus $OM^l/OA=1/\cos(\pi/7)-(O^1A/OA)(1/\cos(\pi/7)+\cos(l\pi/7)/\cos(\pi/7))$. This equation also holds for $l=7$, where $A\equiv L^7$. If we apply equation (\ref{eq1}), so that $O^1A/OA=P(7,2)$, then $OM^l/OA=1/\cos(\pi/7)-P(7,2)/\cos(\pi/7)-P(7,2)\cos(l\pi/7)/\cos(\pi/7)$. Also, if generalised for an arbitrary initial $\{n/m\}$-polygon, it leads to the following equation: \begin{align}\label{eq5} &\displaystyle{\frac{OM^l}{OA}}=\displaystyle{\frac{1-P(n,m)-P(n,m)\cos(l\pi/n)}{\cos(\pi/n)}} \\ &\measuredangle{O^1OM^l}=\pi/n \nonumber \\ &0\leq l\leq n, l=2i \text{ if $n$-even}, l=2i+1 \text{ if $n$-odd}, i\geq0, i\in\mathbb{Z} \nonumber \end{align} Another ratio that may be used for the map $S_c$ is $OL^l/OA$. Here $NL^1=O^1L^1\sin(\pi/7)$, hence $\tan(\measuredangle{O^1OL^1})=\displaystyle{\frac{O^1L^1\sin(\pi/7)}{OA-O^1A-O^1L^1\cos(\pi/7)}}$, which for an arbitrary $l$ becomes\\ $\tan(\measuredangle{O^1OL^l})=\displaystyle{\frac{O^1L^l\sin(l\pi/7)}{OA-O^1A-O^1L^l\cos(l\pi/7)}}$. Now let us take into account that $O^1L^l=O^1A$ and $O^1A/OA=P(7,2)$, hence $\tan(\measuredangle{O^1OL^l})=\displaystyle{\frac{P(7,2)\sin(l\pi/7)}{1-P(7,2)-P(7,2)\cos(l\pi/7)}}$. Finally, for an arbitrary initial $\{n/m\}$-polygon we can deduce the angle $\measuredangle{O^1OL^l}$ and from $(OL^l)^2=(NL^l)^2+(ON)^2$ we can also deduce the ratio $OL^l/OA$ as follows: \begin{align}\label{eq6} &\displaystyle{\frac{OL^l}{OA}}=\sqrt{2P(n,m)(P(n,m)-1)(1+\cos(l\pi/n))+1} \\ &\gamma(n,m,l)=\measuredangle{O^1OL^l}=\arctan\left(\displaystyle{\frac{P(n,m)\sin(l\pi/n)}{1-P(n,m)-P(n,m)\cos(l\pi/n)}}\right) \nonumber \\ &0\leq l\leq n, l=2i \text{ if $n$-even}, l=2i+1 \text{ if $n$-odd}, i\geq0, i\in\mathbb{Z} \nonumber \end{align} Now we can look back at figures \ref{fr10}, \ref{fr11} and \ref{fr12} and understand how they are constructed.
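Before revisiting those figures, a short numerical sketch of equations (\ref{eq5}) and (\ref{eq6}) is given below. It simply evaluates the candidate centre-map ratios $OM^l/OA$ and $OL^l/OA$ and the rotation $\gamma(n,m,l)$ for a given $\{n/m\}$-polygon, and is intended only as an aid for reproducing the constructions discussed next.

\begin{verbatim}
import numpy as np

def P(n, m):
    """Scaling ratio P(n, m) of the vertex maps."""
    return np.sin(np.pi / n) / (2 * np.cos((m - 1) * np.pi / n) * np.sin(m * np.pi / n))

def centre_ratios(n, m, l):
    """Return (OM^l/OA, OL^l/OA, gamma(n, m, l)) for the candidate centre maps."""
    p = P(n, m)
    c = np.cos(l * np.pi / n)
    om = (1 - p - p * c) / np.cos(np.pi / n)
    ol = np.sqrt(2 * p * (p - 1) * (1 + c) + 1)
    gamma = np.arctan2(p * np.sin(l * np.pi / n), 1 - p - p * c)
    return om, ol, gamma

# For the {5/2}-polygon, centre_ratios(5, 2, 1) gives OM^1/OA = OL^1/OA = P(5, 2),
# which is the centre-map ratio used in the two {5/2} examples above.
\end{verbatim}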
In example \ref{ex5}, figure \ref{fr10}, the contraction ratio of $S_c$ is $OL^1/OA$, and the attractor of the IFS is denoted as $F_{\{5/2\}}[L^1,0]$, where 0 indicates the angle of rotation that $S_c$ has. In this case of $\{5/2\}$-polygon $OM^1/OA=OL^1/OA$, so it does not matter if $L^1$ or $M^1$ is used for the notation. In the other examples where $OL^l/OA=OM^l/OA$, again $L^l$ will be used as notation. In example \ref{ex6}, figure \ref{fr11} the contraction ratio of $S_c$ is again $OL^1/OA$, but here we have rotation at angle $\pi/5$, thus the attractor of the IFS is denoted as $F_{\{5/2\}}[L^1,\pi/5]$. And finally in example \ref{ex7}, figure \ref{fr12} the contraction ratio of $S_c$ is $OL^0/OA$ and the attractor of the IFS is denoted $F_{\{6/2\}}[L^0,0]$. \subsection{Even $n$} In this subsection we will take a close look at the IFS that originates from even sided star-polygons. Firstly, we should note that the same way as $F_{\{6/2\}}[L^0,0]$, for any even $n$, $F_{\{2i/m\}}[L^0,0]$ will always be a non-self-intersecting attractor if $m\in[n/4,n/4+1]$ and $i\in\mathbb{N}$. Now, the first example has \{6/2\} as initial polygon, and $S_c$ has scaling ratio $OL^2/OA$ and rotation $\pi/6$. The resulting fractal can be seen in figure \ref{fr14}, where from the random generated attractor, see panel (d), we can expect the exact dimension of 2. Indeed, this is analytically proven in the computations of example \ref{ex8}. Similarly, the constructions and the attractors of $F_{\{8/2\}}[L^2,\frac{\pi}{8}]$ and $F_{\{8/3\}}[L^2,\frac{\pi}{8}]$ are shown in figures \ref{fr15} and \ref{fr16}. They have equal dimension computed in example \ref{ex9}. Another pair of attractors that have central map and originate from \{8/2\}-star polygon are the $F_{\{8/2\}}[L^0,0]$ and $F_{\{8/3\}}[L^0,0]$ shown in figures \ref{fr17} and \ref{fr18}. They have equal dimension computed in example \ref{ex10}. The last four examples of attractors clearly show that the scaling ratio and the number of the vertices are the parameters that define the attractor of the IFS. \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic14a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{6/2\} star-polygon where $S_c$ has scaling ratio $OL^2/OA$ and rotation $\pi/6$. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr14} \end{figure} \begin{Ex}\label{ex8} We will compute the Hausdorff dimensions of the attractor shown in figure \ref{fr14}(d): $6P(6,2)^{dim_HF_{\{6/2\}}[L^2,\frac{\pi}{6}]}+\sqrt{2P(6,2)(P(6,2)-1)(1+\cos(2\pi/6))+1}^{dim_HF_{\{6/2\}}[L^2,\frac{\pi}{6}]}=1$ \\ $6(1/3)^{dim_HF_{\{6/2\}}[L^2,\frac{\pi}{6}]}+\sqrt{(1/3)}^{dim_HF_{\{6/2\}}[L^2,\frac{\pi}{6}]}=1$. Let $y=\sqrt{(1/3)}^{dim_HF_{\{6/2\}}[L^2,\frac{\pi}{6}]}$\\ Hence, $6y^2+y-1=0$ and $y_{1,2}=1/3;-1/2$, therefore as $y\geq0$\\ $\sqrt{(1/3)}^{dim_HF_{\{6/2\}}[L^2,\frac{\pi}{6}]}=1/3\rightarrow dim_HF_{\{6/2\}}[L^2,\frac{\pi}{6}]=2$. \end{Ex} \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic15a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{8/2\} star-polygon where $S_c$ has scaling ratio $OL^2/OA$ and rotation $\pi/8$. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 
100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr15} \end{figure} \begin{Ex}\label{ex9} The Hausdorff dimension of the attractor shown in figure \ref{fr15}(d) is\\ $dim_HF_{\{8/2\}}[L^2,\frac{\pi}{8}]\approx1.9799$ \end{Ex} \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic16a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{8/3\} star-polygon where $S_c$ has scaling ratio $OL^2/OA$ and rotation $\pi/8$. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr16} \end{figure} \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic17a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{8/2\} star-polygon where $S_c$ has scaling ratio $OL^0/OA$ and no rotation. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr17} \end{figure} \begin{Ex}\label{ex10} The Hausdorff dimension of the attractor shown in figure \ref{fr17}(d) is\\ $dim_HF_{\{8/2\}}[L^0,0]\approx1.8678$ \end{Ex} \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic18a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{8/3\} star-polygon where $S_c$ has scaling ratio $OL^0/OA$ and no rotation. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr18} \end{figure} \subsection{Odd $n$} In this subsection we will take a close look at the IFS that originates from odd-sided star-polygons. The first example has \{7/2\} as an initial polygon and $S_c$ has a scaling ratio $OM^1/OA$ and rotation $\pi/7$; see equations (\ref{eq5}). The resulting fractal can be seen in figure \ref{fr19}, where the first, the second and the fourth iterations are in panels (a), (b) and (c), while the randomly generated attractor is in panel (d). The Hausdorff dimension of the attractor $F_{\{7/2\}}[M^1,\pi/7]$ is computed in example \ref{ex11} to be 1.8773. Another fractal that originates from a \{7/2\}-polygon is shown in figure \ref{fr20}. Here the scaling ratio is $OL^1/OA$ and the angle of rotation is computed using equations (\ref{eq6}), which leads to polygons that meet at their vertices. The dimension of $F_{\{7/2\}}[L^1,\gamma(7,2,1)]$ is computed in example \ref{ex12} to be 1.8564. The technique that uses the ratio $OL^1/OA$ and the angle from equations (\ref{eq6}) can also be used for producing non-intersecting self-similar fractals for any odd $n$. Therefore, for any $n$ we can generate an $n$-flake which will be either $F_{\{n/m\}}[L^0,0]$ if $n$ is even or $F_{\{n/m\}}[L^1,\gamma(n,m,1)]$ if $n$ is odd. \subsection{The dimension of $\infty$-flake} For any $n$, $m\in[n/4,n/4+1]$ and $i\in\mathbb{N}$, by construction if $n=2i$ then $F_{\{n/m\}}[L^0,0]$ is non-self-intersecting, and by construction if $n=2i+1$ then $F_{\{n/m\}}[L^1,\gamma(n,m,1)]$ is non-self-intersecting. Therefore, equations (\ref{eq1}), (\ref{eq4}) and (\ref{eq6}) can be used for the corresponding $dim_HF_{\{n/m\}}[L^0,0]$ and $dim_HF_{\{n/m\}}[L^1,\gamma(n,m,1)]$ to be obtained.
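The numerical dimensions quoted in examples \ref{ex9}--\ref{ex12} can be obtained in exactly the same way as in the explicit computation of Example \ref{ex8}, namely by solving the Moran-type equation $nP(n,m)^s+P_c^{\,s}=1$ for $s$, where $P_c$ denotes the ratio of the centre map (this equation reappears in the proof of Theorem \ref{D2} below). A minimal numerical sketch, in the style of the random-walk generator from the Appendix, is the following; the helper \texttt{moranDim} and the starting guess $1.5$ passed to \texttt{fzero} are illustrative choices only.\\ \texttt{moranDim = @(n,P,Pc) fzero(@(s) n*exp(s*log(P))+exp(s*log(Pc))-1, 1.5);}\\ \texttt{moranDim(6, 1/3, 1/3)\phantom{ssss}\% gives 1.7712..., cf. Example \ref{ex7}}\\ \texttt{moranDim(6, 1/3, sqrt(1/3))\phantom{s}\% gives 2, cf. Example \ref{ex8}}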
\begin{Th}\label{D2} As $n$ goes to infinity, $dim_HF_{\{n/m\}[L^0,0]}$ and $dim_HF_{\{n/m\}[L^1,\gamma(n,m,1)]}$ approach $2$. \end{Th} \begin{proof} Both dimensions can be deduced from the equation\\ \begin{equation*} nP(n,m)^s+\sqrt{2P(n,m)(P(n,m)-1)(1+\cos(l\pi/n))+1}^s=1, \end{equation*} where $s$ denotes $dim_HF_{\{n/m\}[L^0,0]}$ or $dim_HF_{\{n/m\}[L^1,\gamma(n,m,1)]}$. The latter equation can be modified to \begin{equation*} \displaystyle{P^s=\frac{1}{n}-\frac{\sqrt{2P(n,m)(P(n,m)-1)(1+\cos(l\pi/n))+1}^s}{n}}, \end{equation*} from where: \begin{align*} s&=\lim_{n\rightarrow\infty}\frac{\ln\Big(\displaystyle\frac{1}{n}-\frac{\sqrt{2P(n,m)(P(n,m)-1)(1+\cos(l\pi/n))+1}^s}{n}\Big)}{\ln(P)}=\\ &=\lim_{n\rightarrow\infty} \frac{\ln\Big(\displaystyle\frac{1}{n}-\frac{\sqrt{4P(n,m)(P(n,m)-1)+1}^s}{n}\Big)}{\ln(P)}=\\ &=\lim_{n\rightarrow\infty} \frac{\ln\Big(\displaystyle\frac{1}{n}-\frac{(1-2P)^s}{n}\Big)}{\ln(P)}=\lim_{n\rightarrow\infty} \frac{\ln\Big(\displaystyle\frac{1}{n}-\frac{(1-2\pi/n)^s}{n}\Big)}{\ln(\pi/n)} \end{align*} Let us substitute $\nu=\pi/n$, hence \begin{align*} s&=\lim_{\nu\rightarrow 0} \frac{\ln\Big(\displaystyle\frac{\nu}{\pi}(1-(1-2\nu)^s)\Big)}{\ln(\nu)}= \lim_{\nu\rightarrow 0} \frac{\ln\Big(\displaystyle\frac{\nu}{\pi}\Big)}{\ln(\nu)}+\lim_{\nu\rightarrow 0} \frac{\ln(1-(1-2\nu)^s)}{\ln(\nu)}=\\ &=1+\frac{-\infty}{-\infty}=1+\lim_{\nu\rightarrow 0} \frac{2s\nu(1-2\nu)^{s-1}}{1-(1-2\nu)^s}=1+\lim_{\nu\rightarrow 0} \frac{2s\nu(1-(s-1)2\nu+O(2))}{1-(1-s2\nu+O(2))}=\\ &=1+\lim_{\nu\rightarrow 0} \frac{2s\nu+O(2)}{2s\nu+O(2)}=2 \end{align*} \end{proof} \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic19a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{7/2\} star-polygon where $S_c$ has scaling ratio $OM^1/OA$ and rotation $\pi/7$. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr19} \end{figure} \begin{Ex}\label{ex11} The Hausdorff dimensions of the attractor shown in figure \ref{fr19}(d) is\\ $dim_HF_{\{7/2\}}[M^1,\frac{\pi}{7}]\approx1.8773$ \end{Ex} \newpage \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic20a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{IFS with initial \{7/2\} star-polygon where $S_c$ has scaling ratio $OL^1/OA$ and rotation $\gamma(7,2,1)$ radians. The first, the second and the fourth iterations are shown in panels (a), (b) and (c) respectively. 100000 points that lie on the attractor of the IFS are shown in panel (d).}\label{fr20} \end{figure} \begin{Ex}\label{ex12} The Hausdorff dimensions of the attractor shown in figure \ref{fr20}(d) is\\ $dim_HF_{\{7/2\}[L^1,\gamma(7,2,1)]}\approx1.8564$ \end{Ex} \section{Special cases and equivalent IFS attractors}\label{sec5} \begin{figure}[hbp] \begin{center} \begin{picture}(140,160)(0,0) \put(0,0){ \put(-10,0){\includegraphics{fpic21a.pdf}} } \end{picture} \end{center} \vspace{-0.5cm} \caption{ In subpanels (a) and (b) we can see the third iteration of the IFS that originate from \{6/3\} and \{8/4\}, respectively. Both $S_c$ have scaling ratio $OL^2/OA$, where in panel (a) the rotation is $\pi/6$ and in (b) it is $\pi/8$. In subpanels (c) and (d) we can see the third iteration of the IFS that originate from \{7/3.5\} and \{9/4.5\}, respectively. 
Both $S_c$ have scaling ratio $OM^1/OA$, where in panel (c) the rotation is $\pi/7$ and in (d) it is $\pi/9$.}\label{fr21} \end{figure} As in the previous sections, here we assume $n\geq2$, $n\in\mathbb{Z}$, $m\in\mathbb{Z}$, $0\leq m<n$, and $i\in\mathbb{Z}$, $i\geq 0$. Let us now review the developed notation and see how many of the well-known fractals can be associated with it. Firstly, we can say that the Cantor set results as an IFS attractor if $n=2$ and $P=1/3$. As we saw in figure \ref{fr5}(a), the Sierpinski Triangle comes out when $n=3$ and $P=P(3,1)=1/2$ or $F_{\{3/1\}}$ and the Sierpinski Hexagon when $n=6$ and $P=P(6,2)=1/3$ or $F_{\{6/2\}}$. The Greek Cross fractal appears when $n=4$ and $P=P(4,2)=1/2$ or $F_{\{4/2\}}$, while for $n=4$ and $P<1/2$ the invariant set for a Horseshoe map is produced. The Sierpinski Pentagon appears for $n=5$ and $P=P(5,2)=1/(1+\text{golden ratio})$ or $F_{\{5/2\}}$. When the centre map $S_c$ is taken into account, the Vicsek fractal can be produced when $n=4$ and $P=P_c=1/3$. Also, the Pentaflake and the Hexaflake are shown in figures \ref{fr11} and \ref{fr12} as $F_{\{5/2\}}[L^1,0]$ and $F_{\{6/2\}}[L^0,0]$, respectively. However, the attractors of the special cases mentioned above do not originate from a unique star-polygon because if we generate random points on the attractor of the IFS, the image is defined by the maps $\{S_i\}$; see theorem \ref{th1}. Therefore, if we do not alter $P$ and $n$, the IFS-attractors with any initial $\{n,m\}$-polygon will be the same. Thus, we can assume that every attractor that originates from a \{2i,m\}-polygon is equivalent to the attractor of the IFS that originates from the \{2i,i\}-polygon, where the ratios and the rotations of the maps $\{S_1,...,S_{2i},S_c\}$ are kept the same as the ones used for the IFS of the \{2i,m\}-polygon. However, when the exact polygons are plotted, as in section \ref{sec4}, due to the impossibility of realising infinitely many iterations, the integer $m$ also plays a role. Two such examples are shown in figure \ref{fr21}, where in panels (a) and (b) the third iterations of the IFS of $F_{\{6/3\}}[L^2,\frac{\pi}{6}]$ and $F_{\{8/4\}}[L^2,\frac{\pi}{8}]$ are realised, respectively. One can see the difference with figures \ref{fr14}, \ref{fr15} and \ref{fr16} where $F_{\{6/2\}}[L^2,\frac{\pi}{6}]$, $F_{\{8/2\}}[L^2,\frac{\pi}{8}]$ and $F_{\{8/3\}}[L^2,\frac{\pi}{8}]$ are shown. In the case of odd $n=2i+1$ we do not have a star-polygon composed of $n$ line-segments that cross each other at the centre and are inscribed in all the \{2i+1,m\}-polygons in the same way as the \{2i,i\}-polygon is inscribed in any \{2i,m\}-polygon. Therefore, we will construct such polygons as the star of $2i+1$ line segments that start from the centre of a regular $(2i+1)$-sided polygon and end at its vertices. Let us denote this figure as a $\{2i+1,i+1/2\}$-star polygon. As the $\{2i+1,i+1/2\}$-polygon is inscribed in all the $\{2i+1,m\}$-polygons we can generalise that any attractor that originates from an \{n,m\}-polygon is equivalent to the attractor of the IFS that originates from the \{n,n/2\}-polygon where the ratios and the rotations of the maps $\{S_1,...,S_{n},S_c\}$ are kept the same as the ones used for the IFS of the \{n,m\}-polygon. Two such examples are shown in figure \ref{fr21}, where in panels (c) and (d) the third iterations of the IFS of $F_{\{7/3.5\}}[M^1,\frac{\pi}{7}]$ and $F_{\{9/4.5\}}[M^1,\frac{\pi}{9}]$ are realised.
One can see the difference with figure \ref{fr19}, where $F_{\{7/2\}}[M^1,\frac{\pi}{7}]$ was shown. \begin{Ex}\label{ex13} If the fractal in figure \ref{fr21}(d) is infinitely iterated, the attractor will have Hausdorff dimension: $dim_HF_{\{9/4.5\}}[M^1,\frac{\pi}{9}]\approx1.8879$ \end{Ex} Unlike the cases of $n=3,5,7$ and $9$, where $F_{\{n/\frac{n}{2}\}}[M^1,\frac{\pi}{n}]$ is a non-self-intersecting fractal with the copies of the initial $\{n/\frac{n}{2}\}$-polygon osculating with each other, for $n=2i+1$ when $n\geq11$, the scaled copies stop osculating and with the increase of the iterations they do not fill up the space in the most effective way. The same effect appears when we take $F_{\{n/\frac{n}{2}\}}[L^2,\frac{\pi}{n}]$ for $n=2i$ when $n\geq10$. Thus, the problem of finding a scaling ratio for $S_c$ for every $n\geq10$, where the rotation of $S_c$ is equal to $\pi/n$, remains open. This is an important question due to the fact that the fractals that originate from an $n$-gon with rotation of $S_c$ equal to $\pi/n$ could have dimensions very close to or equal to 2; see examples \ref{ex8} and \ref{ex9}. \section*{Conclusion} The present paper develops a universal technique that allows any star-polygon to be used for the construction of a non-self-intersecting fractal (Sierpinski $n$-gon or $n$-flake) by using an IFS, either through a random walk or through exact scaling. Alongside the proposed scaling ratios, the Matlab code for IFS random-walk fractal generation is provided, so that anyone interested in studying the geometry of this class of fractals can use it. Important dimensions are computed, namely the dimension of the Sierpinski $\infty$-gon is proved to be 1, the dimension of the $\infty$-flake to be 2, and the dimension of $F_{\{6/2\}}[L^2,\pi/6]$ to be 2 as well. It is also shown that, by using the random-walk IFS generator, identical attractors may result from different initial star-polygons. The proposed study can be extended if rotations are applied not only to the $S_c$ map, but to the $S_i$ maps as well. However, this is still ongoing research. The construction techniques and the provided ratios needed for the dimensions of the presented class of fractals are important not only for mathematicians, but also for engineers and other scientists who may be interested in fractal-shaped devices or who study the fractal shapes of nature. With the advancing precision of fabrication technology, polyflakes may become an important design for devices such as antennas or chemical mixers for fuel cells, batteries, etc. Also, fractal shapes are applicable in any kind of wave absorber, where the wave could be a sound wave, an electromagnetic signal, light, a wave caused by a turbulent flow, etc. In other words, fractal designs are going to be part of future physical devices at all scales and, hence, the research focused on fractal-shaped figures is important for many innovation processes. \section*{Acknowledgement} First of all I would like to thank my Geometry teacher Tanya Stoeva for the dedicated classes during the years at National Gymnasium of Natural Sciences and Mathematics "Lyubomir Chakalov" which still help me to overcome problems in the most intuitive way. Also, I would like to thank my family and friends for the support.
\section*{Appendix (Matlab random generator)}\label{app} \% name: Vassil Tzanov \\ \% function 'frac' that takes the matrix $C$, \\ \%$C= \begin{bmatrix} A1 & b1 &\\ A2 & b2 &\\ \vdots \\ An & bn & \end{bmatrix} $\\ \% Ai are matrices 2x2 ,bi are vectors 2x1, which are the i-th linear function in the "IFS"\\ \% $k$ defines the amount of points that we want to be plotted \\ \% on the attractor defined by $C$; the function 'frac' can plot fractals \\ \% with central polygon rotated at angle $rot$; the central map must be defined by\\ \% the last two rows of $C$; if there is no central map, $rot$ has to be defined as $0$\\ \texttt{function Fractal = frac(C,k,rot)}\\\\ \phantom{ss}\% verification of $C$\\ \texttt{\phantom{ss}D=size(C);\\ \phantom{ss}n=D(1)/2;\\ \phantom{ss}if D(2)$\sim$=3 \big| floor(n)$\sim$=n\\ \phantom{ssss}Fractal = 'Bad input size';\\ \phantom{ss}else} \phantom{ssss}\% computation of the determinants of the matrices A\\ \texttt{\phantom{ssss}for i=1:n\\ \phantom{ssssss}dets(i)=abs(det(C((2*i-1):(2*i),1:2)));\\ \phantom{ssss}end\\ \phantom{ssss}dets=dets';} \phantom{ssss}\% if some determinant is zero we have to add a small value, because\\ \phantom{ssss}\% we do not like this function to be executed with zero probability\\ \texttt{\phantom{ssss}dets=max(dets, max(dets)/(25*n));} \phantom{ssss}\% the determinant are divided on their sum so we derive a probability vector\\ \texttt{\phantom{ssss}dets = dets/sum(dets);} \phantom{ssss}\% the vector prob is defined as\\ \phantom{ssss}\% prob = [0, dets(1), dets(1)+dets(2), . . ., dets(1)+...+dets(n-1)] \\ \phantom{ssss}\% then "sum(prob$<$rand)" gives a random integer between $1$ and $n$ \\ \phantom{ssss}\% that we use in order to randomly choose one of the $n$-th functions\\ \texttt{\phantom{ssss}de=dets(1);\\ \phantom{ssss}prob(1)=0;\\ \phantom{ssss}for i=2:n\\ \phantom{ssssss}prob(i)=de;\\ \phantom{ssssss}de=de+dets(i);\\ \phantom{ssss}end} \phantom{ssss}\% computations of the points' coordinates and filling the matrix $Fractal$\\ \texttt{\phantom{ssss}x=[0;0];\\ \phantom{ssss}Fractal=zeros(2,k+20);\\ \phantom{ssss}for j=1:(k+20)\\ \phantom{ssssss}int=sum(prob<rand);}\\ \phantom{ssssss}\% one of the functions is randomly chosen\\ \phantom{ssssss}i=int;\\ \phantom{ssssss}\% matrix computation of $t*x+(1-t)*y$, where $x$ is\\ \phantom{ssssss}\% the current point and $y$ is the point towards which $x$ is attracted\\ \phantom{ssssss}\% if $i$ corresponds to the last row, \\ \phantom{ssssss}\% a rotation at $rot$ radians around [0,0] must be executed\\ \texttt{\phantom{ssssss}if i==n\\ \phantom{ssssssss}x=[cos(rot),-sin(rot);sin(rot),cos(rot)]*x;\\ \phantom{ssssssss}x=C((2*i-1):(2*i),1:2)*x+(diag([1,1],0)-\\ -C((2*i-1):(2*i),1:2))*C((2*i-1):(2*i),3);\\ \phantom{ssssssss}Fractal(:,j)=x;\\ \phantom{ssssss}else\\ \phantom{ssssssss}x=C((2*i-1):(2*i),1:2)*x+(diag([1,1],0)-\\ -C((2*i-1):(2*i),1:2))*C((2*i-1):(2*i),3);\\ \phantom{ssssssss}Fractal(:,j)=x;\\ \phantom{ssssss}end\\ \phantom{ssss}end} \phantom{ssss}\% the points of the matrix $Fractal$ are plotted\\ \texttt{\phantom{ssss}Fractal=Fractal(:,21:(k+20));\\ \phantom{ssss}plot(Fractal(1,:),Fractal(2,:),'k.','MarkerSize',1)\\ \phantom{ssss}axis('equal')\\ \phantom{ss}end\\ end }
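For instance, the following illustrative call generates (a random sample of points on) the attractor of figure \ref{fr10}(d), i.e. $F_{\{5/2\}}[L^1,0]$: five vertex maps with ratio $P(5,2)$ and a centre map (in the last two rows of $C$, as required above) with ratio $OL^1/OA=P(5,2)$ and no rotation. The placement of the initial pentagon on the unit circle is an arbitrary choice.\\ \texttt{P = 1/(1+(1+sqrt(5))/2); \% = P(5,2)}\\ \texttt{C = [];}\\ \texttt{for i=1:5}\\ \texttt{\phantom{ss}C = [C; P*eye(2), [cos(2*pi*i/5); sin(2*pi*i/5)]];}\\ \texttt{end}\\ \texttt{C = [C; P*eye(2), [0;0]]; \% centre map}\\ \texttt{frac(C, 100000, 0);}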
\section{Introduction} For a smooth projective variety $X$ over $\mathbb{C}$, let $A^i(X):=CH^i(X)_{\mathbb{Q}}$ denote the Chow groups (i.e. the groups of codimension $i$ algebraic cycles on $X$ with $\mathbb{Q}$-coefficients, modulo rational equivalence). The domain of algebraic cycles is an alluring treasure trove for anyone looking for open problems \cite{B}, \cite{J2}, \cite{Kim}, \cite{Mur}, \cite{Vo}, \cite{MNP}. Inside this treasure trove, one niche of particular interest is occupied by hyperk\"ahler varieties (i.e. projective irreducible holomorphic symplectic manifolds \cite{Beau1}, \cite{Beau0}). For these varieties, recent years have seen an intense amount of new constructions and significant progress in the understanding of their Chow groups \cite{Beau3}, \cite{V13}, \cite{V14}, \cite{V17}, \cite{SV}, \cite{V6}, \cite{Rie}, \cite{Rie2}, \cite{LFu}, \cite{Lin}, \cite{Lin2}, \cite{SYZ}, \cite{FTV}, \cite{FT}, \cite{MN}, \cite{V20}. Much of this progress has centered around the following conjecture: \begin{conjecture}[Beauville, Voisin \cite{Beau3}, \cite{V17}]\label{conjbv} Let $X$ be a hyperk\"ahler variety. Let $D^\ast(X)\subset A^\ast(X)$ denote the $\mathbb{Q}$-subalgebra generated by divisors and Chern classes of $X$. Then the cycle class maps induce injections \[ D^i(X)\ \hookrightarrow\ H^{2i}(X,\mathbb{Q})\ \ \ \forall i\ .\] \end{conjecture} (For some cases where Conjecture \ref{conjbv} is satisfied, cf. \cite{Beau3}, \cite{V17}, \cite{BV}, \cite{Rie2}, \cite{LFu}, \cite{V6}, \cite{Yin}, \cite{FT}, \cite{LV2}, \cite{FLV2}.) The ``motivation'' underlying Conjecture \ref{conjbv} is that for a hyperk\"ahler variety $X$, the Chow ring $A^\ast(X)$ is expected to have a bigrading $A^\ast_{[\ast]}(X)$, where the piece $A^i_{[j]}(X)$ corresponds to the graded $\hbox{Gr}^j_F A^i(X)$ for the conjectural Bloch--Beilinson filtration. In particular, it is expected that the subring $A^\ast_{[0]}(X)$ injects into cohomology, and that $D^\ast(X)\subset A^\ast_{[0]}(X)$. In addition to divisors and Chern classes, what other cycles should be in the subring $A^\ast_{[0]}(X)$ (assuming this subring exists) ? A conjecture of Voisin provides more candidate members: \begin{conjecture}[Voisin \cite{V14}]\label{conjv} Let $X$ be a hyperk\"ahler variety of dimension $n=2m$. Let $Z\subset X$ be a codimension $i$ subvariety swept out by $i$-dimensional constant cycle subvarieties. There exists a subring $A^\ast_{[0]}(X)\subset A^\ast(X)$ injecting into cohomology, containing $D^\ast(X)$ and \[ Z\ \ \in A^i_{[0]}(X)\ .\] \end{conjecture} A {\em constant cycle subvariety\/} is a closed subvariety $T \subset X$ such that the image of the natural map $A_0(T)\to A^n(X)$ has dimension $1$. In particular, Conjecture \ref{conjv} stipulates that Lagrangian constant cycle subvarieties (i.e., constant cycle subvarieties of dimension $m$) should lie in $A^m_{[0]}(X)$. Some results towards Conjecture \ref{conjv} can be found in \cite{V13}, \cite{Lin2}, \cite{FLV}, \cite{FLV2}. Amongst hyperk\"ahler varieties, of particular interest are those admitting a {\em Lagrangian fibration\/} (i.e. a proper surjective morphism $\pi\colon X\to B$ with connected fibers and $0<\dim B<\dim X$; in this case the general fiber of $\pi$ is an abelian variety that is Lagrangian with respect to the symplectic form on $X$ \cite{Mat}. In dimension 2, a Lagrangian fibration is an elliptic K3 surface\footnote{For background on Lagrangian fibrations, cf. 
the foundational \cite{Mat} as well as the recent \cite{BK}, \cite{HX} and the references given there.}). As explained in \cite{mmj}, Conjecture \ref{conjv} plus the Bloch--Beilinson conjectures lead in particular to the following: \begin{conjecture}\label{conj3} Let $X$ be a hyperk\"ahler variety of dimension $4$. Assume that $X$ admits a Lagrangian fibration with general fibre $A$. Then \[ \hbox{Im} \bigl( A^2(X)\xrightarrow{\cdot A} A^4(X)\bigr)=\mathbb{Q}[c_4(T_X)]\ .\] \end{conjecture} (For a more general conjecture, which is more awkward to state, cf. \cite[Conjecture 1.3]{mmj}.) The goal of this note is to study the conjectural injectivity property (as outlined by Conjectures \ref{conjbv}, \ref{conjv} and \ref{conj3}) for some classical examples of Lagrangian fibrations. The first result is as follows: \begin{nonumbering}[=Theorem \ref{main1}] Let $X$ be a hyperk\"ahler fourfold, and assume that $X$ admits a Lagrangian fibration $\pi$ which is a compactified Jacobian of a family of curves. Let $A$ be a general fibre of $\pi$. Then \[ \hbox{Im} \bigl( A^2(X)\xrightarrow{\cdot A} A^4(X)\bigr)=\mathbb{Q}[c_4(T_X)]\ .\] \end{nonumbering} To prove Theorem \ref{main1}, thanks to Markushevich \cite{Mark} one is reduced to fibrations arising from hyperplane sections of a genus $2$ K3 surface (such fibrations are cited as examples of Mukai flops in the introduction of Mukai's beautiful paper \cite[Example 0.6]{Mu}). Then, we exploit the existence of a {\em multiplicative Chow--K\"unneth decomposition\/} \cite{SV}, combined with results concerning the {\em Franchetta property\/} for families of Hilbert powers of low degree K3 surfaces \cite{FLV}. The second result is about the six-dimensional Lagrangian fibration $\pi\colon J_1\to \mathbb{P}^3$, where $J_1$ is the compactified Jacobian of genus $3$ curves arising as hyperplane sections of a general quartic K3 surface $S$. This is another example in the introduction of Mukai's foundational paper \cite[Example 0.8]{Mu}, where it is shown that the flop of $J_1$ along a certain codimension $2$ subvariety $P\subset J_1$ is isomorphic to a moduli space of sheaves on $S$. \begin{nonumbering}[=Theorem \ref{main2}] Let $h_1\in A^1(J_1)$ be the polarization class, let $h_2\in A^1(J_1)$ be $\pi^\ast(d)$ where $d\subset\mathbb{P}^3$ is a hyperplane class, and let $P\subset J_1$ be as above. Let $R^\ast(J_1)$ be the $\mathbb{Q}$-subalgebra \[ R^\ast(J_1):= \langle h_1, h_2, P, c_j(T_{J_1})\rangle\ \ \ \ \subset\ A^\ast(J_1)\ .\] The cycle class map induces an injection \[ R^\ast(J_1)\ \hookrightarrow\ H^\ast(J_1,\mathbb{Q})\ .\] \end{nonumbering} In particular, Conjecture \ref{conjbv} is true for the very general sixfold $J_1$. Theorem \ref{main2} is also in agreement with Conjecture \ref{conjv}, because $P$ (being a $\mathbb{P}^2$-bundle over $S$) has codimension $2$ and is swept out by constant cycle surfaces. In proving Theorem \ref{main2}, we rely on (a sharpening of) a recent result of B\"ulles \cite{Bul}, combined with results on the Franchetta property from \cite{FLV}. For Lagrangian fibrations of higher dimension (such as the tenfolds of \cite{LSV} or \cite{V18}), the argument of the present note quickly runs into problems: this is because the Franchetta property is not known outside of a few selected cases (e.g., for the tenfolds of \cite{V18}, in view of \cite{LPZ} one would need the Franchetta property for the 5th relative power of cubic fourfolds; this is currently unknown and perhaps not even true). 
\vskip0.6cm \begin{convention} In this note, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$. For a smooth variety $X$, we will denote by $A^j(X)$ the Chow group of codimension $j$ cycles on $X$ with $\mathbb{Q}$-coefficients. The notation $A^j_{hom}(X)$ will be used to indicate the subgroups of homologically trivial cycles. For a morphism between smooth varieties $f\colon X\to Y$, we will write $\Gamma_f\in A^\ast(X\times Y)$ for the graph of $f$, and ${}^t \Gamma_f\in A^\ast(Y\times X)$ for the transpose correspondence. The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted $\mathcal M_{\rm rat}$. \end{convention} \section{Preliminaries} \subsection{B\"ulles' result revisited} The following theorem is a slight sharpening of a result of B\"ulles \cite{Bul}: \begin{theorem} \label{bul} Let $S$ be a projective K3 surface or an abelian surface, and $\alpha\in Br(S)$ a Brauer class. Let $M$ be a smooth projective moduli space of Gieseker stable $\alpha$-twisted sheaves on $S$, of dimension $\dim M=2m$. There is an inclusion as direct summand \[ h(M)\ \hookrightarrow\ \bigoplus_{i=1}^r h(S^{k_i})(\ell_i)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\] where $\ell_i\in\mathbb{Z}$ and $1\le k_i\le m$. \end{theorem} \begin{proof} We follow B\"ulles' proof, with a slight twist to get a better bound on the integers $k_i$ (in \cite[Theorem 0.1]{Bul}, the $k_i$ are bounded by $2m$). Let \[ [\hbox{Ext}^!_\pi]:=\sum_i (-1)^i [\hbox{Ext}^i_\pi(\mathcal E,\mathcal F)] \ \ \in\ K_0(M\times M) \] be as in \cite[Proof of Theorem 0.1]{Bul}. Then (as explained in loc. cit.) a result of Markman's gives the equality \begin{equation}\label{mm} \Delta_M = c_{2m}(- [\hbox{Ext}^!_\pi]) \ \ \ \hbox{in}\ A^{2m}(M\times M)\ .\end{equation} As in loc. cit., we consider the two-sided ideal in the ring of correspondences \[ I:= \bigcup_{k\ge 1} I_k \ \ \ \subset \ A^{\ast}(M\times M)\ ,\] where $I_k$ is defined as \[ I_k:= \langle \beta\circ\alpha\ \big\vert\ \alpha\in A^\ast(M\times S^\ell)\ ,\ \beta\in A^\ast( S^\ell\times M)\ ,\ 1\le \ell\le k \rangle\ .\] B\"ulles shows \cite[Proof of Theorem 0.1]{Bul} that $I$ is closed under intersection product, and more precisely that $I_k\cdot I_\ell\subset I_{k+\ell}$. In addition, let us state a lemma: \begin{lemma}\label{div} Let $\gamma\in I_k$ (for some $k\ge 1$), and $\delta\in A^1(M\times M)$. Then \[ \gamma\cdot\delta\ \ \in\ I_k\ .\] \end{lemma} \begin{proof} Since the irregularity $q(M)=0$, every divisor $\delta\in A^1(M\times M)$ can be written as a sum of pullbacks $(p_i)^\ast(D_i)$ under the two projections $p_i\colon M\times M\to M$. We may thus suppose $\delta$ is of the form $D\times M$ or $M\times D$, where $D\subset M$ is an irreducible reduced divisor. Let $\iota\colon D\to M$ denote the inclusion morphism. We have \[ \begin{split} \gamma\cdot\delta=\gamma\cdot (D\times M) &= (\iota\times\hbox{id})_\ast(\iota\times\hbox{id})^\ast(\gamma)= (\Gamma_\iota\times\Delta_M)_\ast ({}^t \Gamma_\iota\times\Delta_M)_\ast (\gamma)\\ &= \Gamma_\iota\circ {}^t \Gamma_\iota\circ \gamma \ \ \ \ \hbox{in}\ A^\ast(M\times M)\ ,\\ \end{split}\] where the last equality is by virtue of Lieberman's lemma \cite[Lemma 3.3]{V3}, \cite[Proposition 2.1.3]{MNP}. 
Similarly, in case $\delta$ is of the form $M\times D$, we find that \[ \gamma\cdot\delta=\gamma\cdot (M\times D)= \gamma\circ \Gamma_\iota\circ {}^t \Gamma_\iota\ \ \ \hbox{in}\ A^\ast(M\times M)\ .\] In both cases, it follows that $\gamma\in I_k$ implies that also $\gamma\cdot \delta\in I_k$. \end{proof} Let us write $c_n:=c_n(- [\hbox{Ext}^!_\pi])\in A^n(M\times M)$. As shown by B\"ulles, we have \begin{equation}\label{rel} c_n=(-1)^{n-1}(n-1)!\, ch_n + p(c_1,\ldots,c_{n-1})\ \ \ \hbox{in}\ A^{n}(M\times M)\ \ \ \forall n\ge 1\ .\end{equation} Here $ch_n$ denotes the degree $n$ part of the Chern character $ch(- [\hbox{Ext}^!_\pi])\in A^\ast(M\times M)$, and $p$ is some weighted homogeneous polynomial of degree $n$. We have $ch_n\in I_1$ for all $n\ge 1$. In particular, for $n=2$ we find that \[ c_2 = {1\over 2} c_1^2 - ch_2\ \ \ \hbox{in}\ A^{2}(M\times M)\ .\] The class $c_1=ch_1$ is in $I_1$ and so (using lemma \ref{div}) $c_1^2$ is also in $I_1$. It follows that \[ c_2\ \ \in\ I_1\ .\] Likewise, $c_3$ can be expressed in terms of $ch_3\in I_1$ and $c_1^3\in I_1$ and $c_1\cdot c_2\in I_1$, and so (again using lemma \ref{div}) we see that \[ c_3\ \ \in I_1\ .\] We now make the claim that \begin{equation}\label{thispol} p(c_1,\ldots,c_{n-1}) \ \ \in\ I_{\lfloor {n\over 2}\rfloor}\ \end{equation} for any weighted homogeneous polynomial $p$ of degree $n\ge 2$. Let us prove this claim by induction. From what we have just checked, it is clear that the claim is true for $n=2, 3$. Let us now suppose $n\ge 4$. The polynomial $p$ can be decomposed \[ p(c_1,\ldots,c_{n-1})= \lambda c_1\cdot c_{n-1}+\mu c_1^2c_{n-2} +\nu c_2 c_{n-2} + c_1^2 q(c_1,\ldots,c_{n-3})+ c_2 r(c_1,\ldots,c_{n-3})\ ,\] where $\lambda, \mu,\nu\in\mathbb{Q}$ and $q$ and $r$ are weighted homogeneous polynomials of degree $n-2$. By the induction hypothesis combined with (\ref{rel}), we know that $c_{n-1}$ and $c_{n-2}$ are in $I_{\lfloor {n\over 2}\rfloor}$; using lemma \ref{div} this implies that the terms $ c_1\cdot c_{n-1}$ and $ c_1^2c_{n-2}$ are in $I_{\lfloor {n\over 2}\rfloor}$. By the induction hypothesis, $c_{n-2}\in I_{\lfloor {n-2\over 2}\rfloor}$; since $c_2\in I_1$ this gives $ c_2 c_{n-2}\in I_{\lfloor {n\over 2}\rfloor}$. Again by the induction hypothesis, the polynomials $q$ and $r$ are in $ I_{\lfloor {n-2\over 2}\rfloor}$. Using lemma \ref{div}, it follows that $ c_1^2 q(c_1,\ldots,c_{n-3})\in I_{\lfloor {n-2\over 2}\rfloor}$. Using the fact that $c_2\in I_1$, it follows that $c_2 r(c_1,\ldots,c_{n-3})\in I_{\lfloor {n\over 2}\rfloor}$. Altogether, this proves the claim (\ref{thispol}). Claim (\ref{thispol}), combined with relation (\ref{rel}) and the fact that $ch_n\in I_1$, implies that \[ c_n\ \ \in\ I_{\lfloor {n\over 2}\rfloor}\ \ \ \forall n\ge 2\ .\] In view of equality (\ref{mm}), it follows that \[ \Delta_M\ \ \in\ I_m\cap A^{2m}(M\times M)\ ,\] which proves the theorem. \end{proof} \begin{remark} As noted by B\"ulles \cite[Remark 2.1]{Bul}, Theorem \ref{bul} is also valid for moduli spaces of $\sigma$-stable objects on a K3 surface or abelian surface, where $\sigma$ is a generic stability condition. For instance, Theorem \ref{bul} applies to Ouchi's eightfolds \cite{Ouch} and to the Laza--Sacc\`{a}--Voisin tenfolds \cite{LSV}. In \cite{FLV2}, we use Theorem \ref{bul} to prove the generalized Franchetta conjecture for Lehn--Lehn--Sorger--van Straten eightfolds. 
\end{remark} \subsection{The Franchetta property} \begin{definition} Let $\pi\colon\mathcal X\to B$ be a smooth projective family of varieties, and let us write $X_b:=\pi^{-1}(b)$ for a fibre. We say that the family $\pi\colon\mathcal X\to B$ has the {\em Franchetta property\/} if for any $\Gamma\in A^\ast(\mathcal X)$ there is equivalence \[ \begin{split}\Gamma\vert_{X_b}=0\ \ \hbox{in}\ H^\ast(X_b)\ \ \ \hbox{for\ $b\in B$\ very\ general}\ \ \iff\ \ \Gamma\vert_{X_b}=0\ \ \hbox{in}\ A^\ast(X_b) \ \ \ \hbox{for}&\\ \hbox{$b\in B$\ very\ general}&\ .\\ \end{split}\] \end{definition} \begin{remark} In view of \cite[Lemma 3.2]{Vo}, the vanishing $ \Gamma\vert_{X_b}=0$ in $A^\ast(X_b)$ for $b\in B$ very general is equivalent to the vanishing $ \Gamma\vert_{X_b}=0$ in $A^\ast(X_b)$ for all $b\in B$. \end{remark} \begin{notation}\label{not} Let $\mathbb{P}$ denote weighted projective space $\mathbb{P}(1^3,3)$. Let $\mathcal S_{g2}\to B_{g2}$ denote the universal family of K3 surfaces of genus $2$, where \[B_{g2}\subset \mathbb{P} H^0(\mathbb{P},\mathcal O_\mathbb{P}(6))\] is the Zariski open parametrizing smooth sections. Let $\mathcal S_{g3}\to B_{g3}$ denote the universal family of K3 surfaces of genus $3$, where \[B_{g3}\subset \mathbb{P} H^0(\mathbb{P}^3,\mathcal O_{\mathbb{P}^3}(4))\] is the Zariski open parametrizing smooth sections. \end{notation} \begin{notation} For any family $\mathcal X\to B$ and $m\in\mathbb{N}$, we write $\mathcal X^{m/B}:=\mathcal X\times_B\cdots\times_B \mathcal X$ for the $m$-fold fibre product. \end{notation} \begin{theorem}[\cite{FLV}]\label{gfc} The families $\mathcal S_{g2}^{m/B_{g2}}\to B_{g2}$, $m\le 3$ and $\mathcal S_{g3}^{m/B_{g3}}\to B_{g3}$, $m\le 5$ have the Franchetta property. \end{theorem} \begin{proof} This is (part of) \cite[Theorem 1.5]{FLV}. \end{proof} \section{Multiplicative Chow--K\"unneth decomposition} \begin{definition}[Murre \cite{Mur}]\label{ck} Let $X$ be a smooth projective variety of dimension $n$. We say that $X$ has a {\em CK decomposition\/} if there exists a decomposition of the diagonal \[ \Delta_X= \pi^0_X+ \pi^1_X+\cdots +\pi^{2n}_X\ \ \ \hbox{in}\ A^n(X\times X)\ ,\] such that the $\pi^i_X$ are mutually orthogonal idempotents and $(\pi^i_X)_\ast H^\ast(X)= H^i(X)$. Given a CK decomposition for $X$, we set $$A^i_{(j)}(X) := (\pi_X^{2i-j})_\ast A^i(X).$$ The CK decomposition is said to be {\em self-dual\/} if \[ \pi^i_X = {}^t \pi^{2n-i}_X\ \ \ \hbox{in}\ A^n(X\times X)\ \ \ \forall i\ .\] (Here ${}^t \pi$ denotes the transpose of a cycle $\pi$.) (NB: ``CK decomposition'' is short-hand for ``Chow--K\"unneth decomposition''.) \end{definition} \begin{remark} \label{R:Murre} The existence of a Chow--K\"unneth decomposition for any smooth projective variety is part of Murre's conjectures \cite{Mur}, \cite{MNP}. It is expected that for any $X$ with a CK decomposition, one has \begin{equation*}\label{hope} A^i_{(j)}(X)\stackrel{??}{=}0\ \ \ \hbox{for}\ j<0\ ,\ \ \ A^i_{(0)}(X)\cap A^i_{num}(X)\stackrel{??}{=}0. \end{equation*} These are Murre's conjectures B and D, respectively. \end{remark} \begin{definition}[Definition 8.1 in \cite{SV}]\label{mck} Let $X$ be a smooth projective variety of dimension $n$. 
Let $\Delta_X^{sm}\in A^{2n}(X\times X\times X)$ be the class of the small diagonal \[ \Delta_X^{sm}:=\bigl\{ (x,x,x) : x\in X\bigr\}\ \subset\ X\times X\times X\ .\] A CK decomposition $\{\pi^i_X\}$ of $X$ is {\em multiplicative\/} if it satisfies \[ \pi^k_X\circ \Delta_X^{sm}\circ (\pi^i_X\otimes \pi^j_X)=0\ \ \ \hbox{in}\ A^{2n}(X\times X\times X)\ \ \ \hbox{for\ all\ }i+j\not=k\ .\] In that case, \[ A^i_{(j)}(X):= (\pi_X^{2i-j})_\ast A^i(X)\] defines a bigraded ring structure on the Chow ring\,; that is, the intersection product has the property that \[ \hbox{Im} \Bigl(A^i_{(j)}(X)\otimes A^{i^\prime}_{(j^\prime)}(X) \xrightarrow{\cdot} A^{i+i^\prime}(X)\Bigr)\ \subseteq\ A^{i+i^\prime}_{(j+j^\prime)}(X)\ .\] (For brevity, we will write {\em MCK decomposition\/} for ``multiplicative Chow--K\"unneth decomposition''.) \end{definition} \begin{remark} The property of having an MCK decomposition is severely restrictive, and is closely related to Beauville's ``splitting property'' conjecture \cite{Beau3}. Examples of varieties admitting an MCK decomposition include hyperelliptic curves, K3 surfaces, abelian varieties, cubic hypersurfaces. For more ample discussion and more examples, we refer to \cite[Chapter 8]{SV}, as well as \cite{V6}, \cite{SV2}, \cite{FTV}, \cite{FV}, \cite{LV}, \cite{FLV}. \end{remark} There are the following useful general results: \begin{theorem}[Shen--Vial \cite{SV}]\label{square} Let $X$ be a hyperk\"ahler fourfold that is birational to a Hilbert square $S^{[2]}$ where $S$ is a K3 surface. Then $X$ has an MCK decomposition. \end{theorem} \begin{proof} The statement for $S^{[2]}$ is \cite[Theorem 13.4]{SV} (a more general result is \cite[Theorem 1]{V6}). The statement for $X$ then follows by applying the result of Rie\ss\ \cite{Rie} (as duly noted in \cite[Introduction]{V6}). \end{proof} \begin{proposition}[Shen--Vial \cite{SV}]\label{product} Let $M,N$ be smooth projective varieties that have an MCK decomposition. Then the product $M\times N$ has an MCK decomposition. \end{proposition} \begin{proof} This is \cite[Theorem 8.6]{SV}, which shows more precisely that the {\em product CK decomposition\/} \[ \pi^i_{M\times N}:= \sum_{k+\ell=i} \pi^k_M\times \pi^\ell_N\ \ \ \in A^{\dim M+\dim N}\bigl((M\times N)\times (M\times N)\bigr) \] is multiplicative. \end{proof} \begin{theorem}[Shen--Vial \cite{SV2}]\label{blowup} Let $M$ be a smooth projective variety, and let $f\colon\widetilde{M}\to M$ be the blow--up with center a smooth closed subvariety $N\subset M$. Assume that \begin{enumerate} \item $M$ and $N$ have a self-dual MCK decomposition; \item the Chern classes of the normal bundle $\mathcal N_{N/M}$ are in $A^\ast_{(0)}(N)$; \item the graph of the inclusion morphism $N\to M$ is in $A^\ast_{(0)}(N\times M)$. \end{enumerate} Then $\widetilde{M}$ has a self-dual MCK decomposition, and \[ f^\ast A^\ast_{(j)}(M)\ \subset\ A^\ast_{(j)}(\widetilde{M})\ ,\ \ \ f_\ast A^\ast_{(j)}(\widetilde{M})\ \subset\ A^\ast_{(j)}({M})\ .\] \end{theorem} \begin{proof} This is \cite[Proposition 2.4]{SV2}. \end{proof} \section{Examples in dimension 4} \begin{theorem}\label{main1} Let $X$ be a hyperk\"ahler fourfold, and assume that $X$ admits a Lagrangian fibration $\pi\colon X\to B$ which is a compactified Jacobian of a family of curves. Let $A$ be a general fibre of $\pi$. 
Then \[ \hbox{Im} \bigl( A^2(X)\xrightarrow{\cdot A} A^4(X)\bigr)=\mathbb{Q}[c_4(T_X)]\ .\] \end{theorem} \begin{proof} Thanks to a result of Markushevich \cite[Theorem 1.1]{Mark}, we know that $B\cong\mathbb{P}^2$ and $X\cong J_0$, where $J_0$ is the compactified Jacobian of the genus $2$ curves arising as hyperplane sections of a genus $2$ K3 surface $S$. The fibration $\pi\colon J_0\to\mathbb{P}^2$ occurs in \cite[Example 0.6]{Mu}, where it is shown that there is a birational map \[ J_0\ \dashrightarrow\ S^{[2]} \] which is a Mukai flop. Precisely, $S^{[2]}$ contains a subvariety $P\cong\mathbb{P}^2$ (defined as the pairs of points in $S$ that are in the same fibre of the double cover $S\to\mathbb{P}^2$). There are birational transformations \[ J_0\ \xleftarrow{r}\ \widetilde{J_0}\ \xrightarrow{s}\ S^{[2]}\ ,\] where $s$ is the blow-up with center $P\subset S^{[2]}$, and $r$ is the blow-down of the exceptional divisor of $s$ onto a closed subvariety $P^\prime\subset J_0$. The theorem will follow by combining the following 4 claims: \begin{claim}\label{c0} The variety $\widetilde{J_0}$ has an MCK decomposition, and this induces a splitting $A^2(\widetilde{J_0})=A^2_{(0)}(\widetilde{J_0})\oplus A^2_{(2)}(\widetilde{J_0})$. \end{claim} \begin{claim}\label{c1} Let $A$ be a fibre of the Lagrangian fibration $\pi\colon J_0\to\mathbb{P}^2$. Then \[ r^\ast(A)\ \ \in\ A^2_{(0)}(\widetilde{J_0})\ .\] \end{claim} \begin{claim}\label{c2} Let $A$ be a general fibre of $\pi\colon J_0\to\mathbb{P}^2$. The map \[ A^2_{(2)}(\widetilde{J_0})\ \xrightarrow{\cdot r^\ast(A)}\ A^4(\widetilde{J_0}) \] is zero. \end{claim} \begin{claim}\label{c3} One has \[r^\ast c_4(T_{J_0})\in A^4_{(0)}(\widetilde{J_0}) \ .\] \end{claim} Let us show that these claims imply the theorem; Since $A^2(\widetilde{J_0})=A^2_{(2)}(\widetilde{J_0})\oplus A^2_{(0)}(\widetilde{J_0})$, we have \[ \hbox{Im} \bigl( A^2(\widetilde{J_0})\xrightarrow{\cdot r^\ast(A)} A^4(\widetilde{J_0})\bigr) = A^2_{(2)}(\widetilde{J_0})\cdot r^\ast(A) + A^2_{(0)}(\widetilde{J_0})\cdot r^\ast(A)\ .\] Using Claim \ref{c2}, this reduces to \[ \hbox{Im} \bigl( A^2(\widetilde{J_0})\xrightarrow{\cdot r^\ast(A)} A^4(\widetilde{J_0})\bigr) = A^2_{(0)}(\widetilde{J_0})\cdot r^\ast(A)\ .\] Using Claim \ref{c1}, we see that $A^2_{(0)}(\widetilde{J_0})\cdot r^\ast(A)$ is contained in $A^4_{(0)}(\widetilde{J_0})\cong\mathbb{Q}$. Claim \ref{c3}, plus the fact that $c_4(T_{J_0})$ has strictly positive degree, then implies that \[ A^2(\widetilde{J_0})\cdot r^\ast(A) \ \ \in\ \mathbb{Q}[ r^\ast c_4(T_{J_0})]\ .\] Pushing forward to $J_0$, this gives an inclusion \[ A^2({J_0})\cdot A \ \ \in\ \mathbb{Q}[ c_4(T_{J_0})]\ \ \subset\ A^4(J_0)\ .\] Since the left-hand side is one-dimensional (the intersection of $A$ with 2 ample divisors has strictly positive degree), this inclusion is an equality, proving the theorem. It remains to prove the claims. To prove Claim \ref{c0}, we use Theorem \ref{blowup} with $M=S^{[2]}$ and $N=P\cong\mathbb{P}^2$. Points (1) and (2) are clearly satisfied. For point (3), we note that $S^{[2]}$ and $P$ have a ``universal MCK decomposition'', i.e. there exist \[ \pi^i_{\mathcal S^{[2]}\times_B \mathcal P}\ \ \ \in\ A^6(\mathcal S^{[2]}\times_B \mathcal P)\ \ , \ \ i=0,\ldots,12\ ,\] such that for each $b\in B$ the restriction \[ \pi^i_{(S_b)^{[2]}\times P_b }:= \pi^i_{\mathcal S^{[2]}\times_B \mathcal P}\vert_b\ \ \ \in\ A^6((S_b)^{[2]}\times P_b )\ \] defines an MCK decomposition for $(S_b)^{[2]}\times P_b$. 
Let $\iota\colon \mathcal P\to \mathcal S^{[2]}$ denote the inclusion morphism, and $\iota_b\colon P_b\to (S_b)^{[2]}$ the restriction to a fibre. For any $k\not=8$, we have that \[ (\pi^k_{(S_b)^{[2]}\times P_b })_\ast (\Gamma_{\iota_b}) = \Bigl( (\pi^k_{\mathcal S^{[2]}\times_B \mathcal P})_\ast (\Gamma_\iota)\Bigr)\vert_b\ \ \ \in\ A^4((S_b)^{[2]}\times P_b) \] is homologically trivial. Theorem \ref{gfc} then implies that it is rationally trivial, and so \[ \Gamma_{\iota_b}= (\pi^8_{(S_b)^{[2]}\times P_b })_\ast (\Gamma_{\iota_b}) \ \ \ \hbox{in}\ A^4((S_b)^{[2]}\times P_b)\ \ \ \forall b\in B \ .\] We have now checked that the conditions of Theorem \ref{blowup} are satisfied, and so $\widetilde{J_0}$ has an MCK decomposition. The ``blow-up'' isomorphism $A^2(\widetilde{J_0})\cong A^2(S^{[2]})\oplus A^1(P)$ is homogeneous with respect to the lower grading. Since $A^2_{(j)}(S^{[2]})=0$ for $j\not\in\{0,2\}$ and $A^1(P)=A^1_{(0)}(P)$, this shows the second part of claim \ref{c0}. Claim \ref{c1} is elementary: writing $A=\pi^{-1}(x)$ where $x\in\mathbb{P}^2$, we see that $r^\ast(A)=r^\ast\pi^\ast(x)$ in $A^2(\widetilde{J_0})$ is an intersection of divisors, which proves the claim. To prove the other 2 claims, we consider things familywise. That is, we let $\mathcal S\to B$ denote the universal family of genus $2$ K3 surfaces as in notation \ref{not}, and we write $\mathcal S^{[2]}\to B$ for the universal family of Hilbert squares of genus $2$ K3 surfaces. There are morphisms of $B$-schemes \[ \mathcal J\ \xleftarrow{r}\ \widetilde{\mathcal J}\ \xrightarrow{s}\ \mathcal S^{[2]}\ ,\] such that restriction to a fibre gives the Mukai flop mentioned above. The morphism $r$ is the blow-up with center $\mathcal P^\prime$ (which is a $\mathbb{P}^2$-bundle over $B$), and the morphism $s$ is the blow-up with center $\mathcal P$ (which is again a $\mathbb{P}^2$-bundle over $B$). We now establish the following result: \begin{proposition}\label{gfc4} Let $\widetilde{\mathcal J}\to B$ be as above. The families $\widetilde{\mathcal J}^{}\to B$ and $\widetilde{\mathcal J}\times_B \mathcal S\to B$ have the Franchetta property. \end{proposition} \begin{proof} For the first family, one notes that there is a commutative diagram \[ \begin{array}[c]{ccc} A^i(\widetilde{\mathcal J}) & \xrightarrow{}& A^i(\mathcal S^{[2]})\oplus A^{i-1}(\mathcal P)\\ &&\\ \downarrow&&\downarrow\\ &&\\ A^i(\widetilde{J}_b) & \xrightarrow{\cong}& A^i((S_b)^{[2]})\oplus A^{i-1}(P_b)\ .\\ \end{array}\] (Here, we write $\widetilde{J}_b, S_b, P_b$ for the fibre over $b\in B$ of the family $\widetilde{\mathcal J}$ resp. $\mathcal S$ resp. $\mathcal P$.) The family $\mathcal S^{[2]}\to B$ has the Franchetta property (theorem \ref{gfc}), and $P_b\cong\mathbb{P}^2$ so the family $\mathcal P\to B$ trivially has the Franchetta property. This settles the Franchetta property for $\widetilde{\mathcal J}\to B$. 
For the second family, there is a similar commutative diagram \[ \begin{array}[c]{ccc} A^i(\widetilde{\mathcal J}\times_B \mathcal S) & \xrightarrow{}& A^i(\mathcal S^{[2]}\times_B \mathcal S)\oplus A^{i-1}(\mathcal P\times_B \mathcal S)\\ &&\\ \downarrow&&\downarrow\\ &&\\ A^i(\widetilde{J}_b\times S_b) & \xrightarrow{\cong}& A^i((S_b)^{[2]}\times S_b)\oplus A^{i-1}(P_b\times S_b)\ .\\ \end{array}\] The family $\mathcal S^{[2]}\times_B\mathcal S\to B$ has the Franchetta property (theorem \ref{gfc}, or more exactly \cite[Theorem 1.5]{FLV}), and so does the family $\mathcal P\times_B \mathcal S\to B$ (using the projective bundle formula, one reduces to $\mathcal S\to B$). This settles the Franchetta property for $\widetilde{\mathcal J}\times_B \mathcal S\to B$. \end{proof} Let us now prove the two remaining claims. We will rely on the existence of an MCK decomposition that is {\em generically defined\/} for the family $\widetilde{\mathcal J}\to B$, in the following sense: \begin{lemma}\label{umck} Let $\widetilde{\mathcal J}\to B$ be as above. There exist \[ \pi^i_{\widetilde{\mathcal J}}\ \ \ \in\ A^4(\widetilde{\mathcal J}\times_B \widetilde{\mathcal J})\ \ , \ \ i=0,\ldots,8\ ,\] such that for each $b\in B$ the restriction \[ \pi^i_{\widetilde{J_b}}:= \pi^i_{\widetilde{\mathcal J}}\vert_b\ \ \ \in\ A^4(\widetilde{J}_b\times \widetilde{J}_b)\ \] defines an MCK decomposition for $\widetilde{J}_b$. \end{lemma} \begin{proof} The Hilbert squares $(S_b)^{[2]}$ have an MCK decomposition that exists universally (this is just because the ``distinguished $0$-cycle'' of \cite{BV} exists universally). Looking at the argument of Theorem \ref{blowup} (i.e. the proof of \cite[Proposition 2.4]{SV2}), one sees that the induced MCK decomposition for the blow-up $\widetilde{J_b}$ exists universally as well. \end{proof} To prove Claim \ref{c3}, we observe that \[ (r_b)^\ast c_4(T_{J_b})=\bigl( r^\ast c_4(T_{\mathcal J/B})\bigr)\vert_b\ \ \ \in A^4(\widetilde{J}_b)\ \] is universally defined. This forces $(r_b)^\ast c_4(T_{J_b})$ to lie in $ A^4_{(0)}(\widetilde{J}_b) $: for any $k\not=8$, we have that \[ (\pi^k_{\widetilde{J}_b})_\ast \bigl((r_b)^\ast c_4(T_{J_b})\bigr) = \Bigl( (\pi^k_{\widetilde{\mathcal J}})_\ast (r^\ast c_4(T_{\mathcal J/B})) \Bigr)\vert_b \ \ \ \in A^4(\widetilde{J}_b) \] is homologically trivial, for all $b\in B$. In view of Proposition \ref{gfc4}, this implies \[ (\pi^k_{\widetilde{J}_b})_\ast \bigl((r_b)^\ast c_4(T_{J_b})\bigr)=0\ \ \ \hbox{in}\ A^4(\widetilde{J}_b)\ \ \ \forall b\in B\ ,\ \ \ \forall k\not=8\ ,\] proving claim \ref{c3}. To prove Claim \ref{c2}, let $\mathcal A\subset \mathcal J$ denote a general fibre of $\pi\colon\mathcal J\to \cup_{b\in B} \vert \mathcal O_{J_b}(1)\vert$, and let $\widetilde{\mathcal A}$ denote a general fibre of $\pi\circ r$. Let us write $\tau\colon \widetilde{\mathcal A}\to\widetilde{\mathcal J}$ for the inclusion. We are interested in the correspondence \[ \Gamma_b:= \Gamma_{\tau_b} \circ {}^t \Gamma_{\tau_b} \circ \pi^2_{\widetilde{J}_b} \ \ \ \in\ A^6(\widetilde{J}_b\times \widetilde{J}_b)\ ,\] which by construction is such that \[ (\Gamma_b)_\ast A^2(\widetilde{J}_b) = A^2_{(2)}(\widetilde{J}_b)\cdot (r_b)^\ast (A_b)\ .\] The correspondence $\Gamma_b$ is universally defined, i.e.
there exists $\Gamma\in A^6(\widetilde{\mathcal J}\times_B \widetilde{\mathcal J})$ such that \[ \Gamma_b = \Gamma\vert_b\ \ \ \in\ A^6(\widetilde{J}_b\times \widetilde{J}_b)\ \ \ \forall b\in B\ .\] Since $A_b\subset J_b$ is Lagrangian, the cup product of $A_b$ with $H^{2,0}(J_b)$ is zero. By a standard Hodge theory argument, this means that the cup product of $A_b$ with the transcendental cohomology $H^2_{tr}(J_b)$ is also zero. Since $H^2_{tr}$ is a birational invariant, the same holds on $\widetilde{J_b}$, and so the map \[ H^2(\widetilde{J_b})\ \xrightarrow{\cdot\widetilde{A_b}}\ H^6(\widetilde{J_b}) \] is the same as the map \[ N^1 H^2(\widetilde{J_b})\ \xrightarrow{\cdot\widetilde{A_b}}\ N^3 H^6(\widetilde{J_b}) \] (where $N^i H^{2i}()$ denotes the algebraic classes in cohomology). It follows that there exist (for each $b\in B$) a finite union of curves $C_b\subset\widetilde{J}_b$ and a cycle $\gamma_b$ supported on $C_b\times C_b$ such that \[ \Gamma_b=\gamma_b\ \ \ \hbox{in}\ H^{12}(\widetilde{J}_b\times\widetilde{J}_b)\ .\] (Indeed, for $C_b$ one can take a basis of $N^3 H^6(\widetilde{J_b})$, and add curves forming a dual basis to $N^1 H^2(\widetilde{J_b})$.) Using Voisin's Hilbert schemes argument as in \cite[Proposition 3.7]{V0}, these fibrewise data can be spread out, i.e. there exist a finite union of codimension $3$ closed subvarieties $\mathcal C\subset\widetilde{\mathcal J}$ and a cycle $\gamma$ supported on $\mathcal C\times_B \mathcal C\subset \widetilde{\mathcal J}\times_B \widetilde{\mathcal J}$ with the property that \begin{equation}\label{homtriv} (\Gamma -\gamma)\vert_b =0\ \ \ \hbox{in}\ H^{12}(\widetilde{J}_b\times\widetilde{J}_b)\ \ \ \forall b\in B\ .\end{equation} At this point, we need another lemma: \begin{lemma}\label{ll} Set-up as above. There exist relative correspondences \[ \Theta_1\ ,\ \Theta_2\in A^{4}(\mathcal S\times_B \widetilde{\mathcal J})\ ,\ \ \ \Xi_1\ , \ \Xi_2\in A^{2}(\widetilde{\mathcal J}\times_B \mathcal S) \] such that for each $b\in B$, the composition \[ A^{2}_{(2)}(\widetilde{J_b})\ \xrightarrow{\bigl((\Xi_1\vert_b)_\ast, (\Xi_2\vert_b)_\ast\bigr)}\ A^2(S_b)\oplus A^2(S_b) \xrightarrow{\bigl((\Theta_1+\Theta_2)\vert_b\bigr)_\ast}\ A^{2}(\widetilde{J_b}) \] is the identity. \end{lemma} \begin{proof} By virtue of Theorem \ref{blowup}, the isomorphism \[ A^2(\widetilde{J_b})\ \cong\ A^2\bigl((S_b)^{[2]}\bigr)\oplus A^1(P_b) \] respects the bigrading. Since $A^1_{(2)}(P_b)=0$, it follows that \[ A^2_{(2)}(\widetilde{J_b})\ \xrightarrow{(s_b)_\ast}\ A^2_{(2)}\bigl((S_b)^{[2]}\bigr)\ \xrightarrow{(s_b)^\ast}\ A^2_{(2)}(\widetilde{J}_b)\ \] is the identity. Let $\Psi_b\in A^4\bigl( (S_b)^{[2]}\times (S_b)^2\bigr)$ be the correspondence such that $(\Psi_b)^\ast(\Psi_b)_\ast=\hbox{id}$ on $A^2_{(2)}\bigl((S_b)^{[2]}\bigr)$. This $\Psi_b$ is obviously the restriction of a relative correspondence $\Psi$ (cf. for instance \cite[Proof of Corollary 3.4]{mmj}). The argument of \cite[Proposition 2.15]{mmj} gives that $A^2_{(2)}\bigl( (S_b)^2\bigr)$ factors (via universally defined correspondences) over $ A^2(S_b)\oplus A^2(S_b)$. Composing with $\Psi\circ \Gamma_s$ and its transpose, we obtain the required relative correspondences. \end{proof} Let us now return to the relative correspondence $\Gamma-\gamma\in A^6(\widetilde{\mathcal J}\times_B \widetilde{\mathcal J})$ constructed above.
We define the compositions \[ \Gamma_i:= (\Gamma-\gamma)\circ \Theta_i\ \ \ \in\ A^6(\mathcal S\times_B \widetilde{\mathcal J})\ \ \ (i=1,2)\ .\] In view of (\ref{homtriv}), these correspondences are fibrewise homologically trivial: \[ (\Gamma_i)\vert_b=0\ \ \ \hbox{in}\ H^{12}(S_b\times\widetilde{J_b})\ \ \ \forall b\in B\ \ \ (i=1,2)\ .\] Applying proposition \ref{gfc4}, it follows that they are fibrewise rationally trivial: \[ (\Gamma_i)\vert_b=0\ \ \ \hbox{in}\ A^{6}(S_b\times\widetilde{J_b})\ \ \ \forall b\in B\ \ \ (i=1,2)\ .\] But then a fortiori \[ (\Gamma_i)\vert_b\circ (\Xi_i)\vert_b = (\Gamma-\gamma)\vert_b\circ (\Theta_i)\vert_b\circ (\Xi_i)\vert_b=0\ \ \ \hbox{in}\ A^{6}(\widetilde{J_b}\times\widetilde{J_b})\ \ \ \forall b\in B\ \ \ (i=1,2)\ .\] Taking the sum, this implies the fibrewise vanishing \[ (\Gamma-\gamma)\vert_b \circ (\Theta_1\circ\Xi_1+ \Theta_2\circ\Xi_2)\vert_b=0 \ \ \ \hbox{in}\ A^{6}(\widetilde{J_b}\times\widetilde{J_b})\ \ \ \forall b\in B\ .\] In view of Lemma \ref{ll}, we find that \[ \bigl( (\Gamma-\gamma)\vert_b\bigr){}_\ast=0\ \colon\ \ A^2_{(2)}(\widetilde{J_b})\ \to\ A^4(\widetilde{J_b})\ \ \ \forall b\in B\ .\] But the correspondence $\gamma\vert_b$ does not act on $A^2(\widetilde{J_b})$ for dimension reasons, and so \[ (\Gamma\vert_b){}_\ast=0\ \colon\ \ A^2_{(2)}(\widetilde{J_b})\ \to\ A^4(\widetilde{J_b})\ \ \ \forall b\in B\ .\] Since (by construction) $\Gamma\vert_b=\Gamma_b$ acts on $A^2_{(2)}(\widetilde{J_b})$ as multiplication by $A_b$, this proves claim \ref{c2}. \end{proof} \begin{remark} The fourfold $J_0$, being birational to $S^{[2]}$, has an MCK decomposition (theorem \ref{square}). In proving theorem \ref{main1}, it would be more natural to use this MCK decomposition of $J_0$, rather than the one of $\widetilde{J_0}$. However, when trying to do this one runs into the following problem: it is not clear whether the MCK decomposition of $J_0$ is universal (in the sense of lemma \ref{umck}); for this one would need to know that the correspondence $Z$ constructed in \cite{Rie} is universally defined. (On a related note, it is not clear whether the map $(r_b)^\ast\colon A^\ast(J_b)\to A^\ast(\widetilde{J_b})$ respects the bigrading coming from the two MCK decompositions, i.e. I have not been able to prove that $r_b$ is ``of pure grade 0'' in the sense of \cite{SV2}.) \end{remark} \section{Examples in dimension 6} \begin{theorem}[Mukai \cite{Mu}]\label{mu} Let $S\subset\mathbb{P}^3$ be a quartic K3 surface, and assume that every element in $\vert \mathcal O_S(1)\vert$ is irreducible. Let $\pi\colon J_1\to \vert \mathcal O_S(1)\vert\cong\mathbb{P}^3$ be the component of the compactified Picard scheme that parametrizes torsion free degree $1$ line bundles $\xi$ on curves $C\in \vert \mathcal O_S(1)\vert$. The subset $P\subset J_1$ parametrizing line bundles $\xi$ such that $H^0(C,\xi)\not=0$ has the structure of a $\mathbb{P}^2$-bundle over $S$. The flop of $P\subset J_1$ is isomorphic to the moduli space $M_v(S)$, where $v$ is the Mukai vector $v=(3,\mathcal O_S(-1),0)$. \end{theorem} \begin{proof} This is \cite[Example 0.8]{Mu}. \end{proof} \begin{theorem}\label{main2} Let $J_1$ and $P$ be as in theorem \ref{mu}. Let $h_1\in A^1(J_1)$ be the polarization class, and let $h_2:=\pi^\ast(d)\in A^1(J_1)$ where $d\subset\mathbb{P}^3$ is a hyperplane class. 
Let $R^\ast(J_1)$ be the $\mathbb{Q}$-subalgebra \[ R^\ast(J_1):= \langle h_1, h_2, P, c_j(T_{J_1})\rangle\ \ \ \ \subset\ A^\ast(J_1)\ .\] The cycle class map induces an injection \[ R^\ast(J_1)\ \hookrightarrow\ H^\ast(J_1,\mathbb{Q})\ .\] \end{theorem} \begin{proof} Let $\mathcal J\to B$ be the universal family of sixfolds $J_1$ as in theorem \ref{mu} (here $B$ is some open subset of the parameter space $B_{g3}$ of notation \ref{not}). We will prove that $\mathcal J\to B$ has the Franchetta property. Since the classes defining the subring $R^\ast(J_1)$ are universally defined (i.e. they are restrictions of classes in $A^\ast(\mathcal J)$), this settles the theorem. We claim that there exist morphisms of $B$-schemes \[ \mathcal J\ \xleftarrow{r}\ \widetilde{\mathcal J}\ \xrightarrow{s}\ \mathcal M\ ,\] where $\mathcal M\to B$ is the universal moduli space with Mukai vector $v=(3,\mathcal O_S(-1),0)$, and $r\colon\widetilde{\mathcal J}\to\mathcal J$ is the blow-up of $\mathcal P$ (the relative version of $P$) and $s\colon\widetilde{\mathcal J}\to \mathcal M$ is the blow-up of $\mathcal P^\prime$ (the relative version of the dual $\mathbb{P}^2$-bundle $P^\prime\subset M_v$). To ascertain that $\mathcal M$ and $s$ exist as claimed, one may reason as follows: $\mathcal P\subset\mathcal J$ can obviously be defined and has the structure of a $\mathbb{P}^2$-bundle over $\mathcal S$. Let $r\colon\widetilde{\mathcal J}\to \mathcal J$ be the blow-up with center $\mathcal P$, and let $\mathcal E\subset\widetilde{\mathcal J}$ denote the exceptional divisor of $r$. This $\mathcal E$ maps to $\mathcal P^\prime$, which is the dual $\mathbb{P}^2$-bundle over $\mathcal S$. The Nakano--Fujiki criterion for the existence of a blow-down \cite{FN}, as used by Mukai \cite[Proof of Theorem 0.7]{Mu}, requires that the normal bundle of $\mathcal E\subset\widetilde{\mathcal J}$ restricts to the tautological bundle of the fibres $F_p$ of $\mathcal E\to\mathcal P^\prime$. Since $\mathcal N_{\mathcal E/\widetilde{\mathcal J}}\vert_{F_p}= \mathcal N_{E_b/\widetilde{\mathcal J}_b}\vert_{F_p}$, and the criterion is satisfied fibrewise, this condition is met. That is, thanks to Nakano--Fujiki we conclude that there exists a blow-down $s\colon\widetilde{\mathcal J}\to\mathcal M$ with $\mathcal M$ smooth and $s(\mathcal E)=\mathcal P^\prime$, as claimed. To prove the Franchetta property for $\mathcal J$, it suffices to prove the Franchetta property for $\widetilde{\mathcal J}$. On the other hand, the morphism $s$ is the blow-up with center $\mathcal P^\prime$, and $\mathcal P^\prime$ is a $\mathbb{P}^2$-bundle over $\mathcal S$. The formulae for Chow groups of blow-ups and projective bundles give a commutative diagram \[ \begin{array}[c]{ccc} A^i(\widetilde{\mathcal J}) & \xrightarrow{}& A^i(\mathcal M)\oplus A^{i-1}(\mathcal S)\oplus A^{i-2}(\mathcal S)\oplus A^{i-3}(\mathcal S)\\ &&\\ \downarrow&&\downarrow\\ &&\\ A^i(\widetilde{J}_b) & \xrightarrow{\cong}& A^i(M_b)\oplus A^{i-1}(S_b)\oplus A^{i-2}(S_b)\oplus A^{i-3}(S_b)\ .\\ \end{array}\] (Here, we write $\widetilde{J}_b, M_b, S_b$ for the fibre over $b\in B$ of the family $\widetilde{\mathcal J}$ resp. $\mathcal M$ resp. $\mathcal S$.) Since we already know the Franchetta property holds for $\mathcal S\to B$, the Franchetta property for $\widetilde{\mathcal J}\to B$ follows from that for $\mathcal M\to B$. Thus, the following result settles the proof of theorem \ref{main2}: \begin{proposition}\label{p} The family $\mathcal M\to B$ has the Franchetta property.
\end{proposition} To prove proposition \ref{p}, we use B\"ulles' result (Theorem \ref{bul}) to reduce to the family $\mathcal S^{3/B}:=\mathcal S\times_B \mathcal S\times_B \mathcal S$. That is, Theorem \ref{bul} tells us that for every $b\in B$ there exist correspondences \[ \Gamma_1^b,\ldots,\Gamma_r^b\ \ \in A^\ast(M_b\times (S_b)^{k_j})\ ,\ \ \ \Psi_1^b,\ldots,\Psi_r^b\ \ \in A^\ast( (S_b)^{k_j} \times M_b)\ \] with the property that \[ \Delta_{M_b}=\sum_{j=1}^r \Psi_j^b\circ \Gamma_j^b\ \ \ \hbox{in}\ A^6(M_b\times M_b)\ .\] Using a Hilbert schemes argument as in \cite[Proposition 3.7]{V0}, these fibrewise data can be spread out over the family, i.e. there exist relative correspondences \[ \Gamma_1,\ldots,\Gamma_r\ \ \in A^\ast(\mathcal M\times_B \mathcal S^{k_j/B})\ ,\ \ \ \Psi_1,\ldots,\Psi_r\ \ \in A^\ast( \mathcal S^{k_j/B} \times_B \mathcal M)\ \] with the property that \begin{equation}\label{prop} \Delta_{M_b}=\sum_{j=1}^r (\Psi_j\circ \Gamma_j)\vert_{M_b\times M_b}\ \ \ \hbox{in}\ A^6(M_b\times M_b)\ \ \ \forall b\in B\ .\end{equation} (Alternatively, instead of invoking a Hilbert schemes argument, one may observe that the cycles in \cite{Bul} are universal expressions in the Chern classes of a quasi-universal object, and thus naturally can be constructed in families. This is the same argument as \cite[Proof of Theorem 3.1]{FLV2}.) Now, given a cycle $\Gamma\in A^\ast(\mathcal M)$ which is homologically trivial on the very general fibre, the element \[ (\Gamma_1\circ \Gamma,\ldots,\Gamma_r\circ\Gamma)\ \ \in\ A^\ast(\mathcal S^{k_1/B})\oplus \cdots\oplus A^\ast(\mathcal S^{k_r/B}) \] will also be homologically trivial on the very general fibre. The families $\mathcal S^{k_j/B} $ have the Franchetta property by Theorem \ref{gfc} (note that $k_j\le 3$), and so it follows that \[ (\Gamma_1\circ \Gamma,\ldots,\Gamma_r\circ\Gamma)\vert_b=(0,\ldots,0)\ \ \hbox{in}\ \ A^\ast\bigl((S_b)^{k_1}\bigr)\oplus \cdots\oplus A^\ast\bigl((S_b)^{k_r}\bigr) \ \ , \] for $b\in B$ very general. But then, in view of (\ref{prop}), it follows that also \[ \Gamma\vert_b= \sum_{j=1}^r (\Psi_j\circ \Gamma_j\circ \Gamma)\vert_b =0\ \ \ \hbox{in}\ A^\ast(M_b)\ ,\] for $b\in B$ very general, i.e. $\mathcal M\to B$ has the Franchetta property. This proves the proposition, and hence the theorem. \end{proof} \begin{remark} A general fibre $A$ of the Lagrangian fibration $J_1\to \mathbb{P}^3$ has $A=h_2^3$ in $A^3(J_1)$ (because a point $p\in\mathbb{P}^3$ has $p=d^3$ in $A^3(\mathbb{P}^3)$, with $d$ the hyperplane class). As such, $A$ is in the subalgebra $R^\ast(J_1)$ of Theorem \ref{main2}. \end{remark} \begin{remark} It would be interesting to prove something like Theorem \ref{main1} for the sixfolds $J_1$ of theorem \ref{main2}. To do this, it would suffice to have an MCK decomposition for $J_1$ that is universal, and with the property that $A^2_{(2)}(J_1)$ comes from $A^2(S)$. \end{remark} \vskip1cm \begin{nonumberingt} Thanks to Yoyo of kuchibox.fr for daily delivery and great service. \end{nonumberingt} \vskip1cm
\section{Introduction} In this short note we shall sketch a particular first-principles approach to the structure of a physical description of nature. That is, we shall outline the development of a system of mathematical objects together with specifications of how these objects relate to nature in principle. This is to be done in such a way that the emerging framework may accommodate the same predictive power we are familiar with from known physical theories. However, we shall presuppose neither any specific physical theory nor even merely the mathematical structure of a given physical theory or framework (such as, in particular, quantum theory). Of course, there cannot be hope for success if we keep this endeavor entirely divorced from physical experience. Indeed, we shall rely on experience condensed from the most comprehensive physical descriptions of nature known. We extract two key principles that turn out to have far reaching consequences when combined with generic probabilistic reasoning and will be sufficient for our purposes.\footnote{We are far from claiming there are no further principles to be derived from these theories, or even that the ones we choose are to be considered the most important ones.} These principles are \emph{locality} and \emph{operationalism}. The principle of locality stems from the 19th century discovery that forces do not act at a distance, but are mediated through fields that permeate spacetime. In classical physics particles can only interact through signals carried by fields that connect them in spacetime. This remains true in essence in quantum theory, although particles and fields are replaced there (in quantum field theory) by a unified notion. Thus, interaction can only happen through adjacency in spacetime and is parametrizable through possible ``signals''. The principle of operationalism is motivated by the discovery of quantum physics. In classical physics reality is described through particle trajectories and field configurations distributed in spacetime. This distribution is an objective fact that exhaustively describes reality and does not depend on the observer or its actions. But we have learned that this is not an accurate description of nature and invented quantum theory to provide a better description. In quantum theory the process of measurement and the observer play a distinguished role. The lesson is that rather than trying to describe an abstract reality that we are somehow in contact with, we should concentrate on the contact itself. That is, we should describe reality \emph{through} the act of probing it, i.e., through measurement, observation, preparation etc. This is what we mean by operationalism. \section{Spacetime} A manifestation of locality essential for the successful physical description of our world is the fact that an experiment performed in a laboratory generally does not depend (in important ways and if it is not so intended) on what happens outside the laboratory and vice versa. To implement this aspect of locality in our framework we need a manner to distinguish the laboratory from the rest of the universe. That is accomplished by a notion of spacetime \emph{region}. It serves precisely as a means to distinguish whatever happens inside the region from the rest of the universe. A crucial element of physical theory consists in establishing relations between what happens in one region and another one. 
By locality, the interior of a region can interact with the rest of the universe only through the region's \emph{boundary}. Thus, a notion of \emph{adjacency} of regions is required. More technically speaking this means a notion of gluing or \emph{composing} regions along common boundary parts. These boundary parts are spacetime \emph{hypersurfaces}. We require thus two types of primitive spacetime objects: regions and hypersurfaces, with boundaries of regions being a special case of hypersurfaces. To be mathematically specific we take the regions to be 4-dimensional topological manifolds and the hypersurfaces to be 3-dimensional topological manifolds. In addition to the spacetime objects themselves there is a notion of \emph{composition} of regions along hypersurfaces. Mathematically, this is a gluing of manifolds. Also we require a notion of \emph{decomposition} of hypersurfaces into component hypersurfaces since the gluing is, in general, only along parts of boundaries. For our present purposes it is not necessary, however, to make this mathematically more precise. \section{Probes} \label{sec:probes} By operationalism, rather than aiming for some ontological description of physics, we should describe the act of experimentation, observation, preparation etc.\ together with its outcomes (if any). We subsume this in the concept of a \emph{probe}. In order to make use of locality, a probe is assigned to a spacetime region. It encodes direct influence on, and outcomes of observations in, only this spacetime region. A probe may correlate to events in other spacetime regions, but, due to locality, exclusively so through ``signals'' traversing the boundary of the region. Mathematically, to any spacetime region $M$ we assign a set of probes $\mathcal{P}_M$ in $M$. For any region $M$, there is a special \emph{null-probe} $\emptyset\in\mathcal{P}_M$. This encodes leaving the region ``empty'', i.e., not making any observation in $M$, not putting any apparatus there etc. An elementary, but important operation on probes is their \emph{composition}. Say there are regions $M$ and $N$ that can be composed to a joint region $M\cup N$. Then a probe in $N$ together with a probe in $M$ determine a probe in $M\cup N$. That is, there is a composition map $\diamond:\mathcal{P}_M\times\mathcal{P}_N\to\mathcal{P}_{M\cup N}$. Physically speaking this is the triviality that the combination of two experiments is also an experiment. \section{Boundary conditions} As stated, we take locality to imply that whatever happens inside a region influences and is influenced by the rest of the universe exclusively through ``signals'' that traverse the region's boundary. We condense this into a notion of \emph{boundary conditions}. That is, whatever happens outside of the region can be parametrized in so far as it affects the interior by specifying a boundary condition. The same holds for the influence of the interior on the exterior. Moreover, again using locality, boundary conditions should be localizable not only on boundaries, but also on pieces of them, i.e., on general hypersurfaces. Mathematically, we associate to any hypersurface $\Sigma$ a set of boundary conditions $\mathcal{B}_{\Sigma}$. \section{Values} So far, we have limited ourselves to a qualitative description of the ingredients of our framework. To make actual predictions we need numbers. We shall follow tradition in physics and assume that any observation or measurement outcome can be described through a finite number of real numbers. 
This motivates the notion of \emph{values}, refining at the same time the notions of probes and boundary conditions. To a spacetime region $M$, a probe $P\in\mathcal{P}_M$ and a boundary condition $b\in\mathcal{B}_{\partial M}$ we assign a value, i.e., a real number. We shall use the notation $(P,b)_M$ to denote this value. Thus, the value represents the outcome for probe $P$ in region $M$ given the boundary condition $b$. Since a value is a single real number, an experiment might not be representable by a single probe, but might require a number of probes for its description. Crucially, the assignment of values means that for a given region $M$ we may view a probe $P\in\mathcal{P}_M$ as a real valued function $(P,\cdot)_M$ on the set $\mathcal{B}_{\partial M}$ of boundary conditions. Similarly, we may view a boundary condition $b\in\mathcal{B}_{\partial M}$ as a real valued function $(\cdot,b)_M$ on the set $\mathcal{P}_M$ of probes. In particular, both the set of probes and the set of boundary conditions naturally span real vector spaces of functions. However, the sets are in general not identical to the vector spaces. Arbitrary linear combinations of boundary conditions are not necessarily again boundary conditions. The same goes for probes. \section{Hierarchies of probes and partial order} Another important structure that the set of probes acquires by virtue of being a set of functions is a \emph{partial order}. Given two probes $P,Q\in\mathcal{P}_M$ in a region $M$ we declare $P\le Q$ if and only if for any boundary condition $b\in\mathcal{B}_{\partial M}$ we have $(P,b)_M\le (Q,b)_M$. This partial order carries important physical meaning, as we illustrate in the following. To simplify the argument we imagine for a moment that a value directly predicts the reading on an apparatus in an experiment. (We shall see later that this cannot be the case in general.) Consider probes that represent experiments that have YES/NO outcomes. That is, these probes yield values in the set $\{0,1\}$, with $0$ representing NO and $1$ YES. We shall call these \emph{primitive probes}. The null-probe is then a primitive probe that always yields YES. Imagine an apparatus that displays a single light in such a way that the light might either show red or green, nothing else. There are different primitive probes associated with the apparatus. There is the probe that yields $1$ if the light shows green, and $0$ otherwise. Call this $P(g)$. Similarly, there is the probe that yields $1$ if the light shows red, and $0$ otherwise. Call this $P(r)$. There is also the probe that encodes the mere presence of the apparatus, without considering what color the light shows; call this $P(*)$. The probe $P(*)$ is more general than the probes $P(g)$ and $P(r)$, as the latter correspond to special configurations of the former. This \emph{hierarchy} of generality is reflected in a \emph{partial order} relation on those probes. For any boundary condition, the induced values for the probes satisfy the same order relations, which are thus order relations between the probes, namely, \begin{equation} \mathbf{0}\le P(g) \le P(*)\quad\text{and}\quad \mathbf{0}\le P(r) \le P(*) . \label{eq:ex1po} \end{equation} Here $\mathbf{0}$ represents the trivial probe that always returns the value $0$. Apart from the order relation we also have an additivity relation here, namely $P(*)=P(r)+P(g)$. In this simple example the hierarchy and partial order are rather limited.
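To make the partial order tangible, the following is a minimal toy sketch in Python; it is not part of the formalism itself. The probes $P(g)$, $P(r)$, $P(*)$ are those of the example above, while the finite set of boundary conditions and the assigned values are invented purely for illustration.
\begin{verbatim}
# Toy model: boundary conditions form a finite set, probes are real-valued
# functions on that set (values below are invented for illustration only).
boundary_conditions = ["b1", "b2", "b3"]

P_g = {"b1": 1, "b2": 0, "b3": 0}   # value 1 iff the light shows green
P_r = {"b1": 0, "b2": 1, "b3": 1}   # value 1 iff the light shows red
P_star = {b: P_g[b] + P_r[b] for b in boundary_conditions}  # additivity P(*) = P(g) + P(r)
P_zero = {b: 0 for b in boundary_conditions}                # the trivial probe 0

def leq(P, Q):
    """Partial order on probes: P <= Q iff (P, b) <= (Q, b) for every boundary condition b."""
    return all(P[b] <= Q[b] for b in boundary_conditions)

# The order relations of eq. (ex1po):
assert leq(P_zero, P_g) and leq(P_g, P_star)
assert leq(P_zero, P_r) and leq(P_r, P_star)
\end{verbatim}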
But with growing complexity of the apparatus, the hierarchies quickly become more interesting and powerful. As a second example take an apparatus that displays two lights, each either showing red or green. With the obvious notation we have, for example, \begin{equation} \mathbf{0}\le P(g,r) \le P(*,r)\le P(*,*), \end{equation} and a number of similar relations. The partially ordered set of the associated primitive probes is considerably richer than in the first example. Partial orders also arise from continuous measurement outcomes. Consider an apparatus with a scale, showing numbers from $0$ to $5$. The primitive probe that represents an outcome in the range $[0,4]$ is more general than the primitive probe that represents an outcome in the range $[1.5, 3.5]$ for example, etc. \section{Probability and Conditionality} The assumption that values always directly predict measurement outcomes has very restrictive implications. Firstly, it implies that any boundary condition is compatible with any apparatus and also with the absence of an apparatus (encoded by the null-probe). This implication is not at all innocent. Indeed, by locality the set of boundary conditions associated to a hypersurface does not ``know'' about the restrictions imposed by the presence of a region much less an apparatus in a region that the hypersurface might be the boundary of. This suggests that the assumption, at least in this simple form, is unsustainable. A second implication is that measurement outcomes are always predicted with certainty. Our experience with quantum physics speaks against this. A third implication is that boundary conditions need to be mutually exclusive, which is highly restrictive. We should thus allow values to have a probabilistic and conditionalistic relation to measurement outcomes. This requires to adapt the concept of primitive probes. While we continue to consider probes that encode YES/NO measurement outcomes as primitive, we cannot restrict them to yield the binary values $0$ or $1$. Even restricting them to values in the interval $[0,1]$ could meet with normalizability issues. So we merely require them to yield non-negative values. As we shall see, this lack of normalization is not a problem. We also allow probes to be formed as probabilistic ensembles of other probes. These are linear combinations of probes with positive coefficients summing to $1$. While they might not represent a given single experiment, they can give rise to meaningful statements about ensembles. Combining this with the fact that numbers associated to measurement outcomes can be redefined simply by convention justifies considering arbitrary linear combinations of probes as (general, not necessarily primitive) probes. We therefore take the set of probes $\mathcal{P}_M$ from here onward to form a real vector space. The primitive probes admit linear combinations with positive coefficients by forming ensembles and relaxing normalization. Thus, they form a \emph{positive cone} that we shall denote $\mathcal{P}^+_M$ in $\mathcal{P}_M$, making $\mathcal{P}_M$ into an \emph{ordered vector space}. Crucially, the discussion of the previous section on the physical meaning of hierarchies of probes remains valid in the probabilistic context. In particular, the partial order relations in the examples remain true. However, rather than directly yielding a value corresponding to YES/NO the probes in question yield (relative) probabilities. 
Recall the example with the apparatus displaying a single light with states ``red'' or ``green''. Given a boundary condition $b\in\mathcal{B}_{\partial M}$ the value of $(P(g),b)_M$ is not restricted to the set $\{0,1\}$, corresponding to ``not green'' or ``green''. But neither is it in general the \emph{probability} for the state to turn out ``green'' in the experiment. Rather, this probability is given by the \emph{quotient} \begin{equation} \frac{(P(g),b)_M}{(P(*),b)_M} . \label{eq:condpp} \end{equation} That is, to obtain the probability we have to \emph{condition on} the presence of the apparatus. The inequality (\ref{eq:ex1po}) precisely guarantees that this quotient lies in the interval $[0,1]$. The value $(P(*),b)_M$ itself can be seen as a measure of the ``compatibility'' between the boundary condition $b$ and the presence of the region $M$ with the apparatus represented by the probe $P(*)$. Similarly, the value $(\emptyset,b)_M$ associated to the null-probe measures the ``compatibility'' between the boundary condition $b$ and the presence of the region $M$. Now recall, from last section, the apparatus displaying two lights, each showing either red or green. In this case we can express non-trivial probabilities relating two apparatus states. For example, with boundary condition $b\in\mathcal{B}_{\partial M}$ the probability that the first light shows green given that the second light shows green is given by the quotient, \begin{equation} \frac{(P(g,g),b)_M}{(P(*,g),b)_M} . \end{equation} \section{Hierarchies of boundary conditions} In the probabilistic setting boundary conditions need not be mutually exclusive, may be probabilistically combined, and become subject to the formation of hierarchies. As a physical example consider temperature (range) as a boundary condition. For example, a temperature range $b$ between $10$ and $20$ degrees Celsius is more general than one $c$ between $10$ and $15$ degrees. The corresponding partial order is $c\le b$. However, this does \emph{not} reflect an order relation between values $(P,c)_M\le (P,b)_M$ for arbitrary probes $P\in\mathcal{P}_M$. It does reflect this order relation between values, however, for the restricted class of primitive probes $P\in\mathcal{P}_M^+$. As in the case of probes we do not assume normalizability. Thus, any linear combination of boundary conditions with positive coefficients is again a valid boundary condition. We modify our notation slightly and denote the set of boundary conditions by $\mathcal{B}^+_{\Sigma}$ while $\mathcal{B}_{\Sigma}$ shall henceforth denote the real vector space spanned by it. Thus, $\mathcal{B}^+_{\Sigma}$ is a positive cone in $\mathcal{B}_{\Sigma}$, which thus acquires the structure of an ordered vector space. Analogous to conditional probabilities for different probes given a boundary condition, it makes equal sense to consider conditional probabilities relating different boundary conditions for a given probe. For boundary conditions $b,c\in\mathcal{B}^+_{\partial M}$ with $\mathbf{0}\le c\le b$, i.e., $c$ a specialization of $b$, we can ask for the probability that $c$ is realized given that we know $b$ to be realized and given the primitive probe $P\in\mathcal{P}^+_M$ in $M$. This probability is the quotient, \begin{equation} \frac{(P,c)_M}{(P,b)_M} . \label{eq:condbdy} \end{equation} The physical information added that gives meaning to this probability is the presence of the region $M$ and associated restriction as to what may happen at its boundary $\partial M$. 
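As a small numerical illustration of the two kinds of conditioning, the quotients (\ref{eq:condpp}) and (\ref{eq:condbdy}) can be evaluated as follows; all numbers in this sketch are invented for illustration.
\begin{verbatim}
# Invented values (P, b)_M for the single-light apparatus discussed above.
values = {
    ("P_star", "b"): 0.8,   # compatibility of b with the presence of the apparatus
    ("P_g",    "b"): 0.6,   # apparatus present and light shows green
    ("P",      "b"): 0.5,   # a primitive probe P with boundary condition b
    ("P",      "c"): 0.2,   # the same probe with a specialization c <= b
}

# Probability that the light shows green, conditioned on the apparatus (eq. condpp):
p_green = values[("P_g", "b")] / values[("P_star", "b")]   # = 0.75

# Probability that c is realized given b, for the primitive probe P (eq. condbdy):
p_c_given_b = values[("P", "c")] / values[("P", "b")]      # = 0.4
\end{verbatim}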
\section{Expectation values} For an expectation value two probes are required: one probe $Q_0$ that encodes the mere presence of the measurement apparatus without considering the outcome of the measurement, and another probe $Q$ that takes account of the outcome of the measurement. The expectation value is thus, \begin{equation} \frac{(Q,b)_M}{(Q_0,b)_M} , \label{eq:condev} \end{equation} in complete analogy to formula (\ref{eq:condpp}) for primitive probes. \section{Composition} \label{sec:composition} We return to the concept of composition of probes introduced in Section~\ref{sec:probes}. It turns out that locality, suitably understood, is sufficient to derive a \emph{composition law} for probes. We give a condensed account of this in the following. As a first step we introduce \emph{slice regions}. These are hypersurfaces $\Sigma$ considered as infinitesimally thin regions $\hat{\Sigma}$. A slice region has a boundary $\partial\hat{\Sigma}$ with two components $\Sigma$, $\Sigma'$, each of which is a copy of the original hypersurface. The null-probe gives rise to a bilinear map $\mathcal{B}_{\Sigma}\times\mathcal{B}_{\Sigma}\to\mathbb{R}$ via $(b_1,b_2)\mapsto (\emptyset,(b_1,b_2))_{\hat{\Sigma}}$. Assuming that different boundary conditions encode different physics, this inner product must be \emph{non-degenerate}. It may have \emph{positive definite} and \emph{negative definite} parts.\footnote{We deliberately ignore here complications that may arise in an infinite-dimensional setting.} An orthonormal basis $\{b_k\}_{k\in I}$ has the property, $(\emptyset, (b_k , b_l))_{\hat{\Sigma}} = (-1)^{\sigma(k)} \delta_{k,l}$. Here $\sigma(k)=0,1$ for the positive-definite and negative-definite part respectively. Now, consider the composition of probes $P$, $Q$ in adjacent spacetime regions $M$, $N$. We moreover fix boundary conditions $b$ on $\partial M\setminus \partial N$ and $c$ on $\partial N\setminus \partial M$. By locality, it must be possible to describe the effect on $P$ of the probe $Q$ in $N$ with boundary condition $c$ equivalently through a boundary condition $q$ on the interfacing hypersurface $\Sigma = \partial M \cap \partial N$. Formally, $(P\diamond Q, (b, c))_{M\cup N} = (P, (b, q))_M$. Specializing to the case that $M$ is a slice region $\hat{\Sigma}$ with $P$ the null-probe we get, \begin{equation} (\emptyset, (b, q))_{\hat{\Sigma}} = (Q, b)_N = \sum_k (-1)^{\sigma(k)} (\emptyset, (b, b_k ))_{\hat{\Sigma}} (Q, b_k)_N . \end{equation} For the second equality we have used a completeness relation for the inner product. But this implies $q=\sum_k (-1)^{\sigma(k)} b_k (Q, b_k)_N$. This in turn implies the composition rule, \begin{equation} (P\diamond Q, (b, c))_{M\cup N} = \sum_k (-1)^{\sigma(k)} (P, (b, b_k ))_{M} (Q, (b_k , c))_{N} . \label{eq:comprule} \end{equation} \section{Convergence} Remarkably, the framework we have arrived at turns out to be essentially a formulation of quantum theory. To explain this, we recall the \emph{general boundary formulation} of quantum theory \cite{Oe:gbqft}, which incorporates manifest locality into quantum theory. In its \emph{amplitude formalism (AF)}, (generalized) Hilbert spaces of states are associated to spacetime hypersurfaces. Amplitudes and observables are associated to spacetime regions. The AF incorporates a composition rule similar to (\ref{eq:comprule}), coming from \emph{topological quantum field theory}. The \emph{positive formalism (PF)} \cite{Oe:dmf} is obtained from the AF by generalizing from pure states to mixed states.
It turns out that the framework we have arrived at in this note coincides with the PF. More precisely, the ordered vector space $\mathcal{B}_{\Sigma}$ of boundary conditions arises in the PF as the space of self-adjoint operators on the Hilbert space $\mathcal{H}_{\Sigma}$ (of the AF) associated with the hypersurface $\Sigma$. The primitive probes arise as (unnormalized) \emph{quantum operations}. General probes are quantum observables and quantum measurements. The composition rule (\ref{eq:comprule}) is exactly a version of the Axioms (P5a), (P5b) in \cite{Oe:dmf}. Transition probabilities arise as special cases of formula (\ref{eq:condbdy}) and expectation values as special cases of formula (\ref{eq:condev}). If, as usual in the standard formulation of quantum theory, spacetime regions are time intervals and temporally final states are conditioned on initial ones, the denominators in these formulas turn out to be equal to $1$. This explains why in the standard formulation, transition probabilities and expectation values do not normally take the form of quotients. In \cite{Oe:dmf} it was shown that even though spaces of self-adjoint operators (for the boundary conditions) have more structure than that of ordered vector spaces, only the latter is necessary for a coherent and predictive physical framework. Indeed, in this note we have arrived only at ordered vector spaces. It turns out that dropping the more restrictive structure of self-adjoint operators is akin to dropping the restriction to quantum theory, as we shall see in a moment. At this point we can only speculate what exactly makes the difference between classical and quantum theory. It might be the difference between boundary conditions forming a lattice versus an anti-lattice. \section{Bonus: Classical physics} We proceed to show how classical physics can also be accommodated fairly naturally within the presented formalism. A classical theory associates a space of solutions $L_M$ of the equations of motion to each region $M$ and of germs of solutions $L_{\Sigma}$ to each hypersurface $\Sigma$. Note that for any region $M$ we have a map $L_M\to L_{\partial M}$ that consists in retaining the solution only on the boundary. We can now set up a \emph{deterministic} version of the framework as follows: We define the space of boundary conditions to be $\mathcal{B}_{\Sigma}=L_{\Sigma}$. (We return momentarily to the setting where $\mathcal{B}_{\Sigma}$ is a set without necessarily having the structure of a vector space.) We define the null-probe $(\emptyset,b)_M$ to take the value $1$ if there is a solution in $M$ that induces the boundary condition $b$ and $0$ otherwise. We take general probes to be induced by local observables. That is given an observable $O:L_M\to\mathbb{R}$ in a spacetime region we define an associated probe $P_O$ via $(P_O,b)_M=O(\phi)$ if there is a solution $\phi\in L_M$ that reduces to $b\in L_{\partial M}$ on the boundary and as $0$ otherwise. (This supposes that different solutions in the interior can be distinguished on the boundary or that we restrict to observables that give the same value if the induced boundary conditions are identical.) Since the spaces of boundary conditions are not in general vector spaces here, the composition rule of Section~\ref{sec:composition} does not apply. A probabilistic setting is achieved by transitioning to classical statistical physics. Thus, the space of boundary conditions is replaced by the space of probability distributions over $L_{\Sigma}$. 
Not imposing normalization, this becomes a cone $\mathcal{B}^+_{\Sigma}$ of positive distributions inside a real vector space $\mathcal{B}_{\Sigma}$. Integrating over $L_M$ in the interior with such a distribution makes a probe as defined above for the deterministic setting into one for this statistical setting. (We omit details here which may render converting this sketch into a precise formalism quite non-trivial.) In this case the composition rule of Section~\ref{sec:composition} should apply. The present framework, along the lines sketched, might provide a suitable starting point for a statistical treatment of classical field theory without metric background, such as general relativity. \bibliographystyle{pos}
\section{Introduction\label{sec:intro}} \input{intro.tex} \section{Key insights from PAC Generalization Bound by Vapnik and Chervonenkis\label{sec:bg-1}} \input{vc.tex} \section{Bayesian Optimization\label{sec:bg}} \input{bg.tex} \section{HyperTune - A Learning Theory based Hyperparameter Tuning Framework} \input{framework.tex} \section{Experiments} \input{experiments.tex} \section{Conclusion\label{sec:conclusion}} \input{conclusion.tex} \bibliographystyle{aaai} \subsection{Experimental Setting} We use the implementation for Fabolas \cite{pmlr-v54-klein17a} and Hyperband \cite{li2016hyperband} from their open source package, RoBo\footnote{https://github.com/automl/RoBO}. We implement the hyperparameter tuning tasks using an open source benchmark repository, called HPOlib\footnote{https://github.com/automl/HPOlib2}. A similar experimental setup as in Fabolas is used. A validation dataset is used for hyperparameter tuning, and a heldout set is used to evaluate generalization performance. In all experiments, we visualize the convergence using plots and report the performance on the held-out data using a table. Fabolas uses subsets of data in the optimization loop, whereas our method and a standard Bayesian optimization use full training data. To obtain a fair convergence plot, we have only kept \emph{Hypertune} and a generic Bayesian optimization algorithm in the plots. However, we report the best performance returned by Fabolas after rebuilding the model using the best hyperparameter setting from the optimization loop on the full training data and evaluating it on the heldout data. For each experiment, a small percentage of data is used to obtain directional derivative information for \emph{HyperTune}. The hyperparameter tuning on the smaller dataset is performed 5 times, and their running time recorded. Although they can be computed in parallel, we consider them to be computed sequentially here. The percentage of data for the smaller dataset experiment is varied depending on the dataset. Next, we run \emph{HyperTune} until convergence. The running time for \emph{HyperTune} includes the time to converge to the best validation set performance (not considering if the improvement is within the $2\sigma$), and the time for smaller dataset experiments. This total time is the budget for the other baselines and we compute the resultant test errors using the best hyperparameter from all the methods on a heldout dataset. We use four benchmark real-world classification datasets from OpenML \cite{vanschoren2014openml} - letter recognition with {\small{}20000} instances and 17 features, MNIST digit recognition with 70000 instances and 785 features, adult income prediction with {\small{}48842} instances and 15 features, and vehicle registration with {\small{}98528} instances and 101 features. \begin{figure*} \begin{centering} \subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{ElnetLetter_validError}}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{Elnetmnist_validError}}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{Elnetadult_validError}}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{Elnetvehicle_validError}} \par\end{centering} \centering{}\caption{EI and \emph{HyperTune} on validation dataset for Elastic net: (a) Letter, (b) MNIST, (c) Adult, and (d) Vehicle datasets. 
\label{fig:elnet_valid}} \end{figure*} \begin{table*} \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Baselines} & \multicolumn{4}{c|}{Average of the test error $\pm$ standard error}\tabularnewline \cline{2-5} & Letter & MNIST & Adult & Vehicle\tabularnewline \hline Fabolas & {\small{}0.32$\pm$0.005} & {\small{}0.12$\pm$0.012} & {\small{}0.45$\pm$0.112} & {\small{}0.15$\pm$0.0005}\tabularnewline \hline \textit{HyperTune} & \textbf{\small{}0.3$\pm$0.000} & \textbf{\small{}0.11$\pm$0.007} & \textbf{\small{}0.24$\pm$0.000} & \textbf{\small{}0.14$\pm$0.0004}\tabularnewline \hline \end{tabular}\caption{Generalization performance on the heldout dataset for Elastic net.\label{tab:Elnet_test}} \end{table*} \begin{figure*} \centering{}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{SVMonLetter_validError}}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{SVMonMnist_validError}}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{SVMonadult_validError}}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{SVMonVehicle_validError}}\caption{EI and \textit{HyperTune} on validation dataset for SVM with RBF kernel: a) Letter, (b) MNIST, (c) Adult, and (d) Vehicle datasets.\label{fig:svm_VALID} } \end{figure*} \begin{table*} \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Baselines} & \multicolumn{4}{c|}{Average of the test error $\pm$ standard error}\tabularnewline \cline{2-5} & Letter & MNIST & Adult & Vehicle\tabularnewline \hline Fabolas & {\small{}0.54$\pm$0.180} & {\small{}0.016$\pm$0.0005} & \textbf{\small{}0.24$\pm$0.0005} & \textbf{\small{}0.14$\pm$0.002}\tabularnewline \hline \textit{HyperTune} & \textbf{\small{}0.04$\pm$0.000} & \textbf{\small{}0.014$\pm$0.0002} & \textbf{\small{}0.24$\pm$0.0001} & \textbf{\small{}0.14$\pm$0.000}\tabularnewline \hline \end{tabular}\caption{Generalization performance on heldout dataset for SVM with RBF kernel.\label{tab:svm_test}} \end{table*} \begin{figure*} \centering{}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{SVMKernApproxLetter_validError}}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{SVMKernApproxMnist_validError}}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{SVMKernApproxadult_validError}}\subfloat[\foreignlanguage{british}{}]{\centering{}\includegraphics[width=0.25\textwidth]{SVMKernApproxVehicle_validError}}\caption{EI and \emph{HyperTune} on validation dataset for SVM with kernel approximation: (a) Letter, (b) MNIST, (c) Adult, and (d) Vehicle datasets. 
\label{fig:SVMApproxValid}} \end{figure} \begin{table*} \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Baselines} & \multicolumn{4}{c|}{Average of the test error $\pm$ standard error}\tabularnewline \cline{2-5} & Letter & MNIST & Adult & Vehicle\tabularnewline \hline \hline Fabolas & {\small{}0.09$\pm$0.014} & {\small{}0.076$\pm$0.03} & \textbf{\small{}0.242$\pm$0.000} & {\small{}0.18$\pm$0.016}\tabularnewline \hline Hyperband & {\small{}0.04$\pm$0.002} & {\small{}0.019$\pm$0.00} & {\small{}0.246$\pm$0.002} & \textbf{\small{}0.137$\pm$0.000}\tabularnewline \hline \textit{HyperTune} & \textbf{\small{}0.03$\pm$0.000} & \textbf{\small{}0.018$\pm$0.00} & \textbf{\small{}0.242$\pm$0.000} & {\small{}0.138$\pm$0.000}\tabularnewline \hline \end{tabular}\caption{Generalization performance on heldout dataset for SVM with kernel approximation.\label{tab:svm_kern_test}} \end{table*} \subsection{Experimental Results} \subsubsection*{Elastic Net} Elastic net is a logistic regression classifier with $L_{1}$ and $L_{2}$ penalty \cite{zou2005regularization}. The hyperparameters are the ratio parameter that trades off the $L_{1}$ and $L_{2}$ penalties, and the $\alpha$ parameter that determines the magnitude of the penalty. We tune the ratio parameter in the bound $[0,1]$. The penalty parameter $\alpha$ is tuned in an exponent space of $[-7,-1]$. In \emph{HyperTune}, we assign a directional derivative sign $m=-1$ only for the hyperparameter $\alpha$, as the complexity of the model decreases with the penalty hyperparameter $\alpha$ while the other hyperparameter does not contribute to model complexity. The results for \emph{HyperTune} and EI on the validation dataset are shown in Figure \ref{fig:elnet_valid}. \emph{HyperTune} outperforms EI in all the datasets. Table \ref{tab:Elnet_test} reports the generalization performance of all the methods on a heldout dataset. \emph{HyperTune} performs better than Fabolas in all cases. \subsubsection*{SVM with RBF Kernel} In this experiment, we tune the cost parameter $C$ of SVM and the length-scale $\gamma$ of the RBF kernel. We tune both hyperparameters in an exponent space of $[-3,3]$. For \emph{HyperTune}, we assign directional derivative signs $\mathbf{m}=\{1,1\}$ for both the hyperparameters $C$ and $\gamma$. The complexity of the model increases with an increase in both these hyperparameters. We compare the performance of EI and \emph{HyperTune} on the validation dataset in Figure \ref{fig:svm_VALID}. In all the datasets, \emph{HyperTune} significantly outperforms EI. We further record the generalization performance on the heldout dataset in Table \ref{tab:svm_test}. The results show that \emph{HyperTune} again performs better than Fabolas, particularly on the Letter dataset where the difference is significant. \subsubsection*{SVM with Kernel Approximation using Random Fourier Features} In our third experiment, we tune the hyperparameters of SVM with kernel approximation using random Fourier features \cite{rahimi2007random}. We tune the cost parameter $C$, the length-scale $\gamma$ of the RBF kernel and the number of Fourier feature bases in this experiment. Both $C$ and $\gamma$ are tuned in the same bounds as in the previous experiment, and the number of Fourier features is tuned in an exponent space of $[3,12]$. We assign directional derivative signs $\mathbf{m}=\{1,1,1\}$ for all the hyperparameters since the model complexity increases with these hyperparameters.
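For concreteness, one common way to realize this model is with scikit-learn's random Fourier feature transformer followed by a linear SVM. The sketch below is illustrative only: the exact pipeline and the bases of the exponent spaces (10 for $C$ and $\gamma$, 2 for the number of Fourier features) are assumptions, not necessarily the settings used in our implementation.
\begin{verbatim}
# Illustrative sketch (Python/scikit-learn) of the SVM-with-kernel-approximation
# model; the bases of the exponent spaces are assumptions made for this example.
from sklearn.kernel_approximation import RBFSampler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def build_model(exp_C, exp_gamma, exp_n_features):
    C = 10.0 ** exp_C                         # exponent searched in [-3, 3]
    gamma = 10.0 ** exp_gamma                 # exponent searched in [-3, 3]
    n_components = int(2 ** exp_n_features)   # exponent searched in [3, 12]
    return make_pipeline(
        RBFSampler(gamma=gamma, n_components=n_components, random_state=0),
        LinearSVC(C=C),
    )

# Example: fit one configuration and evaluate the validation error.
# model = build_model(0, -1, 8)
# model.fit(X_train, y_train)
# valid_error = 1.0 - model.score(X_valid, y_valid)
\end{verbatim}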
In Figure \ref{fig:SVMApproxValid}, we plot the results for \emph{HyperTune} and EI on the validation dataset. \emph{HyperTune} again outperforms EI in all the datasets. We also report the performance of the baselines on the heldout dataset in Table \ref{tab:svm_kern_test}. The results show that \emph{HyperTune} is either comparable to or better than Hyperband. Both \emph{HyperTune} and Hyperband outperform Fabolas in all the datasets except the Adult dataset. Both Fabolas and \emph{HyperTune}, however, perform better than Hyperband on the Adult dataset. \begin{figure} \centering{}\includegraphics[width=0.4\paperwidth]{MLPonMnist_validError}\caption{EI and \textit{HyperTune} on validation dataset for MLP.\label{fig:mlp_validError}} \end{figure} \begin{table} \centering{}% \begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{Baselines} & \multicolumn{2}{c|}{Average of the test error $\pm$ standard error}\tabularnewline \cline{2-3} & MLP on MNIST & CNN on CIFAR10\tabularnewline \hline Fabolas & {\small{}0.09$\pm$0.008} & {\small{}0.28}\tabularnewline \hline Hyperband & \textbf{\small{}0.04$\pm$0.003} & \textbf{0.21}\tabularnewline \hline \textit{HyperTune} & {\small{}0.05$\pm$0.006} & \textbf{\small{}0.21}\tabularnewline \hline \end{tabular}\caption{Generalization performance on heldout dataset for MLP and CNN. We do not report the standard error for CNN as the experiment is conducted only once.\label{tab:nn_test}} \end{table} \subsubsection*{Multi-layer Perceptron} We tune five hyperparameters of an MLP on the MNIST dataset. The hyperparameters here are the number of neurons, dropout, mini-batch size, learning rate, and momentum. We use stochastic gradient descent to learn the model. In this experiment, we assign $m=1$ only for the number of neurons in the hidden layers. We plot the results for EI and \emph{HyperTune} on the validation dataset in Figure \ref{fig:mlp_validError}. \emph{HyperTune} performs better than EI on the validation dataset. We report the performance of the baselines on the heldout dataset in Table \ref{tab:nn_test}. Whilst Hyperband achieves the best performance in this experiment, \emph{HyperTune} performs equally well. Both \emph{HyperTune} and Hyperband perform better than Fabolas. \subsubsection*{Convolutional Neural Network} We tune six hyperparameters of a CNN on the benchmark real-world CIFAR10 dataset \cite{krizhevsky2014cifar}. Using the benchmark architecture configuration with 25 epochs, we tune the batch size, dropout in the convolutional layers, learning rate, momentum, number of neurons in the fully connected layer, and dropout in the fully connected layer. For \emph{HyperTune}, we use $m=1$ only for the number of neurons in the fully connected layer. We allocate a fixed time budget of 24 hours to all the baselines, and the best performance on the heldout dataset is reported in Table \ref{tab:nn_test}. Both Hyperband and \emph{HyperTune} achieve the best generalization performance, and outperform Fabolas.\selectlanguage{british}%
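To indicate how such a search space can be declared for the optimizer, a simple dictionary encoding of the six CNN hyperparameters is shown below; the bounds are illustrative examples only, not the exact ranges used in our runs.
\begin{verbatim}
# Illustrative declaration of the CNN hyperparameter search space; all bounds
# are example values, not the exact ranges used in the experiments.
cnn_search_space = {
    "batch_size":    ("int",    32,  512),
    "conv_dropout":  ("float",  0.0, 0.7),
    "learning_rate": ("log10", -5.0, -1.0),
    "momentum":      ("float",  0.3, 0.99),
    "fc_units":      ("int",    64,  1024),
    "fc_dropout":    ("float",  0.0, 0.7),
}

# Directional derivative signs used by HyperTune: m = +1 only for the number of
# neurons in the fully connected layer, since model complexity grows with it.
derivative_signs = {"fc_units": +1}
\end{verbatim}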
\section*{Acknowledgements} We would like to thank E.~Reya and T.~Gehrmann for carefully reading the manuscript and useful discussions, S.~Kretzer for code, and S.~Alekhin and D.~de Florian for providing the acceptance corrections and the nuclear corrections respectively. This work has been supported in part by the ``Bundesministerium f\"ur Bildung und Forschung'', Berlin; and by the Swiss National Science Foundation (SNF) under contract 200020-126691.
\section*{Acknowledgments} We acknowledge stimulating discussions with Richard Pausch and Dominik Kraus, and helpful comments by Michael Bussmann. This work was partly funded by the Center of Advanced Systems Understanding (CASUS) which is financed by Germany's Federal Ministry of Education and Research (BMBF) and by the Saxon Ministry for Science and Art (SMWK) with tax funds on the basis of the budget approved by the Saxon State Parliament, and by the Deutsche Forschungsgemeinschaft (DFG) via project BO1366/13. The PIMC calculations were carried out at the Norddeutscher Verbund f\"ur Hoch- und H\"ochstleistungsrechnen (HLRN) under grant shp00015, on a Bull Cluster at the Center for Information Services and High Performance Computing (ZIH) at Technische Universit\"at Dresden, on the clusters \emph{hypnos} and \emph{hemera} at Helmholtz-Zentrum Dresden-Rossendorf (HZDR), and at the computing center (Rechenzentrum) of Kiel University.
\section{Introduction} \subsection{Neutral hydrogen in the distant Universe} Almost all our current knowledge about the amount and distribution of neutral hydrogen (HI) beyond the local (redshift $z\lesssim0.2$) Universe comes from optical studies of QSO absorption lines. The strongest of these lines, the Damped Lyman-$\alpha$ Absorbers (DLAs), arise in sight-lines with HI column densities above $2\times10^{20}$ atoms cm$^{-2}$, i.e. similar to the typical column density in the Milky Way disk \citep[]{wolfe86,wolfe05}. DLAs are thought to account for the bulk of the cosmic HI mass density across a wide range in redshift, and so are important tracers of the neutral-gas reservoirs for star formation in the distant Universe. Large optical QSO surveys for DLAs are only possible at redshift $z\gtrsim1.7$, where the ultraviolet Ly$\alpha$ absorption line is redshifted to wavelengths visible with ground-based telescopes. At lower redshifts the Mg II absorption line has been used to select candidate DLA systems \citep[]{briggs83,rao00,rao06,ellison06,kanekar09,rao17}, but follow-up ultraviolet spectroscopy with the \textit{Hubble Space Telescope} (\textit{HST}) is then needed to measure the Ly$\alpha$ line profile and HI column density so the samples studied are much smaller. In addition, it is not straightforward to understand and quantify the selection biases present in these pre-selected samples and this can introduce further uncertainties in estimates of the cosmic HI mass density at $z<1.7$ \citep[][]{rao06,neeleman16,rao17,berg17}. Other approaches have been used to probe HI in the redshift range $0.2\lesssim z\lesssim1.7$, including HI emission-line stacking \citep[][]{lah07,delhaize13,kanekar16} and measurements of the UV spectra of QSOs observed with the Hubble Space Telescope without any MgII preselection \citep{neeleman16}, but the amount and distribution of HI in galaxies in this redshift range remains poorly constrained. The extent to which optical DLA surveys are affected by dust obscuration remains unclear. Absorbers with a high HI column density may contain enough dust to redden and dim the light of background QSOs so that they are missed by colour-selected, flux-limited samples \citep{fall89}. A study of radio-selected QSOs by \cite{ellison05} suggests that the effect is generally small, with only a modest increase in reddening for typical absorbers (mean $E(B-V)<0.04$\,mag), though individual DLA systems with much higher reddening have also been identified \citep{heintz18,geier19}. \cite{pontzen09} estimate that around 7\,percent of DLAs at redshift $1.8<z<3.5$ are missing from optical samples due to dust obscuration, while \cite{krogager19} suggest that optical DLA studies underestimate the cosmic mass density of neutral hydrogen by 10--50\,percent at $z\sim3$, and by up to a factor of two at $z=2.2$. Previous studies have concentrated mainly on ground-based surveys for DLAs at $z>1.7$, and the effects of dust obscuration in lower-redshift DLA systems at $z<1$ remain to be quantified. In principle, radio measurements of the 21\,cm HI absorption line provide an alternative tool for identifying DLA systems, particularly at redshift $z<1.7$ where ground-based optical DLA studies are not possible \citep[e.g.][]{kanekar04,morganti15}. Importantly, because the optical depth of the 21-cm absorption line is inversely related to its excitation (spin) temperature, it is most sensitive to the cold neutral medium (CNM; $T \sim 100$\,K). 
This is the component of the HI most likely to trace star formation in galaxies and therefore directly follow its evolution throughout cosmic history (see e.g. \citealt{kanekar14a} and references therein). Since 21\,cm HI absorption measurements in the radio are unaffected by dust obscuration along the line of sight, they may also help clarify the extent to which optical DLA studies have been affected by a dust bias. A handful of 21\,cm DLAs have been detected in blind\footnote{By a `blind' search, we mean a search that is spectroscopically untargeted and uses no prior assumptions about the redshift of any HI absorption lines} searches with single-dish radio telescopes \citep[][]{brown73,brown83,darling04}, but these searches were limited both by the available spectral bandpass and by the effects of terrestrial radio interference \citep[e.g.][]{edel16}. As noted by \cite{wolfe05}, the redshift interval covered by a single optical spectrum, $\Delta z\sim1$, is large compared to that sampled by early radio 21\,cm surveys (typically $\Delta z\sim0.02$) so that optical surveys were more efficient at covering a large redshift path length. These earlier limitations have now largely been removed through (i) the advent of new wide-band correlators for radio interferometers that provide instantaneous redshift coverage approaching that of optical spectrographs, and (ii) the construction of new SKA pathfinder and precursor radio telescopes on radio-quiet sites. In this paper, we present the results from a blind 21\,cm search towards 53 bright radio sources for HI absorption in galaxies at redshift $0.4<z<1.0$. The observations were carried out in commissioning time with the first 6--12 dishes of the Australian SKA Pathfinder (ASKAP) telescope \citep{mcconnell16}. This work is intended to pave the way for a blind, all-sky 21\,cm absorption survey, the First Large Absorption Survey in HI (FLASH) to be carried out with the full 36-dish ASKAP telescope. Recent early science results from FLASH have discovered an intervening 21-cm absorber in the 50-square-degree GAMA\,23 field, demonstrating the feasibility of such a wide-field survey (\citealt{allison20}). However, FLASH has not yet covered enough sky area to return a high detection yield. By targeting a modest sample of very bright radio sources, we aim here to provide the first constraint of the number density of 21-cm absorbers at intermediate cosmological redshifts. Throughout this paper we adopt the cosmological parameters $H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$ and $\Omega_{m}=0.3$. \subsection{ASKAP and BETA} ASKAP \citep{deboer09} is a wide-field aperture synthesis radio telescope that uses novel phased array feed technology (PAF) to provide a 30\,deg$^2$ field of view. The telescope is located at the radio-quiet Murchison Radio Observatory in Western Australia, and works in the frequency range 700-1800\,MHz. The full array of $36\times12$\,m ASKAP dishes came into operation in early 2019. The Boolardy Engineering Test Array \citep[BETA;][]{hotan14,mcconnell16} connected six ASKAP dishes equipped with first-generation (MkI) PAFs and operated from March 2014 to February 2016. Although BETA was originally intended purely as an engineering and commissioning testbed, some early science observations were also carried out \citep[e.g.][]{allison15,serra15,heywood16}. The longest baseline for the BETA array was 916\,m, giving an angular resolution of about 1.8\,arcmin at 850 MHz. 
The frequency resolution was 18.5\,kHz, corresponding to a rest-frame velocity resolution of $\sim6$\,km\,s$^{-1}$. In early observations with BETA, \cite{allison15}\ detected 21\,cm HI absorption at $z=0.44$ against a bright radio continuum source, PKS\,1740-517, whose spectroscopic redshift was unknown at the time. They found that the spectral dynamic range achievable with BETA was excellent for carrying out a blind absorption survey, and that the lowest ASKAP frequency band (covering 700-1000\,MHz), where the MkI PAFs were most sensitive, was essentially free of terrestrial radio-frequency interference (RFI). In spite of its limited collecting area, BETA therefore proved to be an excellent instrument with which to search for new HI absorption lines in the redshift range $0.4<z<1.0$ using bright ($>1$\,Jy) continuum sources as probes \citep[see e.g.][]{moss17,glowacki19}. The data presented in this paper were observed either with BETA, or during commissioning time with a 12-antenna ASKAP Early Science array (ASKAP-12) in early 2017. The ASKAP-12 array was equipped with the second-generation MkII PAF receivers (\citealt{chippendale15}). ASKAP-12 provided higher sensitivity than BETA, but with a more limited frequency coverage at this early stage (799--1039\,MHz, corresponding to $0.365<z<0.775$ for HI -- the 18.5\,kHz frequency resolution remained unchanged). The longest baseline for the ASKAP-12 array was around 2.3\,km, and the restoring beam for our data was $\approx 50 \times 25$\,arcsec (using Briggs' robust weighting = 0.5). \subsection{What do we expect to see?} Two kinds of 21\,cm absorption-line detections are possible: {\it associated}\ lines, where the neutral gas producing the absorption is within (or associated with) the radio source itself, and {\it intervening}\ lines, where the gas lies in a different galaxy somewhere along the line of sight to the radio source. Associated absorption lines \citep[like the one seen in PKS\,1740-517 by][]{allison15} provide insights into the role of cold gas in AGN fuelling and feedback (see \citealt{morganti18} for a recent review). The focus of this paper is on intervening 21\,cm absorption lines, which have the potential to provide an independent probe of the cosmic HI mass density and its evolution. \cite{rao17} estimate the number of intervening Damped Lyman-$\alpha$\ Absorbers (DLAs) with HI column density $N_{\rm HI} \geq 2 \times 10^{20} {\rm cm}^{-2}$ per unit redshift as: \begin{equation} n(z) = 0.027\pm0.007\ (1+z)^{1.682\pm0.200}. \end{equation} From this, we would expect $\sim$5\% of sightlines with $0.4<z<1.0$ (the HI redshift range spanned by the lowest-frequency ASKAP band at 700-1000\,MHz) to intersect an intervening DLA with an HI column density above this limit. How strong will these lines be? The observed HI 21\,cm optical depth ($\tau$) is related to the HI column density ($N_{\rm HI}$) by: \begin{equation}\label{equation:column_density} N_{\rm HI} = 1.823\times10^{18}\,T_{\rm s} \int \tau\,{\rm d}V, \end{equation} where $T_{\rm s}$ (in K) is the harmonic mean spin temperature of the HI gas and the optical depth is integrated over the absorption-line profile in velocity units of km\,s$^{-1}$. The optical depth of the absorber can be estimated from the observed spectrum, but also depends on the areal fraction of the radio source covered by the absorber (the covering factor $f$). If unknown, a fiducial value of $f = 1$ is often used, which is equivalent to estimating the optical depth, and hence $N_{\rm HI}$, as an average across the unresolved source.
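As a rough numerical check on these expectations (a back-of-the-envelope sketch rather than part of our analysis pipeline), the redshift path integral of the $n(z)$ relation above and the peak optical depth implied by equation (\ref{equation:column_density}) for a fiducial cold, narrow absorption line can be evaluated as follows; the spin temperature and line width assumed below are illustrative choices only.
\begin{verbatim}
import numpy as np

# HI redshift range covered by the 799-1039 MHz ASKAP-12 band: z = f_rest/f - 1
f_rest = 1420.405751  # MHz
print(f_rest / 1039.0 - 1.0, f_rest / 799.0 - 1.0)   # ~0.37 to ~0.78

# Expected number of intervening DLAs per sightline over 0.4 < z < 1.0:
z = np.linspace(0.4, 1.0, 1001)
n_z = 0.027 * (1.0 + z) ** 1.682
print(np.trapz(n_z, z))   # ~0.04, i.e. a few per cent of sightlines

# Peak optical depth for N_HI = 2e20 cm^-2, assuming purely cold gas (T_s = 100 K)
# and a top-hat line of width 10 km/s (fiducial assumptions):
N_HI, T_s, delta_v = 2.0e20, 100.0, 10.0
tau_peak = N_HI / (1.823e18 * T_s * delta_v)
print(tau_peak)   # ~0.11; warmer gas or broader lines give tau ~ 0.01-0.02
\end{verbatim}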
Although 21\,cm absorption-line measurements are particularly sensitive to colder gas, they do not provide a direct measurement of $N_{\rm HI}$ unless $T_{\rm s}$ is known or can be estimated. The spin temperature in \autoref{equation:column_density} is a column-density-weighted harmonic mean over all line-of-sight components of HI in the absorber. In the Milky Way ISM, the harmonic mean spin temperature is $T_{\rm s} \approx 300$\,K, with three HI phases in pressure equilibrium, consisting of cold (CNM; $T_{\rm s} \approx 100$\,K), unstable (UNM; $T_{\rm s} \approx 500$\,K) and warm neutral medium (WNM; $T_{\rm s} \approx 10^{4}$\,K), in mass fractions of 28, 20, and 52\,per\,cent, respectively (\citealt{murray18}). In general we expect that the mass fractions of these phases will vary depending on the physical conditions within each absorber. Cooling of the HI is driven by fine structure emission lines of C\,{\sc ii} and O\,{\sc i }, while heating occurs via the absorption of UV radiation by dust grains and subsequent photoelectric effect. The inferred spin temperature in a given 21-cm absorber is therefore dependent on the gas-phase metallicity, dust abundance and background UV field, and so likely to trace the evolutionary history of the galaxy. There are very few measurements of HI spin temperature available for neutral gas in the redshift range $0.4<z<1$. \cite{ellison12} measured a spin temperature of $90\pm23$\,K at z=0.6 for an intervening line against the z=1.25 radio QSO\,J1431+3952, and \cite{zwaan15} derived a slightly lower value of $64\pm17$\,K for the same system. \cite{kanekar14a} measured a range of spin temperatures from 90\,K to $>1380$\,K (median value $\sim270$\,K) for eight DLA systems in the redshift range $0.4<z<1$. Where specific measurements are unavailable, many authors assume a fiducial spin temperature of 100\,K, which equivalently corresponds to assuming that all of the absorbing gas is CNM. Using this value therefore gives a lower limit to the true column density of HI. In the absence of detailed spin temperature measurements, \cite{braun12} found a tight but strongly nonlinear relation between 21\,cm absorption opacity and HI column density in Local Group galaxies at $z=0$, and suggested that this relation might also be applicable at higher redshift. For a DLA system with $N_{\rm HI} \sim 2\times10^{20}\,{\rm cm}^{-2}$, the \cite{braun12} relation gives a typical observed peak opacity of $\tau \sim0.015$. To probe DLA-like HI column densities in gas clouds with HI spin temperatures typical of galaxy disks, we ideally need to be able to detect 21\,cm absorption lines with a peak optical depth of $\tau \sim0.01-0.02$. To make reliable detections, we also require excellent spectral dynamic range across the telescope's bandwidth. \subsection{The expected locations of DLA systems at $0.4<z<1$} Information on the host galaxies of $z<1.7$ DLA systems is relatively sparse \citep{wolfe05}. \cite{chen03} found that the galaxies that give rise to DLA systems at $z\leq1$\ span a wide range of morphological types, and that some DLAs may be associated with galaxy groups rather than individual galaxies. On the theory side, simulations have presented contradictory results about where most of the HI is located, i.e. the relative contributions from the interstellar medium (ISM) of galaxies, the circum-galactic medium (CGM) and intergalactic gas as a function of cosmic time. 
\citet{van-de-voort12} showed that in the OWLS hydrodynamical simulations, the CGM starts to dominate the HI density at $z\gtrsim 1.5-2$, but other cosmological hydrodynamical simulations find that the ISM HI continues to dominate the cosmic abundance out to higher redshift \citep{dave13}. In Illustris-TNG, \cite{diemer19} showed that the CGM becomes an important contributor of neutral hydrogen at $z\gtrsim 1$. On the other hand, a comparison of cosmological semi-analytic models of galaxy formation (which only account for HI in the ISM of galaxies) with inferred measurements of $\Omega_{\rm HI}$ has been used to argue that the CGM may already be important at $z\approx 1$ \citep{lagos14,lagos18}. The fact that some observations at intermediate redshifts, $0.3 \lesssim z \lesssim 1$, show DLAs to be associated with groups rather than individual galaxies \citep{peroux19, chen19} may indicate that the CGM is a significant reservoir of HI at these redshifts, in contrast to the local ($z\sim0$) Universe where most of the HI is observed to be in the ISM of galaxies. Given these current uncertainties, new observational constraints are likely to play a vital role in distinguishing between the various models. \ctable[ notespar, star, caption={Targets observed with BETA}, label={tab:sample_data}, ]{l l lr rrrl l ll cc} {\tnote[]{Redshift references ({$z_{\rm ref}$}): 6dF=6dF Galaxy Survey \citep{jones09}; Al15=\cite{allison15}; dS94=\cite{di-serego94}; Ha03=\cite{halpern03}; He08=\cite{healey08}; Hu78=\cite{hunstead78}; Hu80=\cite{hunstead80}; Li99=\cite{lidman99}; Ma11=\cite{marshall11}; Mo78=\cite{morton78}; Mo82=\cite{morton82}; Mu84=\cite{murdoch84}; NTT=ESO NTT spectrum, this paper (see Appendix); Pe76=\cite{peterson76}; Pe79=\cite{peterson79}; Sh12=\citet{shaw12}; Ta93=\cite{tadhunter93}; Ti13=\cite{titov13}; Wh88=\cite{white88}; Wi83=\cite{wilkes83}; Wi00=\cite{wisotzki00}; Wr77=\cite{wright77}. \\ Position measurements are from the ICRF VLBI catalogue \citep{ma98} except for the following: PKS\,0903-57 position from \cite{murphy10}; MRC\,1039-474 from \cite{titov13}; PKS\,1421-490 and MRC\,1613-586 from \cite{petrov11}; MRC\,1759-396 from \cite{fomalont03}; PKS\,1830-211 from \cite{fomalont00}; PKS\,2203-18 from \cite{beasley02}. \\ $^a$ For `in-band' radio sources with redshift $z\leq1.0$, we exclude a region within 3000 km\,s$^{-1}$ of the emission redshift (column 8) since this region may be occupied by associated systems. \\ $^b$ See notes on individual objects in Appendix A.}} {\FL \multicolumn{1}{c}{Name} & \multicolumn{1}{c}{RA} & \multicolumn{1}{c}{Dec} & \multicolumn{4}{c}{----- Radio flux density (Jy) ----- } & \multicolumn{1}{c}{$z_{\rm em}$} & \multicolumn{1}{c}{$z_{\rm ref}$} & \multicolumn{1}{c}{$\Delta z$} & \multicolumn{1}{c}{Notes} \\ & \multicolumn{2}{c}{(J2000)} &\multicolumn{1}{c}{\small MRC} & \multicolumn{1}{c}{\small SUMSS} &\multicolumn{1}{c}{\small NVSS} &\multicolumn{1}{c}{\small AT20G} & & & probed$^a$ & \\ \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)}& \multicolumn{1}{c}{(9)} & \multicolumn{1}{c}{(10)} & \multicolumn{1}{c}{(11)} \\ \hline\hline PKS\,0047-579 & 00:49:59.473 & $-57$:38:27.34 & 2.52 & 2.05 & ... & 1.87 & 1.797 & Pe76 & 0.60 & Background \\ PKS\,0208-512 & 02:10:46.200 & $-51$:01:01.89 & 5.48 & 3.37 & ...
& 3.29 & 1.003 & Pe76 & 0.60 & Background \\ PKS\,0302-623 & 03:03:50.631 & $-62$:11:25.55 & 0.86 & 2.46 & ... & 1.31 & 1.351 & He08 & 0.60 & Background \\ PKS\,0438-43 & 04:40:17.180 & $-43$:33:08.60 & 8.12 & 5.86 & ... & 1.95 & 2.863 & Mo78 & 0.60 & Background \\ PKS\,0451-28 & 04:53:14.647 & $-28$:07:37.33 & 2.14 & ... & 2.54 & 1.79 & 2.559 & Wi83 & 0.60 & Background \\ \\ PKS\,0454-46 & 04:55:50.772 & $-46$:15:58.68 & 4.25 & 2.87 & ... & 4.16 & 0.853 & Wi00 & 0.43 & In-band \\ PKS\,0506-61 & 05:06:43.989 & $-61$:09:40.99 & 5.03 & 3.12 & ... & 1.66 & 1.093 & Wr77 & 0.60 & Background \\ PKS\,0537-441 & 05:38:50.362 & $-44$:05:08.94 & 2.56 & 3.52 & ... & 5.29 & 0.894 & Pe76 & 0.47 & In-band \\ PKS\,0637-75 & 06:35:46.508 & $-75$:16:16.82 & 7.89 & 5.38 & ... & 3.14 & 0.653 & Hu78 & 0.23 & In-band \\ PKS\,0743-67 & 07:43:31.612 & $-67$:26:25.55 & 8.61 & 5.68 & ... & 1.22 & 1.512 & dS94 & 0.60 & Background \\ \\ PKS\,0903-57 & 09:04:53.36 & $-57$:35:04.7 & 4.90 & 3.31 & ... & 1.43 & ... & .. & ($\leq0.6$) & Uncertain$^b$ \\ PKS\,0920-39 & 09:22:46.418 & $-39$:59:35.07 & 4.38 & 3.16 & 2.62 & 1.31 & 0.591 & Wh88 & 0.17 & In-band \\ MRC\,1039-474 & 10:41:44.650 & $-47$:40:00.06 & 1.44 & 2.37 & ... & 1.26 & 2.558 & Ti13 & 0.60 & Background$^b$ \\ PKS\,1104-445 & 11:07:08.694 & $-44$:49:07.62 & 1.49 & 2.52 & ... & 1.67 & 1.598 & Pe79 & 0.60 & Background \\ PKS\,1421-490 & 14:24:32.237 & $-49$:13:49.74 & 13.10& 9.68 & ... & 2.64 & 0.662 & Ma11 & 0.24 & In-band \\ \\ PKS\,1424-41 & 14:27:56.298 & $-42$:06:19.44 & 6.39 & 3.86 & ... & 2.74 & 1.522 & Wh88 & 0.60 & Background \\ PKS\,1504-167 & 15:07:04.787 & $-16$:52:30.27 & 2.20 & ... & 2.71 & 1.05 & 0.876 & Hu78 & 0.45 & In-band \\ MRC\,1613-586 & 16:17:17.889 & $-58$:48:07.86 & 3.12 & 3.10 & ... & 2.71 & 1.422 & Sh12 & 0.60 & Background \\ PKS\,1610-77 & 16:17:49.276 & $-77$:17:18.47 & 5.35 & 4.15 & ... & 1.86 & 1.710 & Hu80 & 0.60 & Background \\ PKS\,1622-253 & 16:25:46.892 & $-25$:27:38.33 & 2.36 & ... & 2.52 & 2.06 & 0.786 & dS94 & 0.35 & In-band \\ \\ PKS\,1622-29 & 16:26:06.021 & $-29$:51:26.97 & 2.74 & ... & 2.29 & 1.79 & 0.814 & NTT & 0.39 & In-band$^b$ \\ PKS\,1740-517 & 17:44:25.451 & $-51$:44:43.79 & 5.38 & 8.15 & ... & 1.24 & 0.441 & Al15 & 0.02 & In-band$^b$ \\ MRC\,1759-396 & 18:02:42.680 & $-39$:40:07.90 & 2.56 & 1.38 & 2.27 & 1.41 & 1.319 & Sh12 & 0.60 & Background$^b$ \\ PKS\,1830-211 & 18:33:39.886 & $-21$:03:40.57 & 11.47 & ... & 10.90 & 5.50 & 2.507 & Li99 & 0.60 & Background \\ MRC\,1908-201 & 19:11:09.653 & $-20$:06:55.11 & 1.94 & ... & 2.71 & 2.67 & 1.119 & Ha03 & 0.60 & Background \\ \\ MRC\,1920-211 & 19:23:32.190 & $-21$:04:33.33 & 1.35 & ... & 3.17 & 2.55 & 0.874 & Ha03 & 0.45 & In-band \\ PKS\,2052-47 & 20:56:16.360 & $-47$:14:47.63 & 4.15 & 2.14 & ... & 1.17 & 1.492 & Mu84 & 0.60 & Background \\ PKS\,2155-152 & 21:58:06.282 & $-15$:01:09.33 & 2.51 & ... & 3.02 & 1.90 & 0.672 & Wh88 & 0.25 & In-band \\ PKS\,2203-18 & 22:06:10.417 & $-18$:35:38.75 & 9.73 & ... & 6.40 & 2.03 & 0.619 & Mo82 & 0.19 & In-band \\ PKS\,2326-477 & 23:29:17.704 & $-47$:30:19.12 & 3.21 & 3.05 & ... & 1.42 & 1.304 & 6dF & 0.60 & Background \\ \\ PKS\,2333-528 & 23:36:12.145 & $-$52:36:21.95 & 1.81 & 2.16 & ... & 1.07 & ... & .. & ($\leq0.6$) & Uncertain \\ PKS\,2345-16 & 23:48:02.609 & $-$16:31:12.02 & 2.09 & ... 
& 2.64 & 2.45 & 0.576 & Ta93 & 0.15 & In-band \\ \\ \multicolumn{10}{l}{Total path length for the 30 sources of known redshift: $\Delta z ({\small {\rm BETA}}) = 14.00$} \\ \LL} \ctable[ notespar, star, cap = {Main data table (2)}, caption={Targets observed with ASKAP-12}, label={tab:sample_data2}, ]{l l lr rrrl l ll cc}% {\tnote[]{Redshift references ({$z_{\rm ref}$}): 6dF=6dF Galaxy Survey \citep{jones09}; Ar67=\cite{arp67}; Dr97=\cite{drinkwater97} Fr83=\cite{fricke83}; Ha03=\cite{halpern03}; He10=\cite{hewett10}; Hu80=\cite{hunstead80}; Li99=\cite{lidman99}; Ma96=\cite{marziani96}; O'D91=\cite{odea91}; Os94=\cite{osmer94}; St74=\cite{strittmatter74}; St89=\cite{stickel89}; Wh88=\cite{white88}; Wi83=\cite{wilkes83}; Wi86=\cite{wilkes86}; Wr83=\cite{wright83}. \\ Position measurements are from the ICRF VLBI catalogue \citep{ma98} except for the following: PKS\,0122-00 and PKS\,1136-13 from \cite{beasley02}; PKS\,1229-02, PKS\,1245-19, PKS\,1508-05, PKS\,2123-463 and PKS\,2244-37 from \cite{fey15}. \\ $^a$ For `in-band' radio sources with redshift $z\leq0.77$, we exclude a region within 3000 km\,s$^{-1}$ of the emission redshift (column 8) since this region may be occupied by associated systems. \\ $^b$ See notes on individual objects in Appendix A. \\ $^c$ For PKS\,1229-02 and PKS\,1508-05, the redshift path probed was reduced slightly due to correlator errors over part of the observed frequency range. } } {\FL \multicolumn{1}{c}{Name} & \multicolumn{1}{c}{RA} & \multicolumn{1}{c}{Dec} & \multicolumn{4}{c}{----- Radio flux density (Jy) ----- } & \multicolumn{1}{c}{$z_{\rm em}$ }& \multicolumn{1}{c}{$z_{\rm ref}$} & \multicolumn{1}{c}{$\Delta z$} & \multicolumn{1}{c}{Notes} \\ & \multicolumn{2}{c}{(J2000)} &\multicolumn{1}{c}{\small MRC} & \multicolumn{1}{c}{\small SUMSS} &\multicolumn{1}{c}{\small NVSS} &\multicolumn{1}{c}{\small AT20G} & & & probed$^a$ \\ \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)}& \multicolumn{1}{c}{(9)} & \multicolumn{1}{c}{(10)} & \multicolumn{1}{c}{(11)} \\ \hline \hline \multicolumn{5}{l}{(a) New targets} \\ PKS\,0122-00 & 01:25:28.844 & $-00$:05:55.93 & 1.20 & ... & 1.54 & 1.16 & 1.075 & 6dF & 0.40 & Background \\ PKS\,0237-23 & 02:40:08.175 & $-23$:09:15.73 & 3.67 & ... & 6.26 & 0.90 & 2.223 & Ar67 & 0.40 & Background \\ PKS\,0405-12 & 04:07:48.431 & $-12$:11:36.66 & 8.17 & ... & 2.94 & 1.25 & 0.573 & Ma96 & 0.18 & In-band \\ PKS\,0454-234 & 04:57:03.179 & $-23$:24:52.02 & ... & ... & 1.73 & 3.84 & 1.003 & St89 & 0.40 & Background \\ PKS\,0458-02 & 05:01:12.810 & $-01$:59:14.26 & 2.30 & ... & 2.26 & 1.10 & 2.286 & St74 & 0.40 & Background \\ &&&&& \\ PKS\,0805-07 & 08:08:15.536 & $-07$:51:09.89 & 2.49 & ... & 1.60 & 0.77 & 1.837 & Wh88 & 0.40 & Background \\ PKS\,0834-20 & 08:36:39.215 & $-20$:16:59.50 & 3.53 & ... & 1.97 & 2.67 & 2.752 & Fr83 & 0.40 & Background \\ PKS\,0859-14 & 09:02:16.831 & $-14$:15:30.88 & 3.93 & ... & 2.90 & 1.04 & 1.332 & 6dF & 0.40 & Background$^b$ \\ PKS\,1127-14 & 11:30:07.053 & $-14$:49:27.39 & 5.07 & ... & 5.62 & 1.87 & 1.184 & Wi83 & 0.40 & Background \\ PKS\,1136-13 & 11:39:10.703 & $-13$:50:43.64 & 10.50 & ... & 4.22 & 0.54 & 0.556 & 6dF & 0.16 & In-band \\ &&&&& \\ PKS\,1144-379 & 11:47:01.371 & $-38$:12:11.02 & 0.94 & 0.81 & 1.80 & 1.38 & 1.048 & St89 & 0.40 & Background \\ PKS\,1229-02 & 12:32:00.016 & $-02$:24:04.80 & 3.87 & ... 
& 1.65 & 0.90 & 1.045 & He10 & 0.27$^c$ & Background \\ PKS\,1245-19 & 12:48:23.898 & $-19$:59:18.59 & 8.61 & ... & 5.14 & 0.69 & 1.275 & O'D91 & 0.40 & Background$^b$ \\ PKS\,1508-05 & 15:10:53.592 & $-05$:43:07.42 & 7.71 & ... & 3.57 & 1.27 & 1.185 & Wi86 & 0.36$^c$ & Background \\ PKS\,1935-692 & 19:40:25.528 & $-69$:07:56.97 & 1.70 & 1.75 & ... & 0.52 & 3.154 & Os94 & 0.40 & Background \\ &&&&& \\ PKS\,2106-413 & 21:09:33.189 & $-41$:10:20.61 & 2.66 & 1.82 & ... & 1.63 & 1.058 & Wh88 & 0.40 & Background \\ PKS\,2123-463 & 21:26:30.704 & $-46$:05:47.89 & 1.95 & 1.52 & ... & 0.55 & ... & .. & ($\leq0.4$) & Uncertain$^b$ \\ PKS\,2131-021 & 21:34:10.310 & $-01$:53:17.24 & 1.91 & ... & 1.69 & 2.11 & 1.285 & Dr97 & 0.40 & Background \\ PKS\,2204-54 & 22:07:43.733 & $-53$:46:33.82 & 2.55 & 1.80 & ... & 1.12 & 1.215 & Wi83 & 0.40 & Background \\ PKS\,2223-05 & 22:25:47.259 & $-04$:57:01.39 & 11.89 & ... & 7.41 & 8.32 & 1.404 & Wr83 & 0.40 & Background$^b$ \\ &&&&& \\ PKS\,2244-37 & 22:47:03.917 & $-36$:57:46.30 & 2.95 & 1.71 & 1.26 & 0.94 & 2.252 & Wi83 & 0.40 & Background \\ &&&&& \\ \multicolumn{10}{l}{Total path length for the 20 sources of known redshift: $\Delta z$({\small {\rm ASKAP-12}}) = 7.37} \\ &&&&& \\ \multicolumn{5}{l}{(b) Repeat observations of BETA targets} \\ PKS\,1610-77 & 16:17:49.276 & $-77$:17:18.47 & 5.35 & 4.15 & ... & 1.86 & 1.710 & Hu80 & 0.40 & Background \\ PKS\,1830-211 & 18:33:39.886 & $-21$:03:40.57 & 11.47 & ... & 10.90 & 5.50 & 2.507 & Li99 & 0.40 & Background \\ MRC\,1908-201 & 19:11:09.653 & $-20$:06:55.11 & 1.94 & ... & 2.71 & 2.67 & 1.119 & Ha03 & 0.40 & Background \\ \LL } \section{Sample selection} For ease of interpretation in this pilot study, we aimed to select background radio sources that were both bright and reasonably compact (i.e. had one or more radio components that were unresolved on arcsec scales). We used the Australia Telescope 20\,GHz (AT20G) Bright Source Sample (BSS) catalogue \citep{massardi08} as the basis for our initial BETA target selection, since it contains a high fraction of radio-loud QSOs (with a median redshift of $z\sim1.2$) and so is dominated by distant, compact radio sources. This catalogue covers the whole sky south of declination $-15^\circ$ (apart from a small strip with Galactic latitude $|b|<1.5^\circ$). For our later observations with ASKAP-12, we supplemented the BSS catalogue with more northerly objects from the main AT20G source catalogue \citep{murphy10}. \subsection{Target selection for BETA} We selected the initial pilot sample of bright radio continuum sources to observe with BETA based on two main considerations: \begin{enumerate} \item The limited sensitivity of the BETA telescope means that the background radio sources used as probes need to be bright enough (ideally with flux density $>2$\,Jy at 700-1000\,MHz) to allow us to measure lines with optical depth $\tau\sim0.01$, and to distinguish weak absorption lines from noise fluctuations in a reliable way. \item The background sources should ideally be at redshift $z>1$ to allow us to probe the full $\Delta z=0.6$ redshift path length available with BETA, but we also included bright radio sources for which no optical redshift is currently available since these may be objects where a dusty galaxy intervenes along the line of sight to a distant radio source. 
\end{enumerate} We began by selecting the 130 AT20G BSS objects with 20\,GHz flux density above 1.0\,Jy (38 of these have 20\,GHz flux densities above 2.0\,Jy), and removed Galactic sources and other objects known to have redshift $z<0.4$. This left 112 AT20G BSS sources. We then further restricted the list to objects with an NVSS or SUMSS flux density above 2.0\,Jy (to ensure good S/N with BETA). This left us with the final sample of 32 sources listed in Table \ref{tab:sample_data}. Two of these objects lack a reliable optical redshift. Of the remaining 30 sources, 17 are background sources for the whole ASKAP band (i.e. have redshift $z>1$) and 13 have redshifts that place the HI line within the lowest ASKAP band ($0.4<z<1.0$). For the 13 `in-band' radio sources with redshift $z\leq1.0$, we reduced the assumed redshift path by $\Delta z=0.01$ to exclude the region within 3000 km\,s$^{-1}$ of the emission redshift that may be occupied by associated HI absorption systems. After accounting for this `proximity effect', the total redshift interval probed for intervening DLA systems by the 30 sources of known redshift is $\Delta z=14.00$. \subsection{Target selection for ASKAP-12} We selected some additional bright, compact sources to observe with ASKAP-12 during commissioning time in February 2017. The improved sensitivity of this 12-antenna array allowed us to relax some of the constraints on our earlier BETA sample, and the new sources were chosen from the AT20G catalogue \citep{murphy10} as follows: \begin{enumerate} \item We relaxed the $-15^\circ$ declination limit applied for the BETA sample, and included sources up to declination $0^\circ$. \item As before, we excluded Galactic sources and other objects known to have redshift $z<0.4$. \item We included all the remaining objects with 20\,GHz flux density above 0.5\,Jy (rather than 1.0\,Jy for the BETA sample) and NVSS or SUMSS flux density above 1.5\,Jy (rather than 2.0\,Jy for the BETA sample). \end{enumerate} This left us with the sample of 21 additional sources listed in Table \ref{tab:sample_data2}. Twenty of these objects have a reliable optical redshift, 18 are background sources for the whole ASKAP band (i.e. have redshift $z>1$) and two have redshifts that place the HI line within the lowest ASKAP band ($0.4<z<1.0$). As for our BETA sample, we reduced the assumed redshift path for the two `in-band' sources by $\Delta z=0.01$ to account for proximity effects. The total redshift interval probed for intervening DLA systems by the sources in Table \ref{tab:sample_data2} is $\Delta z=7.37$. Two sources observed with BETA were re-observed with ASKAP-12, and these are also listed in Table \ref{tab:sample_data2}. \subsection{Structure of the target radio sources} Our target sources were selected to be compact, and in most cases we expect their radio emission to be dominated by a single component with angular size smaller than 1\,arcsec. We can assess this in several ways: \begin{itemize} \item {\bf ATCA calibrators:}\ 50 of the 53 sources in Tables 1 and 2 are ATCA calibrator sources\footnote{The three objects not listed as ATCA calibrators are PKS\,0743-67, PKS\,1229-02 and PKS\,2123-463.} listed in the online calibrator database.\footnote{\url{http://www.narrabri.atnf.csiro.au/calibrators/calibrator_database.html}} For these sources, we can use the `defect' and `closure phase' parameters listed in the calibrator database to identify any sources that have resolved emission on arcsec scales at frequencies above 1.4\,GHz.
\item {\bf AT20G 6-km visibilities:}\ 48 of the 53 sources have visibility measurements at 20\,GHz on the longest (6\,km) ATCA baselines \citep{chhetri13}, which allow us to assess the compactness of sources on scales of $\sim150$\,mas. \item {\bf VLBI images:} 36 of the sources in Tables 1 and 2 have VLBI images from \cite{ojha05,ojha10} or \cite{pushkarev17} that map out the structure of compact components on scales as small as 1-10\,mas. \end{itemize} While these data are useful, they provide at best an imperfect picture of the structure of these sources in the frequency range observed by ASKAP. As noted by \cite{kanekar14a}, we would ideally like to have lower-frequency VLBI images that allow us to estimate what fraction of the 700-1000\,MHz radio emission seen by ASKAP (which had $\sim1$\,arcmin resolution at the time of our commissioning observations) is located in compact components less than 10--20\,mas in angular size. Table \ref{tab:sample_res} lists the sources in our sample that are known to have some resolved continuum emission on arcsec scales (though each of these objects also has a strong compact core). These objects were identified either because the ATCA calibrator database showed evidence for structure on arcsecond scales, or because the \cite{chhetri13} 6-km visibility had a value $<0.9$, indicating the presence of resolved emission on scales larger than about 0.15\,arcsec. We then carried out a literature search for arcsec-scale radio images of these objects, and the results are summarized in column (5) of Table \ref{tab:sample_res}. PKS\,1830-211, also listed in this table, is a special case since the resolved emission is the result of gravitational lensing by a foreground galaxy, rather than being intrinsic to the source. Most of the sources with published VLBI images at 5--15\,GHz \citep{ojha05,ojha10,pushkarev17,petrov19} appear to have a significant fraction (typically 20-60\%) of their flux density in components smaller than $\sim10$\,mas. This corresponds to a projected linear size of roughly 50\,pc at $z\sim0.4$, 70\,pc at $z\sim0.7$ and 80\,pc at $z\sim1.0$, which is slightly smaller than the expected size of individual HI clouds in galaxy disks \citep[estimated as $\sim100$\,pc; ][]{braun12}. Even without a more detailed knowledge of the source structure, therefore, it appears that most of our target sources remain quite compact on scales as small as 10-20\,mas, making them effective probes for intervening HI absorption-line systems (because there is less need to account for a non-unity covering factor $f$). This conclusion is supported by the work of \cite{horiuchi04}, who observed a large sample of powerful flat-spectrum radio AGN at 5\,GHz with both the VLBA and VSOP. They found that a typical AGN in their sample had about 50\% of its radio emission in a component smaller than 10\,mas in size, with around 40\% of this milliarcsec-scale emission (20\% of the total emission) coming from a radio core with an average size of 0.2\,mas. \ctable[ notespar, star, caption={Sources known to have resolved continuum emission on arcsec scales. Column 4 lists the 20\,GHz visibilities on the longest (6-km) ATCA baseline, from \protect\cite{chhetri13}. Sources with 6k\_vis$\geq0.9$ are expected to have almost all their high-frequency radio emission originating from a region less than about 120\,mas in angular size.
}, label = {tab:sample_res}, ]{l l ll llll l ll cc} {\tnote[]{References: Hi83=\cite{hintzen83}; Ja91=\cite{jauncey91}; Kr92=\cite{kronberg92}; Ma05=\cite{marshall05}; Mc12=\cite{mcconnell12}; Sa89=\cite{saikia89}; Sa04=\cite{sambruna04} }} {\FL \multicolumn{1}{l}{Name} & \multicolumn{1}{c}{$z_{\rm ref}$} & \multicolumn{1}{c}{Cal?} & \multicolumn{1}{c}{6k\_vis} & \multicolumn{1}{c}{Notes} & \multicolumn{1}{c}{Ref.} \\ \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} \\ \hline\hline PKS\,0405-12 & 0.573 & C & 0.94 & Triple at 1.4\,GHz, strong core, LAS$\sim40$\,arcsec & Sa04 \\ PKS\,0903-57 & .. & C & 0.69 & Compact double at 5\,GHz, LAS $\sim3$\,arcsec & Ma05 \\ PKS\,1136-13 & 0.556 & C & 0.45 & Triple at 1.4\,GHz, strong core, LAS$\sim20$\,arcsec & Sa04 \\ PKS\,1229-02 & 1.045 & .. & 0.89 & Triple at 1.6\,GHz, LAS $\sim20$\,arcsec & Kr92 \\ PKS\,1421-490 & 0.662 & C & 0.76 & Strong core at 5\,GHz with $\sim1$\,arcsec jet & Ma05 \\ PKS\,2123-463 & .. & ... & 0.91 & Two components at 5\,GHz, LAS$\sim4$\,arcsec & Mc12 \\ &&& \\ PKS\,1830-211 & 2.507 & C & 0.19 & Lensed double/ring at 1.7\,GHz, LAS$\sim1$\,arcsec & Ja91 \LL } \section{Observations} The observational techniques and data reduction used were similar to those described by \cite{allison15} and \cite{allison17}, and we refer the reader to those papers for further details. \subsection{Observations with BETA} We observed the 32 objects listed in Table \ref{tab:sample_data} with BETA over the period from July 2014 to February 2016. The total integration time was typically 3-5 hours for each object, with between 4 and 6 BETA dishes in the array. The BETA telescope was used as an engineering testbed throughout its operation, so there were sometimes technical problems that made part or all of an observation unusable. When this occurred, the observation was repeated until good-quality data were obtained. In principle up to nine beams could be formed for wide-field imaging using the Mark I phased array feeds (PAFs) on BETA, positioned anywhere within the 30 deg$^2$ field of view of the PAF. However, for our targeted observations presented here, we used a single PAF beam centred at the position of the target source. To obtain initial solutions for the complex antenna gains and to calibrate the flux density scale \citep[based on the model of][]{reynolds94}, we accompanied each observation with a short integration on PKS\,B1934$-$638 between 5 and 15\,min. The expected uncertainty in the flux density scale is 2--3\% \citep{heywood16}. Our observations with BETA were carried out exclusively using the lower frequency band, between 711.5 and 1015.5\,MHz, equivalent to \mbox{H\,{\sc i}} redshifts between $z_{\rm HI} = 0.4$ and $1.0$. The fine channelization generated 16,416 channels across the 304\,MHz bandwidth, with an effective spectral resolution between approximately 5.5 and 7.8\,km\,s$^{-1}$. The full width at half power of the PAF beams is approximately 1.7 degrees at the band centre, and the spatial resolution of BETA is approximately 1\,arcminute (using uniform weighting) so that we do not expect any of our objects to be spatially resolved. \subsection{Observations with ASKAP-12} We observed the 23 objects listed in Table \ref{tab:sample_data2} with ASKAP-12 during January--February 2017. The total integration time was typically 2 hours for each object. 
ASKAP-12 usually had between 12 and 14 antennas operational for each observation, each fitted with Mark II PAFs with improved sensitivity at 1400\,MHz \citep[see][]{chippendale15}. Up to 36 PAF beams could be electronically formed to fully sample the 30 deg$^2$ field of view. However, for our observations we formed only a single beam, centred on the target source. There were two reasons for this. First, calibration of the complex antenna gains and flux-density scale in each PAF beam requires a separate observation of PKS\,1934-638, so forming only a single beam greatly improves observing efficiency for this work. Second, the ASKAP-12 backend capacity was limited during commissioning, and forming a single PAF beam allowed us to achieve a larger spectral bandwidth. We observed simultaneously at all frequencies between 799.5 and 1039.5\,MHz, spanning \mbox{H\,{\sc i}} redshifts between $z_{\rm HI} = 0.37$ and 0.77. The fine channelisation produced 12960 channels across the 240\,MHz bandwidth, with an effective spectral resolution between 5.3 and 6.9\,km\,s$^{-1}$. The PAF beam full width at half power at the band centre is approximately 1.6\,degrees and the spatial resolution is approximately 30\,arcsec, again meaning that none of our targets are expected to be spatially resolved by these observations. \subsection{Data reduction} The ingested data from the ASKAP correlator were recorded in measurement set format, and so initial flagging (autocorrelations and amplitude thresholding) and splitting of the data were performed using the \textsc{CASA} package \citep{mcmullin07}. Subsequent automated flagging, calibration, imaging and continuum subtraction of the data were carried out using the \textsc{MIRIAD} package \citep{sault95}. The full-spectral-resolution visibilities were split into sub-band chunks, so as to enable efficient parallelization of the data processing and to remove spectral discontinuities caused by the PAF beams. The PAF beams were formed electronically in fixed frequency intervals, every 4 or 5\,MHz for BETA and every 1\,MHz for ASKAP-12. These intervals generated discontinuous jumps in the complex gain response of the telescope as a function of frequency, which therefore needed to be corrected during bandpass calibration and refined further in continuum subtraction. In the case of BETA, these intervals equate to velocities greater than 1000\,km\,s$^{-1}$ and are therefore much larger than the typical linewidths expected for absorption. We corrected for this simply by splitting the data into sub-band chunks and performing bandpass calibration and continuum subtraction individually on these data. However, in the case of the 1\,MHz intervals used in ASKAP-12 this approach could lead to removal of absorption lines wider than 300\,km\,s$^{-1}$ during continuum subtraction. We therefore split the ASKAP-12 data into sub-band chunks of 4\,MHz (i.e. four beamforming intervals). To correct for the discontinuities in ASKAP-12 data that occur every 1\,MHz, we solved for the bandpass per channel using PKS\,1934-638 and then recovered S/N by using GPEDIT to smooth the solutions with a 10-channel Hanning window with a break every 1\,MHz. Outliers in the bandpass solutions, which are generated either by hardware glitches or radio-frequency interference, were identified using the interquartile range and replaced through interpolation before smoothing. Separately, a single full-bandwidth data set, averaged to 1\,MHz resolution, was used to obtain high signal-to-noise continuum images for self-calibration.
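The per-channel cleaning and block-wise smoothing of the bandpass solutions described above can be illustrated with a short sketch. This is only a schematic outline of the procedure, not the \textsc{MIRIAD}/GPEDIT implementation used for the actual processing; the channels-per-block value, outlier threshold and window length below are assumptions made for illustration only.
\begin{verbatim}
import numpy as np

def clean_and_smooth_bandpass(bp, chans_per_block=54, window=10, iqr_factor=3.0):
    """Replace outliers in a 1-D bandpass solution and Hanning-smooth it,
    restarting the smoothing at every beamforming interval so that the
    1-MHz discontinuities are preserved rather than smeared out.

    bp              : per-channel bandpass amplitudes for one antenna/beam
    chans_per_block : channels per 1-MHz beamforming interval (~54 at 18.5 kHz)
    """
    bp = np.asarray(bp, dtype=float).copy()

    # Flag outliers (RFI spikes, hardware glitches) using the interquartile
    # range, then replace them by interpolation over neighbouring good channels.
    q1, q3 = np.percentile(bp, [25, 75])
    iqr = q3 - q1
    bad = (bp < q1 - iqr_factor * iqr) | (bp > q3 + iqr_factor * iqr)
    good = np.flatnonzero(~bad)
    bp[bad] = np.interp(np.flatnonzero(bad), good, bp[good])

    # Smooth with a normalised Hanning window, one beamforming block at a time,
    # so that the smoothing never crosses a 1-MHz block boundary.
    win = np.hanning(window)
    win /= win.sum()
    for start in range(0, bp.size, chans_per_block):
        block = bp[start:start + chans_per_block]
        bp[start:start + chans_per_block] = np.convolve(block, win, mode="same")
    return bp
\end{verbatim}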
Initial solutions to the antenna gains as a function of time were obtained using a sky model based on the catalogues of SUMSS \citep{mauch03}, MGPS2 \citep{murphy07} and NVSS \citep{condon98}. Further iterative refinement of the solutions to smaller time intervals was then carried out using self-calibration based on the imaged continuum data. These gain solutions were then applied to the full-spectral-resolution data in each sub-band chunk. Continuum subtraction was carried out separately on each sub-band chunk; first using the CLEAN algorithm to generate a continuum model, which was then subtracted from the visibilities using UVMODEL, followed by UVLIN to fit and subtract a second-order polynomial from the residuals. Data cubes were formed by imaging the continuum-subtracted visibilities in each sub-band chunk, and a single spectrum was constructed at the position of peak emission from the continuum source. The continuum flux density was also measured at the same position in each sub-band chunk, so that the fractional absorption could be accurately calculated as a function of frequency. In cases where an object was observed on multiple occasions we formed a single spectrum by carrying out an inverse-variance-weighted average. \subsection{Data tables} Tables \ref{tab:beta_obs} and \ref{tab:askap12_obs} list some key parameters of the objects observed with BETA and ASKAP-12 respectively, arranged as follows: \vspace{0.1cm} \\ \noindent (1) Source name from Table \ref{tab:sample_data} or \ref{tab:sample_data2}\\ (2) Total observing time (in hours) \\ (3) Mean continuum flux density of the source (in mJy) averaged across the full continuum frequency band observed \\ (4) Standard deviation of the mean flux density - for these strong sources, this is mainly a measure of how much the continuum flux density varies across the 240--300\,MHz band \\ (5) Typical (median) rms noise (in mJy) in a single spectral channel \\ (6) Typical 1$\sigma$ sensitivity in optical depth for a single spectral channel. \\ \ctable[ notespar, cap = {BETA obs}, caption = {Measurements from BETA observations of the target sources in Table \ref{tab:sample_data}. See \S3.4 of the text for a description of each column. For objects with more than one observation, the values listed are for the spectrum with the best optical-depth sensitivity $\sigma_{\tau}$.
}, label = {tab:beta_obs} ]{lcrrclr r}% { }{ \FL \multicolumn{1}{c}{Name} & \multicolumn{1}{c}{t} &\multicolumn{1}{c}{$S_{\rm cont}$} & \multicolumn{1}{c}{$\Delta$S} & \multicolumn{1}{c}{rms/ch} & \multicolumn{1}{c}{$\sigma_{\tau}$} \\ & \multicolumn{1}{c}{(h)} & \multicolumn{1}{c}{mJy} & \multicolumn{1}{c}{mJy} & \multicolumn{1}{c}{mJy} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} \\ \hline \hline PKS\,0047-579 & 5 & 2198 & 81 & 21.7 & 0.0099 \\ PKS\,0208-512 & 3 & 2748 & 163 & 30.3 & 0.0110 \\ PKS\,0302-623 & 5 & 4426 & 429 & 43.0 & 0.0097 \\ PKS\,0438-43 & 3 & 5455 & 354 & 37.2 & 0.0068 \\ PKS\,0451-28 & 5 & 2430 & 94 & 23.4 & 0.0096 \\ \\ PKS\,0454-46 & 5 & 3680 & 130 & 22.9 & 0.0062 \\ PKS\,0506-61 & 3.5 & 3154 & 263 & 31.0 & 0.0098 \\ PKS\,0537-441 & 5 & 4318 & 183 & 22.8 & 0.0053 \\ PKS\,0637-75 & 4 & 6250 & 663 & 26.4 & 0.0042 \\ PKS\,0743-67 & 4 & 5276 & 407 & 24.2 & 0.0046 \\ \\ PKS\,0903-57 & 3 & 3156 & 194 & 28.4 & 0.0090 \\ PKS\,0920-39 & 5 & 3233 & 152 & 25.5 & 0.0079 \\ MRC\,1039-474 & 5 & 1867 & 77 & 24.4 & 0.0130 \\ PKS\,1104-445 & 4 & 2417 & 502 & 25.0 & 0.0103\\ PKS\,1421-490 & 2 & 9360 & 512 & 36.3 & 0.0039 \\ \\ PKS\,1424-41 & 3.5 & 4714 & 153 & 27.9 & 0.0059 \\ PKS\,1504-167 & 5 & 1634 & 73 & 23.6 & 0.0144 \\ MRC\,1613-586 & 5 & 4586 &104 & 24.4 & 0.0053 \\ PKS\,1610-77 & 5 & 3933 & 206 & 16.5 & 0.0042 \\ PKS\,1622-253 & 5 & 2222 & 102 & 29.5 & 0.0133 \\ \\ PKS\,1622-29 & 5 & 2755 & 115 & 23.7 & 0.0086 \\ PKS\,1740-517 & & \\ MRC\,1759-396 & 4 & 1489 & 29 & 29.0 & 0.0195 \\ PKS\,1830-211 & 3.5 & 12620 & 521 & 37.6 & 0.0030 \\ MRC\,1908-201 & 5 & 1768 & 57 & 23.7 & 0.0134 \\ \\ MRC\,1920-211 & 5 & 2331 & 175 & 24.8 & 0.0106 \\ PKS\,2052-47 & 5 & 2523 & 140 & 22.1 & 0.0088 \\ PKS\,2155-152 & 5 & 4096 & 88 & 24.9 & 0.0061 \\ PKS\,2203-18 & 3 & 7395 & 416 & 58.0 & 0.0078 \\ PKS\,2326-477 & 4 & 4169 & 151 & 35.3 & 0.0085 \\ \\ PKS\,2333-528 & 5 & 2233 & 58 & 26.8 & 0.0120 \\ PKS\,2345-16 & 5 & 2684 & 39 & 27.4 & 0.0102 \LL } \ctable[ notespar, cap = {ASKAP-12 obs}, caption = {Measurements from ASKAP-12 observations of the target sources in Table \ref{tab:sample_data2} . See \S3.4 of the text for a description of each column. 
}, label = {tab:askap12_obs} ]{lcrrclr r}% { }{ \FL \multicolumn{1}{c}{Name} & \multicolumn{1}{c}{t} &\multicolumn{1}{c}{$S_{\rm cont}$} & \multicolumn{1}{c}{$\Delta$S} & \multicolumn{1}{c}{rms/ch} & \multicolumn{1}{c}{$\sigma_{\tau}$} \\ & \multicolumn{1}{c}{(h)} & \multicolumn{1}{c}{mJy} & \multicolumn{1}{c}{mJy} & \multicolumn{1}{c}{mJy} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)}\\ \hline \hline PKS\,0122-00 & 2 & 1633 & 75 & 12.6 & 0.0078\\ PKS\,0237-23 & 2 & 6824 & 258 & 12.6 & 0.0018 \\ PKS\,0405-12 & 2 & 2638 & 50 & 12.5 & 0.0044 \\ PKS\,0454-234 & 2 & 2020 & 10 & 12.3 & 0.0061 \\ PKS\,0458-02 & 2 & 1376 & 13 & 15.1 & 0.0110 \\ \\ PKS\,0805-07 & 2 & 1895 & 27 & 13.6 & 0.0072 \\ PKS\,0834-20 & 2 & 2492 & 21 & 12.8 & 0.0051 \\ PKS\,0859-14 & 2 & 3992 & 67 & 12.7 & 0.0032 \\ PKS\,1127-14 & 2 & 6268 & 81 & 12.8 & 0.0021 \\ PKS\,1136-13 & 2 & 6016 & 265 & 12.2 & 0.0020 \\ \\ PKS\,1144-379 & 2 & 1415 & 76 & 12.4 & 0.0088 \\ PKS\,1229-02 & 2 & 1927 & 121 & 13.1 & 0.0069 \\ PKS\,1245-19 & 2 & 6644 & 338 & 12.1 & 0.0013 \\ PKS\,1508-05 & 2 & 5248 & 100 & 12.8 & 0.0024 \\ PKS\,1935-692 & 2 & 1641 & 39 & 16.2 & 0.0064 \\ \\ PKS\,2106-413 & 2 & 1790 & 13 & 12.4 & 0.0069 \\ PKS\,2123-463 & 2 & 1521 & 46 & 11.3 & 0.0075 \\ PKS\,2131-021 & 2 & 1957 & 14 & 11.1 & 0.0057 \\ PKS\,2204-54 & 2 & 2356 & 20 & 11.9 & 0.0050 \\ PKS\,2223-05 & 2 & 8356 & 359 & 21.9 & 0.0012 \\ \\ PKS\,2244-37 & 2 & 1581 & 87 & 10.9 & 0.0069 \\ \\ \multicolumn{5}{l}{Repeat observations} \\ PKS\,1610-77 & 2 & 3584 & 134 & 12.7 & 0.0035 \\ MRC\,1908-201 & 2 & 1968 & 76 & 13.0 & 0.0066 \LL } \subsection{Quality of the ASKAP radio spectra} Figure \ref{fig:full_spec} shows a full ASKAP-12 spectrum of one of our target sources, PKS\,0237-23. The quality of the spectrum is typical of those obtained during commissioning time, and the band is completely free of terrestrial and satellite-generated RFI. The rms noise level in this spectrum is roughly constant with frequency across the full 288\,MHz ASKAP band, giving a similar detection sensitivity at all redshifts sampled, and this is also true of the radio spectra of our other BETA and ASKAP-12 target sources. However, in the case of ASKAP-12 we found an error in the firmware weights that are used to correct the 1-MHz channelisation. This generates features at the level of 1\,per\,cent in gain amplitude every 1\,MHz, which we account for when assessing detection reliability. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{J024008-230916_average_spectrum.jpg} \caption[]{The full ASKAP-12 spectrum of PKS\,0237-23 (AT20G J024008-230916), a radio-loud quasar at $z=2.223$. The median rms noise in this spectrum is 12.6\,mJy per 18\,kHz channel. Some noise spikes are visible, and one small region (near 930\,MHz) is missing data because of a correlator-block failure. The light-grey band is set at $\pm$5 times the rms noise. } \label{fig:full_spec} \end{figure*} \subsection{Detection limits and sensitivity} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{obs4a.pdf} \includegraphics[width=0.49\textwidth]{obs4b.pdf} \caption[]{Detection limits in HI column density for the observations in Tables \ref{tab:sample_data} and \ref{tab:sample_data2}, assuming a covering factor $f=1$, and a spin temperature of 100\,K (blue squares and open circles) or 600\,K (red stars and open triangles). 
The left-hand plot assumes a $5\sigma$\ detection limit for a narrow line of width 7\,km\,s$^{-1}$, while the right-hand plot shows a $5.5\sigma$\ detection limit for a broader line with a width of 30\,km\,s$^{-1}$ as discussed by Allison et al.\ (2020). The horizontal dotted line in each plot shows the DLA column density limit of $2\times10^{20}$\,cm$^{-2}$. } \label{fig:phot3} \end{figure*} As can be seen from Tables 3 and 4, the rms noise in our spectra was typically 20--30\,mJy per spectral channel for BETA observations and around 12\,mJy per channel for observations with ASKAP-12. This corresponds to a 5$\sigma$ detection limit in optical depth of around 0.05 for BETA and 0.02 for ASKAP-12. Figure \ref{fig:phot3} plots the detection limits in HI column density for each of the spectra in Tables \ref{tab:sample_data} and \ref{tab:sample_data2}, assuming a covering factor $f=1$ and HI spin temperatures of 100\,K (blue points) and 600\,K (red points). For $T_{\rm s} = 100$\,K, we should be able to detect a minimal DLA absorber with $N_{\rm HI}\sim 2\times10^{20}$\,cm$^{-2}$ for all our targets across the full range of redshifts probed. The red points in Figure \ref{fig:phot3} are included to show that DLAs with spin temperatures above about 600\,K are unlikely to be detectable in the current pilot survey. Figure \ref{fig:dz} gives a more detailed look at the redshift and spin-temperature sensitivity of the sample as a whole. In these plots, the vertical axis shows the fraction of our sightlines on which we could detect (a) a sufficiently strong intervening HI absorption line at redshift $z$, and (b) a minimal DLA system (with $N_{\rm HI}=2\times10^{20}$\,cm$^{-2}$ and covering factor $f=1$). This `fraction of sightlines' roughly corresponds to the total redshift path-length $\Delta z$ over which each kind of line could be detected in our survey. While this path-length is roughly constant with redshift (dropping off at $z>0.8$ because of the frequency limit of the ASKAP-12 spectra), the right-hand plot shows that (as expected for a 21\,cm survey) we are much more sensitive to absorption lines from gas with a low HI spin temperature than to lines that originate in warmer gas. Figure \ref{fig:dz} shows that our 50\% completeness corresponds to $T_{\rm s}\sim300$\,K. We note that this is similar to the harmonic mean spin temperature of the Milky Way ISM \citep{murray18}. \begin{figure*} \centering \includegraphics[width=0.48\textwidth]{dz1.pdf}\includegraphics[width=0.48\textwidth]{dz2.pdf} \caption[]{(left) Fractional redshift coverage for the radio sources in Tables 1 and 2. This is set mainly by the frequency range used for the BETA and ASKAP-12 observations, so we probe fewer sightlines at $z>0.77$. (right) The fraction of sightlines for which we could detect (at the 5-$\sigma$ level) a minimal DLA system with $N_{\rm HI}=2\times10^{20}$\,cm$^{-2}$, assuming a line width of 7\,km\,s$^{-1}$ and covering factor $f=1$. For spin temperatures above 250-300\,K, detections of most DLA systems are only possible against the strongest continuum sources in our sample. } \label{fig:dz} \end{figure*} \section{Results}\label{sec:results} \subsection{Detection of redshifted 21\,cm absorption lines} The total path-length over which we could detect 21\,cm absorption lines was $\Delta z$ = 21.37 for the 50 background radio sources with known optical redshifts. We made no prior assumptions about the redshift of any line, so this is a genuinely `blind' line search in the spectral domain.
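The column-density sensitivity of this blind search follows directly from the per-channel noise values in Tables \ref{tab:beta_obs} and \ref{tab:askap12_obs}. As a rough cross-check of the limits quoted above (using representative values only), a line that just fills a single resolution element of width $\Delta V \simeq 7$\,km\,s$^{-1}$ has a $5\sigma$ column-density limit, from \autoref{equation:column_density}, of
\[
N_{\rm HI}^{\rm lim} \;\simeq\; 1.823\times10^{18}\, T_{\rm s}\,(5\sigma_{\tau})\,\Delta V
\;\approx\; 3\times10^{19}\left(\frac{T_{\rm s}}{100\,{\rm K}}\right)\left(\frac{\sigma_{\tau}}{0.005}\right)\,{\rm cm}^{-2},
\]
so for a typical ASKAP-12 spectrum ($\sigma_{\tau}\approx0.005$) a minimal DLA with $N_{\rm HI}=2\times10^{20}$\,cm$^{-2}$ is comfortably detectable if $T_{\rm s}\approx100$\,K, while for $T_{\rm s}\gtrsim600$\,K the limit approaches or exceeds the DLA threshold, in line with Figures \ref{fig:phot3} and \ref{fig:dz}.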
We used the Bayesian detection method developed by \citet{allison12} to search for absorption lines in each spectrum. This method uses the Bayes factor to assign significance to a particular feature in the spectrum, defined as the ratio of the Bayesian evidence for a model comprising a Gaussian line plus noise to that for a null model comprising the noise alone. We considered all features with a Bayes factor $B$ satisfying $\ln B > 1$. Given the aforementioned error from applying incorrect coarse-channelisation weights, and failures of individual correlator cards, further visual inspection of detected features was required. Upon inspection, we obtained four reliable detections of intervening absorption lines and one associated absorption line along the lines of sight to the 53 radio sources listed in Tables 1 and 2. The associated absorption-line system, at $z=0.44$\ in PKS\,1740-517, has been studied in detail by \cite{allison15} and \cite{allison19} and is not discussed further in this paper. The properties of the four intervening absorption lines are summarised in Table \ref{tab:beta_det} and discussed below. Two of these lines (towards PKS\,0834-20 and PKS\,1610-77) are new detections, while the other two are re-detections of previously-known absorption-line systems. \subsection{Intervening HI absorption towards PKS\,0834-20} We made a new detection of 21\,cm HI absorption at $z=0.591$ along the line of sight to the background ($z=2.75$) radio source PKS\,0834-20. Figure \ref{fig:abs_new} shows the ASKAP spectrum at the position of the absorption line. The peak optical depth of this line ($\tau=0.14$) is the highest of our four ASKAP detections. The background radio source, PKS\,0834-20, is a radio-loud blazar, and low-frequency observations show a broad peak in the radio spectrum near 500\,MHz \citep{callingham17}. The 15\,GHz VLBA continuum image published by the MOJAVE team \citep{pushkarev17} shows a core-jet structure with a total extent of 6.4\,mas, corresponding to 42\,pc at the redshift of the 21\,cm absorption line. \begin{figure*} \centering \vspace*{0.2cm} \includegraphics[width=0.48\textwidth]{PKS0834-20_spectrum.pdf} \includegraphics[width=0.475\textwidth]{PKS1610-77_spectrum.pdf} \vspace*{0.5cm} \includegraphics[width=0.47\textwidth]{PKS1830-211_spectrum.pdf} \caption[]{Intervening absorption lines detected by ASKAP towards the radio sources PKS\,0834-20, PKS\,1610-77 and PKS\,1830-211. The grey shading in each plot shows $\pm\tau_{\rm lim}$, where $\tau_{\rm lim}$ is the 1$\sigma$ limit in optical depth listed in Tables 4 and 5. } \label{fig:abs_new} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.48\textwidth]{PKS1229-02_spectrum_ASKAP12.pdf} \includegraphics[width=0.48\textwidth]{PKS1229-02_spectrum_ASKAP36.pdf} \caption[]{Intervening absorption at $z=0.3950$ along the line of sight to the background radio source PKS\,1229-02 at $z=1.045$. \\ (left) The original ASKAP-12 spectrum; (right) a new ASKAP-36 spectrum of the same sightline. The grey shading shows $\pm\tau_{\rm lim}$, where $\tau_{\rm lim}$ is the 1$\sigma$ limit in optical depth listed in Tables 4 and 5. } \label{fig:det_1229} \end{figure*} \subsection{Intervening absorption towards PKS\,1229-02} \label{sec:1229} A 21\,cm HI absorption line at $z=0.395$ was first detected along a sightline to PKS\,1229-02 by \cite{brown79} with the NRAO 140-ft (43\,m) single-dish radio telescope, and this observation was motivated by the presence of strong Mg\,II absorption in the optical spectrum \citep{kinman67}.
\cite{Wolfe80} detected the same HI absorption line with the Arecibo telescope, again using Mg\,II pre-selection. Figure \ref{fig:det_1229} shows the ASKAP detection of intervening 21\,cm absorption at $z=0.395$ along the line of sight to the radio-loud QSO PKS\,1229-02. The original ASKAP-12 detection had relatively low S/N, and the measured optical depth of the HI line was significantly lower than the values observed by \cite{brown79} and \cite{Wolfe80}. To test whether this was the result of variability in the line profile over time, we re-observed PKS\,1229-02 for 8 hours in a test observation with the full 36-antenna ASKAP array in May 2019. The new ASKAP-36 spectrum is also shown in Figure \ref{fig:det_1229}, and the HI optical depth measured from this higher-quality ASKAP spectrum is similar to the \cite{brown79} and \cite{Wolfe80} values. The PKS\,1229-02 absorption system is well-studied at optical and UV wavelengths. The strong metal absorption lines in the optical spectrum at $z=0.395$ have been studied by \cite{briggs85} and \cite{lanzetta92}, and imply a relatively high metallicity for the absorbing gas. The PKS\,1229-02 absorption system is the only one in our current sample for which an HI column density has also been measured from the Ly$\alpha$ line. \cite{boisse98} used the HST to observe the damped Lyman-$\alpha$ line in the UV at $z=0.395$, from which they measured an HI column density of $N_{\rm HI}$ = $5.6\times10^{20}$\,cm$^{-2}$. In principle, this allows us to measure the HI spin temperature $T_{\rm s}$ if we assume that the Ly$\alpha$ and 21\,cm absorption measurements are along the same sightline. Unlike most of the other sources in our sample, PKS\,1229-02 contains extended radio structure on scales out to $\sim18$\,arcsec \citep{hintzen83,kronberg92} in addition to a compact central source. This complicates the analysis of the absorption-line spectrum, and we adopt the covering factor of $f=0.42$ derived by \cite{kanekar09}. From our ASKAP spectrum, we derive a value of $T_{\rm s} =102\pm12$\,K for the PKS\,1229-02 absorption system. This is consistent with the previously-published estimates of $T_{\rm s}$ summarized in Table \ref{tab:spin}, which range from 95 to 170\,K. \subsection{Intervening absorption towards PKS\,1610-77} With BETA, we made a new detection of intervening 21\,cm absorption at $z=0.4503$ along the line of sight to the radio-loud QSO PKS\,1610-77, as shown in Figure \ref{fig:abs_new}. The line has at least two velocity components, with a separation of $\approx 30$\,km\,s$^{-1}$. The VLBI image published by \cite{ojha10} shows a curved jet extending about 5\,mas from the core, along with some diffuse emission on larger scales. At the redshift of the HI absorption line, the angular extent of this jet corresponds to a linear size of $\sim30$\,pc. \subsection{Intervening absorption towards PKS\,1830-211} PKS\,1830-211 is a gravitationally-lensed radio source \citep{subrahmanyan90, jauncey91} with an intervening galaxy at $z=0.886$ \citep{wiklind96} and a possible second galaxy at $z = 0.192$ \citep{lovell96}. \cite{chengalur99} first detected HI (and OH) absorption at $z = 0.886$ in this system using the Westerbork Synthesis Radio Telescope. \cite{allison17} obtained ASKAP spectra of PKS\,1830-211 from 700-1530\,MHz, re-detecting HI absorption at $z = 0.192$ and $z = 0.886$, and OH absorption at $z = 0.886$.
Comparing spectra for several epochs spanning 20 years, they identified variability in the HI line, consistent with changes in the background quasar. We include the $z = 0.886$ HI line detected with ASKAP in our sample here. \ctable[ notespar, star, cap = {BETA obs}, caption = {Parameters of the detected 21\,cm absorption lines. The estimated values of $N_{\rm HI}$\ assume $T_{\rm s}=100$\,K and $f=1$ (col 6), and equation 9 of Braun 2012 (col 7). Since there can be multiple velocity components in a line, an effective width was determined by dividing the integrated optical depth in this table by the peak value \protect\cite[see e.g.][]{dickey82,allison13,allison14} - these effective velocity widths are listed in column 8. }, label = {tab:beta_det} ]{llcllrrrr}% { }{ \FL \multicolumn{1}{c}{Name} & \multicolumn{1}{c}{$z_{\rm abs}$} & \multicolumn{1}{c}{$N_{\rm Gauss}$} & \multicolumn{1}{c}{$\tau_{\rm pk}$} & \multicolumn{1}{c}{$\int \tau\ {dV}$} &\multicolumn{2}{c}{Estimated HI column} & \multicolumn{1}{c}{Effective} & \multicolumn{1}{c}{Notes} \\ & & & & \multicolumn{1}{c}{(km.s$^{-1}$)} & \multicolumn{2}{c}{ density $N_{\rm HI}$ (cm$^{-2}$)} & \multicolumn{1}{c}{width} \\ &&&&& \multicolumn{1}{c}{[$T_{\rm s}$=100\,K]} & [Braun2012] & \multicolumn{1}{c}{(km.s$^{-1}$)} & \\ \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)} & \multicolumn{1}{c}{(9)} \\ \hline \hline PKS\,0834-20 & 0.5906 & 3 & $0.1596\pm0.0051$ & $6.178\pm0.112$ & $1.1\times10^{21}$ & $1.2\times10^{21}$ & 38.7 & ASKAP-12 \\ PKS\,1229-02 & 0.3950 & 3 & $0.0699\pm0.0012$ & $1.233\pm0.021$ & $2.2\times10^{20}$ & $5.3\times10^{20}$ & 17.6 & ASKAP-12/36 \\ PKS\,1610-77 & 0.4503 & 2 & $0.0491\pm0.0027$ & $1.502\pm0.058$ & $2.7\times10^{20}$ & $3.7\times10^{20}$ & 30.6 & BETA \\ PKS\,1830-211 & 0.8849 & 3 & $0.0629\pm0.0008$ & $9.792\pm0.050$ & $1.8\times10^{21}$ & $4.8\times10^{20}$ & 155.7 & BETA \LL} \section{Discussion} \subsection{HI column density estimates} As noted earlier (\S1.3), we need to know both the harmonic mean spin temperature $T_{\rm s}$ and the covering factor $f$ to convert the observed optical depth of a 21\,cm HI absorption line to an HI column density $N_{\rm HI}$. Even without accurate measurements of $T_{\rm s}$ and $f$ for individual sources, however, we can make some general statements about the likely HI column densities along the sightlines where we detected intervening 21\,cm absorption lines with ASKAP. In particular, we can consider how many of these detected lines are likely to arise in gas with an HI column density above the DLA threshold of $2\times10^{20}$\,cm$^{-2}$. This in turn will allow us to estimate the DLA number density n($z$) in the redshift range probed by our ASKAP spectra. Since most of the background radio sources in our survey are compact on scales of 100\,mas or less (as discussed in \S2.3), we will assume for now that the covering factor $f\sim1$. This is a conservative assumption in terms of measuring the total number of DLA systems, since if $f <1$, then the actual HI column density will be higher than that estimated from the observed 21\,cm optical depth. We note that the absorber in PKS\,1229-02 is likely to have a covering factor $f<0.5$ as discussed in section \ref{sec:1229}. Figure \ref{fig:tspin} shows how the HI column density $N_{\rm HI}$ derived for our four detected lines varies with HI spin temperature. 
The values plotted assume a covering factor $f=1$, so the derived values of $N_{\rm HI}$ will be proportionally higher if $f<1$. We estimate $N_{\rm HI}$ from our detected absorption lines in two different ways: \begin{enumerate} \item Taking the commonly-used fiducial value of $T_{\rm s}=100$\,K, which assumes all of the absorbing gas is in the CNM (\citealt{wolfire03}) and so provides a lower limit. \item Using the empirical relation between $\tau$ and $N_{\rm HI}$ derived by \cite{braun12}. \end{enumerate} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{tspin_final2.pdf} \caption[]{HI column densities as a function of assumed spin temperature for the four detected absorption lines presented in \S4, derived from the observed optical depth of the line assuming a covering factor $f=1$. Vertical dashed lines at 100\,K and 300\,K show the approximate range of spin temperatures expected for the cold HI gas detected in absorption with ASKAP (based on the sensitivity plot shown in Figure 3). } \label{fig:tspin} \end{figure} The values of $N_{\rm HI}$ derived under these two assumptions are listed in columns 6 and 7 of Table \ref{tab:beta_det}. \subsection{The 21\,cm DLA number density at $z\sim0.6$} In terms of the number of DLA-like absorbers detected in our survey, the results in Table \ref{tab:beta_det} and Figure \ref{fig:tspin} appear reasonably consistent. Three absorbers (PKS\,0834-20, 1610-77 and 1830-211) have HI column densities above the DLA threshold both for a range of plausible values of $T_{\rm s}$ and $f$ and for the empirical \cite{braun12} relation. A fourth system, PKS\,1229-02, has a 21\,cm value closer to the DLA threshold if we assume $f\sim1$, but in this case we also have a direct HST Lyman-$\alpha$ measurement of $N_{\rm HI} = 5.6\times10^{20}$\,cm$^{-2}$ \citep{boisse98}, which confirms this as a DLA system (see \S\ref{sec:1229}). We therefore have four 21\,cm DLA detections on sightlines covering a total redshift interval $\Delta z = 21.37$. This yields an estimated DLA number density at redshift $z\sim0.6$ of $n(z) = 0.19\substack{+0.15 \\ -0.09}$, where the quoted uncertainties correspond to $1\sigma$\ Gaussian errors calculated for small event numbers \citep{gehrels1986}. As can be seen from Figure \ref{fig:dz}, we are mainly sensitive to gas with a spin temperature typical of the CNM ($T_{\rm s} \lesssim 300$\,K) and would be unlikely to detect DLA systems arising in warmer gas. Figure \ref{fig:ndla} compares our result with those of recent optical DLA studies. The only other 21\,cm point is the local ($z\sim0$) value from \cite{zwaan2005}. The \cite{neeleman16} and \cite{rao17} values are both from HST observations of damped Ly$\alpha$ lines - the \cite{rao17} study preselected targets that showed Mg\,II absorption, while \cite{neeleman16} used a smaller HST sample without Mg\,II preselection. The \cite{noterdaeme12} study at $z>1.7$ used a large ground-based sample of over 6000 SDSS quasars with DLA detections, while \cite{zafar13} analyzed a sample of 122 quasar spectra spanning the redshift range $1.5<z<5$. \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{ndla_new2.pdf} \caption[]{The number density of DLA absorbers, n($z$), as a function of redshift. The dark blue square at $z=0.6$ shows the 21\,cm value derived from our ASKAP pilot survey, as discussed in \S5.2 of this paper.
The other values plotted are the 21\,cm HI emission point from Zwaan et al.\ (2005), HST DLA values from Neeleman et al.\ (2016) and Rao et al.\ (2017), and ground-based DLA measurements from Noterdaeme et al.\ (2012) and Zafar et al.\ (2013). The dashed line shows the empirical n($z$) vs $z$ relation from Rao et al.\ (2017). } \label{fig:ndla} \end{figure*} It is notable that the ASKAP n($z$) point at $z\sim0.6$ lies above the \cite{neeleman16} and \cite{rao17} values measured at similar redshift, though at this stage the large error bars on the ASKAP value mean that our result is also (just) consistent with the general trend seen in optical DLA studies. Our high value for n($z$) is somewhat surprising, since we are only sensitive to the subset of DLAs with low $T_{\rm s}$, and so our n($z$) value might be expected to be a lower limit to the total value. As can be seen from Figure \ref{fig:tspin}, the HI column densities for our detected absorbers could only lie below the DLA limit if the covering factor $f = 1$ and the spin temperature dropped below 50--80\,K. This appears unlikely, given the known properties of the CNM and the paucity of observed systems with such low spin temperatures. Figure \ref{fig:ndla} therefore suggests a potentially significant discrepancy with some of the QSO DLA studies, possibly because we are picking up dusty systems that might not have been included in optical QSO surveys. A larger sample of 21\,cm HI absorption detections is needed to explore this question further. \ctable[ notespar, star, cap = {Spin temperature estimates for PKS\,1229-02}, caption={Spin temperature estimates for the $z=0.395$ absorption system towards PKS\,1229-02, using the Ly$\alpha$ $N_{\rm HI}$ value of $5.6\times10^{20}$\,cm$^{-2}$ from \cite{boisse98}.}, label={tab:spin}, ]{l l ll llll l ll cc} {\tnote[]{ } } {\FL \multicolumn{1}{l}{$T_{\rm s}$ (K)} & \multicolumn{1}{l}{Author} & \multicolumn{1}{l}{Notes} \\ \hline\hline 170 & \cite{boisse98} & Used \cite{brown79} HI data, which assumes $f=0.5$ & \\ 170 & \cite{chengalur00} & Recomputed from \cite{brown79} and \cite{briggs99} HI data & \\ 105$\ \pm 30$ & \cite{lane2000} & Used \cite{briggs99} HI data \\ \ 95$\ \pm15$ & \cite{kanekar14a} & Used \cite{brown79} HI data and $f=0.42$\\ 102$\ \pm12$ & This paper & Using ASKAP HI data, and assuming $f=0.42$ \LL } \subsection{The nature of the intervening galaxies} Since our 21\,cm intervening absorbers were selected without any optical pre-selection, identifying their host galaxies gives us a first look at the {\it kinds}\ of galaxies that are present in an HI-selected sample in the distant Universe. In contrast to the host galaxies of the high-redshift ($z>1.7$) DLA systems detected in ground-based optical QSO surveys, the lower-redshift host galaxies of our 21\,cm absorbers may be bright enough to detect in high-quality optical images even in the presence of a bright nearby quasar. For example, a typical spiral galaxy at $z\sim0.5$ is expected to have an I-band magnitude of 20--22 \citep{cantale16} and should be visible in good-quality images from an 8\,m-class telescope. Table \ref{tab:hosts} summarises the information currently available for the host galaxies of the four HI absorption systems detected in this study, and we discuss each of these absorption systems in turn below.
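The impact parameters and projected linear sizes quoted below (and in \S2.3) follow from the proper transverse scale at the redshift of the absorber. A minimal sketch of this conversion is given here; it assumes a flat $\Lambda$CDM cosmology with $H_0=70$\,km\,s$^{-1}$\,Mpc$^{-1}$ and $\Omega_{\rm m}=0.3$, since the precise cosmological parameters adopted are not restated in this paper (the choice below is made purely for illustration).
\begin{verbatim}
# Proper transverse scale at the absorber redshifts, used to convert angular
# impact parameters and VLBI component sizes into projected linear sizes.
# The cosmological parameters below are an assumption for illustration.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

for z in (0.395, 0.450, 0.591, 0.886):
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    print(f"z = {z:.3f}: {scale:.2f}")

# For example, at z = 0.450 the scale is ~5.8 kpc per arcsec, so the 1.7 arcsec
# offset of galaxy A from the PKS 1610-77 sightline corresponds to ~10 kpc.
\end{verbatim}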
\begin{figure} \centering \includegraphics[width=0.46\textwidth]{1610_gal2.pdf} \caption[]{Schematic representation of the positions of four galaxies (A, B, C, D) along the line of sight to the radio source PKS\,1610-77, based on data from Table 1 of \cite{courbin97}. The positions of the radio source and a foreground Galactic star are also shown. } \label{fig:1610_gal2} \end{figure} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{PKS1610-77_cA.pdf} \includegraphics[width=0.8\textwidth]{PKS1610-77_cB.pdf} \caption[]{Optical spectra of two of the galaxies along the line of sight to PKS\,1610-77, taken with the 8\,m Gemini-South telescope. (Top) Spectrum of Galaxy A at $z=0.45043$, showing strong NaD absorption. (Bottom) Spectrum of Galaxy B at $z=0.45035$, showing emission lines of H$\beta$, [OII] and [OIII]. } \label{fig:1610-77_spec} \end{figure*} \subsubsection{The intervening galaxy group towards PKS\,1610-77} The background QSO at $z=1.71$\ was studied in detail by \cite{courbin97}, who obtained an R-band optical image showing three galaxies within a few arcsec of the QSO position. A fourth galaxy was visible in their PSF-subtracted image. Figure \ref{fig:1610_gal2} shows a schematic view of the positions of the four galaxies found by \cite{courbin97}, relative to the radio-loud QSO targeted by ASKAP. \cite{courbin97} obtained detailed photometry of the region surrounding the radio-loud QSO, as well as optical spectra of the QSO and a nearby stellar object. They measured R-band magnitudes of 21.3, 21.3, 22.5 and 23.0 respectively for galaxies A, B, C and D, but were uncertain whether these galaxies were associated with PKS\,1610-77 itself or were foreground objects along the line of sight. They also noted that their QSO spectrum appeared highly reddened in the optical, possibly by absorbing objects along the line of sight. \cite{courbin97} also noted the presence of a strong, unidentified absorption feature at 8552\,\AA\ in their optical spectrum of PKS\,1610-77. We can now identify this feature as 5895.6\,\AA\ NaD absorption at a redshift of $z=0.4506$, i.e. the same redshift as the HI absorption line. NaD is a good tracer of cold neutral gas because of its low ionization potential \citep[e.g.][]{schwartz04}, so the identification of this optical line provides independent confirmation of the presence of cold neutral gas at the redshift of our ASKAP HI detection. In 2018, we obtained optical spectra of the two brightest galaxies in the PKS\,1610-77 field (galaxies A and B) with the 8\,m Gemini-South telescope (see Figure \ref{fig:1610-77_spec}). The redshift measured for both these galaxies ($z=0.4504$) is within 30\,km\,s$^{-1}$ of the ASKAP HI absorption redshift ($z=0.4503$), implying that the intervening HI gas is associated with this galaxy group. At $z=0.45$, the impact parameters of galaxies A, B, C and D are 10.0, 17.8, 24.2, and 5.9\,kpc respectively. The separation between galaxies A and D is only 2.7\,arcsec, corresponding to a projected linear distance of $\sim16$\,kpc. With no k-correction applied, the approximate R-band absolute magnitudes of the four galaxies are $-20.7$ (A), $-20.7$ (B), $-19.5$ (C) and $-19.0$ (D). At this stage, it remains unclear which of the four galaxies in the group is the host galaxy of the HI gas seen in absorption. The spectrum of Galaxy A shows strong NaD absorption, which is absent from the spectrum of Galaxy B, but the spectrum of Galaxy B shows strong H$\beta$ emission characteristic of ongoing star formation.
No spectrum is currently available for Galaxy D, which is the closest to the QSO line of sight, and further observations are needed to resolve this question. \subsubsection{The galaxy lens towards PKS\,1830-211} This is a well-studied system, though dust extinction (and a high density of foreground stars) within the Milky Way means that optical studies are extremely difficult at the low Galactic latitude of PKS\,1830-211 ($b=-5.7$\,deg). \cite{courbin02} and \cite{winn02} have identified the lensing galaxy as a face-on spiral, and \cite{koopmans05} showed that the HI absorption could be modelled by an almost face-on gaseous disk with a constant rotation velocity and a radially dependent 21\,cm optical depth. \cite{winn02} also estimated the lens magnification as a factor $\sim5.9$ for the NE component and $\sim3.9$ for the SW component of the lensed QSO. PKS\,1830-211 is one of the brightest radio sources in our sample. Even if we assume that all the radio emission in this source has been boosted by a factor of six, its unlensed counterpart would still have been bright enough to satisfy the selection criteria in \S2.2 of this paper. We therefore retain PKS\,1830-211 in our analysis despite the likelihood that its observed radio flux density has been increased by lensing. \subsubsection{The intervening system towards PKS\,0834-20} We have not yet identified a host galaxy for the HI absorption system at $z=0.591$ along the line of sight to PKS\,0834-20. A Pan-STARRS \citep{magnier13} image of the field shows faint red objects around 3.6\,arcsec west and 4.8\,arcsec south-east of the QSO. If these are galaxies at the same redshift as the HI line, the impact parameters would be roughly 25\,kpc and 32\,kpc respectively. It is currently unclear whether either of these objects is associated with the HI gas seen in absorption. The only additional information we have at this stage comes from the optical spectrum of the background quasar. The top panel of Figure \ref{fig:0834_NTT} shows a spectrum of PKS\,0834-20 taken in 2018 with the ESO NTT, and the lower panel shows an expanded view of the blue region of this spectrum. Weak metal absorption lines of Fe\,II and Mg\,II are seen at a similar redshift to the intervening HI absorption at $z=0.591$. \ctable[ notespar, star, caption={Potential host galaxies of the intervening HI absorption systems detected in this study}, label={tab:hosts}, ]{l l ll llll l ll cc} {\tnote[]{ } } {\FL \multicolumn{1}{l}{Radio source} & \multicolumn{1}{l}{$z {\small(\rm HI)}$} & \multicolumn{1}{l}{Potential} & \multicolumn{1}{l}{Mag.} & \multicolumn{1}{l}{Impact} & \multicolumn{1}{l}{Notes} \\ & & \multicolumn{1}{l}{host} & & \multicolumn{1}{l}{parameter} \\ \hline\hline PKS\,0834-20 & 0.591 & Unknown & .. & .. & No deep optical image available \\ PKS\,1229-02 & 0.395 & Spiral galaxy & .. & 2\,arcsec (11\,kpc) & See \cite{briggs85,kronberg92} \\ PKS\,1610-77 & 0.450 & Galaxy in group & 21.3 (R) & 1.7\,arcsec (10\,kpc) & If galaxy A, see \cite{courbin97}\\ PKS\,1830-211 & 0.886 & Spiral galaxy & .. &$\ll1$\,arcsec ($\ll7$\,kpc) & See \cite{winn02} \LL } \begin{figure} \centering \includegraphics[width=0.48\textwidth]{pks0834-20_NTT.pdf} \includegraphics[width=0.5\textwidth]{PKS0834-20-absorption3.pdf} \caption[]{(Top) Optical spectrum of the quasar PKS\,0834-20, taken with the 3.5\,m ESO NTT in March 2018, with the main spectral lines labelled. 
(Bottom) Close-up of the region blueward of Ly$\alpha$, showing metal absorption lines of FeII and MgII at $z=0.586$, close to the 21\,cm HI absorption redshift of $z=0.591$. } \label{fig:0834_NTT} \end{figure} \subsubsection{The intervening galaxy towards PKS\,1229-02} \cite{steidel94} obtained infrared I and K-band images of the PKS\,1229-02 field. After subtracting the QSO light, they found two faint objects likely to be foreground galaxies and tentatively identified the $z=0.395$ absorption system with a spiral galaxy at an impact parameter of $\sim7$\,kpc from the background QSO. This is consistent with the findings of \cite{kronberg92}, who carried out a detailed multi-frequency analysis of the rotation measure variations along the radio jet in PKS\,1229-02. They found that their results were consistent with the presence of an intervening spiral galaxy with an inclination angle of $\sim60^\circ$\ and an impact parameter of 2\,arcsec (around 10\,kpc at $z=0.395$). Hamanowicz et al. (2019, in preparation) have recently observed the PKS\,1229-02 field with the MUSE integral-field spectrograph on the ESO VLT. They measure a star-formation rate of $0.67\pm0.09$\,M$_\odot$\,yr$^{-1}$ for the intervening galaxy, and find that it has sub-solar metallicity (A. Hamanowicz, private communication). \section{Conclusions}\label{sec:conclusions} In this pilot ASKAP study of the sightlines towards 53 bright, compact southern radio sources, we detected four intervening 21\,cm HI absorption lines at redshifts ranging from $z=0.395$ to $z=0.886$. Two of these (towards PKS\,1229-02 and PKS\,1830-211) are re-detections of lines from the published literature, and two (towards PKS\,0834-20 and PKS\,1610-77) are detected here for the first time. We used these detections to make a new estimate of the DLA number density at redshift $z\sim0.6$, $n(z)=0.19\substack{+0.15 \\ -0.09}$. This value lies above the general trend seen in optical and ultraviolet studies of QSO DLA systems \citep[e.g.][]{rao17,zafar13,noterdaeme12}, as can be seen from Figure \ref{fig:ndla}. From the small sample observed here, it appears that our detected HI absorption lines arise mainly in the disks of spiral or late-type galaxies, with impact parameters typically less than 10--15\,kpc. At least one of the background radio-loud QSOs (PKS\,1610-77) is highly reddened (presumably by the intervening galaxy group identified by \cite{courbin97} and detected here in HI) and would probably not have been selected in an optical QSO survey. Our pilot results are encouraging for two reasons. Firstly, the redshift interval covered by a single ASKAP spectrum, $\Delta z\sim0.6$, is large enough to carry out spectroscopically untargeted 21\,cm searches for intervening HI absorption systems - a significant advance over earlier radio studies \citep[e.g.][]{lane98} where optical preselection (often based on the Mg\,II absorption line) was required. Secondly, the 700-1000\,MHz ASKAP band is free from terrestrial radio interference (in contrast to the case at most major radio observatories around the world), meaning that the full redshift interval can be searched for 21\,cm absorption lines. While this pilot survey has focused on observations of individual bright radio sources, the completed 36-antenna ASKAP telescope will have sufficient sensitivity to search for HI absorption simultaneously on sightlines to over 100 radio sources across a 30\,deg$^2$ field of view \citep{johnston08}. 
The full all-sky FLASH dataset will cover a total redshift path length $\Delta z \sim 50,000.$ This opens up the exciting possibility of detecting several hundred new 21\,cm absorption systems out to $z\sim1$, allowing us to improve our knowledge of the amount and physical state of neutral hydrogen in individual galaxies in the distant Universe. \section*{Data availability} The data underlying this article will be shared on reasonable request to the corresponding author. \section*{Acknowledgements} We acknowledge the financial support of the Australian Research Council through grant CE170100013 (ASTRO 3D). The initial stages of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics (CAASTRO), through grant CE110001020. The Australian SKA Pathfinder is part of the Australia Telescope National Facility which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Centre. Establishment of ASKAP, the Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. We gratefully acknowledge and thank the ASKAP commissioning team for the use of BETA and ASKAP-12 during the commissioning phase, which allowed us to carry out the observations described in this paper. Based on observations collected at the European Southern Observatory under ESO program 0100.A-0588(A). Based on observations obtained under program ID GS-2017B-Q-63 at the Gemini Observatory which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'ia e Innovaci\'on Productiva (Argentina), Minist\'erio da Ci\^encia, Tecnologia e Inova\c c\~ao (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We thank C\'eline P\'eroux for helpful comments on an earlier draft of this paper, and Filippo Maccagni for carrying out one of the ESO NTT observing runs. EMS, VAM and JRA also thank the Munich Institute for Astro- and Particle Physics (MIAPP) for supporting our attendance at the 2019 MIAPP program on `Galaxy Evolution in a New Era of HI Surveys', which provided a stimulating venue for discussions and allowed us to complete this paper. \bibliographystyle{mnras}
\subsubsection*{Lagrange multiplier terms} In Section~\ref{secno:FrameB} we described a formulation of Einstein-Cartan gravitation with one field $\varpi$ which is defined over the frame bundle of spacetime. The idea of the model proposed by Hélein and Vey in \cite{LFB} is to forget any a priori frame bundle structure and simply study the field $\varpi$ defined over a structure-less $10$-dimensional manifold. Indeed, turning the constraints~(\ref{normalisation},\ref{equivconsnorm}) around, they can be seen as defining a \enquote{generalised frame bundle structure} from $\varpi$, as we define below. A similar mechanism assuming only part of the fibration structure is studied in~\cite{ObsSp}. \subsubsection*{Generalised frame bundles} Given a $\mathfrak p$-valued coframe $\varpi=\omega\oplus\alpha$, \eqref{normalisation} can be understood as defining the infinitesimal action of $\lor$ : \[ {\bar{\h}} := \varpi^{-1}({\mathfrak h}\oplus 0) \] The $\omega$ component of \eqref{equivconsnorm} then implies that the vectors ${\bar{\h}}$ form a representation of the Lie algebra $\lor$ (as explained in detail in Appendix~\ref{annCartanConnForm}). Equation~\eqref{equivconsnorm} then means that $\varpi$ is equivariant under this Lie algebra action. In the case where this Lie algebra action corresponds to the infinitesimal action of $\lor$ on a $\mathfrak{L}$-principal bundle fibration (over some $4$-manifold), $\alpha$ is a solder form (defined in Appendix~\ref{annFrameB}) so that the $\mathfrak{L}$-principal bundle can be identified with a frame bundle. The principal bundle structure is in a sense a global structure so that it cannot be entirely characterised by local equations like~\eqref{eqno:omalcons}. But up to global topological aspects (to which Appendix~\ref{annMC} is dedicated), these equations encapsulate the \enquote{local structure} of the frame bundle with a metric connection. It is this insight which motivates the generalisation of the Einstein-Cartan theory formulated over a \enquote{blank} $10$-manifold. We call a $\mathfrak p$-valued coframe $\varpi$ which satisfies \begin{equation}\tag{\ref{equivconsnorm}} \mathcal L_{\bar{\h}}\varpi + \ad_{\mathfrak h} \varpi = 0 \end{equation} a \emph{generalised Cartan connection} (simply called Cartan connection in~\cite{DiffGeoCartan}), or a \emph{Cartan $1$-form} for short. We say that the manifold $\mathcal P$ equipped with a generalised Cartan connection has the structure of a \emph{generalised frame bundle}, which is to be understood as an abbreviation for \enquote{generalised frame bundle with connection}. As $\varpi$ defines an action of $\lor$, we can define equivariant sections on a generalised frame bundle. In particular, we will use the following notions, which generalise the usual ones on a standard frame bundle and more generally make sense on foliated manifolds. A \emph{basic vector field} is a $\lor$-equivariant map with values in $\Mink$. More generally, a \emph{(basic) tensor field} is a $\lor$-equivariant map with values in some tensor product $\Mink^{\otimes m} \otimes \Mink^{*\otimes n}$.\\ A \emph{basic differential form} is a $\lor$-equivariant differential form with purely horizontal components, that is to say along products of $\alpha^a$. One defines similarly basic differential forms with values in a vector space, or in an equivariant vector bundle. See Appendix~\ref{annFrameB} for more detail. 
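Spelled out (this is just the definition above restated with the notation of~\eqref{equivconsnorm}), a basic tensor field $v$ with values in $\Mink^{\otimes m} \otimes \Mink^{*\otimes n}$ is characterised by
\[
\left( \mathcal L_{{\bar{\h}}_i} + {\mathfrak h}_i \cdot \right) v = 0
\]
for every element ${\mathfrak h}_i$ of a basis of $\lor$, with ${\mathfrak h}_i\cdot$ denoting the action on the tensor product; a basic differential form satisfies the analogous condition and is in addition annihilated by contraction with each of the vectors ${\bar{\h}}_i$.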
One example which will be omnipresent throughout the article is the curvature form associated to $\varpi$ : \begin{equation*} \Omega = \d\varpi + \fwb\varpi\varpi = \frac12\Omega_{ab}\alpha^a\wedge\alpha^b \end{equation*} which is an equivariant $\mathfrak p$-valued basic $2$-form (equivariant here means that $\mathfrak p$ is used as an equivariant trivial vector bundle). The generalised frame bundle structure is the one we wish to obtain from the Euler-Lagrange equations. \subsubsection*{Lagrange multiplier terms} We motivated dropping the constraint \eqref{normalisation} so as to take it as a definition of the fields ${\bar{\h}}$ instead. Constraint~\eqref{eqno:nondegen} is an open (and algebraic) condition so we will simply keep it as restricting the configuration space to an open subset. Our manipulations will actually be meaningful on all of $T^*\mathcal P\otimes \mathfrak p$ including degenerate points, with due adjustments since $T\mathcal P$ and $\mathfrak p$ are no longer identified. However Equation \eqref{equivconsnorm} is different. It is both a differential constraint and a closed constraint. We want to incorporate it into the Lagrangian by means of Lagrange multipliers. For this, the convenient formulation of \eqref{equivconsnorm} is \begin{equation*} \d \varpi^A + \fwb\varpi\varpi^A = \frac12\Omega^A_{bc}\alpha^b\wedge\alpha^c \end{equation*} with $\Omega_{bc}^A$ arbitrary (not necessarily constant) coefficients which are antisymmetric in $b,c$ (the derivation is given in Appendix~\ref{annCartanConnForm}). As the coefficients $\Omega^A_{bc}$ are arbitrary, the equation only means that the components along $\omega\wedge\omega$ and along $\alpha\wedge\omega$ vanish. According to Appendix~\ref{anndual}, if we use again the notation $\varpi^{(8)}_{BC}$ for the $8$-form dual to $\varpi^B\wedge\varpi^C$, we can rewrite the equation as \begin{subequations}\label{seqno:equivvarpi8} \begin{align} \left( \d\varpi + \fwb\varpi\varpi\right) \wedge \varpi^{(8)}_{jk} = 0 \label{equivconsij}\\ \left( \d\varpi + \fwb\varpi\varpi\right) \wedge \varpi^{(8)}_{bk} = 0 \label{equivconsai} \end{align} \end{subequations} with the index $b$ corresponding to a basis of $\mathfrak m$ and indices $j,k$ corresponding to a basis of $\lor$. In this form, it is straightforward to impose the conditions~\eqref{seqno:equivvarpi8} using Lagrange multipliers : we add to the theory free fields $P_A^{jk}$ and $P_A^{bk}$ and consider the following term to add to the Lagrangian : \begin{equation}\label{multvirg} \left( \d\varpi + \fwb\varpi\varpi\right)^A \wedge \frac12 P^{jk}_A \varpi^{(8)}_{jk} + \left( \d\varpi + \fwb\varpi\varpi\right)^A \wedge P^{bk}_A \varpi^{(8)}_{bk} \end{equation} which would impose the constraint~\eqref{seqno:equivvarpi8} through the equations of motion corresponding to variations of $P^{jk}_A$ and $P^{bk}_A$. Note the fundamental difference with \emph{holonomic} Lagrange multipliers which would be coupled to a term such as $f(\varpi,v)\varpi^{(10)}$. The Lagrange multipliers we use serve to impose an (exterior) differential constraint. Such Lagrange multipliers are presented in more detail in Section~\ref{LagMult}, which explains how to use the variational terms derived from them. The $\varpi^{(8)}_{BC}$ forms being antisymmetric in $BC$, only the antisymmetric part of $P_A^{BC}$ is involved in the term \eqref{multvirg}. 
We thus constrain the multipliers $P_A^{BC}$ (with $BC=bk$ or $BC=jk$) to be antisymmetric in $BC$, effectively using the $8$-forms \[ \frac12 P_A^{BC}\left( \d\varpi + \fwb\varpi\varpi\right)^A \wedge \varpi^{(8)}_{BC} \] as Lagrange multiplier fields. If one wanted to impose a torsion-freeness constraint on the connection, one could in a similar fashion use a free $p^{bc}_a$ term, as is for example done in three dimensions in~\cite{D3ECD}. \subsubsection*{The Lagrangian} The Lagrangian~\eqref{FrameLag} is \[ \mathscr{L}[\varpi] = \Omega^i \rho_{i,d}^b \eta^{dc}\wedge (\omega\oplus\alpha)^{(8)}_{bc} = \Omega^i \rho_{i,d}^b \eta^{dc}\wedge \varpi^{(8)}_{bc} \] so that it takes the form \[ \left( \d\varpi + \fwb\varpi\varpi \right)^i \rho_{i,d}^b\eta^{dc} \wedge \varpi^{(8)}_{bc} \] Note how as a linear function of the curvature $2$-form it is very similar to the terms~\eqref{multvirg}. Let $\mathcal P$ be a $10$-manifold and $\mathfrak p\simeq \lor\ltimes\mathfrak m$ the Poincaré Lie algebra. Let us denote by $\Iso(T\tot,\p) \subset T^*\mathcal P\otimes \mathfrak p$ the subbundle of \emph{$\mathfrak p$-valued coframes}. We consider the following configuration bundle over $\mathcal P$ : \begin{equation*} Q = \underbrace{\Iso(T\tot,\p)}_{\varpi^A} \times_\mathcal P \underbrace{\left( \mathfrak m\wedge\lor \oplus \ExT^2\lor \right)\otimes\mathfrak p^*}_{P^{bk}_A,\, P^{jk}_A} \end{equation*} with $\mathfrak m\wedge\lor \subset \ExT^2\mathfrak p$ the image of $\mathfrak m\otimes\lor \subset \mathfrak p^{\otimes 2}$. We call $\lambda$ the canonical $\mathfrak p$-valued $1$-form on $T^*\mathcal P\otimes\mathfrak p$. On $\Iso(T\mathcal P,\mathfrak p)$ it can be identified with a solder form (defined in Appendix~\ref{annFrameB}). We will use the notation $\lambda^{(10-k)}_{A_1\cdots A_k}$ for the dual $(10-k)$-forms defined according to Appendix~\ref{anndual}. We also use the notations $p_A^{bk}$ and $p_A^{jk}$ for the (trivial) fibre coordinates of the component in $ \left( \mathfrak m\wedge\lor \oplus \ExT^2\lor \right)\otimes\mathfrak p^*$, in order to establish a clear distinction with the corresponding components $P_A^{bk}$ and $P_A^{jk}$ of \emph{sections} of $Q$. The Lagrangian form gathering both the lifted Einstein-Cartan Lagrangian and the Lagrange multiplier fields is \begin{multline}\label{eqno:ECLagomp} \mathscr{L}[\varpi,P] = \rho_{i,d}^b\eta^{dc}\left( \d\varpi + \fwb\varpi\varpi \right)^i \wedge \varpi^{(8)}_{bc} \\ + \frac12 P^{jk}_A \left( \d\varpi + \fwb\varpi\varpi\right)^A \wedge \varpi^{(8)}_{jk} + P_A^{bk} \left( \d\varpi + \fwb\varpi\varpi\right)^A \wedge \varpi^{(8)}_{bk} \end{multline} To obtain a local expression in terms of coordinates, let $(z^I)$ be a local system of coordinates on $\mathcal P$. 
It induces local coordinates $\lambda^A_I$ on $T^*\mathcal P\otimes\mathfrak p$ and fibre coordinates on the $1$-jet bundle $\mathcal{J}^1(T^*\mathcal P\otimes \mathfrak p)$ : we write them $v^A_{I,J}$, defined so that \[ v^A_{I,J}(\varpi) = \partial_J \lambda^A_I(\varpi) \] Then the Lagrangian \eqref{eqno:ECLagomp} can be expressed as a $10$-form on $\mathcal{J}^1(Q)$ in terms of the local coordinates (there is no $1$st order contribution from $P$) : \begin{multline}\label{eqno:LagvIJ} \mathscr{L} = \rho_{i,d}^b\eta^{dc}\left( v_{I,J}\d z^J\wedge\d z^I + \fwb\lambda\lambda \right)^i \wedge \lhuit{bc} \\ + \frac12 p^{jk}_A \left( v_{I,J}\d z^J\wedge\d z^I + \fwb\lambda\lambda\right)^A \wedge \lambda^{(8)}_{jk} + p_A^{bk} \left( v_{I,J}\d z^J\wedge\d z^I + \fwb\lambda\lambda\right)^A \wedge \lambda^{(8)}_{bk} \end{multline} In this form, there is no constraint extraneous to the Lagrangian other than the open constraint of nondegeneracy of $\varpi$. Hence we can use the usual Legendre transform formula~(\ref{LTform}) (described in Section~\ref{annLegTrans}) to compute the Poincaré-Cartan form. If we define \begin{equation}\label{EWCmultcons} p^{bc}_A = 2\delta^i_A \rho_{i,d}^b\eta^{dc} \end{equation} then $\mathscr{L}$ takes the concise form \begin{equation} \mathscr{L} = \frac12 p_A^{BC} \left( v_{I,J}\d z^J\wedge\d z^I + \fwb\lambda\lambda\right)^A \wedge \lambda^{(8)}_{BC} \end{equation} We can thus consider that we have a field $p_A = p_A^{BC}\lhuit{BC}$ which is subject to the holonomic constraint \eqref{EWCmultcons}. \subsubsection*{The Poincaré-Cartan form} Introduce the following notation : for a $p$-form $u$ on $T^*\mathcal P\otimes\mathfrak p$ with values in a $\mathfrak p$-module we will write \begin{equation} \d^\lambda\! u:=\d u + \lambda \wedge u \end{equation} with $\lambda\wedge u$ including the action of $\mathfrak p$ (in this paper it will mainly be about products of adjoint and coadjoint representations of $\lor\simeq \mathfrak p/\mathfrak m$). Define also \begin{equation} \D\lambda := \d\lambda + \fwb\lambda\lambda = \d^\lambda\!\lambda -\fwb\lambda\lambda \end{equation} The operator $\d^\lambda\!$ is meant to model a covariant differential while $\D\lambda$ models a universal curvature 2-form. They satisfy the expected equations (proved in Appendix~\ref{anncalc}) \begin{align} \d^\lambda\!\dl u &= \D\lambda\wedge u \label{dldls}\\ \d^\lambda\!\D\lambda &= 0 \label{annBianchi}\\ \d^\lambda\!(u^A \wedge v_A) &= (\d^\lambda\! u^A) \wedge v_A + (-1)^{|u|}u^A \wedge \d^\lambda\! v_A \label{eq:dlcontr} \end{align} for $u^A$ and $v_A$ homogeneous differential forms with values in dual $\mathfrak p$-modules. To compute the Poincaré-Cartan form we will use the following formula from Section~\ref{annLegTrans}: \[ \partial_{v^A_{I,J}}\lrcorner\,\d \left( \mathscr{L} + \pi^J_B \wedge \chi^B_J \right) = 0 \mod [\theta^A_J] \] with $\pi^J_B$ $9$-form fields on $\mathcal P$ and $\chi^A_J = \d\lambda^A_J - v^A_{I,J}\d z^I$. Using this formula rather than Formula~\eqref{LTform} will save us some back and forth between the coframes $\d z^I$ and $\lambda^A$. 
We determine the value of $\pi^J_B$ as a function of $p_A^{BC}$ : \begin{equation*} \begin{aligned} \partial_{v^A_{I,J}}\lrcorner\,\d \left( \mathscr{L} + \pi^J_B \wedge \chi^B_J \right) &= \partial_{v^A_{I,J}}\lrcorner\,\d \left( \frac12 p_D^{BC} \left( v_{K,L}\d z^L\wedge\d z^K + \fwb\lambda\lambda\right)^D \wedge \lambda^{(8)}_{BC} + \pi^K_D \wedge \chi^D_K \right)\\ % &= \frac12 p_A^{BC} \left( \d z^J\wedge\d z^I - \d z^I\wedge\d z^J\right) \wedge \lambda^{(8)}_{BC}\\ &\qquad+ \left( \partial_{v^A_{I,J}}\lrcorner\,\d\pi^J_D \right) \wedge \chi^D_J - \pi^K_D \wedge \partial_{v^A_{I,J}}\lrcorner\, \left( - \d v^D_{K,L}\wedge \d z^L \right) \\ % &= p_A^{BC} \d z^J\wedge\d z^I \wedge \lambda^{(8)}_{BC} + \pi^I_A \wedge \d z^J + \left( \partial_{v^A_{I,J}}\lrcorner\,\d\pi^J_D \right) \wedge \chi^D_J\\ &= \d z^J\wedge (- p_A^{BC} \d z^I \wedge \lhuit{BC} + \pi^I_A) + \left( \partial_{v^A_{I,J}}\lrcorner\,\d\pi^J_D \right) \wedge \chi^D_J \end{aligned} \end{equation*} As the term $\left( \partial_{v^A_{I,J}}\lrcorner\,\d\pi^J_D \right) \wedge \chi^D_J$ is a contact term, the momentum forms $\pi^I_A$ are defined by \[ \d z^J\wedge (- p_A^{BC} \d z^I \wedge \lhuit{BC} + \pi^I_A) = 0 \] Since the $\d z^J$ form a basis of $1$-forms on $\mathcal P$, we conclude that the momentum forms are directly parameterised by the Lagrange multipliers $p_A^{BC}$ : \begin{equation*} \pi^I_A = p_A^{BC} \d z^I \wedge \lhuit{BC} \end{equation*} and the corresponding contact term in the Poincaré-Cartan form is \begin{equation} p_A^{BC} \d z^I \wedge \lhuit{BC} \wedge \left( \d\lambda^A_I - v^A_{J,I}\d z^J \right) = p_A^{BC} \wedge \lhuit{BC} \wedge \left( \d\lambda^A - v^A_{J,I}\d z^I \wedge \d z^J \right) \end{equation} The Poincaré-Cartan form is \begin{equation*}\begin{aligned} \mathscr{L} + p_A^{BC} \wedge \lhuit{BC} \wedge \left( \d\lambda^A - v^A_{J,I}\d z^I \wedge \d z^J \right) &= \frac12 p_A^{BC} \left( v_{I,J}\d z^J\wedge\d z^I + \fwb\lambda\lambda\right)^A \wedge \lambda^{(8)}_{BC}\\ &\quad + p_A^{BC} \wedge \lhuit{BC} \wedge \left( \d\lambda^A - v^A_{J,I}\d z^I \wedge \d z^J \right)\\ &= \frac12 p^{BC}_A (\d\lambda + \fwb\lambda\lambda)^A \wedge \lhuit{BC} \end{aligned}\end{equation*} We write the Poincaré-Cartan form as follows : \begin{equation}\label{PCEWC}\boxed{ \overline{\thewc} = \frac12 p^{BC}_A\D\lambda^A \wedge \lhuit{BC} }\end{equation} It goes with the holonomic constraint~\eqref{EWCmultcons} on the $p_A^{bc}$. We identify two components : \begin{align} \Theta_{EC} &= \frac12 2\rho_{i,d}^b\eta^{dc}\D\lambda^i \wedge \lhuit{bc}\\ \thcons_{EC} &= \frac12 p^{jk}_A\D\lambda^A \wedge \lhuit{jk} + p_A^{bk} \D\lambda^A \wedge \lhuit{bk} \end{align} Note that as \[ \fwb\lambda\lambda^i \wedge \lhuit{ab} = \frac12 c^i_{DE} \lambda^D\wedge\lambda^E \wedge \lhuit{ab} = c^i_{ab} \lambda^{(10)} = 0 \] the form $\Theta_{EC}$ can also be expressed with $\d \lambda$ replacing $\D\lambda$. However the expression $\D\lambda$ has an interpretation as the curvature; furthermore, if we generalise from $\mathfrak p \simeq \lor\ltimes \mathfrak m$ to other Lie algebras then $\fwb\lambda\lambda^i\wedge\lhuit{ab}$ may not vanish. We introduce the notation $\nonbc{BC}$ for pairs of indices of $\mathfrak p$ \emph{except pairs which correspond to $\mathfrak m\otimes\mathfrak m$} (i.e. only pairs which correspond to $(\lor\otimes\lor) \oplus (\mathfrak m\otimes\lor)\oplus(\lor\otimes\mathfrak m)$). 
The Poincaré-Cartan form can then be expressed as \begin{equation} \overline{\thewc} = \underbrace{ \delta^i_A \rho^a_{i,c}\eta^{bc}\D\lambda^A \wedge \lhuit{ab} }_{\Theta_{EC}} + \underbrace{ \frac12 p^{\nonbc{BC}}_A \D\lambda^A \wedge \lhuit{\nonbc{BC}} }_{\thcons_{EC}} \end{equation} The Poincaré-Cartan form takes values in the \emph{affine dual} of $\mathcal{J}^1(Q)$ (described in Section~\ref{ssecno:LegTrans}). It is the fibre bundle of $10$-forms on $Q$ that have a vanishing contraction with all $2$-vectors of $Q$ that are purely vertical with respect to the fibration above the source space $\mathcal P$. This affine dual is usually written : \[ \ExT^{10}_1 T^*Q \] Note that the coefficients of the Poincaré-Cartan form have no dependency on the first order component of the $1$-jet, so that the only dependency comes from the factor \[\D\lambda^A=\d\lambda^A+\fwb\lambda\lambda^A\] We can therefore restrict the momentum space from the whole affine dual to $Q$ itself, on which $\overline{\thewc}$ can still be expressed as a $10$-form, along with the supplementary constraint~(\ref{EWCmultcons}) (for details on the equivalence between the Lagrangian and the Hamiltonian formulations see the last section of \cite{ConsFT} on affine Lagrangians). One can also say that $\frac12 p^{\nonbc{BC}}_A\lhuit{\nonbc{BC}}$ corresponds to specific elements of the linear dual space of $\mathcal{J}^1(Q)$ of the form $\D\lambda^A \wedge \frac12 p_A^{\nonbc{BC}} \lhuit {\nonbc{BC}}$, according to~(\ref{multvirg}). In fact, the Lagrange multiplier terms are derived in \cite{LFB} as free momenta obtained by a Legendre transformation under constraint. One could even argue that \eqref{PCEWC} is the natural formulation on $\ExT^8 T^*\mathcal P \otimes \mathfrak p^* \times_\mathcal P Q$ of the Lagrangian given in \eqref{eqno:ECLagomp}, rather than \eqref{eqno:LagvIJ}. We chose to go for the \enquote{naive} formulation of the Lagrangian on the $1$-jet bundle to illustrate the systematic approach to the Legendre transformation. We derive the variational equations in the next section. \subsection{Exact terms in the Euler-Lagrange equations} Given a Lagrangian involving general Lagrange multipliers, we explained in Section~\ref{LagMult} how it is possible, using suitable vector fields, to obtain Euler-Lagrange equations which have all the dependency in Lagrange multipliers gathered in an exact term. In our case, we have Lagrange multipliers $p_A^{\nonbc{BC}}$ involved in a Lagrangian term \[ p_A^{\nonbc{BC}} \D\lambda^A \vhuit{\nonbc{BC}} \] as well as Lagrange multipliers $\kappa^{\alpha i}$ involved in a term \[ \frac i 2 \left( \bar\kappa^i \d^\lambda\! s - \d^\lambda\!{\bar s}\, \kappa^i \right) \wedge \lneuf i \] We want to study solutions to Equations~(\ref{eqno:ELtotal}). For this we want to get rid of the non-physical Lagrange multiplier fields. The solution is to gather them in an exact term and to make it vanish by integration. The integration cannot be performed over the total space $\mathcal P$ which is not assumed to be compact. We need to find suitable compact submanifolds on which the Euler-Lagrange Equations~(\ref{eqno:ELepsilon},\ref{eqno:ELalpha}) can be \enquote{restricted} in a meaningful way while preserving the exactness of the Lagrange multiplier term. But before this, let us find those suitable vector fields. We want vertical variations of the field which preserve the constraints~(\ref{eqno:ELconsEWC},\ref{eqno:ELconsDira}) on-shell. 
\subsubsection{Infinitesimal variations preserving the constraints} Once a frame bundle structure is given by a field $\phi=(\varpi,P,\psi,K) : \mathcal P\to Q$ satisfying the constraint equation~(\ref{eqno:ELconsEWC}), it is natural to consider variations of the structure corresponding to variations of the tetrad or of the connection in the usual spacetime formalism. These are given by \emph{equivariant} variations of $\omega^i=\phi^*\lambda^i$ and $\alpha^a = \phi^*\lambda^a$ as we will show. In a similar way, we will consider equivariant variations of the spinor field $\psi^\alpha = \phi^* s^\alpha$. Note however the difference with the situation described in Section~\ref{LagMult} : equivariance is formulated using the principal bundle structure, hence it only makes sense on $\mathcal P$ and is a notion dependent on the field $\phi$. Instead of assuming from the start that our variations are equivariant, we will derive this condition. \subsubsection*{Variations of the coframe} Let us start with variations of the coframe $\varpi^A$. The field $\phi$ provides us with a nondegenerate equivariant $1$-form : the coframe itself $\varpi^A = \phi^*\lambda^A$. The variation of $\varpi$ will be given by a vertical vector field $X$ on the image of $\varpi$ : \[ X\in \Gamma \left( \mathcal P,\varpi^*\left[ V(\Iso(T\tot,\p)) \right] \right) \] Using here again the isomorphism \[ V(T^*\mathcal P\otimes \eucl) \simeq T^*\mathcal P\otimes \eucl \times_\mathcal P T^*\mathcal P\otimes \eucl \] a vertical variation of $\varpi$ is equivalent to a $1$-form $\epsilon^A\in\Omega^1(\mathcal P,\eucl)$. For convenience we will use Lie derivatives $\mathcal L_X$, but they will not depend on the chosen $1$st order extension of $X$ as we will work on the image of $\phi$. The main property we will use is \begin{equation} \phi^*{\mathcal L_X \lambda}^A = \epsilon^A \end{equation} Let us decompose $\epsilon^A$ into a $\soq$-valued component $\tau^i\in\Omega^1(\mathcal P,\so_4)$ and an $\mathbb{R}^4$-valued component $\beta^a$ : \[ \epsilon = \tau\oplus\beta \] To study the action of $X$ on the constraints, we will have it act on $\left( \d\lambda +\fwb\lambda\lambda \right)\wedge\lhuit{BC}$. 
We can then compute \begin{align*} \phi^*\mathcal L_X \left( \d\lambda+\fwb\lambda\lambda \right)^i &= \d\phi^*\mathcal L_X \lambda^i + 2\phi^*\fwb{\mathcal L_X\lambda}\lambda^i = \d\tau^i + \wb\omega\tau^i\\ \phi^*\mathcal L_X \left( \d\lambda+\fwb\lambda\lambda \right)^a &= \d\phi^*\mathcal L_X \lambda^a + 2\phi^* \fwb{\mathcal L_X\lambda}\lambda^a = \d\beta^a + \wb\omega\beta^a + \wb\tau\alpha^a \end{align*} and \begin{equation} \phi^*\mathcal L_X \lhuit{BC} = \phi^*\left( (\mathcal L_X\lambda^D)\wedge \lsept{BCD}\right) = \epsilon^D\wedge \vsept{BCD} \end{equation} which combine to give \begin{subequations}\label{eqno:phiLieX} \begin{align} \phi^*\mathcal L_X \left( \left( \d\lambda+\fwb\lambda\lambda \right)^i \wedge \lhuit{\nonbc{BC}} \right) &= \left( \d\tau^i + \wb\omega\tau^i \right) \wedge \lhuit{\nonbc{BC}} + \Omega^i\wedge \epsilon^D\wedge \vsept{\nonbc{BC}D} \label{eqno:varcour}\\ \phi^*\mathcal L_X \left( \left( \d\lambda+\fwb\lambda\lambda \right)^a \wedge \lhuit{\nonbc{BC}} \right) &= \left( \d\beta^a + \wb\omega\beta^a + \wb\tau\alpha^a \right) \wedge \lhuit{\nonbc{BC}} + \Omega^a\wedge \epsilon^D\wedge \vsept{\nonbc{BC}D} \label{eqno:vartor} \end{align} \end{subequations} In order for these terms to vanish under the constraint equation~\eqref{eqno:ELconsEWC}, we will require the following three conditions on $\epsilon$ : \begin{subequations} \begin{enumerate} \item $\epsilon$ to be purely horizontal : $\epsilon^A = \epsilon^A_b\alpha^b$. This will prevent $\vsept{\nonbc{BC}D}$ from having purely horizontal components (of type $\lhuit{bc}$) so that the term \[ \Omega^A\wedge \epsilon^D\wedge \vsept{\nonbc{BC}D} \] necessarily vanishes. \item There exist coefficients $r^i_{bc}$ such that \begin{equation} \d\tau^i + \wb\omega\tau^i = \frac12 r^i_{bc}\alpha^b \wedge \alpha^c \label{eqno:rab} \end{equation} so that the term $\left( \d\tau^i + \wb\omega\tau^i \right) \wedge \lhuit{\nonbc{BC}}$ vanishes. But now that we assumed that $\tau$ is horizontal, this equation exactly means that \emph{$\tau$ is equivariant}. \item There exist coefficients $t^a_{bc}$ such that \begin{equation} \d\beta^a + \wb\omega\beta^a = \frac12 t^a_{bc}\alpha^b \wedge \alpha^c \label{eqno:tab} \end{equation} As $\wb\tau\alpha$ is now assumed to be purely horizontal, the term $\left( \d\beta^a + \wb\omega\beta^a + \wb\tau\alpha^a \right) \wedge \lhuit{\nonbc{BC}}$ vanishes. But this is exactly requiring that \emph{$\beta^a$ is equivariant}. \end{enumerate}\end{subequations} Under these three conditions, the terms of \eqref{eqno:phiLieX} vanish. We also need to check whether such variations preserve the constraint \eqref{eqno:ELconsDira} : \begin{equation}\begin{aligned} \mathcal L_X \left( \left( \d s + \omega^i\sigma_i s \right) \wedge \lneuf i \right) &= \left( \d s + \omega^i\sigma_i s \right) \wedge \mathcal L_X \lambda^B \wedge \lhuit{iB} = \left( \d s + \omega^i\sigma_i s \right) \epsilon^B_c\lambda^c \wedge \lhuit{iB}\\ &= \left( \d s + \omega^i\sigma_i s \right) \epsilon^c_c\lneuf i \end{aligned}\end{equation} Thus the constraint is preserved without condition. To conclude, any equivariant horizontal $1$-form $\epsilon$ represents a variation preserving the constraints (\ref{eqno:ELconsEWC},\ref{eqno:ELconsDira}). They can be identified with families of equivariant coefficients $\epsilon^A_b$ on $\mathcal P$ with values in ${\mathbb{R}^4}^*\otimes\eucl$. Unfortunately, we cannot find coefficients $\epsilon^A_i$ such that \eqref{eqno:phiLieX} vanish in a similar manner for two reasons. 
First, to a nonzero $\epsilon^i$ will correspond a term \[ \Omega^A\wedge \epsilon_i^D\varpi^i\wedge \vsept{\nonbc{BC}D} = \Omega^A \left( \epsilon^i_i \vhuit{BC} - \delta_i^C \epsilon_i^D \vhuit{BD} + \delta_i^B \epsilon_i^D \vhuit{CD} \right) \] which can contain nonzero components of $\Omega^A$ when there are non-vanishing components $\epsilon^d_i$. Second, for a non-horizontal $1$-form $\epsilon$, equivariance is no longer equivalent to \[ \d\epsilon^A + \wb\omega\epsilon^A = \frac12 E^A_{bc}\alpha^b\wedge\alpha^c \] as the following equation holds, with $\epsilon_i=\epsilon({\mathfrak h}_i)\in\mathfrak p$ : \[ \left( \mathcal L_{{\bar{\h}}_i} + {\mathfrak h}_i \cdot \right) \epsilon = i_{{\mathfrak h}_i} \left( \d\epsilon + \wb\omega\epsilon \right) + \d\epsilon_i + \wb\omega{\epsilon_i} \] Thus arbitrary equivariant $1$-forms are not necessarily solutions of $i_{{\mathfrak h}_i} \left( \d\epsilon + \wb\omega\epsilon \right) = 0$. Indeed, $\omega$ itself is equivariant but obeys \[ \d\omega^i + \wb\omega\omega^i = \Omega^i +\fwb\omega\omega^i = \frac12\Omega^i_{bc}\alpha^b\wedge\alpha^c + \frac12 c^i_{jk}\omega^j\wedge\omega^k \] We therefore proceed only with the horizontal variations of $\varpi$ and general variations of $\psi$. Insofar as the fibration above the spacetime does not vary, these variations can be viewed as moving the direct orthonormal frame bundle inside the general linear frame bundle of the spacetime. Note however that besides metric and connection variations, there is an extra gauge freedom as the spin group $\Spin_4$ acts on $T^*\mathcal P\otimes \eucl$ by bundle automorphisms \cite{DynPrincBundle}. \subsubsection*{Variations of the spinor field} If the structure group is $\Spin_4$, the same computation can be done with variations $X$ of $\psi$, identified with $\Sigma$-valued fields $\xi$ over $\mathcal P$ (while keeping the generalised frame structure unvaried). In this case, the compatibility with the constraint \eqref{eqno:ELconsDira} requires \begin{equation} \mathcal L_X (\d s + \omega^i \sigma_i\cdot s) = \d\xi + \omega^i\sigma_i \xi = \varsigma_a\alpha^a \end{equation} which here again expresses the local equivariance of $\xi$ over $\mathcal P$. It is obvious that variations of $\psi$ preserve the constraint term $\left( \d\lambda +\fwb\lambda\lambda \right)\lhuit{BC}$. \subsubsection{A basis for the constraint-preserving variations}\com{Necessary?} We want to construct a local basis of equivariant $\soq$-valued horizontal forms, as well as equivariant $\Sigma$-valued fields. Here we will make use of the principal bundle structure of $\mathcal P$, or more exactly the existence of slices. A slice $S$ is the image of a local section $\mathcal P/\Spin_4 \to \mathcal P$. Thus equivariant fields on $\Spin_4\cdot S$ are uniquely identified by their value on $S$. We write $\beta^{\mathbf{a},\mu}$ (resp. $\tau^{\mathbf{i},\mu}$) for a basis of horizontal $\mathbb{R}^4$-valued forms (resp. horizontal $\so_4$-valued forms) on $S$ : the superscript $\mu$ corresponds to a basis of horizontal scalar $1$-forms on $S$, while $\mathbf{a}$ and $\mathbf{i}$ correspond to a basis of $\mathbb{R}^4$ (resp. $\soq$)-valued maps on $S$. Similarly we index equivariant spinor variations by their value on $S$ and write $\bar\xi_{\boldsymbol{\alpha}}$ for a basis of such vectors (variations of the adjoint spinor giving the spinor field equation), which can be identified with $\Sp^*$-valued maps over $S$. 
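As a simple count (our own bookkeeping, added for orientation), these constraint-preserving variations carry exactly the number of components expected from the tetrad and connection variations mentioned above : at each point, a horizontal $1$-form $\epsilon^A = \epsilon^A_b\alpha^b$ has
\[
\dim\mathfrak p \times \dim\mathfrak m = 10\times 4 = \underbrace{4\times4}_{\beta^a_b} + \underbrace{6\times4}_{\tau^i_b} = 40
\]
independent coefficients, that is $16$ for the variations of the tetrad and $24$ for the variations of the metric connection.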
\subsubsection{Euler-Lagrange terms corresponding to the variations} Recall the decomposition of the Poincaré-Cartan form $\bar\theta = \theta + \Theta^{cons}$ : \begin{align} \theta &= \frac12 2\rho_{i,c}^a{\eta}^{cb}\D\lambda^i \wedge \lhuit{ab} + \frac{1}2 \left( {\bar s} \gamma^a (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^a s \right) \wedge \lambda^{(9)}_a -m{\bar s} s \lambda^{(10)} \label{eqno:PC0}\\ \Theta^{cons} &= \frac12 p^{\nonbc{BC}}_D\D\lambda^D \wedge \lhuit{\nonbc{BC}} + \frac i 2\left( \bar\kappa^i (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \kappa^{i} \right) \wedge \lambda^{(9)}_i \label{eqno:PCcons} \end{align} Let $X$ be a vertical variation of $\phi$ which consists of an equivariant horizontal variation $\epsilon$ of $\varpi$ and an equivariant variation $\xi$ of $\psi$. According to the principle presented in Section~\ref{LagMult}, the Euler-Lagrange term corresponding to a variation $X$ and Poincaré-Cartan form $\Theta^{cons}$ is exact up to a term which vanishes under the constraint : \begin{equation} \begin{aligned} i_X \d\Theta^{cons} = \mathcal L_X (\Theta^{cons}) + \d i_X\Theta^{cons} \equiv \d i_X\Theta^{cons} \equiv 0 \end{aligned} \end{equation} We define the unconstrained Euler-Lagrange forms : \begin{align} (\EL^0)_X &= i_X \d\theta \end{align} The form $\EL^0$ has no dependency in $P_A^{\nonbc{BC}}$ nor in $K^{\alpha i}$. Then a field $\phi$ satisfying the Euler-Lagrange equations (\ref{eqno:ELconsEWC},\ref{eqno:ELconsDira}) satisfies the following : \begin{equation}\label{eqno:phiEL0exact} \phi^*(\EL^0)_X = - \phi^*\d i_X\Theta^{cons} \end{equation} The dependency in $P_A^{\nonbc{BC}}$ and in $K^{\alpha i}$ is gathered in the exact term. In order to make use of the exactness, we want to perform an integral. \subsection{Integration into variational equations on the spacetime} \subsubsection*{Unconstrained Euler-Lagrange terms} The unconstrained Euler-Lagrange form corresponds to the terms in (\ref{eqno:ELconsEWC}-\ref{eqno:ELalpha}) not involving the Lagrange multipliers $P^{\nonbc{BC}}_A$, $K^{\alpha i}$ : \begin{equation}\label{eqno:ELexact}\begin{aligned} \phi^*\EL^0_X = &\epsilon^A \wedge \left[ \D\lambda^i \wedge \frac12 p^{bc}_i \lsept{bcA} - \frac12 \left( {\bar s} \gamma^b (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^b s \right) \wedge \lhuit{bA} - m{\bar s} s \lneuf A \right]\\ % & + \epsilon^j \wedge \frac 12 \left( p^{bc}_j\D\lambda^D \wedge \lsept{bcD} + {\bar s} \{\sigma_j,\gamma^a\} s \lneuf a \right)\\ & + \bar\xi_\alpha \left( \frac{1}{2} \left( 2 (\gamma^a\d^\lambda\! s)^\alpha \wedge \lambda^{(9)}_a + (\gamma^a s)^\alpha \D\lambda^c \wedge \lhuit{ac} \right) - m s^\alpha \lambda^{(10)} \right) \end{aligned}\end{equation} The idea now is to perform a \enquote{partial integration}, or fibre integration, of these Euler-Lagrange equations over the orbits under $\Spin_4$. As they are compact, the exact terms will vanish and we will be left with equations involving only $\EL^0$. Furthermore, $\Spin_4$-equivariance of the variation $X$ implies $\Spin_4$-\emph{invariance} of the form $\phi^*\EL^0_X$, so that a vanishing integral over orbits implies that $\phi^*\EL^0_X$ vanishes at each point. But for this, we need to transform \eqref{eqno:ELexact} into an equation on $6$-forms so that it can be integrated along the $6$-dimensional orbits of $\Spin_4$. To this end we want to \enquote{factor out} a factor $\alpha^{(4)}$ while keeping the exactness of the right-hand term in \eqref{eqno:phiEL0exact}. 
The computation will be easier if we use explicit $10$-forms on $\mathcal P$. Doing so, we will not need to keep track of contact terms and constraint terms. Let us thus reexpress the different terms in \eqref{eqno:ELexact} : \begin{subequations}\label{eqno:EL0Xvdix} \begin{align} &\phi^*\left( \D\lambda^i\wedge \epsilon^A\wedge\lsept{bcA} \right) = \Omega^i\wedge\epsilon^A\wedge\vsept{bcA} = \frac12 \Omega^i_{de}\alpha^d\wedge\alpha^e\wedge\epsilon^A_f \alpha^f \vsept{bcA} = \frac12\Omega^i_{de}\epsilon^a_f \delta^{[def]}_{bca}{\varpi^{(10)}}\\ &\phi^*\left( \epsilon^A \wedge \left( {\bar s} \gamma^b (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^b s \right) \wedge \lhuit{bA} \right) \begin{aligned}[t] &= - \left( {\bar\psi} \gamma^b \d^\omega \psi - \d^\omega {\bar\psi} \gamma^b \psi \right) \epsilon^A \wedge \vhuit{bA}\\ &= - \left( {\bar\psi} \gamma^b \d^\omega \psi - \d^\omega {\bar\psi} \gamma^b \psi \right) \epsilon^A_c \wedge \left( \delta^c_A \vneuf{b} - \delta^c_b \vneuf A \right)\\ &= - \epsilon^a_c \left( {\bar\psi} \gamma^b \partial_d \psi - \partial_d {\bar\psi} \gamma^b \psi \right) \left( \delta^c_a \delta^d_b - \delta^c_b \delta^d_a \right) {\varpi^{(10)}} \end{aligned}\\ &\phi^*\left( \epsilon^A \wedge{\bar s} s \lneuf A \right) = \epsilon^A_A {\bar\psi} \psi {\varpi^{(10)}} = \epsilon^a_a {\bar\psi} \psi {\varpi^{(10)}} \\ % &\phi^*\left( \epsilon^j \wedge \D\lambda^D \wedge \lsept{bcD} \right) = \frac12 \epsilon^j_e \Omega^d_{fg} \delta^{[efg]}_{bcd} {\varpi^{(10)}}\\ % &\phi^*\left( \epsilon^j \wedge {\bar s} \{\sigma_j,\gamma^a\} s \lneuf a \right) = \epsilon^j_a {\bar\psi} \{\sigma_j,\gamma^a\} \psi {\varpi^{(10)}}\\ % &\bar\xi_\alpha \phi^*\left( (\gamma^a\d^\lambda\! s)^\alpha \wedge \lambda^{(9)}_a + \frac{1}{2} (\gamma^a s)^\alpha \D\lambda^c \wedge \lhuit{ac} \right) = \bar\xi_\alpha \left( \gamma^a\partial_a \psi^\alpha + \frac12 \gamma^a\psi^\alpha \Omega^c_{ac} \right) {\varpi^{(10)}} \end{align} \end{subequations} The non-normalised \emph{antisymmetric Kronecker symbol} $\delta^{[efg]}_{bcd}$ is defined as follows : \[ \delta^e_b\left( \delta^f_c\delta^g_d - \delta^f_d \delta^g_c \right) + \delta^e_c\left( \delta^f_d\delta^g_b - \delta^f_b \delta^g_d \right) + \delta^e_d\left( \delta^f_b\delta^g_c - \delta^f_c \delta^g_b \right) \] \com{Is it acceptable to use a non-normalised bracket here but a normalised one later?} From these it is straightforward to factorise all the terms by $\alpha^{(4)}$. We will write \[ \phi^*\EL^0_X = \phi^*E^0_X \wedge \alpha^{(4)} \] with $E^0_X$ a $6$-form defined on the image of $\phi$ (but it can also be expressed as a form on the jet bundle : $E^0_X\in \ExT^6 T^*\mathcal P\otimes\mathcal{J}^1(Q)$). \subsubsection*{Factorisation and exactness of the constraint terms} We need to factorise the exact term $\phi^* \left( \d i_X \Theta^{cons} \right)$. Extracting the relevant terms from Equations~\eqref{eqno:ELtotal} we have \[\begin{aligned} i_X\Theta^{cons} &= i_X \left( \frac12 p^{\nonbc{BC}}_D\D\lambda^D \wedge \lhuit{\nonbc{BC}} + \frac i 2\left( \bar\kappa^i (\d^\lambda\!
s) - (\d^\lambda\!{\bar s}) \kappa^{i} \right) \wedge \lambda^{(9)}_i\right)\\ &= \frac12 p^{\nonbc{BC}}_D\epsilon^D \wedge \lhuit{\nonbc{BC}} - \frac i 2 \bar\xi_\alpha \kappa^{\alpha i} \wedge \lneuf i\\ &= p^{jc}_D \epsilon^D_c \lneuf{j} - \frac i 2 \bar\xi_\alpha \kappa^{\alpha i} \wedge \lneuf i\\ &= \left( p^{jc}_D \epsilon^D_c \lambda^{\soq(5)}_j - \frac i 2 \bar\xi_\alpha \kappa^{\alpha i} \lambda^{\soq(5)}_i \right) \wedge \lambda^{\mathfrak m(4)} \end{aligned}\] Note how essential it is here again that \emph{$\epsilon$ is a purely horizontal form}. This is what allows us to factor $\epsilon^D\wedge \lhuit{\nonbc{BC}}$ by $\lambda^{\mathfrak m(4)}$. Considering the exterior differential of the pullback, we obtain \begin{equation*} \begin{aligned} \phi^*\d i_X \Theta^{cons} &= \d\phi^*i_X\Theta^{cons}\\ &= \d \left( \left( P^{jc}_D \epsilon^D_c \omega^{(5)}_j - \frac i 2 \bar\xi_\alpha K^{\alpha i} \omega^{(5)}_i \right) \wedge \alpha^{(4)} \right)\\ &=\d \left( P^{jc}_D \epsilon^D_c \omega^{(5)}_j - \frac i 2 \bar\xi_\alpha K^{\alpha i} \omega^{(5)}_i \right) \wedge \alpha^{(4)} + \left( P^{jc}_D \epsilon^D_c \omega^{(5)}_j - \frac i 2 \bar\xi_\alpha K^{\alpha i} \omega^{(5)}_i \right) \wedge \d \alpha^{(4)}\\ &= \d \left( P^{jc}_D \epsilon^D_c \omega^{(5)}_j - \frac i 2 \bar\xi_\alpha K^{\alpha i} \omega^{(5)}_i \right) \wedge \alpha^{(4)} + 0 \end{aligned} \end{equation*} We used $\d\alpha^{(4)} = 0$ which is a consequence of Equation~\eqref{eqno:ELconsEWC} \[ \d\alpha^a + \wb\omega\alpha^a = \frac12 \Omega^a_{bc} \alpha^b\wedge\alpha^c \] We call $E^{cons}_X$ the term which is differentiated : \begin{equation} \phi^*E^{cons}_X := P^{jc}_D \epsilon^D_c \omega_j^{(5)} - \frac i 2 \bar\xi_\alpha K^{\alpha i} \omega_i^{(5)} \end{equation} We finally arrive at the following equation, which holds for each equivariant vertical variation $X$ of $\phi$ : \begin{equation}\label{eqno:ELaquatre} \boxed{ \phi^*E^0_X \wedge \alpha^{(4)} = \d\phi^*E^{cons}_X \wedge\alpha^{(4)} } \end{equation} \subsubsection*{Integration along the orbits} The tangent space to the orbits under $\Spin_4$ is exactly the kernel of $\alpha^{(4)}$. This means that on any orbit $\Spin_4\cdot x\subset \Spin_4\cdot S$, Equation~\eqref{eqno:ELaquatre} implies \begin{equation} \phi^*E^0_X|_{\Spin_4\cdot x} = \d\phi^*E^{cons}_X |_{\Spin_4\cdot x} \end{equation} The orbit being compact, we can integrate along the orbit : \begin{equation} \int_{\Spin_4\cdot x} \phi^*E^0_X|_{\Spin_4\cdot x} = \int_{\Spin_4\cdot x} \d\phi^*E^{cons}_X |_{\Spin_4\cdot x} = 0 \end{equation} by virtue of Stokes' theorem. Now we want to conclude $\phi^*E^0_X|_{\Spin_4\cdot x} = 0$. It will follow if we can show that $\phi^*E^0_X$ is $\Spin_4$-invariant. Indeed, as $\Spin_4$ preserves the orientation of the orbit and acts transitively, any invariant form with vanishing integral is necessarily identically zero. But Equations \eqref{eqno:EL0Xvdix} express $\phi^*\EL^0_X$ as a multiple of ${\varpi^{(10)}}$, which is $\Spin_4$-invariant by construction, with a factor which is $\Spin_4$-invariant as a complete contraction of equivariant quantities (including $\epsilon$ and $\bar\xi$). Thus factoring out an equivariant $\alpha^{(4)}$ leaves us with an equivariant $\phi^*E^0_X$. 
Thus we proved that \begin{equation}\label{eqno:phiEL0X0} \boxed{ \phi^*\EL^0_X = 0 } \end{equation} for $X$ consisting of equivariant variations of $\psi$ and \emph{horizontal and} equivariant variations of $\varpi$, over any orbit $\Spin_4\cdot x$. Hence it holds at each point. Equation~\eqref{eqno:phiEL0X0} a priori only holds for $1$-forms $\epsilon$ which are \emph{constant} linear combinations of the fields $\beta^{\mathbf{a},\mu}$, $\tau^{\mathbf{i},\mu}$ and $\bar\xi_{{\boldsymbol{\alpha}}}$. However it is a tensorial equation which holds at each point. Thus it still holds if $X$ is multiplied by any real function on $\Spin_4\cdot S$. One concludes that Equation~\eqref{eqno:phiEL0X0} holds for $X$ which is \emph{any variation of $\psi$} and \emph{any horizontal variation of $\varpi$}. Let us recall the coefficient of $\omega^{(6)}$ in $\phi^*E^0_X$, as expressed in~\eqref{eqno:EL0Xvdix} : \begin{equation}\begin{aligned} &\epsilon^a_b \left( \frac12 p_i^{cd} \frac12 \Omega^i_{ef} \delta^{[bef]}_{cda} % +\frac12\left( \delta^b_a \delta^d_c - \delta^b_c \delta^d_a \right) \left( {\bar\psi} \gamma^c \partial_d \psi - \partial_d {\bar\psi} \gamma^c \psi \right) - m \delta^b_a {\bar\psi} \psi \right)\\ +{}&\epsilon^i_b \left( \frac12 p_i^{ef} \frac12 \Omega^g_{cd} \delta^{[bcd]}_{efg} + {\bar\psi} \{\sigma_i,\gamma^b\} \psi \right) + \bar\xi_\alpha \left( \gamma^a\partial_a \psi^\alpha + \frac12\gamma^a\psi^\alpha \Omega^c_{ac} - m \psi^\alpha \right) \end{aligned}\end{equation} Since it has to vanish for \emph{any} $\epsilon^A_b$ and $\bar\xi_\alpha$, it means that each coefficient vanishes : \begin{subequations}\label{eqno:FEtot} \begin{gather} \frac14 p_i^{cd} \Omega^i_{ef} \delta^{[bef]}_{cda} % + \frac12 \left( \delta^b_a \delta^d_c - \delta^b_c \delta^d_a \right) \left( {\bar\psi} \gamma^c \partial_d \psi - \partial_d {\bar\psi} \gamma^c \psi \right) - m \delta^b_a {\bar\psi} \psi = 0 \label{eqno:FEtotab}\\ % \frac14 p_i^{ef} \Omega^g_{cd} \delta^{[bcd]}_{efg} + {\bar\psi} \{\sigma_i,\gamma^b\} \psi = 0 \label{eqno:FEtotib}\\ \gamma^a\partial_a \psi^\alpha + \frac12 \gamma^a\psi^\alpha \Omega^c_{ac} - m \psi^\alpha = 0 \label{eqno:FEtotalpha} \end{gather}\end{subequations} We obtained \emph{tensorial equations} on $\mathcal P$. By assumption, $\mathcal P$ is a (spin) frame bundle above a spacetime ${\mathcal E}$. We now express these tensorial equations on ${\mathcal E}$. \subsubsection*{Expression in spacetime coordinates} As all fields involved in Equations~\eqref{eqno:FEtot} are equivariant, they can be pushed forward to sections of the associated bundles on spacetime. Thus the equations can be formulated on spacetime. We need a local system of coordinates, with a local trivialisation of the spinor frame bundle. We use Greek indices $\mu,\nu\dots$ for coordinates. We keep the notation $\psi$ for the spinor field as well as the superscript $\alpha$ for spinor fields.\com{Change the notation?} There is on ${\mathcal E}$ a metric $g$ corresponding to ${\eta}_{ab}\alpha^a\otimes \alpha^b$ ; it is compatible with the $\Spin_4$-structure defined by $\mathcal P\to {\mathcal E}$. Handling the factors $p^{ab}_i$ requires some care. 
Let us recall their definition : \[ p_i^{bc} = 2\rho^b_{i,d}{\eta}^{dc} \] We also stated that they correspond to an isomorphism \[\eucl\xrightarrow{\sim} \ExT^2 {\mathbb{R}^4}^*\] First, \[\frac12 p_i^{de}\Omega^i_{ab} \text{ corresponds to } \Riem^{o}{}_{\mu\nu\pi}g^{\pi\xi} \] (with indices corresponding in alphabetic order) as explained in Appendix~\ref{annRic}. The $\Omega$ term in Equation~\eqref{eqno:FEtotab} becomes : \begin{equation*} \frac12 \Riem^\xi{}_{\pi\rho \upsilon} g^{\upsilon o} \delta^{[\nu\sigma\tau]}_{\xi o \mu} = \Riem^\nu{}_{o\mu \upsilon} g^{\upsilon o} - \Riem^\xi{}_{\xi \mu \upsilon} g^{\upsilon\nu} + \Riem^\xi{}_{\xi o \upsilon} g^{\upsilon o} \delta^\nu_\mu = \delta^\nu_\mu \Scal - 2 g^{\nu\pi}\Ric_{\mu\pi} \end{equation*} For Equation~\eqref{eqno:FEtotib} we recall as well that $\sigma_i$ can be written as a function of $\gamma^a$ in the following way : \[ \sigma_i = \frac14 p_i^{ab}\gamma_a\gamma_b = \frac18 p_i^{ab}[\gamma_a,\gamma_b] \] as explained in Appendix~\ref{annPinSpin} ($p$ is twice the map $\rho$ described there). Equation~\eqref{eqno:FEtotib} is thus equivalent to \[ 0 = \frac12 \Omega^g_{cd} \delta^{[bcd]}_{efg} + \frac14 {\bar\psi} \left\{ [\gamma_e,\gamma_f],\gamma^b \right\} \psi = \Omega^g_{fg} \delta^b_e + \Omega^g_{ef} \delta^b_g - \Omega^g_{eg} \delta^b_f + \frac14 {\bar\psi} \left\{ [\gamma_e,\gamma_f],\gamma^b \right\} \psi \] The spacetime tensor corresponding to $\Omega^a_{bc}$ is the torsion tensor $T^\mu_{\nu\xi}$. We define its trace \[ \tr(T)_\rho = T^\sigma_{\sigma\rho} \] The equation on spacetime corresponding to \eqref{eqno:FEtotib} is : \begin{equation} \left( -\tr(T)_\xi \delta^\mu_\nu+ \tr(T)_\nu \delta^\mu_\xi + T^\mu_{\nu\xi} \right) + \frac14 {\bar\psi} \{[\gamma_\nu,\gamma_\xi],\gamma^\mu\} \psi = 0 \end{equation} Last, the horizontal derivatives $\partial_a$ turn into covariant derivatives on spacetime, as they correspond to deriving the fields on the frame bundle along horizontal directions. In particular, $\gamma^a\partial_a$ corresponds to the (covariant) Dirac operator. We summarize the correspondence in a table : \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline Frame bundle & Spacetime\\ \hline $\partial_a$ & $\nabla_a$\\ \hline $\gamma^a\partial_a$ & $\slashed{\nabla}$\\ \hline ${\eta}_{ab}$ & $g_{\mu\nu}$\\ \hline $p_i^{cd}\Omega^i_{ef}$ & $\Riem^{\xi}{}_{\mu\nu}{}^o$ \\ \hline $\Omega^a_{bc}$ & $T^\mu_{\nu\xi}$\\ \hline $\psi^\alpha$ & $\psi^\alpha$\\ \hline \end{tabular} \end{table} For convenience, we convert~\eqref{eqno:FEtotab} to a totally covariant equation. Separating geometry terms and matter terms in Equations~\eqref{eqno:FEtot}, we obtain : \begin{subequations}\begin{empheq}[left=\empheqlbrace]{align} 2\Ric_{\mu\nu} - g_{\mu\nu} \Scal &= g_{\mu\nu} \left( \frac12 {\bar\psi} {\overleftrightarrow{\Dirac}} \psi - m {\bar\psi}\psi \right) - \frac12 {\bar\psi} \gamma_\mu {\overleftrightarrow{\nabla}}_\nu \psi % \label{eqno:EqRicSp} \\ % T^\mu_{\nu\xi} - \tr(T)_\xi \delta^\mu_\nu + \tr(T)_\nu \delta^\mu_\xi &= - \frac14 {\bar\psi} \left\{ [\gamma_\nu,\gamma_\xi],\gamma^\mu \right\} \psi % \label{eqno:EqTorsSp}\\ % \slashed{\nabla}\psi - \frac12 \tr(T)_\mu \gamma^\mu\psi -m\psi &= 0 % \label{eqno:EqRGDirac} % \end{empheq}\end{subequations} \begin{sloppypar} These are the Einstein-Cartan-Dirac field equations (Equation~\eqref{eqno:EqRicSp} is usually presented with an extra factor $1/2$). 
Equation~(\ref{eqno:EqTorsSp}) defines algebraically the tensor $T - \Id\wedge \tr T$ as a function of the spinor field, hence the torsion as well (as ${\tr(T - \Id\wedge\tr T) = (1 - 4 + 1) \tr(T)}$).\end{sloppypar} \subsubsection*{Untreated variational equations} We obtained equations which correspond to the Euler-Lagrange equations for equivariant variations of $\psi$ and horizontal equivariant variations of $\varpi$. We proved that in this case the term $\phi^*\EL^0_X$ has to vanish. This is only implied by the Euler-Lagrange equations and is by no means equivalent to them. There are two parts of the Euler-Lagrange equations which we do not use. First, for \emph{non-equivariant variations} of $\psi$ and \emph{non-equivariant horizontal variations} of $\varpi$, we proved that $\phi^*\EL^0_X$ vanishes. But the Euler-Lagrange equation~\eqref{eqno:phiEL0exact} is \[ \phi^*(\EL^0_X + i_X\d\Theta^{cons}) = 0 \] Thus the Euler-Lagrange equation is equivalent to \begin{equation} \phi^*(i_X\d\Theta^{cons}) = 0 \end{equation} which is an equation involving the Lagrange multipliers $p_i^{ab}$ and $\kappa^{\alpha i}$ as well as the fields $\varpi^A$ and $\psi^\alpha$. Second, we did not consider \emph{vertical} variations of $\varpi$ at all. This is because we could not find such variations which preserve the constraints. Furthermore, verticality would also prevent factorising an $\alpha^{(4)}$ factor out of the exact term in \eqref{eqno:ELaquatre}. Vertical variations of $\alpha^a$ correspond to variations of the vertical distribution which integrates into the orbits. Vertical variations of $\omega^i$ correspond to variations of the vector fields representing the Lie algebra $\operatorname{\mathfrak{spin}}_4$. We only note that these equations are subject to some degeneracy, according to Noether's second theorem. Indeed the Poincaré-Cartan form (\ref{eqno:PC0},\ref{eqno:PCcons}) is invariant under diffeomorphisms of $\mathcal P$ \cite{DynPrincBundle}. As the method that allowed us to obtain variational equations on spacetime does not apply to these equations, their study falls outside of the scope of the present paper. \subsection{Decomposing the field equations}\label{secno:DecompFE} \com{In a separate section?} For completeness, we present a brief analysis of the Einstein-Cartan-Dirac equations. It is classical and can be found in the literature, along with its physical implications~\cite{ECD,ECT,STTors}. In order to carry out the analysis, we want to decompose the tensor equations into components with different index symmetries (sub-representations under $\SO_4$). \subsubsection*{Pure axiality of the torsion} Starting with the torsion, it is convenient to have all indices of the same type (covariant) : we write $T_{\tau\mu\nu} = g_{\tau\pi}T^\pi{}_{\mu\nu}$. The torsion can be decomposed as \cite{STTors,TPGravity,TG} \begin{equation} T_{\tau\mu\nu} = \frac13 \left( \tr(T)_\nu g_{\tau\mu} - \tr(T)_\mu g_{\tau\nu} \right) + \mathcal{A}_{\tau\mu\nu} + \mathcal T_{\tau\mu\nu} \end{equation} with $\tr(\mathcal T) = 0$, the component $\mathcal{A}$ purely antisymmetric and $\mathcal T_{\tau\mu\nu} + \mathcal T_{\mu\nu\tau} + \mathcal T_{\nu\tau\mu} = 0$ ($\mathcal T$ is called the \emph{pure torsion} part). 
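As a quick consistency check of the $1/3$ normalisation, contracting the decomposition with $g^{\tau\mu}$ kills the totally antisymmetric part $\mathcal A$ and the traceless part $\mathcal T$, while the first term reproduces the trace $\tr(T)_\nu = g^{\tau\mu}T_{\tau\mu\nu}$ :
\[
g^{\tau\mu}\,\frac13 \left( \tr(T)_\nu g_{\tau\mu} - \tr(T)_\mu g_{\tau\nu} \right) = \frac13\left( 4 - 1 \right)\tr(T)_\nu = \tr(T)_\nu .
\]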
We then express the torsion term of~(\ref{eqno:EqTorsSp}) in terms of these components : \begin{equation} T_{\tau\mu\nu} - \left( g_{\tau\mu} \tr(T)_\nu - g_{\tau\nu} \tr(T)_\mu \right) = -\frac23 \left( \tr(T)_\nu g_{\tau\mu} - \tr(T)_\mu g_{\tau\nu} \right) + \mathcal{A}_{\tau\mu\nu} + \mathcal T_{\tau\mu\nu} \end{equation} Now the matter term $-\frac 1 2 {\bar\psi} \{\sigma_{\mu\nu},\gamma_\tau \} \psi$ is totally antisymmetric (see Appendix~\ref{annCommSigGam}). Equation~(\ref{eqno:EqTorsSp}) hence decomposes into the following three equations \begin{subequations}\begin{align} \tr(T) &= 0\\ \mathcal{A}_{\tau\mu\nu} &= - \frac 1 4 {\bar\psi} \{\sigma_{\mu\nu},\gamma_\tau \} \psi \label{eqno:TorsAx}\\ \mathcal T &= 0 \end{align}\end{subequations} The equations require the torsion to be reduced to its so-called \emph{axial} part. Notably, the trace term appearing in~(\ref{eqno:EqRGDirac}) has to vanish~\cite{TG} so that \begin{equation} T^\mu_{\nu\xi} = - \frac14 {\bar\psi} \left\{ [\gamma_\nu,\gamma_\xi],\gamma^\mu \right\} \psi \end{equation} The $3$-form $\mathcal{A}$ which is part of the spacetime geometry is (algebraically) coupled to the spinor field ; one can say that this degree of freedom is what separates Einstein-Cartan theory from Einstein's theory of General Relativity. Equation~(\ref{eqno:EqTorsSp}) corresponds to variations of the connection and as such the matter term (the $\psi$ part) is identified with the angular momentum current~\cite{EMTens}. Torsion hence couples the angular momentum current with the various fields of the theory ($\psi$ in the present case). As $\mathcal{A}$ is a purely antisymmetric $(4-1)$-form, we can express it by its dual (pseudo-)vector $A$, also called the \emph{axial vector} : \[ \mathcal{A}_{\tau\mu\nu} = (A\lrcorner\, \mathrm{vol})_{\tau\mu\nu} = A^\xi\epsilon_{\xi\tau\mu\nu} \] with $\epsilon_{\xi\tau\mu\nu}$ the Levi-Civita symbol, which gives the components of the volume form in a basis of determinant $1$. Equation~(\ref{eqno:TorsAx}) can be reexpressed using the chirality operator \[\gamma_5 = \gamma_0 \gamma_1 \gamma_2 \gamma_3\] defined in Appendix~\ref{annCommSigGam} : \[A^\xi\epsilon_{\xi\tau\mu\nu} = -\frac12 \epsilon_{\xi\mu\nu\tau}{\bar\psi} \gamma^\xi\gamma_5 \psi\] from which we obtain \begin{equation}\begin{aligned} A^\xi &= - \frac12{\bar\psi}\gamma^\xi\gamma_5 \psi \end{aligned}\end{equation} The equation of motion of the spinor~(\ref{eqno:EqRGDirac}) and its conjugate then take the form \begin{subequations}\label{seqno:SpEOM}\begin{align} \left( \slashed{\nabla} - m \right) \psi &= 0\\ \left( \bar \slashed{\nabla} - m \right) {\bar\psi} &= 0 \end{align}\end{subequations} and one concludes that the term $ - g_{\mu\nu} {\bar\psi} (\frac 1 2 {\overleftrightarrow{\Dirac}} - m) \psi$ in~(\ref{eqno:EqRicSp}) has to vanish. This is a general property of homogeneous Lagrangians (such as the Dirac Lagrangian) : they take the value zero on-shell.

\subsubsection*{Symmetric and antisymmetric parts of the curvature-energy relation}
The first equation~(\ref{eqno:EqRicSp}) corresponds to Equation \eqref{eqno:FEtotab} hence to the coefficient $\epsilon^a_b$ in variations, that is to say it is the Euler-Lagrange term corresponding to horizontal variations of $\alpha$. It can be decomposed into a symmetric and an antisymmetric part.
We use the parenthesis $(\mu\nu)$ notation for symmetrisation and bracket notation $[\mu\nu]$ for antisymmetrisation (normalized by a $1/2$ factor) : \begin{subequations}\begin{align} 2\Ric_{(\mu\nu)} - g_{\mu\nu} \Scal &= g_{\mu\nu} \left( \frac12 {\bar\psi} {\overleftrightarrow{\Dirac}} \psi - m {\bar\psi}\psi \right) - \frac12{\bar\psi} \gamma_{(\mu}{\overleftrightarrow{\nabla}}_{\nu)} \psi \label{eqno:RicSym}\\ 2\Ric_{[\mu\nu]} &= - \frac12 {\bar\psi} \gamma_{[\mu}{\overleftrightarrow{\nabla}}_{\nu]} \psi \label{eqno:RicAsym} \end{align}\end{subequations} We first look at Equation~(\ref{eqno:RicAsym}). It is the variational term corresponding to \enquote{antisymmetric} variations in the solder form, that is to say variations which preserve the metric. In other words, they correspond to infinitesimal automorphisms of the frame bundle. The Bianchi identity on the Ricci curvature~(\ref{eqno:RicBianchiBase}) (in Appendix \ref{annRicBianchi}) relates the antisymmetric part of the Ricci curvature to the exterior divergence of the torsion \begin{equation}\label{eqno:RicBianchimunu} 2\Ric_{[\mu\nu]} = \nabla_\pi T^\pi_{\mu\nu} - \d\tr (T)_{\mu\nu} = \nabla_\pi T^\pi_{\mu\nu} \end{equation} It allows us to rewrite \eqref{eqno:RicAsym} as an equation on the torsion : \begin{equation} \nabla_\pi T^\pi_{\mu\nu} = -\frac12 {\bar\psi} \gamma_{[\mu}{\overleftrightarrow{\nabla}}_{\nu]} \psi \end{equation} The right-hand term, which corresponds to (twice) the antisymmetric part of the \enquote{canonical energy-momentum tensor} (the terms in $\psi$ in~(\ref{eqno:EqRicSp})), is the so-called \emph{Belinfante (improvement) tensor}~\cite{WeinbergI,MetricAffine2,STandFields}. The Cartan geometry, through the Bianchi identity, then imposes the following equation \begin{equation}\label{eqno:ConsBianchitmp} \boxed{ \frac12 {\bar\psi} \gamma_{[\mu}{\overleftrightarrow{\nabla}}_{\nu]} \psi = \frac14\nabla_\pi \left( {\bar\psi} \{\sigma_{\mu\nu},\gamma^\pi \} \psi \right) } \end{equation} which relates the Belinfante tensor to the \emph{covariant} divergence of the angular momentum current (also called \emph{spin density}~\cite{ECT} or \emph{spin current}~\cite{MetricAffine1}). In theories without torsion, the connection has to follow the solder form variations, so that Equation~\eqref{eqno:ConsBianchitmp} would directly take the place of~\eqref{eqno:RicAsym} as in~\cite{WeinbergI} (as the variational equation for variations of the solder form which preserve the metric). The symmetric component~\eqref{eqno:RicSym} corresponds to complementary variations of the solder form hence to variations of the metric (symmetric variations of a frame have been considered as the natural complement to isometric variations~\cite{MetVar}). The corresponding matter term is then identified with the symmetric energy-momentum tensor~\cite{EMTens,MetricAffine1}. It is Einstein's field equation, binding the (symmetric Ricci) curvature of spacetime to the distribution of energy-momentum.
Taking into account the fact that the term corresponding to the Dirac Lagrangian vanishes due to~\eqref{seqno:SpEOM}, Equation~\eqref{eqno:RicSym} simplifies to \begin{equation}\boxed{ 2\Ric_{(\mu\nu)} - g_{\mu\nu} \Scal = \frac 12 {\bar\psi} \gamma_{(\mu}{\overleftrightarrow{\nabla}}_{\nu)} \psi }\end{equation}

\subsection{Expression in terms of the Levi-Civita connection}
To compare the Einstein-Cartan theory with Einstein's General Relativity, we relate the connection to the Levi-Civita connection by means of its \emph{contorsion}, which is defined as its difference from the Levi-Civita connection : \begin{equation}\label{KContors} \nabla = \nabla^{LC} + K \end{equation} with $K$ a $1$-form on $\mathcal P$ with values in $\so(T\mathcal P)$. According to the previous section, the field equation~(\ref{eqno:EqTorsSp}) requires the torsion to be purely axial. As the contorsion is uniquely defined by the torsion (assuming metricity) \cite{TPGravity,STTors}, in our case $K$ has to be $\frac12 T$ : \[ K_{\mu\nu}^\pi = \frac12 T^\pi_{\mu\nu} \]

\subsubsection*{Ricci and scalar curvatures}
According to Equation~\eqref{VarRic} from Section~\ref{RicContors}, the Ricci curvature of the connection can be related to the Ricci curvature of the Levi-Civita connection by the following equation : \begin{equation}\begin{aligned} \Ric_{\mu\nu} &= \Ric^{LC}_{\mu\nu} + \delta^\tau_\pi \left( \nabla^{LC}_\tau K^\pi_{\mu\nu} - \nabla^{LC}_\mu K^\pi_{\tau\nu} + K^\pi_{\tau \kappa}K^\kappa_{\mu\nu} - K^\pi_{\mu \kappa}K^\kappa_{\tau\nu} \right)\\ &= \Ric^{LC}_{\mu\nu} + \frac12\operatorname{div}^{LC}T_{\mu\nu} - 0 + 0 - \frac14 T^\pi_{\mu \kappa}T^\kappa_{\pi\nu} \end{aligned}\end{equation} We see that the difference between the Ricci curvatures has an antisymmetric term $\frac12\operatorname{div}^{LC}T_{\mu\nu}$ and a symmetric term $-\frac14 T^\tau_{\mu \kappa}T^\kappa_{\tau\nu}$, as the torsion is purely axial. Since the Ricci curvature of the Levi-Civita connection is symmetric (due to the Bianchi identity \eqref{eqno:RicBianchimunu} for a torsion-free connection), the antisymmetric difference between the Ricci curvatures is exactly the antisymmetric part of the Ricci curvature of the connection, as expressed in the Bianchi identity. We express the Ricci curvature difference in terms of the axial vector. We start with the quadratic term : \begin{equation*}\begin{aligned} T^\pi_{\mu\kappa}T^\kappa_{\pi\nu} &= g^{\pi\tau}g^{\kappa\rho} T_{\tau\mu\kappa} T_{\rho\pi\nu} = g^{\pi\tau}g^{\kappa\rho} A^\xi A^\upsilon \varepsilon_{\xi \tau\mu\kappa} \varepsilon_{\upsilon \rho\pi\nu}\\ &= A^\xi A^\upsilon (g_{\xi\upsilon}g_{\mu\nu} - g_{\xi \nu}g_{\mu\upsilon})\\ &= A^\xi A_\xi g_{\mu\nu} - A_\nu A_\mu \end{aligned}\end{equation*} The change in scalar curvature is directly derived : \begin{equation} g^{\mu\nu} \left( - \frac14 T^\pi_{\mu\kappa}T^\kappa_{\pi\nu} \right) = -\frac{3}4 A^\xi A_\xi \end{equation} Identifying the totally antisymmetric torsion with a $3$-form, its divergence for the torsion-free Levi-Civita connection corresponds to (minus) the codifferential for the Hodge duality structure \cite{Petersen}.
It can hence be identified with the Hodge dual of the exterior differential of the axial $1$-form $A_\pi = g_{\pi\xi}A^\xi$ : \begin{equation} \operatorname{div}^{LC} T_{\mu\nu} = -(\star \d A)_{\mu\nu} \end{equation} The left-hand terms of~(\ref{eqno:RicSym}-\ref{eqno:RicAsym}) are expressed as \begin{equation} \begin{aligned} 2\Ric_{(\mu\nu)} - g_{\mu\nu}\Scal &= 2\Ric^{LC}_{\mu\nu} - g_{\mu\nu} \Scal^{LC} - \frac24 (A^\xi A_\xi g_{\mu\nu} - A_\mu A_\nu) +g_{\mu\nu} \frac34 A^\xi A_\xi\\ &= 2\Ric^{LC}_{\mu\nu} - g_{\mu\nu} \Scal^{LC} + \frac14 \left( A^\xi A_\xi g_{\mu\nu} + 2 A_\mu A_\nu \right) \end{aligned}\end{equation} and \begin{equation} 2\Ric_{[\mu\nu]} = -(\star \d A)_{\mu\nu} \end{equation} \subsubsection*{The spinor connection} We also need to express the covariant derivative of spinor fields in terms of $K$. It is straightforward from~(\ref{KContors}) : \begin{equation} \nabla_\mu\psi = \nabla_\mu^{LC} \psi + \frac12 K_{\mu\nu}^\pi \frac12\sigma_\pi{}^{\nu}\psi = \nabla_\mu^{LC} \psi + \frac18 T_{\tau\mu\nu}\sigma^{\tau\nu}\psi \end{equation} We can re-express the kinetic term ${\bar\psi} \gamma_\mu{\overleftrightarrow{\nabla}}_\nu \psi$ : \begin{equation}\label{eqno:ConnToLC}\begin{aligned} {\bar\psi} \gamma_\mu{\overleftrightarrow{\nabla}}_\nu \psi &= {\bar\psi} \gamma_\mu \left( \nabla^{LC}_\nu + \frac18 T_{\nu\pi\tau}\sigma^{\tau\pi} \right)\psi - \left( \nabla^{LC}_\nu + \frac18 T_{\nu\pi\tau}\bar\sigma^{\tau\pi} \right) {\bar\psi} \gamma_\mu \psi\\ &= {\bar\psi} \left( \gamma_\mu{\overleftrightarrow{\nabla}}_\nu^{LC} +\frac18 T_{\nu\pi\tau}\{\sigma^{\pi\tau},\gamma_\mu\} \right) \psi \end{aligned}\end{equation} and in terms of the axial (pseudo)-vector, using the formulae~(\ref{eqno:CommChir}-\ref{eqno:LCcontr}) from Appendix~\ref{annCommSigGam} : \begin{gather} \frac12 \{\sigma_{\pi\tau},\gamma_\mu\} = \varepsilon_{\upsilon\pi\tau\mu}\gamma^\upsilon\gamma^5 \label{eqno:CommChir}\\ \frac12 \varepsilon^{\xi\nu\pi\tau}\varepsilon_{\upsilon\pi\tau\mu} = \left( \delta^\xi_\upsilon \delta^\nu_\mu - \delta^\xi_\mu\delta^\nu_\upsilon \right) \label{eqno:LCcontr} \end{gather} to obtain : \begin{equation}\label{eqno:ConnToLCAx}\begin{aligned} {\bar\psi} \gamma_\mu{\overleftrightarrow{\nabla}}_\nu \psi &= {\bar\psi} \left( \gamma_\mu{\overleftrightarrow{\nabla}}_\nu^{LC} +\frac14 T_{\nu\tau\kappa} \varepsilon^{\upsilon\tau\kappa}{}_\mu{} \gamma_{\upsilon} \gamma_5\right) \psi\\ &= {\bar\psi} \left( \gamma_\mu{\overleftrightarrow{\nabla}}_\nu^{LC} +\frac14 A^\xi\varepsilon_{\xi\nu\tau\kappa} \varepsilon^{\upsilon\tau\kappa}{}_\mu{} \gamma_{\upsilon} \gamma_5\right) \psi\\ &= {\bar\psi} \left( \gamma_\mu{\overleftrightarrow{\nabla}}_\nu^{LC} + \frac12 \left( A^\xi\gamma_\xi g_{\mu\nu} - A_{\mu}\gamma_\nu \right) \gamma_5 \right) \psi \end{aligned}\end{equation} The Dirac operator is readily rewritten as well, using~(\ref{eqno:CommChir}) and the total antisymmetry mentioned right above: \begin{equation}\begin{aligned} \slashed{\nabla} &= \slashed{\nabla}^{LC} + \frac18\gamma^\mu T_{\tau\mu\nu}\sigma^{\tau\nu} = \slashed{\nabla}^{LC} + \frac18 A^\xi\varepsilon_{\xi\tau\mu\nu}\frac12\{\gamma^\mu,\sigma^{\tau\nu}\} = \slashed{\nabla}^{LC} - \frac18 A^\xi 3!\gamma_\xi\gamma_5\\ &= \slashed{\nabla}^{LC} - \frac34 A^\xi\gamma_\xi\gamma_5 \end{aligned}\end{equation} The field equations~(\ref{eqno:EqRicSp}-\ref{eqno:EqRGDirac}) can then be reformulated as a Levi-Civita connection + axial (pseudo)-vector theory : \begin{subequations}\begin{empheq}[left=\empheqlbrace]{align} 
2\Ric^{LC}_{\mu\nu} -g_{\mu\nu} \Scal^{LC} +\frac14 \left( A^\xi A_\xi g_{\mu\nu} + 2 A_\mu A_\nu \right) &= \frac12 {\bar\psi} \left( \gamma_{(\mu}{\overleftrightarrow{\nabla}}^{LC}_{\nu)} + \frac12 \left( A^\xi\gamma_\xi g_{\mu\nu} - A_{(\mu}\gamma_{\nu)} \right) \gamma_5 \right) \psi \label{eqno:LCRicAx}\\ (\star \d A)_{\mu\nu} &= \frac12 {\bar\psi} \left( \gamma_{[\mu}{\overleftrightarrow{\nabla}}^{LC}_{\nu]} - A_{[\mu}\gamma_{\nu]} \gamma_5 \right) \psi \label{eqno:LCTorsAx}\\ A^\xi &= -\frac12\bar\psi \gamma^\xi\gamma_5 \psi \label{eqno:AxSpin}\\ \slashed{\nabla}^{LC} \psi - \frac34 A^\xi\gamma_\xi\gamma_5\psi - m \psi &= 0 \label{eqno:AxDir} \end{empheq}\end{subequations}
The axial pseudo-vector is algebraically defined by~(\ref{eqno:AxSpin}), according to which it corresponds to the chiral current. It can be eliminated from the field equations, giving for example in~(\ref{eqno:AxDir}) a cubic interaction term \[+ \frac38 \left( {\bar\psi} \gamma^\xi\gamma_5\psi \right) \gamma_\xi\gamma_5\psi \] This being said, bringing back the dimensional constants in the equations, one notices that the right-hand side of~(\ref{eqno:LCRicAx}) has the gravitational constant as a factor~\cite{GaugeGrav}. Hence the cubic term tends to be very weak in standard matter. In~(\ref{eqno:LCRicAx}), the (axial) torsion contributes to both the Einstein curvature term and the (matter) energy-momentum term.

\subsubsection*{The space of fields}
The frame bundle of $V$ depends on the metric, orientation and time-orientation structures on ${\mathcal E}$; the connection is then an extra structure on the frame bundle. Write $\pi:{\SOp(\aux)}\to{\mathcal E}$ for the principal fibration. Vectors tangent to the fibres are called \emph{vertical} and differential forms which have a trivial contraction with all vertical vectors are called \emph{horizontal}. The frame bundle has the structure of an $\mathfrak{L}$-principal bundle on ${\mathcal E}$ provided with a \emph{solder form} $\alpha$ : a nondegenerate $\mathfrak{L}$-equivariant horizontal 1-form with values in $\Mink$ (more detail in Appendix~\ref{annFrameB}). The solder form establishes an isomorphism between the associated vector bundle of fibre $\Mink$ and $T{\mathcal E}$, or between the corresponding frame bundles. Hence choosing an isomorphism class for $V$ amounts to selecting a frame bundle, and $\alpha$ plays the role of the tetrad $e$. The Lorentz gauge symmetry on $e$ is geometrically realised as the principal action of the Lorentz group on the frame bundle. A metric connection on $V$ corresponds to a nondegenerate $\lor$-equivariant $\lor$-valued $1$-form $\omega$ on ${\SOp(\aux)}$ which is normalized with respect to the action of the Lorentz algebra $\lor$. Normalisation means that, writing ${\bar{\h}} \in \Gamma(T{\SOp(\aux)})$ for the fundamental vector field representing ${\mathfrak h}\in\lor$, the following holds : \[ \forall {\mathfrak h}\in\lor, \quad \omega({\bar{\h}})={\mathfrak h} \] Its kernel is a horizontal distribution which is the corresponding \emph{Ehresmann connection}. The data of a metric connection together with the solder form gives rise to a \emph{Cartan connection} $\omega\oplus\alpha\in\Gamma({\SOp(\aux)},\lor\oplus\Mink)^\mathfrak{L}$ (the superscript $\mathfrak{L}$ means restricting to the $\mathfrak{L}$-equivariant forms). The space of fields can then be described as the set of $\lor\oplus\mathfrak m$-valued Cartan connections on the principal bundle ${\SOp(\aux)}$.
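For orientation, let us also record the familiar local description (standard material; the component notation $e^a{}_\mu$, $\omega^i{}_\mu$ is used only in this remark) : if $s$ is a local section of ${\SOp(\aux)}$ over an open set $\mathcal U\subset{\mathcal E}$ with coordinates $x^\mu$, then
\[ s^*\alpha^a = e^a{}_\mu \,\d x^\mu, \qquad s^*\omega^i = \omega^i{}_\mu \,\d x^\mu \]
recover the tetrad components and the local connection coefficients, and a change of local section acts on them by the usual Lorentz gauge transformations.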
\subsubsection*{Lifting the Lagrangian}
We want to express the action~(\ref{SEWC}) lifted to ${\SOp(\aux)}$ as a function of the Cartan connection. As the signature is non-Riemannian, the structure group $\mathfrak{L} = \SOpL$ is noncompact, hence so is the bundle space. We shall consider the action as a formal integral and as a motivation to derive variational equations for local variations, and forget about any domain of integration. As a remark, note that a reduction of the structure group to the maximal compact subgroup $\SO_3$ always \emph{exists} but would involve extra physical data (although topologically trivial): a nowhere vanishing timelike vector field (called a \emph{field of observers} in~\cite{ObsSp}). The curvature 2-form associated with the connection is expressed as \[ \Omega = \d\omega + \frac12\wb\omega\omega \] It takes values in the Lie algebra $\lor$ but we will allow ourselves to consider the associated $\End(\Mink)$-valued 2-form without changing the notation. We write $\pi^*$ for the pullback to ${\SOp(\aux)}$ of any tensorial-valued differential form, which is then identified with a (horizontal) differential form of the same degree with values in a \emph{constant} trivial bundle and which is equivariant. In order to relate the curvature $2$-form to the curvature tensor, we introduce the components $\rho^b_{i,d}$ of the action $\lor\to \End(\mathbb{R}^{1,3})$ so that \[({\mathfrak h}_i\cdot x)^b = \rho^b_{i,d}x^d\] for $x\in\mathbb{R}^{1,3}$. They satisfy the antisymmetry relation \[\rho^b_{i,d}\eta^{dc} + \rho^c_{i,d}\eta^{db} = 0\] Then the curvature tensor $\Riem$ is defined so that \[(\pi^*\widetilde\Riem)^a_c = \Omega^i \rho^a_{i,c} \] as explained in Appendix~\ref{annFrameB}, using implicitly the identification \[\pi^*\End(TM)\simeq \Rpq\otimes\Rpq^*\times{\SOp(\aux)}\] We equip the Minkowski vector space $\Mink$ with a spacetime orientation in addition to its metric structure, so that a duality operator $\ExT^k\Mink\simeq \ExT^{4-k} \Mink^*$ is defined (as in Appendix~\ref{anndual}). We can then lift the EC Lagrangian form~(\ref{SEWC}): \begin{equation} \pi^* \left( \widetilde{\Riem^a_c}\eta^{cb}\wedge e^{(2)}_{ab} \right) = \Omega^i \rho^a_{i,c} \eta^{cb}\wedge \alpha^{(2)}_{ab} \end{equation} which gives a horizontal (scalar-valued) $4$-form on the frame bundle. It is not yet a top form on the bundle space; to turn it into an $\mathfrak{L}$-invariant top form it has to be wedge-multiplied with a (right-)invariant volume form on $\mathfrak{L}$. Such a volume form is not unique, but we can specify it in a consistent way for all fibres using the Lie algebra action, as we explain now. As a $\lor$-valued $1$-form, $\omega$ establishes a map $\lor^*\to \Omega^1_{vert}({\SOp(\aux)})^\mathfrak{L}$ which extends to a graded algebra morphism \begin{equation} \omega^* : \ExT^\bullet(\lor^*) \to \Omega_{vert}^\bullet({\SOp(\aux)})^\mathfrak{L} \end{equation} so that specifying a volume element in $\ExT^{6}\lor^*$ gives a vertical $6$-form on ${\SOp(\aux)}$, which we will write $\omega^{(6)}$. Although $\omega^{(6)}$ effectively depends on the connection, detailed computations show that $h \wedge \omega^{(6)}$ does not, for $h$ any specified \emph{horizontal} $4$-form (indeed two normalised connection $1$-forms differ by a horizontal $\lor$-valued $1$-form, so that any variation of $\omega^{(6)}$ carries a horizontal factor; wedging it with a horizontal $4$-form then involves five horizontal $1$-form factors and vanishes, the horizontal distribution being of rank $4$).
We thus define the following Lagrangian form on ${\SOp(\aux)}$: \begin{equation} \mathscr{L} = \Omega^i \rho^a_{i,c} \eta^{cb}\wedge\alpha^{(2)}_{ab}\wedge\omega^{(6)} \end{equation} Even if we do not explicitly specify the volume element of $\lor^*$, it is still possible to discuss coupling constants and relative signs when considering a Lagrangian with matter components as long as the same volume element is used for all terms. These considerations remain however outside the scope of this paper. Using the whole Cartan connection form we can in a similar way consider the morphism \begin{equation} (\omega \oplus \alpha)^* : \ExT^\bullet((\lor \oplus \Mink)^*) \to \Omega^\bullet({\SOp(\aux)})^\mathfrak{L} \end{equation} and providing $\lor$ with an arbitrary volume element we use the vector-forms duality (as described in Appendix~\ref{anndual}) on \emph{$\lor\ltimes\Mink$} so that we can write \begin{equation}\label{FrameLag} \mathscr{L} = \Omega^i \rho^a_{i,c} \eta^{cb}\wedge (\omega\oplus\alpha)^{(8)}_{ab} \end{equation} We can now formulate the variational problem on ${\SOp(\aux)}$: the field is a Cartan connection $1$-form $\omega\oplus\alpha$, in other words a $1$-form with values in $\lor\oplus\Mink$ (more precisely, in the Lie algebra $\lor\ltimes\Mink$) which is nondegenerate (i.e. of constant rank $10$), normalised on the principal action of $\lor$ and equivariant. It has to be an extremal point of the locally defined action \begin{equation}\label{SEWCFrame}\boxed{ S_{EC}[\alpha,\omega] = \int \Omega^i \rho^a_{i,c} \eta^{cb}\wedge (\omega\oplus\alpha)^{(8)}_{ab} }\end{equation} for compactly-supported variations. The integral is taken with respect to the orientation given by $\alpha^{(4)}\wedge\omega^{(6)}$. The constraint that $\omega\oplus\alpha$ be a Cartan connection $1$-form can be written in the following way, writing ${\bar{\h}}$ for the (left-invariant) vector field representing ${\mathfrak h}\in\lor$ and $R_g$ for the action of $g\in\mathfrak{L}$ : \begin{empheq}[left=\empheqlbrace]{gather*} \alpha^{(4)}\wedge\omega^{(6)}\text{ is nowhere vanishing} \\ i_{\bar{\h}}(\omega\oplus\alpha) = {\mathfrak h}\oplus 0 \\ R_g^*(\omega\oplus\alpha) + g \cdot(\omega\oplus\alpha) = 0 \end{empheq} Now as the group $\mathfrak{L}$ is connected, $\lor$-equivariance is equivalent to $\mathfrak{L}$-equivariance, so that the constraints can be written as local equations as follows : \begin{subequations}\label{eqno:omalcons} \begin{empheq}[left=\empheqlbrace]{gather} \alpha^{(4)}\wedge\omega^{(6)}\text{ is nowhere vanishing} \label{eqno:nondegen}\\ i_{\bar{\h}}(\omega\oplus\alpha) = {\mathfrak h}\oplus 0 \label{normalisation}\\ \mathcal L_{\bar{\h}} (\omega\oplus\alpha) + {\mathfrak h}\cdot(\omega\oplus\alpha) = 0 \label{equivconsnorm} \end{empheq} \end{subequations} We want to derive the variational Euler-Lagrange equation for~(\ref{SEWCFrame}) under the constraints~(\ref{eqno:nondegen}-\ref{equivconsnorm}) but as the equivariance constraint~(\ref{equivconsnorm}) is non-holonomic, and actually non-local, the usual derivation does not directly apply. Moreover, the action~(\ref{SEWCFrame}) cannot be used as such, as the domain is noncompact (and requiring equivariance along the noncompact fibres definitely prevents any nontrivial field variation from having compact support, or even from decaying at infinity). The central question of this paper is the derivation and the treatment of the variational equations under such constraints.
This will involve translating the constraints into Lagrange multipliers terms (as presented in Section~\ref{LagMult}) in the Lagrangian and is explained in the next section. \subsection{Lagrange multipliers} Standard holonomic Lagrange multiplier terms of the form \begin{equation*} P_af^a(z,y)\mathrm{vol} \end{equation*} are added to Lagrangians in order to impose the equations \begin{equation*} f^a(z,y)=0 \end{equation*} as Euler-Lagrange equations corresponding to the variations of $P_a$. In the same way, for $n$-forms $(F^a)$ of a more general form -- we will consider elements of $\Omega^n(Q)$ -- one can consider the Euler-Lagrange term associated to \begin{equation*} \mathscr{L}^{cons} = P_a F^a(z,y) \end{equation*} with $(P_a)_{\lel 1 a m}$ being additional free scalar-valued fields (or one field with value in a suitable vector bundle). We write $p_a$ for the corresponding added coordinate in the configuration space. One performs the Legendre transformation to obtain the corresponding Poincaré-Cartan form : \begin{equation} \Theta^{cons} = p_aF^a \end{equation} as $F^a$ is invariant under the Legendre transformation (see the last comment of Appendix~\ref{annLegTrans}). The corresponding premultisymplectic form is \begin{equation} \d\Theta^{cons} = \d p_a \wedge F^a + p_a\d F^a \end{equation} The contribution to the Euler-Lagrange forms is \begin{align} \EL^{cons,a} &= \partial_{p_a}\lrcorner\, \d\Theta^{cons} = F^a \label{eqno:ELconsa}\\ \EL^{cons}_{A} &= \partial_{y^A}\lrcorner\, \d\Theta^{cons} = -\d p_a \wedge i_{\partial_A} F^a + p_a i_{\partial_A}\d F^a \end{align} hence for a field $\phi : \mathcal P \to Q\times \mathbb{R}^m$ the Euler-Lagrange equations associated to the whole Lagrangian $\mathscr{L}_0 + \mathscr{L}_{cons}$ are \begin{empheq}[left=\empheqlbrace]{align} \phi^*F^a &= 0\\ \phi^*\EL^0_A &= \phi^*\left( \d p_a \wedge i_{\partial_A} F^a - p_a i_{\partial_A}\d F^a \right) \label{ELp} \end{empheq} The first equation gives the constraints one intends to impose on the fields. In the second equation $p_a$ can compensate for some nonzero components of $\EL^0_A$. Indeed in the case $F^a=f^a\mathrm{vol}$, the right-hand term is $-P_a \phi^*\left( \partial_{y^A}f^a\mathrm{vol} \right)$. In the case the $f^a$ have independent differentials (in the vertical $y^A$ directions), $P_a$ parametrise non-trivial components of $\phi^*\EL^0_A$ along the $\d_y f^a$ that are allowed by the variational principle \emph{restricted to sections satisfying the constraint condition} $\phi^*f^a = 0$. We dealt with $n$-forms so far, we now explain how to use Lagrange multipliers with lower degree forms. For any form $f\in\Omega^k(Q)$ with $k \leqslant n$, assuming we are given a local frame of $\mathcal P$, we can consider the forms \begin{equation} F_I = f \wedge \mathrm{vol}^{(k)}_I \end{equation} for $I$ a multi-index of size $k$ parametrising the basis of $(n-k)$-forms $\mathrm{vol}^{(k)}_I$ (as described in Section~\ref{anndual}). We consider free multiplier fields $P^I$ which we gather in a term \[ P^I F_I = f\wedge P^I \mathrm{vol}^{(k)}_I = f \wedge P \] with $P\in \Omega^{n-k}(\mathcal P)$. The term encodes the constraints \begin{equation*} (\phi^*f) \wedge \mathrm{vol}^{(k)}_I = 0 \end{equation*} hence \begin{equation*} (\phi^*f) = 0 \end{equation*} which is very similar to the mechanism we used in our application with the term \eqref{multvirg}. 
The difference being that we only required \emph{specific components} of $\d\lambda+\fwb\lambda\lambda$ to vanish, by selecting specific values for the multi-index $I$. Such constraints can be of non-holonomic nature : consider $F_i = \d f \wedge \mathrm{vol}_i$ for $f\in\Omega^0(Q)$. Couple them to Lagrange multipliers $p^i$ to obtain a term \[ p^i \d f \wedge \mathrm{vol}_i = \d f \wedge p \] with $p\in\ExT^{n-1}(T^*\mathcal P)$. Considering the Lagrangian $\mathscr{L}_0 + \d f \wedge p$ we obtain the following Euler-Lagrange equations on a field $(\phi,P)$ with variations $X\in j\phi^*(T\mathcal{J}^1(Q))$ \begin{empheq}[left=\empheqlbrace]{align} \phi^*\d f &= 0\\ j\phi^*(\EL_X^0) - \phi^*(\mathcal L_X(f))\wedge \d P &= 0 \end{empheq} which is to compare with the equations associated with a single holonomic Lagrange multiplier $\pi f(z,y) \mathrm{vol}$ : \begin{empheq}[left=\empheqlbrace]{align} \phi^*f &= 0\\ j\phi^*(\EL_X^0) + \Pi \phi^*(\mathcal L_X(f)) \mathrm{vol} &=0 \end{empheq} The constraint enforced by the term $\d f\wedge p$ is \begin{equation} \d\phi^*f = 0 \end{equation} or in another words \emph{$\phi^*f$ has to be constant}. The difference with the holonomic constraint is that \emph{we do not specify the constant}. Indeed we only require $\phi$ to be tangent to the leaves of the foliation defined by $\d f$, in other words the level hypersurfaces of $f$. Hence the leaf is allowed to change between the fields, only this is a \emph{non-local} variation. \subsection{Exact Euler-Lagrange terms} We now explain how using specific non-local variations of the dynamic fields it is possible to gather all Lagrange multiplier contributions in an exact term. Then by performing an integration it is possible to obtain a non-local Euler-Lagrange equation which has no multiplier contribution. Let us start with a simple example. We consider a real line bundle $Q = \mathbb{R}\times \mathcal P \to \mathcal P$ with a fibre coordinate $y$ and provided with a Lagrangian $\mathscr{L}_0$. We enforce the constraint $\d y = 0$ by adding a term $p\wedge \d y$ to the Lagrangian with $p$ a free variable in $\ExT^{n-1} T^*\mathcal P$ : \[ \mathscr{L} = \mathscr{L}_0 + p\wedge\d y \] The equation~(\ref{ELp}) on a field $\phi : \mathcal P\to\mathbb{R} \times \ExT^{n-1} T^*\mathcal P$ associated to the variation field $\partial_y$ becomes \begin{equation} \phi^*\EL^0_y = \d P \end{equation} asserting that $\phi^*\EL^0_y$ is an exact form of primitive $P$. In a similar fashion enforcing a constraint $\d y \wedge \mathrm{vol}_i = 0$ with a term $p \d y\wedge \mathrm{vol}_i$, the Euler-Lagrange term would be \enquote{exact in the $i$ direction}, in the sense that $\phi^*\EL^0 = \partial_i P \mathrm{vol}$. In the case $\mathcal P$ is a compact manifold, the variation of the action $\int_\mathcal P \phi^*\mathscr{L}$ under a global translation of $\phi$ by a \emph{constant} value is $\int_\mathcal P \phi^*\EL^0_y$ so that asserting that $\phi^*\EL^0_y$ is exact is equivalent to asserting that the action has a trivial variation under this non-local field variation. In order to formalise this observation let $F^a$ a family of homogeneous forms of degree $\Omega^{k_a}(Q)$ and consider Lagrange multipliers $p_a\in \ExT^{n-k_a} T^*\mathcal P$. We therefore have a Lagrangian \[ \mathscr{L} = \mathscr{L}_0 + p_a\wedge F^a \] Now let $X$ be a vector field on $Q$ (hence which does not act on the multipliers) such that $\mathcal L_X$ preserves the ideal generated by the $F^a$. 
Beware that this ideal of constraints is not necessarily generated in degree one, hence is not necessarily generated by the annihilator of a plane distribution. Consider the Euler-Lagrange term corresponding to $X$ : \begin{equation}\begin{aligned} \EL_X &= i_{jX} \d(\Theta^0 + p_a \wedge F^a) = \EL^0_X -\d(p_a \wedge i_X F^a) + \mathcal L_X (p_a \wedge F^a)\\ &= \EL^0_X -\d(p_a \wedge i_X F^a) + p_a \wedge \mathcal L_X (F^a) \end{aligned}\end{equation} so that, under the constraint Euler-Lagrange equations~(\ref{eqno:ELconsa}), $\phi^*\mathcal L_X F^a \equiv 0$ and the Euler-Lagrange equation corresponding to $X$ is equivalent to \begin{equation} \phi^*\EL^0_X \equiv \d(P_a \wedge i_X F^a) \mod (F^a) \end{equation} in a process very reminiscent of Noether's conservation theorem. The degrees of freedom $p_a$ then take the role of coefficients parametrising a differential primitive of $\EL^0_X$. Suppose that it is possible to find a set of such vector \emph{fields} $(X^I)$ preserving the constraints (which is a local property of the first order prolongations of $X^I$) which span the vertical directions of $Q$. Then we can express the whole Euler-Lagrange equation system as \begin{empheq}[left=\empheqlbrace]{align} \phi^*F^a &= 0\\ \phi^*\EL^0_{X^I} &\equiv \d(P_a \wedge F_I^a) \end{empheq} in which we wrote $F_I^a := i_{X^I} F^a$. Note a possible gauge freedom as $P_a \wedge F_I^a$ is only involved through its exterior differential, so that variations of $P_a$ inducing closed variations of all the $P_a \wedge F^a_I$ are symmetries of the equations. Now if the source space $\mathcal P$ is compact, it is possible to integrate over $\mathcal P$ to obtain non-local equations \begin{equation} \int_\mathcal P \phi^*\EL^0_{X^I} = 0 \end{equation} When $\mathcal P$ is not compact, one may try to find lower-dimensional compact submanifolds and to build from the Euler-Lagrange forms lower-degree forms which are exact. This is our approach in the next section. We will integrate along $6$-dimensional orbits, and for that purpose we will need to factor out a \emph{closed} $4$-form from both $\phi^*\EL^0_X$ and the exact multiplier term.

\subsection{The multisymplectic formalism}
Lagrangian Field Theory deals with critical points of actions that are defined by the integral of a local Lagrangian which is, at each point of the integration space, a local function of the physical fields involved. We will discuss \emph{first order theories}. Let us consider a fibre bundle $Q\xrightarrow{\pi}M$ representing a configuration space fibred over an $n$-dimensional manifold $M$ (which will be assumed oriented to avoid density considerations). The \emph{first order jet bundle} (or $1$-jet bundle for short) is written \[\mathcal{J}^1(\pi) \xrightarrow{\pi^1} Q \xrightarrow{\pi} M\] or, without mentioning the fibration, $\mathcal{J}^1(Q)$. Over every open subset $\mathcal U\subset M$, to each section of $Q$ can be associated its \emph{first order prolongation} ($1$-jet for short) : \[ \phi\in\Gamma(\mathcal U,Q|_{\mathcal U}) \mapsto j\phi \in\Gamma(\mathcal U,\mathcal{J}^1(Q)|_{\mathcal U}) \] A (first order) Lagrangian is a section $\mathscr{L}$ of $(\pi\circ\pi^1)^*(\ExT^n T^*M)$. The associated action is \[ S : \phi \mapsto S[\phi] = \int \mathscr{L}(\phi,\d\phi) = \int j\phi^*\mathscr{L} \] The integration domain is intentionally not written : the integral may be ill-defined over non-compact domains, but this does not pose a problem in (classical) field theory.
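As a simple illustration of this setup (a standard example, not specific to the constructions of this paper), one may keep in mind a real scalar field on an oriented (pseudo-)Riemannian manifold $(M,g)$ : taking $Q = M\times\mathbb{R}$ and writing $\mathrm{vol}$ for the volume form of $(M,g)$,
\[ j\phi^*\mathscr{L} = \left( \frac12 g^{ij}\,\partial_i\phi\,\partial_j\phi - V(\phi) \right) \mathrm{vol} \]
so that $S[\phi]$ is the usual kinetic-plus-potential action, the Lagrangian depending at each point only on the point of $M$, the value of the field and its first derivatives.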
What defines the equation of motion is the \emph{variation} of the action with respect to \emph{compactly-supported} variations which is well defined over each compact subset of $M$. The usual derivation of the Euler-Lagrange equations (in coordinates) proceeds by taking arbitrary variations of the field $\phi$ and performing an integration by parts in order to obtain an integrand linear in the 0-order variation of $\phi$ which is free. In the multisymplectic formalism this computation is carried in a more geometrical language. It involves the multisymplectic \emph{Legendre transformation}. \subsection{The Legendre transformation}\label{ssecno:LegTrans} The Legendre transformation in covariant field theory is a mapping from the jet bundle to the dual jet bundle. We will here again restrict our considerations to first order Lagrangians. In the case of mechanics (1-dimension) the transform takes the form $(t,q,\dot q)\mapsto -H(t,q,\dot q)\d t + p_i(t,q,\dot q)\d q^i$. The usual Hamiltonian then takes the form of an \emph{hamiltonian section} \[h : \begin{cases} \mathbb{R}\times T^*Q\simeq T^*(\mathbb{R}\times Q)/\mathbb{R}\d t &\to T^*(\mathbb{R}\times Q)\\ \qquad\qquad (t,q^j,p_i\d q^i) &\mapsto (t,q^j,-H(t,q,p)\d t+ p_i\d q^i) \end{cases} \] We get back to the fibre bundle $Q\xrightarrow{\pi}M$. In the so-called \emph{de Donder-Weyl} theory, the momentum space is the \emph{affine dual jet bundle} with values in $\ExT^n T^*M$ \cite{Multisympl} : \begin{equation}\label{eqno:DualAff} \ExT^n_1 T^*Q = \{a\in \ExT^n T^*Q \,|\ (VQ\wedge VQ)\lrcorner\, a = 0 \} \end{equation} where $VQ$ is the vertical tangent bundle of the fibration $Q\to M$. Let us record one useful property here : for any two fibre bundles $Q_1,Q_2\to M$, there is a natural isomorphism \[ \ExT^n_1 T^*(Q_1\times_M Q_2) \simeq \ExT^n_1 T^*Q_1 \times_M \ExT^n_1 T^*Q_2 \] The Legendre transform is realized by the Poincaré-Cartan form $\Theta$, a section of $(\pi^1)^*\ExT^n_1 T^*Q$, which be describe now. It can be seen as a \enquote{corrected} extension of the Lagrangian $\mathscr{L}$ from sections of $\pi$ to arbitrary sections of $\mathcal{J}^1 (Q)$. It is defined by the following properties : \begin{align} j\phi^*\Theta &= j\phi^*\mathscr{L}\qquad \forall\phi\in\Gamma(M,Q) \label{PCcontact}\\ j\phi^*\left( i_X\d \Theta \right)&= 0\qquad \forall\phi\in\Gamma(M,Q),\forall X\in V(\mathcal{J}^1(Q)\to Q)\label{PC0jet} \end{align} so that it defines the same Lagrangian form as $\mathscr{L}$ while its first variation is (locally) trivial in the first-order directions : it depends only on the variation of the \emph{0-jet}. From this perspective, the Legendre transformation is the geometrical formulation of the integration by parts producing the Euler-Lagrange term (the variational derivative) coupled to the 0-jet variation in the usual coordinate-explicit derivation of the Euler-Lagrange equations. In order to obtain an explicit expression, one considers the \textit{contact forms} which generate the differential forms of $\mathcal{J}^1(Q)$ vanishing under the pullback by any first order prolongation \cite{ExSysEL,GIMMSY1}. They are described below. As we want explicit formulas, we will use (local) coordinates. Let $(x^i)$ be coordinates on $M$, let $(y^A)$ be the vertical coordinates on $Q$ with respect to a local trivialisation and let $(v^A_i)$ be the corresponding $1$st-order coordinates, so that $v^A_i(j\phi) = \partial_i \phi^A$. 
A basis for the contact $1$-forms is given by \[\theta^A = \d y^A -v^A_i\d x^i\] Hence the contact elements of $\ExT^n_1 T^*Q$ are of the form $\theta^A \wedge p_A$ for $p_A\in\pi^*\ExT^{n-1}T^*M$ (we use the Einstein summation convention). To the $x^i$ coordinates is associated a local volume form $\mathrm{vol}=\d x^1\wedge \cdots \wedge \d x^n$ so that $\mathscr{L}$ decomposes as $L(x,y,v)\mathrm{vol}$. We write \[\Theta = \mathscr{L}+\theta^A \wedge p_A\] The components $p_A$ are then uniquely determined (the condition~\eqref{eqno:DualAff} fixing any further contact term) : write $\mathrm{vol}_i$ for the $(n-1)$-form dual to $\d x^i$ and decompose $p_A = p_A^i\mathrm{vol}_i$ :
\begin{equation*}\begin{aligned} \partial_{v^A_i}\lrcorner\,\d(\mathscr{L} + \theta^B\wedge p_B) &= (\partial_{v^A_i}L)\mathrm{vol} + (\partial_{v^A_i}\lrcorner\,\d\theta^B) \wedge p_B - (\partial_{v^A_i}\lrcorner\,\theta^B) \wedge \d p_B\\ &\qquad + \d\theta^B \wedge \partial_{v^A_i}\lrcorner\, p_B - \theta^B \wedge \partial_{v^A_i}\lrcorner\, \d p_B\\ &= \partial_{v^A_i}L\,\mathrm{vol} + (\partial_{v^A_i}\lrcorner\,\d\theta^B) \wedge p_B - \theta^B \wedge \partial_{v^A_i}\lrcorner\, \d p_B\\ &= \partial_{v^A_i}L\,\mathrm{vol} + (\partial_{v^A_i}\lrcorner\,(-\d v^B_j\wedge\d x^j)) \wedge p_B - \theta^B \wedge \partial_{v^A_i}\lrcorner\, \d p_B \\ &= \partial_{v^A_i}L\,\mathrm{vol} - \d x^i \wedge p_A - \theta^B \wedge \partial_{v^A_i}\lrcorner\, \d p_B\\ &= (\partial_{v^A_i}L - p^i_A)\mathrm{vol} - \theta^B \wedge \partial_{v^A_i}\lrcorner\, \d p_B \end{aligned}\end{equation*}
The $p^i_A$ are then defined by \[ (\partial_{v^A_i}L - p^i_A)\mathrm{vol} - \theta^B \wedge \partial_{v^A_i}\lrcorner\, \d p_B \equiv 0 \mod (\theta^B)\] so that \begin{equation} p^i_A = \partial_{v^A_i} L \end{equation} We obtain the usual expression for the Poincaré-Cartan form : \begin{equation}\label{LTform} \Theta = \mathscr{L} + \left( \partial_{v^A_i} L \right) \theta^A\wedge \mathrm{vol}_i \end{equation} The action can equivalently be defined using $\Theta$ : \[ S[\phi] = \int j\phi^*\Theta \] The corresponding \emph{Euler-Lagrange equation} is \begin{equation} \forall X \in \phi^*TQ,\qquad \boxed{ \phi^*(i_{jX}\d\Theta) = 0} \end{equation} with $jX$ the $1$st order prolongation of $X$. The $(n+1)$-form $\d\Theta$ is called a \emph{premultisymplectic form}. We will call the $n$-forms $i_{jX}\d\Theta$ \emph{Euler-Lagrange forms} and write them $\EL_X$ : \begin{equation} \EL_X := i_{jX}\d \Theta \end{equation} If we consider more general Lagrangian forms that are sections of ${\pi^1}^*(\ExT^n T^*Q)$, the Poincaré-Cartan form can still be defined. According to \cite{ExSysEL} (our $\mathscr{L}$ is their $\Lambda$ and their $\Pi$ is our $\d\Theta$; they call \emph{Betounes form} what we call Poincaré-Cartan form), $\theta^A\wedge p_A$ is uniquely defined such that \begin{equation}\label{eqno:PCgen} \d(\mathscr{L} + \theta^B\wedge p_B) = \theta^A\wedge E_A \end{equation} The $E_A$ forms then correspond to the Euler-Lagrange terms. The coefficient forms $p_A$ themselves are then uniquely defined if we require a condition such as $p_A = p^I_A \mathrm{vol}_I$ (for a suitable subset of Lagrangians). In any case the Poincaré-Cartan form itself is well defined. In particular, for forms $\mathscr{L}$ that are sections of $\pi^*(\ExT^n T^*M)$ (that is, which have coefficients independent of the $1$st order coordinates), the Poincaré-Cartan form is $\mathscr{L}$ itself.
Indeed $\d\mathscr{L}$ is a section of $\ExT^{n+1}_1 T^*Q$, which has $\sum_A \theta^A\wedge {\pi^1}^*(\ExT^n T^*Q)$ as pullback bundle to $\mathcal{J}^1(Q)$ (because there are only $n$ horizontal directions and $\theta^A\wedge \mathrm{vol} = \d y^A\wedge \mathrm{vol}$). \subsection{Maurer-Cartan forms} Let $G$ a \emph{simply-connected} Lie group and $\mathfrak g$ its Lie algebra, with underlying space isomorphic to $T_eG$. The invariant vector fields can be identified with the tangent space at the identity in two ways. The \emph{left-invariant} vector fields, which correspond to the \emph{right} action of $\mathfrak g$ and the \emph{right-invariant} vector fields, which correspond to the \emph{left} action of $\mathfrak g$. Writing $L_g$ and $R_g$ for the respective left and right actions of $G$, the following $T_eG$-valued $1$-forms are defined: \begin{subequations}\begin{align} \omega_L(X)_{|g} &=\d R_{g^{-1}} X_{|g}\\ \omega_R(X)_{|g} &=\d L_{g^{-1}} X_{|g} \end{align}\end{subequations} These forms establish isomorphisms $TG\to G\times \mathfrak g$. We call $\omega_L$ the \emph{left Maurer-Cartan form} and $\omega_R$ the \emph{right Maurer-Cartan form} of $G$. They are subject to the following structure equations: \begin{subequations}\label{MCeq} \begin{align} \d\omega_L -\fwb{\omega_L}{\omega_L}&=0\\ \d\omega_R +\fwb{\omega_R}{\omega_R}&=0 \end{align}\end{subequations} with the wedge bracket defined as $\wb{\alpha}{\beta}(X,Y)=2 \left( \wb{\alpha(X)}{\beta(Y)} - \wb{\beta(X)}{\alpha(Y)} \right)$. Writing the inversion $i:G\to G$, the two $1$-forms are related by \begin{equation} i^*\omega_R = -\omega_L \end{equation} By the isomorphisms $\mathfrak g\to \Gamma(TG)$ they establish, these forms can be read as presenting the actions of the Lie algebra of $G$. Indeed Equations~(\ref{MCeq}) imply that the right action of $\mathfrak g$ is a Lie algebra morphisms (this is artificial to the extent that the Lie algebra is often directly defined as the algebra of (left-)invariant vector fields) while the left action of $\mathfrak g$ is a Lie algebra morphism for the opposite bracket $-[\cdot,\cdot]_\mathfrak g$. The form $\omega_L$ is \emph{right-invariant} while $\omega_R$ is \emph{left-invariant}. Now let $X$ be a differentiable manifold on which the Lie group $G$ acts on the right. By differentiation there is a Lie algebra action $\rho : \mathfrak g\to \Gamma(TX)$, which we also write ${\mathfrak h}\in\mathfrak g\mapsto {\bar{\h}}\in \Gamma(TX)$. If $X$ is a \emph{principal homogeneous space}, that is to say the action is both transitive and free, then the corresponding morphism $X\times\mathfrak g\to TX$ is an isomorphism. We write $\omega: TX\to \mathfrak g$ its inverse: $\omega$ is a $\mathfrak g$-valued $1$-form on $X$. 
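As an elementary check of these conventions (using only the definitions above), take $X = G$ acting on itself by right translations: the fundamental vector field of ${\mathfrak h}$ at $g$ is $\d L_g\,{\mathfrak h}$, a left-invariant field, and
\[ \omega_R \left( \d L_g\,{\mathfrak h} \right)_{|g} = \d L_{g^{-1}}\,\d L_g\,{\mathfrak h} = {\mathfrak h} \]
so that in this case the inverse $\omega$ of the action map is precisely the right Maurer-Cartan form $\omega_R$.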
Take two elements ${\mathfrak h}_1,{\mathfrak h}_2$ in $\mathfrak g$ and write the compatibility to the bracket: \begin{equation}\begin{aligned} \omega([{\mathfrak h}_1,{\mathfrak h}_2]) &= {\bar{\h}}_1(\omega({\bar{\h}}_2)) - {\bar{\h}}_2(\omega({\bar{\h}}_1)) - \d\omega({\mathfrak h}_1,{\mathfrak h}_2)\\ &= {\bar{\h}}_1({\mathfrak h}_2) - {\bar{\h}}_2({\mathfrak h}_1) - \d\omega({\mathfrak h}_1,{\mathfrak h}_2)\\ \omega([{\mathfrak h}_1,{\mathfrak h}_2]) &= - \d\omega({\mathfrak h}_1,{\mathfrak h}_2)\end{aligned} \end{equation} usually written under the form \begin{equation} \d\omega({\mathfrak h}_1,{\mathfrak h}_2) + [\omega({\mathfrak h}_1),\omega({\mathfrak h}_2)] =0 \label{MCact} \end{equation} which then holds for all vectors of $X$ as the $({\bar{\h}})_{{\mathfrak h}\in\mathfrak g}$ generate all the vector fields over $\mathcal{C}^\infty(X)$. Hence the \emph{Lie algebra action} data is contained in $\omega$; such a form satisfying the Maurer-Cartan equation (\ref{MCact}) is called a \emph{(right) Maurer-Cartan form (with value in $\mathfrak g$)}. In a similar fashion one can consider left Maurer-Cartan forms, with a different sign in the Maurer-Cartan equation. An immediate question is the following: \emph{what about a non-transitive action ?} Can we find an equivalent to the Maurer-Cartan form? The case we have in mind is that of a principal bundle. The Lie algebra action then only defines a distribution, which integrates to the orbits of the (tentative) group action. The morphism $X\times\mathfrak g\to TX$ is not surjective. We can then ask for a section $\omega : TX\to \mathfrak g$ that is normalised on the ${\bar{\h}}$ : \[\omega({\bar{\h}}) = {\mathfrak h}\] Such a $\mathfrak g$-valued $1$-form is then of maximal rank and its kernel defines a distribution which is an \emph{Ehresmann connection} in the case $X$ is the total space of a principal bundle. But $\omega$ is not sufficient to construct back the Lie algebra action $X\times\mathfrak g\to TX$. A solution would be to have the data of a whole coframe which includes the vector fields representing the action of $\mathfrak g$. This is the situation we consider in the following sections. \subsection{Cartan $1$-forms}\label{annCartanConnForm} We are looking for a frame field on a manifold $X$ such that part of the vectors form an action of $\mathfrak g$ on $X$. We want the bracket compatibility property to be deduced from equation on the dual coframe field which is formulated in terms of the wedge product and the exterior differential, similarly to the Maurer-Cartan Equations \eqref{MCeq}. Let us choose a specific linear representation of $\mathfrak g$ which we will write $\mathbb{R}^n$. Let $\omega\oplus\alpha$ a $\mathfrak g\ltimes\mathbb{R}^n$-valued coframe on $X$; we will also use the notation $\varpi = \omega\oplus\alpha$. 
For ${\mathfrak h}\in\mathfrak g$ and $\xi\in\mathfrak g\ltimes \mathbb{R}^n$ we write \[{\bar{\h}}:=\omega^{-1}({\mathfrak h})\] and \[\bar\xi:=\left( \varpi \right) ^{-1}(\xi)\] Then the following equation \begin{equation}\label{eqno:equivframe} \forall({\mathfrak h},\xi)\in\mathfrak g\times(\mathfrak g\ltimes\mathbb{R}^n),\quad [{\bar{\h}},\bar\xi] = \overline{[{\mathfrak h},\xi]} \end{equation} which implies that ${\mathfrak h}\in\mathfrak g\mapsto{\bar{\h}}\in\Gamma(TX)$ is a Lie algebra action, is equivalent to the existence of (variable) coefficients $\Omega^a_{b,c}, \Omega^i_{b,c}$ such that \begin{equation}\label{eqno:omalcomp}\begin{cases} \d\omega^i + \fwb{\omega}{\omega}^i &= \frac12 \Omega^i_{b,c} \alpha^b\wedge \alpha^c\\ \d\alpha^a + \wb{\omega}{\alpha}^a &= \frac12 \Omega^a_{b,c} \alpha^b\wedge \alpha^c \end{cases}\end{equation} Indeed Equations~\eqref{eqno:omalcomp} are equivalent to \begin{equation}\label{eqno:equivcontract} \forall {\mathfrak h}\in \mathfrak g, \quad i_{{\bar{\h}}} \left( \d\varpi + \fwb\varpi\varpi \right) = 0 \end{equation} hence to \[ \forall ({\mathfrak h},\xi)\in \mathfrak g\times(\mathfrak g\ltimes\mathbb{R}^n), \left( \d\varpi + \fwb\varpi\varpi \right) ({\bar{\h}},\bar\xi) = 0 \] To obtain~\eqref{eqno:equivframe} one simply has to compute explicitly : \begin{equation*}\begin{aligned} \d\varpi({\bar{\h}},\bar\xi) + \fwb\varpi\varpi({\bar{\h}},\bar\xi) &= \mathcal L_{\bar{\h}}(\varpi(\bar\xi)) - \mathcal L_{\bar\xi}(\varpi({\bar{\h}})) - \varpi([{\bar{\h}},\bar\xi]) + [\varpi({\bar{\h}}),\varpi(\bar\xi)]\\ &= \mathcal L_{\bar{\h}}(\xi) - \mathcal L_{\bar\xi}({\mathfrak h}) - \varpi([{\bar{\h}},\bar\xi]) + [{\mathfrak h},\xi]\\ &= 0 - 0 - \varpi([{\bar{\h}},\bar\xi]) + [{\mathfrak h},\xi] \end{aligned}\end{equation*} Let us also mention that \eqref{eqno:omalcomp} is equivalent to the equivariance of $\varpi$ under the action of $\mathfrak g$ : the term vanishing in \eqref{eqno:equivcontract} can be written as \[ i_{{\bar{\h}}} \left( \d\varpi + \fwb\varpi\varpi \right) = (\mathcal L_{\bar{\h}} -\d i_{\bar{\h}})\varpi + 2\fwb{\varpi({\bar{\h}})}{\varpi} = \mathcal L_{\bar{\h}} \varpi -\d {\mathfrak h} + \wb{{\mathfrak h}}{\varpi} = \mathcal L_{\bar{\h}} \varpi - 0 + {\mathfrak h}\cdot\varpi \] The prime example for $1$-forms satisfying \eqref{eqno:omalcomp} is that of \emph{Cartan connection $1$-forms} on frame bundles, defined in Appendix~\ref{annFrameB}. These are coframe fields defined on frame bundles, combining a vertical connection $1$-form and a horizontal solder form. A $\mathfrak g\ltimes\mathbb{R}^n$-valued coframe satisfying \eqref{eqno:omalcomp} thus defines an action of the Lie algebra $\mathfrak g$ on $X$, but as the equation is entirely local it cannot be sufficient to encode the whole structure of a principal bundle with connection. The first step, which is tackled in the next section, is to study the conditions under which the Lie algebra action integrates to a Lie group action.

\subsection{Integrating a Lie algebra action into a group action}\label{annIntAct}
Let $X$ be a connected manifold on which a Lie algebra $\mathfrak g$ of dimension $r$ acts \emph{on the right}. $X$ is called a $\mathfrak g$-manifold~\cite{LieActionInt}. The vector fields representing the Lie algebra are commonly called \emph{fundamental vector fields}. We want to integrate the action into a Lie group action.
In other words, we want to define an action of a connected Lie group $G$ integrating $\mathfrak g$ to which the associated infinitesimal action corresponds with the existent $\mathfrak g$ action. A first obstruction is that the vector fields might not be complete. An immediate example is a open strict subset of $G$. The problem will hence be to \emph{globalise} the Lie algebra action, by which we mean embedding $X$ into a larger manifold on which $G$ acts, such that the Lie algebra actions correspond. The property identified as necessary and sufficient is the \emph{univalence} of the infinitesimal action~\cite{GlobalLie}. It roughly means that the action of an element $g$ of $G$ on a point of $X$ should not depend on the smooth path from $e$ to $g$ in $G$ used to construct it. We now define it in more detail. Consider the product $X \times G$. The Lie algebra $\mathfrak g$ acts \emph{on the left} on $G$ and on the right on $X$. Considering the \emph{opposite left action} on $X$, the Lie algebra left action on $X\times G$ define a Lie subalgebra of $\mathfrak g\hookrightarrow \Gamma(T (X\times G))$ of constant rank $r$. Indeed the $T G$ component is already of rank $r$. Furthermore as a Lie subalgebra it is closed under vector bracket, which means it defines an involutive distribution. Invoking the Frobenius theorem, it integrates into a foliation the leaves of which project to $G$ by local diffeomorphisms (the differential of the projection being the parallelism $\mathfrak g\hookrightarrow T (X\times G) \to T G$). However the projections of the leaves on $G$ do not have to be onto, nor one-to-one . The action is called \emph{univalent} if the projection $G\times X\to G$ is injective on each leaf. If the fundamental vector fields on $X$ are \emph{complete} then from each point $(x,g)\in X\times G$ belonging to a leaf $L$, the flows of the fundamental fields of $X\times G$ define a mapping $\mathfrak g \to L\subset X\times G$ the image of which projects in $G$ to the neighbourhood $\exp(\mathfrak g)g$ of $g$. Hence the image of the projection of $L$ to $G$ is invariant under multiplication by $\exp{\mathfrak g}$ which generates the whole (connected) group $G$, and the projection has to be onto. It turns out to be a covering map~\cite{GlobalLie}. If we choose $G$ to be the simply-connected integration of $\mathfrak g$ then each leaf has to be diffeomorphic to $G$ under the projection $X\times G\to G$. The action is then readily globalisable on $X$ to a group action : the embedding $X\simeq X\times \{e\} \hookrightarrow X \times G$ identifies $X$ as the \emph{leaf space} of $X \times G$. As the \emph{right} action of $G$ on $X\times G$ preserves the fundamental vector fields, it preserves the foliation and factors to the leaf space, identified with $X$. Now even when the fundamental vector fields are not complete, the leaf space construction still produces a $G$-space. Only, there is no guarantee the leaf space is Hausdorff. Univalence is what ensures us that no two different elements of $X \times \{e\}$ can belong to the same leaf, so that $X$ embeds into the leaf space. The leaf space can fail to have a natural manifold structure, but is provided with a smooth structure~\cite{LieActionInt}. Even in the case of a non-univalent action, the leaf space construction produces a $G$-space, whether the $\mathfrak g$-manifold $X$ is complete or not. 
However, when forming the leaf space, the manifold $X$ will reduced to a quotient so that the action becomes univalent (effectively identifying points that are connected by a loop based at identity in $G$). This construction can be applied with any Lie integration of $\mathfrak g$, but the larger the fundamental group is, the smaller (and possibly more singular) the quotient of $X$ will be. The leaf space of $X\times G$ is called the \emph{$G$-completion of $X$} and is proved to satisfy a universal property for $\mathfrak g$-equivariant maps from $X$\cite{LieActionInt}. \subsection{Integration of a Cartan connection into a frame bundle with connection}\label{annCartInt} Assume that $X$ is provided with a nondegenerate $\operatorname{\mathfrak{spin}}_{p,q}\ltimes\Rpq$-valued $1$-form $\varpi = \omega\oplus\alpha$ which satisfies \begin{equation*} \d\varpi + \frac12\wb{\varpi}{\varpi} = \frac12\Omega_{ab}\alpha^a\wedge\alpha^b \end{equation*} As justified in Section~\ref{annCartanConnForm}, it defines a right action of $\operatorname{\mathfrak{spin}}_{p,q}$ on $X$, for which $\varpi$ is equivariant. \subsubsection*{Globalising the Cartan $1$-form} We apply the construction described in Section~\ref{annIntAct} : we consider the integration $\Spin^+_{1,3}$ of $\operatorname{\mathfrak{spin}}_{p,q}$ (simply connected for $p\geqslant3$ and $q\leqslant 1$ or the opposite condition) and we equip $X\times \Spin^+_{p,q}$ with the foliation integrating the distribution spanned by $-{\bar{\h}} \oplus {\mathfrak h}\in\Gamma(T(X\times \Spin^+_{p,q}))$ for ${\mathfrak h}\in\operatorname{\mathfrak{spin}}_{p,q}$ represented by the right-invariant vector fields. We extend the notation $\varpi$ for $\omega\oplus\alpha$ to $X\times \Spin^+_{p,q}$ by $(x,g)\mapsto \Ad_g^{-1} \varpi_{|x}$. It is preserved under holonomy : for ${\mathfrak h}\in\operatorname{\mathfrak{spin}}_{p,q}$, \begin{equation}\begin{aligned} \mathcal L_{-{\bar{\h}}\oplus{\mathfrak h}} \left( \Ad_g^{-1}\varpi \right) = \mathcal L_{\mathfrak h} \left( \Ad_g^{-1} \right) \varpi + \Ad_g^{-1}\mathcal L_{-{\bar{\h}}}\varpi = \Ad_g^{-1}(-\operatorname{ad}_{\mathfrak h})\varpi - \Ad_g^{-1} \operatorname{ad}_{-{\mathfrak h}} \varpi = 0 \end{aligned}\end{equation} It is also manifestly equivariant for the right action of $\Spin^+_{p,q}$ on $X\times \Spin^+_{p,q}$ (acting trivially on the $X$ factor); note that this extension of $\varpi$ is only possible using the group $\Spin^+_{p,q}$ or $\SO^{+}_{p,q}$ (or non-connected extensions thereof) as it requires an action on $\so_{p,q}\ltimes \Rpq$. \emph{From now on, we require that the leaf space's smooth structure is that of a Hausdorff differentiable manifold} (more about this in \cite{CartInt}). This hypothesis allows us to factorise $\varpi$ to the $\Spin^+_{p,q}$-completion ${}_{\Spinp_{1,3}}X$ of $X$ to a $\Spin^+_{p,q}$-equivariant $\operatorname{\mathfrak{spin}}_{p,q}\ltimes\Rpq$-valued $1$-form (as the pullback to $X\times \Spin^+_{p,q}$ of the tangent bundle of the leaf space identifies with the normal bundle of the foliation). As $\varpi$ is required to be nondegenerate on $TX$ which is a sub-bundle of $T(X\times \Spin^+_{p,q})$ supplementary to the tangent bundle of the foliation, $\varpi$ is nondegenerate on the normal bundle of foliation hence its factorisation to ${}_{\Spinp_{1,3}}X$ is nondegenerate. We will write this factorisation $\underline{\varpi}$. 
The space ${}_{\Spinp_{1,3}}X$ is hence provided with an action of $\Spin^+_{p,q}$ and a $\operatorname{\mathfrak{spin}}_{p,q}$-equivariant map from $X\simeq X\times\{e\}$ which maps $\varpi$ to $\underline{\varpi}$. As $\varpi$ and $\underline{\varpi}$ are nondegenerate, the mapping $X\to{}_{\Spinp_{1,3}}X$ is a local diffeomorphism. It satisfies a universal property as a $\Spin^+_{p,q}$-completion and if the action of $\operatorname{\mathfrak{spin}}_{p,q}$ integrates to an action of $\Spin^+_{p,q}$ on $X$ then the mapping $X\to{}_{\Spinp_{1,3}}X$ is a global diffeomorphism: the leaves of $X\times \Spin^+_{p,q}$ \emph{are} the orbits of the action of $\Spin^+_{p,q}$ and the orbit space naturally identifies with $X$. \subsubsection*{Principal bundle structure} We will need to make a further assumption in order to obtain a principal bundle structure : the action of $\Spin^+_{p,q}$ needs to be \emph{proper}. It means that ${}_{\Spinp_{1,3}}X$ has a basis of neighbourhoods $\{\mathcal U \}$ that are \emph{small}, that is~\cite{slice} \begin{equation} \forall x\in\mathcal U, \exists V_x\in\mathcal V(x), \{ g\in \Spin^+_{p,q} | g\mathcal V_x \cap \mathcal U \neq \emptyset \}\text{ is relatively compact in }\Spin^+_{p,q} \end{equation} writing $\mathcal V(x)$ for the neighbourhoods of x. Note that in Euclidean signature the group $\Spin^+_p$ is compact and this property is trivially satisfied. Properness ensures the existence of \emph{slices} for the action of $\Spin^+_{p,q}$: a \emph{slice at $x$} is a $\Stab_{\Spin^+_{p,q}}(x)$-stable subset $S$ of ${}_{\Spinp_{1,3}}X$ which contains $x$ such that $\Spin^+_{p,q}\cdot S$ is open, $S$ is closed in $\Spin^+_{p,q}\cdot S$ and we have $g\cdot S\cap S = \emptyset$ for $g\in \Spin^+_{p,q}\setminus \Stab(x)$\cite{GSpaces}. Another equivalent definition is the existence of an equivariant map $\Spin^+_{p,q}\cdot S \to \Spin^+_{p,q} / \Stab(x)$ such that $S$ is the inverse image of the class $[\Stab(x)]$\cite{slice}. It is proved in~\cite{slice} that a proper Lie group action admits slices at each point. The \emph{type} of an orbit is defined as its isomorphism class as a transitive $\Spin^+_{p,q}$-space. It is equivalent to consider the conjugacy class of its isotropy subgroups. The existence of slices implies that the orbits of points in a neighbourhood of $x$ are of larger type than the orbit of $x$ : all $x$ in ${}_{\Spinp_{1,3}}X$ admit a neighbourhood $\mathcal U_x$ such that for $y$ in $\mathcal U_x$, $\Stab_y$ is conjugated to a subgroup of $\Stab_x$. One then concludes the existence of a \emph{principal orbit type}: a maximal orbit type, of which the union of the orbits is a dense open submanifold of ${}_{\Spinp_{1,3}}X$ \cite{CompactTransGroups} (in which it is stated for a compact group but only uses the existence of a slice and the compactness of isotropy subgroups). Furthermore, $\underline{\varpi}$ identifies the tangent bundle of ${}_{\Spinp_{1,3}}X$ with the trivial bundle of fibre $\operatorname{\mathfrak{spin}}_{p,q}\ltimes \Rpq$ in a $\Spin^+_{p,q}$-equivariant way. In particular, it provides a trivialisation of the normal bundle of the orbits (the quotient of the tangent bundle by its sub-bundle generated by the fundamental vector fields of $\operatorname{\mathfrak{spin}}_{p,q}$). 
The principal orbit type is then characterised by that the fact that its isotropy subgroups act trivially on the slices generated by the flow of the vectors corresponding to $\Rpq$ starting from $x$ (this slice is called \emph{linear} as the action of $\Stab_x$ on it is diffeomorphic to its linear action on a neighbourhood of the origin). In particular, the isotropy subgroup at $x$ has a trivial action on $\Rpq\subset T_x \left( {}_{\Spinp_{1,3}}X \right)$. There are only two possibilities : either $\Stab_x$ is trivial and the principal orbit type corresponds to the free action of $\Spin^+_{p,q}$, or $\Stab_x$ is the $\{\pm 1\}$ subgroup and the principal orbit type corresponds to $\SO^{+}_{p,q}$. Note that the union of the orbits of principal type does not have to be the whole manifold. Consider for example a \enquote{higher Moebius strip} of the type $\SO^{+}_{p,q}\times \Rpq / (\mathbb{Z}/2\mathbb{Z})$ with $\mathbb{Z}/2\mathbb{Z}$ acting by parity simultaneously on both factors. The trivial product $\SO^{+}_{p,q}\times \Rpq$ has an action of $\so_{p,q}$ and is provided with a tautological equivariant $\Rpq$-valued $1$-form $\alpha$ orthogonal to the $\SO^{+}_{p,q}$ factor, which is given by the embedding \[\SO^{+}_{p,q}\hookrightarrow \operatorname{Hom}(\Rpq,\Rpq) \simeq \Hom(T_x\Rpq,\Rpq)\] for all $x\in\Rpq$. The form $\alpha$ and the action of $\so_{p,q}$ are preserved by the action of $\mathbb{Z}/2\mathbb{Z}$ hence we have a $\so_{p,q}\ltimes\Rpq$-valued $1$-form on the quotient, satisfying Equation~(\ref{eqno:omalcomp}) (the right-hand term is zero : there is no curvature). But as the only point of $\Rpq$ invariant under parity is its origin $0$, the orbit type of $[g,0]$ in the quotient is $\operatorname{PSO}^\uparrow_{p,q}=\SO^{+}_{p,q}/\{\pm1\}$ while each other orbit is of type $\SO^{+}_{p,q}$. We conclude that over an open dense submanifold of ${}_{\Spinp_{1,3}}X$, either $\SO^{+}_{p,q}$ or $\Spin^+_{p,q}$ acts freely and properly. This is enough to ensure that the orbit space has a differentiable manifold structure and the fibration is a locally trivial principal bundle (with local parametrisations given by the linear slices)~\cite{slice}. Now, the principal bundle is provided with a Cartan connection $1$-form $\underline{\varpi}$ which identifies it with a frame bundle or a spin bundle over the orbit space (as described in Section \ref{annFrameB}). Suppose $X$ is provided with a field $\psi$ with value in a $\so_{p,q}$-module $\Sigma$ and which satisfies the equivariance condition~(\ref{eqno:equivFrameB}) \[\d \psi + \varpi \cdot \psi = \varsigma_a\varpi^a\] Then in a similar fashion to $\varpi^a$ the field $\psi$ can be extended to a field $g^{-1}\cdot \psi$ over $X\times \Spin^+_{p,q}$ which is equivariant and factors to ${}_{\Spinp_{1,3}}X$ to a field $\underline\psi$ which satisfies \[\d \underline\psi + \varpi \cdot \underline\psi = \underline\varsigma_a\underline{\varpi}^a\] The section $\underline\psi$ is then $\Spin^+_{p,q}$-equivariant and on the orbit subspace on which ${}_{\Spinp_{1,3}}X$ is a principal bundle, $\underline \psi$ has an associated section of the associated bundle. In particular, if $\Sigma$ is a \emph{spinorial module} of $\so_{p,q}$ (meaning that the action of $\Spin^+_{p,q}$ does not factor to $\SO^{+}_{p,q}$) the orbit types of the points in the support of $\underline\psi$ have to be $\Spin^+_{p,q}$ (as the isotropy subgroup has to act trivially on the module). 
One concludes that if $\psi$ is not identically trivial (and assumed smooth), the principal orbit type is $\Spin^+_{p,q}$ and the union of the principal orbits provides a $\Spin^+_{p,q}$-structure to its orbit (sub)space. \subsubsection*{Weaker orientation structures} As we have $\SO^{+}_{p,q}$-structures, the spacetimes we obtain are necessarily time and space-oriented. If we want to consider spacetimes with a weaker (or no) orientation, we have two options. Call $G$ the group extension of $\SO^{+}_{p,q}$ preserving the structure we want to consider, and $\bar G$ the extension of $\Spin^+_{p,q}$ (which is yet extra data~\cite{PinGroups,8Spin}). To forget the orientation data is straightforward: it is enough to extend the total space ${}_{\Spinp_{1,3}}X$, for example by constructing the associated bundle (for the action of $\Spin^+_{p,q}$ on ${}_{\Spinp_{1,3}}X$) over $G$ (or $\bar G$ for a spin frame bundle). It is equivalent to construct ${}_{\Spinp_{1,3}}X$ by taking the non-connected group $\bar G$ instead of $\Spin^+_{p,q}$. This allows us to have non-connected ${}_{\Spinp_{1,3}}X$, while we initially restricted our attention to connected $X$. To obtain genuinely non-orientable spacetimes, we have to interpret the spacetime ${\mathcal E}\subseteq{}_{\Spinp_{1,3}}X/\Spin^+_{p,q}$ as a space-and-time orientation cover. We are then looking for quotients of ${\mathcal E}$ by time and/or space parity acting on the orientation cover. The group $G$ is a semi-direct product of $\SO^{+}_{p,q}$ with a finite group $K$, which lifts to $\bar G$ as a semi-direct product of $\Spin^+_{p,q}$ with a finite group $\bar K$. To obtain a corresponding frame bundle structure, one needs to extend the free action of $\Spin^+_{p,q}$ (resp. $\SO^{+}_{p,q}$) on ${}_{\Spinp_{1,3}}X$ by a free action of $\bar K$ (resp. $K$) which has suitable intertwining relations with the action of $\Spin^+_{p,q}$ (resp. $\SO^{+}_{p,q}$). For the resulting action to be considered as defining a frame bundle, $\underline{\varpi}$ has to be equivariant under the action of $K$ (resp. $\bar K$) as well. In the case where the spinor field $\psi$ is nonzero, for it to live on the unoriented manifold ${\mathcal E}/K$, $\psi$ has to be equivariant under $\bar K$. In this sense, relativity theories on unoriented spacetimes can be interpreted as theories on oriented spacetimes with extra $P$/$T$/$PT$ symmetry. Note also that the space- and time-orientation cover is the natural framework to formulate non-parity-invariant and non-time reversal-invariant field theories. \subsection{Building the spinor Lagrangian} \subsubsection*{The Dirac Lagrangian} We want to use the same method as in Section \ref{secno:FBdyn} to build a Lagrangian term on generalised frame bundles for \emph{Dirac spinors} in a dynamical spacetime. The suitable frame bundle is not the frame bundle of spacetime ${\SOp({\mathcal E})}$, but the \emph{spin frame bundle} ${\Spinp({\mathcal E})}$ instead. It is a twofold covering of the (direct orthochronous orthonormal) frame bundle which depends on a choice of spin structure, an extra topological structure on the spacetime. As it is a covering of the standard frame bundle, we can lift the construction from the previous section to the spin frame bundle. As stated in Appendix~\ref{annMC}, the principal bundle structure we can obtain from Equations~(\ref{ELMC}) has as structure group either $\mathfrak{L}$ or its twofold cover $\Spin^+_{1,3}$. The presence of a non-zero spinor field requires the group to be $\Spin^+_{1,3}$.
Let $\Sigma$ be an irreducible complex $\Cl_{1,3}$-module, provided with a \emph{spinor metric} : a (possibly indefinite) Hermitian product for which $\mathbb{R}^{1,3}$ has an antisymmetric action (see Appendix~\ref{annspin}). Recall the (dimensionless) Dirac Lagrangian~\cite{WeinbergI,EMTens} : \begin{equation} \frac{1}2{\bar\psi} {\overleftrightarrow{\Dirac}} \psi -m{\bar\psi} \psi \end{equation} in which the spinor contraction is implicit : \[{\bar\psi}_1\psi_2 :={\bar\psi}_{1\alpha}\psi^\alpha_2\] The symmetrized Dirac operator ${\overleftrightarrow{\Dirac}}$ is defined by \[{\bar\psi}_1{\overleftrightarrow{\Dirac}}\psi_2 = {\bar\psi}_1\gamma^\mu\nabla_\mu\psi_2 - (\nabla_\mu {\bar\psi}_1)\gamma^\mu\psi_2 = 2\Re\left( {\bar\psi}_1 \slashed{\nabla} \psi_2 \right)\] Note that the $\alpha$ subscript (resp. index) corresponds to a \emph{complex} basis of $\Sigma$. The absence of an $i$ factor is due to our conventions for Clifford algebra and Lorentzian signature~\cite{PinGroups}. To keep in line with the traditional treatment of spinor fields, we will work with complex indices and make use of the \enquote{holomorphic} and \enquote{anti-holomorphic} directions. \subsubsection*{Lift to the spin frame bundle} On the spin frame bundle, spinors are represented by $\Spin^+_{1,3}$-equivariant $\Sigma$-valued fields so that the configuration space of the spinor field is $\Sigma\times{\Spinp({\mathcal E})}$. We will use $s=(s^\alpha)$ as coordinates in the fibre $\Sigma$ and $({\bar s}_{\alpha})$ for dual coordinates ; the latter can be read as the spinor metric ${\bar s} : \Sigma\to \Sp^*$. We write $\gamma_a\in \End(\Sigma)$ for the action of vectors $e_a\in \mathbb{R}^{1,3}$, use $\sigma_i\in\End(\Sigma)$ for the action of ${\mathfrak h}_i\in\lor\simeq\operatorname{\mathfrak{spin}}_{1,3}$ (resp. $\bar\sigma_i\in\End(\Sp^*)$) and $\psi:{\Spinp({\mathcal E})}\to\Sigma$. More detail on the notation and the action of the $\Spin$ group on Clifford modules can be found in Appendix~\ref{annPinSpin}. Given a system $(z^I)$ of local coordinates on $\mathcal P$, we call $\zeta^\alpha_I$ the associated coordinates on the $1$-jet bundle $\mathcal{J}^1({\Spinp({\mathcal E})},\Sigma)$ : \[ \zeta^\alpha_I \circ \psi = \partial_I\psi^\alpha \] The covariant derivative of the spinor field formulated on the frame bundle takes the form \[\d\psi + \varpi^i\sigma_i\cdot\psi\] The pulled-back Dirac Lagrangian form can be expressed as : \begin{equation} \mathscr{L}_{Dirac} = \left( \frac{1}2 \left( {\bar s} \gamma^a (\zeta_J\d z^J + \lambda^i\sigma_i s) - (\bar\zeta_J\d z^J+ \lambda^i\bar\sigma_i{\bar s}) \gamma^a s \right) \wedge \lambda^{\mathfrak m (3)}_a -m{\bar s} s \lambda^\mathfrak m \right) \wedge \lambda^\l \end{equation} where $\lambda^\mathfrak m$ (resp. $\lambda^\l$) denotes the pullbacks of volume elements of $\mathfrak m$ (resp. $\lor$) by the canonical form $\lambda$ and $\lambda^{\mathfrak m(3)}_a$ is the $3$-form dual to $\lambda^a$ in $\mathfrak m$. Note that in this expression the contribution of the term $\lambda^i\sigma_i$ (resp. $\lambda^i\bar\sigma_i$) actually vanishes due to the $\lambda^{\mathfrak m(3)}_a \wedge \lambda^\lor=\lneuf a$ factor which already selects the horizontal directions from $\zeta_{\alpha,J}\d z^J$ (resp. $\zeta^\alpha_J\d z^J$). \subsubsection*{Dropping the frame bundle structure and Lagrange multipliers} As in the previous section, we consider as source space for the fields a differentiable $10$-manifold $\mathcal P$.
We will consider a total Lagrangian composed of $\mathscr{L}_{Dirac}$, $\mathscr{L}_{EC}$ and Lagrange multiplier terms in order to make the $\Spin^+_{1,3}$ structure emerge dynamically on the space $\mathcal P$. As mentioned earlier, the structure obtained from Equations \eqref{seqno:equivvarpi8} on $\varpi$ induces an action of the Lie algebra $\soL$, which is naturally isomorphic to $\operatorname{\mathfrak{spin}}_{1,3}$. In particular, it is enough to define the equivariance of the spinor fields under $\operatorname{\mathfrak{spin}}_{1,3}$. Indeed, the difference between the usual linear frame bundle and spinor frame bundles only appears at the \enquote{global} level, when there are complete orbits under the group action. Thus we do not have to adapt our notion of generalised frame bundle in order to accommodate spin structures. A spinor field $\psi : \mathcal P\to\Sigma$ will have to satisfy the equivariance condition \begin{equation}\label{eqno:psiequiv} \mathcal L_{\bar{\h}} \psi + {\mathfrak h}\cdot\psi = 0 \end{equation} with $\lor$ acting via $\sigma : \lor\to\End(\Sigma)$, we write ${\mathfrak h}_i\cdot\psi = \sigma_i\psi$. We will formulate the equivariance in a similar way to before : recall the notation \begin{subequations}\begin{align} \d^\lambda\! s &:=\d s + \lambda^i\sigma_i s\\ \d^\lambda\!{\bar s} &:= \d {\bar s} + \lambda^i \bar\sigma_i {\bar s} \end{align}\end{subequations} with the operators $\sigma_i$ being anti-selfadjoint. This notation allows us to write the condition \eqref{eqno:psiequiv} (writing separately $\mathbb{C}$-linear and $\mathbb{C}$-antilinear directions although they correspond to the same degree of freedom) \begin{subequations} \begin{empheq}[left=\empheqlbrace]{align} \psi^* \left( \d^\lambda\! s \wedge \lneuf i \right) &= 0\\ \psi^* \left( \d^\lambda\! {\bar s}\wedge \lneuf i \right) &= 0 \end{empheq} \end{subequations} We consider the following Lagrange multiplier term (using a similar notation $\d^\lambda\! {\bar s}$) \begin{equation*} \frac{i}2 \left(\bar\kappa_\alpha^i \d^\lambda\! s^\alpha - \kappa^{\alpha i}\d^\lambda\! {\bar s}_\alpha \right) \wedge \lneuf i \end{equation*} with $\bar\kappa^i$ conjugate to $\kappa^i$ (so that the constraint term is real), which added to $\mathscr{L}_{Dirac}$ makes the following Lagrangian \begin{multline}\label{LagDircons} \overline{\mathscr{L}_{Dirac}} = \left( \frac{1}2 \left( {\bar s} \gamma^a (\zeta_J\d z^J + \lambda^i\sigma_i s) - (\bar\zeta_J\d z^J+ \lambda^i\bar\sigma_i{\bar s}) \gamma^a s \right) \wedge \lambda^{\mathfrak m (3)}_a -m{\bar s} s \lambda^\mathfrak m \right) \wedge \lambda^\l\\ + \frac i 2 \left( \bar\kappa^i (\lambda^J\partial_J + \lambda^j\sigma_j) s - \kappa^i (\lambda^J\partial_J + \lambda^j\bar\sigma_j) \bar s \right) \wedge \lambda^{(9)}_i \end{multline} defined over \[\left( \underbrace{\Sigma}_{s^\alpha} \oplus \underbrace{\Sigma\otimes\lor}_{\kappa^{\alpha i}} \right) \times \underbrace{\Iso(T\tot,\p)}_{\lambda^A_I} \] \subsection{The Poincaré-Cartan form} We now compute the Poincaré-Cartan form. The Lagrangian~(\ref{LagDircons}) being affine in the first-order jet, the Legendre transformation is straightforward.
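To see in a minimal, purely illustrative mechanical analogue (not part of the construction) why affineness makes the transformation trivial: for a Lagrangian $L = A_i(q)\,\dot q^i - H(q)$ on a one-dimensional time axis, the momenta do not involve the velocities,
\begin{equation*}
p_i = \frac{\partial L}{\partial \dot q^i} = A_i(q),
\end{equation*}
so the image of the Legendre transform is cut out by holonomic constraints and the Poincaré-Cartan form is simply $\theta = A_i(q)\,\d q^i - H(q)\,\d t$, with no inversion of velocities required. The same mechanism is at work below, with $\lambda$, $s$ and $\kappa$ playing the role of $q$.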
The image of the Legendre transform, namely the \emph{momentum space}, is a subspace of : \begin{multline*} \ExT^{10}_1 T^* \left[ \left( \Sigma \oplus \Sigma\otimes\lor \right) \times \Iso(T\tot,\p) \right] \\ \simeq (\Sigma\oplus \Sigma\otimes\lor) \times_\mathcal P \left[ (\Sp^*\oplus \Sp^*\otimes\lor^*)\otimes\ExT^{9} T^*\mathcal P \oplus_\mathcal P \ExT^{10} T^*\mathcal P \right] \times_\mathcal P \ExT^{10}_1 T^* \Iso(T\tot,\p) \end{multline*} We use the Legendre transformation formula~(\ref{LTform}) (see Section~\ref{annLegTrans}) : \begin{align} &\begin{multlined} \overline{\thdir} =\overline{\mathscr{L}_{Dirac}} + \der{v^A_{I,J}}{\overline{\mathscr{L}_{Dirac}}} (\d\lambda^A_I - v^A_{I,L} dz^L) \wedge dz^{(9)}_J \\ \qquad \qquad + \der{\zeta_J^\alpha}{\overline{\mathscr{L}_{Dirac}}} (\d s^\alpha - \zeta_L^\alpha dz^L) \wedge dz^{(9)}_J + \der{\zeta_{\alpha,J}}{\overline{\mathscr{L}_{Dirac}}} (\d{\bar s}_\alpha - \zeta_{\alpha,L} dz^L) \wedge dz^{(9)}_J \end{multlined}\nonumber\\ &\qquad\quad =\frac{1}2 \left( {\bar s} \gamma^a (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^a s \right) \wedge \lambda^{(9)}_a + i \frac 1 2\left( \bar\kappa^i (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \kappa^i \right) \wedge \lambda^{(9)}_i -m{\bar s} s \lambda^{(10)} \end{align} The momentum dual to $\kappa$ vanishes as $\kappa$ only appears in the Lagrangian at order $0$ so that the momenta have trivial component in $(\lor^*\otimes\Sigma)\otimes\ExT^{9} T^*\mathcal P$. We are interested in a model coupling the Dirac spinor with the Einstein-Cartan gravitational fields. We thus consider a Lagrangian which is the sum of the two Lagrangians and is defined over \[ Q = \Iso(T\tot,\p) \times \Sigma \times_\mathcal P \big[ \lp \m\wedge\l \oplus \l\wedge\l \rp \otimes \p^* \oplus (\Sigma\otimes\lor)\big] \] The whole Poincaré-Cartan form decomposes as follows \begin{equation}\label{PoinCarSp} \bar\Theta = \Theta_{EC} + \Theta_{Dirac} + \thcons_{EC} + \thcons_{Dirac} = \overline{\thewc} + \overline{\thdir} \end{equation} with \begin{align} \overline{\thewc} &= \frac12 p^{BC}_A\D\lambda^A\lhuit{BC} \\ \overline{\thdir} &= \frac{1}2 \left( {\bar s} \gamma^a (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^a s \right) \lambda^{(9)}_a + i\frac 1 2\left( \bar\kappa^i (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \kappa^{i} \right) \lambda^{(9)}_i -m{\bar s} s \lambda^{(10)} \end{align} with the line over $\overline{\theta}$ denoting the inclusion of the Lagrange multiplier terms. Here as well, the Poincaré-Cartan form is defined on the configuration space to which we added the Lagrange multipliers (the coefficients only depend on the 0-order jet). In accordance with the results from Section~\ref{VEqGR} the momenta dual to $\lambda^A_I$ are restricted to the subspace $\ExT^8 T^*P\otimes\mathfrak p^* \times_\mathcal P \Iso(T\tot,\p)$ of $\ExT^{10}_1 T^*\mathcal P\otimes\mathfrak p$. For convenience when working with complex spinor indices, we enlarge the component $\Sigma\otimes\ExT^{9} T^*\mathcal P$ to a factor $(\Sigma\oplus \Sp^*)\otimes\ExT^{9} T^*\mathcal P$. We define fibre coordinates on the momentum space using the components of the canonical $10$-form : \begin{equation*} j^{BC}_{A}\D\lambda^A \wedge \lhuit{BC} + {\bar\Phi}^A_{\alpha}\d^\lambda\!
s^\alpha \wedge \lneuf{A} + \Phi^{\alpha A} \d^\lambda\!{\bar s}_{\alpha} \wedge \lneuf{A} - h\lambda^{(10)} \end{equation*} Each coordinate corresponds to a factor of the momentum space, as follows : \begin{equation*} \big( \underbrace{\Iso(T\tot,\p)}_{\lambda^A_I} \oplus \underbrace{\Sigma}_{s^\alpha} \oplus \underbrace{\Sigma\otimes\lor}_{\kappa^{\alpha j}} \big) \times_\mathcal P \underbrace{\ExT^8 T^*P\otimes\mathfrak p^*}_{j^{BC}_A} \times_\mathcal P \big[ \underbrace{ (\Sigma \oplus \Sp^* )\otimes \ExT^{9} T^*\mathcal P }_{\Phi^{\alpha A},{\bar\Phi}_\alpha^A} \oplus_\mathcal P \underbrace{\ExT^{10} T^*\mathcal P}_h \big] \end{equation*} The image of the Legendre transform is a subspace defined by (holonomic) constraints, which take the following form \renewcommand\arraystretch{1} \begin{center} \begin{subequations} \begin{tabularx}{\textwidth}{@{}|X|X|@{}} \hline \begin{equation}\label{eqno:jabc} j^{bc}_a = 0 \end{equation} & \begin{equation}\label{eqno:jabi} j^{bc}_i = 2\rho_{id}^b {\eta}^{dc} \end{equation} \\[-15pt] \midrule \begin{equation} j^{bj}_A = p^{bj}_A \end{equation} & \begin{equation} j^{ij}_A = p^{ij}_A \end{equation} \\[-15pt] \midrule \begin{equation} \Phi^{\alpha a} = -\frac 1 2 (\gamma^a s)^\alpha \end{equation} & \begin{equation} \Phi^{\alpha j} = -\frac i 2 \kappa^{\alpha j} \end{equation} \\[-13pt] \midrule \begin{equation} {\bar\Phi}^a_\alpha = -\frac 1 2 (\bar\gamma^a {\bar s})_\alpha \end{equation} & \begin{equation} {\bar\Phi}^j_\alpha =\frac i 2 \bar\kappa^j_\alpha \end{equation} \\[-13pt] \midrule \begin{equation} h = m{\bar s}_\alpha s^\alpha \end{equation} & \\[-2pt] \bottomrule \end{tabularx} \end{subequations} \end{center} The momentum space can hence be identified as \begin{equation} \big[ \underbrace{\Iso(T\tot,\p)}_{\lambda^A_I} \times \underbrace{\Sigma}_{s^\alpha} \big] \times_\mathcal P \big[ \underbrace{\lp \m\wedge\l \oplus \l\wedge\l \rp \otimes \p^*}_{p^{\nonbc{BC}}_A} \oplus \underbrace{(\Sigma\otimes\lor)}_{\kappa^{\alpha i}} \big] = Q \end{equation} defined by the two constraints~(\ref{eqno:jabc},\ref{eqno:jabi}). We compute the variational equations in the next section. \subsection{\texorpdfstring{Variation of the multiplier $P^{\nonbc{BC}}_A$} {Variation of the multiplier P}} We check that the variation of the Lagrange multipliers yields the expected constraint equations. Recall that we do not allow $BC$ to take the form $bc$ in $\ddp{\nonbc{BC}}A$. We use the notation $(\EL_{EC})^A_{BC}$ for $(\EL_{EC})_{\partial^{BC}_A}$ : \begin{equation}\begin{aligned}\label{eqno:grcons} (\EL_{EC})^A_{\nonbc{BC}} &= \ddp{\nonbc{BC}}{A} \lrcorner\, \d \left( \frac12 p^{EF}_D\D\lambda^D \wedge \lhuit{EF} \right) = \frac12 \left( \ddp{\nonbc{BC}}{A} \lrcorner\, \d p^{EF}_D\right) \D\lambda^D \wedge \lhuit{EF} = \frac12\D\lambda^A \wedge \lhuit{\nonbc{BC}} \end{aligned}\end{equation} We write $\Omega:=\varpi^*\D\lambda$ which has as components \[ \Omega^A = \frac12\Omega^A_{BC} \varpi^B\wedge\varpi^C \] A critical field $(P,\varpi)$ satisfies \begin{equation} \Omega^A_{\nonbc{BC}} = 0 \label{ELMC}\end{equation} which is equivalent to Equations \eqref{seqno:equivvarpi8}. Thus a solution $\varpi$ of the Euler-Lagrange equations defines a generalised Cartan connection and a generalised frame bundle structure on $\mathcal P$.
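Spelled out (this is only a restatement, for orientation): Equation~\eqref{ELMC} says that the curvature $2$-form $\Omega$ has no components along the $\l$ directions, so that on a critical field
\begin{equation*}
\d\varpi + \frac12\wb{\varpi}{\varpi} = \frac12\Omega^A_{bc}\,\varpi^b\wedge\varpi^c,
\end{equation*}
which is exactly the form of the structure equation assumed in the kinematical discussion of Section~\ref{annCartInt}.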
\subsection{\texorpdfstring{Variation of the coframe $\varpi$} {Variation of the coframe}} Instead of using $\ddl A B$, it will be convenient to use a vertical vector field $X$ on $\Iso(T\tot,\p)$ which has variable coefficients ${\epsilon^A_B}$ as follows : \[ X = {\epsilon^A_B}\ddl A B \] We also gather these coefficients into a $\mathfrak p$-valued $1$-form : \[ \epsilon^A = \epsilon^A_B\lambda^B \] so that \begin{equation} X\lrcorner\,\d\lambda^A = \mathcal L_X\lambda^A = \epsilon^A \end{equation} The correspondence $X\leftrightarrow \epsilon$ is the usual identification between the vertical tangent bundle to a vector bundle and the Whitney sum of the vector bundle with itself. We mention one more identity from Appendix~\ref{anncalc} : \[ \d \left( {\bar s}_{[A]} s^{[A]} \right) = (\d^\lambda\! {\bar s})_\alpha \wedge s^{[A]} + {\bar s}_\alpha \d^\lambda\! s^{[A]} \] for $[A]$ any index, or index list, in a $\lor$-module. We now compute $\EL_X$ : \begin{equation*}\begin{aligned} X\lrcorner\, \d \left( \frac12 p^{BC}_A\D\lambda^A \wedge \lhuit{BC} \right) % &= X\lrcorner\, \left( \d^\lambda\! \D\lambda^D \wedge \frac12 p^{BC}_A\lhuit{BC} + \D\lambda^A \wedge \d^\lambda\! \left( \frac12 p^{BC}_A\lhuit{BC} \right) \right)\\ % &= X \lrcorner\, \left( 0 + \D\lambda^A \wedge \d^\lambda\! \left( \frac12 p^{BC}_A\lhuit{BC} \right) \right)\\ % &= \left( X \lrcorner\, \D\lambda^A \right) \wedge \d^\lambda\! \left( \frac12 p^{BC}_A\lhuit{BC} \right) + \D\lambda^A \wedge X \lrcorner\, \d^\lambda\! \left( \frac12 p^{BC}_A\lhuit{BC} \right) \end{aligned}\end{equation*} On one hand \begin{equation*} X\lrcorner\, \D\lambda = X\lrcorner\, \left( \d\lambda + \fwb\lambda\lambda \right) = \epsilon + 0 \end{equation*} and on the other hand \begin{equation*} \begin{aligned} X\lrcorner\, \left( \d^\lambda\! \left( \frac12 p^{BC}_D\lhuit{BC} \right) \right) % &= X\lrcorner\, \left( \left( \d^\lambda\! \frac12 p^{BC}_D \right) \wedge \lhuit{BC} \right) + X\lrcorner\, \frac12 p^{BC}_D \left( \d^\lambda\! \lhuit{BC} \right)\\ % &= 0 + X \lrcorner\, \frac12 p^{BC}_D \left( \D\lambda^A \wedge \lsept{BCA} \right) \\ &= \epsilon^A\wedge \frac12 p^{BC}_D \lsept{BCA} \end{aligned} \end{equation*} Gathering the two terms, we obtain \begin{equation}\label{eqno:grdyn} \boxed{ \EL_X = \epsilon^A\wedge \left( \D\lambda^D \wedge \frac12 p^{BC}_D \lsept{BCA} + \d^\lambda\! \left( \frac12 p^{BC}_A\lhuit{BC} \right) \right) }\end{equation} The corresponding Euler-Lagrange equations on a field $(\varpi,P)\in\Gamma(\mathcal P,Q)$ are then : \begin{equation*} \forall \epsilon\in\Omega^1(\mathcal P,\mathfrak p), \qquad \epsilon^A\wedge \left( \Omega^D \wedge \frac12 P^{BC}_D \varpi^{(7)}_{ABC} + \d^\varpi \left( \frac12 P^{BC}_A\varpi^{(8)}_{BC} \right) \right) =0 \end{equation*} where $\epsilon^A_B$ are defined over $\mathcal P$ since we consider $X$ which are variations of the field $\varpi$. The equations are equivalent to \begin{equation}\label{eqno:EqGRvarpi} \Omega^D \wedge \frac12 P^{BC}_D \varpi^{(7)}_{ABC} + \d^\varpi \left( \frac12 P^{BC}_A\varpi^{(8)}_{BC} \right) =0 \end{equation} \subsubsection*{The Einstein term} We now explain how the usual Einstein tensor can be identified in \eqref{eqno:EqGRvarpi}. First, we assume that Equation~\eqref{ELMC} is satisfied, so that $\varpi$ defines a generalised frame bundle structure. We will identify tensors built out of $\varpi$ which correspond to the various curvature and the torsion tensors in the standard frame bundle case. 
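As a reading guide (assuming Equation~\eqref{ELMC}, and in the notation of Appendix~\ref{annFrameB}): the $\mathfrak m$-components $\Omega^a$ of $\Omega$ play the role of the torsion $2$-form and the $\l$-components $\Omega^i$ that of the curvature $2$-form,
\begin{equation*}
\Omega^a \;\longleftrightarrow\; \Theta^a = (\d\alpha + \omega\cdot\alpha)^a, \qquad \Omega^i \;\longleftrightarrow\; \left(\d\omega + \fwb\omega\omega\right)^i,
\end{equation*}
whose contractions with $\rho$ produce the Ricci and scalar curvatures used below.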
For more detail on curvature on the frame bundle, see Appendix~\ref{annFrameB}. Let us isolate the part depending on the fixed momenta $p_D^{bc}=2 \delta^l_D \rho_{l,e}^b\eta^{ec}$ : we obtain \begin{equation*}\begin{aligned} \Omega^l \wedge \frac12 p^{bc}_l \varpi^{(7)}_{Abc} + \delta^i_A \d^\varpi \left( \frac12 p^{bc}_i\varpi^{(8)}_{bc} \right) % &= \Omega^l \wedge \frac12 p^{bc}_l \varpi^{(7)}_{Abc} + \delta^i_A \frac12 p^{bc}_i \d^\varpi \left( \varpi^{(8)}_{bc} \right)\\ % &= \Omega^i \wedge \frac12 p^{bc}_i \varpi^{(7)}_{Abc} + \delta^i_A \frac12 p^{bc}_i \Omega^D \wedge \varpi^{(7)}_{bcD}\\ % &= \left( \delta^D_A\Omega^i + \delta^i_A \Omega^D \right) \wedge \frac12 p^{bc}_i \varpi^{(7)}_{bcD} \end{aligned}\end{equation*} Now assuming that Equation~\eqref{ELMC} is satisfied so that $\Omega^A = \frac12\Omega^A_{bc}\alpha^b\wedge\alpha^c$, we compute the wedge product: \begin{equation*}\begin{aligned} &\left( \delta^D_A\Omega^i + \delta^i_A \Omega^D \right) \wedge \frac12 p^{bc}_i \varpi^{(7)}_{bcD}\\ % ={}& \frac12 p^{bc}_i \left( \delta^D_A \left( \Omega^i_{cD} \vneuf b - \Omega^i_{bD} \vneuf c + \Omega^i_{bc} \vneuf D \right) + \delta^i_A \left( \Omega^D_{cD} \vneuf b - \Omega^D_{bD} \vneuf c + \Omega^D_{bc} \vneuf D \right)\rp\\ % ={}& \frac12 p^{bc}_i \left( 2 \Omega^i_{Ab} \vneuf c + \Omega^i_{bc} \vneuf A + \delta^i_A \left( 2 \Omega^d_{db} \vneuf c + \Omega^D_{bc} \vneuf D \right)\rp \end{aligned}\end{equation*} We want to separate the terms according to their dependency on $A$ (of type $a$ or $i$) and $\vneuf{}$ (with a subscript $c$ or $j$) : \begin{multline*} \left( \delta^D_A\Omega^i + \delta^i_A \Omega^D \right) \wedge \frac12 p^{bc}_i \vsept{bcD} % = \left( \delta^a_A p_i^{bc}\Omega^i_{ab} + \delta^c_a \frac12 p_i^{de}\Omega^i_{de} \right) \vneuf c\\ + \delta^i_A \left( \left( p_i^{bc} \Omega^d_{db} + \frac12 p_i^{de} \Omega^c_{de} \right) \vneuf c + \left( \frac12 p_i^{de} \Omega^j_{de} + \frac12 p^{bc}_k \Omega^k_{bc} \delta^j_i \right) \vneuf j \right) % \end{multline*} Now we just have to rewrite the factors in front of $\vneuf c$ so as to get rid of $p$. Recall the definition \[p_i^{bc} = 2\rho^{b}_{i,d} \eta^{dc}\] The first term is \begin{align*} p_i^{bc}\Omega^i_{ab} + \delta^c_a \frac12 p_i^{de}\Omega^i_{de} = 2\rho^{b}_{i,f} \eta^{fc} \Omega^i_{ab} + \delta^c_a \rho^{d}_{i,f} \eta^{fe} \Omega^i_{de} \end{align*} We recognize the contractions of $\Omega^i$ : these are the components of the tensor \begin{equation} -2\Ric_{a,f} \eta^{fc} + \delta^c_a \Scal \end{equation} which is (minus twice) the Einstein tensor. For the second term, we will use the following property : for any tensor field $A$ and any list of indices $[D]$, we have \[ p_i^{bc}A_{bc[D]} = 0 \Leftrightarrow A_{bc[D]} - A_{cb[D]} = 0 \] This is a consequence of the definition $p_i^{bc} = 2\rho^{b}_{i,e} \eta^{ec}$ interpreted as an isomorphism $\lor \xrightarrow{\sim} \ExT^2 \Mink$. We have \begin{align*} p_i^{bc} \Omega^d_{db} + \frac12 p_i^{de} \Omega^c_{de} = p_i^{de} \left( \delta_e^c \Omega^f_{fd} + \frac12 \Omega^c_{de} \right) = \frac12 p_i^{de} \left( \Omega^c_{de} + \delta_e^c \Omega^f_{fd} - \delta^c_d \Omega^f_{fe} \right) \end{align*} in which the antisymmetric term \begin{equation*} \Omega^c_{de} + \delta^c_e \Omega^f_{fd} - \delta^c_d \Omega^f_{fe} \end{equation*} corresponds to the components of the tensor field \begin{equation}\label{eqno:Ttrace} T + \tr(T)\wedge \Id \end{equation} This is a contraction of the torsion which is quite similar to the Ricci curvature.
Its divergence is nonzero and actually equals (twice) the antisymmetric part of the Ricci tensor. These tensors are to be equated in \eqref{eqno:EqGRvarpi} with quantities dependent on the multipliers $P_A^{\nonbc{BC}}$. We will see how to get rid of the multipliers in order to extract meaningful field equations building on Section~\ref{LagMult}. \subsubsection*{Comparison with gravitation on \enquote{soft Poincaré manifolds}} A very similar term is presented in~\cite{SUGRAPrimer} for the so-called gravitation on \enquote{soft Poincaré manifolds}. In our language, it takes the following form : \begin{align*} \frac12 p^{ab}_i \left( \delta^c_A \Omega^i \wedge \alpha^{(1)}_{cab} + \delta^i_A \Omega^c \wedge \alpha^{(1)}_{cab} \right) \end{align*} and its vanishing is equivalent to \[\begin{cases} \frac12 p^{ab}_i \Omega^i \wedge \alpha^{(1)}_{cab} &=0\\ \frac12 p^{ab}_i \Omega^c \wedge \alpha^{(1)}_{cab} &=0 \end{cases}\] These equations imply the Einstein field equation, the vanishing of the term \eqref{eqno:Ttrace} \emph{as well as Equation~\eqref{ELMC}}. However, as our theory is formulated with $10$-forms, there are in our equations factors of $\omega^i$ due to the term $\lsept{bcD}$ in \eqref{eqno:EqGRvarpi}. They weaken the constraint imposed by the equation on the non-horizontal components $\Omega^A_{bk}, \Omega^A_{jk}$. For this reason, we needed to add the Lagrange multipliers $p_A^{\nonbc{BC}}$ in Section~\ref{secno:FBdyn} in order to enforce Equation~\eqref{ELMC}. \subsection{\texorpdfstring{Variation of the multipliers $K^{\alpha i}$} {Variation of the multipliers K}} We check here that the variation of the Lagrange multipliers $K_\alpha^i$ yields the expected constraint equations. There are independent variations in holomorphic directions corresponding to the index $\alpha$, and in anti-holomorphic directions corresponding to $\bar K^{\alpha i}$. \begin{align}\label{eqno:Spcons}\begin{split} \left( \EL_{Dirac} \right)_{\alpha i} \\ = \partial_{\kappa^{\alpha i}}\lrcorner\, &\d \left( \frac{1}2 \left( {\bar s} \gamma^a (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^a s \right) \wedge \lambda^{(9)}_a + \frac i 2\left( \bar\kappa^j_\beta (\d^\lambda\! s)^\beta - (\d^\lambda\!{\bar s})_\beta \kappa^{\beta j} \right) \wedge \lambda^{(9)}_j -m{\bar s} s \lambda^{(10)} \right)\\ = -\frac{i}2\d^\lambda\!&{\bar s}_\alpha \wedge \lneuf i \end{split} \end{align} As $\overline{\thdir}$ is real the two Euler-Lagrange terms are conjugate under the antilinear correspondence $\Sigma\to\Sp^*$ : \begin{equation} \label{eqno:bSpcons} \left( \EL_{Dirac} \right)_i^\alpha = \frac{i}2\d^\lambda\! s^\alpha \wedge \lneuf i \end{equation} From now on we will only use the Euler-Lagrange terms coupled to anti-holomorphic $\partial_{\kappa^{\alpha i}}$ variations. The corresponding Euler-Lagrange equations on a field $\phi$ are thus : \begin{equation}\label{eqno:Spconsphi} \phi^* \left( \d^\lambda\! s^\alpha \wedge \lneuf i \right) = \d^\omega \psi^\alpha \wedge\vneuf i = 0 \end{equation} Therefore, if we assume there is a principal bundle structure obtained from the variational equations derived in Section \ref{VEqGR}, the vanishing of the pullback of the Euler-Lagrange form $\phi^*{\EL}^\alpha_i$ is equivalent to requiring the equivariance of the corresponding section $\psi$ of $\Sigma$ with respect to the action of $\lor$ defined by $\varpi$. Namely, in the case of the spin frame bundle, it means that $\psi$ is associated to a section of the associated spinor bundle.
\subsection{Variation of the spinor field and of the coframe} We start with the variational equations with respect to variations of the spinor fields. We have to compute $\d\overline{\thdir}$. For the sake of clarity we will compute \[ \d\left( {\bar s} \gamma^a (\d^\lambda\! s) \wedge \lambda^{(9)}_a + i \bar\kappa^i (\d^\lambda\! s) \wedge \lambda^{(9)}_i -m{\bar s} s \lambda^{(10)} \right) \] and conclude that it has $\d\overline{\thdir}$ as real part. Let us start with the term depending on $\kappa^i$ : \begin{equation*}\begin{aligned} \d \left( \bar\kappa^i (\d^\lambda\! s) \wedge \lambda^{(9)}_i \right) &= \left( \d^\lambda\! \bar\kappa^i \wedge (\d^\lambda\! s) + \bar\kappa^i \d^\lambda\!(\d^\lambda\! s) \right) \wedge \lambda^{(9)}_i - \bar\kappa^i (\d^\lambda\! s) \wedge \d^\lambda\! \lambda^{(9)}_i\\ &= \left( \d^\lambda\! \bar\kappa^i \wedge (\d^\lambda\! s) + \bar\kappa^i \D\lambda \cdot s \right) \wedge \lambda^{(9)}_i - \bar\kappa^i (\d^\lambda\! s) \wedge \D\lambda^B \wedge \lhuit {iB} \\ &= \d^\lambda\! \bar\kappa^i \wedge \d^\lambda\! s \wedge \lambda^{(9)}_i + \bar\kappa^i (\sigma_j s) \D\lambda^j \wedge \lneuf i - \bar\kappa^i (\d^\lambda\! s) \wedge \D\lambda^B \wedge \lhuit{iB} \end{aligned}\end{equation*} An identical calculation replacing $\kappa^{\alpha i}\lneuf i$ with $\gamma^a s^\alpha\lneuf a$ gives \begin{equation*}\begin{aligned} \d \left( {\bar s} \gamma^a (\d^\lambda\! s) \wedge \lambda^{(9)}_a \right) = \left( \d^\lambda\! ({\bar s} \gamma^a) \wedge \d^\lambda\! s + {\bar s} \gamma^a (\sigma_j s) \D\lambda^j \right) \wedge \lambda^{(9)}_a - {\bar s} \gamma^a (\d^\lambda\! s) \wedge \D\lambda^B \wedge \lhuit{aB} \end{aligned}\end{equation*} Recall that $\gamma^a$ is parallel in the following sense : \[ (\d^\lambda\! \gamma^a) \wedge \lneuf b = \left( \d\gamma^a + \lambda^j[\sigma_j,\gamma^a] \right) \wedge \lneuf b = 0 + 0 \] We can then compute \begin{equation*}\begin{aligned} \d \left( {\bar s} \gamma^a (\d^\lambda\! s) \wedge \lambda^{(9)}_a \right) &= \begin{multlined}[t] \left( (\d^\lambda\! {\bar s}) \gamma^a+{\bar s}\d^\lambda\!\gamma^a \right) \wedge \d^\lambda\! s \wedge \lneuf a\\ + {\bar s} \gamma^a (\sigma_j s) \D\lambda^j \wedge \lambda^{(9)}_a - {\bar s} \gamma^a (\d^\lambda\! s) \wedge \D\lambda^B \wedge \lhuit{aB} \end{multlined}\\ &= \left( \d^\lambda\! {\bar s} \gamma^a \wedge \d^\lambda\! s +{\bar s} \gamma^a \sigma_j s \D\lambda^j\right) \wedge \lambda^{(9)}_a - {\bar s} \gamma^a (\d^\lambda\! s) \wedge \D\lambda^B \wedge \lhuit{aB} \end{aligned}\end{equation*} Last, corresponding to the mass term \begin{equation}\begin{aligned} \d\left( m{\bar s}_\alpha s^\alpha \lambda^{(10)} \right) &= m\d({\bar s}_\alpha s^\alpha) \wedge \lambda^{(10)} + m{\bar s} s \d\lambda^{(10)}\\ &= m\left( (\d^\lambda\!{\bar s})_\alpha s^\alpha + {\bar s}_\alpha \d^\lambda\! s^\alpha \right) \wedge \lambda^{(10)} + m{\bar s} s \D\lambda^B \wedge \lneuf B \end{aligned}\end{equation} Recall that both $\gamma^a$ and $\sigma_i$ are anti-selfadjoint : \begin{gather*} \overline{\gamma^a s} s = - {\bar s} \gamma^a s\\ {\bar s} \bar\sigma_i s = - {\bar s} \sigma_i s \end{gather*} We obtain the total exterior differential by taking the real part of the sum of the three terms. We use curly braces $\{\cdot,\cdot\}$ for anticommutators (note that the anticommutator is not normalised, whereas the $[\mu\nu]$ brackets used further on are) : \begin{multline} \d\overline{\thdir} =\frac{1}{2} \left( \left( 2\d^\lambda\! {\bar s} \wedge \gamma^a\d^\lambda\!
s +{\bar s} \{\sigma_j,\gamma^a\} s \D\lambda^j\right) \wedge \lneuf a - \left( {\bar s} \gamma^a (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^a s \right) \wedge \D\lambda^B \wedge \lhuit{aB} \right)\\ + \frac{i}{2} \left[ \left( \d^\lambda\! \bar\kappa^i \wedge \d^\lambda\! s + (\d^\lambda\!{\bar s}) \wedge \d^\lambda\!\kappa^i\right) \wedge \lneuf i + \left( \bar\kappa^i (\sigma_j s) - ({\bar s}\bar\sigma_j) \kappa^i \right) \D\lambda^j \wedge \lneuf i \right.\\ \left. - \left( \bar\kappa^i (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \kappa^i \right) \wedge \D\lambda^B \wedge \lhuit{iB} \right] - m\left( (\d^\lambda\!{\bar s}) s + {\bar s}\d^\lambda\! s \right) \wedge \lambda^{(10)} - m{\bar s} s \D\lambda^B \wedge \lneuf B \end{multline} Here too we adopt the notation \begin{align} \EL^\alpha&:=\EL_{\partial_{{\bar s}_\alpha}} \end{align} Start with the Euler-Lagrange terms corresponding to variations of the spinor field : \begin{align*} \begin{aligned} \EL_{Dirac}^\alpha &= \partial_{{\bar s}_\alpha}\lrcorner\,\d \overline{\thdir} \\ &= \frac{1}{2} \left( 2(\gamma^a(\d^\lambda\! s))^\alpha \wedge \lambda^{(9)}_a + (\gamma^as)^\alpha \D\lambda^B \wedge \lhuit{aB} \right)\\ &\quad + \frac{i}{2}\left( \d^\lambda\! \kappa^{\alpha i} \wedge \lambda^{(9)}_i + \kappa^{\alpha i} \D\lambda^B \wedge \lhuit{iB} \right) - m s^\alpha \lambda^{(10)} \end{aligned} \end{align*} We introduce the notation $\equiv$ for equality which holds up to a \enquote{constraint} term~(\ref{eqno:grcons}, \ref{eqno:Spcons}). This is justified by the fact that the analysis in Section~\ref{secno:EuclFE} will proceed by first assuming these equations satisfied. \begin{equation} \EL_{Dirac}^\alpha \equiv \frac{1}{2} \left( 2 (\gamma^a\d^\lambda\! s)^\alpha \wedge \lambda^{(9)}_a + (\gamma^a s)^\alpha \D\lambda^c \wedge \lhuit{ac} + i \d^\lambda\! \left( \kappa^i \lambda^{(9)}_i \right)^\alpha \right) - m s^\alpha \lambda^{(10)} \end{equation} We now compute the Euler-Lagrange terms corresponding to variations of the coframe $\varpi$, which will govern the interaction of the spinors with the spacetime geometry. Recall the notation \[X = \epsilon^A_B \ddl A B\] and \[\epsilon^A = \epsilon^A_B \lambda^B\] We obtain \begin{equation}\label{eqno:Spdyn}\begin{split} \left(\EL_{Dirac}\right)_X &= X \lrcorner\, \d \overline{\thdir} \\ &\mkern-18mu = \frac{1}{2}\left( {\bar s} \{\sigma_j,\gamma^a\} s \epsilon^j \wedge \lneuf a + \left( {\bar s} \gamma^a (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^a s \right) \wedge \epsilon^B \wedge \lhuit{aB} \right)\\ &\mkern-10mu + \frac{i}{2} \left( \left( \bar\kappa^i (\sigma_j s) - ({\bar s}\bar\sigma_j) \kappa^i \right) \epsilon^j \wedge \lneuf i + \left( \bar\kappa^i (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \kappa^i \right) \wedge \epsilon^B \wedge \lhuit{iB} \right) - m{\bar s} s \epsilon^B \wedge \lneuf B\\ &\mkern-18mu = -\epsilon^B\wedge \left( \frac12 \left( {\bar s} \gamma^a (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^a s \right) \wedge \lhuit{aB} +\frac{i}2 \left( \bar\kappa^i (\d^\lambda\! 
s) - (\d^\lambda\!{\bar s}) \kappa^i \right) \wedge \lhuit{iB} + m{\bar s} s \lneuf B \right)\\ & + \epsilon^j \wedge \frac 12 \left( {\bar s} \{\sigma_j,\gamma^a\} s \lneuf a + i \left( \bar\kappa^i (\sigma_j s) - ({\bar s}\bar\sigma_j) \kappa^i \right) \lneuf i \right) \end{split}\end{equation} \subsection{The total Euler-Lagrange terms} Gathering the expressions~(\ref{eqno:grcons},\ref{eqno:Spcons},\ref{eqno:grdyn},\ref{eqno:Spdyn}), the total Euler-Lagrange terms corresponding to the Poincaré-Cartan form~(\ref{PoinCarSp}) are then, using again a vertical vector field $X=\epsilon^A_B \ddl A B$, \begin{subequations} \label{eqno:ELtotal} \begin{align} (\EL)^A_{\nonbc{BC}} &= \frac12\D\lambda^A \wedge \lhuit{\nonbc{BC}} \label{eqno:ELconsEWC}\\ \left( \EL \right)_i^\alpha &= \frac{i}2\d^\lambda\! s^\alpha \wedge \lneuf i \label{eqno:ELconsDira}\\ \label{eqno:ELepsilon} \begin{split} \EL_X &= \epsilon^A \wedge \left[ \D\lambda^D \wedge \frac12 p^{BC}_D \lsept{BCA} + \d^\lambda\! \left( \frac12 p^{BC}_A\lhuit{BC} \right) \right.\\ &\qquad \left. - \frac12 \left( {\bar s} \gamma^b (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \gamma^b s \right) \wedge \lhuit{bA} - \frac{i}2 \left( \bar\kappa^j (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \kappa^j \right) \wedge \lhuit{jA} - m{\bar s} s \lneuf A \right]\\ % &\qquad + \epsilon^j \wedge \frac 12 \left( {\bar s} \{\sigma_j,\gamma^a\} s \lneuf a + i \left( \bar\kappa^i (\sigma_j s) - ({\bar s}\bar\sigma_j) \kappa^i \right) \lneuf i \right) \end{split}\\ \EL^\alpha &= (\gamma^a\d^\lambda\! s)^\alpha \wedge \lambda^{(9)}_a + \frac{1}{2} (\gamma^a s)^\alpha \D\lambda^c \wedge \lhuit{ac} - m s^\alpha \lambda^{(10)} + \frac{i}{2} \d^\lambda\! \left( \kappa^i \lambda^{(9)}_i \right)^\alpha \label{eqno:ELalpha} \end{align} \end{subequations} The term $\EL_X$ can be decomposed according to the different components of $\epsilon$ : $\epsilon^a_b$, $\epsilon^i_b$, $\epsilon^a_j$, $\epsilon^i_j$. Each one corresponds to a variation of a different part of the structure of generalised frame bundle : $\epsilon^a_b$ corresponds to variations of the tetrad, $\epsilon^i_b$ corresponds to variations of the connection, $\epsilon^a_j$ corresponds to variations of the orbits and $\epsilon^i_j$ corresponds to variation of the action of $\lor$. Unfortunately, the very presence of the Lagrange multipliers $p^{\nonbc{BC}}_A$ and $(\kappa^{\alpha i},\bar\kappa_{\alpha}^i)$ make the corresponding differential equations $\phi^*\EL = 0$ hard to study beyond the geometric structure of a generalised Cartan connection and a section of the associated spinor bundle. Since the Lagrange multipliers need not be equivariant, they have to be studied on the total bundle space. Note however the dependency on the multipliers : they appear in $\d^\lambda\!$-exact terms \begin{gather*} \d^\lambda\! \left( \frac12 p^{BC}_A\lhuit{BC} \right)\\ \d^\lambda\! \left( \kappa^i \lambda^{(9)}_i \right)^\alpha \end{gather*} and in the following three terms : \begin{gather*} \epsilon^A \wedge \D\lambda^D \wedge \frac12 p^{\nonbc{BC}}_D \lsept{\nonbc{BC}A} \\ \epsilon^A \wedge \left( \bar\kappa^j (\d^\lambda\! s) - (\d^\lambda\!{\bar s}) \kappa^j \right) \wedge \lhuit{jA}\\ \epsilon^j \wedge \left( \bar\kappa^i (\sigma_j s) - ({\bar s}\bar\sigma_j) \kappa^i \right) \lneuf i \end{gather*} If we assume that~(\ref{eqno:ELconsEWC},\ref{eqno:ELconsDira}) vanish, these three terms are shown to be dependent only on the vertical component $\epsilon^A_k\lambda^k$. 
This will be useful in the treatment of the Euler-Lagrange equations in Section~\ref{secno:EuclFE}. \subsection{Structure of the frame bundle}\label{annFrameB} \subsubsection*{Frame bundle, Spin structures and connections} We start with a brief reminder on the structure of the bundle of orthonormal frames. A general reference is~\cite{KobaNomi1}. Everything done in this section applies in a straightforward manner to any metric signature and to unoriented, time-oriented and space-oriented frame bundles as well as to spin (and pin) frame bundles. Start with an $n$-manifold $M$, provided with a metric of signature $(p,q)$ as well as an orientation. The bundle of orthonormal frames $\SO^{+}(M)\overset{\pi}{\to} M$ is a principal $\SO^{+}_{p,q}$-bundle equipped with a solder form $\alpha\in\Omega^1(\SO^{+}(M),\mathbb{R}^{p,q})$ which is $\SO^{+}_{p,q}$-equivariant as well as horizontal (vanishes on vertical vectors). It establishes an $\SO^{+}_{p,q}$-equivariant mapping \begin{center} \begin{tikzcd} T\SO^{+}(M)/V\SO^{+}(M)\simeq \pi^*TM \ar[r] & \Rpq \\ T\SO^{+}(M) \ar[u] \ar[ur,"\alpha"] & \end{tikzcd} \end{center} A \emph{$\Spin^+_{p,q}$}-structure is given by a \enquote{lifting} of the so-called \emph{structure group} $\SO^{+}_{p,q}$ to $\Spin^+_{p,q}$. In other words, given a metric and a space-and-time orientation, it is defined by a principal bundle $P$ with a $\Spin^+_{p,q}$-equivariant bundle map $P\to \SO^{+}(M)$. Note that a similar lifting of the \emph{linear} frame bundle to a principal bundle with the connected double cover of $\SL_n$ as structure group induces a $\Spin^+_{p,q}$-structure for every metric and space-and-time orientation. Alternatively, the mapping to $\SO^{+}(M)$ is equivalent to the data of the solder form pulled back to $P$ : the $\Spin^+_{p,q}$-structure is equivalently a principal $\Spin^+_{p,q}$-bundle equipped with a nondegenerate horizontal $\Rpq$-valued $1$-form which is equivariant for the action $\Spin^+_{p,q}\to \SO^{+}_{p,q}$. A connection 1-form on $\SO^{+}(M)$ (hereafter \emph{connection 1-form}) is given by an $\sopq$-equivariant $\sopq$-valued 1-form $\omega$ on $\SO^{+}(M)$, which is normalized for the action of $\sopq$ : for ${\mathfrak h}\in\sopq$, writing its action by ${\bar{\h}}\in\Gamma(T\SO^{+}(M))$, the normalization condition is \begin{equation} \omega({\bar{\h}}) = {\mathfrak h} \label{eqno:omnormcond}\end{equation} A connection 1-form defines an Ehresmann connection given by its kernel. The equivariance of the form ensures that the horizontal distribution is equivariant. The combined data of \begin{equation} \omega\oplus\alpha\in\Omega^1(\SO^{+}(M),\so_{p,q}\ltimes\Rpq) \end{equation} is called a \emph{Cartan connection} $1$-form, or \emph{affine connection}. \subsubsection*{Tensorial forms} As the frame bundle trivialises the (pullback of the) tangent bundle of $M$, any tensor-valued differential form on $M$ pulls back to $\SO^{+}(M)$ to a differential form with values in a trivialised bundle, which is \emph{horizontal} (contracts to $0$ with any vertical vector) and \emph{equivariant} (the identification $\pi^*\left( TM^{\otimes k}\otimes T^*M^{\otimes l} \right) \to \Rpq^{\otimes k}\otimes \Rpq^{*\otimes l}$ being equivariant). Such forms on $\SO^{+}(M)$ are called \emph{basic} or \emph{tensorial} and are in bijection with forms of the corresponding type on $M$.
We write \[ \Omega^\bullet_h \left( \SO^{+}(M),\, \Rpq^{\otimes k}\otimes \Rpq^{*\otimes l} \right)^{\SOpL} \] for the space of $\Rpq^{\otimes k}\otimes \Rpq^{*\otimes l}$-valued tensorial forms. Equivariance under the connected group $\SOpL$ is equivalent to equivariance under $\soL$, which, for an $\Rpq^{\otimes k}\otimes \Rpq^{*\otimes l}$-valued form $\sigma$ on $\SO^{+}(M)$, can be written as follows \[ (\d i_{\bar{\h}} + i_{\bar{\h}} \d) \sigma + {\mathfrak h}\cdot \sigma = 0 \] for all ${\mathfrak h}\in \so_{p,q}$. As $\sigma$ is horizontal, it can be written \[ (i_{\bar{\h}} \d + {\mathfrak h}\cdot)\sigma = 0 \] Since $\omega$ vanishes on the horizontal directions, is normalized~(\ref{eqno:omnormcond}) and since $\alpha^a$ span the horizontal forms, the infinitesimal equivariance can be equivalently written as \begin{equation}\label{eqno:equivFrameB} \d\sigma + \omega\cdot \sigma = S_{\mathcal I}\alpha^{\mathcal I} \end{equation} where ${\mathcal I}$ are multi-indices, $\alpha^{\mathcal I}$ form a basis of horizontal forms and $S_{\mathcal I}$ are coefficients determined by $\sigma$. \subsubsection*{Curvature and Torsion forms} To a connection on $\SO^{+}(M)$ are associated its torsion 2-form $\Theta$ (with values in $\mathbb{R}^{p,q}$) and its curvature 2-form $\Omega$ (with values in $\so_{p,q}$) defined by \begin{align} \Theta &:= \d\alpha + \omega \cdot \alpha\\ \Omega &:= \d\omega + \fwb\omega\omega \end{align} with $\cdot$ denoting the (tensor) combined product of the wedge product on forms and the action of $\sopq$ on $\Rpq$ and $\fwb\cdot\cdot$ similarly with the adjoint action. They are both horizontal and equivariant, and as such are associated to tensor fields on $M$. Note that the torsion $2$-form has nothing to do with the Poincaré-Cartan form defined in Section~\ref{annLegTrans}, despite the shared notation $\Theta$. From the curvature $2$-form is constructed the Ricci curvature form $\Ric\in \Omega^1(\SO^{+}(M),\Rpq^*)$. It is obtained from $\Omega$ by representing $\sopq$ in $\End(\Rpq)$ then taking the trace \emph{with respect to the first 2-form index}, using the connection to identify $\Rpq$ with the horizontal space. Write \[\rho : \sopq\to\End(\Rpq)\] for the natural representation of $\sopq$ and \[u : \Rpq \to \Gamma(T\SO^{+}(M))\] for the horizontal vector fields such that at each point of $\SO^{+}(M)$ (frames of $T_x M$ for $x\in M$), $u_a$ is the horizontal lift to $T\SO^{+}(M)$ of the vector in $T_x M$ corresponding to $e_a\in\Rpq$. In other words, $\sprod{\alpha}{u} : \Rpq \to\Gamma(\SO^{+}(M),\Rpq)$ is the natural embedding into constant sections. Then the Ricci curvature form can be expressed by the following formula, using $I,J$ for coordinates of $\SO^{+}(M)$, $i$ for indices in $\sopq$ and $a,b$ for indices in $\Rpq$ : \begin{equation} \Ric_{J,b} = \Omega^i_{IJ}\rho^a_{i,b} u_a^I \end{equation} As a tensorial $\Rpq^*$-valued $1$-form on $\SO^{+}(M)$, it is associated to a bilinear form on $M$.
\subsection{The Bianchi identity}\label{annRicBianchi} The curvature obeys the so-called (\emph{algebraic}, or \emph{first}) \emph{Bianchi identity} : \begin{equation}\label{eqno:Bianchi} \d\Theta + \omega\cdot \Theta = \Omega\cdot\alpha \end{equation} We are interested in its consequences for the Ricci curvature, so we compute the contraction \begin{equation}\begin{aligned} (\Omega\cdot\alpha)_{IJK}^a u_a^I &= (\Omega^i\rho_{i b}^a\wedge\alpha^b)_{IJK}u_a^I\\ &= u_a^I\left( \Omega^i_{IJ}\rho_{i b}^a\wedge\alpha^b_K + \Omega^i_{JK}\rho_{i b}^a\wedge\alpha^b_I + \Omega^i_{KI}\rho_{i b}^a\wedge\alpha^b_J \right)\\ &= \Ric_{J,b}\wedge\alpha^b_K + \Omega^i_{JK}\rho^a_{i a} - \Ric_{K,b}\wedge\alpha^b_J\\ (\Omega\cdot\alpha)_{IJK}^a u_a^I &=\Ric_{J,b}\wedge\alpha^b_K - \Ric_{K,b}\wedge\alpha^b_J \end{aligned}\end{equation} in which $\Omega^i\rho^a_{i a}$ vanishes because $\sopq$ acts by traceless endomorphisms ; this term corresponds to the action of $\Omega$ on the determinant line bundle of $M$. Applying the same contraction to the left side of~(\ref{eqno:Bianchi}) we obtain \begin{equation}\label{eqno:divtors} \left( \d\Theta + \omega\cdot \Theta\right)_{IJK}^a u_a^I = \Ric_{J,b}\wedge\alpha^b_K - \Ric_{K,b}\wedge\alpha^b_J \end{equation} which can be read as : the antisymmetric part of the Ricci curvature is equal to the covariant exterior divergence of the torsion. We want to express the left hand term with a covariant divergence, working in the vector bundle trivializing \emph{vector-valued $2$-forms}. Let us state the final formula beforehand. We write it over the base manifold, using $\tr(T)_\mu = T^\nu{}_{\nu\mu}$ and $\div^\nabla T_{\mu\nu} = \nabla_\rho T^\rho{}_{\mu\nu}$ : \begin{equation}\label{eqno:RicBianchiBase} \Ric_{\mu\nu} - \Ric_{\nu\mu} = \div^\nabla(T)_{\mu\nu} - \left(\d\tr(T)\right)_{\mu\nu} \end{equation} \subsubsection*{Preliminary definitions} We need to introduce a somewhat cumbersome notation in order to differentiate between \emph{horizontal forms} and $\ExT^\bullet \Rpq^*$-valued forms on the frame bundle. This is because, although they are identified by the solder form, the covariant exterior differential acts differently on each of them. This distinction is harder to keep track of when working on the base manifold, but working on the frame bundle makes it clearer. We now define the morphism relating forms of both kinds. Let $E$ be an arbitrary (finite-dimensional) representation of $\sopq$. Let $\sigma$ be an equivariant horizontal $k$-form with values in $E$. We define $\hat \sigma$ as the equivariant section of $\ExT^k \Rpq^*\otimes E$ defined by the isomorphism that the connection establishes between the horizontal distribution and $\Rpq$ : \begin{equation} \hat\sigma_{ab\dots k}^A := \sigma^A_{I_1 I_2\dots I_k} u^{I_1}_a u^{I_2}_b \dots u^{I_k}_k \end{equation} We introduce two avatars of the covariant differential : the first one is the standard covariant exterior differential \[\d^\omega A =\d A + \omega\cdot A\] for $A$ an equivariant horizontal differential form with values in a representation $E$ of $\sopq$. For example the definition of the torsion can be written as : \begin{equation} \Theta = \d^\omega \alpha \end{equation} The second one is an antisymmetrised covariant derivative. We write it $\d^\omega\wedge$; it \emph{only applies to equivariant sections of representations $\ExT^k \Rpq^*\otimes E$ of $\sopq$} ($0$-forms), with $E$ a representation of $\sopq$.
Let $\hat \sigma$ be an equivariant section of $\ExT^k \Rpq^*\otimes E$ (corresponding to an equivariant section $\sigma\in \Omega^k(\SO^{+}(M),E)$). We consider \[\d^\omega \hat \sigma \in\Omega^1(\SO^{+}(M),\ExT^k\Rpq^* \otimes E) \xrightarrow[\cdot\mapsto \hat{\cdot}]{\sim} \Gamma(\SO^{+}(M),\Rpq^*\otimes\ExT^k\Rpq^* \otimes E)\] and compose with the antisymmetrisation $\Rpq^*\otimes \ExT^k\Rpq^*\to \ExT^{k+1}\Rpq^*$ to obtain \begin{equation} \d^\omega\wedge : \Gamma(\SO^{+}(M),\ExT^k\Rpq^* \otimes E) \to \Gamma(\SO^{+}(M),\ExT^{k+1}\Rpq^* \otimes E) \end{equation} We write ${\hat\wedge}$ for the product in the exterior algebra $\ExT^* \Rpq^*$, to distinguish it from the wedge product $\wedge$ of $\Omega^*(\SO^{+}(M))$. We also introduce the following \emph{trace} of vector-valued horizontal $k$-forms : \begin{equation} \tr(\sigma) := u^I_a \sigma^a_{IJK\dots} \in\Omega^{k-1}(\SO^{+}(M)) \end{equation} and will also use the natural trace on $\ExT^k\Rpq^*\otimes \Rpq$. The contracted Bianchi identity~(\ref{eqno:divtors}) takes the form \begin{equation}\label{eqno:trRic} \widehat{\tr \left( {\d^\omega\Theta}\right)_{ab}} = \widehat\Ric_{ab} - \widehat\Ric_{ba} \end{equation} The last definition we need is that of the contraction of an $E$-valued $k$-form with $\Theta$. Let $A\in\Omega^k(\SO^{+}(M),E)$. We define the following equivariant $E$-valued $(k+1)$-form \[ (\Theta\lrcorner\, A)_{I_0\cdots I_k} := \sum_{\lel{0}{i<j}{k}} (-1)^{i+j} \Theta^b_{I_i,I_j} u^J_b A_{J,I_0,\cdots \hat{I_i}\cdots \hat{I_j} \cdots I_k} \] Alternatively, the contraction can be defined using the trace : \[ \widehat{\Theta}\lrcorner\, \hat A = \tr\left( \widehat{\Theta} {\hat\wedge} \hat A\right) - \tr(\widehat{\Theta}) {\hat\wedge} \hat A \] \subsubsection*{The contracted Bianchi identity} We want to re-express $\tr\circ\d^\omega$ in \begin{equation} \tr\left( \d^\omega \Theta \right)_{JK} = u^I_a \left( \d\Theta + \omega\cdot \Theta \right)^a_{IJK} \end{equation} We will need the following formula \begin{lemma}\label{lmno:CovExtDiffTors} Let $A$ be an $E$-valued equivariant horizontal $k$-form.
Then \[\widehat{\d^\omega A} = \d^\omega\wedge\hat A + \widehat{\Theta \lrcorner\, \hat A} \] \end{lemma} The proof simply uses the fact that $u_a\lrcorner\, \omega = 0$ and one computes the components of the exterior differential : \begin{equation*}\begin{aligned} \d^\omega A (u_{a_0},u_{a_1}\dots u_{a_k}) &= \d A (u_{a_0},u_{a_1}\dots u_{a_k})\\ &= \begin{multlined}[t] \sum_{i} (-1)^i u_{a_i} ( A(u_{a_0}\dots \hat u_{a_i} \dots u_{a_k} ) )\\ + \sum_{i<j} (-1)^{i+j} A([u_{a_i},u_{a_j}], u_{a_0}\dots \hat{u_{a_i}} \dots \hat{u_{a_j}} \dots u_{a_k} ) \end{multlined}\\ &= \begin{multlined}[t] \sum_{i} (-1)^i u_{a_i} ( \hat{A}_{a_0\dots \hat{a_i} \dots a_k} )\\ + \sum_{i<j} (-1)^{i+j} A(u_b\Theta^b(u_{a_i},u_{a_j}), u_{a_0}\dots \hat{u_{a_i}} \dots \hat{u_{a_j}} \dots u_{a_k} ) \end{multlined}\\ &= \sum_{i} (-1)^i \d^\omega \hat{A}_{a_0\dots \hat{a_i} \dots a_k}(u_{a_i}) + \sum_{i<j} (-1)^{i+j} \Theta^b(u_{a_i},u_{a_j}) \hat{A}_{b,a_0\dots \hat{a_i} \dots \hat{a_j} \dots {a_k}} \end{aligned}\end{equation*} which we write as \begin{equation} \widehat{\d^\omega A} = \d^\omega \wedge \hat A + \widehat{\Theta}\lrcorner\, \hat{A} \end{equation} We can now rewrite~(\ref{eqno:trRic}) : \begin{equation} \tr(\d^\omega \wedge \widehat{\Theta} + \widehat{\Theta\lrcorner\, \Theta} )_{ab} = \widehat\Ric_{ab} - \widehat\Ric_{ba} \end{equation} We extract a divergence term with the help of the following lemma: \begin{lemma}\label{lmno:CovExtDiv} Let $\hat A$ be an equivariant section of $\Rpq\otimes\ExT^k\Rpq^*$. Then $\tr\left( \d^\omega\wedge \hat A \right)$ decomposes as follows \[\tr\left( \d^\omega\wedge \hat A \right) = \tr \d^\omega \hat A - \d^\omega\wedge\tr \hat A \] \end{lemma} It follows from decomposing the indices over which the trace is taken : \begin{equation}\begin{aligned} \tr \left( \d^\omega\wedge \hat A \right)_{a_1\dots a_k} &= \delta^{a_0}_b \left( \sum_{i} (-1)^i \d^\omega \hat{A}^b_{a_0\dots \hat{a_i} \dots a_k}(u_{a_i}) \right)\\ \tr \left( \d^\omega\wedge \hat A \right)_{a_1\dots a_k} &= \d^\omega \hat A^b_{a_1\dots \hat{a_i} a_k}(u_b) - \sum_{i} (-1)^{i-1} \d^\omega \hat{A}^b_{b a_1\dots \hat{a_i} \dots a_k}(u_{a_i}) \end{aligned}\end{equation} Using Lemma~\ref{lmno:CovExtDiv} then once again Lemma~\ref{lmno:CovExtDiffTors}, we do the following computation \begin{equation*} \begin{aligned} \tr(\d^\omega\wedge \widehat{\Theta}) &= \tr \d^\omega\widehat{\Theta} - \d^\omega\wedge \tr\widehat{\Theta}\\ &= \tr \d^\omega\widehat{\Theta} - \left( \widehat{\d^\omega\tr\Theta} - \widehat{\Theta\lrcorner\,\tr\Theta} \right) \end{aligned} \end{equation*} but $\tr\Theta$ is simply an (equivariant) $1$-form so that \[ \d^\omega\tr\Theta = \d\tr\Theta \] and Equation~\eqref{eqno:trRic} now takes the form: \begin{equation} \left( \tr(\d^\omega \widehat{\Theta}) -\widehat{\d\tr\Theta} + \widehat{\tr(\Theta\lrcorner\, \Theta)} + \widehat{\Theta\lrcorner\,\tr\Theta} \right)_{ab}= \widehat\Ric_{ab} - \widehat\Ric_{ba} \end{equation} with $\tr \d^\omega \widehat{\Theta}$ corresponding to the usual covariant divergence. 
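As a quick sanity check (not used in the sequel): for a torsion-free connection, $\Theta = 0$, every term on the left-hand side vanishes and the identity reduces to the familiar symmetry of the Ricci tensor,
\begin{equation*}
\widehat\Ric_{ab} = \widehat\Ric_{ba}.
\end{equation*}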
Finally we prove that the quadratic term in $\Theta$ vanishes : \begin{lemma} \[ \tr(\widehat{\Theta}\lrcorner\,\widehat{\Theta}) + \widehat{\Theta\lrcorner\,\tr\Theta} = 0 \] \end{lemma} The computation is straightforward : \begin{equation*} \begin{aligned} \tr(\widehat{\Theta}\lrcorner\,\widehat{\Theta})_{ab} &= \delta_e^c \left( \widehat{\Theta}^e_{dc} \widehat{\Theta}^d_{ab} + \widehat{\Theta}^e_{da} \widehat{\Theta}^d_{bc} + \widehat{\Theta}^e_{db} \widehat{\Theta}^d_{ca} \right)\\ &= \widehat{\Theta}^c_{dc} \widehat{\Theta}^d_{ab} + \widehat{\Theta}^c_{da} \widehat{\Theta}^d_{bc} + \widehat{\Theta}^c_{db} \widehat{\Theta}^d_{ca}\\ &= -\tr(\widehat{\Theta})_d \widehat{\Theta}^d_{ab} + \widehat{\Theta}^c_{da} \widehat{\Theta}^d_{bc} - \widehat{\Theta}^d_{ca} \widehat{\Theta}^c_{bd}\\ &= - \widehat{\Theta\lrcorner\,\tr\widehat{\Theta}} + 0 \end{aligned} \end{equation*} Our final rewriting of~(\ref{eqno:trRic}) is \begin{equation}\label{eqno:RicBianchiFrame}\boxed{ \tr(\d^\omega \widehat{\Theta})_{ab} - \d(\tr\widehat{\Theta})_{ab} = \widehat\Ric_{ab} - \widehat\Ric_{ba} }\end{equation} which corresponds on $M$ to Equation~\eqref{eqno:RicBianchiBase}. \subsection{Variation of the Ricci curvature}\label{RicContors} We want to compare the Ricci curvature of two connections. Let $\omega$ and $\omega+\tau$ be two connection $1$-forms : $\tau\in\Omega_h^1(\SO^{+}(M),\sopq)^{\SO^{+}_{p,q}}$. Denote their respective curvature $2$-forms by $\Omega$ and $\Omega^\tau$. They are related by the following equation \begin{equation} \Omega^\tau = \d\omega + \d\tau + \fwb\omega\omega + \wb\omega\tau + \fwb\tau\tau = \Omega + \d^\omega\tau + \fwb\tau\tau \end{equation} Using the representation embedding $\sopq\hookrightarrow \Rpq\otimes\Rpq^*$ we can take the trace and obtain \begin{equation*} \Ric(\omega + \tau)_{J,b} = \rho^a_{i,b} u_a^I (\Omega + \d^\omega\tau + \fwb\tau\tau)^i_{I,J} = \Ric(\omega)_{J,b} + \tr \left( \d^\omega \tau + \fwb\tau\tau \right)_{J,b} \end{equation*} or in terms of $\Rpq^*\otimes\Rpq$-valued fields on the frame bundle \begin{equation} \widehat\Ric(\omega + \tau) = \widehat\Ric(\omega) + \widehat{\tr \left( \d^\omega \tau + \fwb\tau\tau \right)} \label{VarRic} \end{equation} Using the lemmas, it is also possible to reformulate it in a similar way to Equation~\eqref{eqno:RicBianchiFrame} : \begin{equation} \widehat\Ric(\omega + \tau) = \widehat\Ric(\omega) + \tr\d^\omega\hat\tau - \d^\omega \widehat{\tr\tau} + \tr \left( \widehat{\Theta^\omega\lrcorner\,\tau} + \frac12\widehat{\wb\tau\tau} \right) \end{equation} \subsection{Clifford modules}\label{annCMod} The spinors we consider are the so-called \emph{Dirac spinors}. They are elements of a complex Clifford module. Set a signature $(p,q)$, which will be either Euclidean $(4,0)$ or $(0,4)$ or Lorentzian $(3,1)$ or $(1,3)$ for our purposes (see~\ref{annSignConv} for details about the signature alternatives). The Clifford algebra $\Cl_{p,q}$ is defined as the quotient of the tensor algebra of $\Rpq$ by the ideal generated by the (even) elements $v\otimes v + \sprod{v}{v}_{p,q}$ for $v\in\Rpq$. It is the universal algebra satisfying ${v_1 \cdot v_2 + v_2\cdot v_1 = -2\sprod{v_1}{v_2}_{p,q}}$. It has a natural structure of super-algebra ($\mathbb{Z}/2\mathbb{Z}$-grading). The Euclidean (4-dimensional) Clifford algebras are $\Cl_{4,0}\simeq \Cl_{0,4}\simeq\Mat_2(\mathbb{H})$ while the Lorentzian Clifford algebras are $\Cl_{1,3}\simeq\Mat_4(\mathbb{R})$ and $\Cl_{3,1}\simeq \Mat_2(\mathbb{H})$ \cite{MajSpin}.
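As an elementary illustration of the defining relation (in a two-dimensional Euclidean toy case, not one of the signatures used above): on $\mathbb{C}^2$, the assignments $e_1\mapsto i\sigma_1$, $e_2\mapsto i\sigma_2$ in terms of the Pauli matrices satisfy
\begin{equation*}
(i\sigma_1)^2 = (i\sigma_2)^2 = -1, \qquad (i\sigma_1)(i\sigma_2) + (i\sigma_2)(i\sigma_1) = 0,
\end{equation*}
that is $v_1\cdot v_2 + v_2\cdot v_1 = -2\sprod{v_1}{v_2}$ for the standard Euclidean product, exhibiting $\mathbb{C}^2$ as a complex Clifford module in dimension $2$.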
Our spinors are subject to the Dirac equation $\slashed{\nabla} \psi - m\psi = 0$ which requires (with these conventions) a structure of \emph{real} Clifford module. Nonetheless we will consider complex spinors, so as to get a Clifford algebra in which Wick rotations between all signatures are straightforward to implement (as well as flipping the sign in the Clifford algebra definition). The complexified Clifford algebras of a $2k$-dimensional space are all isomorphic to $\Mat_{2^k}(\mathbb{C})$, hence the irreducible module (class) is of complex dimension $2^k$. We will write $\Sp_{p,q}$ for the irreducible (complex) module. It admits (for each signature) an hermitian form which is compatible with the \emph{real} Clifford module structure in the following sense : \begin{equation}\label{eqno:CliffEquiv} \forall \gamma_a\in\Rpq\subset\Cl_{p,q},(s_1,s_2)\in\Sp_{p,q}^2,\quad \sprod{\gamma_a\cdot s_1}{s_2} + \sprod{s_1}{\gamma_a\cdot s_2} = 0 \end{equation} It is uniquely defined up to a nonzero real factor. It is defined so that the Dirac operator $\gamma^a\nabla_a$ is hermitian. We will fix such a \emph{spinor metric} and use the same normalization everywhere. Normed vectors of positive (squared) norm act by unitary transformations while those of negative norm act by involutive anti-isometries of the metric. Note that the signature of the hermitian form is either $(4,0)$, $(0,4)$ or $(2,2)$, depending on the spacetime signature \cite{SpinHerm}. We will make use of the implicit notation $\bar{s_1}s_2$ for the product $\sprod{s_1}{s_2}$ of elements of $\Sp_{p,q}$. In this notation $\bar{s_1}$ is to be understood as an element of $\overline\Sp_{p,q}$ identified with $\Sp_{p,q}^*$ through the hermitian form. Elements of $\Cl_{p,q}$ acting on $\overline\Sp_{p,q}$ will often be represented \emph{as acting on the right} for notational convenience : $\bar{s_1} \gamma_a = - \bar{\gamma_a} \cdot \bar{s_1}$. \subsection{\texorpdfstring{$\Pin$ and $\Spin$ groups} {Pin and Spin groups}}\label{annPinSpin} The invertible elements of the Clifford algebra act by conjugation (one also considers twisted conjugation, but it will not be relevant for our purposes). The subgroup of elements preserving the vector space $\Rpq\subset\Cl_{p,q}$ will be written $G$. It can be identified with the group composed of elements of $\Cl_{p,q}$ that can be expressed as a product of non-isotropic vectors of $\Rpq$ (in a non-unique way). For practical purposes, it is more convenient to reduce this group to the products of \emph{normed} vectors of $\Rpq$. The group $G$ has a morphism $G\xrightarrow{N} \mathbb{R}^*$ which corresponds to the product of the (non-negated) squared norms of the vectors which compose the element \cite{NotesSpin}. The hermitian form is then $G$-equivariant seen as a morphism $\overline\Sp_{p,q}\otimes\Sp_{p,q}\to \mathbb{C}$. The inverse image of $\{\pm1\}$ by $N$ is called the (Cliffordian) \emph{Pin group}, written $\Pin_{p,q}$. Its elements act on $\Rpq$ by isometries, hence it is provided with a natural morphism to $\operatorname{O}_{p,q}$, which is a two-fold covering. The subgroup composed of even elements is called the \emph{Spin group}, written $\Spin_{p,q}$ and has a natural two-fold covering map to $\SO_{p,q}$. In Euclidean signature, $\Pin_{p,0}$ is identified with the kernel of $G\to\mathbb{R}^*$. The group $\Pin_{p,0}$ is a maximal compact subgroup of $G$ and $\Spin_{p,0}$ its principal (identity) component. In non-Euclidean signature the kernel of $G\to\mathbb{R}^*$ is a different subgroup.
It corresponds to products of normed vector with an even number of spacelike (negative norm) vectors. They act on $\Rpq$ by isometries preserving the orientation in time. The intersection of the kernel with the even subalgebra defines the \emph{orthochronous $\Spin$ group}, written $\Spin^+_{p,q}$. It acts on $\Rpq$ by isometries preserving both orientations in space and in time. Note that some authors (as in~\cite{NotesSpin}) take this group as the $\Spin$ group. Note that due to Equation~(\ref{eqno:CliffEquiv}) $\Spin^+_{p,q}$ acts by isometries of $\Sp_{p,q}$, and $\sopq$ by infinitesimal isometries. The hermitian form being invariant under $\Spin^+_{p,q}$ can be interpreted as the form establishing an equivariant isomorphism $\overline\Sp_{p,q} \to \Sp_{p,q}^*$ from the complex conjugated $\Spin$ module to the dual complex $\Spin$ module. The Lie algebra of $\SO_{p,q}$ and $\Spin_{p,q}$ are isomorphic and they turn out to be realized in $\Cl_{p,q}$ by the space (linearly) spanned by commutators of vectors, for the algebraic (ungraded) commutator bracket \cite{NotesSpin} : \[ \forall (a,b)\in\Rpq,\quad \frac12(ab-ba)\in\operatorname{\mathfrak{spin}}_{p,q}\subset \Cl_{p,q} \] The isomorphism with $\sopq$ uses the standard representation as anti-symmetric operators $\sopq\subset \End(\Rpq)\simeq \Rpq\otimes\Rpq^*$ and composes with the inverse metric $\Rpq^*\to\Rpq$ to obtain a \emph{linear} isomorphism $\sopq\overset\rho\simeq \ExT^2 \Rpq$. It is then sent onto $\operatorname{\mathfrak{spin}}_{p,q}$ by the mapping \[ a\wedge b \to \frac14(a\cdot b - b\cdot a) \] For this reason, when using indices $i$ for $\sopq$ and $a,b$ for $\Rpq$, we will be using both notations $1/2\sigma_i$ and $1/2\sigma_{ab}$ for a basis of $\operatorname{\mathfrak{spin}}_{p,q}$ embedded in $\Cl_{p,q}$, with a notation reminiscent of the Pauli matrices (which represent a 3-dimensional Clifford algebra), using the $1/2$ factor but not the $i$ factor common in the physics literature (used to turn them into hermitian operators). Explicitly, writing in components $\rho_{i}^{ab}$ the morphism $\sopq\to\ExT^2\Rpq$ the two notations are related by \begin{equation} \frac12 \rho_{i}^{ab}\sigma_{ab} = \sigma_i \end{equation} We also use the common notation $\gamma : \Rpq \to \End(\Sp_{p,q})$. The \emph{chirality} operator is defined as the image of the volume element of $\Rpq$ : for an oriented orthonormed basis $(e_1,e_2,e_3,e_4)$ \begin{equation} \gamma_5 := \gamma_1\gamma_2\gamma_3\gamma_4 \end{equation} which as the notation suggests defines a morphism from a higher dimensional Clifford module. Aiming for a unified treatment, we do not add the usual $i$ factor in the case of a Lorentzian signature. It satisfies $(\gamma_5)^2 = (-1)^q$ and lies in the supercenter of the algebra in even dimension. Its eigenspaces on $\Sp_{p,q}$ are irreducible representations of $\Spin_{p,q}$ and spinors with values in these representations are called \emph{Weyl spinors}. For our purposes, it will implement the duality between codegree $1$ forms and vectors (as discussed in Section~\ref{anndual}). \subsection{Chiral current}\label{annCommSigGam} Seen as a morphism $\Rpq \to \End(\Sp_{p,q})$, $\gamma$ is $\Spin_{p,q}$-equivariant ($\Spin_{p,q}$ being represented by $\SO_{p,q}$). 
The hermitian metric allows us to define a \enquote{dual} $\Gamma : \overline\Sp_{p,q}\otimes \Sp_{p,q} \to \Rpq^*$ which is equivariant as well : \begin{equation} (a,s_1,s_2)\in\Rpq\otimes\overline\Sp_{p,q}\otimes\Sp_{p,q}\mapsto \sprod{s_1}{\gamma(a)\cdot s_2} \end{equation} Notice how the action of the $\Pin_{p,q}$ group is \emph{twisted} under the morphism, as $\gamma$ takes antihermitian values. In a similar way, one can define tensor-valued hermitian forms by using products of $\gamma$. We are interested in the element $\{\sigma_{\mu\nu},\gamma_\tau\}$ as it appears in the equation~(\ref{eqno:ConnToLC}). By definition $\sigma_{\mu\nu} = \frac12[\gamma_\mu,\gamma_\nu]$. Given that the commutator bracket is a Poisson bracket, one has a Jacobi-like (derivation) identity with the anti-commutator, so that \begin{equation} \{[\gamma_\mu,\gamma_\nu],\gamma_\tau\} = [\gamma_\mu,\{\gamma_\nu,\gamma_\tau\}] - \{\gamma_\nu,[\gamma_\mu,\gamma_\tau]\} = \{\gamma_\nu,[\gamma_\tau,\gamma_\mu]\} \end{equation} hence the $\End(\Sp_{p,q})$-valued $3$-form $\{\sigma_{\mu\nu},\gamma_\tau\}$ is antisymmetric in two pairs of indices, namely totally antisymmetric as two transpositions span the whole symmetric group. It can be expressed using the chirality element and the Levi-Civita symbol (see Appendix~\ref{anndual}), using the method described in \cite{GammaSigma} (with a chirality element different by an $i$ factor) : \begin{equation}\tag{\ref{eqno:CommChir}} \frac12 \{\sigma_{\mu\nu},\gamma_\tau\} = \varepsilon_{\upsilon\mu\nu\tau}\gamma^\upsilon\gamma^5 \end{equation} As $\gamma_\mu$ have antihermitian values, $\frac12 \{\sigma_{\mu\nu},\gamma_\tau\}$ takes value in \emph{hermitian} operators. We also record the following formula, also proved (using the Lorentzian signature) in \cite{GammaSigma} : \begin{equation}\tag{\ref{eqno:LCcontr}} \frac12 \epsilon_{\xi\nu\tau\chi} \epsilon^{\upsilon\tau\chi\mu} = (-1)^q \left( \delta_\xi^\upsilon \delta_\nu^\mu - \delta_\xi^\mu\delta_\nu^\upsilon \right) \end{equation} with $(-1)^q$ corresponding to the norm of the positive volume element $\mathrm{vol}$. \subsection{Signature and conventions}\label{annSignConv} There exists two different conventions for the Clifford algebra of a given signature, namely to choose either $v_1 \cdot v_2 + v_2\cdot v_1 = -2\sprod{v_1}{v_2}_{p,q}$ or $v_1 \cdot v_2 + v_2\cdot v_1 = 2\sprod{v_1}{v_2}_{p,q}$. Going from one convention to the other is equivalent to consider the opposite metric ; as noted in~\cite{NotesSpin} Clifford algebras of opposite signature are ($\mathbb{Z}/2\mathbb{Z}$-graded) opposite algebras of each other. As $\Spin$ groups are composed of even elements of the Clifford algebra, $\Spin$ groups of opposite signature are \emph{isomorphic}. As illustrated in Section~\ref{annCMod} real Clifford algebras of different signature can be non-isomorphic. Complex Clifford algebras, on the other hand, are all isomorphic as all (real) bilinear metrics are congruent under the action of the complex linear group. Note however that the spinorial metric, as we defined it in~\ref{annCMod}, is dependent on the \emph{real} Clifford algebra. Physical theories require an action of the $\Pin$ group on \enquote{spinors} -- they are then sometimes called \emph{pinors}~\cite{MajSpin,PinGroups}. The catch is that $\Pin$ groups of opposite signature are \emph{non-isomorphic}~\cite{PinGroups} (and may actually need to be specified~\cite{PinGR}). 
In this respect, the exact choice of a signature sign (along with a sign convention) does matter. The structure needed on a pseudo-riemannian manifold to carry (real) pinors is a $\Pin$ structure. That being said, a $\Spin^+$ structure naturally induces $\Spin$ and $\Pin$ structures of the corresponding signatures. As $\Spin^+_{1,3}\simeq\Spin^+_{3,1}$ (lifting an isomorphism $\SO^{+}_{1,3}\simeq\SO^{+}_{3,1}$), a given $\Spin^+_{1,3}$ structure can induce both $\Pin_{1,3}$ and $\Pin_{3,1}$ structures (for opposite metrics). \subsection{Introduction} \subsection{Notations and conventions}\label{secno:notconv} We will make free use of the Einstein convention : \[ A_iB^i := \sum_i A_i B^i \] as well as of the so-called \enquote{musical} isomorphisms raising and lowering indices through a metric that will generally be implicit (as long as there is no ambiguity). The convention for the order of the indices will be to keep the relative order of upper indices and lower indices, and place upper indices \emph{before lower indices}. For example given a tensor $T^\pi{}_{\mu\nu}$ the corresponding totally covariant and totally contravariant tensors are \begin{align*} T_{\tau\mu\nu} &:= g_{\tau\pi}T^\pi{}_{\mu\nu}\\ T^{\pi\tau\rho} &:= g^{\tau\mu}g^{\rho\nu}T^\pi{}_{\mu\nu} \end{align*} given a metric $g$ on the space in which the indices $\tau,\mu,\nu$ live. The metric and the inverse metric will be both written with the same symbol, but we will generally use the Kronecker delta $\delta^\mu_\nu = g^{\mu\tau}g_{\tau\nu}$ for the identity which is the corresponding endomorphism of both covariant and contravariant vectors. We will write ${\eta}_{ab}$ for the Minkowski metric on the Minkowski space $\mathfrak m$, used at the same time as a pseudo-Euclidean affine space, as an abelian Lie group and as an abelian Lie algebra. Our convention for the Lorentzian signature is $(+---)$ and for the Clifford algebras $u\cdot v + v\cdot u = -2\sprod u v$ (more detail in Appendix~\ref{annspin}). We will be working with the connected \emph{proper orthochronous Lorentz group} $\mathfrak{L}=\SOpL$ which we will just call \emph{Lorentz group}. Its Lie algebra is $\lor=\soL$. The \emph{Poincaré group} is isomorphic to the semi-direct product of $\mathfrak m$ with $\mathfrak{L}$ : \[ \mathfrak{P} \simeq \mathfrak{L}\ltimes \mathfrak m \] and there is an isomorphism between the associated Lie algebras : \[ \mathfrak p \simeq \lor\ltimes \mathfrak m \] If not explicitly mentioned (for example in Appendix~\ref{annMC}), we will consider \emph{finite dimensional second countable Hausdorff} differentiable manifolds. \subsection{Introduction} The theory of General Relativity models spacetime as a $4$-dimensional differentiable manifold. The gravitational field is encoded in a linear connection and gives \emph{geometrical structure} to the manifold. In the original approach, a Lorentzian metric models the gravitational potential and the gravitational field corresponds to the associated Levi-Civita connection. The variational formulation of the theory is due to Einstein and Hilbert; the dynamical field is the metric and the Lagrangian density is simply the scalar curvature (multiplied by the pseudo-Riemannian density). As research in General Relativity progressed and the framework of differential geometry kept developing alternative formulations of General Relativity were discovered. The \emph{Palatini formulation} of General Relativity relaxes the relation between the linear connection and the metric. 
This allows for a first-order formulation of gravity. A variant of this formulation uses the \emph{tetrad formalism} : the metric field is replaced by a frame field which defines the metric for which it is orthonormal. Since the metric field is a quadratic function of the frame field, the tetrad is sometimes called the \enquote{square-root of the metric}. Because the metric is defined through a tetrad field, the theory gains an extra $\O_{1,3}$ gauge freedom. The \emph{Einstein-Cartan theory} is the case where there is a Lorentzian metric and the connection is required to be metric but torsion is allowed. The corresponding geometry on spacetime is suitably understood in the framework of \emph{Cartan geometry}~\cite{CartanWise}. Differences with Einstein's original theory manifest themselves when gravitation is coupled to spinor fields. The spinor fields act as a source for the torsion field. One simple example is the \emph{Einstein-Cartan-Dirac} theory which couples the Einstein-Cartan gravitation with a Dirac spinor. The usual variational treatment of spinor field theories in a dynamical spacetime uses a tetrad in order to allow variations of the metric structure while the spinor fields remain \enquote{constant}. When a tetrad is used instead of a metric, the theory is called \emph{Sciama-Kibble} theory \cite{STandFields}. In~\cite{LFB} Hélein and Vey, following~\cite{CFTRefFrames}, proposed an action defined on a (structure-less) $10$-dimensional manifold $\mathcal P$ such that under some hypotheses a solution of the associated Euler-Lagrange equations defines : \begin{itemize} \item A $4$-dimensional manifold ${\mathcal E}$ over which $\mathcal P$ is fibred. \item A Riemannian metric and an orientation on ${\mathcal E}$. \item An identification of the fibration $\mathcal P\to {\mathcal E}$ with the orthonormal frame bundle. \item A metric connection on ${\mathcal E}$ satisfying the (Riemannian) Einstein-Cartan field equations. \end{itemize} Their model can be used with a Lorentzian signature (and space-and-time orientations) but in this case both the construction of the fibration and the mechanism giving the Einstein-Cartan field equations fail in general. The reason lies in the non-compactness of the (connected) Lorentz group. We argue that there is another obstruction to the construction of the fibration $\mathcal P\to{\mathcal E}$, even in Riemannian signature. Indeed in this case the $4$-dimensional base space ${\mathcal E}$, obtained as an orbit space, can have singularities and is not \emph{a priori} a smooth manifold. Our second point is that their derivation of the field equations involves heavy computations on which we contend that a more geometrical perspective can shed some light. The aim of the present paper is twofold : first, we give an analysis of the mechanism according to which the variational equations on the bundle total space factor to the base manifold of the fibration ; second, we show how it is possible to use the same principle to build an Einstein-Cartan-Dirac Lagrangian on a structure-less $10$-manifold. The principal bundle structure is constructed from a nondegenerate $1$-form $\omega\oplus\alpha$ with values in the Poincaré Lie algebra. The Poincaré Lie algebra can be decomposed as a semi-direct product $\lor\ltimes \Mink$ with $\lor$ the Lie algebra of the Lorentz group and $\Mink$ the Minkowski space seen as an abelian Lie algebra.
The $1$-form obeys the following equations : \begin{subequations}\label{eqno:introCartForm} \begin{align} \d \omega^i + \fwb{\omega}{\omega}^i &= \frac12 \Omega^i_{bc} \alpha^b\wedge\alpha^c\\ \d \alpha^a + \wb{\omega}{\alpha}^a &= \frac12 \Omega^a_{bc} \alpha^b\wedge\alpha^c \end{align} \end{subequations} From these equations it is possible to build a Lie algebra action on the manifold. In fact, this mechanism is well known from the group manifold approach to supergravity~\cite{GeoSUGRA,SUGRAI}. However more hypotheses are needed for the action to integrate to a Lie group action, and for the fibration over the orbit space to form a principal bundle fibration. The first hypothesis is a completeness requirement, which is stated in~\cite{LFB}, but it is not sufficient. These missing details are presented in Appendix~\ref{annMC} and will be the subject of an upcoming paper~\cite{CartInt}. In~\cite{LFB} field equations on the base manifold are obtained in the following way : one first uses the Euler-Lagrange equations associated with the variation of an extra field that plays the role of Lagrange multipliers. These allow one to construct the frame bundle structure on the $10$-manifold. Then in order to deal with the remaining Euler-Lagrange equations, associated with variations of the fields $\omega$ and $\alpha$, a change of fibre coordinates is applied that amounts to choosing a local frame on the four-dimensional base manifold. It allows converting equivariant quantities on the frame bundle to \emph{invariant quantities}. In these coordinates part of the Euler-Lagrange equations manifestly becomes an exact divergence and another part is invariant under the group action. Integration along fibres allows one to conclude that the invariant part of the Euler-Lagrange equations has to vanish, which turns out to be equivalent to the Einstein field equations on spacetime. Once part of the Euler-Lagrange equations is used to obtain the (local) frame bundle structure, we approach the problem of the field equations from a different perspective. Instead of considering the Euler-Lagrange equations corresponding to field variations associated with a specific set of coordinates, it is enough to consider variations of the fields which are \emph{equivariant} with respect to the action of the structure group. The calculations are in a sense equivalent but our perspective makes it clear why and how the Euler-Lagrange equations split into two terms that respectively give an exact term and the usual field equations on the base manifold. With this insight it is simple to extend the theory to include spinor fields. One peculiarity of the spinor theory thus constructed is that instead of starting from a spacetime and a tetrad which connects the spacetime to a reference bundle in which spinor fields live, we start from the tentative frame bundle on which the spinor fields live and from there reconstruct the spacetime. The paper is organised as follows. In Section~\ref{annLegTrans} we give a brief introduction to the aspects of multisymplectic field theory we will be using throughout the paper. Section~\ref{secno:EC} is a short presentation of the Lagrangian of the Einstein-Cartan theory of gravitation. We show in Section \ref{secno:FrameB} how the Einstein-Cartan Lagrangian can be lifted to the frame bundle. In Section \ref{secno:FBdyn} we get to the crux of Hélein and Vey's model : the frame bundle structure can be \emph{locally} characterised by a $1$-form satisfying Equations~\eqref{eqno:introCartForm}.
The frame bundle is dropped and we only work with the $1$-form which defines what we call a \enquote{generalised frame bundle structure}. The Equations are then translated into a Lagrange multiplier term which is added to the Lagrangian. We perform the Legendre transform and obtain the same Poincaré-Cartan form as in~\cite{LFB}, with Lagrange multipliers identified with specific components of the momentum. Compared to the non-holonomic Legendre transformation used in~\cite{LFB}, we think our construction is more systematic. Next in Section~\ref{VEqGR} we derive the associated Euler-Lagrange equations. However their treatment is deferred until Section~\ref{secno:EuclFE}. Section~\ref{spinor} is the equivalent of Sections~\ref{secno:EC}-\ref{secno:FrameB} for the Dirac Lagrangian : we present the Lagrangian and lift it to the \emph{spinor frame bundle}, then express it in terms of the generalised frame bundle structure. The corresponding Euler-Lagrange equations are derived in Section~\ref{VEqSp}. In Section~\ref{LagMult} we present a general framework for Lagrange multipliers in multisymplectic field theory. We then show how for the Lagrangians from Sections~\ref{secno:FBdyn} and~\ref{spinor} it is possible to obtain Euler-Lagrange equations with the dependence in the Lagrange multipliers reduced to an exact term. This is applied in Section~\ref{secno:EuclFE} to the Einstein-Cartan-Dirac theory formulated over a $10$-dimensional manifold as described in Section~\ref{spinor}. Assuming the generalised frame structure integrates into a standard frame structure, we derive from the Euler-Lagrange equations the usual Einstein-Cartan-Dirac field equations on the $4$-dimensional base manifold. There is however part of the Euler-Lagrange equations on the $10$-dimensional manifold involving the Lagrange multipliers which we fail to express on the base manifold. Finally, we give a very brief analysis of the Einstein-Cartan-Dirac equations by identifying the value of the torsion, how it relates to energy-momentum, and an explicit comparison to the Einstein-Dirac theory in which the Dirac operator is associated to the torsion-free Levi-Civita connection. \section{Introduction} \input{intro2.tex} \input{intro.tex} \section{Multisymplectic Field Theory}\label{annLegTrans} \input{LegTrans.tex} \section{The Einstein-Cartan theory of gravitation}\label{secno:EC} \input{ECbase.tex} \section{General Relativity formulated on the frame bundle}\label{secno:FrameB} \input{GRFrameB.tex} \section{Generalised frame bundle structure with connection as a dynamical field}\label{secno:FBdyn} \input{FBdyn.tex} \section{Variational equations for gravitation}\label{VEqGR} \input{VEqGR.tex} \input{VEqGR2.tex} \section{Dirac spinors on the spinor frame bundle}\label{spinor} \input{SpinFrame.tex} \section{Variational equations for a spinor on a generalised frame bundle}\label{VEqSp} \input{VEqSp.tex} \section{Lagrange multipliers as a differential primitive of the Euler-Lagrange form}\label{LagMult} \input{LagMult.tex} \section{Derivation of the Einstein-Cartan-Dirac equations on spacetime in Riemannian signature}\label{secno:EuclFE} \input{FEq.tex}
\section{Introduction} \label{intro} With the detection of solar-like oscillations in thousands of red giant stars, the {\it Kepler} and CoRoT missions have opened the way to the derivation of basic stellar properties such as mass and age even for single stars located at distances of several kiloparsecs \citep[e.g.][and references therein]{chaplinmiglio13}. In most cases this derivation is based on the two most easily measured asteroseismic properties: the large frequency separation, \mbox{$\Delta\nu$}, and the frequency of maximum oscillation power, \mbox{$\nu_{\rm max}$}. \mbox{$\Delta\nu$}\ is the separation between oscillation modes with the same angular degree and consecutive radial orders, and scales to a very good approximation with the square root of the mean density ($ \overline{\rho}$), while \mbox{$\nu_{\rm max}$}\ is related to the cut-off frequency for acoustic waves in an isothermal atmosphere, which scales with surface gravity $g$ and effective temperature \mbox{$T_{\rm eff}$}. These dependencies give rise to the so-called scaling relations: \begin{eqnarray} \Delta\nu & \propto & \overline{\rho}^{1/2} \propto M^{1/2} / R^{3/2} \nonumber \\ \nu_{\rm max} & \propto & g T_{\rm eff}^{-1/2} \propto (M/R^2) T_{\rm eff}^{-1/2} \,\,\,\,. \label{eq:scaling} \end{eqnarray} It is straightforward to invert these relations and derive masses $M$ and radii $R$ as a function of \mbox{$\nu_{\rm max}$}, \mbox{$\Delta\nu$}, and \mbox{$T_{\rm eff}$}. The latter has to be estimated in an independent way, for instance via the analysis of high-resolution spectroscopy. $M$ and $R$ can then be determined either (1) in a model-independent way by the ``direct method'', which consists in simply applying the scaling relations with respect to the solar values, or (2) via some statistical method that takes into account stellar theory predictions and other kinds of prior information. In the latter case, the methods are usually referred to as either ``grid-based'' or ``Bayesian'' methods. Determining the radii and masses of giant stars brings consequences of great astrophysical interest: the radius, added to a set of apparent magnitudes, can be used to estimate the stellar distance and the foreground extinction. The mass of a giant is generally very close to the turn-off mass of its parent population, and hence closely related to its age; the latter is otherwise very difficult to estimate for isolated field stars. In addition, the surface gravities of asteroseismic targets can be determined with an accuracy generally much better than allowed by spectroscopy. Although these ideas are now widely recognized and largely used in the analyses of CoRoT and {\it Kepler} samples, there are also several indications that asteroseismology can provide even better estimates of masses and ages of red giants than allowed by the scaling relations above. First, there is significant evidence that corrections of a few per cent are necessary \citep[see ][]{white11, miglio12_1,miglio13,brogaard16, miglio16, guggenberger16, sharma16, handberg16} in the $\mbox{$\Delta\nu$}$ scaling relation. Although such corrections are expected to have little impact on the stellar radii (and hence on the distances), they are expected to reduce the errors in the derived stellar masses, and hence in the derived ages of giants.
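For later reference we also note the explicit form of the inversion mentioned above. This is our own rewriting, obtained simply by solving equation~\ref{eq:scaling} for $M$ and $R$ in solar units: \begin{equation*} \frac{M}{\rm M_\odot} \simeq \left(\frac{\nu_{\rm max}}{\nu_{\rm max,\odot}}\right)^{3} \left(\frac{\Delta\nu}{\Delta\nu_\odot}\right)^{-4} \left(\frac{T_{\rm eff}}{T_{\rm eff,\odot}}\right)^{3/2}, \qquad \frac{R}{\rm R_\odot} \simeq \left(\frac{\nu_{\rm max}}{\nu_{\rm max,\odot}}\right) \left(\frac{\Delta\nu}{\Delta\nu_\odot}\right)^{-2} \left(\frac{T_{\rm eff}}{T_{\rm eff,\odot}}\right)^{1/2}. \end{equation*} These are the relations applied by the ``direct method'' mentioned above.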
Second, there are other asteroseismic parameters as well -- like for instance the period spacing of mixed modes, $\Delta P$ \citep{beck11, mosser14} -- that can be used to estimate stellar parameters, although not via so easy-to-use scaling relations as those above-mentioned. In this paper, we go beyond the use of simple scaling relations in the estimation of stellar properties via Bayesian methods, first by replacing the \mbox{$\Delta\nu$}\ scaling relation by using frequencies actually computed along the evolutionary tracks, and second by including the period spacing $\mbox{$\Delta P$}$ in the method. We study how the precision and accuracy of the inferred stellar properties improve with respect to those derived from scaling relations, and how they depend on the set of available constraints. The set of additional parameters to be explored includes also the intrinsic stellar luminosity, which will be soon determined for a huge number of stars in the Milky Way thanks to the upcoming Gaia parallaxes \citep[][and references therein]{lindegren16}. The results are tested both on synthetic data and on the star cluster NGC~6819, for which \textit{Kepler} has provided high-quality oscillation spectra for about 50 giants \citep{basu11, stello11_1, corsaro12, handberg16}. The structure of this paper is as follows. Section~\ref{sec:models} presents the grids of stellar models used in this work, describes how the \mbox{$\Delta\nu$}\ and \mbox{$\Delta P$}\ are computed along the evolutionary tracks, and how the same are accurately interpolated in order to generate isochrones. Section~\ref{sec:applications} employs the isochrone sets incorporating the new asteroseismic properties to evaluate stellar parameters by means of a Bayesian approach. The method is tested both on synthetic data and on real data for the NGC~6819 cluster. Section~\ref{sec:close} draws the final conclusions. \section{Models} \label{sec:models} \subsection{Physical inputs} The grid of models was computed using the MESA code \citep{Paxton_etal11,Paxton_etal13}. We computed 21 masses in a range between $M= 0.6-2.5 \mbox{$M_{\odot}$}$, in combination with 7 different metallicities ranging from [Fe/H]$=-1.00$ to $0.50$ (Table \ref{tab:grid})\footnote{According to the simulations by \citet{girardi15}, less than one per cent of the giants in the Kepler fields are expected to have masses larger than 2.5~\mbox{$M_{\odot}$}.}. The following points summarize the relevant physical inputs used: \begin{itemize} \item The tracks were computed starting from the pre-main sequence (PMS) up to the first thermal pulse of the asymptotic giant branch (TP-AGB). \item We adopt \citet{GN93} heavy elements partition. \item The OPAL equation of state \citep{Rogers&Nayfonov02} and OPAL opacities \citep{iglesias96} were used, augmented by low-temperature opacities from \citet{Ferguson_etal05}. C-O enhanced opacity tables were considered during the helium-core burning (HeCB) phase. \item A custom table of nuclear reaction rates was used \citep[NACRE,][]{Angulo_etal99}. \item The atmosphere is taken according to \citet{Krishna-Swamy66} model. \item Convection was treated according to mixing-length theory, using the solar-calibrated parameter ($\alpha_\mathrm{MLT}=1.9657$). \item Overshooting was applied during the core-convective burning phases in accordance with \citet{Maeder75} step function scheme. 
We use overshooting with a parameter of $\mbox{$\alpha_{\rm ovH}$}=0.2 H_p$ during the main sequence, while we consider penetrative convection with $\mbox{$\alpha_{\rm ovHe}$}=0.5 H_p$ in the HeCB phase (following the definitions in \citealt{Zahn91} and the result in \citealt{bossini15}). \item Element diffusion, mass loss, and effects of rotational mixing were not taken into account. \item Metallicities [Fe/H] were converted into mass fractions of heavy elements $Z$ by the approximate formula $Z=Z_{\odot}\cdot10^{\mathrm{[Fe/H]}}$, where $Z_{\odot}=0.01756$ comes from the solar calibration. The initial helium mass fraction $Y$ depends on $Z$ and was set using a linear helium enrichment expression \begin{equation} Y=Y_p+\frac{\Delta Y}{\Delta Z} Z \label{eq:Y_enrich} \end{equation} with the primordial helium abundance $Y_{p} = 0.2485$ and the slope $\Delta Y/\Delta Z =(Y_\odot-Y_p)/Z_\odot =1.007$. Table \ref{tab:grid} shows the relationship between [Fe/H], $Z$, and $Y$ for the tracks computed. \end{itemize} \begin{table} \scriptsize \centering \caption{Initial masses and chemical composition of the computed tracks.\label{tab:grid}} \begin{tabular}{ c } \hline Mass (M$_\odot$) \\ \hline 0.60, 0.80, 1.00, 1.10, 1.20, 1.30, 1.40, 1.50, 1.55, 1.60,\\ 1.65, 1.70, 1.75, 1.80, 2.00, 2.15, 2.30, 2.35, 2.40, 2.45, 2.50 \\ \hline \end{tabular} \begin{tabular}{ c c c } \hline [Fe/H] & $Z$ & $Y$ \\ \hline $-$1.00 & 0.00176 & 0.25027 \\ $-$0.75 & 0.00312 & 0.25164 \\ $-$0.50 & 0.00555 & 0.25409 \\ $-$0.25 & 0.00987 & 0.25844 \\ 0.00 & 0.01756 & 0.26618 \\ 0.25 & 0.03123 & 0.27994 \\ 0.50 & 0.05553 & 0.30441 \\ \hline \end{tabular} \end{table} \subsection{Structure of the grid} \label{sec:grid} To build the tracks actually used in our Bayesian-estimation code, we select from the original tracks computed with MESA about two hundred structures well-distributed in the HR diagram and representing all evolutionary stages. From these models we extract global quantities, such as the age, the photospheric luminosity, the effective temperature (\mbox{$T_{\rm eff}$}), and the period spacing of gravity modes (\mbox{$\Delta P$}, see Section~\ref{sec:deltaP}). In addition, each structure is also used to compute individual radial mode frequencies with GYRE \citep{Townsend&Teitler13} in order to calculate large separations (\mbox{$\Delta\nu$}), as described in Section~\ref{sec:averageDnu}. \subsection{Average large frequency separation} \label{sec:averageDnu} \subsubsection{Determination of the large frequency separation} \label{sec:DetAverageDnu} To a first approximation, the large separation \mbox{$\Delta\nu$}\ can be estimated in the models by equation~\ref{eq:scaling}. However, this estimate can be inaccurate, since it is affected by systematic effects which depend e.g.\ on the evolutionary phase and, more generally, on how the sound speed behaves in the stellar interior. To go beyond the seismic scaling relations, we calculate individual radial-mode frequencies for each of the models in the grid. Based on these frequencies we compute an average large frequency separation $\langle\mbox{$\Delta\nu$}\rangle$. We adopt a definition of $\langle\mbox{$\Delta\nu$}\rangle$ as close as possible to the observational counterpart. The average \mbox{$\Delta\nu$}\ as measured in the observations depends on the number of frequencies identified around \mbox{$\nu_{\rm max}$}\ and on their uncertainties.
Therefore, with the aim of a self-consistent comparison between data and models, any $\langle\mbox{$\Delta\nu$}\rangle$ calculated from stellar oscillation codes must take into account the restrictions given by the observations. \citet{handberg16} estimated the quantity $\Delta\nu_\mathrm{fit}$ for the stars in the {\it Kepler} cluster NGC~6819. In that paper, $\Delta\nu_\mathrm{fit}$ is estimated by a simple linear fit of the individual frequencies (weighted by their errors) as a function of the radial order. The slope of the fitted line gives the estimated \mbox{$\Delta\nu$}. However, the same method cannot be applied to theoretical models since their frequencies have no error bars. Therefore we need to take into account the uncertainties associated with each frequency in order to give them a consistent weight. Observational errors depend primarily on the frequency distance between a given oscillation mode and $\nu_\mathrm{max}$, with a trend that follows approximately the inverse of a Gaussian envelope \citep[smaller errors near \mbox{$\nu_{\rm max}$}, larger errors far away from \mbox{$\nu_{\rm max}$};][]{handberg16}. For this reason we adopt a Gaussian function, as described in \citet{Mosser12}, to calculate the individual weights: \begin{equation} w=\exp\left[-\frac{(\nu-\nu_\mathrm{max})^2}{2\cdot\sigma^2}\right], \label{eq:gauss} \end{equation} where $w$ is the weight associated with the oscillation frequency $\nu$, and \begin{equation} \sigma=0.66\cdot\nu_\mathrm{max}^{0.88} \,\,\,\,. \label{eq:sigma_envelope} \end{equation} The $\langle\mbox{$\Delta\nu$}\rangle$ is then calculated by a linear fit of the radial frequencies $\nu_{n,0}$ as a function of the radial order $n$, with the weights evaluated at each frequency $\nu_{n,0}$. In order to test our estimates, we use the observed frequencies in \citet{handberg16}, simulating their errors using the Gaussian weight function in equation~\ref{eq:gauss}. Figure \ref{fig:dnu_methods} shows the comparison between $\langle\mbox{$\Delta\nu$}_\mathrm{gauss}\rangle$, determined from the method above, and $\langle\mbox{$\Delta\nu$}_\mathrm{fit}\rangle$ estimated in that paper using the actual errors. The method estimates $\langle\mbox{$\Delta\nu$}_\mathrm{gauss}\rangle$ with relative differences within the error bars for the majority of the stars. Although the definition of $\langle\mbox{$\Delta\nu$}\rangle$ may seem a minor technical issue, it plays an important role in avoiding systematic effects on e.g. the mass and age estimates. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{dnu_gauss_vs_fit.eps}} \caption{Comparison between the average large separation $\langle\mbox{$\Delta\nu$}_\mathrm{fit}\rangle$ of the stars in NGC~6819, estimated by linear fitting with the actual errors, and the output of the method described in Section~\ref{sec:DetAverageDnu}, for which the actual errors were substituted by a Gaussian function centred on \mbox{$\nu_{\rm max}$}.} \label{fig:dnu_methods} \end{figure} \subsubsection{Surface effects} It is well known that current stellar models suffer from an inaccurate description of near-surface layers, leading to a mismatch between theoretically predicted and observed oscillation frequencies. These so-called surface effects have a sizable impact also on the large frequency separation, and on its average value. When using model-predicted $\mbox{$\Delta\nu$}$ it is therefore necessary to correct for such effects.
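Before discussing these corrections, we note as an aside that the Gaussian-weighted estimate of $\langle\mbox{$\Delta\nu$}\rangle$ defined in Section~\ref{sec:DetAverageDnu} amounts to a short weighted fit. The sketch below is our own schematic illustration only: the frequencies are made-up placeholders, and the use of \texttt{numpy.polyfit} is an implementation choice of ours, not a description of the actual pipeline.
\begin{verbatim}
import numpy as np

def mean_dnu(nu_n0, n, nu_max):
    """Schematic <Dnu>: weighted linear fit of radial-mode frequencies
    nu_n0 versus radial order n, with Gaussian weights centred on nu_max."""
    sigma = 0.66 * nu_max**0.88                          # envelope width
    w = np.exp(-(nu_n0 - nu_max)**2 / (2.0 * sigma**2))  # Gaussian weight
    slope, _intercept = np.polyfit(n, nu_n0, 1, w=w)     # weighted fit
    return slope

# Toy usage with invented numbers (a red-giant-like star):
n = np.arange(5, 16)                  # radial orders
nu = 9.6 * n + 3.0                    # fake radial-mode frequencies [muHz]
print(mean_dnu(nu, n, nu_max=100.0))  # returns ~9.6 muHz for this toy case
\end{verbatim}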
As usually done, a first attempt at correcting is to use the Sun as a reference, hence by normalising the $\langle\mbox{$\Delta\nu$}\rangle$ of a solar-calibrated model with the observed one. In our solar model, $\alpha_{\rm MLT}$ and $X_{\odot}$ are calibrated to reproduce, at the solar age $t_\odot=4.57$ Gyr, the observed luminosity $L_\odot = 3.8418\cdot10^{33}$ erg s$^{-1}$, the photospheric radius $R_\odot = 6.9598\cdot10^{10}$ cm \citep{Bahcall_etal05}, and the present-day ratio of heavy elements to hydrogen in the photosphere ($Z/X=0.02452$, \citealt{GN93}). We used the same input physics as described in Section~\ref{sec:models}. A comparison between the large frequency separation of our calibrated solar model and that from solar oscillation frequencies \citep{broomhall14} is shown in Fig.~\ref{fig:dnusun}. We find that the predicted average large separation, $\langle\mbox{$\Delta\nu$}_{\odot, \rm mod}\rangle=136.1$~$\mu$Hz (defined cf.~Section~\ref{sec:averageDnu}), is 0.8 per cent larger than the observed one ($\langle\mbox{$\Delta\nu$}_{\odot, \rm obs}\rangle=135.0$~$\mu$Hz). We then follow the approach by \citet{white11} and adopt as a solar reference value that of our calibrated solar model ($\langle\mbox{$\Delta\nu$}\rangle_\odot=\langle\mbox{$\Delta\nu$}\rangle_{\rm mod, \odot}=136.1 \mu$Hz). \begin{figure} \includegraphics[height=\hsize, angle=-90]{dnusolar.eps} \caption{Large frequency separation (\mbox{$\Delta\nu$}) of radial modes as function of frequency, as observed in the Sun (\citealt{broomhall14}, dots connected by a blue line) and in our calibrated solar model (red line). The gray gaussian profile represents the weights given by each point of \mbox{$\Delta\nu$}\ when estimating $\langle\mbox{$\Delta\nu$}\rangle$ (accordingly to the method described in Section~\ref{sec:DetAverageDnu}).} \label{fig:dnusun} \end{figure} This is an approximation which should be kept in mind, and an increased accuracy when using $\langle\mbox{$\Delta\nu$}\rangle$ can only be achieved by both improving our theoretical understanding of surface effects in stars other than the Sun \citep[e.g. see][]{sonoi15, ball16}, and by trying to mitigate surface effects when comparing models and observations. In this respect a way forward would be to determine the star's mean density by using the full set of observed acoustic modes, not just their average frequency spacing. This approach was carried out in at least two RGB stars \citep{huber13, Lillo-Box2014}, and led to determination of the stellar mean density which is $\sim 5-6$ per cent higher than derived from assuming scaling relations, and with a much improved precision of $\sim 1.4$ per cent. Furthermore, the impact of surface effects on the inferred mean density is mitigated when determining the mean density using individual mode frequencies rather than using the average large separation \citep[e.g., see][]{chaplinmiglio13}. This approach is however not yet feasible for populations studies, mostly because individual mode frequencies are not available yet for such large ensembles, but it is a path worth pursuing to improve both precision and accuracy of estimates of the stellar mean density. \subsubsection{$\Delta\nu$: deviations from simple scaling} Small-scale deviations from the $\langle\mbox{$\Delta\nu$}\rangle$ scaling relation have been investigated in several papers. 
This is usually done by comparing how well the model-predicted $\langle\mbox{$\Delta\nu$}\rangle$ scales with $\overline{\rho}^{1/2}$, taking the Sun as a reference point \citep[see ][]{white11, miglio12_1,miglio13,brogaard16, miglio16, guggenberger16, sharma16, handberg16}. Such deviations may be expected primarily for two reasons. First, stars in general are not homologous to the Sun, hence the sound speed in their interior (and thus the total acoustic travel time) does not simply scale with mass and radius only. Second, the oscillation modes detected in stars do not adhere to the asymptotic approximation to the same degree as in the Sun \citep[see e.g.][for a more detailed explanation]{Belkacem2013}. The combination of these two factors is what eventually determines a deviation from the scaling relation itself. Cases where a small correction is expected are likely the result of a fortuitous cancellation of the two effects (e.g. in RC stars). We would like to stress that, beyond trends with global properties, such corrections are also expected to be evolutionary-state and mass dependent, as discussed e.g.\ in \citet{miglio12_1}, \citet{miglio13}, and \citet{christensendalsgaard14}. As pointed out in these papers, the mass distribution is very different inside stars with the same mass and radius but in the RGB or RC phases. An RGB model has a central density $\sim\!10$ times higher than an RC one; the former has a radiative degenerate core of He, while the latter has a very small convective core inside a He-core. The mass coordinate of the He-core is roughly a factor 2 larger for the RC model, while the fractional radius of this core is very small ($\sim 2.5-6 \times 10^{-3}$) in both cases. The frequencies of radial modes are dominated by the properties of the envelope, whose temperatures are not very different in the two cases. How does the difference in the deep interior of the star then affect the relation between the mean density and the seismic parameter? As suggested in the above-mentioned papers, the different distribution of mass implies a lower density of the envelope of the RC model with respect to the RGB one, and hence a different sound speed in the regions effectively probed by radial oscillations. As shown by \citet[][and references therein]{ledoux58}, the oscillation frequencies of radial modes depend not only on the mean density of the star, but also on the mass concentration, with mode frequencies (and hence separations) increasing with mass concentration. Although in the RGB model the central density is 10 times larger than in the RC one, the latter is a more concentrated model since, for 1 M$_{\odot}$ for instance, half of the stellar mass is inside a few thousandths of its radius. Moreover, as mass concentration increases, the oscillation modes tend to propagate in more external layers. Hence, not only does the envelope of the RC model have a lower density, but the eigenfunctions also propagate in more external regions than in RGB stars. The adiabatic sound speed of the regions probed by these oscillation modes is smaller in the RC than in the RGB model, leading to differences in the large frequency separation, and to corrections with respect to the scaling relation. Figure \ref{fig:gridnmaxdnu} shows the ratio between the large separation obtained from the scaling relation, $\mbox{$\Delta\nu$}_\mathrm{scal.}$, and $\langle\mbox{$\Delta\nu$}\rangle$ (calculated as described in Sec. \ref{sec:DetAverageDnu}) as a function of \mbox{$\nu_{\rm max}$}\ for a large number of tracks in our computed grid.
These panels illustrate the dependence of the $\Delta\nu$ corrections on mass, evolutionary state, and also on the chemical composition, which affects the mass distribution inside the star. \begin{figure*} \begin{minipage}{0.4\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{mass_legend.eps}} \textcolor{white}{\bf{\\0\\}} \resizebox{0.9\textwidth}{!}{\includegraphics{nmax_2.eps}} \textcolor{white}{\bf{\\0\\}} \resizebox{0.9\textwidth}{!}{\includegraphics{nmax_4.eps}} \textcolor{white}{\bf{\\0\\}} \resizebox{0.9\textwidth}{!}{\includegraphics{nmax_6.eps}} \textcolor{white}{\bf{\\0\\}} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{nmax_1.eps}} \textcolor{white}{\bf{\\0\\}} \resizebox{0.9\textwidth}{!}{\includegraphics{nmax_3.eps}} \textcolor{white}{\bf{\\0\\}} \resizebox{0.9\textwidth}{!}{\includegraphics{nmax_5.eps}} \textcolor{white}{\bf{\\0\\}} \resizebox{0.9\textwidth}{!}{\includegraphics{nmax_7.eps}} \textcolor{white}{\bf{\\0\\}} \end{minipage} \caption{Correction to the scaling-relation \mbox{$\Delta\nu$}\ as a function of \mbox{$\nu_{\rm max}$}, for a subset of the grid of tracks presented in Section~\ref{sec:grid}.} \label{fig:gridnmaxdnu} \end{figure*} As shown in Fig.~\ref{fig:gridnmaxdnu}, the deviation of $\Delta\nu$ with respect to the scaling relation tends to low values for stars in the secondary clump. We must keep in mind, however, that the masses of stars populating the secondary clump depend on the mixing processes that occurred during the previous main-sequence phase, and also on the chemical composition, that is, the metallicity and the initial mass fraction of He. Therefore, a straightforward parametrization of the correction as a function of mass and metallicity is not possible. \subsection{Period spacing} \label{sec:deltaP} It has been shown by \citet{Mosser12a} that it is possible to infer the asymptotic period spacing of a star by fitting a simple pattern to its oscillation spectrum. This is particularly relevant for those stars that present a rich forest of dipole modes ($l=1$), like, for instance, the red giants. The asymptotic theory of stellar oscillations tells us that the g-modes obey an asymptotic relation in which their periods are equally spaced by $\mbox{$\Delta P$}_{l}$. The relation states that the asymptotic period spacing is proportional to the inverse of the integral of the Brunt-V\"ais\"al\"a\ frequency $N$ inside the trapping cavity: \begin{equation} \mbox{$\Delta P$}_{l} = \dfrac{2\pi^2}{\sqrt{l(l+1)}}\left(\displaystyle\int^{r_2}_{r_1}\dfrac{N}{r}\mathrm{d} r\right)^{-1}, \label{eq:DPg} \end{equation} where $r_1$ and $r_2$ are the radial coordinates of the turning points that limit the cavity. It is easy to see that its value depends, among other things, on the size and the position of the internal cavity, a fact that will become particularly relevant in the helium-core-burning phase, given the uncertainties on core convection \citep{Montalban_etal13,bossini15}. On the RGB the period spacing is an excellent tool to set constraints on other stellar quantities, like radius and luminosity (see for instance \citealt{lagarde16} and \citealt{davies16}). Moreover, the period spacing gives an easy and immediate discrimination between stars in the helium-core-burning and RGB phases, since the former have a \mbox{$\Delta P$}\ systematically larger than the latter by about $\sim 200-300$~s, while after the early-AGB phase it decreases to similar or smaller values.
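To make equation~\ref{eq:DPg} concrete, the short sketch below evaluates the dipole ($l=1$) asymptotic period spacing by direct numerical integration. It is purely illustrative and entirely ours: the Brunt-V\"ais\"al\"a\ profile is an arbitrary toy function, not a MESA structure, and the numbers are chosen only to give a value of a plausible order of magnitude.
\begin{verbatim}
import numpy as np

# Asymptotic period spacing: DeltaP_l = 2*pi^2/sqrt(l(l+1)) / int(N/r dr),
# integrated over the g-mode cavity [r1, r2].  Toy N(r) profile, r in cm
# and N in rad/s; NOT an actual stellar model.
l = 1
r = np.linspace(1.0e7, 5.0e9, 2000)      # radius inside the cavity
N = 1.5e-2 * np.exp(-r / 1.0e9)          # toy Brunt-Vaisala frequency

f = N / r
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))   # trapezoidal rule
delta_P = 2.0 * np.pi**2 / np.sqrt(l * (l + 1)) / integral
print("asymptotic Delta P_1 = %.0f s" % delta_P)   # a few hundred seconds
\end{verbatim}
In a grid pipeline the same integral is simply evaluated on the $N(r)$ profile of each stored stellar structure.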
\subsection{A quick introduction to grid-based and Bayesian methods} Having introduced the way \mbox{$\Delta\nu$}\ (hereafter, to simplify, $\mbox{$\Delta\nu$}=\langle\mbox{$\Delta\nu$}\rangle$) and \mbox{$\Delta P$}\ are computed in the grids of tracks, let us first recall how they enter the grid-based and Bayesian methods. In the so-called direct methods, the asteroseismic quantities are used to provide estimates of stellar parameters and their errors, by directly entering them either in formulas (like the scaling relations of equation~\ref{eq:scaling}) or in 2D diagrams built from grids of stellar models. In grid-based methods with Bayesian inference, this procedure is improved by weighting all possible models and by updating the probability with additional information about the data set, described approximately as: \begin{equation} p(\mathbf{x}|\mathbf{y})\sim p(\mathbf{y}|\mathbf{x}) p(\mathbf{x}), \label{eq_ppdf} \end{equation} where $p(\mathbf{x}|\mathbf{y})$ is the posterior probability density function (PDF), $p(\mathbf{y}|\mathbf{x})$ is the likelihood function, which makes the connection between the measured data $\mathbf{y}$ and the models, described as a function of the parameters to be derived $\mathbf{x}$, and $p(\mathbf{x})$ is the prior probability function, which describes the knowledge about the derived parameters available before the measurement. The uncertainties of the measured data are usually described as normal distributions, therefore the likelihood function is written as \begin{equation} p(\mathbf{y^\prime}|\mathbf{x}) = \prod_i \frac{1}{\sqrt{2\pi}\sigma_{y_i}} \times \exp{\left(\frac{-(y_i^\prime-y_i)^2}{2\sigma_{y_i}^2} \right)}, \label{eg:likelihood} \end{equation} where ${y_i^\prime}$ and $\sigma_{y_i}$ are the measured value and its standard deviation, for each of the quantities considered in the data set. In order to obtain the stellar quantity $x_i$, the posterior PDF is then integrated over all parameters except $x_i$, resulting in a PDF for this parameter. For each PDF, a central tendency (mean, mode, or median) is calculated together with its credible intervals. Therefore this method requires not only trusting the stellar evolutionary models but also adopting a minimum set of reasonable priors (in stellar age, mass, etc.). In addition, to avoid the scaling relations, the method requires the asteroseismic quantities to be tabulated along a set of stellar models covering the complete relevant interval of masses, ages, and metallicities. \subsection{Interpolating the \texorpdfstring{\mbox{$\Delta\nu$}}{Dnu} deviations to make isochrones} \begin{figure} \resizebox{0.85\hsize}{!}{\includegraphics{tracks_isoc.eps}} \resizebox{0.85\hsize}{!}{\includegraphics{numax_ddnu.eps}} \resizebox{0.85\hsize}{!}{\includegraphics{dnu_dpi.eps}} \caption{MESA evolutionary tracks color-coded according to mass in the HR (top panel), $\mbox{$\Delta\nu$}/\mbox{$\Delta\nu$}_\text{SR}$ {\it versus} \mbox{$\nu_{\rm max}$} (middle), and \mbox{$\Delta P$}\ {\it versus} \mbox{$\Delta\nu$}\ (bottom) diagrams. The solid and the dashed black lines are examples of interpolated isochrones of 2 and 10~Gyr, respectively.} \label{fig_tracks_isoc} \end{figure} The \mbox{$\Delta\nu$}\ values computed along the tracks appropriately sample stars in the most relevant evolutionary stages, and over the interval of mass and metallicity considered in this work.
However, in order to be useful in Bayesian codes, a further step is necessary: such calculations need to be interpolated for {\em any} intermediate value of evolutionary stage, mass, and metallicity. This would allow us to derive detailed isochrones, that can enter easily in any estimation code which involves age as a parameter. Needless to say, such isochrones may find many other applications. The computational framework to perform such interpolations is already present in our isochrone-making routines, which are described elsewhere \citep[see][]{marigo16}. In short, the following steps are performed: our code reads the evolutionary tracks of all available initial masses and metallicities; these tracks contain age ($\tau$), luminosity ($L$), \mbox{$T_{\rm eff}$}, \mbox{$\Delta\nu$}, and \mbox{$\Delta P$}\ from the ZAMS until TP-AGB. These quantities are interpolated between the tracks, for any intermediate value of initial mass and metallicity, by performing linear interpolations between pairs of ``equivalent evolutionary points'', i.e.,\ points in neighbouring tracks which share similar evolutionary properties. An isochrone is then built by simply selecting a set of interpolated points for the same age and metallicity. In the case of \mbox{$\Delta\nu$}, the interpolation is done in the quantity $\mbox{$\Delta\nu$}/\mbox{$\Delta\nu$}_\text{SR}$, where $\mbox{$\Delta\nu$}_\text{SR}$ is the value defined by the scaling relation in equation~\ref{eq:scaling}. In fact, $\mbox{$\Delta\nu$}/\mbox{$\Delta\nu$}_\text{SR}$ varies along the tracks in a much smoother way, and has a much more limited range of values than the $\mbox{$\Delta\nu$}$ itself; therefore the multiple interpolations of its value among the tracks also produce well-behaved results. Of course, in the end the interpolated values of $\mbox{$\Delta\nu$}/\mbox{$\Delta\nu$}_\text{SR}$ are converted into $\mbox{$\Delta\nu$}$, for every point in the generated isochrones. Figure \ref{fig_tracks_isoc} shows a set of evolutionary tracks until the TP-AGB phase in the range $[0.60,1.75]$~\mbox{$M_{\odot}$}\ for $\mbox{\rm [{\rm Fe}/{\rm H}]}=0.25$ ($Z=0.03123$) and interpolated isochrones of 2 and 10~Gyr both in the Hertzsprung--Russell (HR), the ratio $\mbox{$\Delta\nu$}/\mbox{$\Delta\nu$}_\text{SR}$ {\it versus} \mbox{$\nu_{\rm max}$}, and the \mbox{$\Delta P$}\ {\it versus} \mbox{$\Delta\nu$}\ diagrams. The middle panel shows the deviation of the scaling \mbox{$\Delta\nu$}\ of few percents mainly over the RGB and early-AGB phases. Deviations at the stages of main sequence and core helium burning are generally smaller than one per cent. The Fig.~\ref{fig_tracks_isoc} also shows that our interpolation scheme works very well, with the derived isochrones reproducing the behaviour expected from the evolutionary tracks. No similar procedure was necessary for the interpolation in \mbox{$\Delta P$}, since it does not follow any simple scaling relation, and it varies much more smoothly and covering a smaller total range than \mbox{$\Delta\nu$}. The interpolations of \mbox{$\Delta P$}\ are simply linear ones using parameters such as mass, age (along the tracks), and initial metallicity as the independent parameters. \section{Applications} \label{sec:applications} We derived the stellar properties using the Bayesian tool PARAM \citep{dasilva06, rodrigues14}. 
From the measured data -- \mbox{$T_{\rm eff}$}, \mbox{\rm [{\rm M}/{\rm H}]}, \mbox{$\Delta\nu$}, and \mbox{$\nu_{\rm max}$} -- the code computes PDFs for the stellar parameters: $M$, $R$, $\log{g}$, mean density, and absolute magnitudes in several passbands; as a second step, it combines apparent and absolute magnitudes to derive extinctions $A_V$ in the $V$ band and distances $d$. The code uses a flat prior for metallicity and age, while for the mass the \citet{chabrier01} initial mass function was adopted, with a correction for the small amount of mass lost near the tip of the RGB, computed from the \citet{reimers75} law with efficiency parameter $\eta=0.2$ \citep[cf.][]{miglio12_2}. The code also has a prior on evolutionary stage that, when applied, separates the isochrones into 3 groups: `core-He burners' (RC), `non-core He burners' (RGB/AGB), and `RGB only' (up to the tip of the RGB). The statistical method and some applications are described in detail in \citet{rodrigues14}. We extended the code to read the additional seismic information of the MESA models described in Section~\ref{sec:models}. We implemented new variables to be taken into account in the likelihood function (equation~\ref{eg:likelihood}), such as \mbox{$\Delta\nu$}\ from the model frequencies, \mbox{$\Delta P$}, $\log{g}$, and luminosity. Hence the entire set of measured data is \begin{equation} \mathbf{y}=(\mbox{\rm [{\rm M}/{\rm H}]},\mbox{$T_{\rm eff}$},\mbox{$\Delta\nu$},\mbox{$\nu_{\rm max}$},\mbox{$\Delta P$},\log{g},L), \nonumber \end{equation} where \mbox{$\Delta\nu$}\ can still be computed using the standard scaling relation (hereafter \mbox{$\Delta\nu$}(SR)). Therefore PARAM is now able to compute stellar properties using several different input configurations, i.e., the code can be set to use different combinations of measured data. Some interesting cases are, together with \mbox{$T_{\rm eff}$}\ and \mbox{\rm [{\rm M}/{\rm H}]}, \begin{itemize} \item \mbox{$\Delta\nu$}\ and \mbox{$\nu_{\rm max}$}\ from the scaling relations (equation~\ref{eq:scaling}); \item \mbox{$\Delta\nu$}\ from model frequencies and \mbox{$\nu_{\rm max}$}\ from the scaling relation; \item \mbox{$\Delta\nu$}\ (either from model frequencies or from the scaling relation), together with some other asteroseismic parameter, such as \mbox{$\Delta P$}; \item $\log{g}$; \item any of the previous options together with the addition of a constraint on the stellar luminosity. \end{itemize} The first two cases constitute the main improvement we consider in this paper, which has already been the subject of significant attention in the literature \citep[see e.g.][]{sharma16, guggenberger16}. The third case is particularly important given the fact that the \mbox{$\nu_{\rm max}$}\ scaling relation is basically empirical and may still reveal small offsets in the future. Finally, the fourth and fifth cases are aimed at exploring, respectively, the effect of lacking seismic information, when only spectroscopic data are available for a given star, and of adding independent information to the method, such as the known distance of a cluster or the upcoming Gaia parallaxes. \subsection{Tests with artificial data} \label{sec:artificial} To test the precision that we could reach with a typical set of observational constraints available for {\it Kepler} stars, we have chosen 6 models from our grid of models and considered various combinations of seismic, astrometric, and spectroscopic constraints (see Table \ref{tab:artificial}).
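Before describing these tests in detail, we sketch in a few lines how a grid-based posterior is assembled in practice from equations~\ref{eq_ppdf} and~\ref{eg:likelihood}. The snippet is our own schematic illustration and is not the PARAM implementation; the grid, the observed values, and the column names are all invented placeholders.
\begin{verbatim}
import numpy as np

def posterior_weights(grid, obs, err, prior=None):
    """Gaussian log-likelihood over a model grid, times an optional prior,
    normalised to unit sum (schematic illustration only)."""
    n_models = len(next(iter(grid.values())))
    lnlike = np.zeros(n_models)
    for key, value in obs.items():
        lnlike += -0.5 * ((grid[key] - value) / err[key]) ** 2
    w = np.exp(lnlike - lnlike.max())
    if prior is not None:
        w = w * prior
    return w / w.sum()

# Toy grid of three models and one fake observed star (placeholder numbers).
grid = {"Teff": np.array([4750.0, 4800.0, 4900.0]),    # K
        "Dnu":  np.array([4.05, 4.10, 4.30]),          # muHz
        "numax": np.array([33.0, 34.0, 36.0]),         # muHz
        "mass": np.array([1.10, 1.20, 1.35]),          # Msun
        "age":  np.array([6.5, 5.0, 3.8])}             # Gyr
obs = {"Teff": 4800.0, "Dnu": 4.12, "numax": 34.2}
err = {"Teff": 80.0, "Dnu": 0.02, "numax": 0.5}

w = posterior_weights(grid, obs, err)
print("posterior mean mass [Msun]:", np.sum(w * grid["mass"]))
print("posterior mean age  [Gyr] :", np.sum(w * grid["age"]))
\end{verbatim}
In the real case the grid is the full set of isochrone points described in Section~\ref{sec:models}, the weights include the priors discussed above, and the marginalised PDFs (rather than simple means) are the quantities reported below.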
The seismic constraints taken from the artificial data are \mbox{$\Delta\nu$}, \mbox{$\nu_{\rm max}$}, and \mbox{$\Delta P$}. The latter is used by taking its asymptotic value as an additional constraint in equation~\ref{eg:likelihood}, and not only as a discriminant for the evolutionary phase, as done in previous works \citep[e.g.][]{rodrigues14}. Uncertainties on \mbox{$\Delta\nu$}\ and \mbox{$\nu_{\rm max}$}\ were taken from \citet{handberg16} and on \mbox{$\Delta P$}\ from \citet{Vrard2016}. We adopted 0.2~dex as the uncertainty on $\log{g}$, based on typical values from spectroscopy. For the luminosity, we adopted uncertainties of the order of 3 per cent based on Gaia parallaxes, where a significant fraction of the uncertainty comes from bolometric corrections \citep{reese16}. We derived stellar properties using 11 different combinations as input to PARAM, in all cases using \mbox{$T_{\rm eff}$}\ and \mbox{\rm [{\rm Fe}/{\rm H}]}, as follows: \begin{enumerate}[i] \item \mbox{$\Delta\nu$}\ -- only \mbox{$\Delta\nu$}\ from model frequencies; \label{item:first} \item \mbox{$\Delta\nu$}\ and \mbox{$\nu_{\rm max}$}\ -- to compare with the previous item, in order to test whether we can eliminate the use of \mbox{$\nu_{\rm max}$}; \label{item:second} \item \mbox{$\Delta\nu$}(SR) and \mbox{$\nu_{\rm max}$}\ -- traditional scaling relations, to compare with the previous item and to correct for the offset introduced by using the \mbox{$\Delta\nu$}\ scaling relation; \label{item:third} \item \mbox{$\Delta\nu$}\ and \mbox{$\Delta P$}\ -- in order to test whether we can eliminate the use of \mbox{$\nu_{\rm max}$}\ and improve the precision by using the period spacing not only as a prior, but as a measured quantity; \label{item:fourth} \item \mbox{$\Delta\nu$}, \mbox{$\nu_{\rm max}$}, and \mbox{$\Delta P$}\ -- using all the asteroseismic data available; \label{item:fifth} \item \mbox{$\Delta\nu$}, \mbox{$\Delta P$}, and $L$ -- in order to test whether we can eliminate the use of \mbox{$\nu_{\rm max}$}\ when the luminosity is available (from photometry plus parallaxes); \label{item:sixth} \item \mbox{$\Delta\nu$}, \mbox{$\nu_{\rm max}$}, \mbox{$\Delta P$}, and $L$ -- using all the asteroseismic data available plus the luminosity, simulating the future data for stars with seismic observations that are also observed by Gaia; \label{item:seventh} \item \mbox{$\nu_{\rm max}$}\ and $L$ -- for the case in which it is not possible to derive \mbox{$\Delta\nu$}\ from the light curves, simulating possible data from the K2 and Gaia surveys; \label{item:eighth} \item $\log{g}$ and $L$ -- for the case in which only spectroscopic data are available (in addition to $L$); \label{item:ninth} \item \mbox{$\Delta\nu$}\ and $\log{g}$ -- again in order to test whether we can eliminate the use of \mbox{$\nu_{\rm max}$}, replacing it by the spectroscopic $\log g$; \label{item:tenth} \item \mbox{$\Delta\nu$}\ and $L$ -- again in order to test whether we can eliminate the use of \mbox{$\nu_{\rm max}$}\ when the luminosity is available. \label{item:eleventh} \end{enumerate} In all cases, the prior on evolutionary stage was also tested. The resulting mass and age PDFs for each artificial star are presented using violin plots\footnote{Violin plots are similar to box plots, but show the smoothed probability density function.} in Figures~\ref{fig:pdfmass} and \ref{fig:pdfage}, respectively.
The $x$ axis indicates each combination of input parameters, as discussed before; the left side of the violin (cyan color) represents the resulting PDF when the prior on evolutionary stage is applied, while the right side (white color) shows the result when the prior is not used. The black dots and error bars represent the mode and the 68 per cent credible intervals of the PDF with the prior on evolutionary stage (cyan distributions). \begin{figure*} \begin{minipage}{\columnwidth} \resizebox{\hsize}{!}{\includegraphics{pdfs_m_S1_s1e4.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_m_S2_s1e4.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_m_S3_s1e4.eps}} \end{minipage} \begin{minipage}{\columnwidth} \resizebox{\hsize}{!}{\includegraphics{pdfs_m_S4_s1e4.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_m_S5_s1e4.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_m_S6_s1e4.eps}} \end{minipage} \caption{PDFs of mass for the six artificial stars presented in Table~\ref{tab:artificial}, shown as violin plots. Each panel shows the results for one star, whose name and evolutionary stage are indicated at the top. The $x$ axis indicates each combination of input parameters for the PARAM code, as described in Section~\ref{sec:artificial}. The left side of the violin (cyan color) represents the resulting PDF when the prior on evolutionary stage is applied, while the right side (white color) shows the result when the prior is not used. The black dots and error bars represent the mode and the 68 per cent credible intervals of the PDF with the prior on evolutionary stage (cyan distributions). The dashed line indicates the mass of the artificial stars.} \label{fig:pdfmass} \end{figure*} \begin{figure*} \begin{minipage}{\columnwidth} \resizebox{\hsize}{!}{\includegraphics{pdfs_age_S1_s1e4.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_age_S2_s1e4.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_age_S3_s1e4.eps}} \end{minipage} \begin{minipage}{\columnwidth} \resizebox{\hsize}{!}{\includegraphics{pdfs_age_S4_s1e4.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_age_S5_s1e3.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_age_S6_s1e3.eps}} \end{minipage} \caption{The same as Fig.~\ref{fig:pdfmass}, but for the logarithm of the ages. The right $y$-axis gives the age in Gyr. The dashed line indicates the age of the artificial stars.} \label{fig:pdfage} \end{figure*} \begin{figure*} \begin{minipage}{\columnwidth} \resizebox{\hsize}{!}{\includegraphics{pdfs_m_S2_eta2_4_s1e4.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_age_S2_eta2_4_s1e4.eps}} \\ \end{minipage} \begin{minipage}{\columnwidth} \resizebox{\hsize}{!}{\includegraphics{pdfs_m_S5_eta2_4_s1e4.eps}} \\ \resizebox{\hsize}{!}{\includegraphics{pdfs_age_S5_eta2_4_s1e3.eps}} \\ \end{minipage} \caption{PDFs of masses (top panels) and ages (bottom panels) for the artificial RC stars S2 and S5 presented in Table~\ref{tab:artificial}, shown as violin plots. The left side of the violin (cyan color) represents the resulting PDF with the mass-loss efficiency parameter $\eta=0.2$, while the right side (white color) shows $\eta=0.4$. The black dots and error bars represent the mode and the 68 per cent credible intervals of the PDF with $\eta=0.2$ (cyan distributions).
The dashed lines indicate the masses and the ages of the artificial stars.} \label{fig:pdf_m_age_eta} \end{figure*} \begin{table*} \scriptsize \centering \caption{Set of artificial data considered in Section~\ref{sec:artificial}.} \setlength{\tabcolsep}{4pt} \begin{tabular}{ ccc|cccccccc } \hline Label & $M$/M$_\odot$ & $\log$Age/yr & \mbox{$T_{\rm eff}$}\ (K) & \mbox{\rm [{\rm Fe}/{\rm H}]} & $\log{g}$ &$L$/L$_\odot$ & \mbox{$\nu_{\rm max}$}\, ($\mu$Hz)& \mbox{$\Delta\nu$}\, ($\mu$Hz)& \mbox{$\Delta P$}\, (s) & Ev. State \\ \hline S1 & 1.00 & 9.8379 & 4813$\pm$70 & -0.75$\pm$0.1 & 2.38$\pm$0.20 & 54.77$\pm$1.64 & 30.26$\pm$0.58 & 3.76$\pm$0.05 & 61.40$\pm$0.61 & RGB \\ S2 & 1.00 & 9.8445 & 5046$\pm$70 & -0.75$\pm$0.1 & 2.39$\pm$0.20 & 64.52$\pm$1.94 & 30.31$\pm$0.58 & 4.04$\pm$0.05 & 304.20$\pm$3.04 & RC \\ S3 & 1.60 & 9.3383 & 4830$\pm$70 & 0.0$\pm$0.1 & 2.92$\pm$0.20 & 25.57$\pm$0.77 & 105.01$\pm$1.83 & 8.66$\pm$0.05 & 70.80$\pm$0.71 & RGB \\ S4 & 1.60 & 9.3461 & 4656$\pm$70 & 0.0$\pm$0.1 & 2.55$\pm$0.02 & 51.36$\pm$1.54 & 45.99$\pm$0.84 & 4.56$\pm$0.05 & 62.00$\pm$0.62 & RGB \\ S5 & 1.60 & 9.3623 & 4769$\pm$70 & 0.0$\pm$0.1 & 2.54$\pm$0.20 & 58.40$\pm$1.75 & 43.97$\pm$0.81 & 4.60$\pm$0.05 & 268.30$\pm$2.68 & RC \\ S6 & 2.35 & 8.9120 & 5003$\pm$70 & 0.0$\pm$0.1 & 2.85$\pm$0.20 & 51.41$\pm$1.54 & 86.79$\pm$1.54 & 6.86$\pm$0.05 & 251.20$\pm$2.51 & RC \\ \hline \end{tabular} \label{tab:artificial} \end{table*} In most cases, we recover the stellar masses and ages within the 68 per cent credible intervals. Using only \mbox{$\Delta\nu$}\ results in wider and more skewed PDFs (case \ref{item:first} in the plots), while adding \mbox{$\nu_{\rm max}$}\ confines the solution to a much smaller region (cases \ref{item:second} and \ref{item:third}). When \mbox{$\Delta P$}\ is also included, the solution is constrained even further (case \ref{item:fifth}). In most cases, the combination of \mbox{$\Delta\nu$}\ and \mbox{$\nu_{\rm max}$}\ provides narrower PDFs than \mbox{$\Delta\nu$}\ and \mbox{$\Delta P$}, which indicates that \mbox{$\Delta P$}\ does not constrain the solution as tightly as \mbox{$\nu_{\rm max}$}\ (cases \ref{item:second} and \ref{item:fourth}), even for RC stars. As expected, adding further information, such as the luminosity, narrows the search region in parameter space; the narrowest PDFs are obtained when all asteroseismic parameters and the luminosity are combined (case \ref{item:seventh}). The use of only \mbox{$\nu_{\rm max}$}\ and luminosity (case \ref{item:eighth}) is very interesting, because it provides PDFs slightly narrower than the typical combination of \mbox{$\Delta\nu$}\ and \mbox{$\nu_{\rm max}$}\ or of \mbox{$\Delta\nu$}\ and luminosity (case \ref{item:eleventh}), and similar to those of cases \ref{item:fifth} and \ref{item:sixth}. The lack of asteroseismic information (case \ref{item:ninth}) worsens the situation, providing significantly larger error bars than most other cases, simply because of the large uncertainties on gravity coming from the spectroscopic analysis. Case \ref{item:tenth} results in PDFs very similar to those of case \ref{item:first}. This is to be expected: including \mbox{$\Delta\nu$}\ as a constraint (case \ref{item:first}) leads to a typical $\sigma(\log g) \simeq 0.02$ dex (see also the discussion in \citealt{Morel14}, page 4), i.e., adding the spectroscopic $\log g$ ($\sigma(\log g) \simeq 0.2$ dex) as a constraint (case \ref{item:tenth}) has a negligible impact on the PDFs.
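For orientation, the constraining power of the \mbox{$\Delta\nu$}--\mbox{$\nu_{\rm max}$}\ combination can also be appreciated from the usual direct inversion of the scaling relations, sketched below with typical solar reference values (these reference values are quoted only for illustration and may differ slightly from the ones adopted internally; PARAM performs a full Bayesian fit over the model grid rather than this direct inversion):
\begin{verbatim}
def scaling_mass_radius(dnu, numax, teff,
                        dnu_sun=135.1, numax_sun=3090.0, teff_sun=5777.0):
    # Direct scaling-relation estimates of M/Msun and R/Rsun.
    # dnu and numax in muHz, teff in K; the solar reference values are
    # typical literature choices, used here only for illustration.
    mass = ((numax / numax_sun) ** 3 * (dnu / dnu_sun) ** -4
            * (teff / teff_sun) ** 1.5)
    radius = ((numax / numax_sun) * (dnu / dnu_sun) ** -2
              * (teff / teff_sun) ** 0.5)
    return mass, radius
\end{verbatim}
Since the inferred mass scales with the inverse fourth power of \mbox{$\Delta\nu$}\ and the third power of \mbox{$\nu_{\rm max}$}, per-cent-level differences between the model-computed \mbox{$\Delta\nu$}\ and \mbox{$\Delta\nu$}(SR) translate into several per cent in mass, which is why cases \ref{item:second} and \ref{item:third} can differ appreciably, especially for RGB stars.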
Finally, the prior on evolutionary stage does not change the shape of the PDFs in almost all cases, except for the RGB star S4. Regarding this case, it is interesting to note that S4 and S5 have similar \mbox{$\Delta\nu$}\ and \mbox{$\nu_{\rm max}$}, but different \mbox{$\Delta P$}, that is, they are in a region of the \mbox{$\Delta\nu$}\ versus \mbox{$\nu_{\rm max}$}\ diagram that is crossed by both RC and RGB evolutionary paths. In such cases, not knowing the evolutionary stage forces the Bayesian code to consider all sections of the evolutionary paths, i.e.\ a large parameter space, which often causes the PDFs to become multi-peaked or spread over all possible solutions, as in cases \ref{item:ninth} and \ref{item:tenth}. Further examples of this effect are given in figure 5 of \citet{rodrigues14}. Knowing the evolutionary stage, instead, restricts the Bayesian code to weighting just a fraction of the available evolutionary paths, hence limiting the parameter space to be explored and, occasionally, producing narrower PDFs. This is what happens for star S4, which, despite being an RGB star of 1.6~\mbox{$M_{\odot}$}, happens to have asteroseismic parameters too similar to those of the more long-lived RC stars of masses $\sim1.1$~\mbox{$M_{\odot}$}. Table~\ref{tab:rel_unc} presents the average relative mass and age uncertainties for RGB and RC stars, which summarizes well the qualitative description given above. Cases \ref{item:first} (very similar to case \ref{item:tenth}) and \ref{item:ninth} result in the largest uncertainties: 17 and 12 per cent for RGB masses, and 8 and 11 per cent for RC masses; up to 70 and 40 per cent for RGB ages, and 22 and 31 per cent for RC ages, respectively. Going from \mbox{$\Delta\nu$}\ and \mbox{$\nu_{\rm max}$}\ (case \ref{item:second}) to the addition of the period spacing and luminosity (case \ref{item:seventh}), the uncertainties decrease from 8 to 3 per cent for RGB masses and from 5 to 3 per cent for RC masses, and from 29 to 10 per cent for RGB ages and from 14 to 8 per cent for RC ages. It is remarkable that we can also achieve a precision of around 10 per cent on ages using \mbox{$\nu_{\rm max}$}\ and luminosity (case \ref{item:eighth}), and of 15 per cent using \mbox{$\Delta\nu$}\ and luminosity (case \ref{item:eleventh}).
\begin{table} \scriptsize \centering \caption{Average relative uncertainties for each combination of input parameters for the PARAM code, as described in Section~\ref{sec:artificial}.} \setlength{\tabcolsep}{4pt} \begin{tabular}{cccccc} \hline \multicolumn{2}{c}{Case} & \multicolumn{2}{c}{$<\sigma M/M>$} & \multicolumn{2}{c}{$<\sigma \text{Age}/\text{Age}>$} \\ \noalign{\smallskip} & & RGB & RC & RGB & RC \\ \hline \ref{item:first} & \mbox{$\Delta\nu$}\ & 0.173 & 0.077 & 0.734 & 0.217 \\ \ref{item:second} & \mbox{$\Delta\nu$}, \mbox{$\nu_{\rm max}$}\ & 0.078 & 0.045 & 0.284 & 0.144 \\ \ref{item:third} & \mbox{$\Delta\nu$}(SR), \mbox{$\nu_{\rm max}$}\ & 0.061 & 0.047 & 0.220 & 0.146 \\ \ref{item:fourth} & \mbox{$\Delta\nu$}, \mbox{$\Delta P$}\ & 0.109 & 0.052 & 0.336 & 0.181 \\ \ref{item:fifth} & \mbox{$\Delta\nu$}, \mbox{$\nu_{\rm max}$}, \mbox{$\Delta P$}\ & 0.054 & 0.030 & 0.192 & 0.109 \\ \ref{item:sixth} & \mbox{$\Delta\nu$}, \mbox{$\Delta P$}, $L$ & 0.043 & 0.035 & 0.122 & 0.101 \\ \ref{item:seventh} & \mbox{$\Delta\nu$}, \mbox{$\nu_{\rm max}$}, \mbox{$\Delta P$}, $L$ & 0.034 & 0.025 & 0.097 & 0.075 \\ \ref{item:eighth} & \mbox{$\nu_{\rm max}$}, $L$ & 0.039 & 0.033 & 0.107 & 0.102 \\ \ref{item:ninth} & $\log{g}$, $L$ & 0.124 & 0.108 & 0.427 & 0.310 \\ \ref{item:tenth} & \mbox{$\Delta\nu$}, $\log{g}$ & 0.173 & 0.077 & 0.727 & 0.215 \\ \ref{item:eleventh} & \mbox{$\Delta\nu$}, $L$ & 0.052 & 0.046 & 0.143 & 0.146 \\ \hline \end{tabular} \label{tab:rel_unc} \end{table} Average relative differences between the recovered and the input masses are $\leq$ 1 per cent for cases \ref{item:fifth}, \ref{item:sixth}, \ref{item:seventh}, and \ref{item:eighth}, around 1 per cent for cases \ref{item:second} and \ref{item:eleventh}, $\sim6$ per cent for case \ref{item:third}, and greater than 6 per cent for cases \ref{item:first}, \ref{item:ninth}, and \ref{item:tenth}. Regarding ages, the relative absolute differences are less than 5 per cent for cases \ref{item:fifth}, \ref{item:sixth}, \ref{item:seventh}, \ref{item:eighth}, and \ref{item:eleventh}, around 10 per cent when using \mbox{$\Delta\nu$}\ and \mbox{$\nu_{\rm max}$}, $\sim20$ per cent when using \mbox{$\Delta\nu$}(SR) and \mbox{$\nu_{\rm max}$}, and greater than 40 per cent for cases \ref{item:first}, \ref{item:ninth}, and \ref{item:tenth}. We also tested the effect of the adopted mass-loss efficiency. Figure~\ref{fig:pdf_m_age_eta} shows the resulting mass and age PDFs for stars S2 and S5 with the efficiency parameter $\eta=0.2$ (cyan colors) and $\eta=0.4$ (white colors). For cases \ref{item:fifth}, \ref{item:sixth}, \ref{item:seventh}, \ref{item:eighth}, and \ref{item:eleventh}, a mass-loss efficiency of $\eta=0.4$ produces differences in mass of $\sim$1 per cent, while the differences in age may be greater than 47 per cent for S2 and 18 per cent for S5. The small difference in masses results from the fact that, in these cases, the mass values follow almost directly from the observables -- roughly speaking, they represent the mass of the tracks that pass closest to the observed parameters. As is well known, red giant stars quickly lose memory of their initial masses and follow evolutionary tracks which are primarily just a function of their actual mass and surface chemical composition. So their derived masses will be almost the same, irrespective of the mass loss employed to compute the previous evolutionary stages. But the value of $\eta$ will affect the relationship between the actual masses and the initial ones on the main sequence, which are those that determine the stellar age.
For instance, S2 has nearly the same actual mass (very close to 1~\mbox{$M_{\odot}$}) in both the $\eta=0.2$ and $\eta=0.4$ cases, but this actual mass can derive from a star of initial mass close to 1.075~\mbox{$M_{\odot}$}\ in the case of $\eta=0.2$, or from a star of initial mass close to 1.15~\mbox{$M_{\odot}$}\ in the case of $\eta=0.4$. This $\sim13$ per cent difference in the initial, main-sequence mass is enough to explain the $\sim47$ per cent difference in the derived ages of S2. More generally, this strong dependence of the derived ages on the assumed efficiency of mass loss warns against placing too much trust in the ages of RC stars. \subsection{NGC~6819} \label{sec:ngc6819} The previous section demonstrates that it is possible to recover the masses and ages of artificial stars, generally within the 68~per cent (1$\sigma$) credible interval expected from the observational errors. It is not guaranteed, however, that a similar level of accuracy will be obtained in the analysis of real data. Star clusters, whose members are all expected to be at the same distance and share a common initial chemical composition and age, offer one of the few possible ways to actually verify this. Only four clusters have been observed in the {\em Kepler} field \citep{gilliland10}, and among these NGC~6819 represents the best case study, owing to its brightness, its near-solar metallicity (for which stellar models are expected to be better calibrated) and the large number of stars in the {\em Kepler} database. NGC~6791 has even more giants observed by {\em Kepler}; however, its super-solar metallicity, the uncertainty about its initial helium content, and its larger age -- causing non-negligible mass loss before the RC stage -- make any comparison with evolutionary models more complicated. \citet{handberg16} reanalysed the raw {\it Kepler} data of the stars in the open cluster NGC~6819 and extracted individual frequencies, heights, and linewidths for several oscillation modes. They also derived the average seismic parameters and stellar properties for $\sim$50 red giant stars based on the targets of \citet{stello11_1}. Effective temperatures were computed from $V-K_s$ colours, using the bolometric correction and intrinsic colour tables from \citet{casagrande_vandenberg14} and adopting a reddening of $E(B-V)=0.15$~mag. They derived masses and radii using scaling relations, and computed apparent distance moduli using bolometric corrections from \citet{casagrande_vandenberg14}. The authors also applied an empirical correction of 2.54 per cent to the \mbox{$\Delta\nu$}\ of RGB stars, thus making the mean distances of RGB and RC stars identical. Since our definition of the average \mbox{$\Delta\nu$}\ for the MESA models is similar to the one used by \citet{handberg16}, we adopted their values for the global seismic (\mbox{$\Delta\nu$}\ and \mbox{$\nu_{\rm max}$}) and spectroscopic (\mbox{$T_{\rm eff}$}) parameters. We verified that their \mbox{$T_{\rm eff}$}\ scale is just $\sim\!57$~K cooler than the spectroscopic measurements from the APOGEE Data Release 12 \citep{alam15}. The metallicity adopted was $\mbox{\rm [{\rm Fe}/{\rm H}]}=0.02\pm0.10$ dex for all stars. We also adopted period spacing values from \citet{Vrard2016}, who automatically measured \mbox{$\Delta P$}\ for more than 6000 stars observed by {\it Kepler}.
In order to derive distances and extinctions in the $V$-band ($A_V$), we also used the following apparent magnitudes: SDSS $griz$ measured by the KIC team \citep{brown11} and corrected by \citet{pinsonneault12}; $JHK_s$ from 2MASS \citep{cutri03,skrutskie06}; and $W1$ and $W2$ from WISE \citep{wright10}. We computed stellar properties for 52 stars that have \mbox{$T_{\rm eff}$}, \mbox{\rm [{\rm Fe}/{\rm H}]}, \mbox{$\Delta\nu$}, and \mbox{$\nu_{\rm max}$}\ available, using cases \ref{item:second} and \ref{item:third}; and for 20 stars that also have \mbox{$\Delta P$}\ measurements, using case \ref{item:fifth}. Table~\ref{tab:cluster_rel_unc} presents the average relative uncertainties on masses and ages for these stars. These average uncertainties are slightly smaller than the ones from our test with artificial stars in the previous section. \begin{table} \scriptsize \centering \caption{Average relative uncertainties on masses and ages for stars in NGC~6819 using the combinations of input parameters \ref{item:second}, \ref{item:third}, and \ref{item:fifth} for the PARAM code.} \setlength{\tabcolsep}{4pt} \begin{tabular}{cccccc} \hline \multicolumn{2}{c}{Case} & \multicolumn{2}{c}{$<\sigma M/M>$} & \multicolumn{2}{c}{$<\sigma \text{Age}/\text{Age}>$} \\ \noalign{\smallskip} & & RGB & RC & RGB & RC \\ \hline \ref{item:second} & \mbox{$\Delta\nu$}, \mbox{$\nu_{\rm max}$}\ & 0.057 & 0.026 & 0.210 & 0.100 \\ \ref{item:third} & \mbox{$\Delta\nu$}(SR), \mbox{$\nu_{\rm max}$}\ & 0.044 & 0.026 & 0.161 & 0.102 \\ \ref{item:fifth} & \mbox{$\Delta\nu$}, \mbox{$\nu_{\rm max}$}, \mbox{$\Delta P$}\ & 0.013 & 0.021 & 0.050 & 0.077 \\ \hline \end{tabular} \label{tab:cluster_rel_unc} \end{table} Figure~\ref{fig:cluster_mass_age} shows the masses and ages derived using PARAM with cases \ref{item:second} and \ref{item:third} as observational input. The blue and red colors represent RC and RGB stars, respectively. The median and mean relative differences between the stellar properties are presented in Table~\ref{tab:reldiff_IF_SR}. The RGB stars have masses $\sim 8$ per cent greater when using the \mbox{$\Delta\nu$}\ scaling relation, while many RC stars present no difference and only a few of them have smaller masses ($\approx 2$ per cent). The mass differences translate into RGB stars being on average $\sim 18$ per cent younger, with no significant differences for RC stars. The $\sim 5$ per cent difference in RGB radii is reflected in a similar difference in the distances. \begin{table} \scriptsize \centering \caption{Median and mean relative (and absolute) differences between properties estimated using cases~\ref{item:second} and \ref{item:third} for RGB and RC stars from NGC~6819.} \label{tab:reldiff_IF_SR} \begin{tabular}{ccccc} \hline \multirow{2}{*}{properties} & \multicolumn{2}{c}{RGB} & \multicolumn{2}{c}{RC} \\ & median & mean & median & mean \\ \hline masses & 0.088 & 0.079 & 0.000 & -0.012 \\ ages & -0.195 & -0.180 & -0.002 & -0.004 \\ radii & 0.048 & 0.043 & 0.000 & -0.007 \\ $A_V$ & 0.005 & 0.031 & 0.001 & 0.001 \\ distances & 0.047 & 0.045 & -0.001 & -0.006 \\ \hline \end{tabular} \end{table} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{m_IF_SR_eta0_diff.eps}} \resizebox{\hsize}{!}{\includegraphics{age_IF_SR_eta0_diff.eps}} \caption{Comparison between masses (top panel) and ages (bottom) estimated with case \ref{item:second} versus case \ref{item:third}. The bottom panel excludes KIC~4937011 (a Li-rich low-mass RC star), which has an estimated age of $\sim13.8$~Gyr in both cases. Sub-panels show relative differences.
Dotted black lines show the identity relation. The blue and red colors represent RC and RGB stars, respectively. Different symbols mark peculiar stars that were discussed in detail in \citet{handberg16} -- asterisks are stars classified as non-members; diamonds: stars classified as over-massive; squares: uncertain cases; triangle: Li-rich low-mass RC (KIC 4937011).} \label{fig:cluster_mass_age} \end{figure} Figure~\ref{fig:cluster_mass_age_v} shows the masses and ages derived using cases \ref{item:second} and \ref{item:fifth} as observational input. The average relative uncertainties are much smaller for RGB stars when adding \mbox{$\Delta P$}\ as an observational constraint (see Table~\ref{tab:cluster_rel_unc}). The agreement on masses is very good, except for massive stars, whose masses are around the upper mass limit of our grid (2.50~\mbox{$M_{\odot}$}). Two over-massive stars turn out to be $\sim 10$ per cent less massive when adding $\mbox{$\Delta P$}$ (KIC~5024476 and 5112361). The ages also agree within the error bars, although with a dispersion of $\sim\!5$ per cent. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{m_IF_DP_eta0_diff.eps}} \resizebox{\hsize}{!}{\includegraphics{age_IF_DP_eta0_diff.eps}} \caption{Same as Fig.~\ref{fig:cluster_mass_age}, but with case \ref{item:second} versus case \ref{item:fifth}.} \label{fig:cluster_mass_age_v} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{m_IF_Hand_eta0_diff.eps}} \resizebox{\hsize}{!}{\includegraphics{mu0_IF_Hand_eta0_fit.eps}} \caption{Comparison between masses (top panel) and distance moduli (bottom) estimated with case \ref{item:second} and from \citet{handberg16}. Dotted black lines show the identity relation. The blue and red colors represent RC and RGB stars, respectively. Different symbols are the same as in Fig.~\ref{fig:cluster_mass_age}. The solid black line in the bottom panel shows the linear relation between our distance moduli and the apparent distance moduli in the $V$ band; its offset provides a measurement of the extinction.} \label{fig:cluster_mass_hand} \end{figure} The top panel of figure~\ref{fig:cluster_mass_hand} shows the comparison between the masses estimated with case \ref{item:second} and the masses from \citet{handberg16}. The masses show good agreement, with a dispersion of $\sim 7$ per cent, showing that the proposed correction of 2.54 per cent to the \mbox{$\Delta\nu$}\ of RGB stars in \citet{handberg16} compensates for the deviations introduced by using the \mbox{$\Delta\nu$}\ scaling relation. The authors also discussed in detail some stars that seem to have experienced {\it non-standard} evolution, based on their mass and distance estimates and on the membership classification from the radial-velocity and proper-motion study by \citet{milliman14}. These stars are represented with different symbols in all figures of this section: asterisks -- non-member stars (KIC~4937257, 5024043, 5023889); diamonds -- stars classified as overmassive (KIC~5024272, 5023953, 5024476, 5024414, 5112880, 5112361); squares -- uncertain cases (KIC~5112974, 5113061, 5112786, 4937770, 4937775); triangle -- Li-rich low-mass RC (KIC 4937011). A similar detailed star-by-star description is beyond the scope of the present paper; however, the peculiarities of these stars should be kept in mind when deriving their stellar properties and those of the cluster. Some of the over-massive stars do not show good agreement, because of the upper mass limit of our grid of models ($2.5~\mbox{$M_{\odot}$}$).
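The star-by-star extinctions and distance moduli discussed below are obtained by combining the absolute magnitudes predicted by the PDFs with the apparent magnitudes listed above. A deliberately simplified sketch of this step, assuming a single extinction coefficient $k_\lambda = A_\lambda/A_V$ per band (the actual procedure is the one of \citealt{rodrigues14}), is:
\begin{verbatim}
import numpy as np

def fit_mu0_av(m_app, M_abs, k_lambda):
    # Least-squares fit of the distance modulus mu0 and extinction A_V,
    # using the model  m_app - M_abs = mu0 + k_lambda * A_V  in each band.
    # m_app, M_abs: apparent and absolute magnitudes per band;
    # k_lambda: extinction coefficients A_lambda/A_V for the same bands.
    # Schematic only; not the actual PARAM implementation.
    y = np.asarray(m_app) - np.asarray(M_abs)
    A = np.column_stack([np.ones_like(y), np.asarray(k_lambda)])
    (mu0, av), *_ = np.linalg.lstsq(A, y, rcond=None)
    return mu0, av
\end{verbatim}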
Taking into account only single member stars, the mean masses of RGB and RC stars using case \ref{item:second} are $1.61\pm0.04$~\mbox{$M_{\odot}$}\ and $1.62\pm0.03$~\mbox{$M_{\odot}$}, which also agree with the ones found by \citet{handberg16} and \citet{miglio12_2}. The bottom panel of figure~\ref{fig:cluster_mass_hand} shows the comparison between the distance moduli estimated with case \ref{item:second} and the distance moduli in the $V$-band estimated by \citet{handberg16}. The solid line represents the linear regression $\mu_0 = \mu_V(\text{Handberg}) - A_V$, which yields $A_V=0.475\pm0.003$~mag, in good agreement with the average extinction for the cluster (see Fig.~\ref{fig:cluster_mu0_av}). Our method estimates the extinction star by star, and it varies significantly within the range $A_V=[0.3,0.7]$ for the stars in the cluster. This seems to be in agreement with \citet{platais16}, who showed substantial differential reddening in this cluster, with a maximum of $\Delta E(B-V)=0.06$ mag, which implies extinctions in the $V$-band in the same range as we find. Extinctions and distance moduli estimated using case \ref{item:second} are presented in Figure~\ref{fig:cluster_mu0_av}. The average uncertainties on extinctions and distance moduli are 0.1~mag and 0.03~mag ($<2$ per cent on distances), respectively. We derived the distance to the cluster by computing the mean distance modulus, $\mu_0=11.90\pm0.04$~mag with a dispersion of 0.23~mag (solid and dashed black lines in Figure~\ref{fig:cluster_mu0_av}), excluding stars classified as non-members (asterisks) by \citet{handberg16}. This value compares well with the distance modulus measured for eclipsing binaries, $\mu_0=12.07\pm0.07$~mag \citep{jeffries13}. Figure~\ref{fig:cluster_hist_age} shows the histogram of the ages estimated using case \ref{item:second}. The gray line represents the histogram of all stars, except the three stars classified as non-members and the star KIC~4937011, which likely experienced very strong mass loss during its evolution (see discussion in \citealt{handberg16}). Red and blue lines represent the ages of RGB and RC stars. The mean age from the gray histogram is $2.22\pm0.15$~Gyr with a dispersion of 1.01~Gyr, which agrees with the age estimated by fitting isochrones to the cluster CMDs by \citet{brewer16} ($2.21 \pm0.10 \pm 0.20$ Gyr). Taking into account only stars classified as single members (31 stars), i.e.\ excluding stars that are binary members, single members flagged as over- or under-massive, and stars with uncertain parameters according to the classification of \citet{handberg16}, the mean age is $2.25\pm0.12$~Gyr with a dispersion of 0.64~Gyr. Importantly, RGB and RC stars apparently share the same age distribution, i.e.\ there is no evidence of systematic differences in the ages of the two groups of stars. This result reflects our taking into account the deviations from the scaling relations, which are quite relevant for RGB stars but smaller for the RC. Adding \mbox{$\Delta P$}\ (case \ref{item:fifth}), the mean age is $2.12\pm0.19$~Gyr with a dispersion of 0.79~Gyr, excluding the star KIC~4937011 and also the one classified as a non-member, KIC~4937257 (triangle and asterisk symbols in Figure~\ref{fig:cluster_mass_age_v}). For this case, there are 13 stars classified as single members according to \citet{handberg16}, whose mean age is $2.18\pm0.20$~Gyr with a dispersion of 0.73~Gyr.
In the case with the \mbox{$\Delta\nu$}\ scaling relation (case \ref{item:third}) the mean age is $1.95\pm0.11$~Gyr (with a dispersion of 0.78~Gyr, computed also excluding the three stars classified as non-members and the star KIC~4937011), i.e.\ 12 per cent younger than when using \mbox{$\Delta\nu$}\ from the models. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{mu0_Av_IF_eta0.eps}} \caption{Extinction versus distance modulus estimated with case \ref{item:second}. The blue and red colors represent RC and RGB stars, respectively. Solid and dashed black lines show the mean distance modulus and its uncertainty, computed taking into account all stars except the ones classified as non-members (asterisks) by \citet{handberg16}. Different symbols are the same as in Fig.~\ref{fig:cluster_mass_age}.} \label{fig:cluster_mu0_av} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{hist_age_IF_eta0.eps}} \caption{Histogram of the ages estimated using case \ref{item:second}. The gray line represents all stars, except the ones classified as non-member stars and KIC~4937011, which has an estimated age of $\sim 13.8$ Gyr. Red and blue lines represent the ages of RGB and RC stars.} \label{fig:cluster_hist_age} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{CMD_mmb90_eta0_all_isoc.eps}} \caption{CMD for the cluster stars with membership probability $\geq 90$ per cent according to the radial velocities of \citet{hole09} (gray dots). The blue and red colors represent RC and RGB stars, respectively. Different symbols are the same as in Fig.~\ref{fig:cluster_mass_age}. The green, cyan, and orange lines are MESA isochrones with ages 2.0, 2.2, and 2.3~Gyr, using $\mu_0=11.90$~mag and $E(B-V)=0.14$~mag.} \label{fig:cluster_cmd} \end{figure} Figure~\ref{fig:cluster_cmd} shows the color-magnitude diagram (CMD) for the cluster stars with membership probability $\geq 90$ per cent according to the radial velocities of \citet{hole09} (gray dots). The red and blue symbols are the stars analysed in the present work. There is a significant dispersion along the RGB and RC, but our isochrones still match the photometry well. This points to a good consistency between the ages of evolved stars derived from asteroseismology and the CMD-fitting age which would be derived from the photometry. This particular result, however, should not be generalised, since it applies only to the specific set of stellar models and cluster data that has been used here. Another important aspect, however, is that the ages derived for cluster stars turn out to present a larger scatter than expected. If we assume that all cluster stars really have the same age, their mean standard deviation implies that the final errors in the ages are of roughly 46 per cent, which is a factor of 2 larger than the individual age uncertainties for case \ref{item:second} (see Table~\ref{tab:cluster_rel_unc}). The scatter is reduced when excluding from the sample stars that are binary members, single members flagged as over- or under-massive, and stars with uncertain parameters according to the classification of \citet{handberg16}. In this case the scatter (28 per cent) is still higher than, but comparable with, the expected uncertainty (21 per cent). At present, the origin of this increased age dispersion is not clear. We note however that the NGC~6819 giants are also dispersed around the best-age isochrones in the CMD. The magnitude of this dispersion is not simply attributable to differential reddening or photometric errors \citep{hole09,milliman14,brewer16}.
Therefore, it is possible that it reflects some physical process acting in the individual cluster stars, rather than a failure of the method. We also notice that in the cluster CMD (Fig.~\ref{fig:cluster_cmd}) the main sequence turn-off is well defined, and the comparison with isochrones appears to rule out internal age spreads larger than $\sim0.2$~Gyr. Even larger age spreads have been suggested to explain the very extended (and sometimes bimodal) main sequence turn-offs observed in some very massive star clusters in the Magellanic Clouds \citep[][and references therein]{goudfrooij15}. However, there is no evidence of a similar feature occurring in the photometry of NGC~6819. \section{Discussion and conclusions} \label{sec:close} Our main conclusions are: \begin{itemize} \item It is possible to implement the asteroseismic quantities \mbox{$\Delta\nu$}\ and \mbox{$\Delta P$}, computed along detailed grids of stellar evolutionary tracks, into the usual Bayesian or grid-based methods of parameter estimation for asteroseismic targets. We perform such an implementation in the PARAM code. It will soon become available for public use through the web interface \url{http://stev.oapd.inaf.it/param}. \item Tests with synthetic data reveal that masses and ages can be determined with typical precisions of 5 and 19 per cent, if precise global seismic parameters (\mbox{$\Delta\nu$}, \mbox{$\nu_{\rm max}$}, \mbox{$\Delta P$}) are available. Adding the luminosity, these values can decrease to 3 and 10 per cent, respectively. \item Combining the luminosity expected from the end-of-mission Gaia parallaxes with \mbox{$\Delta\nu$}\ enables us to infer masses (ages) to $\sim 5$ per cent ($\sim 15$ per cent) independently of the \mbox{$\nu_{\rm max}$}\ scaling relation, which is still lacking a detailed theoretical understanding (but see \citealt{belkacem11}). A similar precision on mass and age is also expected when combining luminosity and \mbox{$\nu_{\rm max}$}: this will be particularly relevant for stars whose data are not of sufficient quality/duration to enable a robust measurement of \mbox{$\Delta\nu$}. Stringent tests of the accuracy of the \mbox{$\nu_{\rm max}$}\ scaling relation \citep[as in][]{coelho15} are therefore of great relevance in this context. \item Any estimate based on asteroseismic parameters is at least a factor of 4 more precise than those based on spectroscopic parameters alone. \item The application of these methods to the NGC~6819 giants yields a mean age of $2.22\pm0.15$~Gyr, a distance modulus $\mu_0=11.90\pm0.04$~mag, and a mean extinction $A_V\approx 0.475\pm0.003$~mag. All these values are in agreement with estimates derived from photometry alone, via isochrone fitting. \item Despite these encouraging results, the application of the method to NGC~6819 stars also reveals a few caveats and far-from-negligible complications. Even after removing some evident outliers (likely non-members) from the analyses, the age dispersion of NGC~6819 stars turns out to be appreciable, with $\tau=2.22\pm0.15$~Gyr and a dispersion of 1.01~Gyr, implying a $\sim46$~per cent error on individual ages (or $\sim28$~per cent taking into account only single members and removing the over-massive stars identified in \citealt{handberg16}). The mean age value is compatible with those determined with independent methods (e.g.\ the $\tau=2.21\pm0.10\pm0.20$~Gyr from isochrone fitting).
\end{itemize} The result of a large age dispersion for NGC~6819 stars is no doubt surprising, given the smaller typical errors found during our tests with artificial data. Since asteroseismology is now widely regarded as the key to deriving precise ages for large samples of field giants distributed widely across the Galaxy, this is surely a point that has to be understood: any uncertainty or systematics affecting the NGC~6819 stars will also affect the analyses of the field giants observed by asteroseismic missions. We could point out that, on the one hand, a clear source of bias in age is the presence of over/under-massive stars, which are likely to be the product of binary evolution. Additionally, even restricting ourselves to RGB stars and weeding out clear over/under-massive stars, we are left with an age/mass spread which is larger than expected (28 per cent compared to 21 per cent). Grid-based modelling increases the significance of this spread, compared to the results presented in \citet{handberg16}. Whether this spread is an effect specific to the age and metallicity of NGC~6819 is yet to be determined. Previous works on NGC~6791 and M~67, for instance, have not reported a significant spread in the mass/age of their asteroseismic targets \citep[]{basu11,miglio12_2,corsaro12,stello16}. These three clusters are different in many aspects, with NGC~6791 being the most atypical one given its very high metallicity. Apart from this obvious difference, in both NGC~6791 and M~67 the evolved stars have masses smaller than 1.4~\mbox{$M_{\odot}$}, and were of spectral type mid/late-F or G -- hence slow rotators -- while on the main sequence. In NGC~6819 the evolved stars have masses high enough to be ``retired A-stars'', which includes the possibility of having been fast rotators before becoming giants. This is a difference that could, at least partially, be influencing our results. Indeed, rotation during the main sequence is able to change the stellar core masses, chemical profile, and main sequence lifetimes \citep{eggenberger10, lagarde16}. A spread in rotational velocities among coeval stars might then cause the spread in the properties of the red giants, which might not be captured in our grids of non-rotating stellar models. The possible impact of rotation on the grid-based and Bayesian methods has still to be investigated. On the other hand, this $\sim46$~per cent uncertainty is comparable to the 0.2~dex uncertainties that are obtained for the ages of giants with precise spectroscopic data and {\em Hipparcos} parallax uncertainties smaller than 10 per cent \citep{feuillet16}, which refer to stars within 100~pc of the Sun. In this sense, our results confirm that asteroseismic data offer the best prospects to derive astrophysically useful ages for individual, distant stars. \section*{Acknowledgments} We thank the anonymous referee for his/her useful comments. We acknowledge the support from the PRIN INAF 2014 -- CRA 1.05.01.94.05. TSR acknowledges support from CNPq-Brazil. JM and MT acknowledge support from the ERC Consolidator Grant funding scheme ({\em project STARKEY}, G.A. n. 615604). AM acknowledges the support of the UK Science and Technology Facilities Council (STFC). Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (Grant agreement no.: DNRF106). \bibliographystyle{mn2e}
\section{Introduction} During the last decade much work has been devoted to quantum plasmas, see e.g. Refs. \citep{Haas-book,Manfredi,Shukla-Eliasson,Shukla-Eliasson-RMP,glenzer-redmer} and references therein. Laboratory applications include quantum wells \citep{Manfredi-quantum-well}, spintronics \citep{Spintronics} and plasmonics \citep{Atwater-Plasmonics}. Quantum plasma effects can also be of interest in experiments with solid density targets \citep{glenzer-redmer}, as well as in astrophysics \citep{Astrophysics,Astrophysics1,Astrophysics2}. Nonlinear wave-wave interaction in plasmas has been studied since the sixties, see e.g. Refs. \citep{Sagdeev-64,Sjolund-67,Kadomtsev,Tsytovich}. Of special interest here are the three-wave interaction processes, which have a wide range of applications, including e.g. stimulated Brillouin scattering in the ionosphere \citep{Dysthe-1977,Stenflo-2004} and various processes in laser-plasma experiments \citep{Lashmore-Davies-book-chapter,Kruer-book-laser-plasma,Mironov-1990}. From a theoretical point of view the Manley-Rowe relations \citep{Manley-Rowe} are of much interest when three-wave processes are studied \citep{Weiland-Wilhelmsson,Larsson-1973,Larsson-1977,Brodin-88}. For example, these relations put important constraints on the dynamics, e.g.\ for a background plasma in thermodynamic equilibrium the pump wave may only decay into waves with lower frequencies. In the present work three-wave interaction in a homogeneous magnetized plasma is studied using the simplest form of the quantum hydrodynamic equations, but with a slight generalization of the Bohm de Broglie term such that it depends on a free parameter. The exchange of wave energy among the three waves is calculated, and the conditions under which the Manley-Rowe relations are fulfilled are found. The results are compared with previous works \citep{Murtza-2013,Larsson-1973,Larsson-1977}, and our findings are used to draw general conclusions regarding the mathematical structure of quantum hydrodynamics. \section{Quantum hydrodynamics and the Manley-Rowe relations} \label{chapter2} The simplest quantum hydrodynamic equations \citep{Haas-book,Manfredi,Lundin} read \begin{equation} \frac{\partial n}{\partial t}+\nabla \cdot \left( n\mathbf{v}\right) =0 \label{cont-1} \end{equation} \begin{equation} \left( \frac{\partial }{\partial t}+\mathbf{v}\cdot \nabla \right) \mathbf{v} = \frac{q}{m} \left( \mathbf{E}+\mathbf{v}\times \mathbf{B}\right) -\frac{\nabla P}{nm}+ \frac{\hbar ^{2}}{2m^2}\nabla \left( \frac{1}{\sqrt{n}}\nabla ^{2}\sqrt{n}\right) \label{momentum-1} \end{equation} where $n$ is the number density, $\mathbf{v}$ is the fluid velocity, $q$ and $m$ are the particle charge and mass, $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic fields, $P$ is the pressure and $h=2\pi \hbar $ is Planck's constant. The last term in Eq. (\ref{momentum-1}) is the Bohm de Broglie force, which normally can be neglected for ions due to the mass dependence. Eqs. (\ref{cont-1}) and (\ref{momentum-1}) for each species are complemented by the standard Maxwell equations and an equation of state for the pressure. An often used simple relation is \begin{equation} \left( \frac{P}{P_{0}}\right) =\left( \frac{n}{n_{0}}\right) ^{\gamma } \label{pressure-1} \end{equation} which includes isothermal ($\gamma =1$), classical adiabatic ($\gamma =3$) or Fermi pressures ($\gamma =5/3$) as special cases. Here $P_{0}$ and $n_{0}$ are the unperturbed pressure and number density.
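For later reference, we note that linearizing Eqs. (\ref{cont-1})--(\ref{pressure-1}) together with Poisson's equation, for electrostatic perturbations proportional to $\exp (i\mathbf{k}\cdot \mathbf{r}-i\omega t)$ in an unmagnetized electron plasma, gives the well-known quantum-modified Langmuir dispersion relation \begin{equation} \omega ^{2}=\omega _{p}^{2}+\frac{\gamma P_{0}}{mn_{0}}k^{2}+\frac{\hbar ^{2}k^{4}}{4m^{2}}, \end{equation} where $\omega _{p}$ is the electron plasma frequency. This standard result is quoted here only for orientation; the last two terms reappear below through the effective velocity $V_{s}$ entering the susceptibility tensor.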
Typically, when the Bohm de Broglie force is significant, the thermodynamic temperature $T$ is smaller than the Fermi temperature $T_{F}=(\hbar ^{2}/2m_{e})(3\pi^{2})^{2/3}n^{2/3}/k_{B}$, which makes the Fermi pressure the favored equation of state in quantum plasmas. While the expression for the Fermi pressure $P_{F} = ( \hbar^{2}/5m_{e}) (3 \pi^{2} )^{2/3} n^{5/3}$ is well established, there is still a degree of uncertainty regarding the most appropriate exponent in the equation of state for a degenerate plasma. The reason is that for a weakly collisional system (as is typically appropriate for a plasma), the system is not in local thermodynamic equilibrium during the compression by electromagnetic forces, in which case there is no firm basis for any type of pressure model. Comparisons with kinetic theories based on the Wigner function \citep{Manfredi} can then favor values of $\gamma \neq 5/3$ even for $T\ll T_{F}$. We will not be concerned with the best value of $\gamma$ in the rest of the manuscript, and simply note that for a degenerate plasma we have $1\lesssim \gamma \lesssim 3$. Eqs. (\ref{cont-1}) and (\ref{momentum-1}) can be derived from the Schr\"odinger equation using a Madelung ansatz for the wave function \citep{Manfredi,Haas-Manfredi}, where the wave function amplitude becomes the square root of the number density and the gradient of the phase is closely related to the fluid velocity. While the Bohm de Broglie force comes out straightforwardly from the single particle Schr\"odinger equation, the derivation of (\ref{momentum-1}) depends on the possibility of interchanging the order of averaging over particles and taking spatial derivatives (see e.g. Eq. (4.30) of Ref. \citep{Manfredi}). While such an interchange can sometimes be justified, this step becomes questionable when the Bohm de Broglie force is large, in which case Eq. (\ref{momentum-1}) lacks a firm basis. Another means to derive quantum hydrodynamic equations is to take moments of the Wigner function \citep{Manfredi,Gardner,Moment-1,Moment-2}. Such a procedure can to some extent lend support to Eq. (\ref{momentum-1}), but depending on technical details it may also generate evolution equations that deviate considerably from the ones presented here. In particular, the quantum effect may enter first in the heat-flux equation, rather than already in the momentum equation \citep{Moment-1,Moment-2}. A general problem when using moment expansions is that the truncation of the series typically depends on physical insight rather than mathematical rigor. In the limit of small collisionality the truncation must necessarily involve rather crude approximations, since the effect of wave-particle interaction (which is dropped in the fluid limit) is not small in general. In such a scenario, where no rigorous justification from first principles can be given, the credibility of the fluid equations can be judged on two grounds. Firstly, that there is reasonable agreement with kinetic theory in most situations. Secondly, that the mathematical structure of the fluid equations is sound. The first criterion is discussed e.g. in Ref. \citep{Manfredi}, where a good agreement of (\ref{cont-1}) and (\ref{momentum-1}) with kinetic theory is found for some model problems. The second criterion is usually deemed to be fulfilled if proper conservation laws for momentum, energy and angular momentum are obeyed.
Here we would like to extend these requirements on the mathematical structure, and also demand that the basic equations fulfill the Manley-Rowe relations \citep{Manley-Rowe} when nonlinear three-wave interaction \citep{Weiland-Wilhelmsson,Larsson-1973,Larsson-1977,Brodin-88} is studied. Let us consider three waves with frequencies and wave numbers $(\omega _{(i)},\mathbf{k}_{(i)})$, $i=1,2,3$, that propagate in a homogeneous magnetized plasma. We let the frequencies and wave numbers be related through \begin{eqnarray} \omega _{(3)} &=&\omega _{(1)}+\omega _{(2)} \label{Freq} \\ \mathbf{k}_{(3)} &=&\mathbf{k}_{(1)}+\mathbf{k}_{(2)} \label{Wave-vect} \end{eqnarray} which correspond to energy and momentum conservation, respectively, if we make a quantum mechanical interpretation. The consistency of a quantum mechanical interpretation depends on the Manley-Rowe relations, however. According to the Manley-Rowe relations the change of energy (denoted by $W_{(i)}$) of each wave must be in direct proportion to its frequency, such that we can imagine the wave interaction taking place one quantum at a time. Thus in terms of the wave energies the Manley-Rowe relations can be written \begin{equation} \frac{1}{\omega _{(3)}}\frac{dW_{(3)}}{dt}=-\frac{1}{\omega _{(1)}}\frac{dW_{(1)}}{dt} = -\frac{1}{\omega _{(2)}}\frac{dW_{(2)}}{dt} . \label{MR-rel} \end{equation} All the common classical plasma models lead to coupling coefficients for three-wave interaction that are consistent with the Manley-Rowe relations, including the Vlasov equation and multifluid equations of the type (\ref{cont-1}) and (\ref{momentum-1}) \textit{but without the Bohm de Broglie term}, see e.g. Refs. \citep{Larsson-1973,Larsson-1977,Stenflo-1994}. Furthermore, requiring that (\ref{MR-rel}) is fulfilled can be used as a means for separating useful plasma models from less physical ones. For a concrete example, see e.g. Ref. \citep{Brodin-thesis}, where a class of pressure tensor models was investigated, and only the sub-class consistent with (\ref{MR-rel}) was deemed appropriate. In the section below we will demonstrate that the fluid equations including the Bohm de Broglie term in general lead to coupling coefficients that fulfill the Manley-Rowe relations. It should be stressed that this depends on the detailed mathematical structure of the quantum force. To emphasize this point we will consider a slightly generalized Bohm de Broglie term given by \begin{equation} \frac{\hbar ^{2}}{2m^{2}}\nabla \left( \frac{1}{n^{\xi }}\nabla ^{2}n^{\xi }\right) . \label{general} \end{equation} As we will see below, the Manley-Rowe relations will be fulfilled if and only if $\xi =1/2$, in which case Eq. (\ref{general}) agrees with the standard form displayed in Eq. (\ref{momentum-1}). This supports the idea that fulfillment of the Manley-Rowe relations is a highly useful criterion for separating physical models from non-physical ones. \section{The coupled three wave equations} \subsection{Preliminaries} In general we consider our variables as given by the sum of an unperturbed background and a small perturbation, e.g. $n=n_0 + \delta n$, where index $0$ denotes the unperturbed value. The background plasma is time-independent and homogeneous with zero drift velocities, and the unperturbed magnetic field is $\mathbf{B}_{0}=B_{0}\mathbf{\hat{z}}$.
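Before proceeding, we illustrate the constraint (\ref{MR-rel}) numerically with the generic three-wave equations written in action-like variables $a_{(i)}$, for which $W_{(i)}\propto \omega _{(i)}|a_{(i)}|^{2}$; the coupled amplitude equations derived below reduce to this form after a suitable rescaling, precisely because the Manley-Rowe relations hold. The following short Python script (with illustrative numbers, not tied to any particular plasma mode) verifies that the changes of the quanta $N_{(i)}=|a_{(i)}|^{2}$ obey $\Delta N_{(1)}=\Delta N_{(2)}=-\Delta N_{(3)}$:
\begin{verbatim}
import numpy as np

V, dt, nsteps = 0.05, 1.0e-3, 20000            # coupling, time step, steps
a = np.array([1.0, 0.8, 0.3], dtype=complex)   # amplitudes a_(1), a_(2), a_(3)
N0 = np.abs(a) ** 2                            # initial quanta

for _ in range(nsteps):
    da = np.array([V * a[2] * np.conj(a[1]),
                   V * a[2] * np.conj(a[0]),
                   -V * a[0] * a[1]])
    a = a + dt * da                            # simple Euler step

dN = np.abs(a) ** 2 - N0
# Manley-Rowe: dN[0] = dN[1] = -dN[2] up to the discretization error,
# so the changes dW_(i)/omega_(i) agree apart from the sign of wave 3.
print(dN)
\end{verbatim}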
The perturbations consist of contributions from all waves, $\delta n = \sum_{j=1}^{3}n_{(j)}(t)\exp [i(\mathbf{k}_{(j)}\cdot \mathbf{r} -\omega _{(j)}t)]+\mathrm{c.c.},$ where $\mathrm{c.c.}$ denotes the complex conjugate and the time dependence of the amplitudes is assumed to be slow compared to the wave frequency. Limiting ourselves to amplitudes that depend only on time simplifies some of the technical aspects of the derivation. A generalization to a weak spatial dependence of the amplitudes is easily included by the substitution $\partial /\partial t\rightarrow \partial /\partial t+\mathbf{v}_{g(j)}\cdot \nabla $. Here the index $(j)$ on the group velocity $\mathbf{v}_{g(j)}$ is $j=1,2,3$, depending on which wave amplitude the derivative acts on. Next we make an amplitude expansion, keeping terms up to second order. Writing linear terms on the left hand side and nonlinear terms on the right hand side, the momentum equation reads \begin{eqnarray} && \frac{\partial \delta \mathbf{v} }{\partial t} - \frac{q}{m} (\delta \mathbf{E} + \delta\mathbf{v} \times \mathbf{B}_0) + \frac{\gamma P_0}{n_0^2m } \nabla \delta n - \frac{\xi \hbar^2}{2m^2 n_0} \nabla \nabla^2 \delta n \nonumber \\ &=& - (\delta \mathbf{v} \cdot \nabla ) \delta \mathbf{v} + \frac{q}{m} \delta \mathbf{v} \times \delta \mathbf{B} - \frac{\gamma (\gamma-2) P_0}{n_0^3m} \delta n \nabla \delta n \nonumber \\ && - \frac{ \xi \hbar^2}{2m^2 n_0^2} \left( \delta n \nabla \nabla^2 \delta n + \nabla^2 \delta n \nabla \delta n + 2( 1 - \xi) \left( \nabla \delta n \cdot \nabla \right) \nabla \delta n \right) \label{Momentum-2} \end{eqnarray} Before we start the nonlinear analysis it is convenient to introduce the expressions for the wave energy densities. These are \begin{eqnarray} W_{(i)}&=& \frac{\epsilon _{0}}{2}\mathbf{E}_{(i)}\cdot \mathbf{E}_{(i)}^{\ast} + \frac{1}{2\mu _{0}}\mathbf{B}_{(i)}\cdot \mathbf{B}_{(i)}^{\ast } \nonumber \\ && + \sum_{s} \left[ \frac{m_{s}n_{0s}}{2}\mathbf{v}_{(i)s}\cdot \mathbf{v}_{(i)s}^{\ast } + \left( \frac{\gamma _{s}P_{0s}}{2n_{0s}^2}+\frac{\xi \hbar ^{2}k_{(i)}^{2}}{4m_{s}n_{0s}}\right) n_{(i)s}n_{(i)s}^{\ast } \right] \label{Wave-energy-1} \end{eqnarray} where the star denotes the complex conjugate. The expression for the wave energy densities can be deduced by demanding that $W_{(i)}$ is conserved to all orders in the slow time derivative (i.e.\ acting on the wave amplitudes) when the nonlinearities are neglected. From the dispersion relation, where the wave frequency becomes real in the absence of dissipative mechanisms, one can of course deduce that the different sub-parts of the wave energy are conserved separately in the absence of nonlinear interactions. However, the wave energy (\ref{Wave-energy-1}) is the unique expression that can be shown to be conserved without using the linear dispersion relation. Formally, all species are treated equivalently in Eqs. (\ref{Momentum-2}) and (\ref{Wave-energy-1}). The fact that electrons may be described quantum mechanically and ions classically can be accounted for in the final result by choosing $\gamma _{s}$ differently for electrons and ions, and dropping the Bohm de Broglie term altogether for ions. \subsection{The Manley Rowe relations} Including only the linearized terms of the left hand side in (\ref{Momentum-2}), as well as in the continuity equation and Maxwell's equations, the wave energy of each wave is conserved.
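The quadratic Bohm de Broglie terms on the right hand side of Eq. (\ref{Momentum-2}) follow from a straightforward but somewhat lengthy expansion of the generalized term (\ref{general}); for the interested reader, they can be cross-checked symbolically, for instance with the following one-dimensional sympy sketch (a verification aid only, not part of the derivation):
\begin{verbatim}
import sympy as sp

x, eps = sp.symbols('x epsilon')
xi, n0, hbar, m = sp.symbols('xi n_0 hbar m', positive=True)
dn = sp.Function('dn')(x)            # density perturbation delta n(x)
n = n0 + eps * dn

# One-dimensional generalized Bohm-de Broglie acceleration
F = hbar**2 / (2*m**2) * sp.diff(n**(-xi) * sp.diff(n**xi, x, 2), x)

lin = sp.simplify(F.diff(eps).subs(eps, 0))          # first-order part
quad = sp.simplify(F.diff(eps, 2).subs(eps, 0) / 2)  # second-order part

# Quadratic Bohm terms of Eq. (Momentum-2), written in one dimension
target = -xi * hbar**2 / (2*m**2*n0**2) * (
    dn * sp.diff(dn, x, 3) + sp.diff(dn, x, 2) * sp.diff(dn, x)
    + 2*(1 - xi) * sp.diff(dn, x) * sp.diff(dn, x, 2))

print(sp.simplify(lin - xi*hbar**2/(2*m**2*n0) * sp.diff(dn, x, 3)))  # expected: 0
print(sp.simplify(quad - target))                                     # expected: 0
\end{verbatim}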
Including the quadratically nonlinear terms of the right hand sides, the rate of change of each wave energy becomes proportional to terms that are cubic in the amplitude. Only the resonant cubic terms that survive averaging over several wave periods are kept. Thus the energy change of wave 3 directly associated with the electric field can be written \begin{eqnarray} && \frac{\epsilon _{0}}{2} \mathbf{E}_{(3)}^{\ast } \cdot \frac{\partial \mathbf{E}_{(3)}}{\partial t} + \mathrm{c.c} \nonumber \\ && = \frac{\epsilon _{0}c^{2}}{2}\mathbf{E}_{(3)}^{\ast } \cdot \left[ \nabla \times \mathbf{B}_{(3)} - \mu_{0}\sum_{s}q_{s} \left(n_{0s}\mathbf{v}_{(3)s} + n_{(1)s}\mathbf{v}_{(2)s}+n_{(2)s}\mathbf{v}_{(1)s} \right) \right] + \mathrm{c.c}. \label{example} \end{eqnarray} in accordance with Eqs.\ (\ref{Freq}) and (\ref{Wave-vect}). As the terms that are quadratic in the wave fields will cancel when all source terms are considered, only the cubic terms of the right hand side are of interest here. Treating the other energy terms in the same manner, we thus find that $\mathrm{d}W_{3}/\mathrm{d}t$ becomes proportional to a large number of cubic terms. Simplifying this expression using linear approximations (e.g.\ $n_{(j)s}=n_{0s}\mathbf{k}_{(j)}\cdot \mathbf{v}_{(j)s}/\omega _{(j)}$, etc.) in the cubic terms, we obtain after some lengthy algebra \begin{eqnarray} \frac{\mathrm{d}W_{(3)}}{\mathrm{d}t} &=& \omega _{(3)} \sum_s \bigg[ -\frac{im_{s}}{2} \Big( n_{(1)s}\mathbf{v}_{(2)s}\cdot \mathbf{v}_{(3)s}^{\ast } + n_{(2)s} \mathbf{v}_{(1)s} \cdot \mathbf{v}_{(3)s}^{\ast } + n_{(3)s}^{\ast }\mathbf{v}_{(1)s} \cdot \mathbf{v}_{(2)s} \Big) \nonumber \\ && \quad \quad \quad \quad - \frac{i \gamma_{s} (\gamma_{s} - 2) P_{0s} }{n_{0s}^{3}} n_{(1)s} n_{(2)s} n_{(3)s}^{\ast } \nonumber \\ && \quad \quad \quad \quad + \frac{i \xi \hbar ^{2}}{8m_{s}n_{0s}^{2}} \left[ k_{(1)}^{2} + k_{(2)}^{2} + k_{(3)}^{2} - ( 2\xi - 1 ) \mathbf{k}_{(1)} \cdot \mathbf{k}_{(2)} \right] n_{(1)s} n_{(2)s} n_{(3)s}^{\ast } \nonumber \\&& \quad \quad \quad \quad - \frac{m_{s} \omega _{cs}}{ 2 \omega _{(3)} } n_{0s} \left( \frac{k_{(2)z}}{ \omega_{(2)} } - \frac{k_{(1)z}}{ \omega_{(1)} } \right) \mathbf{v}_{(3)s}^{\ast } \cdot \left(\mathbf{v}_{(1)s}\times \mathbf{v}_{(2)s} \right) \bigg] + \mathrm{c.c}. \label{eq:wave_3_power_density_res} \end{eqnarray} Equation (\ref{eq:wave_3_power_density_res}) is our main result, together with the similar expressions for $dW_{(1,2)}/dt$ that can be obtained directly from (\ref{eq:wave_3_power_density_res}) using the symmetry between $\omega _{(1)}$, $\omega _{(2)}$ and $-\omega _{(3)}$, as well as between $\mathbf{k}_{(1)}$, $\mathbf{k}_{(2)}$ and $-\mathbf{k}_{(3)}$. When $\hbar \rightarrow 0$, Eq. (\ref{eq:wave_3_power_density_res}) agrees with Ref. \citep{Larsson-1977}. Furthermore, the corresponding expression for $W_{(1,2)}$ confirms that the Manley-Rowe relations (\ref{MR-rel}) are fulfilled when $\hbar \rightarrow 0$. At first glance the last term of (\ref{eq:wave_3_power_density_res}) seems to be in conflict with (\ref{MR-rel}) (i.e. the symmetry between waves 1, 2 and 3 is not explicit), but simple manipulations using Eqs.\ (\ref{Freq}) and (\ref{Wave-vect}) quickly confirm that the term is in full agreement with (\ref{MR-rel}). The quantum term on the other hand has two very different contributions. The first term (proportional to $k_{(1)}^{2}+k_{(2)}^{2}+k_{(3)}^{2}$) is obviously in agreement with (\ref{MR-rel}).
However, the second term proportional to $(2\xi -1)$ must vanish for the Manley-Rowe relations to hold, i.e.\ we must have $\xi =1/2$. Thus we can confirm that fulfillment of (\ref{MR-rel}) can be used as a criterion for disregarding unphysical models. From now on we limit ourselves to the standard Bohm de Broglie term with $\xi =1/2$, in which case \begin{equation} \frac{1}{\omega _{(3)}}\frac{dW_{(3)}}{dt}=-\frac{1}{\omega _{(1,2)}}\frac{% dW_{(1,2)}}{dt}= V+\mathrm{c.c.} \label{MR-2} \end{equation}% with \begin{eqnarray} V &=& \sum_s \bigg[ -\frac{ i m_{s} }{ 2 } \left( n_{(1)s } \mathbf{v}_{(2)s} \cdot \mathbf{v}_{(3)s}^{\ast } + n_{(2)s} \mathbf{v}_{(1)s} \cdot \mathbf{v}_{(3)s}^{\ast} + n_{(3)s}^{\ast }\mathbf{v}_{(1)s} \cdot \mathbf{v}_{(2)s} \right) \nonumber \\ && -\frac{i\gamma _{s}(\gamma _{s}-2)P_{0s}}{n_{0s}^{3}}% n_{(1)s}n_{(2)s}n_{(3)s}^{\ast } +\frac{i \hbar ^{2}}{16m_{s}n_{0s}^{2}} \left[ k_{(1)}^{2}+k_{(2)}^{2}+k_{(3)}^{2} \right] n_{(1)s} n_{(2)s}n_{(3)s}^{\ast } \nonumber \\ && - \frac{m_{s}\omega _{cs}}{2\omega _{(3)}}n_{0s}\left( \frac{k_{(2)z}}{\omega _{(2)}}-\frac{k_{(1)z}}{\omega _{(1)}}\right) \left[ \mathbf{v}_{(3)s}^{\ast }\cdot \left(\mathbf{v}_{(1)s}\times \mathbf{v}_{(2)s} \right) \right] \label{Final-V} \end{eqnarray}% The property (\ref{MR-2}) has important consequences. Firstly, it means that a quantum interpretation of the three-wave interaction process is possible, as noted above. This has the further implication that parametric decay occurs from higher to lower frequencies, unless the wave energy density is negative, which can only occur if the background plasma has a free energy source. Three-wave interaction in homogeneous plasmas using quantum hydrodynamic equations has been considered previously by Ref. \citep% {Murtza-2013}, specifically focusing on the parametric decay of Langmuir waves in magnetized plasmas. However, their calculations did not produce Manley-Rowe symmetric formulas, and thus our above result is an improvement in this respect. Furthermore, Eq.\ (\ref{Final-V}) covers all types of waves (Alfv\'en waves, whistler waves, extraordinary waves, etc.) and thus represents an extensive generalization of previous work. \subsection{Three wave equations} In the previous sub-section we showed that the Manley-Rowe relations are fulfilled for the physical case of $\xi =1/2$. However, in order to do practical calculations of wave interactions (e.g.\ to find growth rates for parametric instabilities), we first need to rewrite the equations in terms of the wave amplitudes rather than wave energy densities. For this purpose we note that the wave energy densities can be written as $W=\varepsilon _{0}E_{j}^{\ast }[\partial (\omega D_{ij})/\partial \omega ]E_{i}$ for each wave, where we denote Cartesian components $x,y,z$ with index $1,2,3$ in order to use the summation convention (a closely related and often used expression for the wave energy that is equivalent is $W=\varepsilon _{0}(1/\omega )E_{j}^{\ast }[\partial (\omega ^{2}\varepsilon _{ij})/\partial \omega ]E_{i}$, where $% \varepsilon _{ij}$ is the dielectric tensor). The electric field eigenvectors fulfill $D_{ij}E_{j}=0$ with \begin{equation} D_{ij}=\left( 1-\frac{k^{2}c^{2}}{\omega ^{2}}\right) \delta _{ij}+\frac{% k_{i}k_{j}c^{2}}{\omega ^{2}}+\sum_{s}\chi _{ij} \end{equation}% and the susceptibility tensor $\chi _{ij}$ for each species is given by \begin{eqnarray} &&\mathbf{\chi }=-\frac{\omega _{p}^{2}}{\Omega _{s}^{4}} \times \nonumber \\ && \left[ \! \! 
\begin{array}{ccc} \omega^{2} - (k^{2} - k_{x}^{2} )V_{s}^{2} & k_{x}k_{y}V_{s}^{2} + \frac{i\omega_{cs}}{\omega }(\omega^{2}-k_{z}^{2}V_{s}^{2}) & k_{x}k_{z}V_{s}^{2}+\frac{i\omega _{cs}}{\omega }k_{y}k_{z}V_{s}^{2} \\ & & \\ k_{x}k_{y}V_{s}^{2}-\frac{i\omega _{cs}}{\omega }(\omega ^{2}-k_{z}^{2}V_{s}^{2}) & \omega ^{2}-(k^{2}-k_{y}^{2})V_{s}^{2} & k_{y}k_{z}V_{s}^{2}-\frac{i\omega _{cs}}{\omega }k_{x}k_{z}V_{s}^{2} \\ & & \\ k_{x}k_{z}V_{s}^{2}-\frac{i\omega _{cs}}{\omega }k_{y}k_{z}V_{s}^{2} & k_{y}k_{z}V_{s}^{2}+\frac{i\omega _{cs}}{\omega }k_{x}k_{z}V_{s}^{2} & \omega ^{2}-\omega _{cs}^{2}-V_{s}^{2}(k^{2}-k_{z}^{2})% \end{array}% \! \! \right], \nonumber \\ && \end{eqnarray}% where we have defined \begin{eqnarray} \Omega _{s}^{4} &=&\omega ^{2}(\omega ^{2}-V_{s}^{2}k^{2})-\omega _{cs}^{2}(\omega ^{2}-V_{s}^{2}k_{z}^{2}) \\ V_{s}^{2} &=& \frac{\gamma P_{0s}}{m_s n_{0s}}+\frac{\hbar ^{2}k^{2}}{4m_{s}^{2}}. \end{eqnarray} The linear susceptibility in a fluid theory including the Bohm de Broglie force has been computed in Ref. \citep{Lundin}. Here we have generalized this expression to arbitrary cartesian coordinate axes, since we cannot chose a coordinate axis along the perpendicular wavenumber for mote than one of the interacting waves in general. Finally the dispersion relation for each wave is determined by \begin{equation} D(\omega ,\mathbf{k})\equiv \det D_{ij}=0. \label{E-polarization} \end{equation}% Now we want to express all quantities appearing in (\ref{MR-2}) and (\ref% {Final-V}) in terms of a single variable representing the wave amplitude of each wave. Somewhat arbitrarily we can pick the z-component of the electric fields, but the procedure outlined below works for any component of the electric field. Firstly using $D_{ij}E_{j}=0$ we can express $E_{x}$ and $% E_{y}$ in terms of $E_{z}$ for each wave. Together with $v_{i}=-i \omega \epsilon_0 \chi _{ij}E_{j} / qn_0 $ this gives all velocity components in terms of $E_{z}$, and the density perturbation is obtained in terms of $% \delta n=n_0 k_{i}v_{i}/\omega $. The remaining quantity needed is the wave energy density, which with the help of $W=\varepsilon _{0}E_{j}^{\ast }\partial (\omega D_{ij})/\partial \omega )E_{i}$ is written as \begin{equation} W_{(3)}=\frac{\varepsilon _{0} \omega_3}{(D_{xx}D_{yy}-D_{xy} D_{yx})}\frac{% \partial D(\omega _{3},\mathbf{k}_{3}\mathbf{)}}{\partial \omega _{3}}% E_{(3)z}E_{(3)z}^{\ast } \label{W-3} \end{equation} for wave 3. As a consequence all variables appearing in (\ref{MR-2}) and (% \ref{Final-V}) can be expressed in terms of the z-component of the electric field amplitudes, in which case Eqs. (\ref{MR-2}) and (\ref{Final-V}) can be rewritten as% \begin{eqnarray} \frac{dE_{(1,2)z}}{dt} &=&\alpha _{(1,2)}E_{(3)z}E_{(2,1)z}^{\ast } \\ \frac{dE_{(3)z}}{dt} &=&\alpha _{(3)}E_{(1)z}E_{(2)z} , \end{eqnarray}% where we now allow for spatially dependent amplitudes such that \begin{equation} \frac{ dE_{(j)z} }{dt} = \left( \frac{\partial}{\partial t} + \mathbf{v}_{g(j) } \cdot \nabla \right) E_{(j)z} . \end{equation} It is straightforward to find the general expressions for the coupling coefficients $\alpha _{(1,2,3)}$from formulas (\ref% {MR-2}), (\ref{Final-V}) and (\ref{W-3}) and the procedure outlined above. However, in order to obtain comparatively simple and illustrative formulas, we consider the special case where the plasma is unmagnetized, $B_{0}=0$, and waves 1 and 3 are Langmuir waves. Furthermore we let the plasma be degenerate, i.e. 
for a 3D Fermi gas $P_0=n_0 m v_F^2/5$ and $\gamma=5/3$, where we have used the thermodynamic equilibrium pressure (see discussion in Section \ref{chapter2}). In this case the general dispersion relation (\ref{E-polarization}) reduces to% \begin{equation} \omega _{(1,3)}^{2}=\omega _{p}^{2}+\frac{1}{3}k_{(1,3)}^{2}v_{F}^{2}+\frac{\hbar ^{2}k_{(1,3)}^{4}}{4m^{2}} \label{DR-Langmuir} \end{equation}% when the corrections due to the ion motion are neglected. Wave 2 is a low-frequency ion-acoustic wave fulfilling the approximate dispersion relation \begin{equation} \omega _{(2)}^{2}=\frac{\omega _{pi}^{2}}{1+\omega _{pe}^{2}/(\frac{1}{3}k_{(2)}^{2}v_{F}^{2}+\hbar ^{2}k_{(2)}^{4}/4m_{e}^{2})} \label{DR-ion-ac} \end{equation}% where we have set the ion temperature to zero and let $m_{e}/m_{i}% \rightarrow 0$, but avoided the approximation of quasi-neutrality in order to allow for short wavelengths. Making the corresponding approximations in (% \ref{Final-V}) and (\ref{W-3}) we obtain the coupled equations:% \begin{eqnarray} \frac{ \partial \phi _{(3)} }{ \partial t } &=& - \frac{i q_{e} }{2 m_{e} \omega _{(3)} k_{(3)}^{2} \omega _{pe}^{2} } \tilde V \phi _{(1)} \phi _{(2)} \\ \frac{\partial \phi _{(2)}}{\partial t}&=& \frac{ i q_{e}\omega _{(2)}^{3} }{2 m_{e} k_{(2)}^{2} \omega _{pi}^{2} \omega _{pe}^{4} } \tilde V \phi _{(1)}^{\ast }\phi _{(3)} \\ \frac{\partial \phi _{(1)}}{\partial t} &=& \frac{i q_{e}}{2 m_{e}\omega _{(1)} k_{(1)}^{2} \omega _{pe}^{2}} \tilde V \phi _{(3)}\phi _{(2)}^{\ast }, \end{eqnarray} where \begin{equation} \tilde V = \left( 1 - \frac{ \omega _{pi}^{2} }{ \omega _{(2)}^{2} } \right) \left( \omega _{(1)} \omega _{(3)} k_{(2)}^{2} \mathbf{k}_{(1)} \cdot \mathbf{k}_{(3)} - \left[ \frac{v_F^2}{9} + \frac{ \hbar^{2} }{ 8m_{e}^{2} }( k_{(1)}^{2} + k_{(2)}^{2} + k_{(3)}^{2} ) \right] k_{(1)}^{2} k_{(2)}^{2} k_{(3)}^{2} \right) \end{equation} As usual these equations can be used to compute growth rates and threshold values for parametric instabilities (if the pump wave has a finite width or a damping mechanism due to e.g. collisions is added), see e.g. Refs. \citep{Weiland-Wilhelmsson,Brodin-Scripta-88}. \section{Concluding discussion} In this paper we have focused on the Manley-Rowe relations in quantum hydrodynamics. Our starting point has been that basic equations that are physically sound should produce coupling coefficients for three-wave interaction that obey these relations. As discussed by e.g.\ Ref.\ \citep{Larsson-Hamiltonian} fulfillment of the Manley-Rowe relations comes from an underlying Hamiltonian structure. For classical plasmas, it is illustrated rather clearly in Ref.\ \citep{Stenflo-1994} that the Manley-Rowe relations are satisfied for arbitrary wave propagation in hot magnetized plasmas with a uniform background. Moreover the Manley-Rowe relations are more general than expected, i.e. they are sometimes applicable outside their expected range of validity, see Ref.\ \citep{Kaufman-1975}. Generalized Manley-Rowe relations are also valid for non-uniform plasmas \citep% {Kaufman-1979,Lindgren-1981,Aliev-1990} and somewhat surprisingly also for turbulent plasmas (see the rather instructive paper by Ref.\ \citep% {Vladimirov-1997}). Nevertheless, the derivation of standard quantum hydrodynamic equations contains steps that can be questioned when both the pressure and particle dispersive effects are large. Hence, it is not obvious that such equations preserve a physically sound structure, i.e.\ obey the Manley-Rowe relations. 
As demonstrated by Eqs.\ (\ref{MR-2}-\ref{Final-V}), however, these relations are indeed fulfilled when the standard Bohm de Broglie term is used to describe particle dispersive effects. This is \textit{not the case% }, however, if the Bohm de Broglie term is replaced by a slightly generalized expression, which demonstrates that fulfillment of the Manley-Rowe relations is a useful criterion in separating acceptable models from unphysical ones. Besides these theoretical aspects we note that our results extend previous work on three-wave interaction based on classical fluid equations \citep{Larsson-1977} to cover quantum hydrodynamics. The quantum contribution to the coupling coefficient is important for short wavelengths (of the order of the thermal de Broglie wavelength), and for a quantum parameter $H=\hbar \omega _{p}/k_{B}T$ that is not much smaller than unity. See e.g.\ Refs.\ \citep% {Haas-book,Manfredi,Shukla-Eliasson,Shukla-Eliasson-RMP} for a thorough discussion of systems that fit this description. \bibliographystyle{plain}
\section{Introduction} In 1918 Hermann Weyl introduced, what is now known as Weyl geometries \cite{Weyl1918}. He observed that the Riemann curvature has a conformally invariant component $C\tensor{ij}{k}{l}$, which he referred to as the conformal curvature. In \cite{Weyl1921} Weyl discussed both conformal and projective geometries and showed that analogously the Riemann curvature has a projectively invariant component $W\tensor{ij}{k}{l}$, referred to as the projective curvature. The idea has been extend to parabolic geometries, (see e.g. \cite{BEG}, \cite{CSbook}) and in the modern literature the invariant curvature component is simply referred to as the Weyl tensor or the Weyl curvature, with the type of geometry typically implied by the context. In this article we will be dealing with $C\tensor{ij}{k}{l}$ and $W\tensor{ij}{k}{l}$ simultaneously and we will refer to them as the conformal and projective Weyl tensors respectively. In \cite{Nur12} Nurowski investigated when a given projective class of connections $ [ \nabla ] $ on $M$ includes a Levi-Civita connection of some metric $g$ on $M$. An algorithm to check the metrisability of a chosen projective structure was given. In proposition 2.5 of \cite{Nur12} it was shown that the projective and conformal Weyl tensors coincide if and only if the Ricci tensor of the Levi-Civita connection satisfies \begin{equation} \label{Nurowski condition} M\tensor{abcd}{ef}{} R_{ef}=0 \end{equation} where $$ M\tensor{abcd}{ef}{} = 2 g_{a[c}\delta^e_{d]}\delta^f_{b} + 2 g_{a[d} g_{c]b}g^{ef} + 2(n-1) g_{b[d}\delta^f_{c]}\delta^e_{a}. $$ Corollary 2.6 of \cite{Nur12} deduces that the projective and conformal Weyl tensors of an Einstein metric are equal. As a comment Nurowski raised the question whether there are non-Einstein metrics, which satisfy condition \eqref{Nurowski condition}. This article proves that this is not the case. In particular, for a given connection $\nabla$ on an $n\ge 4$ dimensional manifold the projective and conformal Weyl tensors associated to $\nabla$ only agree if $\nabla$ is the Levi-Civita connection of an Einstein metric. The problem is addressed in more generality by allowing for general Weyl connections. This generalisation is of interest, due to the fact that neither the Ricci curvature of a general Weyl connection nor the Ricci curvature of a projective connection need be symmetric. Hence the possibility exists that the two Weyl tensors agree when using a general Weyl connection that is not a Levi-Civita connection for a metric in $[g]$. \section{Projective and conformal connection changes} We define the tensors \begin{equation*} \Sigma^{kl}_{ij} = \delta^k_i \delta^l_j + \delta^l_i \delta^k_j , \quad\quad S^{kl}_{ij} = \delta^k_i \delta^l_j + \delta^l_i \delta^k_j - g_{ij} g^{kl} \end{equation*} Two connections $\nabla$ and $\check{\nabla} $ are projectively related if there exists a 1-form $\check{b}_i $ such that the connection coefficients are related by \begin{equation*} \check{\Gamma}\tensor{i}{k}{j} = \Gamma\tensor{i}{k}{j} + \Sigma^{kl}_{ij} \check{b}_l \end{equation*} We denote the class of all connections projectively related to $\nabla$ by $[\nabla]_p $. Suppose further that $\nabla$ is related to the conformal class $[g]$. By this we mean that there exists a 1-form $f_i$ such that \begin{equation} \label{Weyl connection metric condition} \nabla_i g_{kl} = -2f_i g_{kl} \end{equation} This holds for $g_{ij}$ iff it holds for any representative in $[g]$. 
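To see explicitly that \eqref{Weyl connection metric condition} is independent of the chosen representative, let $\hat{g}_{ij}=e^{2\phi }g_{ij}$ for some function $\phi$. Then \begin{equation*} \nabla_i \hat{g}_{kl} = e^{2\phi }\left( 2\nabla_i \phi \, g_{kl}+\nabla_i g_{kl}\right) = -2\left( f_i - \nabla_i \phi \right) \hat{g}_{kl} , \end{equation*} which is again of the form \eqref{Weyl connection metric condition}, with $f_i$ replaced by $f_i - \nabla_i \phi $.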
Connections that satisfy \eqref{Weyl connection metric condition} are referred to as general Weyl connections of $[g]$. Note that the Levi-Civita connection of any representative in $[g]$ satisfies \eqref{Weyl connection metric condition}. However $\nabla$ need not be the Levi-Civita connection for a metric in $[g]$. The connections $\nabla$ and $\hat{\nabla} $ are conformally related if there exists a 1-form $\hat{b}_i $ such that the connection coefficients are related by \begin{equation*} \hat{\Gamma}\tensor{i}{k}{j} = \Gamma\tensor{i}{k}{j} + S^{kl}_{ij} \hat{b}_l \end{equation*} We denote the class of all connections conformally related to $\nabla$ by $[\nabla]_c $. Observe that all connections in $[\nabla]_c $ satisfy \eqref{Weyl connection metric condition}. \section{Decomposition of the Riemann curvature} Given a connection $\nabla$ the Riemann and Ricci tensors are defined as \begin{equation*} 2 \nabla_{[i} \nabla_{j]} v^k = R\tensor{ij}{k}{l} v^l, \quad \quad R_{jl} = R\tensor{kj}{k}{l} \end{equation*} The projective and conformal Schouten tensors are related to the Ricci tensor of $\nabla$ by \cite{BEG}, \cite{Fri03} \begin{eqnarray*} \rho_{ij} &=& \frac{1}{n-1} R_{(ij)} + \frac{1}{n+1} R_{[ij]}\\ P_{ij} &=& \frac{1}{n-2} R_{(ij)} + \frac{1}{n} R_{[ij]} - \frac{R_{kl}g^{kl}}{2(n-2)(n-1)} g_{ij} \end{eqnarray*} \noindent The Schouten tensors can be used to decompose the Riemann curvature as follows \begin{equation} \label{Riemann decomposition} R\tensor{ij}{k}{l} = W\tensor{ij}{k}{l} + 2\Sigma_{l[i}^{km} \rho_{j]m} = C\tensor{ij}{k}{l} + 2S_{l[i}^{km} P_{j]m} , \end{equation} where $W\tensor{ij}{k}{l}$ and $C\tensor{ij}{k}{l}$ are the projective and conformal Weyl tensors respectively. Moreover the once contracted Bianchi identity $\nabla_k R\tensor{ij}{k}{l} =0 $ implies \cite{BEG} that \begin{eqnarray}\label{proj_Bianchi} \nabla_k W\tensor{ij}{k}{l} &=& 2(n-2) \nabla_{[i} \rho_{j]l} = (n-2)y_{ijl}\\ \label{conf_Bianchi} \nabla_k C\tensor{ij}{k}{l} &=& 2(n-3) \nabla_{[i} P_{j]l} = (n-3)Y_{ijl}. \end{eqnarray} The tensors $y_{ijl}$ and $Y_{ijl}$ are known as the Cotton-York tensors. Under a connection change $\check{\nabla} = \nabla + \check{b}$ respectively $\hat{\nabla} = \nabla + \hat{b}$ the Schouten tensors transform as \begin{eqnarray*} \rho_{ij} - \check{\rho}_{ij} &=& \nabla_i \check{b}_j + \frac{1}{2} \Sigma^{kl}_{ij} \check{b}_k \check{b}_l \\ P_{ij} - \hat{P}_{ij} &=& \nabla_i \hat{b}_j + \frac{1}{2} S^{kl}_{ij} \hat{b}_k \hat{b}_l \end{eqnarray*} In both cases the Schouten tensors absorb all terms that arise in the Riemann tensor under connection changes. It follows that the projective Weyl tensor $W\tensor{ij}{k}{l} $ and the conformal Weyl tensor $C\tensor{ij}{k}{l} $ are invariants of the projective class $[\nabla]_p$ and the conformal class $[\nabla]_c$, respectively. The question we wish to address is for which manifolds these two invariants coincide. \medskip We note that for $n \le 2 \,$ $W\tensor{ij}{k}{l} =0$ and for $n \le 3 \,$ $C\tensor{ij}{k}{l} =0$. Therefore it follows trivially that: \noindent \textit{In $n=2$ the Weyl tensors always agree. In $n=3$ they agree if and only if the manifold is projectively flat, i.e. the flat connection is contained in $[\nabla]_p$.} Hence in the following we focus only on $n > 3$. 
\section{Coincidence of the conformal and projective \\ Weyl tensors} The Ricci tensor can be decomposed into its symmetric trace-free, skew and trace components with respect to the metric $g_{ij}$: \begin{eqnarray} \label{Riccidecomp} R_{ij} &=& \Phi_{ij} + \varphi_{ij} + \frac{R}{n}g_{ij} \end{eqnarray} Hence the Schouten tensors can be rewritten as \begin{eqnarray} \label{projectiveSchoutentoRicci} \rho_{ij} &=& \frac{1}{n-1} \Phi_{ij} + \frac{1}{n+1} \varphi_{ij} + \frac{R}{n(n-1)}g_{ij}\\ \label{conformalSchoutentoRicci} P_{ij} &=& \frac{1}{n-2} \Phi_{ij} + \frac{1}{n} \varphi_{ij} + \frac{R}{2n(n-1)}g_{ij} \end{eqnarray} The condition $W\tensor{ij}{k}{l} = C\tensor{ij}{k}{l} $ is equivalent to \begin{equation} 2\Sigma_{l[i}^{km} \rho_{j]m} = 2S_{l[i}^{km} P_{j]m} \end{equation} \noindent Substituting \eqref{projectiveSchoutentoRicci} and \eqref{conformalSchoutentoRicci} gives \begin{eqnarray*} 2\Sigma_{l[i}^{km} \rho_{j]m} &=& \frac{2}{n-1}\delta_{[i}^k \Phi_{j]l} + \frac{2 R}{n(n-1)}\delta_{[i}^k g_{j]l} + \frac{2}{n+1}\delta_{[i}^k \varphi_{j]l} - \frac{2}{n+1} \delta_l^k \varphi_{ij}\\ 2S_{l[i}^{km} P_{j]m} &=& \frac{2}{n-2}\delta_{[i}^k \Phi_{j]l} - \frac{2}{n-2} g_{l[i}\Phi_{j]m}g^{km} + \frac{2 R}{n(n-1)}\delta_{[i}^k g_{j]l} \nonumber \\ && + \frac{2}{n}\delta_{[i}^k \varphi_{j]l} - \frac{2}{n} g_{l[i}\varphi_{j]m}g^{km} - \frac{2}{n} \delta_l^k \varphi_{ij} \end{eqnarray*} We observe that the scalar curvature terms are identical on both sides and hence only $\Phi_{ij}$ and $\varphi_{ij}$ are involved in our condition. The scalar curvature can take arbitrary values. \noindent Taking the trace over $il$ of both expressions gives \begin{eqnarray*} 2\Sigma_{l[i}^{km} \rho_{j]m} g^{il} &=& \frac{1}{n-1}\Phi\tensor{j}{k}{} - \frac{R}{n} \delta_{j}^k + \frac{3}{n+1}\varphi\tensor{j}{k}{} \\ 2S_{l[i}^{km} P_{j]m} g^{il} &=& - \Phi\tensor{j}{k}{} - \frac{R}{n} \delta_{j}^k + \frac{4-n}{n}\varphi\tensor{j}{k}{} = - R\tensor{j}{k}{} + \frac{4}{n}\varphi\tensor{j}{k}{} \end{eqnarray*} Comparing irreducible components we find that we require \begin{eqnarray} \frac{n}{n-1}\Phi\tensor{j}{k}{} = 0 \quad \mathrm{and} \quad \frac{n^2-4}{n(n+1)}\varphi\tensor{j}{k}{} = 0 \end{eqnarray} Thus under our assumption of $n > 3$, both $\Phi_{ij}$ and $\varphi_{ij}$ must vanish. It follows that the Ricci tensor is pure trace and hence $g$ is an Einstein metric. Note that the Bianchi identities \eqref{proj_Bianchi}, \eqref{conf_Bianchi} imply that $R$ is constant. The result can be formulated as follows: \begin{theorem} Let $\nabla$ be a connection related to the conformal class $[g]$. \begin{itemize} \item In $n=2$ the Weyl tensors always vanish and hence agree. \item In $n=3$ the Weyl tensors agree if and only if the manifold is projectively flat, i.e. the flat connection is contained in $[\nabla]_p$. \item In $n \ge 4$ the Weyl tensors agree if and only if the connection $\nabla$ is the Levi-Civita connection of the metric $g$ and the manifold is an Einstein manifold. \end{itemize} \end{theorem} \begin{corollary} If the projective and conformal Weyl tensors for $n\ge 4$ coincide then the Cotton-York tensors coincide as well. In fact they vanish identically. \end{corollary} The result follows immediately from the fact that the connection is the Levi-Civita connection of an Einstein metric. Hence the Schouten tensors are proportional to the metric and both Cotton-York tensors vanish. 
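For completeness, the Einstein case can also be checked directly from the formulas above. If $R_{ij}=\frac{R}{n}g_{ij}$, then \eqref{projectiveSchoutentoRicci} and \eqref{conformalSchoutentoRicci} reduce to $\rho_{ij}=\frac{R}{n(n-1)}g_{ij}$ and $P_{ij}=\frac{R}{2n(n-1)}g_{ij}$, and a short computation (in which the trace term of $S^{km}_{l[i}$ doubles the conformal contribution) gives \begin{equation*} 2\Sigma_{l[i}^{km} \rho_{j]m} = \frac{2R}{n(n-1)}\, \delta^k_{[i} g_{j]l} = 2S_{l[i}^{km} P_{j]m} , \end{equation*} so that by \eqref{Riemann decomposition} the two Weyl tensors coincide, in agreement with corollary 2.6 of \cite{Nur12}.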
\section{Conclusion} It has been shown that the coincidence of the projective and conformal Weyl tensors is closely linked to the concept of Einstein metrics. For metric connections in $[\nabla]_c$ one could have deduced the main result directly from \eqref{Nurowski condition} by using the above decomposition of the Ricci tensor and using suitable traces of \eqref{Nurowski condition}. However, the set-up given here allowed for a direct generalisation to Weyl connections without requiring a more general form of \eqref{Nurowski condition}. Moreover it was felt that the set-up provided more clarity regarding the role of the different types of curvature involved.
\section{Introduction} The dominant X-ray radiation mechanism in accreting black holes is commonly thought to be inverse Compton scattering of low energy photons by a cloud of hot ($T\sim 10^{8} - 10^{9}~K$) electron plasma near the black hole \citep{st80}. The physical geometry of the coronal plasma responsible for scattering the photons is not well constrained by the current observations. The ubiquity of a disk blackbody component accompanied by a power-law tail in the overall spectra of galactic black hole candidates (GBHCs) motivated several workers to develop the so-called two-phase accretion disk-corona models \citep[see e.g.,][]{hm91,hm93, pkr97,sz94,ste95,bel98}. In these models, the blackbody radiation from the cold disk enters the hot corona and is Comptonized into X-rays. A part of the hard X-rays from the corona, being reprocessed in the disk, produces the reflection hump. The geometry of the corona controls this feedback mechanism which in turn determines the spectral slope of the escaping radiation. The Kompaneets y-parameter (and hence the temperature) is determined by the energy balance between heating and cooling mechanisms inside the plasma. One important question in all such models is the method by which the gravitational energy is converted to the energy of electrons. The ideas explored in the literature include magnetic flares \citep{hmg94,ste95,pou96,bel99a,bel99b}, an advection dominated disk very close to the black hole \citep{ny94}, Bondi type free fall beyond a shock region \citep{ct95}, but, as yet, there is no consensus on the exact mechanism. A general prediction of the above models is that the hard X-ray variations should lag behind those in softer bands, as hard photons undergo a larger number of scatterings in the plasma before escaping. Measurement of such time-lags has the potential of constraining the size of the region which in turn will help us to understand the mechanism by which the electron cloud is energized. Frequency dependent time lags have been observed in Galactic black hole candidates \citep{miy88,mkk91} as well as in a few active galactic nuclei \citep[AGNs;][]{pnk01,zh02}. Interpreting these lags as due to Comptonization requires the size of the emitting region to be very large, typically several thousand Schwarzschild radii (compatible with the lowest frequency at which the lag is observed), leading to the problem of heating the electron cloud at large distances from the black hole. Hence, sometimes these lags are interpreted as due to the energy dependent asymmetries in random shots \citep{mk89}; that is interpreting the lags as due to the production of the variability itself. One of the problems of detecting lags at higher frequencies and relating them to the Comptonization at the innermost regions of the accretion disk around black holes could be observational. If the Comptonization process occurs at 10 -- 20 Schwarzschild radii and if the Comptonization process gives a lag which is a factor of a few larger than the light travel time in these regions, one expects a lag of a few tens of milliseconds in a black hole of mass 10 M$_\odot$ and about a day in an AGN of mass 10$^8$ M$_\odot$. Detection of such delays is observationally a difficult task. Bright nearby AGNs with a black hole mass of 10$^7$ M$_\odot$, where one expects a delay of about an hour, are the ideal sources to look for delays due to the process of Comptonization. 
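The order-of-magnitude estimates quoted above are easy to reproduce. The short Python sketch below computes the light travel time across a region of 10 -- 20 Schwarzschild radii and multiplies it by a factor of a few; the specific values of the radius and of this factor are illustrative assumptions only.
\begin{verbatim}
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def comptonization_lag(mass_msun, radius_rs=15.0, factor=3.0):
    """Rough lag: a few times the light travel time across the
    Comptonizing region, assumed to lie at ~10-20 Schwarzschild radii."""
    r_s = 2.0 * G * mass_msun * M_sun / c**2   # Schwarzschild radius [m]
    return factor * radius_rs * r_s / c        # lag in seconds

for m in (10.0, 1e7, 1e8):
    print(m, comptonization_lag(m))
# ~4e-3 s for 10 Msun, ~4e3 s (about an hour) for 1e7 Msun,
# and ~4e4 s (of order a day) for 1e8 Msun
\end{verbatim}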
With this motivation, we have searched for delays in one of the bright low mass AGN Mrk~110, based on a long observation using the XMM-Newton observatory. In this {\it Letter} we present the cross-correlation analysis in different energy bands of this source and show that hard X-rays are delayed by a significant amount of time (from few minutes to an hour). The 0.3 - 12 keV X-ray spectrum can be represented by a Comptonization model which can explain the measured hard X-ray delay. \begin{figure} \centering \includegraphics[scale=0.35,angle=-90]{f1.eps} \caption{EPIC-PN light curves (1000 s bin) of Mrk~110 in different energy range. Start of variability is indicated by downward arrows (see text)}. \label{lc} \end{figure} \section{Observation} Mrk~110 is a nearby optically bright, radio intermediate ($R\sim1.6$) narrow-line Seyfert 1 galaxy (NLS1s) at a redshift z=0.036. The optical continuum and the broad emission lines of Mrk 110 are highly variable \citep[by a factor of 2 to 8 within a timescale of 10 years - ][]{kw01,kol03}. Mrk~110 was observed on 2004 November 15 by XMM-Newton for 47.4 ks. The EPIC-PN cameras were operated in Prime-Small-Window observing mode using the Thin1 filters. Here we use data only from EPIC-PN cameras due to the better efficiency, better calibration at lower energies and absence of pile-up. Source spectra and light curves were extracted from the EPIC images using a circular source region centered on the observed source position. Background spectra and light curves were derived from adjacent `blank sky' regions. The EPIC spectra were binned to give a minimum of 50 counts per bin. The {\sc xspec v11.0} and {\sc xronos} packages were used for spectral and timing analysis respectively. Errors on fitted parameters are quoted at the nominal 90\% confidence level ($\Delta \chi^2$ = 2.7) unless otherwise stated. \section{Timing Analysis \label{timing}} In Figure~\ref{lc} we plot the binned light curves of Mrk~110 in different energy ranges (not corrected for 71\% duty cycle of PN SW mode). The $0.2 -12$ keV light curve shows peak to peak variation of approximately 10\% within 3 hours. To quantify the source variability in different energy bands, we calculated the fractional variability in seven energy bands: E$_1$ (0.2 -- 0.3 keV), E$_2$ (0.3 -- 0.42 keV), E$_3$ (0.42 -- 0.58 keV), E$_4$ (0.58 -- 0.8 keV), E$_5$ (0.8 -- 1.2 keV), E$_6$ (1.2 -- 2 keV), and E$_7$ (2 -- 12 keV), respectively. The energy bands are chosen such that the mean count rate in those bands are approximately same (within 10\% of average). We find significant variability (rms value 2.5-4.5\%) in all the energy bands for 500 s binning (with a typical error of 0.4\%). There is a marginal evidence for increasing variability with increasing energy: the variability is (2.7 $\pm$ 0.4)\% for energy $<$0.6 keV and it is (3.6 $\pm$ 0.4)\% for energy $>$0.6 keV. A Structure Function analysis \citep{rut78} of the light curves shows that the shortest correlation timescale is more than a few thousand seconds. \begin{figure} \centering \includegraphics[scale=0.5,angle=0]{f2.eps} \caption{The CCFs between light curves of 0.2 -- 0.3 keV and higher energy bands (increasing from top to bottom). The CCF values are vertically shifted for clarity. (see text)} \label{ccf} \end{figure} A visual examination of the light curves reveals a gradual decrease in count rate ($\sim 0.2$ cts in 2 hrs) in E$_1$ at time 10 ks from the starting time. Similar kind of variation is seen after more than an hour in E$_7$. 
(Both variations are marked by arrows in figure~\ref{lc}.) To search for time lags, we calculated the cross correlation function (CCF) between E$_1$ and other energy bands using the {\it crosscor} package in {\sc ftools}. The CCFs between E$_1$ and E$_i$ (where i=2,3,..,7) are plotted in Figure~\ref{ccf} as a function of delay; the successive plots are vertically shifted by 1.6, 1.2, 0.9, 0.6, 0.3, 0 respectively for clarity. The errors in the CCF values are the standard one sigma values which include only the counting statistics errors. The harder light curves systematically lag the softer band. To estimate the amount of delay, we fitted the central part of the CCF distribution with a Gaussian function and derived delays for the different energy bands of 200$\pm$700 s, 400$\pm$900 s, 900$\pm$800 s, 1800$\pm$900 s, 2400$\pm$900 s, 4500$\pm$900 s, respectively. The errors in the delays are estimated by the $\chi^2$ fitting method with the prescription $\Delta \chi^2$ = 4 (for three parameters). In Fig~\ref{delay} we plot the derived delay as a function of the energy of the hard band, along with a Comptonization model (see section~5) fitted to the data points ($\chi^2$/dof $\sim$ 1/4). For the hypothesis that there is no delay we get a value for $\chi^2$/dof of 38/6, and the hypothesis of an energy independent delay gives a value for $\chi^2$/dof of 18/5. Hence we can conclude that a delay is detected and it is energy dependent at more than the 99\% confidence level. We must, however, caution that the above results are based on statistical considerations only and do not include systematics like the shape of the variation of CCF with delay. We have also derived cross correlations using other combinations (CCFs of $E_i$ w.r.t. $E_4$, where i=1,2,3,5,6,7) to check whether the results are an artifact of uncertainties in the instrumental calibration below 0.3 keV, and we find similar results. \begin{figure} \centering \includegraphics[height=8.8cm,angle=-90]{f3.eps} \caption{Energy dependence of lag between the light curves of energy 0.2 -- 0.3 keV and higher energies.} \label{delay} \end{figure} \section{Spectral Analysis \label{spec}} We first modelled the X-ray spectrum of Mrk~110 with an absorbed power-law with Galactic column density $N_H=1.6 \times 10^{20}$~cm$^{-2}$. This provides a fairly good fit to the PN data in the 2--12 keV range ($\chi^2/\nu = 1037/1022$), giving a photon index $\Gamma = 1.75\pm0.01$. Inclusion of a narrow Gaussian emission line improves the fit significantly ($\chi^2 / \nu = 997/1020$), giving the energy of the line $6.42\pm0.02$ keV. Extrapolating the best fit model to 0.3 keV shows a huge soft excess in the spectrum ($\chi^2/\nu = 94471/1363$). Refitting still provides a rather poor fit ($\chi^2/\nu = 7350/1363$). A power-law plus a blackbody (generally the soft excess component in AGN is modeled by a thermal blackbody, but its origin is still unknown) to model the soft excess improves the quality of the fit but it is still not acceptable ($\chi^2/\nu = 2109/1361$). The temperature of the blackbody becomes $kT=100\pm2$ eV. Fitting the data with a more realistic thermal accretion disc spectrum like {\it diskbb} \citep{mit84} or {\it diskpn} \citep{gier99} provides a poor fit to the data ($\chi^2/\nu = 1780/1361$ and $\chi^2/\nu = 1782/1360$ respectively) and the temperature becomes very high ($kT = 155\pm2$ eV and $159\pm3$ eV respectively). These results would seem to reject an origin for the soft excess in terms of unmodified thermal blackbody emission. 
Addition of a power-law instead of blackbody improves the fit ($\chi^2/\nu = 1680/1361$). The values of the power-law indices are $2.47\pm0.02$ and $1.21\pm0.04$. A model in 0.3 -- 12 keV range which consists of a broken power-law and 6.4 keV Gaussian emission line improves the fit slightly ($\chi^2/\nu = 1597/1361$). The power-law indices are $2.29\pm0.01$ and $1.78\pm0.01$, and the break energy is $1.66\pm0.04$ keV. Reflection off the surface could also produce strong soft excess at low energies. To test this the ionized reflection model {\it pexriv} \citep{mz95} was fitted to the 0.3 - 12 keV spectrum resulting in a poor fit to the data ($\chi^2/dof \sim 2185/1361$). The best fitting parameters were $\Gamma = 2.26\pm0.01$, $R=8.14\pm0.12$ and $\xi < 10^{-3}$. The energy dependent time lags are strongly suggestive of Comptonization of low energy seed photons by a population of high temperature electrons (section~1). The {\sc comptt} code \citep{tit94} was used to model Comptonization of soft photons in a thermal plasma. A power-law plus a Comptonization component give a good fit to the data ($\chi^2/\nu = 1477/1359$) with $\Gamma = 1.51\pm0.05$ and the seed photon temperature $kT_{bb} = 69\pm2$ eV. The temperature and optical depth of the Comptonizing plasma are strongly covariant parameters, and thus cannot be constrained simultaneously. Fitting the data with the {\it compps} model \citep{pou96} along with the power-law gives a reasonable fit ($\chi^2/dof = 1483/1359$). In this model the soft flux is equal to the sum of the absorbed incident flux from the corona and the flux due to local energy dissipation in the cold disk. The spectral shape of the soft components are assumed to be Planckian, with temperature $T_{bb}$ and $T_{disk}$, respectively ($T_{bb}>T_{disk}$). The inner disc temperature $kT_{disk}$ is fixed at $40$ eV (calculated for black hole of mass $10^7~M_{\odot}$ assuming standard accretion disk). The best fit values of the parameters are $\tau = 3.2^{+1.6}_{-0.7}$, $kT_e = 14.0^{+3.1}_{-3.2}$ keV, $kT_{bb} = 99^{+5}_{-5}$ eV, $\Gamma = 1.57^{+0.04}_{-0.05}$. \begin{figure} \centering \includegraphics[height=8.8cm, angle=-90]{f4.eps} \caption{XMM-Newton (EPIC-PN) spectrum in the energy range 0.3 -- 12 keV and the best fit hybrid thermal/non-thermal Comptonization model.} \label{spectrum} \end{figure} The Comptonization model along with a power-law described above has the two spectrally identified continua originating in two distinct thermal Comptonizing plasmas. An alternative is that the whole spectrum is produced by a single plasma with a hybrid thermal/non-thermal electron distribution \citep{cop99}. The definite detection of a non-thermal Comptonization component requires high energy and high quality data. As noted by \citet{pdo95}, the X-ray spectra of ultrasoft Seyfert galaxies do resemble that of Cyg X-1 in its high/soft state. Thus it seems reasonable to test whether a hybrid thermal/non-thermal plasma is a viable model for the X-ray continuum for Mrk~110. To test this idea we tried the hybrid Comptonization model {\it compps} \citep[for a detailed description of parameters see][]{zd05}. We assume a spherical geometry of hot plasma. We fitted the model ($\chi^2/dof = 1455/1359$) and the best fit values of the parameters are $\tau = 4.8^{+1.6}_{-2.1}$, $kT_e = 11.1^{+9.4}_{-1.8}$ keV, $kT_{bb} = 105^{+4}_{-3}$ eV, $\Gamma_{inj} = 2.43^{+0.09}_{-0.12}$, $\gamma_{min} = 1.19^{+0.13}_{-0.05}$. The unfolded spectrum is shown in Figure~\ref{spectrum}. 
Similar attempt has been done to model the X-ray spectrum of NLS1 galaxy Ton S180 \citep{vau02}. \section{Discussion and Conclusion} We find significant energy dependent delay between the hard and soft X-ray emission in the Seyfert 1 galaxy Mrk 110. Similar delays were also found in the bright Seyfert 1 galaxy MCG-6-30-15 \citep{pon04}. \citet{gal04} found alternating leads and lags in the NLS1 IRAS 13224-3809. Though other interpretations like geometric effects and energy dependent shape of shots can also explain the observed energy dependent delays in Mrk 110, we try to interpret the results as due to Comptonization, particularly because of the fact that the spectral analysis favours a two component Comptonization model. We consider a static Compton cloud with optical depth $\tau_T$, and electron temperature $\Theta=kT_e/m_e c^2$. A soft seed photon of energy $E_0$ injected into the cloud increases its energy by a factor of $A=1+4\Theta+16\Theta^2$, on an average after each scattering, so that after n scattering its energy is $E_n=A^n E_0$. The photon mean free path is $\lambda \approx R/max(1,\tau_T)$ (where R is the size of the X-ray emitting region). The time difference between successive scatterings is then $t_c = (R/c)/max(1,\tau_T)$, where $\tau_T= N \sigma_T R$ is the Thompson optical depth, and $\sigma_T$ is the cross section of Thompson scattering, N is the electron number density of the scattering medium. The time needed to reach the energy $E_n$ is $t_n=nt_c$ \citep{st80,pay80}. We have calculated time lags of different high energy bands with respect to 0.2 -- 0.3 keV band. Hence in our case $E_0=0.25$ keV. We fit the above equation ($t_n=nt_c$) to the result (hard lag as a function of energy) of our analysis and find that the data is in good agreement with the equation (Fig.~\ref{delay}). Using the values of the best fit parameters of the above result and the parameters of the Comptonization model fitted to spectrum we get $R\sim2\times10^{11}~m~\sim 10~R_S$. This value of R is physically realistic as most of the energy is dissipated within 10 $R_S$. \citet{bl98} pointed out that any scenario in which the observed hard time lags are purely due to static Comptonization requires that the radial extent of the hot corona exceeds $\sim 10^{4}~R_S$ of a solar mass black hole ($R>10^{8}~m$). This is incompatible with current models of accretion flows onto galactic black holes \citep[see the review by][]{lia98} even from simple energy arguments (see section~1). But the results reported here pertains to lags at very short times scales and hence is consistent with the static Comptonization model. The harder power-law emission extending to 12 keV (and presumably beyond) can also be produced by Comptonization, either in another purely thermal plasma or non-thermal electrons in a plasma with a hybrid thermal/non-thermal distribution. The origin of the hot plasma above the accretion disk is quite debatable. If the radiation pressure inside the accretion disk very close to the black hole is very high then the accretion disk can be puffed up and the hot thermal plasma can be formed just above the accretion disk. Hybrid thermal/non-thermal plasmas have often been successfully used to model the observed data \citep{gier99,pc98}. A possible origin of the non-thermal component is in process of magnetohydrodynamic turbulence occurring in the corona \citep{lm97}. 
Stochastic gyroresonant acceleration in the accreting plasma can also energize particles to higher energies \citep{dml96,lkl96}. We conclude that inverse Compton scattering of soft photons by a highly energetic electron distribution provides a satisfactory explanation of the hard X-ray time lag observed in the XMM-Newton observation. The energy spectrum in the 0.3 -- 12 keV band can be modelled either by two component Comptonization or by a hybrid thermal/non-thermal Comptonization. It is not possible to distinguish between them without good quality high energy data. This work is based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA member states and the USA (NASA). The authors are grateful to H. Netzer, G. Dewangan and S. Mandal for useful suggestions. We thank the referee for useful comments.
\section{Introduction} An important part of our knowledge on the electronic properties of graphene, which consist of a two-dimensional (2D) lattice of carbon atoms,% \cite{CG09} can be deduced from optical spectroscopy measurements (for recent reviews see Refs. \onlinecite{P10,OP10}). Infrared spectroscopy experiments allows for the control of interband excitations by means of electrical gating,\cite{WangF2008,LB08} similarly as electrical transport in field effect transistors. Within the simplest Dirac cone approximation, only vertical in wave-vector space transitions across the Dirac point are optically active, leading to a constant value for the optical conductivity of undoped graphene of $\sigma _{0}=\pi e^{2}/2h$. This leads to a frequency-independent absorption of $\pi \alpha \approx 2.3\%$, where $% \alpha=e^2/\hbar c\approx 1/137 $ is the fine structure constant. This fact was observed for suspended graphene in experiments in the visible range of the spectrum\cite{Nair2008} and it was later confirmed by further experiments in suspended graphene\cite{Mak2008,Fei2008} and epitaxial graphene on SiC substrate.\cite{Dawlaty2008} For doped graphene with nonzero chemical potential $\mu $, at zero temperature, in the absence of disorder and without considering many body effects, the allowed excitations are only those between particle-hole pairs with an energy difference larger than $2\mu $, due to Pauli's exclusion principle. This would lead to a zero infrared conductivity below the energy $\omega=2\mu $, and the optical conductivity should be simply a step function $\sigma \left( \omega \right) =\sigma _{0}\Theta \left( \omega -2\mu \right) $. However, a background contribution to the optical conductivity between $0<\omega<2\mu$ was observed in Refs. \onlinecite{LB08,Mak2008}, pointing out the relevance of disorder and many body effects. Another characteristic of the optical spectrum is the Drude peak, which is built from a transfer of spectral weight from the low-energy interband conductance to the $\omega\rightarrow 0$ region of the spectrum,% \cite{Kuzmenko2008} although a strong suppression of the Drude peak at infrared energies has recently been observed.\cite{Horng2011} Furthermore, the flattening of the $\pi$-bands at energies away from the Dirac point is responsible for the strong peak in the spectrum at higher energies (of the order of 5eV) which is associated to optical transitions between states of the Van Hove singularities.\cite{Fei2008,MSH11,SR11} Finally, a method to control the intermediate excited states in inelastic light scattering experiments has been also reported, revealing the important role of quantum interference in Raman scattering.\cite{ChenCF2011} This intense experimental work has been accompanied by a series of theoretical studies which have treated the problem of the optical conductivity at different levels of approximation.\cite% {Ando2002,Gruneis2003,Peres2006,Gusynin2006,SPG07,Gusynin2007,Stauber2008,Stauber2008b,MinHK2009} For example, it has been suggested that the presence of spectral weight in the \textit{forbidden} region of the optical spectrum of doped graphene (below $\omega=2\mu$) can be associated to disorder,\cite{Ando2002,Peres2008} electron-electron interaction\cite{Grushin2009} or excitonic effects.\cite% {Peres2010} In particular, the effect of electron interaction in the spectrum has been considered in Refs. \onlinecite{Mikhailov2007,Mishchenko2007,Falkovsky2007,Falkovsky2007b,Katsnelson2008,RLG08,Sheehy2009,Juricic2010,Giuliani2011}. 
Furthermore, understanding the role played by the different kinds of disorder that can be present in this material is essential to increase the mobility of the samples. Besides the long-range charged impurities,\cite% {Ando2007,Stauber2008b,Juan2010} other possible scattering sources such as ripples,\cite{KG08} strong random on-site potentials,\cite% {YRK10} large concentrations of hydrogen adatoms,\cite{YRK10} strain \cite% {Pereira2010,Pellegrino2010} or random deformations of the honeycomb lattice have been considered.\cite{CV09,Sinner2011} In this paper, we perform a systematic study of the optical spectrum of both doped and undoped graphene with different kinds of disorder, such as randomness of the on-site potentials and fluctuations of the nearest-neighbor hopping. Special attention is paid to the presence of resonant impurities, e.g., vacancies and hydrogen adatoms, which have been proposed as the main factor limiting the carrier mobility in graphene.\cite{WK10,MM10,NG10} Furthermore, depending on how the defects are distributed over the lattice sites, each kind of disorder can be either non-correlated or correlated. The non-correlated one corresponds to the case with uniformly random distributed disorder sources, i.e., the potential or hopping are randomly changed within a certain range, or the resonant impurities (vacancies or hydrogen adatoms) are randomly positioned over the whole lattice; the correlated one means that the distribution of the disorder follows particular topological structures, such as Gaussian potentials or Gaussian hopping parameters, or resonant clusters with groups of vacancies or hydrogen adatoms. In the present paper, we consider a noninteracting $\pi$-band tight-binding model on a honeycomb lattice and solve its time-dependent Schr\"odinger equation (TDSE) to calculate the density of states (DOS). From this, the optical conductivity is calculated numerically by means of the Kubo formula. The paper is organized as follows. In Sec. \ref{Sec:Method} we give details about the method. In Secs. \ref{Sec:NCD} and \ref{Sec:CD} we show results for the optical conductivity of undoped graphene in the presence of non-correlated and correlated disorder, respectively. In Sec. % \ref{Sec:Doped} we calculate the optical spectrum of doped graphene. Our main conclusions are summarized in Sec. \ref{Sec:Conclusions}. \section{Model and Method} \label{Sec:Method} The tight-binding Hamiltonian of disordered single-layer graphene is given by% \begin{equation} H=-\sum_{<i,j>}(t_{ij}a_{i}^{\dagger }b_{j}+\mathrm{h.c})+% \sum_{i}v_{i}c_{i}^{\dagger }c_{i}+H_{imp}, \label{Hamiltonian0} \end{equation}% where $a_{i}^{\dagger }$ ($b_{i}$) creates (annihilates) an electron on sublattice A (B), $t_{ij}$ is the nearest neighbor hopping parameter, $% v_{i}$ is the on-site potential, and $H_{imp}$ describes the hydrogen-like resonant impurities:% \begin{equation} H_{imp}=\varepsilon _{d}\sum_{i}d_{i}^{\dagger}d_{i}+V\sum_{i}\left( d_{i}^{\dagger}c_{i}+\mathrm{h.c}\right) , \label{Eq:Himp} \end{equation}% where $\varepsilon _{d}$ is the on-site potential on the ``hydrogen'' impurity (to be specific, we will use this terminology although it can be a more complicated chemical species, such as various organic groups \cite{WK10}) and $V$ is the hopping between carbon and hydrogen atoms. For discussions of the last term see, e.g. Refs.~\onlinecite{Robinson08,WK10,YRK10}. 
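As an illustration of how the disordered Hamiltonian (\ref{Hamiltonian0}) can be set up numerically, the following Python sketch assembles a sparse matrix for a small periodic honeycomb sample with random on-site potentials, random hoppings and vacancies. It is a minimal toy version written for this text (the lattice bookkeeping, parameter values and random-number handling are illustrative assumptions, not the code used for the results below); the adsorbate term (\ref{Eq:Himp}) could be added analogously by enlarging the basis with one orbital per impurity.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def disordered_graphene(L=32, t=1.0, v_r=0.0, t_r=0.0, n_vac=0.0, seed=0):
    """Sparse tight-binding Hamiltonian of an L x L honeycomb sample
    (periodic boundaries, two sites per cell): uniformly random on-site
    potentials in [-v_r, v_r], hoppings in [t - t_r, t + t_r], and a
    fraction n_vac of randomly removed (vacancy) sites."""
    rng = np.random.default_rng(seed)
    N = 2 * L * L
    idx = lambda m, n, s: 2 * ((m % L) * L + (n % L)) + s   # s=0: A, s=1: B
    rows, cols, vals = [], [], []
    for m in range(L):
        for n in range(L):
            i = idx(m, n, 0)
            # the three B neighbours of the A site in cell (m, n)
            for j in (idx(m, n, 1), idx(m - 1, n, 1), idx(m, n - 1, 1)):
                hop = -(t + t_r * rng.uniform(-1.0, 1.0))
                rows += [i, j]; cols += [j, i]; vals += [hop, hop]
    H = sp.coo_matrix((vals, (rows, cols)), shape=(N, N)).tocsr()
    H += sp.diags(v_r * rng.uniform(-1.0, 1.0, size=N))
    keep = np.sort(rng.choice(N, size=int(N * (1.0 - n_vac)), replace=False))
    return H[keep][:, keep]
\end{verbatim}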
The spin degree of freedom contributes only through a degeneracy factor and is omitted for simplicity in Eq.~(\ref{Hamiltonian0}). A vacancy can be regarded as an atom (lattice point) with and on-site energy $% v_{i}\rightarrow \infty $ or with its hopping parameters to other sites being zero. In the numerical simulation, the simplest way to implement a vacancy is to remove the atom at the vacancy site. \begin{figure*}[t] \begin{center} \mbox{ \includegraphics[width=7cm]{dosslg_vr.pdf} \includegraphics[width=7cm]{acslg_vr.pdf} } \mbox{ \includegraphics[width=7cm]{dosslg_tr.pdf} \includegraphics[width=7cm]{acslg_tr.pdf} } \mbox{ \includegraphics[width=7cm]{dosslg_x.pdf} \includegraphics[width=7cm]{acslg_x.pdf} } \mbox{ \includegraphics[width=7cm]{dosslg_ximp.pdf} \includegraphics[width=7cm]{acslg_ximp.pdf} } \end{center} \caption{(Color online) Numerical results for the density of states (left panels) and optical conductivity (right panels) of undopped graphene with different kinds of non-correlated disorders: (a,b) random on-site potentials, (c,d) random hopping parameters, (e,f) random distributed vacancies, and (g,h) random distributed hydrogen adatoms. Size of the samples is $4096\times 4096$ for DOS, and $8192\times 8192$ for optical conductivity. In the right column, the insets show a zoom of the optical conductivity in the infrared region of the spectrum.} \label{dos_ac_randomdisorder} \end{figure*} \begin{figure*}[t] \begin{center} \mbox{ \includegraphics[width=7cm]{dosslg_vrcluster.pdf} \includegraphics[width=7cm]{acslg_vrcluster.pdf} } \mbox{ \includegraphics[width=7cm]{dosslg_trcluster.pdf} \includegraphics[width=7cm]{acslg_trcluster.pdf} } \mbox{ \includegraphics[width=7cm]{dosslg_xcluster.pdf} \includegraphics[width=7cm]{acslg_xcluster.pdf} } \mbox{ \includegraphics[width=7cm]{dosslg_ximpcluster.pdf} \includegraphics[width=7cm]{acslg_ximpcluster.pdf} } \end{center} \caption{(Color online) Numerical results for the DOS (left panels) and optical conductivity (right panels) of undopped graphene with different kinds of correlated disorders: (a,b) Gaussian potentials, (c,d) Gaussian hoppings, (e,f) vacancy clusters, and (g,h) hydrogen clusters. The distribution of the clusters of impurities used for the results (e)-(h) are sketched in Fig. \protect\ref{figcluster}.} \label{dos_ac_cluster} \end{figure*} The numerical calculations of the optical conductivity and DOS are performed based on the numerical solution of the TDSE for the non-interacting particles. In general, the real part of the optical conductivity contains two parts, the Drude weight $D$ ($\omega =0$) and the regular part ($\omega \neq 0$). We omit the calculation of the Drude weight, and focus on the regular part. 
For non-interacting electrons, the regular part is \cite{Ishihara1971,YRK10} \begin{eqnarray} \sigma _{\alpha \beta }\left( \omega \right) &=&\lim_{\varepsilon \rightarrow 0^{+}}\frac{e^{-\beta \omega }-1}{ \omega \Omega }% \int_{0}^{\infty }e^{-\varepsilon t}\sin \omega t \notag \label{gabw2} \\ &&\times 2\text{Im}\left\langle \varphi |f\left( H\right) J_{\alpha }\left( t\right) \left[ 1-f\left( H\right) \right] J_{\beta }|\varphi \right\rangle dt, \notag \\ && \end{eqnarray}% (we put $\hbar =1$) where $\beta =1/k_{B}T$ is the inverse temperature, $\Omega $ is the sample area, $f\left( H\right) =1/\left[ e^{\beta \left( H-\mu \right) }+1\right] $ is the Fermi-Dirac distribution operator, $J_{\alpha }\left( t\right) =e^{iHt}J_{\alpha }e^{-iHt}$ is the time-dependent current operator in the $\alpha $ ($=x$ or $y$) direction, and $\left\vert \varphi \right\rangle $ is a random superposition of all the basis states in the real space, i.e.,\cite{HR00,YRK10} \begin{equation} \left\vert \varphi \right\rangle =\sum_{i}a_{i}c_{i}^{\dagger }\left\vert 0\right\rangle , \label{Eq:phi0} \end{equation}% where $a_{i}$ are random complex numbers normalized as $\sum_{i}\left\vert a_{i}\right\vert ^{2}=1$. The time evolution operator $e^{-iHt}$ and the Fermi-Dirac distribution operator $f\left( H\right) $ can be obtained by the standard Chebyshev polynomial representation.\cite{YRK10} The density of states is calculated by the Fourier transform of the time-dependent correlation functions \cite{HR00,YRK10} \begin{equation} \rho \left( \varepsilon \right) =\frac{1}{2\pi }\int_{-\infty }^{\infty }e^{i\varepsilon t}\left\langle \varphi \right\vert e^{-iHt}\left\vert \varphi \right\rangle dt, \label{Eq:DOS} \end{equation}% with the same initial state $\left\vert \varphi \right\rangle $ defined in Eq.~(\ref{Eq:phi0}). For a more detailed description and discussion of our numerical method we refer to Ref. \onlinecite{YRK10}. In this paper, we fix the temperature to $T=300$K. We use periodic boundary conditions in the calculations for both the optical conductivity and the density of states, and the size of the system is $8192\times 8192$ or $4096\times 4096$. \section{Non-Correlated Disorder} \label{Sec:NCD} \subsection{Random on-Site Potentials or Nearest-Neighbor Hopping Parameters} We first consider two different kinds of disorder: random local change of on-site potentials and random renormalization of the hopping, which correspond to the diagonal and off-digonal disorders in the single-layer Hamiltonian Eq.~(\ref{Hamiltonian0}), respectively. The former acts as a local shift of the chemical potential of the Dirac fermions, i.e., shifts locally the Dirac point, and the latter arises from the changes of distance or angles between the $p_{z}$ orbitals. In order to introduce the non-correlated disorders in the on-site potentials, we consider that the on-site potential $v_{i}$ is random and uniformly distributed (independently of each site $i$) between the values $-v_{r}$ and $+v_{r}$. Similarly, the non-correlated disorder in the nearest-neighbor hopping is introduced by letting $t_{ij}$ be random and uniformly distributed (independently of couple of neighboring sites $\langle i,j\rangle$) between $t-t_{r}$ and $t+t_{r}$. The presence of each type of disorder has quite similar effect to the density of states [see the numerical results with different magnitude of disorders in Fig. 
\ref% {dos_ac_randomdisorder} (a) and (c) for the random on-site potentials ($% v_{r}/t=0.2$, $0.5$ and $1$) and random hoppings ($t_{r}/t=0.1$, $0.3$ and $% 0.5$) respectively]. The spectrum is smeared starting from the Van Hove singularities at $\left\vert E\right\vert =t$, and the smeared region expands around their vicinal areas as the strength of the disorder is increased, whereas the spectrum around the vicinal region of the neutrality point keeps unaffected unless the disorder is too strong. As the optical conductivity is proportional to the density of states of the occupied and unoccupied states, one expects a peak in the spectrum of the optical conductivity at the energy $\omega \approx 2t$, which corresponds to particle-hole excitations between states of the valence band with energy $% E\approx -t$ and states of the conduction band with energy $E\approx t$.\cite% {YRK11} These processes contribute to the optical conductivity with a strong spectral weight due to the enhanced density of states at the Van Hove singularities of the $\pi $-bands. Because we are considering a full $\pi$-band tight-binding model for our calculations, this peak is also present in our results for the optical conductivity, as it is evident in Figs. \ref{dos_ac_randomdisorder}(b) and (d) at $\omega/t\approx 2$, in qualitative agreement with recent experimental results.\cite{MSH11} Notice that the height of the peak is sensitive to the presence of disorder, getting more and more smeared as the strength of disorder is increased. On the other hand, for this kind of disorder, for which there is no big change in the DOS around the Dirac point, one expects that the low energy spectrum of the optical conductivity should be robust for small disorder, i.e., the optical conductivity should follow the same spectrum as the clean sample without any disorder. These expectations are exactly what we observed in the numerical results of $\sigma \left( \omega \right) $ shown in the insets of Fig. \ref{dos_ac_randomdisorder} (b) and (d). This is indeed the part of the spectrum that can be accounted for within the continuum (Dirac cone) approximation. We can conclude that the non-correlated random disorder in the on-site potentials or hopping integrals have almost no effect on the electronic properties (density of states and AC conductivity) in the low energy part of the spectrum unless the disorder is too large. On the other hand, the high energy inter-band processes between states belonging to the Van Hove singularities of the valence and conduction bands are quite sensitive to the strength of these two kinds of disorder. \subsection{Random Distributed Vacancies or Hydrogen Impurities} Next, we consider the influence of two other types of defects on graphene, namely, vacancies and hydrogen impurities. Introducing vacancies in a graphene sheet will create a zero energy mode (midgap state), effect that has been anticipated in many theoretical works,\cite% {Peres2006,Pereira2006,Pereira2008,YRK10} and which has been recently observed experimentally by means of scanning tunneling spectroscopy (STM) measurements.\cite{Ugeda10} It is shown that the number of midgap states increases with the concentration of the vacancies \cite{YRK10}% , and the inclusion of vacancies brings an increase of spectral weight to the surrounding of the Dirac point ($E=0)$ and smears the van Hove singularities.\cite{Peres2006,Pereira2008,YRK10} This is in fact the behavior found in Fig. 
\ref{dos_ac_randomdisorder}(e) for the DOS of graphene with different concentrations of vacancies $n_{x}$, where the numerical results for $n_{x}=1\%$, $5\%$ and $10\%$ are shown and compared to the density of states of clean graphene. The presence of hydrogen impurities, which are introduced by the formation of a chemical bond between a carbon atom from the graphene sheet and a carbon/oxygen/hydrogen atom from an adsorbed organic molecule (CH$_{3}$, C$_{2}$H$_{5}$, CH$_{2}$OH, as well as H and OH groups), has a quite similar effect on the electronic structure and transport properties of graphene.\cite{WK10,YRK10} The adsorbates are described by the Hamiltonian $H_{imp}$ in Eq.~(\ref{Hamiltonian0}). The band parameters $V\approx 2t$ and $\epsilon _{d}\approx -t/16$ are obtained from \textit{ab initio} density functional theory (DFT) calculations.\cite{WK10} Following Refs.~\onlinecite{WK10,YRK10}, we refer to these adsorbates as hydrogen atoms, but the parameters for the organic groups are in fact almost the same.\cite{WK10} As we can see from Fig.~\ref{dos_ac_randomdisorder}(g), small concentrations of hydrogen impurities have an effect on the density of states of graphene similar to that of the same concentration of vacancies. Hydrogen adatoms also lead to zero modes and to the quasilocalization of the low-energy eigenstates, as well as to a smearing of the Van Hove singularities. The shift of the central peak of the density of states with respect to the Dirac point in the case of hydrogen impurities is due to the nonzero (negative) on-site potential $\epsilon _{d}$. The similarity in the density of states leads to similar optical spectra for graphene with random vacancies or hydrogen adatoms, as can be seen in Fig.~\ref{dos_ac_randomdisorder}(f) and (h). In the high and intermediate energy part of the spectrum one notices, apart from the smearing of the $\omega\approx 2t$ peak due to the renormalization of the Van Hove singularities, the appearance of a new peak at an energy $\omega\approx t$. This peak is associated with optical transitions between the newly formed midgap states (with energy $E\approx 0$) and the states of the Van Hove singularities (with energy $E\approx t$). Notice that, contrary to the $\omega\approx 2t$ peak, the height of this $\omega\approx t$ peak grows with the strength of disorder, due to the enhancement of the DOS at the Dirac point. Therefore, we expect that this peak should be observable in optical spectroscopy measurements of graphene samples with a sufficient amount of resonant scatterers. In the low energy part of the spectra, the new structure of the DOS around the Dirac point leads to a modulation of the infrared conductivity, as can be seen in the insets of Figs.~\ref{dos_ac_randomdisorder}(f) and (h). The lower peaks, which in Figs.~\ref{dos_ac_randomdisorder}(f) and (h) correspond to a conductivity $\sigma \approx 0.9\sigma _{0}$ for the different concentrations of impurities, might originate from excitations involving states surrounding the zero modes (the central high peak in the density of states). At slightly higher energies there is a new set of peaks that can be associated with processes involving states at the boundaries of the midgap states. The optical conductivity in the region between these two peaks is in general smaller than in clean graphene, which can be due to the fact that the midgap states are quasilocalized.
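To make the disorder models of this section concrete, the following minimal Python sketch shows how one may generate the corresponding disorder realizations on a small periodic honeycomb lattice: on-site potentials drawn uniformly from $[-v_{r},v_{r}]$, nearest-neighbor hoppings drawn uniformly from $[t-t_{r},t+t_{r}]$, and vacancies implemented by removing a random fraction $n_{x}$ of the sites. It is only an illustration, not the code used for the results above; the lattice indexing convention, the system size, and the parameter values are our own assumptions.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Small periodic honeycomb lattice: unit cell (m, n) with two sites, s = 0 (A), 1 (B).
M = Ncells = 64
nsite = 2 * M * Ncells
idx = lambda m, n, s: 2 * ((m % M) * Ncells + (n % Ncells)) + s

t, v_r, t_r, n_x = 1.0, 0.5, 0.3, 0.01  # hopping, disorder strengths, vacancy fraction

# Diagonal (on-site) disorder: v_i uniform in [-v_r, +v_r], independently on each site.
v = rng.uniform(-v_r, v_r, nsite)

# Off-diagonal disorder: t_ij uniform in [t - t_r, t + t_r], independently on each bond.
rows, cols, vals = [], [], []
for m in range(M):
    for n in range(Ncells):
        i = idx(m, n, 0)
        # the three nearest neighbours of an A site in this indexing convention
        for j in (idx(m, n, 1), idx(m - 1, n, 1), idx(m, n - 1, 1)):
            tij = t + rng.uniform(-t_r, t_r)
            rows += [i, j]; cols += [j, i]; vals += [-tij, -tij]

H = sp.csr_matrix((vals, (rows, cols)), shape=(nsite, nsite)) + sp.diags(v)

# Vacancies: the simplest implementation removes the corresponding rows and columns.
keep = np.sort(rng.choice(nsite, size=int((1 - n_x) * nsite), replace=False))
H_vac = H[keep][:, keep]
print(H.shape, H_vac.shape)
\end{verbatim}
Hydrogen-like adsorbates could be added along the same lines by enlarging the basis with one extra orbital per host carbon atom, with on-site energy $\epsilon _{d}$ and hopping $V$ to its host site, as described by $H_{imp}$.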
\section{Correlated Disorders}
\label{Sec:CD}
\subsection{Gaussian Potentials and Gaussian Hoppings}
As discussed in the previous section, a change of the on-site potential can be regarded as a local chemical potential shift for the Dirac fermions. If the random potentials are too large, characteristics of the graphene band structure such as the Dirac points or the Van Hove singularities can disappear completely, and the whole spectrum becomes relatively flat over the whole energy range.\cite{YRK10b} Therefore, in order to introduce large values of the random potentials while keeping a relatively similar spectrum, in this section we use small concentrations of correlated Gaussian potentials, defined as \cite{LMC08,YRK10b}
\begin{equation}
v_{i}=\sum_{k=1}^{N_{imp}^{v}}U_{k}\exp \left( -\frac{\left\vert \mathbf{r}_{i}-\mathbf{r}_{k}\right\vert ^{2}}{2d^{2}}\right) , \label{vgaussian}
\end{equation}
where $N_{imp}^{v}$ is the number of Gaussian centers, which are chosen to be randomly distributed over the carbon atoms ($\mathbf{r}_{k}$), $U_{k}$ is uniformly random in the range $[-\Delta _{v},\Delta _{v}]$, and $d$ is interpreted as the effective potential radius. The typical values of $d$ used in our model are $d=0.65a$ and $5a$ for short- and long-range Gaussian potentials, respectively. Here $a\approx 1.42$~\r{A} is the carbon-carbon distance in single-layer graphene. The value of $N_{imp}^{v}$ is characterized by the ratio $P_{v}=N_{imp}^{v}/N$, where $N$ is the total number of carbon atoms in the sample. As one can see from Fig.~\ref{dos_ac_cluster}(a), in the presence of locally strong disorder ($\Delta _{v}=3t$ and $t$ for short- and long-range Gaussian potentials, respectively) the whole spectrum of the DOS is quite similar to the case of clean graphene, but with the emergence of states in the vicinity of the Dirac point and a smearing of the Van Hove singularities. This kind of disorder leads to regions of the graphene membrane where the Dirac point is locally shifted to the electron ($U_{k}<0$) or to the hole ($U_{k}>0$) side with the same probability, raising the DOS at zero energy. The final spectrum is similar to the one of clean graphene but with a series of electron-hole puddles which are formed at the maxima and minima of the potential. The enhancement of the DOS around the Dirac point opens the possibility for new excitations in the low energy part of the spectrum, as compared to the clean case, as can be seen in Fig.~\ref{dos_ac_cluster}(b). For the cases we consider, the presence of long-range Gaussian potentials changes the low energy optical spectrum completely, with the emergence of a new peak around $\omega \approx 0.15t$. The optical conductivity in the region $\omega <0.24t$ is larger than in clean graphene but becomes smaller for $\omega >0.24t$. The increase of the conductivity might have its origin in possible excitations between electron and hole puddles. Indeed, the renormalization of the spectrum obtained by considering long-range Gaussian potentials leads to a larger optical contribution than for short-range Gaussian potentials, which yield infra-red spectra much closer to that of a clean graphene membrane.
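As an illustration of Eq.~(\ref{vgaussian}), the Python sketch below generates correlated Gaussian on-site potentials from a given set of atomic positions. The function name, the toy positions, and the default parameters are illustrative assumptions rather than the actual implementation, and the correlated hopping disorder introduced next can be generated in the same way.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
a = 1.42  # carbon-carbon distance in angstrom

def gaussian_potential(pos, P_v=0.005, Delta_v=1.0, d=5 * 1.42):
    """Correlated on-site potentials v_i = sum_k U_k exp(-|r_i - r_k|^2 / (2 d^2)).

    pos     : (N, 2) array of atomic positions
    P_v     : ratio N_imp^v / N of Gaussian centers
    Delta_v : U_k drawn uniformly from [-Delta_v, +Delta_v]
    d       : effective potential radius (here the long-range choice d = 5a)
    """
    n_imp = max(1, int(P_v * len(pos)))
    centers = pos[rng.choice(len(pos), n_imp, replace=False)]
    U = rng.uniform(-Delta_v, Delta_v, n_imp)
    d2 = ((pos[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    return (U[None, :] * np.exp(-d2 / (2.0 * d ** 2))).sum(axis=1)

# toy set of positions, only to exercise the function (not a real honeycomb lattice)
pos = rng.uniform(0.0, 100 * a, size=(2000, 2))
v = gaussian_potential(pos)
print(v.mean(), v.std())
\end{verbatim}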
Locally strong disorder in the hopping between carbon atoms is introduced in a similar way as the correlated potentials, i.e., with a distribution of the nearest-neighbor hopping parameter given by\cite{YRK10b}
\begin{equation}
t_{ij}=t+\sum_{k=1}^{N_{imp}^{t}}T_{k}\exp \left( -\frac{\left\vert \mathbf{r}_{i}+\mathbf{r}_{j}-2\mathbf{r}_{k}\right\vert ^{2}}{8d_{t}^{2}}\right) , \label{tgaussian}
\end{equation}
where $N_{imp}^{t}$ is the number of Gaussian centers ($\mathbf{r}_{k}$), $T_{k}$ is uniformly random in the range $[-\Delta _{t},\Delta _{t}]$, and $d_{t}$ is interpreted as the effective screening length. The typical values of $d_{t}$ are the same as for the Gaussian potential, i.e., $d_{t}=0.65a$ and $5a$ for short- and long-range Gaussian random hopping, respectively, and the value of $N_{imp}^{t}$ is characterized by the ratio $P_{t}=N_{imp}^{t}/N$. Numerical results for the DOS and optical conductivity of graphene with short- ($\Delta _{t}=3t,$ $d_{t}=0.65a$) and long-range ($\Delta _{t}=1t,$ $d_{t}=5a$) Gaussian hoppings are shown in Fig.~\ref{dos_ac_cluster}(c-d). This kind of disorder accounts for the effect of substitutional impurities like B or N instead of C, or of local distortions of the membrane. Concerning the physics around the neutrality point, in this case the Dirac point remains unchanged, although there is a local renormalization of the slope of the band. As a consequence, the Fermi velocity around the Dirac point is locally increased (where $T_{k}>0$) or decreased (where $T_{k}<0$). However, no midgap states are created by this kind of disorder, and the DOS remains quite similar to the one of a clean graphene layer, as can be seen in Fig.~\ref{dos_ac_cluster}(c). In particular, the absence of an impurity band at $E\approx 0$ means that the optical conductivity presents only slight deviations from the clean case. This can be seen in Fig.~\ref{dos_ac_cluster}(d), where (apart from the smearing of the Van Hove peak) the optical spectrum, especially in the infra-red region, remains practically the same as in the absence of disorder.
\subsection{Vacancy Clusters and Hydrogen Clusters}
\begin{figure}[t]
\begin{center}
\mbox{ \includegraphics[width=4cm]{vacancyx0_02.pdf} \includegraphics[width=4cm]{resonantximp0_02.pdf} }
\mbox{ \includegraphics[width=4cm]{vacancyrandom3x0_02.pdf} \includegraphics[width=4cm]{resonantrandom3ximp0_02.pdf} }
\mbox{ \includegraphics[width=4cm]{vacancyfix3x0_02.pdf} \includegraphics[width=4cm]{resonantfix3ximp0_02.pdf} }
\end{center}
\caption{(Color online) Sketch of a graphene sheet with vacancies (left panels) or hydrogen adatoms (right panels). The vacancies are shown as missing carbon atoms, whereas the hydrogen adatoms are highlighted in red. From top to bottom: the resonant impurities are distributed according to formation I ($R=0$), II ($0\leq R\leq 3a$) and III ($R=3a$), as described in the text. For illustrative purposes, the size of the sample shown in this sketch is $60\times 40$, and the concentration of impurities is approximately equal to $2\%$.}
\label{figcluster}
\end{figure}
Correlated resonant impurities are introduced by the formation of groups of vacancies or adsorbed hydrogen atoms (see Fig.~\ref{figcluster}). The center of each vacancy or hydrogen cluster ($\mathbf{r}_{c}$) is randomly distributed over the honeycomb lattice sites, with equal probability on both sublattices A and B.
Each site ($i$) whose distance to one of the centers ($R\equiv \left\vert \mathbf{r-r}_{c}\right\vert $) is smaller than a certain value ($R_{c}$) is assumed to be part of the cluster, i.e., to be a vacancy or to adsorb a hydrogen atom. We further allow the radius of the resonant clusters to vary within the sample, which permits a graphene layer with clusters of impurities of different sizes. This means that the value of $R_{c}$ can either be different for each resonant cluster and randomly distributed up to a maximum value, or be kept fixed for all the clusters in the sample. We want to emphasize that, as the center of a cluster is located on a particular sublattice A or B, the formation of the cluster does not preserve the sublattice symmetry and can therefore lead to the appearance of midgap states.
\begin{figure*}[t]
\begin{center}
\mbox{ \includegraphics[width=7cm]{eigenxfix0.pdf} \includegraphics[width=7cm]{eigenxfix01.pdf} }
\mbox{ \includegraphics[width=7cm]{eigenximpfix0.pdf} \includegraphics[width=7cm]{eigenximpfix01.pdf} }
\end{center}
\caption{(Color online) Contour plot of the amplitudes of the quasieigenstates at energy $E=0$ or $E=0.1094t$. The radius of the resonant clusters is fixed at $R_{c}=5a$.}
\label{quasieigenstates}
\end{figure*}
First, in Fig.~\ref{dos_ac_cluster}(e) and (g), we compare the density of states for the same total number of resonant impurities (vacancies or hydrogen adatoms) but with different kinds of formations. We consider three different situations, i.e., randomly distributed uncorrelated single impurities (formation I), and randomly distributed correlated clusters with varied cluster radius (formation II) or with fixed cluster radius (formation III). The different structures are sketched in Fig.~\ref{figcluster}. Notice that formation I is a limiting case of formation III in which all the cluster radii are zero. As we can see from the results of the simulations, the number of midgap states is largest in the case of uncorrelated single resonant impurities, and smallest for the case of resonant clusters with fixed radius. This is expected since the midgap states are quasilocalized around the vacancies or around the carbon atoms which adsorb hydrogen atoms.\cite{Peres2006,Pereira2006,Pereira2008,YRK10} Therefore, for the same concentration of impurities, the number of midgap states grows with the \textit{isolation} of the impurities in small clusters. Something similar happens for the case of hydrogen clusters. This can be understood by looking at Fig.~\ref{quasieigenstates}, where we present contour plots of the amplitudes of quasieigenstates at the Dirac point or outside the midgap region. The quasieigenstate $\left\vert \Psi \left( \varepsilon \right) \right\rangle $ is a superposition of the degenerate eigenstates with the same eigenenergy $\varepsilon $, obtained by the Fourier transformation of the wave function at different times\cite{YRK10}
\begin{equation}
\left\vert \Psi \left( \varepsilon \right) \right\rangle =\frac{1}{2\pi }\int_{-\infty }^{\infty }dt\,e^{i\varepsilon t}\left\vert \varphi \left( t\right) \right\rangle ,
\end{equation}
where $\left\vert \varphi \left( t\right) \right\rangle =e^{-iHt}\left\vert \varphi \right\rangle $ is the time evolution of the initial state $\left\vert \varphi \right\rangle $ defined in Eq.~(\ref{Eq:phi0}) (a schematic numerical implementation is sketched below).
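The following Python sketch illustrates the construction of the quasieigenstates and of the density of states of Eq.~(\ref{Eq:DOS}) from the time-evolved random state. For transparency it uses a small dense toy Hamiltonian and exact diagonalization for the time evolution, whereas the actual calculations use the Chebyshev polynomial expansion of $e^{-iHt}$ on sparse matrices with millions of sites; normalization factors and the sign conventions of the discrete Fourier transform are glossed over.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Toy dense Hamiltonian standing in for the sparse tight-binding H (illustration only).
n = 400
H = rng.normal(size=(n, n)); H = (H + H.T) / np.sqrt(8 * n)

# Random-phase initial state |phi> = sum_i a_i |i>, normalized.
phi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
phi0 /= np.linalg.norm(phi0)

# Time evolution |phi(t)> = exp(-iHt)|phi>.  For the large sparse H of the text one
# would use the Chebyshev expansion of exp(-iHt); here we simply diagonalize.
E, V = np.linalg.eigh(H)
c = V.conj().T @ phi0
dt, nt = 0.05, 4096
times = dt * np.arange(nt)
phi_t = V @ (np.exp(-1j * np.outer(E, times)) * c[:, None])     # shape (n, nt)

# DOS, Eq. (Eq:DOS): rho(eps) ~ Fourier transform of <phi|phi(t)>
# (up to discretization, sign and normalization conventions of the FFT).
corr = phi0.conj() @ phi_t
dos = np.abs(np.fft.fftshift(np.fft.fft(corr)))
eps = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nt, d=dt))

# Quasieigenstate at energy eps0: |Psi(eps0)> ~ sum_t e^{i eps0 t} |phi(t)>.
eps0 = 0.0
psi = (phi_t * np.exp(1j * eps0 * times)).sum(axis=1)
weight = np.abs(psi) ** 2 / (np.abs(psi) ** 2).sum()            # site-resolved weight
print(eps[dos.argmax()], weight.max())
\end{verbatim}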
Although the quasieigenstate is not exactly an energy eigenstate (unless the corresponding level at energy $\varepsilon $ is non-degenerate), we can still use the distribution of its amplitude in real space to verify the quasilocalization of the zero modes in the presence of random impurities,\cite{YRK10} or to obtain the DC conductivity at certain energies or carrier densities.\cite{YRK10,WK10,YRK10b} As we can see from Fig.~\ref{quasieigenstates}, the contour plots of the quasieigenstates of graphene with vacancy and hydrogen clusters are quite similar, i.e., the amplitudes on the carbon atoms which adsorb a hydrogen atom are almost zero, just as if they were vacancies. Furthermore, at the Dirac point (left panels of Fig.~\ref{quasieigenstates}, corresponding to $E=0$) the quasieigenstates are semi-localized around the edges of the clusters (see the red color in the regions around the clusters). On the other hand, for energies above the impurity band, the states are not localized around the resonant clusters, and the amplitudes of the quasieigenstates are more or less uniformly distributed over the sample, except within the clusters, where the amplitudes are zero. Therefore, as we have discussed above, for a given concentration of impurities, the number of carbon atoms which are located around an impurity is larger in formation I than in formation III. The number of zero modes is then also larger in I than in III, so that the spectra of the DOS and of the optical conductivity are closer to the ones of clean graphene when the disorder is concentrated in a small number of big clusters (formation III) than when it is spread into a large number of small clusters (formation I), as can be seen in Figs.~\ref{dos_ac_cluster}(e)-(h). Finally, notice that the possibility of new excitations between the impurity band and the carrier bands leads to a modulation of the optical conductivity (as compared to the clean membrane) whose peak structure depends on the renormalized DOS and band dispersion of each case.
\section{Optical conductivity of doped graphene}
\label{Sec:Doped}
\begin{figure}[t]
\begin{center}
\mbox{ \includegraphics[width=7cm]{ac_slg_miua.pdf} }
\mbox{ \includegraphics[width=7cm]{ac_slg_miub.pdf} }
\end{center}
\caption{(Color online) Simulation results for the optical conductivity of doped graphene with different kinds of non-correlated disorder. The chemical potential is $\protect\mu =0.1t$ in (a) and $0.2t$ in (b).}
\label{ac_slg_miu}
\end{figure}
So far we have discussed the effects of disorder on the optical response of undoped graphene. In this section, we study the optical conductivity of graphene for finite values of the chemical potential, taking into account the effect of disorder.
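As an illustration of how the chemical potential enters the calculation through the Fermi-Dirac operator $f(H)$ in Eq.~(\ref{gabw2}), the following Python sketch evaluates the regular part of the conductivity for a small toy Hamiltonian by exact diagonalization. The current operator, the units, and the overall normalization (including the sample-area factor $\Omega$) are illustrative assumptions; the real calculations instead rely on the Chebyshev representation of $f(H)$ and $e^{-iHt}$ for sparse matrices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

# Toy dense Hamiltonian and a stand-in current operator J = i[H, X] (illustration only;
# in the text H is the sparse tight-binding Hamiltonian and J follows from the lattice).
n = 300
H = rng.normal(size=(n, n)); H = (H + H.T) / np.sqrt(8 * n)
X = np.diag(np.linspace(0.0, 1.0, n))
J = 1j * (H @ X - X @ H)

beta, mu = 40.0, 0.1          # inverse temperature and chemical potential (toy units)
E, V = np.linalg.eigh(H)
f = V @ np.diag(1.0 / (np.exp(beta * (E - mu)) + 1.0)) @ V.conj().T  # Fermi-Dirac f(H)

phi = rng.normal(size=n) + 1j * rng.normal(size=n)
phi /= np.linalg.norm(phi)

dt, nt, eta = 0.05, 4096, 0.02          # time step, number of steps, damping epsilon
times = dt * np.arange(nt)
U = V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T                   # exp(-iH dt)

# C(t) = 2 Im <phi| f(H) J(t) [1 - f(H)] J |phi>  with  J(t) = e^{iHt} J e^{-iHt}
left, right = f @ phi, (np.eye(n) - f) @ (J @ phi)
corr = np.empty(nt)
for k in range(nt):
    corr[k] = 2.0 * np.imag(np.vdot(left, J @ right))
    left, right = U @ left, U @ right   # advance both vectors by one time step

# Regular part, Eq. (gabw2): sigma(w) ~ [(e^{-beta w}-1)/w] Int dt e^{-eta t} sin(wt) C(t)
# (the sample-area factor Omega and overall prefactors are omitted in this sketch).
omegas = np.linspace(0.05, 3.0, 60)
kernel = np.exp(-eta * times)[None, :] * np.sin(np.outer(omegas, times))
sigma = (np.exp(-beta * omegas) - 1.0) / omegas * (kernel @ corr) * dt
print(sigma[:5])
\end{verbatim}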
At zero temperature, a clean sheet of gated (doped) graphene has zero optical conductivity in the region $\omega <2\mu $, and a universal conductivity $\sigma(\omega)=\sigma _{0}$, due to optically active inter-band excitations through the Dirac point, for energies above the threshold $\omega>2\mu $.\cite{Ando2002,Stauber2008,Falkovsky2007,Kuzmenko2008,YRK10} In the presence of disorder, the broadening of the bands as well as the appearance of possible midgap states leads to more complicated selection rules for the optical transitions, making it possible to have excitations in the \textit{forbidden region} $0<\omega<2\mu$, as observed experimentally.\cite{LB08} In this section, we are interested in how the different kinds of disorder considered in the previous sections affect the optical spectrum of doped graphene.
\begin{figure*}[t]
\begin{center}
\mbox{ \includegraphics[width=7cm]{ac_x_miu.pdf} \includegraphics[width=7cm]{ac_ximp_miu.pdf} }
\mbox{ \includegraphics[width=7cm]{ac_x_c.pdf} \includegraphics[width=7cm]{ac_ximp_c.pdf} }
\end{center}
\caption{(Color online) Optical conductivity of doped graphene with resonant scatterers. Upper panels: fixed concentration of impurities, $\protect\sigma(\protect\omega)$ for different values of $\protect\mu$. Lower panels: fixed chemical potential $\protect\mu$, $\protect\sigma(\protect\omega)$ for different concentrations of impurities.}
\label{Fig:Doped-resonant}
\end{figure*}
In Fig.~\ref{ac_slg_miu} we compare the numerical results for the optical conductivity of doped graphene, considering four different types of non-correlated disorder (random potentials, random hoppings, vacancies and hydrogen adatoms) as well as clean graphene. First, one notices that the effect of doping is not relevant in the high energy part of the spectrum ($\omega\gg \mu$), where $\sigma(\omega)$ follows the same behavior discussed in Secs.~\ref{Sec:NCD} and \ref{Sec:CD}, with a peak at $\omega\approx 2t$ corresponding to particle-hole inter-band transitions between states of the Van Hove singularities. However, the spectrum changes dramatically in the infra-red region, as shown in the insets of Fig.~\ref{ac_slg_miu}. Therefore, from now on we focus on the effect of disorder on this low energy part of the spectrum. One notices that for all kinds of disorder there is a peak in $\sigma(\omega)$ close to $\omega=0$, whereas at slightly higher energies $\sigma(\omega)$ drops to almost zero for the case of non-resonant scatterers (red and green curves), while there is still a non-zero background contribution when resonant scatterers are considered (light and dark blue curves). This can be understood as follows: in all cases, disorder leads to a broadening of the bands, which allows for intra-band transitions between states surrounding the Fermi level. However, we have seen that resonant impurities create an impurity band at the Dirac point, with the corresponding peak in the DOS at $E=0$, whereas non-resonant impurities are not so effective in creating midgap states. Therefore, the background contribution that we find in Fig.~\ref{ac_slg_miu}(a)-(b) between $0<\omega<2\mu$ for samples with resonant scatterers is due to transitions between the newly formed impurity band and the conduction band.
Taking into account that resonant impurities are believed to be the main source of scattering in graphene,\cite{WK10,MM10,NG10} our results suggest that this kind of impurity could be behind the background contribution to the optical conductivity observed experimentally.\cite{LB08,Mak2008,ChenCF2011,Horng2011} Finally, notice that the peak observed in $\sigma(\omega)$ for the case of resonant impurities at the energy $\omega\approx \mu$ is associated with transitions between the above-discussed impurity band and states at the Fermi level. To gain more insight into the effect of disorder on the optical conductivity of doped graphene, in Fig.~\ref{Fig:Doped-resonant} we show $\sigma (\omega )$ for different values of $\mu $ at a fixed concentration of impurities (upper panels), and $\sigma (\omega )$ for different concentrations of impurities at fixed $\mu $ (lower panels). In the first case, the main feature is that the conductivity increases as the doping decreases, in qualitative agreement with the experimental results.\cite{LB08} When the chemical potential is fixed and the concentration of impurities changes (bottom panels), one observes that the conductivity in the region $0<\omega <2\mu $ grows with $n_{x(i)}$, from $\sigma (\omega )=0$ for a clean sample to $\sigma (\omega )\approx 0.4\sigma _{0}$ for the largest concentration of impurities considered ($n_{x(i)}=0.5\%$). If we compare to recent experiments, we notice that $0.25\%$ of resonant impurities would lead to a background contribution similar to the one reported by Li \textit{et al.} for graphene on SiO$_{2}$,\cite{LB08} whereas only $\sim 0.1\%$ of resonant impurities would be necessary to quantitatively reproduce the results of Chen \textit{et al.} for graphene doped with a high-capacitance ion-gel gate dielectric.\cite{ChenCF2011} Finally, similar results are obtained for a sample with correlated on-site potential disorder distributed in the form of Gaussian clusters, as shown in Fig.~\ref{Fig:Doped-Gaussian}. Therefore, we conclude that several kinds of disorder (resonant scatterers and correlated impurities) can induce a finite conductivity in the infra-red region of the spectrum, as observed experimentally. It is the whole set of data on DC and AC transport from which one may infer the dominant type of defects in real graphene.
\begin{figure*}[t]
\begin{center}
\mbox{ \includegraphics[width=7cm]{ac_vrgaussianlong_miu.pdf} \includegraphics[width=7cm]{ac_vrgaussianshort_miu.pdf} }
\mbox{ \includegraphics[width=7cm]{ac_vrgaussianlong_c.pdf} \includegraphics[width=7cm]{ac_vrgaussianshort_c.pdf} }
\end{center}
\caption{(Color online) Same as Fig.~\protect\ref{Fig:Doped-resonant}, but for doped graphene with on-site potential disorder distributed in Gaussian clusters.}
\label{Fig:Doped-Gaussian}
\end{figure*}
\section{Conclusion and Discussion}
\label{Sec:Conclusions}
We have presented a detailed theoretical study of the optical conductivity of graphene with different kinds of disorder, such as resonant impurities, a random distribution of on-site potentials, or a random renormalization of the nearest-neighbor hopping parameter (which can account for the effect of substitutional defects). Furthermore, we have considered the possibility for the impurities to be correlated or non-correlated.
For all types of disorder considered, the high energy peak at $\omega\approx 2t$, due to inter-band excitations between states of the Van Hove singularities of the valence and the conduction bands, is always sensitive to disorder, getting smeared out in proportion to the strength of the disorder. On the other hand, the low energy part of the optical spectrum depends strongly on the type of disorder, as well as on its strength and concentration. In general, for undoped graphene in the presence of weak disorder in the on-site potentials or in the nearest-neighbor hopping between the carbon atoms, the characteristics of the single-particle Dirac cone approximation are clearly present in the spectrum, and $\sigma(\omega)\approx\sigma_0$ at energies for which the continuum approximation applies. This is also true when we consider Gaussian hopping parameters. On the other hand, in the presence of long-range Gaussian potentials, the local shifts of the Dirac point lead to electron-hole puddles and to the emergence of states in the vicinity of the Dirac point. As a consequence, we observe an enhancement of the optical conductivity in the infra-red part of the spectrum. Interestingly, in the presence of resonant impurities (vacancies or hydrogen adatoms) midgap states appear which are quasilocalized around the impurities, and whose number is proportional to the number of carbon atoms located around the impurities. Completely randomly distributed (non-correlated) resonant impurities lead to the strongest enhancement of zero modes (seen as a prominent peak in the DOS at zero energy) and also to the largest effect on the optical spectrum. In fact, for a large enough amount of resonant impurities, we obtain a new peak in the optical conductivity at an energy $\omega\approx t$, which is associated with optical transitions between the midgap states and the states of the Van Hove singularities. When, for a given concentration of impurities, the impurities merge together forming clusters instead of staying uncorrelated, the influence of disorder on the electronic properties becomes smaller, especially if these clusters form large islands. Finally, we have considered the effect of doping on the spectrum. Whereas for clean graphene only inter-band processes with an energy larger than $\omega=2\mu$ are optically active, the presence of disorder leads to a low energy peak in $\sigma(\omega)$ (associated with transitions near the Fermi level), plus a possible spectral weight in the region $0<\omega<2\mu$ for types of disorder that create an impurity band at zero energy. Most importantly, we have found that a small amount of resonant impurities, $\sim 0.1$--$0.2\%$, leads to a background contribution to $\sigma(\omega)$ between $0<\omega<2\mu$ in qualitative and quantitative agreement with recent spectroscopy measurements.
\section{Acknowledgement}
The authors thank E. Cappelluti and F. Guinea for useful discussions. This research is supported by the Stichting Fundamenteel Onderzoek der Materie (FOM), the Netherlands National Computing Facilities foundation (NCF), the EU-India FP-7 collaboration under MONAMI, and the grant CONSOLIDER CSD2007-00010.
\bibliographystyle{apsrev4-1}
This is also the case in our simulation results.}}} For non-interacting electrons, the regular part is \cite{Ishihara1971,YRK10} \begin{eqnarray} \sigma _{\alpha \beta }\left( \omega \right) &=&\lim_{\varepsilon \rightarrow 0^{+}}\frac{e^{-\beta \omega }-1}{ \omega \Omega \int_{0}^{\infty }e^{-\varepsilon t}\sin \omega t \notag \label{gabw2} \\ &&\times 2\text{Im}\left\langle \varphi |f\left( H\right) J_{\alpha }\left( t\right) \left[ 1-f\left( H\right) \right] J_{\beta }|\varphi \right\rangle dt, \notag \\ && \end{eqnarray (we put $\hbar =1$) where $\beta =1/k_{B}T$ is the inverse temperature, $\Omega $ is the sample area, $f\left( H\right) =1/\left[ e^{\beta \left( H-\mu \right) }+1\right] $ is the Fermi-Dirac distribution operator, $J_{\alpha }\left( t\right) =e^{iHt}J_{\alpha }e^{-iHt}$ is the time-dependent current operator in the $\alpha $ ($=x$ or $y$) direction, and $\left\vert \varphi \right\rangle $ is a random superposition of all the basis states in the real space, i.e.,\cite{HR00,YRK10} \begin{equation} \left\vert \varphi \right\rangle =\sum_{i}a_{i}c_{i}^{\dagger }\left\vert 0\right\rangle , \label{Eq:phi0} \end{equation where $a_{i}$ are random complex numbers normalized as $\sum_{i}\left\vert a_{i}\right\vert ^{2}=1$. The time evolution operator $e^{-iHt}$ and the Fermi-Dirac distribution operator $f\left( H\right) $ can be obtained by the standard Chebyshev polynomial decomposition.\cite{YRK10} The density of states is calculated by the Fourier transform of the time-dependent correlation functions \cite{HR00,YRK10} \begin{equation} \rho \left( \varepsilon \right) =\frac{1}{2\pi }\int_{-\infty }^{\infty }e^{i\varepsilon t}\left\langle \varphi \right\vert e^{-iHt}\left\vert \varphi \right\rangle dt, \label{Eq:DOS} \end{equation with the same initial state $\left\vert \varphi \right\rangle $ defined in Eq.~(\ref{Eq:phi0}). For a more detailed description and discussion of our numerical method we refer to Ref. \onlinecite{YRK10}. In this paper, we fix the temperature to $T=300$K. We use periodic boundary conditions in calculations for both the optical conductivity and the density of states, and the size of the system is $8192\times 8192$ or $4096\times 4096$. \section{Non-Correlated Disorder} \label{Sec:NCD} \subsection{Random on-Site Potentials or Nearest-Neighbor Hopping Parameters} We first consider two different kinds of disorder: random local change of on-site potentials and random renormalization of the hopping, which correspond to the diagonal and off-digonal disorders in the single-layer Hamiltonian Eq.~(\ref{Hamiltonian0}), respectively. The former acts as a local shift of the chemical potential of the Dirac fermions, i.e., shifts locally the Dirac point, and the later rises from the changes of distance or angles between the $p_{z}$ orbitals. In order to introduce the non-correlated disorders in the on-site potentials, we consider that the on-site potential $v_{i}$ is random and uniformly distributed (independently of each site $i$) between the values $-v_{r}$ and $+v_{r}$. Similarly, the non-correlated disorder in the nearest-neighbor hopping is introduced by letting $t_{ij}$ be random and uniformly distributed (independently of couple of neighboring sites $<i,j>$) between $t-t_{r}$ and $t+t_{r}$. The presence of each type of disorder has quite similar effect to the density of states [see the numerical results with different magnitude of disorders in Fig. 
\re {dos_ac_randomdisorder} (a) and (c) for the random on-site potentials ( v_{r}/t=0.2$, $0.5$ and $1$) and random hoppings ($t_{r}/t=0.1$, $0.3$ and 0.5$) respectively]. The spectrum is smeared starting from the Van Hove singularities at $\left\vert E\right\vert =t$, and the smeared region expands around their vicinal areas as the strength of the disorder is increased, whereas the spectrum around the vicinal region of the neutrality point keeps unaffected unless the disorder is too strong. As the optical conductivity is proportional to the density of states of the occupied and unoccupied states, one expects a peak in the spectrum of the optical conductivity at the energy $\omega \approx 2t$, which corresponds to particle-hole excitations between states of the valence band with energy E\approx -t$ and states of the conduction band with energy $E\approx t$.\cit {YRK11} These processes contribute to the optical conductivity with a strong spectral weight due to the enhanced density of states at the Van Hove singularities of the $\pi $-bands. Because we are considering a full $\pi$-band tight-binding model for our calculations, this peak is also present in our results for the optical conductivity, as it is evident in Figs. \ref{dos_ac_randomdisorder}(b) and (d) at $\omega/t\approx 2$, in qualitative agreement with recent experimental results.\cite{MSH11} Notice that the height of the peak is sensitive to the presence of disorder, getting more and more smeared as the strength of disorder is increased. On the other hand, for this kind of disorder, for which there is no big change in the DOS around the Dirac point, one expects that the low energy spectrum of the optical conductivity should be robust for small disorder, i.e., the optical conductivity should follow the same spectrum as the clean sample without any disorder. These expectations are exactly what we observed in the numerical results of $\sigma \left( \omega \right) $ shown in the insets of Fig. \ref{dos_ac_randomdisorder} (b) and (d). This is indeed the part of the spectrum that can be accounted for within the continuum (Dirac cone) approximation. We can conclude that the non-correlated random disorder in the on-site potentials or hopping integrals have almost no effect on the electronic properties (density of states and AC conductivity) in the low energy part of the spectrum unless the disorder is too large. On the other hand, the high energy inter-band processes between states belonging to the Van Hove singularities of the valence and conduction bands are quite sensitive to the strength of these two kinds of disorder. \subsection{Random Distributed Vacancies or Hydrogen Impurities} Next, we consider the influence of two other types of defects on graphene, namely, vacancies and hydrogen impurities. Introducing vacancies in a graphene sheet will create a zero energy modes (midgap state).\cit {Peres2006,Pereira2006,Pereira2008,YRK10} It is shown that the number of midgap states increases with the concentration of the vacancies \cite{YRK10 , and the inclusion of vacancies brings an increase of spectral weight to the surrounding of the Dirac point ($E=0)$ and smears the van Hove singularities.\cite{Peres2006,Pereira2008,YRK10} This is in fact the behavior found in Fig. \ref{dos_ac_randomdisorder} (e) for the DOS of graphene with different concentrations of vacancies $n_{x}$, where the numerical results with $n_{x}=1\%$, $5\%$, $10\%$ are represented and compared to the density of states of clean graphene. 
The presence of hydrogen impurities, which are introduced by the formation of a chemical bond between a carbon atom from the graphene sheet and a carbon/oxygen/hydrogen atom from an adsorbed organic molecule (CH$_{3}$, C _{2}$H$_{5}$, CH$_{2}$OH, as well as H and OH groups) have quite similar effect to the electronic structure and transport properties of graphene.\cit {WK10,YRK10} The adsorbates are described by the Hamiltonian $H_{imp}$ in Eq.~(\ref{Hamiltonian0}). The band parameters $V\approx 2t$ and $\epsilon _{d}\approx -t/16$ are obtained from the \textit{ab initio} density functional theory (DFT) calculations.\cite{WK10} Following Refs. \onlinecite{WK10,YRK10}, we call these impurities as adsorbates hydrogen atoms but actually, the parameters for organic groups are almost the same. \cite{WK10} As we can see from Fig. \ref{dos_ac_randomdisorder} (g), small concentrations of hydrogen impurities have similar effects as the same concentration of vacancies to the density of states of graphene. Hydrogen adatoms also lead to zero modes and the quasilocalization of the low-energy eigenstates, as well as to a smearing of the Van Hove singularities. The shift of the central peak of the density of states with respect to the Dirac point in the case of hydrogen impurities is due to the nonzero (negative) on-site potentials $\epsilon _{d}$. The similarity in the density of states leads to similar optical spectra for graphene with random vacancies or hydrogen adatoms, as it can be seen in Fig. \ref{dos_ac_randomdisorder} (f) and (h). In the high and intermediate energy part of the spectrum it is noticeable, apart from the smearing of the $\omega\approx 2t$ peak due to the renormalization of the Van Hove singularities, the appearance of a new peak at an energy \omega\approx t$. This peak is associated to optical transitions between the newly formed midgap states (with energy $E\approx 0$) and the states of the Van Hove singularities (with energy $E\approx t$). Notice that, contrary to the $\omega\approx 2t$ peak, the height of this $\omega\approx t$ peak grows with the strength of disorder, due to the enhancement of the DOS at the Dirac point. Therefore, we expect that this peak should be observed in optical spectroscopy measurements of graphene samples with sufficient amount of resonant scatterers. {\color{blue} In the low energy part of the spectra, the new structure of the DOS around the Dirac point leads to a modulation of the infrared conductivity, as it can be seen in the insets of Figs. \re {dos_ac_randomdisorder} (f) and (h). The lower peaks, which in Figs. \re {dos_ac_randomdisorder} (f) and (h) corresponds to a conductivity $\sigma \approx 0.9\sigma _{0}\,$ for different concentration of impurities, might have their origin from excitations involving states surrounding the zero modes (central high peak in the density of states). At slightly higher energies there is a new set of peaks that can be associated to processes involving states at the boundaries of the midgap states. The optical conductivities in the region between these two peaks are in general smaller compared to those in clean graphene, what can be due to the fact that the midgap states are quasilocalized states. It is worth to mention that the sharp growth of the conductivity around $\omega =0$ is due to a significant increment in the density of states.}{\color{blue}\footnote{{\color{blue} I'm not sure to have understood your discussion about the low energy part of the spectrum, i.e., the text in blue. 
In particular, what do you mean by \textquotedblright the conductivities raise faster around $\omega =0$ than the clean graphene is because of the significant increment in the density of states\textquotedblright ?. I'll keep thinking on this, but perhaps it would be nice if we ask Misha or Hans to improve this part.------------ It could be that because we are dealing with nonzero temperature, and therefore it is not a step function at $\omega =0$. The increasement in the case of disorder graphene is faster might because of the fact that the number of states which participate in the excitations grows faster than clean graphene, and as the optical conductivity is propotional to the number of excitating states, so it also grows faster than clean graphene.}}} \section{Correlated Disorders} \label{Sec:CD} \begin{figure*}[t] \begin{center} \mbox{ \includegraphics[width=7cm]{dosslg_vrcluster.eps} \includegraphics[width=7cm]{acslg_vrcluster.eps} } \mbox{ \includegraphics[width=7cm]{dosslg_trcluster.eps} \includegraphics[width=7cm]{acslg_trcluster.eps} } \mbox{ \includegraphics[width=7cm]{dosslg_xcluster.eps} \includegraphics[width=7cm]{acslg_xcluster.eps} } \mbox{ \includegraphics[width=7cm]{dosslg_ximpcluster.eps} \includegraphics[width=7cm]{acslg_ximpcluster.eps} } \end{center} \caption{(Color online) Numerical results for the DOS (left panels) and optical conductivity (right panels) of undopped graphene with diffenrent kinds of correlated disorders: (a,b) Gaussian potentials, (c,d) Gaussian hoppings, (e,f) vacancy clusters, and (g,h) hydrogen clusters. The distribution of the clusters of impurities used for the results (e)-(h) are sketched in Fig. \protect\ref{figcluster}.} \label{dos_ac_cluster} \end{figure*} \subsection{Gaussian Potentials and Gaussian Hoppings} As we have discussed in the previous section, the change of on-site potential can be regarded as a local chemical potential shift for the Dirac fermions. If the random potentials are too large, characteristics of the graphene band structure such as the Dirac points or the Van Hove singularities can disappear completely, and the whole spectrum becomes relatively flat with approximate equally distributed density of states over the whole energy range.\cite{YRK10b} Therefore in order to introduce large values of random potentials but keep a relatively similar spectrum, in this section we use small concentrations of correlated Gaussian potentials, defined as \cite{LMC08,YRK10b} \begin{equation} v_{i}=\sum_{k=1}^{N_{imp}^{v}}U_{k}\exp \left( -\frac{\left\vert \mathbf{r _{i}-\mathbf{r}_{k}\right\vert ^{2}}{2d^{2}}\right) , \label{vgaussian} \end{equation where $N_{imp}^{v}$ is the number of the Gaussian centers, which are chosen randomly distributed on the carbon atoms ($\mathbf{r}_{k}$), $U_{k}$ is uniformly random in the range $[-\Delta _{v},\Delta _{v}]$ and $d$ is interpreted as the effective potential radius. The typical values of $d$ used in our model are $d=0.65a$ and $5a$ for short- and long-range Gaussian potential, respectively. Here $a\approx 1.42$\r{A}~ is the carbon-carbon distance in the single-layer graphene. The value of $N_{imp}^{v}$ is characterized by the ratio $P_{v}=N_{imp}^{v}/N$, where $N$ is the total number of carbon atoms of the sample. As one can see from Fig. 
\re {dos_ac_cluster}(a), in the presence of locally strong disorders ($\Delta _{v}=3t$ and $t$ for short and long range Gaussian potentials, respectively) the whole spectrum of DOS is quite similar to the case of clean graphene, but with the emergence of states in the vicinal area around the Dirac point, and also a smearing of the Van Hove singularities. This kind of disorder leads to regions of the graphene membrane where the Dirac point is locally shifted to the electron ($U_{k}<0$) or to the hole ($U_{k}>0$) side with the same probability, rising the DOS at zero energy. The final spectrum is similar to the one of clean graphene but with a series of electron-hole puddles which are formed at the maxima and minima of the potential. \color{blue} The enhancement of the DOS around the Dirac point leads to the possibility for new excitations in the low energy part spectrum, as compared to the clean case, as it can see in Fig. \ref{dos_ac_cluster}(b). The presence of long-range Gaussian potentials change the low energy optical spectrum completely with the emergence of a new peak around $\omega \approx 0.15t$. The optical conductivity in the region $\omega <0.24t$ is larger than in clean graphene but becomes smaller for $\omega >0.24t$. The raise of the conductivity might have its origin on the possible excitations between electron and hole puddles. Indeed, the renormalization of the spectrum obtained by considering long-range Gaussian potentials leads to a higher optical contribution than that for short-range Gaussian potentials, which presents an infra-red spectrum much more close to that of a clean graphene membrane.}{\color{blue} \footnote{{\color{blue} Also, this part in blue should be checked. (Should we give the numbers, $\omega \approx 0.15t$, etc. without knowing their origin?) We still need to understand this spectrum better. By looking at the DOS, I'm not able to say something better. Misha and Hans?? If this kind of disorder really leads to el-hole puddles, I think it would be nice to understand this part better and improve the discussion. -------------- I think we can somewhere add something like: \textit{for the cases we considered}... I think that the value of these $\omega $ should also dependent on the the parameters of the Gaussian potentials we are using. Depending on these paparemeters, $\Delta _{v}$ , $d$\thinspace , and also $p_{v}$, the region of these locally shifted chemical potentials will also be different, and therefore the peak will also be changed. But anyway, it should be related to el-hole puddles, although we can not see from the DOS. (we can not because we use large system with average potentials as zero)}}} The local strong disorder in the hopping between carbon atoms is introduced in a similar way as the correlated potentials, i.e., with a distribution of the nearest-neighbor hopping parameter given by\cite{YRK10b \begin{equation} t_{ij}=t+\sum_{k=1}^{N_{imp}^{t}}T_{k}\exp \left( -\frac{\left\vert \mathbf{ }_{i}+\mathbf{r}_{j}-2\mathbf{r}_{k}\right\vert ^{2}}{8d_{t}^{2}}\right) , \label{tgaussian} \end{equation where $N_{imp}^{t}$ is the number of the Gaussian centers ($\mathbf{r}_{k} ), $T_{k}$ is uniformly random in the range $[-\Delta _{t},\Delta _{t}]$ and $d_{t}$ is interpreted as the effective screening length. 
Similarly, the typical values of $d_{t}$ are the same as for the Gaussian potential, i.e., d_{t}=0.65a$ and $5a$ for short- and long-range Gaussian random hopping, respectively, and the values of $N_{imp}^{t}$ are characterized by the ratio $P_{t}=N_{imp}^{t}/N$. Numerical results for the DOS and optical conductivity of graphene with short- ($\Delta _{t}=3t,$ $d_{t}=0.65a$) and long-range ($\Delta _{t}=1t,$ d_{t}=5a$) Gaussian hoppings are shown in Fig. \ref{dos_ac_cluster}(c-d). This kind of disorder accounts for the effect of substitutional impurities like B or N instead of C, or local distortions of the membrane. Concerning the physics around the neutrality point, in this case the Dirac point remains unchanged although there is a local renormalization of the slope of the band. As a consequence, the Fermi velocity around Dirac is locally increased (when $t_{k}>0$) or decreased (when $t_{k}<0$). However, no midgap states are created by this kind of disorder, and the DOS remains quite similar to the one of a clean graphene layer, as it can be seen in Fig. \ref{dos_ac_cluster}(c). In particular, the absence of an impurity band at $E\approx 0$ makes that the optical conductivity presents only slight deviations as compared to the clean case. This can be seen in Fig. \re {dos_ac_cluster}(d), where (apart from the smearing of the Van Hove peak) the optical spectrum, especially in the infra-red region, remains practically the same as in the absence of disorder. \subsection{Vacancy Clusters and Hydrogen Clusters} \begin{figure}[t] \begin{center} \mbox{ \includegraphics[width=4cm]{vacancyx0.02.eps} \includegraphics[width=4cm]{resonantximp0.02.eps} } \mbox{ \includegraphics[width=4cm]{vacancyrandom3x0.02.eps} \includegraphics[width=4cm]{resonantrandom3ximp0.02.eps} } \mbox{ \includegraphics[width=4cm]{vacancyfix3x0.02.eps} \includegraphics[width=4cm]{resonantfix3ximp0.02.eps} } \end{center} \caption{(Color online) Sketch of a gaphene sheet with vacancies (left pannels) or hydrogen adatoms (right pannels). The vacancies are presented as the missing of the carbon atoms, whereas the hydrogen adatoms are highlight in red color. From top to bottom: the resonant impurites are disbritued as the formation I ($R=0)$, II ($0\leq R\leq 3a$) and III ($R=3a ) as discribed in the text. For illustrative reasons, the size of the sample shown in this sketch is only $60\times 40$, and the concentration of impurities is approximately equal to $2\%$.} \label{figcluster} \end{figure} The correlated resonant impurities are introduced by formation of groups of vacancies or adsorbed hydrogen atoms (see Fig. \ref{figcluster}). The center of the formed vacancy or hydrogen cluster ($\mathbf{r}_{c}$) is randomly distributed over the honeycomb lattice sites, with equal probability on both sublattices A and B. One site ($i$) whose distance to one center ($R\equiv \left\vert \mathbf{r-r}_{c}\right\vert $) is smaller than a certain value ( R_{c}$), is assumed to be part of the cluster, i.e., being a vacancy or adsorbing a hydrogen atom. We further introduce another freedom of the resonant clusters, and it is that their radius can change within the sample, allowing for a graphene layer with cluster of impurities of different size. This means that the value of $R_{c}$ for each resonant cluster can either be different and randomly distributed to a maximum value, or can be kept fixed for all the clusters in the sample. 
We want to emphasize that as the center of the cluster is located on a particular sublattice A or B, the formation of the cluster does not preserve the sublattice symmetry and therefore can lead to the appearance of midgap states. \begin{figure*}[t] \begin{center} \mbox{ \includegraphics[width=7cm]{eigenxfix0.eps} \includegraphics[width=7cm]{eigenxfix0.1.eps} } \mbox{ \includegraphics[width=7cm]{eigenximpfix0.eps} \includegraphics[width=7cm]{eigenximpfix0.1.eps} } \end{center} \caption{(Color online) Contour plot of the amplitudes of quasieigenstates at energy $E=0$ or $E=0.1094t$. The radius of the resonant clusters is fixed at $R_{c}=5a$.} \label{quasieigenstates} \end{figure*} We first compare in Fig. \ref{dos_ac_cluster}(e) and (g) the density of states with the same total number of resonant impurities (vacancies or hydrogen adatoms) but with different kinds of formations. We consider three different situations, i.e., randomly distributed uncorrelated single impurities (formation I), randomly distributed correlated clusters with varied radius of clusters (formation II) or with fixed radius of clusters (formation III). The different structures are sketched in Fig. \re {figcluster}. Notice that the formation I is a limiting case of the formation III with all the radius of clusters being zero. As we can see from the results of the simulations, the number of midgap states is larger in the cases of uncorrelated single resonant impurities, and smaller for the case of fixed radius of resonant clusters. This is expected since the midgap states are states which are quasilocalized around the vacancies or carbon atoms which adsorb hydrogen atoms.\cit {Peres2006,Pereira2006,Pereira2008,YRK10} Therefore, for the same concentration of impurities, the number of midgap states will grow with the \textit{isolation} of the impurities in small clusters. Something similar happens for the case of hydrogen clusters. This can understood by looking at Fig. \ref{quasieigenstates}, where we contour plot the amplitudes of quasieigenstates at the Dirac point or outside the midgap region. The quasieigenstate $\left\vert \Psi \left( \varepsilon \right) \right\rangle $ is a superposition of the degenerated eigenstates with the same eigenenergy \varepsilon $, obtained by the Fourier transformation of the wave functions at different times\cite{YRK10 \begin{equation} \left\vert \Psi \left( \varepsilon \right) \right\rangle =\frac{1}{2\pi \int_{-\infty }^{\infty }dte^{i\varepsilon t}\left\vert \varphi \left( t\right) \right\rangle , \end{equation where $\left\vert \varphi \left( t\right) \right\rangle =e^{-iHt}\left\vert \varphi \right\rangle $ is the time evolution of the initial state \left\vert \varphi \right\rangle $ defined in Eq.~(\ref{Eq:phi0}). Although the quasieigenstate is not exactly the energy eigenstate unless the corresponding eigenstate is not degenerated at energy $\varepsilon $, we can still use the distribution of the amplitude in the real space to verify the quasilocalization of the zero modes in the presence of random impurities, \cite{YRK10} or obtain the dc conductivity at certain energies or carrier densities.\cite{YRK10,WK10,YRK10b} As we can see from Fig. \re {quasieigenstates}, the contour plots of the quasieigenstates of graphene with vacancy and hydrogen clusters are quite similar, i.e., the amplitudes on the carbon atoms which adsorb an hydrogen atom are almost zero, just like if they are vacancies. Furthermore, at the Dirac point (left panels of Fig. 
\ref{quasieigenstates}, corresponding to $E=0$) the quasieigenstates are semi-localized around the edges of the clusters (see the red color in the regions around the cluster). On the other hand, for energies above the impurity band, the states are not localized around the resonant clusters, and the amplitudes of the quasieigenstates are more or less uniformly distributed over the sample except within the clusters, where the amplitudes are zero. Therefore, as we have discussed above, for a given concentration of impurities, the number of carbon atoms which are located around an impurity will be larger in formation I than in formation III. Then, the number of zero modes is also larger in I than in III, leading to DOS and optical conductivity spectra that are closer to those of clean graphene for samples in which the disorder is concentrated in a small number of big clusters (formation III) than for samples in which it is spread over a large number of small clusters (formation I), as can be seen in Figs. \ref{dos_ac_cluster}(e)-(h). Finally, notice that the possibility of new excitations between the impurity band and the carrier bands leads to a modulation of the optical conductivity (as compared to the clean membrane) whose peak structure depends on the renormalized DOS and band dispersion of each case. \section{Optical conductivity of doped graphene} \label{Sec:Doped} \begin{figure}[t] \begin{center} \mbox{ \includegraphics[width=7cm]{ac_slg_miua.eps} } \mbox{ \includegraphics[width=7cm]{ac_slg_miub.eps} } \end{center} \caption{(Color online) Simulation results for the optical conductivity of doped graphene with different kinds of non-correlated disorder. The chemical potential is $\protect\mu =0.1t$ in (a) and $0.2t$ in (b).} \label{ac_slg_miu} \end{figure} So far we have discussed the effects of disorder on the optical response of undoped graphene. In this section, we study the optical conductivity of graphene for finite values of the chemical potential, taking into account the effect of disorder. At zero temperature, a clean sheet of gated (doped) graphene has a zero optical conductivity in the region $\omega <2\mu $, and a universal conductivity of $\sigma(\omega)=\sigma _{0}$, due to optically active inter-band excitations through the Dirac point, for energies above the threshold $\omega>2\mu $.\cite{Ando2002,Stauber2008,Falkovsky2007,Kuzmenko2008,YRK10} In the presence of disorder, the broadening of the bands as well as the appearance of possible midgap states leads to a more complicated selection rule for the optical transitions, making it possible to have excitations in the \textit{forbidden region} $0<\omega<2\mu$, as has been observed experimentally.\cite{LB08} Here we are interested in studying the effect of the different kinds of disorder considered in the previous section on the optical spectrum of doped graphene. \begin{figure}[t] \begin{center} \mbox{ \includegraphics[width=4cm]{ac_x_miu.eps} \includegraphics[width=4cm]{ac_ximp_miu.eps} } \mbox{ \includegraphics[width=4cm]{ac_x_c.eps} \includegraphics[width=4cm]{ac_ximp_c.eps} } \end{center} \caption{(Color online) Optical conductivity of doped graphene with resonant scatterers.
Upper panels: for a fixed concentration of impurities, $\protect\sigma(\protect\omega)$ for different values of $\protect\mu$. Lower panels: for a fixed chemical potential $\protect\mu$, $\protect\sigma(\protect\omega)$ for different concentrations of impurities.} \label{Fig:Doped-resonant} \end{figure} In Fig. \ref{ac_slg_miu} we compare the numerical results for the optical conductivity of doped graphene, considering four different types of non-correlated disorder (random potentials, random hoppings, vacancies and hydrogen adatoms) as well as clean graphene. First, one notices that the effect of doping is not relevant in the high energy part of the spectrum ($\omega\gg \mu$), and $\sigma(\omega)$ follows the same behavior discussed in Secs. \ref{Sec:NCD} and \ref{Sec:CD}, with a peak corresponding to particle-hole inter-band transitions between states of the Van Hove singularities at $\omega\approx 2t$. However, the spectrum changes dramatically in the infra-red region, shown in the insets of Fig. \ref{ac_slg_miu}. Therefore, from now on we will focus our interest on the effect of disorder on this low energy part of the spectrum. One notices that for all kinds of disorder there is a peak in $\sigma(\omega)$ close to $\omega=0$, whereas at slightly higher energies $\sigma(\omega)$ drops to almost zero for the case of non-resonant scatterers (red and green curves), while there is still a finite background contribution when resonant scatterers are considered (light and dark blue curves). This can be understood as follows: in all cases, disorder leads to a broadening of the bands, which permits intra-band transitions between states around the Fermi level. However, we have seen that resonant impurities create an impurity band at the Dirac point, with the corresponding peak in the DOS at $E=0$, whereas non-resonant impurities are not so effective at creating midgap states. Therefore, the background contribution that we find in Fig. \ref{ac_slg_miu}(a)-(b) between $0<\omega<2\mu$ for samples with resonant scatterers is due to transitions between the newly formed impurity band and the conduction band. Taking into account that resonant impurities are believed to be the main source of scattering in graphene,\cite{WK10,MM10,NG10} our results suggest that this kind of impurity could be behind the background contribution to the optical conductivity observed experimentally.\cite{LB08,Mak2008,ChenCF2011,Horng2011} Finally, notice that the peak observed in $\sigma(\omega)$ for the case of resonant impurities at the energy $\omega\approx \mu$ is associated with transitions between the above discussed impurity band and states at the Fermi level. To gain more insight into the effect of disorder on the optical conductivity of doped graphene, in Fig. \ref{Fig:Doped-resonant} we show $\sigma (\omega )$ for different values of $\mu $ at fixed concentration of impurities (upper panels), and $\sigma (\omega )$ for different concentrations of impurities and fixed $\mu $.
In the first case, the main feature is that the conductivity increases as the doping decreases, in qualitative agreement with the experimental results.\cite{LB08} When the chemical potential is fixed and the concentration of impurities is varied (bottom panels), one observes that the conductivity in the region $0<\omega <2\mu $ grows with $n_{x(i)}$, from $\sigma (\omega )=0$ for a clean sample to $\sigma (\omega )\approx 0.4\sigma _{0}$ for the largest concentration of impurities considered ($n_{x(i)}=0.5\%$). If we compare to recent experiments, we notice that a $0.25\%$ concentration of resonant impurities would lead to a background contribution similar to the one reported by Li \textit{et al.} for graphene on SiO$_{2}$,\cite{LB08} whereas only $\sim 0.1\%$ of resonant impurities would be necessary to quantitatively reproduce the results of Chen \textit{et al.} for graphene doped with a high-capacitance ion-gel gate dielectric.\cite{ChenCF2011} Finally, we can see that similar results are obtained for a sample with correlated on-site potential disorder distributed in the form of Gaussian clusters, as shown in Fig. \ref{Fig:Doped-Gaussian}. Therefore, we conclude that there are several kinds of disorder (resonant scatterers and correlated impurities) that can induce a finite conductivity in the infra-red region of the spectrum, as has been observed experimentally. It is the whole set of data on DC and AC transport which can specify the leading type of defects in real graphene. \begin{figure}[t] \begin{center} \mbox{ \includegraphics[width=4cm]{ac_vrgaussianlong_miu.eps} \includegraphics[width=4cm]{ac_vrgaussianshort_miu.eps} } \mbox{ \includegraphics[width=4cm]{ac_vrgaussianlong_c.eps} \includegraphics[width=4cm]{ac_vrgaussianshort_c.eps} } \end{center} \caption{(Color online) Same as Fig.
\protect\ref{Fig:Doped-resonant} but for doped graphene with on-site potential disorder distributed in Gaussian clusters.} \label{Fig:Doped-Gaussian} \end{figure} \section{Conclusion and Discussion} \label{Sec:Conclusions} In conclusion, we have presented a detailed theoretical study of the optical conductivity of graphene with different kinds of disorder, such as resonant impurities, a random distribution of on-site potentials, or a random renormalization of the nearest-neighbor hopping parameter (which can account for the effect of substitutional defects). Furthermore, we have considered the possibility for the impurities to be correlated or non-correlated. For all types of disorder considered, the high energy peak at $\omega\approx 2t$, due to inter-band excitations between states of the Van Hove singularities of the valence and the conduction bands, is always sensitive to disorder, getting smeared out proportionally to the strength of the disorder. On the other hand, the low energy part of the optical spectrum is strongly dependent on the type of disorder, as well as on its strength and concentration. In general, for undoped graphene and in the presence of weak disorder in the on-site potentials, or in the nearest-neighbor hopping between the carbon atoms, the characteristics of the single-particle Dirac cone approximation remain evident in the spectrum, and $\sigma(\omega) \approx\sigma_0$ at energies for which the continuum approximation applies. This is also true when we consider Gaussian hopping parameters. On the other hand, if there are locally long-range Gaussian potentials, the local shifts of the Dirac points lead to electron-hole puddles and to the emergence of states in the vicinity of the Dirac points. As a consequence, we observe an enhancement of the optical conductivity in the infra-red part of the spectrum. Interestingly, in the presence of resonant impurities (vacancies or hydrogen adatoms) there appear midgap states which are quasilocalized around the impurities, the number of which is proportional to the number of carbon atoms which are located around the impurities. Totally randomly distributed (non-correlated) resonant impurities lead to the strongest enhancement of zero modes (seen as a prominent peak in the DOS at zero energy) and also to the largest effect on the optical spectrum. In fact, for a large enough amount of resonant impurities, we obtain a new peak in the spectrum of the optical conductivity at an energy $\omega\approx t$, which is associated with optical transitions between the midgap states and states of the Van Hove singularities. When, for a given concentration of impurities, they merge together forming clusters, instead of staying uncorrelated, the influence of disorder on the electronic properties becomes smaller, especially if these clusters form large islands. Finally, we have considered the effect of doping on the spectrum. Whereas for clean graphene only inter-band processes with an energy larger than $\omega=2\mu$ are optically active, the presence of disorder leads to a low energy peak in $\sigma(\omega)$ (associated with transitions near the Fermi level) plus a possible spectral weight in the region $0<\omega<2\mu$ for types of disorder that can create an impurity band at zero energy. Most saliently, we have found that a small amount of resonant impurities, $\sim 0.1 - 0.2\%$, leads to a background contribution to $\sigma(\omega)$ between $0<\omega<2\mu$ in qualitative and quantitative agreement with recent spectroscopy measurements.
\section{Acknowledgements} The authors thank E. Cappelluti and F. Guinea for useful conversations. The support of the Stichting Fundamenteel Onderzoek der Materie (FOM) and of the Netherlands National Computing Facilities foundation (NCF) is acknowledged. We also acknowledge the EU-India FP-7 collaboration under MONAMI and the grant CONSOLIDER CSD2007-00010. \bibliographystyle{apsrev4-1}
\section{Introduction and Background}\label{sec:introduction} Generalizing the classical Laplace operator, Laplace-type operators have been defined for functions on various geometric structures, including domains in Euclidean space or on Riemannian manifolds, and graphs. From their spectra, one can usually extract important information about the underlying structure. In particular, in a seminal paper \cite{Cheeger70}, Cheeger showed that the first non-vanishing eigenvalue of the Laplace-Beltrami operator of a compact Riemannian manifold estimates how difficult it is to decompose the manifold into two pieces. This result has found many generalizations and extensions, and the discrete analogue, that is, the Cheeger-type inequality for graphs, leads to expander theory and is of fundamental importance in theoretical computer science and in the analysis of empirical networks when represented as graphs. And discrete Cheeger-type inequalities can be generalized, for instance, to weighted or signed graphs, again with diverse applications. But there are also higher-order Laplacians, like the Hodge Laplacian operating on exterior differential forms on a Riemannian manifold, or its discrete analogue, the Eckmann Laplacian on a simplicial complex. In this paper, we look at the simplicial case. It is natural to try to generalize the classical spectral results that are known for graphs to simplicial complexes. In particular, one can ask for a version of the Cheeger inequality for higher dimensional simplicial complexes. But it turns out that estimating the first non-trivial eigenvalue of the Eckmann Laplacian on a simplicial complex is a major long-standing open problem in the field of high dimensional expander theory. (Also the analogue in Riemannian geometry, to find Cheeger inequalities for differential $k$-forms, is still far from being understood and solved.) And such a Cheeger-type estimate for the Eckmann Laplacian on a simplicial complex is what we shall develop in this paper. A major difficulty that we had to overcome consists already in the appropriate formulation of the inequality, and for that, we need to figure out the relevant aspects of the combinatorial structure of a simplicial complex that could support such an inequality. This then needs to be combined with insights coming from a non-linear analogue of the Laplacian, the 1-Laplacian, which involves the $L^1$- instead of the $L^2$-norm behind the ordinary Laplacian. That operator is analytically much more difficult than the ordinary Laplacian, but has the advantage that the Cheeger-type inequality here becomes an equality. In view of the preceding, we need to recall and assemble some background material before we can develop our main results. This material will concern the general setting of Cheeger-type inequalities, simplicial complexes and the Eckmann Laplacian, signed graphs, as well as $p$-Laplacians, the usual Laplacians corresponding to $p=2$, and the technically useful case being $p=1$. \subsection{Simplicial complexes}\label{sec:Sim-complex} Here, we only consider a finite set $V$ of vertices, leaving the infinite case open. We recall some standard terminology. A \emph{simplicial complex} $\Sigma$ on $V$ is a subset of its power set $\mathcal{P}(V)$ that is closed under taking subsets, i.e.\ $\forall \sigma\in\Sigma$, $\forall \sigma'\subset \sigma$, $\sigma'\in\Sigma$. The elements of $\Sigma$ are called \emph{simplices}. It follows from this setting that all the vertices constituting a simplex are different from each other. 
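As a small illustration of this closure condition (a minimal sketch only, with a hypothetical helper name and a toy input, not a construction used later in the paper), one can test whether a finite family of vertex sets forms a simplicial complex:
\begin{verbatim}
from itertools import combinations

def is_simplicial_complex(family):
    """Downward closure: every non-empty subset of a simplex is again a simplex."""
    fam = {frozenset(s) for s in family}
    return all(frozenset(sub) in fam
               for s in fam
               for k in range(1, len(s))
               for sub in combinations(sorted(s), k))

# all non-empty faces of the triangle on the vertex set {0,1,2}:
triangle = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
print(is_simplicial_complex(triangle))                       # True
print(is_simplicial_complex([(0, 1, 2), (0,), (1,), (2,)]))  # False: the edges are missing
\end{verbatim}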
A simplex $\sigma$ with $d+1$ vertices is called a \emph{$d$-simplex}, and we call $d$ its dimension. Its subsimplices are called its \emph{faces}, and its $(d-1)$-dimensional faces are called its \emph{facets}. The dimension of a simplicial complex is the largest dimension among its simplices. A 1-dimensional simplicial complex is a \emph{graph}. We usually assume that $\Sigma$ is \emph{connected}. This means that for any two of its non-empty simplices $\sigma, \sigma'$, there exists a chain of simplices $\sigma_0=\sigma, \sigma_1,\ldots, \sigma_m=\sigma'$ with the property that any two adjacent simplices in this chain have at least one vertex in common. And we usually and naturally assume that all elements of the vertex set $V$ participate in the simplicial complex $\Sigma$, that is, every vertex is contained in at least one simplex. In order to work with orientations, we need a slight modification or amplification of our notation. Here, an \emph{orientation} of a $d$-simplex is an ordering of its vertices up to even permutation. An odd permutation of the vertices changes an oriented $d$-simplex $\sigma_d$ into the oppositely oriented simplex $-\sigma_d$. Thus, from now on, $\sigma_d$ denotes an ordered simplex. Let $\Sigma_d$ be the collection of the $d$-simplices of $\Sigma$. In particular, $\Sigma_0$ is the vertex set $V$. We let $C_d=C_d(\Sigma)$ be the abelian group with coefficients in $\ensuremath{\mathbb{R}}$ generated by the elements of $\Sigma_d$. We also write $C^d=C^d(\Sigma)$ for the linear functions from $C_d$ to $\ensuremath{\mathbb{R}}$ that satisfy \begin{equation} \label{in11} f(-\sigma_d)=-f(\sigma_d), \end{equation} for every oriented $d$-simplex. For $f\in C^{d-1}$, we define its \emph{coboundary} $\delta f:C_{d}\to \ensuremath{\mathbb{R}}$ as \begin{equation} \label{in6} \delta f(v_0,v_1,\ldots ,v_d)=\sum_{i=0}^d (-1)^if(v_0,\ldots, \hat{v_i},\ldots ,v_d), \end{equation} where, as usual, a $\hat{\,}$ over a vertex means that it is omitted. Sometimes, we write \begin{equation}\label{in7} \delta_d: C^d \to C^{d+1}, \end{equation} in order to specify the dimension. The $d$-th \emph{cohomology group}\index{cohomology group} of the simplicial complex $\Sigma$ is \begin{equation} \label{ch5a} H^d(\Sigma):= \ker \delta_d/\mathrm{image\,} \delta_{d-1}. \end{equation} \begin{remark} More generally, we can consider the linear space $C_d(\Sigma,\mathbb{F})$ with coefficients in an abelian group $\mathbb{F}$, generated by the elements of $\Sigma_d$, and let $C^d(\Sigma,\mathbb{F})$ be the linear functions from $C_d(\Sigma,\mathbb{F})$ to $\mathbb{F}$, satisfying \eqref{in11}, and then we can define the cohomology group $H^d(\Sigma,\mathbb{F})$ in the same way. It is usual to take $\mathbb{F}$ to be a commutative ring (e.g.\ the integer ring $\mathbb{Z}$) or even a field (e.g.\ the field $\mathbb{C}$ of the complex numbers, or the finite field $\mathbb{Z}_p{:=}{~} \mathbb{Z}/p\mathbb{Z}$). As an interesting example, we refer to \cite{Steenbergen14} for the Cheeger constants defined on a simplicial complex which use the cohomology over the finite field $\mathbb{Z}_2$. In this paper, we work with real coefficients. \end{remark} To proceed, we choose positive definite inner products $(\cdot,\cdot)_d$ on the $C^d$. We can then define the adjoint $(\delta_{d})^{*}:C^{d+1}\rightarrow ~C^{d}$ of the coboundary operator $\delta_{d}$ by $$ (\delta_{d}f_{1},f_{2})_{d+1}=(f_{1},(\delta_{d})^{*} f_{2})_{d}, $$ for $f_{1}\in C^{d}$ and $f_{2}\in C^{d+1}$.
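For instance, in the two lowest dimensions, \eqref{in6} reads $$ \delta f(v_0,v_1)=f(v_1)-f(v_0), \qquad \delta f(v_0,v_1,v_2)=f(v_1,v_2)-f(v_0,v_2)+f(v_0,v_1), $$ so that on a graph, $\delta$ is the usual difference along an edge; substituting the first formula into the second immediately gives $\delta\delta f=0$, the relation underlying \eqref{ch5a}.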
We can then go back and forth between the $C^d$, as we have the arrows \begin{equation} \label{lap1} C^{d-1} \begin{array}{l} \underrightarrow{\delta_{d-1\textrm{ }}}\\ \overleftarrow{ {\delta_{d-1}}^*} \end{array} C^{d} \begin{array}{l} \underrightarrow{\textrm{ }\textrm{ }\delta_{d\textrm{ }\textrm{ }}}\\ \overleftarrow{ {\textrm{ }\textrm{ }\delta_{d}}^{* \textrm{ }}} \end{array} C^{d+1}. \end{equation} This allows us to define the following three operators on $C^{d}$ (omitting the argument $\Sigma$, i.e., writing for instance $L_d$ instead of $L_d(\Sigma)$, as $\Sigma$ will be mostly kept fixed): \begin{enumerate} \item[(i)] The \emph{$d$-dimensional up Laplace operator} or simply \emph{$d$-up Laplacian} of the simplicial complex $\Sigma$ is $$ L_{d}^{up}{:=}{~} (\delta_{d})^{*}\delta_{d}, $$ \item[(ii)] The \emph{$d$-dimensional down Laplace operator} or \emph{$d$-down Laplacian} is $$ L_{d}^{down}{:=}{~} \delta_{d-1}(\delta_{d-1})^{*}, $$ \item[(iii)] The \emph{$d$-dimensional Laplace operator} or \emph{$d$-Laplacian} is the sum $$ L_{d}{:=}{~} L_{d}^{up}+ L_{d}^{down} =(\delta_{d})^{*}\delta_{d}+\delta_{d-1}(\delta_{d-1})^{*}. $$ \end{enumerate} The operators $L_{d}^{up}$, $L_{d}^{down}$ and $L_{d}$ are self-adjoint and non-negative. Therefore, their eigenvalues are non-negative real numbers. The multiplicities of the eigenvalue $0$ of the Laplacians $L_d(\Sigma)$ contain topological information about $\Sigma$. This is the content of Eckmann's Theorem \cite{Eckmann44}, which is a discrete version of the Hodge theorem. It says that \begin{displaymath} \ker L_{d}(\Sigma)\cong {H}^{d}(\Sigma). \end{displaymath} Thus, the multiplicity of the eigenvalue $0$ of the operator $L_d(\Sigma)$ is equal to the Betti number $b_d$, the dimension of ${H}^{d}(\Sigma)$. As a corollary, \begin{equation} C^d=\mathrm{image\,} \delta_{d-1}\oplus \mathrm{image\,} (\delta_d)^{*} \oplus \ker L_d. \end{equation} We point out that Eckmann's Theorem does not depend on the choice of scalar products on the spaces $C^d$ (although the harmonic cocycles do).\\ While cohomology groups are defined as quotients, that is, as equivalence classes of elements of $C^d$, Eckmann's Theorem provides us with concrete representatives in $C^d$ of those equivalence classes, the so-called \emph{harmonic cocycles}. These are the eigenvectors for the eigenvalue $0$ of the Laplacian. We shall now look at the non-zero part of the spectrum, which will depend on the choice of the scalar products. Since $\delta_{d}\delta_{d-1}=0$ and ${\delta_{d-1}}^{*}{\delta_{d}}^{*}=0$, \begin{align} &\mathrm{image\,} L_{d}^{down}(\Sigma) \subset \ker L_{d}^{up}(\Sigma)\label{hodge},\\ &\mathrm{image\,} L_{d}^{up}(\Sigma) \subset \ker L_{d}^{down}(\Sigma)\label{hodge1}. \end{align} This implies that $\lambda\neq 0$ is an eigenvalue of $L_d(\Sigma)$ if and only if it is an eigenvalue of either $L_{d}^{up}(\Sigma)$ or $L_{d}^{down}(\Sigma)$. Therefore, the non-zero parts of the spectra satisfy \begin{equation} \label{com1} \mathrm{spec}_{\neq 0}(L_{d}(\Sigma))= \mathrm{spec}_{\neq 0}(L_d^{up}(\Sigma))\cup \mathrm{spec}_{\neq 0}(L_d^{down}(\Sigma)). \end{equation} The multiplicity of the eigenvalue $0$ may be different, however. Since $\mathrm{spec}_{\neq 0}(AB)= \mathrm{spec}_{\neq 0}(BA)$, for linear operators $A$ and $ B$ on Hilbert spaces, we conclude \begin{equation} \label{com2} \mathrm{spec}_{\neq 0}(L_{d}^{up}(\Sigma))=\mathrm{spec}_{\neq 0}(L_{d+1}^{down}(\Sigma)).
\end{equation} From (\ref{com1}) and (\ref{com2}) we conclude that each of the three families of multisets $$\{\mathrm{spec}_{\neq 0}(L_{d}(\Sigma))\mid 0\leq d \leq m\}\textrm{,}\; \{\mathrm{spec}_{\neq 0}(L_{d}^{up}(\Sigma))\mid 0\leq d \leq m-1\}\textrm{,} \; \{\mathrm{spec}_{\neq 0}(L_{d}^{down}(\Sigma))\mid 1\leq d \leq m\}$$ determines the other two. Therefore, it suffices to consider only one of them. We shall also make use of the following general result, the Courant-Fischer-Weyl minimax principle. \begin{lemma}\label{rayleigh} Let the linear operator $A:H\to H$ on a finite dimensional vector space be self-adjoint w.r.t. the scalar product $(.,.)$. Then its eigenvalues and eigenvectors are the critical values and the critical points of the Rayleigh quotient, defined for $f\neq 0$, \begin{equation} \label{r1} \frac{(Af,f)}{(f,f)}. \end{equation} \end{lemma} \subsection{Cheeger inequalities} \label{sec:many-cheeger} Cheeger\cite{Cheeger70} showed that the first non-vanishing eigenvalue of the Laplace-Beltrami operator of a compact connected Riemannian manifold can be bounded from below in terms of a constant introduced by him and thence called the Cheeger constant, which quantifies how difficult it is to cut the manifold into two large pieces by a small hypersurface. Buser \cite{Buser} then also showed an upper estimate. Thus, this eigenvalue both controls and is controlled by the Cheeger constant. It was then realized in \cite{Dodziuk84,Alon,Chung} that an analogous estimate holds on graphs, for the first non-vanishing eigenvalue of the graph Laplacian. The analogue of Cheeger's constant had in fact already been introduced by Polya \cite{Polya}, without connecting it to eigenvalues. To formulate the latter inequalities, we consider an undirected and unweighted graph $\Gamma=(V,E)$ with vertex set $V$ and edge set $E$. The degree $\deg v$ of a vertex $v$ is the number of its neighbors, that is, the number of vertices directly connected to it by an edge. We define the volume of $S\subset V$ as $\vol (S)= \sum_{v\in S} \deg v$, and for $V_1,V_2\subset V$, we write $|E(V_1,V_2)|$ for the number of edges with one endpoint in $V_1$ and the other in $V_2$. We then put \begin{equation}\label{che17a} \eta(S):=\frac{|E(S,V\backslash S)|}{\min(\mathrm{vol}(S),\mathrm{vol}(V\backslash S))}, \end{equation} and introduce the \emph{(Polya)-Cheeger constant} \begin{equation}\label{che17b} h= \min_{S}\eta(S). \end{equation} The estimate for the first non-vanishing eigenvalue $\lambda$ of the normalized graph Laplacian then says \begin{equation} \label{che17c} \frac{1}{2} h^2\le \lambda \le 2h. \end{equation} This estimate is important, for instance, in the theory of expander graphs, because a good expander should have a large such $\lambda$. In fact, one can not only bound the smallest non-vanishing eigenvalue of a graph from below, but also the largest one from above. The largest eigenvalue of the normalized Laplacian of a graph is always $\le 2$. Equality is realized for bipartite graphs, and for non-bipartite graphs, the difference $2-\lambda$ can be controlled \cite{Bauer,Trevisan}. Already in the original paper by Cheeger \cite{Cheeger70}, the problem was proposed to derive an estimate for the smallest non-vanishing eigenvalue of the Hodge Laplacian on differential $k$-forms on a closed Riemannian manifold. So far, this problem has not been solved. Its discrete version, a Cheeger-type inequality on simplicial complexes, is also a long-standing open problem in the area of high dimensional expanders.
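The two-sided bound \eqref{che17c} is easy to check numerically on small examples. The following minimal brute-force sketch (the helper name and the test graph are ours, for illustration only; the enumeration over all subsets is of course only feasible for very small graphs) computes $h$ and $\lambda$ for a $6$-cycle, where one finds $h=1/3$ and $\lambda=1/2$:
\begin{verbatim}
import numpy as np
from itertools import combinations

def cheeger_and_lambda(edges, n):
    """Brute-force (Polya-)Cheeger constant and first non-vanishing eigenvalue
    of the normalized Laplacian of a small connected graph."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    deg = A.sum(axis=1)
    Dmh = np.diag(deg ** -0.5)
    lam = np.linalg.eigvalsh(np.eye(n) - Dmh @ A @ Dmh)[1]   # lambda_1
    h = np.inf
    for k in range(1, n):                                     # all proper non-empty S
        for S in combinations(range(n), k):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            vol = min(deg[list(S)].sum(), deg.sum() - deg[list(S)].sum())
            h = min(h, cut / vol)
    return h, lam

h, lam = cheeger_and_lambda([(i, (i + 1) % 6) for i in range(6)], 6)
print(h, lam, h**2 / 2 <= lam <= 2 * h)   # 0.333..., 0.5, True
\end{verbatim}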
In fact, it is believed that the answer to this higher-dimensional question is negative. There are many different definitions of Cheeger constants on simplicial complexes. But none of them satisfies a Cheeger inequality as in the graph setting. In particular, in the field of higher dimensional expanders, the so-called $\mathbb{Z}_2$-expanders are used for constructing the Cheeger constants on a simplicial complex. The essential purpose is to establish some good estimate for the first nontrivial eigenvalue of the discrete Eckmann Laplacian by introducing some suitable Cheeger-type constants on a simplicial complex and proving that such a constant controls, and in turn is controlled by, that eigenvalue, analogously to \eqref{che17c}. Controlling this eigenvalue from below in terms of the Cheeger-type constant is called the Cheeger side, while controlling it from above is called the Buser side. Usually, the latter is easier than the former. Important contributions in this direction come from Dotterrer and Kahle \cite{Dotterer12} and Steenbergen, Klivans and Mukherjee \cite{Steenbergen14}. In \cite{Steenbergen14}, a Cheeger constant via cochain complexes is analyzed, \begin{equation} \label{skm} h^d(\Sigma):= \min\limits_{\phi\in C^d(\Sigma,\mathbb{Z}_2)\setminus\mathrm{Im\,}\delta}\frac{\|\delta\phi\|}{\min\limits_{\psi\in\mathrm{Im\,}\delta}\|\phi+\psi\|} \end{equation} which satisfies $$h^d(\Sigma)=0\Longleftrightarrow \tilde{H}^d(\Sigma,\mathbb{Z}_2)\ne0,\;\; \forall d\ge 0,$$ where $\|\cdot\|$ is the Hamming norm on $C^d(\Sigma,\mathbb{Z}_2)$ (i.e.\ the $l^1$-norm on $\mathbb{Z}_2^{n}$ with $n=\#\Sigma_d$). This is a natural generalization of the classical graph Cheeger constant \eqref{che17b} to higher dimensions on simplicial complexes. Unfortunately, based on the results in \cite{Gundert12} and \cite{Steenbergen14}, the most straightforward attempt at a higher-dimensional Cheeger inequality fails, even for the Buser side -- in higher dimensions, spectral expansion (an eigenvalue gap for the Laplacian) does not imply combinatorial expansion. In fact, according to the examples and theorems in \cite{Dotterer12,Gundert14,Gundert16,Parzanchevski15,Steenbergen14}, none of the Cheeger constants defined using cohomology (or homology) with $\mathbb{Z}_2$-coefficients can satisfy a general two-sided Cheeger inequality as in the graph setting. This is a consequence of the relation $$\lambda(\Delta_d^{up})=0\Leftrightarrow{\lambda(L_d^{up})=0 \Longleftrightarrow \tilde{H}^d(\Sigma,\ensuremath{\mathbb{R}})\ne0},\;\; d\ge 0,$$ together with the fact that for $d\ge 1$, the non-vanishing of $\tilde{H}^d(\Sigma,\ensuremath{\mathbb{R}})$ is not equivalent to that of $\tilde{H}^d(\Sigma,\mathbb{Z}_2)$.\\ We should also mention that in \cite{Parzanchevski15}, another Cheeger-type constant is proposed, and their Theorem 1.2 generalizes the upper Cheeger inequality to higher dimensions. That modified Cheeger number is nonzero only if the simplicial complex has a complete skeleton, and the Cheeger side of the inequality includes an additive constant. We shall adopt a different definition, and therefore do not go into further detail here. In this paper, we shall first derive Theorem \ref{thm:anti-signed-Cheeger}, which contains an estimate for the spectral gap from $d+2$, recalling that for the vertex Laplacian of a graph, i.e., in the case $d=0$, the spectral gap at 2 can be controlled. We then turn to the more difficult estimate for the spectral gap from $0$, namely, the Cheeger-type estimate for the first non-trivial eigenvalue of the Eckmann Laplacian.
Since such an estimate cannot be derived for the Cheeger-type constants introduced earlier, our first contribution here is the introduction of a new Cheeger constant. The key point is that in contrast to the graph case, on higher dimensional simplices, orientations and multiplicities enter into the coboundary relations and therefore implicitly into the eigenvalues. We therefore consider generalized (i.e., with both positive and negative multiplicities) multisets of $d$-simplices. \subsection{Signed graphs}\label{signed} We consider unweighted and undirected graphs $\Gamma$. When $v,v'\in V$, the vertex set of $\Gamma$, are connected by an edge, denoted as $(vv')$, we write $v\sim v'$ and call $v$ and $v'$ neighbors. We shall need an additional structure, a sign function on the edges. A \emph{signed graph} thus is a graph $\Gamma$ equipped with a map $s$ from its edge set to $\pm 1$. We may \emph{switch signs} by taking a vertex and changing the signs of all edges that it is contained in. A signed graph is called \emph{balanced} if by switching some vertices, we can make all signs $=1$, and it is \emph{antibalanced} if we can make them all $=-1$. Signed graphs have many applications in modeling biological networks, social relations, ferromagnetism, and general signed networks \cite{Zaslavsky,Harary53, ArefWilson19,ArefMasonWilson20}. The spectral theory for signed graphs has led to a number of breakthroughs in theoretical computer science and combinatorial geometry, including the solutions of the sensitivity conjecture \cite{Huang19} and of the open problem on equiangular lines \cite{JTYZZ21,JTYZZ}. The Laplacian of the signed graph $(\Gamma,s)$ is \begin{equation} \label{slap} \Delta_s f(v)= f(v)-\frac{1}{\deg v}\sum_{v' \sim v}s(vv')f(v')=\frac{1}{\deg v}\sum_{v' \sim v}(f(v)-s(vv')f(v')). \end{equation} We record some basic results about the spectrum of this operator \cite{Atay20} that can be easily checked. \begin{lemma}\label{ss} The eigenvalues of $\Delta_s$ are real and lie in the interval $[0,2]$. In fact, the smallest eigenvalue is $=0$ if and only if $(\Gamma,s)$ is balanced, and positive otherwise. Likewise, the largest eigenvalue is $=2$ if and only if the graph is antibalanced. \end{lemma} To proceed, we recall the multi-way Cheeger constant $h_k^s$ on a signed graph $(\Gamma,s)$ introduced in \cite{Atay20}. For disjoint $V_1,V_2\subset V$, let $E^+(V_1,V_2)=\{\{u,v\}\in E:u\in V_1,v\in V_2,s(uv)=1\}$ and $E^-(V_1)=\{\{u,v\}\in E:u,v\in V_1,s(uv)=-1\}$. The signed bipartiteness ratio is defined as $$\beta^s(V_1,V_2)=\frac{2\left(|E^-(V_1)|+|E^-(V_2)|+|E^+(V_1,V_2)|\right)+|\partial(V_1\sqcup V_2)|}{\vol(V_1\sqcup V_2)}.$$ The signed Cheeger constant of the signed graph $(\Gamma,s)$ is then defined as $$h^s=\min\limits_{(V_1,V_2)\ne(\emptyset,\emptyset)}\beta^s(V_1,V_2)$$ where the minimum is taken over all possible sub-bipartitions of $V$. $\beta^s$, and hence also $h^s$, is switching invariant. The Cheeger inequality for signed graphs established in \cite{Atay20} says that for a signed graph $(\Gamma,s)$, we have \begin{equation} \label{atay} \frac{\lambda_1(\Delta_s)}{2}\le h^s\le \sqrt{2\lambda_1(\Delta_s)}. \end{equation} The $k$-way signed Cheeger constant is defined as $$h_k^s=\min\limits_{\{(V_{2i-1},V_{2i})\}_{i=1}^k}\max\limits_{1\le i\le k}\beta^s(V_{2i-1},V_{2i})$$ where the minimum is taken over the set of all possible $k$ pairs of disjoint sub-bipartitions $(V_1,V_2)$, $(V_3,V_4)$, $\ldots$, $(V_{2k-1},V_{2k})$. $h^s_k$ is again switching invariant.
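The following minimal sketch (an illustration only; the helper name and the two test graphs are ours) builds $\Delta_s$ from \eqref{slap} for a $4$-cycle with one, respectively two, negative edges and illustrates Lemma \ref{ss}: the cycle with an even number of negative edges is balanced and has smallest eigenvalue $0$, while the other one does not.
\begin{verbatim}
import numpy as np

def signed_laplacian(signed_edges, n):
    """Normalized signed-graph Laplacian Delta_s = I - D^{-1} A_s, cf. eq. (slap)."""
    A = np.zeros((n, n))
    for u, v, s in signed_edges:
        A[u, v] = A[v, u] = s
    deg = np.abs(A).sum(axis=1)
    return np.eye(n) - np.diag(1.0 / deg) @ A

unbalanced = [(0, 1, +1), (1, 2, +1), (2, 3, +1), (3, 0, -1)]  # one negative edge
balanced   = [(0, 1, +1), (1, 2, -1), (2, 3, +1), (3, 0, -1)]  # two negative edges
for name, g in [("unbalanced", unbalanced), ("balanced", balanced)]:
    ev = np.sort(np.linalg.eigvals(signed_laplacian(g, 4)).real)
    print(name, np.round(ev, 3))   # spectra lie in [0,2]; 0 occurs only for the balanced cycle
\end{verbatim}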
The definition of $h_k^s$ allowed Atay and Liu to generalize and put into perspective the higher-order Cheeger inequality for ordinary graphs by Lee, Oveis Gharan, and Trevisan \cite{LGT12}. Their estimate is (\cite{Atay20}) \begin{theorem}\label{thm:signed-Cheeger} There exists an absolute constant $C$ such that for any signed graph $(\Gamma,s)$, and any $k\in\{1,\ldots,n\}$, $$\frac{\lambda_k(\Delta_s)}{2}\le h_k^s\le Ck^3 \sqrt{\lambda_k(\Delta_s)}.$$ \end{theorem} \subsection{A relation between simplicial complexes and signed graphs}\label{sec:simp-sign} The aim of the present paper is to provide new Cheeger-type inequalities for the first nontrivial eigenvalues of $L_d(\Sigma)$, $L_{d}^{up}(\Sigma)$ and $L_{d}^{down}(\Sigma)$. As explained in Section \ref{sec:Sim-complex}, it suffices to consider $L_{d}^{up}(\Sigma)$ for every $d$. And as stated in that section, this operator depends on the choice of scalar products. With an appropriate choice, we obtain the \emph{normalized} Laplacian, which we denote by $\Delta_{d}^{up}$, in order to distinguish it from the general case. It is given by \[(\Delta_{d}^{up} f)([\sigma])= f([\sigma]) + \frac{1}{\deg \sigma}\sum_{\substack{\sigma'\in \Sigma_{d},\ \rho\in\Sigma_{d+1}: \sigma\neq \sigma',\\ \sigma,\sigma'\in\partial \rho}} \sgn([\sigma],\partial [\rho])\sgn([\sigma'],\partial [\rho])f([\sigma']). \] Our results will be obtained for this operator, and they only partially generalize to a general $L_{d}^{up}$. A key step is to express the up-Laplacian of a simplicial complex in terms of the Laplacian of an associated signed graph. The normalized up-Laplacian of a simplicial complex $\Sigma$ can be written as \begin{equation} \label{asc1} (\Delta_{d}^{up} f)([\sigma])= f([\sigma]) - \frac{1}{\deg \sigma}\sum_{\substack{\sigma'\in \Sigma_{d},\ \rho\in\Sigma_{d+1}: \sigma\neq \sigma',\\ \sigma,\sigma'\in\partial \rho}} s([\sigma],[\sigma'])f([\sigma']), \end{equation} where we have put \begin{align} s([\sigma],[\sigma'])&:= -\sgn([\sigma],\partial [\rho])\sgn([\sigma'],\partial [\rho]).\label{asc2} \end{align} Thus, we may express $\Delta_{d}^{up}$ in terms of the Laplacian $\Delta_{(\Gamma_d,s)}$ for the signed graph $(\Gamma_d,s)$ with vertex set consisting of the $d$-simplices of our simplicial complex, and where two different such vertices $\sigma, \sigma'$ are connected by an edge, $\sigma \sim \sigma'$, if there exists a $(d+1)$-simplex $\rho$ in $\Sigma$ with $\sigma,\sigma'\in\partial \rho$. \begin{remark} This construction is very natural and essentially follows from the definition of the (up/down) combinatorial Laplacian matrices of a simplicial complex. A similar idea was already used to define the signed adjacency matrix of a triangulation on a surface \cite{FST08}. \end{remark} The relation between the up-Laplacian and the signed graph Laplacian \eqref{slap} is \begin{equation} \label{asc3} \Delta_{d}^{up}= (d+1)\Delta_{(\Gamma_d,s)} - d\ \mathrm{Id}. \end{equation} By \eqref{asc3}, the eigenvalues $\mu_j$ of $\Delta_{d}^{up}$ and the eigenvalues $\lambda_j$ of $\Delta_{(\Gamma_d,s)}$ are related by \begin{equation} \label{asc5} \mu_j =(d+1)\lambda_j -d. \end{equation} Since the eigenvalues of $\Delta_{(\Gamma_d,s)}$ lie in the interval $[0,2]$, those of $\Delta_{d}^{up}$ lie in the interval $[0,d+2]$. In fact, since $\mu_j \ge 0$ in \eqref{asc5}, the eigenvalues of $\Delta_{(\Gamma_d,s)}$ are $\ge \frac{d}{d+1}$. Equality holds if and only if there is some non-trivial $f$ with $\delta_d f=0$.
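Relations \eqref{asc1}--\eqref{asc5} are also easy to verify numerically. The following minimal sketch (our own illustration; the helper \texttt{incidence} and the example complex, the full $3$-simplex with $d=1$, are assumptions made for this check only) builds $\Delta_{d}^{up}$ and $\Delta_{(\Gamma_d,s)}$ from the matrix of the coboundary map and confirms \eqref{asc3} and \eqref{asc5}:
\begin{verbatim}
import numpy as np
from itertools import combinations

faces = {k: list(combinations(range(4), k + 1)) for k in range(4)}  # full 3-simplex

def incidence(k):
    """Signed incidence matrix: rows = (k-1)-faces, columns = k-faces."""
    B = np.zeros((len(faces[k - 1]), len(faces[k])))
    for j, sig in enumerate(faces[k]):
        for i in range(len(sig)):
            B[faces[k - 1].index(sig[:i] + sig[i + 1:]), j] = (-1) ** i
    return B

d = 1
B = incidence(d + 1)                              # the coboundary delta_d is B^T
deg = np.abs(B).sum(axis=1)                       # upper degree of each d-simplex
Delta_up = np.diag(1.0 / deg) @ (B @ B.T)         # normalized up-Laplacian, eq. (asc1)

A_s = np.diag(deg) - B @ B.T                      # signed adjacency of (Gamma_d, s), eq. (asc2)
Delta_s = np.eye(len(deg)) - np.diag(1.0 / ((d + 1) * deg)) @ A_s   # eq. (slap)

print(np.allclose(Delta_up, (d + 1) * Delta_s - d * np.eye(len(deg))))   # eq. (asc3): True
mu  = np.sort(np.linalg.eigvals(Delta_up).real)
lam = np.sort(np.linalg.eigvals(Delta_s).real)
print(np.allclose(mu, (d + 1) * lam - d))                                 # eq. (asc5): True
\end{verbatim}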
More precisely, the multiplicity of the eigenvalue $\frac{d}{d+1}$ of $\Delta_{(\Gamma_d,s)}$ equals the dimension of the kernel of the coboundary operator $\delta_d$. In particular, for $d>0$, the graph $(\Gamma_d,s)$ is never balanced. The next result then is an easy consequence of Lemma \ref{ss}. \begin{pro}\label{prop:q+2 eigen} The spectrum of $\Delta_{d}^{up}$ contains the eigenvalue $d+2$ if and only if the signed graph $(\Gamma_d,s)$ has an antibalanced component. Moreover, the multiplicity of $d+2$ equals the number of antibalanced components of $(\Gamma_d,s)$. \end{pro} We consider the opposite $(\Gamma_d,-s)$ of the signed graph $(\Gamma_d,s)$, with its Laplacian $\Delta_{(\Gamma_d,-s)}$. Then, the eigenvalues of the three Laplacians $\Delta_{d}^{up}$, $\Delta_{(\Gamma_d,s)}$ and $\Delta_{(\Gamma_d,-s)}$ satisfy the relation: $$ \begin{matrix} \text{Spectrum of }\Delta_{d}^{up} & ~& \text{Spectrum of }\Delta_{(\Gamma_d,s)} & ~& \text{Spectrum of }\Delta_{(\Gamma_d,-s)} \\ 0 &~& \frac{d}{d+1} &~& \frac{d+2}{d+1} \\ \vdots &~& \vdots &~& \vdots \\ \lambda &\Longleftrightarrow& \frac{\lambda+d}{d+1} &\Longleftrightarrow& \frac{d+2-\lambda}{d+1} \\ \vdots &~& \vdots &~& \vdots \\ d+2 &~& 2 &~& 0 \\ \end{matrix} $$ that is, \begin{pro} $\lambda$ is an eigenvalue of $\Delta_{d}^{up}$ if and only if $\frac{\lambda+d}{d+1} $ is an eigenvalue of $\Delta_{(\Gamma_d,s)}$ if and only if $\frac{d+2-\lambda}{d+1} $ is an eigenvalue of $\Delta_{(\Gamma_d,-s)}$. \end{pro} In addition, analogously to Proposition \ref{prop:q+2 eigen}, we have: \begin{pro} The multiplicity of the eigenvalue $0$ of $\Delta_{d}^{up}$ is $\ge d+1$ (when the simplicial complex is pure, the multiplicity of the eigenvalue $0$ of $\Delta_{d}^{up}$ is $d+1$ if and only if the simplicial complex is a simplex of dimension $d+1$). And the multiplicity of the eigenvalue $d+2$ of $\Delta_{d}^{up}$ agrees with the number of balanced components of $(\Gamma_d,-s)$. \end{pro} \subsection{$p$-Laplacians}\label{plap} An essential feature of Cheeger-type inequalities is that they connect an $L^2$-quantity, the smallest nontrivial eigenvalue of the Laplacian, with an $L^1$-quantity, the Cheeger constant. Therefore, it seems natural to interpolate between the exponents 2 and 1. This can be done, as we shall briefly explain now, but the case $p=1$, which is the case of most interest, creates additional difficulties. But in fact, for $p=1$, the inequalities that we are after become equalities, and this conversely is useful for deriving the inequality for $p=2$. Thus, similar to the up and down Laplacians on simplicial complexes (see Section \ref{sec:Sim-complex}), we shall now introduce the $p$-Laplace operators on $C^d(\Sigma)$.
For $p>1$, we put $$\alpha_p:(t_1,t_2,\cdots)\mapsto (|t_1|^{p-2}t_1,|t_2|^{p-2}t_2,\cdots).$$ Since this becomes undetermined for $p=1$ when $t=0$, we need to modify the definition and let it be set valued, that is, $$\alpha_1:(t_1,t_2,\cdots)\mapsto \{(\xi_1,\xi_2,\cdots):\xi_i\in\mathrm{Sgn}(t_i)\},$$ with $$\mathrm{Sgn}(t):=\begin{cases} \{1\} & \text{if } t>0,\\ [-1,1] & \text{if }t=0,\\ \{-1\} & \text{if }t<0, \end{cases} $$ We can then define the $d$-th up $p$-Laplace operator $$L^{up}_{d,p}:=\delta_d^*\alpha_{p}\delta_d,$$ having for $f\in C^d(\Sigma)$, $$L^{up}_{d,p}f=B_{d+1} \alpha_p(B_{d+1}^\top f).$$ Analogously, we can also define the $d$-th down $p$-Laplace operator $L^{down}_{d,p}:=\delta_{d-1}\alpha_p\delta_{d-1}^*$, having for $f\in C^d(\Sigma)$, $L^{down}_{d,p}f=B_d^\top \alpha_p (B_df)$, and the $d$-th $p$-Laplace operator as $ L_{d,p}:=L^{up}_{d,p}+ L^{down}_{d,p}$. The eigenvalue problem of $L^{up}_{d,p}$ is to find real numbers $\lambda$ and nonzero functions $f:\Sigma_d\to\ensuremath{\mathbb{R}}$ satisfying \[L^{up}_{d,p}f=\lambda\alpha_p(f),\text{ for the case of }p>1,\] or \[0\in L^{up}_{d,1}f-\lambda\alpha_1(f),\text{ for the case of }p=1.\] In the case of $d=0$, the above nonlinear eigenproblem is actually the spectral problem for the graph $p$-Laplacian \cite{Amghibech,HeinBuhler2009,HeinBuhler2010,CSZ17}. Of most interest for us will be the min-max eigenvalues, that is, those that can be obtained from Rayleigh quotients as in Lemma \ref{rayleigh}. Thus, we look for \begin{equation} \label{plap1} \lambda_i(L^{up}_{d,p}):=\inf_{\gamma(S)\ge i}\sup_{f\in S}\frac{\|B_{d+1}^\top f\|_p^p}{ \|f\|_{p}^p},\;\;i=1,2,\cdots,n, \end{equation} where $n=\#\Sigma_d$, and \begin{equation*} \gamma(S):=\begin{cases} \min\limits\{k\in\mathbb{Z}^+: \exists\; \text{odd continuous}\; \varphi: S\setminus \{\mathbf{0}\} \to \mathbb{S}^{k-1}\} & \text{if}\; S\setminus \{\mathbf{0}\} \ne\emptyset,\\ 0 & \text{if}\; S\setminus \{\mathbf{0}\} =\emptyset. \end{cases} \end{equation*} denotes the Krasnoselskii genus of a centrally symmetric set $S\subset\ensuremath{\mathbb{R}}^n$. As already indicated, the important case of \eqref{plap1} will be $p=1$. Obviously, analogous constructions are possible for $L^{down}_{d,p}$. \section{Cheeger-type inequalities on $d$-faces of simplicial complexes} As explained in Section \ref{sec:Sim-complex}, we shall work on an abstract simplicial complex $\Sigma$ with vertex set $V=\{1,\cdots,n\}$. For $\sigma=\{i_0,\cdots,i_d\}\in \Sigma$, we use $[\sigma]:=[i_0,\cdots,i_d]$ to indicate the oriented $d$-dimensional simplex which is formed by $\sigma$ when arranging its vertices in the specified order. We then let $[\Sigma_d]=\{[\sigma]:\sigma\in \Sigma_d\}$ be the set of all oriented $d$-simplexes. Analogously to the cochain group $C^d(\Sigma)$, the $d$-th chain group $C_d(\Sigma)$ of $\Sigma$ is a vector space with the basis $[\Sigma_d]$. The boundary map $\partial_d:C_d(\Sigma)\to C_{d-1}(\Sigma)$ is a linear operator defined by \[\partial_d[i_0,\cdots,i_d]=\sum_{j=0}^d(-1)^j[i_0,\cdots,i_{j-1},i_{j+1},\cdots,i_d],\] which can also be represented by the incidence matrix $B_d$ of dimension $|\Sigma_{d-1}|\times |\Sigma_d|$ whose elements belong to $\{-1,0,1\}$. With this notation, the $d$-th cochain group $C^d(\Sigma)$ is the dual of the chain group $C_d(\Sigma)$. 
The simplicial coboundary map $\delta_d:C^d(\Sigma)\to C^{d+1}(\Sigma)$ is a linear operator generated by $(\delta_df)([i_0,\cdots,i_{d+1}])=\sum_{j=0}^{d+1}(-1)^jf([i_0,\cdots,i_{j-1},i_{j+1},\cdots,i_{d+1}])$ for any $f\in C^d(\Sigma)$. It is obvious that $\delta_d=B_{d+1}^\top$, and we can then define the adjoint via $\delta_d^*=B_{d+1}$. We can therefore also use the incidence matrices to express the Laplace operators (see \cite{Horak13a}): \begin{enumerate}[-] \item the $d$-th up Laplace operator $L^{up}_d=\delta_d^*\delta_d=B_{d+1} B_{d+1}^\top$, \item the $d$-th down Laplace operator $L^{down}_d=\delta_{d-1}\delta_{d-1}^*=B_d^\top B_d$, \item the $d$-th Laplace operator $L_d=L^{up}_d+L^{down}_d=\delta_d^*\delta_d+\delta_{d-1}\delta_{d-1}^*=B_d^\top B_d+B_{d+1} B_{d+1}^\top$. \end{enumerate} \subsection{Spectral gap from $d+2$} Here, we shall build upon Sections \ref{sec:Sim-complex} and \ref{sec:simp-sign}. Again, the key is to convert a Cheeger problem for higher dimensional simplices into one for signed graphs. We thus suggest the following Cheeger-type constants. As always, we consider a simplicial complex $\Sigma$, and we denote the collection of its $d$-dimensional simplices by $\Sigma_d$. We shall need a slight modification of the construction in Section \ref{sec:simp-sign}. Hereafter, we will consider the signed graph $(\Gamma_d,s)$ on the vertex set $\Sigma_d$, under the up adjacency relation, and with the sign function \begin{equation}\label{sign} s([\tau],[\tau'])= \sgn([\tau],\partial [\sigma])\sgn([\tau'],\partial [\sigma]) \end{equation} which is the opposite of the sign function defined in \eqref{asc2}. For disjoint $A,A'\subset \Sigma_d$, let $|E^+(A,A')|=\#\{\{\tau,\tau'\}:\tau\in A,\tau'\in A',s([\tau],[\tau'])=1\}$ and $|E^-(A)|=\#\{\{\tau,\tau'\}:\tau,\tau'\in A,s([\tau],[\tau'])=-1\}$. Let $$\beta(A,A')=\frac{2\left(|E^-(A)|+|E^-(A')|+|E^+(A,A')|\right)+|\partial(A\sqcup A')|}{\vol(A\sqcup A')}$$ where $|\partial A|$ is the number of the edges of $(\Gamma_d,s)$ that connect $A$ and $\Sigma_d\setminus A$, $\vol(A)=\sum_{\tau\in A}\deg \tau$ and $\deg \tau=\#\{\sigma\in \Sigma_{d+1}:\tau\subset \sigma\}$. Then we introduce the $k$-th Cheeger constant on $\Sigma_d$: $$h_k(\Sigma_d)=\min\limits_{\text{disjoint } A_1,A_2,\ldots,A_{2k-1},A_{2k}\text{ in }\Sigma_d}\max\limits_{1\le i\le k}\beta(A_{2i-1},A_{2i}).$$ $h_k(\Sigma_d)=0$ if and only if $(\Gamma_d,s)$ has exactly $k$ balanced components. \begin{remark} For $d=0$, the constant $h_k(\Sigma_0)$ reduces to the $k$-way Cheeger constant of a graph \cite{LGT12}. \end{remark} \begin{theorem}\label{thm:anti-signed-Cheeger} For any simplicial complex and every $d\ge 0$, \begin{equation}\label{eq:Cheeger-1-complex} \frac{ h_1(\Sigma_d)^2}{2(d+1)}\le d+2-\lambda_n(\Delta^{up}_d)\le 2h_1(\Sigma_d), \end{equation} where $n=\#\Sigma_d$. Moreover, there exists an absolute constant $C$ such that for any simplicial complex, and for any $k\ge 1$, \begin{equation}\label{eq:Cheeger-k-complex} \frac{ h_k(\Sigma_d)^2}{Ck^6(d+1)}\le d+2-\lambda_{n+1-k}(\Delta^{up}_d)\le 2h_k(\Sigma_d). \end{equation} \end{theorem} \begin{proof} We first show \begin{equation} \label{r2} d+2-\lambda_{n-i+1}(\Delta^{up}_d)=(d+1)\lambda_i(\Delta_{(\Gamma_d,s)}),\;\;\;i=1,\ldots,n.
\end{equation} We have \begin{eqnarray*} &&(d+2)\sum_{\tau\in \Sigma_d}\deg_\tau f(\tau)^2-\sum\limits_{\sigma\in \Sigma_{d+1}}\left(\sum_{\tau\in \Sigma_d,\tau\subset\sigma}\sgn([\tau],\partial[\sigma])f(\tau)\right)^2\\ &=&\sum_{[\tau]\sim[\tau']}\left(f(\tau)-\sgn([\tau],\partial[\sigma])\sgn([\tau'],\partial[\sigma])f(\tau')\right)^2. \end{eqnarray*} Recalling \eqref{sign}, this yields the identity $$d+2-\frac{\sum\limits_{\sigma\in \Sigma_{d+1}}\left(\sum_{\tau\in \Sigma_d,\tau\subset\sigma}\sgn([\tau],\partial[\sigma])f(\tau)\right)^2}{\sum_{\tau\in \Sigma_d}\deg_\tau f(\tau)^2}=(d+1)\frac{\sum_{[\tau]\sim[\tau']}\left(f(\tau)-s(\tau,\tau')f(\tau')\right)^2}{\sum_{\tau\in \Sigma_d}\widetilde{\deg}_\tau f(\tau)^2}$$ for the Rayleigh quotients, where $[\tau]\sim[\tau']$ represents an edge in the underlying signed graph ${(\Gamma_d,s)}$, and $\widetilde{\deg}_\tau=(d+1)\deg_\tau$ is the degree of $\tau$ in ${(\Gamma_d,s)}$. (Whenever $\tau \subset \sigma \in \Sigma_{d+1}$, this connects $\tau$ with $d+1$ other $d$-simplices.) Recalling Lemma \ref{rayleigh}, this shows \eqref{r2}. $\frac{1}{d+1}h_k(\Sigma_d)$ is the $k$-th Cheeger constant of the signed graph $(\Gamma_d,s)$. By the Cheeger inequality \eqref{atay} for signed graphs, we have $\frac{\lambda_1(\Delta_{(\Gamma_d,s)})}{2}\le \frac{h_1(\Sigma_d)}{d+1}\le \sqrt{2\lambda_1(\Delta_{(\Gamma_d,s)})}$. And by Theorem \ref{thm:signed-Cheeger}, there exists an absolute constant $C$ such that for any signed graph and any $k\ge 1$, $\frac{\lambda_k(\Delta_{(\Gamma_d,s)})}{2}\le \frac{h_k(\Sigma_d)}{d+1}\le Ck^3 \sqrt{\lambda_k(\Delta_{(\Gamma_d,s)})}$. In consequence, we obtain $$ \frac{d+2-\lambda_{n}(\Delta^{up}_d)}{2}\le h_1(\Sigma_d)\le \sqrt{2(d+1)(d+2-\lambda_{n}(\Delta^{up}_d))}$$ and $$\frac{d+2-\lambda_{n+1-k}(\Delta^{up}_d)}{2}\le h_k(\Sigma_d)\le Ck^3\sqrt{(d+1)(d+2-\lambda_{n+1-k}(\Delta^{up}_d))} .$$ Thus, we have verified \eqref{eq:Cheeger-1-complex} and \eqref{eq:Cheeger-k-complex}. \end{proof} By Theorem \ref{thm:anti-signed-Cheeger}, $\lambda_n(\Delta^{up}_d)=d+2$ if and only if $h_1(\Sigma_d)=0$, if and only if the associated signed graph ${(\Gamma_d,s)}$ has a balanced component. The latter fact follows from Prop. \ref{prop:q+2 eigen}, remembering that the sign we are currently using is the opposite of the one in that Proposition. \subsection{Spectral gap from 0} \label{sec:gap-0} Theorem \ref{thm:anti-signed-Cheeger} of the previous section contains the estimates for the spectral gap from $d+2$. However, the more important estimate is the one for the spectral gap from $0$, namely, the Cheeger-type estimate for the first non-trivial eigenvalue of the Eckmann Laplacian. For that purpose, we shall now introduce a new Cheeger constant. The key point is that we consider generalized multisets of $d$-simplices, i.e., multisets with both positive and negative multiplicities, as these multiplicities also enter into the coboundary relations and therefore implicitly into the eigenvalues. \begin{itemize} \item[(D1)] A \emph{(generalized) multiset} is a pair $(S,m)$, where $S$ is the underlying set of the multiset, formed from its distinct elements, and $m:S\to\mathbb{Z}$ is an integer-valued function, giving the \emph{multiplicity}. We point out that this multiplicity is allowed to also take negative values, in order to account for orientations.
For convenience, we usually write $S$ instead of $(S,m)$, and simply speak of a multiset, and we use $|S| := \sum_{s\in S}|m(s)|$ to indicate the \emph{size} of the multiset $S$. As the underlying set, we take $\Sigma_d$. We write $S\subset_M \Sigma_d$ when $S$ is a multiset on the underlying set $\Sigma_d$ with multiplicities in $\{-M,\ldots,0,\ldots,M\}$. The coboundary $\partial^*_{d+1}S$ of such a multiset $S$ is defined as the multiset of all $(d+1)$-simplices that have a member of $S$ in their boundary, together with the appropriate multiplicities. Thus, each $\sigma\in \Sigma_{d+1}$ has the multiplicity $\sum_{\tau\in \Sigma_d}m(\tau)\mathrm{sgn}([\tau],\partial[\sigma])$, where $m(\tau)$ is the multiplicity of $\tau$ in $S$. And the support of $\partial^*_{d+1}S$ then consists of all such simplices with non-zero multiplicity. We define $\vol(S):=\sum_{\tau\in \Sigma_d}\deg_\tau |m(\tau)|$ as the volume of the multiset $S$. \begin{defn} For $d\ge 0$, \begin{equation}\label{eq:combinatorial-h(S_d)}h(\Sigma_d)=\min\limits_{\substack{S\subset_M \Sigma_d \\ S\neq \partial^*_{d}(T),\forall T\subset_M \Sigma_{d-1}}}\frac{|\partial^*_{d+1} S|}{\min\limits_{S'\ne\emptyset:\partial^*_{d+1}S'=\partial^*_{d+1}S}\vol(S')} \end{equation} is constant when $M$ is sufficiently large. And for such a large number $M$, we call $h(\Sigma_d)$ the \emph{Cheeger constant} on $\Sigma_d$. \end{defn} \item[(D2)] We shall now give several different definitions of $h(\Sigma_d)$, and then show that these definitions all agree. First, we describe the Cheeger constant as the $\mathbb{Z}$-expander: \begin{defn} Let $$h(\Sigma_d)=\min\limits_{\phi\in C^d(\Sigma,\mathbb{Z})\setminus\mathrm{Im\,}\delta}\frac{\|\delta\phi\|_1}{\min\limits_{\psi\in\mathrm{Im\,}\delta}\|\phi+\psi\|_{1,\deg}}.$$ \end{defn} We point out that in contrast to the definition \eqref{skm} of $h^d(\Sigma)$, here we use $\mathbb{Z}$- instead of $\mathbb{Z}_2$-coefficients, and we use the (weighted) $l^1$-norm, where $\|\phi \|_{1,\deg}:=\sum_{\tau\in \Sigma_d}\deg_\tau |\phi(\tau)|$, instead of the Hamming norm. \item[(D3)] Anticipating Section \ref{sec:p-lap}, and similar to the graph 1-Laplacian, we define the up 1-Laplacian eigenvalue problem on $\Sigma_d$ as the nonlinear eigenvalue problem \begin{equation}\label{eq:1-lap} 0\in\nabla \|B_{d+1}^\top\mathbf{x}\|_1-\lambda\nabla\|\mathbf{x}\|_{1,\deg} \end{equation} where $\nabla $ represents the usual subgradient \cite{Clarke}. We let $\lambda_{I_d}(\Delta^{up}_{d,1})$ be the smallest non-trivial eigenvalue of the up 1-Laplacian, where $I_d:=\dim \mathrm{Image}(B_d^\top)+1=\mathrm{rank}(B_d)+1$. To describe $\lambda_{I_d}(\Delta^{up}_{d,1})$, we first introduce orthogonality w.r.t.\ a given norm. For a norm $\|\cdot\|$ on a real linear space with an inner product $\langle \cdot ,\cdot \rangle$, we say that $\mathbf{x}$ is \emph{$\|\cdot\|$-orthogonal} to $\mathbf{y}$ if there exists $\mathbf{u}\in \nabla\|\mathbf{x}\|$ satisfying $\langle \mathbf{u},\mathbf{y}\rangle=0$. We say $\mathbf{x}$ is $\|\cdot\|$-orthogonal to a non-empty set $Y$ if $\mathbf{x}$ is $\|\cdot\|$-orthogonal to all $\mathbf{y}\in Y$. Clearly, if $\|\cdot\|=\|\cdot\|_2$ is the standard $l^2$-norm, then the $\|\cdot\|_2$-orthogonality reduces to the usual orthogonality w.r.t.\ the standard inner product.
\begin{defn} Let $$h(\Sigma_d)=\lambda_{I_d}(\Delta^{up}_{d,1})= \min\limits_{\mathbf{x}\bot^1 \mathrm{Image}(B_d^\top)}\frac{\|B_{d+1}^\top\mathbf{x}\|_1}{\|\mathbf{x}\|_{1,\deg}}$$ where $\mathbf{x}\bot^1\mathrm{Image}(B_d^\top)$ indicates that $\mathbf{x}$ is $\|\cdot\|_{1,\deg}$-orthogonal to $\mathrm{Image}(B_d^\top)$, i.e.\ $\mathbf{u}\in\mathrm{Image}(B_d^\top)^\bot$ for some $ \mathbf{u}\in\nabla \|\mathbf{x}\|_{1,\deg}$. \end{defn} \item[(D4)] The norm $\|\cdot\|_{1,\deg}$ on $C^{d}(\Sigma)$ induces a quotient norm on $C^{d}(\Sigma)/\mathrm{image}(\delta_{d-1})$, which will be denoted by $\|\cdot\|$ for simplicity. More precisely, for any equivalence class $[\mathbf{x}]\in C^{d}(\Sigma)/\mathrm{image}(\delta_{d-1})$, let $\| [\mathbf{x}]\|=\inf\limits_{\mathbf{x}'\in [\mathbf{x}]}\|\mathbf{x}'\|_{1,\deg}$. Then $$h(\Sigma_d)= \min\limits_{0\ne [\mathbf{x}]\in C^{d}(\Sigma)/\mathrm{image}(\delta_{d-1})}\frac{\|\delta_d\mathbf{x}\|_1}{\| [\mathbf{x}]\|}=\min\limits_{0\ne [\mathbf{x}]\in C^{d}(\Sigma,\mathbb{Z})/\mathrm{image}(\delta_{d-1})}\frac{\|\delta_d\mathbf{x}\|_1}{\| [\mathbf{x}]\|}.$$ \begin{defn} In the case of $\tilde{H}^{d}(\Sigma,\ensuremath{\mathbb{R}})= 0$, let $$h(\Sigma_d)=\min\limits_{\mathbf{y}\in \mathrm{image}(\delta_{d})}\frac{\|\mathbf{y}\|_1}{\|\mathbf{y}\|_{\mathrm{fil}}}=\frac{1}{\max\limits_{\mathbf{y}\in \mathrm{image}(\delta_{d})}\|\mathbf{y}\|_{\mathrm{fil}}/\|\mathbf{y}\|_1}=\frac{1}{\|\delta_d^{-1}\|_{\mathrm{fil}}}$$ where $\|\mathbf{y}\|_{\mathrm{fil}}:= \inf\limits_{\mathbf{x}\in\delta_d^{-1} (\mathbf{y})}\|\mathbf{x}\|_{1,\deg}$ is the filling norm of $\mathbf{y}$, and\\ $\|\delta_d^{-1}\|_{\mathrm{fil}}:=\max\limits_{\mathbf{y}\in \mathrm{image}(\delta_{d})}\|\mathbf{y}\|_{\mathrm{fil}}/\|\mathbf{y}\|_1$ is called the filling profile by Gromov (see Section 2.3 in \cite{Gromov}). \end{defn} \end{itemize} \begin{theorem} The four definitions in (D1)--(D4) are equivalent. \end{theorem} \begin{proof} We start with (D3). Since $\mathrm{Image}(B_d^\top)\subset\mathrm{Ker}(B_{d+1}^\top)$, by Theorem 2.1 in \cite{Jost/Zhang21c}, \begin{align} \lambda_{I_d}(\Delta^{up}_{d,1})&=\inf\limits_{\mathbf{x}\in \ensuremath{\mathbb{R}}^n\setminus \mathrm{Image}(B_d^\top)}\frac{\|B_{d+1}^\top\mathbf{x}\|_1}{\inf\limits_{\mathbf{z}\in \mathrm{Image}(B_d^\top)}\|\mathbf{x}+\mathbf{z}\|_{1,\deg}}\label{eq:B-inf} \\&=\inf\limits_{[\mathbf{x}]\in \ensuremath{\mathbb{R}}^n/ \mathrm{Image}(B_d^\top)}\frac{\|B_{d+1}^\top\mathbf{x}\|_1}{\| [\mathbf{x}]\|}\label{eq:B-quotient} \\&=\inf\limits_{\mathbf{x}\in \ensuremath{\mathbb{R}}^n:\nabla \|\mathbf{x}\|_{1,\deg}\bigcap\mathrm{Image}(B_d^\top)^\bot\ne\emptyset}\frac{\|B_{d+1}^\top\mathbf{x}\|_1}{\|\mathbf{x}\|_{1,\deg}} \label{eq:B-constraint} \end{align} where $n=\#\Sigma_d$, $\| [\mathbf{x}]\|=\inf\limits_{\mathbf{x}'\in [\mathbf{x}]}\|\mathbf{x}'\|_{1,\deg}$ and $[\mathbf{x}]=\{\mathbf{y}\in\ensuremath{\mathbb{R}}^n:\mathbf{y}-\mathbf{x}\in\mathrm{Image}(B_d^\top)\}$. In fact, the definition of the norm $\|\cdot\|$ on the quotient space $\ensuremath{\mathbb{R}}^n/ \mathrm{Image}(B_d^\top)$ implies $\|[\mathbf{x}]\|=\inf\limits_{\mathbf{z}\in \mathrm{Image}(B_d^\top)}\|\mathbf{x}+\mathbf{z}\|_{1,\deg}$.
Moreover, Proposition 2.3 in \cite{Jost/Zhang21c} yields that $\|[\mathbf{x}]\|=\|\mathbf{x}\|_{1,\deg}$ if and only if $\mathbf{x}$ satisfies $\nabla \|\mathbf{x}\|_{1,\deg}\bigcap\mathrm{Image}(B_d^\top)^\bot\ne\emptyset$, that is, the minimization problem \[\inf\limits_{\mathbf{x}'\in\ensuremath{\mathbb{R}}^n:\mathbf{x}'-\mathbf{x}\in \mathrm{Image}(B_d^\top)}\|\mathbf{x}'\|_{1,\deg}\] reaches its minimum in the set $\{\mathbf{x}\in \ensuremath{\mathbb{R}}^n:\nabla \|\mathbf{x}\|_{1,\deg}\bigcap\mathrm{Image}(B_d^\top)^\bot\ne\emptyset\}$. So, the above three quantities \eqref{eq:B-inf}, \eqref{eq:B-quotient} and \eqref{eq:B-constraint} coincide. Using the $l^1$-type orthogonal notation $\bot^1$, since $\mathbf{x}\bot^1\mathrm{Image}(B_d^\top)$ means that $\mathbf{u}\bot \mathrm{Image}(B_d^\top)$ for some $ \mathbf{u}\in\nabla \|\mathbf{x}\|_{1,\deg}$, the constraint $\{\mathbf{x}\in \ensuremath{\mathbb{R}}^n:\nabla \|\mathbf{x}\|_{1,\deg}\bigcap\mathrm{Image}(B_d^\top)^\bot\ne\emptyset\}$ in \eqref{eq:B-constraint} can be reduced to $\{\mathbf{x}\in \ensuremath{\mathbb{R}}^n:\mathbf{x}\bot^1\mathrm{Image}(B_d^\top)\}$ as shown in (D3). Similar to the proof of Proposition 3.7 in \cite{Jost/Zhang21b}, we can apply Theorem 2.4 in \cite{Jost/Zhang21c} to derive that every eigenvalue of the up 1-Laplacian eigenproblem \eqref{eq:1-lap} has an eigenvector in the set of the extreme points associated with the function pair $(\|B_{d+1}^\top\cdot\|_1,\|\cdot\|_{1,\deg})$ since both $\|B_{d+1}^\top\cdot\|_1$ and $\|\cdot\|_{1,\deg}$ are piecewise linear. We shall now describe these extreme points in more detail. The unit $l^1$-sphere $\{\mathbf{x}\in\ensuremath{\mathbb{R}}^n:\|\mathbf{x}\|_{1,\deg}=1\}$ can be represented as a union of finitely many convex polytopes of dimension $(n-1)$ such that both $\|B_{d+1}^\top\cdot\|_1$ and $\|\cdot\|_{1,\deg}$ are linear on each convex polytope. Let $k$ be the smallest possible number of such convex polytopes, and let $\{P_1,\cdots,P_k\}$ be the family of convex polytopes of dimension $(n-1)$, i.e.\ $\{\mathbf{x}\in\ensuremath{\mathbb{R}}^n:\|\mathbf{x}\|_{1,\deg}=1\}=P_1\cup\cdots\cup P_k$ and $\|B_{d+1}^\top\cdot\|_1$ and $\|\cdot\|_{1,\deg}$ are linear when restricted on $P_i$ for any $i$. Denote by $\mathrm{Ext}(\|B_{d+1}^\top\cdot\|_1,\|\cdot\|_{1,\deg})$ the union of the vertices of $P_i$ for all $i$. Clearly, $\mathrm{Ext}(\|B_{d+1}^\top\cdot\|_1,\|\cdot\|_{1,\deg})$ is a finite set, and its elements are called the extreme points determined by the function pair $(\|B_{d+1}^\top\cdot\|_1,\|\cdot\|_{1,\deg})$. Since all the entries of the matrix $B_{d+1}^\top$ and the degrees are rational numbers, by the theory of systems of linear equations, $\mathrm{Ext}(\|B_{d+1}^\top\cdot\|_1,\|\cdot\|_{1,\deg})\subset \mathbb{Q}^n$. Let $M$ be a sufficiently large natural number that is greater than the least common multiple of all the denominators of the components of all points in $\mathrm{Ext}(\|B_{d+1}^\top\cdot\|_1,\|\cdot\|_{1,\deg})$. Then, $\mathrm{Ext}(\|B_{d+1}^\top\cdot\|_1,\|\cdot\|_{1,\deg})\subset \left\{t\mathbf{x}:t\ge0\text{ and }\mathbf{x}\in \{-M,\ldots,-1,0,1,\ldots,M\}^{n}\right\}$, and thus every eigenvalue has an eigenvector in $\left\{t\mathbf{x}:t\ge0\text{ and }\mathbf{x}\in \{-M,\ldots,-1,0,1,\ldots,M\}^{n}\right\}$. Since both $\|B_{d+1}^\top\cdot\|_1$ and $\|\cdot\|_{1,\deg}$ are positively one-homogeneous, we further derive that every eigenvalue has an eigenvector in the set $\{-M,\ldots,-1,0,1,\ldots,M\}^{n}$. 
Moreover, the minimizations \eqref{eq:B-inf} and \eqref{eq:B-constraint} can reach their minima at some points in $\{-M,\ldots,-1,0,1,\ldots,M\}^{n}$, and the minimization problem \eqref{eq:B-quotient} attains its minimum at an equivalence class $[\mathbf{x}]$ for some $\mathbf{x}\in \{-M,\ldots,-1,0,1,\ldots,M\}^{n}$. This means that we can use $\{-M,\ldots,-1,0,1,\ldots,M\}^n$ instead of $\ensuremath{\mathbb{R}}^n$ in the constraints of these three minimization problems \eqref{eq:B-inf}, \eqref{eq:B-quotient} and \eqref{eq:B-constraint}. It follows from $\{-M,\ldots,-1,0,1,\ldots,M\}^n\subset \mathbb{Z}^n\subset \ensuremath{\mathbb{R}}^n$ that one can also replace $\ensuremath{\mathbb{R}}^n$ by $\mathbb{Z}^n$ in the constraints of these three minimization problems \eqref{eq:B-inf}, \eqref{eq:B-quotient} and \eqref{eq:B-constraint}. We now proceed to prove the equivalence of (D1)--(D4). Using $\mathbb{Z}^n$ instead of $\ensuremath{\mathbb{R}}^n$ in \eqref{eq:B-inf}, and equivalently converting the notions $\mathbb{Z}^n$ to $C^d(\Sigma,\mathbb{Z})$, and $\mathrm{Image}(B_d^\top)$ to $\mathrm{Im\,}\delta$, we obtain that (D2) is a reformulation of \eqref{eq:B-inf}. Similarly, (D4) is a reformulation of \eqref{eq:B-quotient}. If $\tilde{H}^{d}(\Sigma,\ensuremath{\mathbb{R}})= 0$, then $\mathrm{image}(\delta_{d-1})=\mathrm{ker}(\delta_{d})$, which implies $C^{d}(\Sigma)/\mathrm{image}(\delta_{d-1})=C^{d}(\Sigma)/\mathrm{ker}(\delta_{d})\cong \mathrm{image}(\delta_{d})$ and $\delta_d^{-1}(\mathbf{y})=[\mathbf{x}]$ for any $\mathbf{y}\in \mathrm{image}(\delta_{d})$. Note that the filling norm $\|\mathbf{y}\|_{\mathrm{fil}}:=\inf_{\mathbf{x}\in\delta_d^{-1} (\mathbf{y})}\|\mathbf{x}\|_{1,\deg}$ coincides with $\|[\mathbf{x}]\|$, and $\|\mathbf{y}\|_1=\|\delta_d\mathbf{x}\|_1=\|B_{d+1}^\top\mathbf{x}\|_1$. So, (D4) and \eqref{eq:B-quotient} indicate the same quantity. Using $\{-M,\ldots,-1,0,1,\ldots,M\}^n$ instead of $\ensuremath{\mathbb{R}}^n$ in \eqref{eq:B-inf}, we can similarly identify every generalized multiset $S\subset_M\Sigma_d$ with a unique $\mathbf{x}\in \{-M,\ldots,-1,0,1,\ldots,M\}^n$ by identifying $x_\tau$ with $m(\tau)$ for any $\tau\in\Sigma_d$, where $m(\tau)$ is the generalized multiplicity of $\tau$ in $S$. Then, for such a pair $S$ and $\mathbf{x}$, $\vol(S)=\|\mathbf{x}\|_{1,\deg}$, and $|\partial^*_{d+1} S|=\|B_{d+1}^\top\mathbf{x}\|_1$. If $\tilde{H}^{d}(\Sigma,\ensuremath{\mathbb{R}})\ne 0$, then $\mathrm{image}(B_{d}^\top)$ is a proper subset of $ \mathrm{ker}(B_{d+1}^\top)$, and thus \eqref{eq:B-inf} is zero; in this case there is an admissible $S$ in \eqref{eq:combinatorial-h(S_d)} (i.e.\ $S\neq \partial^*_{d}(T)$ for all $T\subset_M\Sigma_{d-1}$) with $\partial^*_{d+1}S=\emptyset$, which means \eqref{eq:combinatorial-h(S_d)} also equals zero. If $\tilde{H}^{d}(\Sigma,\ensuremath{\mathbb{R}})= 0$, then $\mathrm{image}(B_{d}^\top)= \mathrm{ker}(B_{d+1}^\top)$, and thus for such a pair $S$ and $\mathbf{x}$ with $\mathbf{x}\not\in\mathrm{ker}(B_{d+1}^\top)$, \[\inf\limits_{\mathbf{z}\in \mathrm{image}(B_d^\top)}\|\mathbf{x}+\mathbf{z}\|_{1,\deg}=\inf\limits_{\mathbf{x}'\in\ensuremath{\mathbb{R}}^n:\mathbf{x}'-\mathbf{x}\in \mathrm{ker}(B_{d+1}^\top)}\|\mathbf{x}'\|_{1,\deg}=\min\limits_{S'\ne\emptyset:\partial^*_{d+1}S'=\partial^*_{d+1}S}\vol(S').\] Therefore, \eqref{eq:combinatorial-h(S_d)} and \eqref{eq:B-inf} actually represent the same quantity, which has been denoted by $h(\Sigma_d)$. The whole proof is then completed.
\end{proof} It is very useful that the four definitions in (D1)--(D4) represent the same Cheeger constant $h(\Sigma_d)$ from different viewpoints. (D1) provides a combinatorial explanation of the Cheeger constant $h(\Sigma_d)$ using the language of multi-sets in combinatorics, which means that our Cheeger constant is actually a combinatorial quantity. (D2) says that our Cheeger constant is indeed a $\mathbb{Z}$-expander, and it is clear that $$h(\Sigma_d)=0 \Longleftrightarrow \tilde{H}^d(\Sigma,\ensuremath{\mathbb{R}})\ne0,\;\; \forall d\ge 0.$$ As we have discussed, the Cheeger constant defined as an $\mathbb{Z}_2$-expander violates the Cheeger inequality on simplicial complexes. However, with a $\mathbb{Z}$-expander it is possible to get a Cheeger inequality. (D3) says that our Cheeger constant coincides with the smallest non-trivial 1-Laplacian eigenvalue, which generalizes the equality in both graph and domain settings. (D4) reveals the non-obvious fact that our Cheeger constant has a deep relation with Gromov's filling profile. This is an equivalent reformulation of \eqref{eq:combinatorial-h(S_d)} using the language of norms on cochain groups, which helps us to further understand the formula \eqref{eq:combinatorial-h(S_d)}. In addition, for sufficiently large numbers $M\in\mathbb{Z}_+$, $$ h(\Sigma_d)\xlongequal[]{\text{if } \tilde{H}^{d}(\Sigma,\ensuremath{\mathbb{R}})= 0}\min\limits_{\substack{S\subset_M \Sigma_d \\ \partial^*_{d+1}S\ne \emptyset}}\frac{|\partial^*_{d+1} S|}{\min\limits_{S':\partial^*_{d+1}S'=\partial^*_{d+1}S}\vol(S')}>0. $$ For the case of $d=0$, we can take $M=1$, and then $h(\Sigma_0)$ reduces to the usual Cheeger constant on graphs. The following preliminary result indicates that such a constant $h(\Sigma_d)$ is a good candidate for Cheeger-type inequalities. \begin{pro}\label{pro:rough-Cheeger} Suppose that $\deg_\tau>0$, $\forall \tau\in \Sigma_d$. Then, $$\frac{h^2(\Sigma_d)}{|\Sigma_{d+1}|}\le \lambda_{I_d}(\Delta_d^{up})\le \vol(\Sigma_d)h(\Sigma_d).$$ \end{pro} \begin{proof} For simplicity, we write $h=h(\Sigma_d)$ and take $\lambda=\lambda_{I_d}(\Delta_d^{up})$. We shall prove $\frac{\min\limits_{\tau\in \Sigma_{d}}\deg_\tau}{\#\Sigma_{d+1}}h^2\le \lambda\le \vol(\Sigma_d)h^2$. Let $k=\mathrm{rank}(B_{d})$. Then $\lambda$ and $h$ are the $(k+1)$-th min-max eigenvalues of the $d$-th up Laplacian and the $d$-th up 1-Laplacian, respectively. We only need to prove that, for any $k\ge 1$, $$\sqrt{\frac{1}{\sum\limits_{\tau\in \Sigma_d}\deg_\tau} \lambda_k}\le h_k\le \sqrt{\frac{\#\Sigma_{d+1}}{\min\limits_{\tau\in \Sigma_{d}}\deg_\tau} \lambda_k}.$$ In fact, it is easy to see that $$\min\limits_\tau\deg_\tau \le \frac{\|\mathbf{x}\|_{1,\deg}^2}{\|\mathbf{x}\|_{2,\deg}^2}\le \sum_{\tau\in \Sigma_d}\deg_\tau \; \text{ and }\; 1\le \frac{\|B_{d+1}^\top\mathbf{x}\|_1^2}{\|B_{d+1}^\top\mathbf{x}\|_2^2}\le \#\Sigma_{d+1}.$$ Hence $$\frac{1}{\sum_{\tau\in \Sigma_d}\deg_\tau}\frac{\|B_{d+1}^\top\mathbf{x}\|_2^2}{\|\mathbf{x}\|_{2,\deg}^2} \le\frac{\|B_{d+1}^\top\mathbf{x}\|_1^2}{\|\mathbf{x}\|_{1,\deg}^2}\le \frac{\#\Sigma_{d+1}}{\min\limits_\tau\deg_\tau}\frac{\|B_{d+1}^\top\mathbf{x}\|_2^2}{\|\mathbf{x}\|_{2,\deg}^2}.$$ The proof of $\frac{h^2(\Sigma_d)}{\#\Sigma_{d+1}}\le \lambda_{I_d}(\Delta_d^{up})\le \vol(\Sigma_d)h(\Sigma_d)$ is then completed by noting that $h\le 1\le \deg_\tau$, $\forall \tau\in \Sigma_d$. 
\end{proof} \begin{remark}\label{remark:down-Cheeger} We can also define the down Cheeger constant $$h_{down}(\Sigma_d):=\min\limits_{\mathbf{x}\bot^1 \mathrm{Image}(B_{d+1})}\frac{\|B_{d}\mathbf{x}\|_1}{\|\mathbf{x}\|_{1,\deg}}=\lambda_{I_{d+1}}(\Delta^{down}_{d,1})$$ which possesses a combinatorial reformulation that is similar to \eqref{eq:combinatorial-h(S_d)}, where $I_{d+1}:=\dim \mathrm{Image}(B_{d+1})+1=\mathrm{rank}(B_{d+1})+1$. Consider a $d$-dimensional combinatorial manifold $\Sigma$, that is, a $d$-dimensional topological manifold possessing a simplicial complex structure. As a manifold, we assume that $\Sigma$ is connected and has no boundary. Then, $B_{d+1}$ is a $|\Sigma_d|\times 1$ matrix of rank $1$, and $I_{d+1}=\dim\mathrm{Image}(B_{d+1})+1=\mathrm{rank}(B_{d+1})+1=1+1=2$. Therefore, in particular, $\lambda_{I_{d+1}}=\lambda_2$. Moreover, the down adjacency relation induces a graph on $\Sigma_d$, and we have the Cheeger inequality: $$\frac{h^2_{down}(\Sigma_d)}{2}\le \lambda_{2}(\Delta_d^{down}) \le 2h_{down}(\Sigma_d).$$ In fact, Theorem 2.7 in \cite{Steenbergen14} closely resembles the above inequality, and the assumption made there for the lower bound, namely that every $(d-1)$-dimensional simplex is incident to at most two $d$-simplices, is satisfied for a combinatorial manifold. \end{remark} In the sequel, $M$ will be used to denote a manifold. \begin{defn} Let $M$ be a $d$-dimensional orientable compact closed Riemannian manifold and let $c>1$. A triangulation $T$ of $M$ is \emph{$c$-uniform} if for any two $d$-simplices $\triangle$ and $\triangle'$ in the triangulation $T$, $$\frac1c <\frac{\mathrm{diam}(\triangle)}{\mathrm{diam}(\triangle')}<c\;\;\text{ and }\;\;\frac1c <\frac{\mathrm{diam}(\triangle)}{\mathrm{vol}(\triangle)^{\frac1d}}<c .$$ A triangulation $T$ of $M$ is \emph{uniform} if there exist $N>1$ and $c>1$ such that either the number of vertices of $T$ is smaller than $N$, or $T$ is $c$-uniform. The constants $N$ and $c$ are called the \emph{uniform parameters} of the triangulation. \end{defn} \begin{theorem}\label{thm:Cheeger-manifold-complex} Let $M$ be an orientable, compact, closed Riemannian manifold of dimension $(d+1)$. Let $\Sigma$ be a simplicial complex which is combinatorially equivalent to a uniform triangulation of $M$. Then, there is a Cheeger inequality $$ \frac{h^2(\Sigma_d)}{C}\le \lambda_{I_{d}}(\Delta_{d}^{up}) \le C\cdot h(\Sigma_d), $$ where $C$ is a uniform constant which is independent of the choice of $\Sigma$. In addition, $h(\Sigma_d)>0$ if and only if $H_1(\Sigma)=0$ (or equivalently, $H_1(M)=0$). \end{theorem} \begin{proof} By Proposition \ref{pro:rough-Cheeger}, $\lambda_{I_{d}}(\Delta_{d}^{up})=0$ if and only if $h(\Sigma_d)=0$. So, it suffices to assume that $h(\Sigma_d)>0$, i.e., $\tilde{H}^d(M)=\tilde{H}^d(\Sigma)=0$. Since $M$ and $\Sigma$ are of dimension $(d+1)$, Poincar\'e duality implies that $\tilde{H}_1(M)=\tilde{H}^d(M)=0$. We may assume without loss of generality that $M$ is simply connected, that the triangulation is $c$-uniform for some $c>1$, and that $\Sigma_d$ has $n$ elements, where $n$ is a sufficiently large integer. For any $\epsilon>0$, there exists $N>0$ such that any $c$-uniform triangulation with at least $N$ facets satisfies $\frac{1}{3c^2}\epsilon<\mathrm{diam}(\triangle)<\epsilon$, $\forall \triangle$. Here, we also regard the uniform triangulation as a uniform $\epsilon$-net.
\begin{enumerate} \item[Claim 1] For the down Cheeger constant, we have $$ \frac{d+2}{4}h^2_{down}(\Sigma_{d+1})\le \lambda_{I_d}(\Delta_d^{up}) \le (d+2)h_{down}(\Sigma_{d+1}). $$ Proof: This is derived by the Cheeger inequality $$ \frac{h^2_{down}(\Sigma_{d+1})}{2}\le \lambda_{2}(\Delta_{d+1}^{down}) \le 2h_{down}(\Sigma_{d+1}) $$ proposed in Remark \ref{remark:down-Cheeger}, and the duality property $\lambda_{I_d}(\Delta_d^{up})=\frac{d+2}{2}\lambda_{2}(\Delta_{d+1}^{down})$. \item[Claim 2] The Cheeger constant $h(\Sigma_d)$ and the down Cheeger constant $h_{down}(\Sigma_{d+1})$ satisfy $h(\Sigma_d)\sim h_{down}(\Sigma_{d+1})$, i.e., there exists a uniform constant $C>1$ such that \[\frac1C h_{down}(\Sigma_{d+1})\le h(\Sigma_d)\le C\, h_{down}(\Sigma_{d+1}).\] The proof is divided into the following two claims. \begin{enumerate} \item[Claim 2.1] $\frac1\epsilon h_{down}(\Sigma_{d+1})\sim h(M)$ Proof: Let $G$ be the graph with $n{:=}{~} \#\Sigma_{d+1}$ vertices located in the barycenters of all $(d+1)$-simplexes, such that two vertices form an edge in $G$ if and only if these two $d$-simplexes are down adjacent. We may call $G$ the underlying graph of the triangulation. Note that $ h_{down}(\Sigma_{d+1})$ also indicates the Cheeger constant of the unweighted underlying graph $G$. An approximation approach developed in \cite{TMT20,TrillosSlepcev-16} implies that the Cheeger constant of a uniform triangulation should approximate the Cheeger constant of the manifold when we equip the edges of the underlying graph of the triangulation with appropriate weights (related to $\epsilon$). In fact, since $G$ is the underlying graph of the triangulation, we may assume that $G$ is embedded in the manifold $M$, and the distribution of the vertices of $G$ is uniform\footnote{The vertices of $G$ are well-distributed on $M$.}. Then, according to the approximation theorems in \cite{TMT20,TrillosSlepcev-16}, by adding appropriate weights (related to $\epsilon$)\footnote{The weight of an edge $\{u,v\}$ is determined by the distance of $u$ and $v$ in $M$, which is about $O(\epsilon)$.} on $G$, the Cheeger constant of $G$ (with appropriate edge weights) would approximate $h(M)$ (i.e., the difference of $h(M)$ and the Cheeger constant of the weighted graph $G$ is bounded by $h(M)/2$ whenever $\epsilon$ is sufficiently small). We can then adopt the same approximation approach as in \cite{TMT20,TrillosSlepcev-16} (more precisely, a slight modification of the approximation theorem in \cite{TMT20,TrillosSlepcev-16,TrillosSlepcev-15}) to derive that $ \frac1\epsilon h_{down}(\Sigma_{d+1})\sim h(M)$. \item[Claim 2.2] $\frac1\epsilon h(\Sigma_d)\sim h(M)$ whenever $H_1(M)=0$. Proof: It is well-known that $H_1(M)=0$ if and only if $H^d(M)=0$ if and only if $\mathrm{Ker}(\delta_d)=\mathrm{Im}(\delta_{d-1})$, since $M$ is a compact closed manifold of dimension $(d+1)$. 
Thus, $$h(\Sigma_d)= \min\limits_{x\not\in \mathrm{Ker}(\delta_{d})}\frac{\sum\limits_{\sigma\in \Sigma_{d+1}}\left|\sum\limits_{\tau\in \Sigma_d}\mathrm{sgn}([\tau],\partial[\sigma])x_\tau\right|}{\min\limits_{z\in \mathrm{Ker}(\delta_d)}\sum\limits_{\tau\in \Sigma_d}2|x_\tau+z_\tau|}.$$ By the duality theorem (see Lemma 2.5 and Theorem 2.1 in \cite{Jost/Zhang21c}, or the main theorem in \cite{TZ22+}), we can further obtain $$h(\Sigma_d)=\min\limits_{y\in\ensuremath{\mathbb{R}}^{\Sigma_{d+1}}\text{ non-constant}}\frac{\max\limits_{\sigma \mathop{\sim}\limits^{\text{down}} \sigma'}\frac12|y_\sigma-y_{\sigma'}|}{\min\limits_{t\in\ensuremath{\mathbb{R}}}\max\limits_{\sigma\in \Sigma_{d+1}}|y_\sigma+t|}$$ where $\sigma \mathop{\sim}\limits^{\text{down}} \sigma'$ means that $\sigma$ and $\sigma'$ are down adjacent, i.e., they share a common facet. By elementary techniques, it is then straightforward to check that the optimization on the right hand side coincides with $$\min\limits_{\min\limits_{\sigma} y_\sigma+\max\limits_\sigma y_\sigma=0}\frac{\max\limits_{\sigma \mathop{\sim}\limits^{\text{down}} \sigma'}|y_\sigma-y_{\sigma'}|}{2\max\limits_{\sigma}|y_\sigma|}=\frac{1}{\mathrm{diam}(G)}$$ where $\mathrm{diam}(G)$ denotes the combinatorial diameter of $G$. We remark here that we have in fact rewritten $h(\Sigma_d)$ as the smallest non-trivial eigenvalue of the $\infty$-Laplacian, which agrees with $1/\mathrm{diam}(G)$. This argument is similar to a theorem in \cite{Juutinen99}. Finally, since the triangulation is $c$-uniform, it is easy to see that $$\frac1\epsilon h(\Sigma_d)= \frac{1}{\epsilon\cdot \mathrm{diam}(G)}\sim \frac{1}{ \mathrm{diam}(M)} .$$ Hence, $\frac1\epsilon h(\Sigma_d)\sim h(M)$. \end{enumerate} \end{enumerate} The proof is then completed by combining all the statements above. \end{proof} \begin{remark} \begin{itemize} \item The constant $C$ in Theorem \ref{thm:Cheeger-manifold-complex} depends on the uniform parameters of the triangulation, and on the ambient manifold. We hope that it is possible to find a new approach that yields a uniform constant depending only on the dimension $d$. \item Under the same condition as in Theorem \ref{thm:Cheeger-manifold-complex}, we further have $\frac{\lambda_{k_{d}}(\Delta_{d,1}^{up})^2}{C}\le \lambda_{k_{d}}(\Delta_{d}^{up}) \le C\lambda_{k_{d}}(\Delta_{d,1}^{up})$, where $k_d:=\dim \mathrm{Ker}(B_{d+1}^\top)+1$. This inequality coincides with the Cheeger inequality in Theorem \ref{thm:Cheeger-manifold-complex} if and only if $H_1(M)=0$. \item A modification of the proof shows that $\frac{1}{\mathrm{diam}(G)}\sim \lambda_2(G)$ whenever $G$ can be uniformly embedded into such a typical manifold, where $\lambda_2(G)$ is the second smallest eigenvalue of the normalized Laplacian on $G$. \item Inspired by the approximation theory for Laplacians on triangulations of manifolds proposed by Dodziuk \cite{Dodziuk76} and Dodziuk-Patodi \cite{Dodziuk76a}, we hope that it is possible to develop an approximation theory for our Cheeger constants on triangulations of manifolds. \end{itemize} \end{remark} \section{Cheeger-type inequalities for $p$-Laplacians on simplicial complexes} \label{sec:p-lap} In this section, we study the nonlinear eigenvalue problems for the $p$-Laplacians introduced in Section \ref{plap} on simplicial complexes. Importantly, this will provide a perspective that unifies several Cheeger-type inequalities.
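As an orientation for this section, we record a short worked bound (this elementary computation is ours, added for the reader's convenience; it reappears, in expanded form, inside the proof of Theorem~\ref{thm:anti-signed-Cheeger-p} below). In the notation of the normalised $p$-Rayleigh quotient introduced below, each $\sigma\in\Sigma_{d+1}$ has exactly $d+2$ facets, so H\"older's inequality gives, for every cochain $f$ and every $p\ge 1$,
\[
\Big|\sum_{\tau\in \Sigma_d,\,\tau\subset\sigma}\mathrm{sgn}([\tau],\partial[\sigma])f(\tau)\Big|^p\;\le\;(d+2)^{p-1}\sum_{\tau\in \Sigma_d,\,\tau\subset\sigma}|f(\tau)|^p.
\]
Summing over $\sigma\in\Sigma_{d+1}$ and using $\sum_{\sigma\in\Sigma_{d+1}}\sum_{\tau\subset\sigma}|f(\tau)|^p=\sum_{\tau\in\Sigma_d}\deg\tau\cdot|f(\tau)|^p$, we get $\|B_{d+1}^\top f\|_p^p\le (d+2)^{p-1}\|f\|_{p,\deg}^p$, i.e.\ $\lambda_n(\Delta^{up}_{d,p})\le(d+2)^{p-1}$. The results below quantify the gap $(d+2)^{p-1}-\lambda_n(\Delta^{up}_{d,p})$ in terms of Cheeger constants.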
According to the main theorem in \cite{TZ22+}, the spectral duality that we had used for the $2$-Laplacian now becomes \begin{pro}\label{pro:nonzero-p-Lap} The nonzero eigenvalues of the up $p$-Laplacians are in one-to-one correspondence with those of the down $p^*$-Laplacians: $$\{\lambda^{\frac 1p}:\lambda\text{ is a nonzero eigenvalue of }L_{d,p}^{up}\}=\{\lambda^{\frac {1}{p^*}}:\lambda\text{ is a nonzero eigenvalue of }L_{d+1,p^*}^{down}\}.$$ Moreover, $\lambda^{\frac 1p}_{n-i}(L_{d,p}^{up})=\lambda^{\frac {1}{p^*}}_{m-i}(L_{d+1,p^*}^{down})$ for any $i=0,1,\cdots,\min\{n,m\}-1$, where $n=|\Sigma_d|$ and $m=|\Sigma_{d+1}|$. \end{pro} The case $p=2$ of Proposition \ref{pro:nonzero-p-Lap} is of course the well-known relation between up and down Laplacians that we had already noted in Section \ref{sec:Sim-complex}, that is, the nonzero eigenvalues of $L^{up}_d$ and $L^{down}_{d+1}$ coincide. So, we can concentrate on the up $p$-Laplacian for investigating the spectra of simplicial complexes. To get more concise results, we will work with the {\sl normalized up $p$-Laplace operator} $\Delta^{up}_{d,p}$, whose eigenvalues are determined by the critical values of the $p$-Rayleigh quotient \[f\mapsto\frac{\|B_{d+1}^\top f\|_p^p}{ \|f\|_{p,\deg}^p}\] where $\|f\|_{p,\deg}^p=\sum_{\tau\in \Sigma_d}\deg \tau\cdot |f(\tau)|^p$. \begin{theorem}\label{thm:anti-signed-Cheeger-p} For any simplicial complex and every $d\ge 0$, for any $p\in (1,2]$, there exist uniform constants $C_{p,d}\ge c_{p,d}>0$ such that \begin{equation}\label{eq:Cheeger-1-complex-p} c_{p,d}h_1(\Sigma_d)^p\le (d+2)^{p-1}-\lambda_n(\Delta^{up}_{d,p})\le C_{p,d}h_1(\Sigma_d), \end{equation} where $n=|\Sigma_d|$. \end{theorem} \begin{proof} We need the following key claim. \textbf{Claim}. For any $1<p\le 2$, and for any integer $k\ge 2$, there exist $M_{p,k}\ge m_{p,k}>0$ such that for any $\mathbf{x}\in\ensuremath{\mathbb{R}}^n$, \[m_{p,k}{\sum_{1\le i<j\le k}|x_i-x_j|^p}\le {k^{p-1}\sum_{i=1}^k|x_i|^p-|\sum_{i=1}^kx_i|^p}\le M_{p,k}{\sum_{1\le i<j\le k}|x_i-x_j|^p}\] \textbf{Proof}. We only need to prove that $$m_{p,k}'{:=}{~} \inf\limits_{x\text{ non-constant}}\frac{k^{p-1}\sum_{i=1}^k|x_i|^p-|\sum_{i=1}^kx_i|^p}{\sum_{i,j=1}^k|x_i-x_j|^p}>0$$ for $p>1$, and $$M_{p,k}'{:=}{~} \sup\limits_{x\text{ non-constant}}\frac{k^{p-1}\sum_{i=1}^k|x_i|^p-|\sum_{i=1}^kx_i|^p}{\sum_{i,j=1}^k|x_i-x_j|^p}<+\infty$$ for $1<p\le 2$. It is clear that $\sum_{i,j=1}^k|x_i-x_j|^p>0$ if and only if $\mathbf{x}$ is non-constant. By H\"older's inequality, $k^{p-1}\sum_{i=1}^k|x_i|^p\ge|\sum_{i=1}^kx_i|^p$ and the equality holds if and only if $\mathbf{x}$ is constant. Therefore, $\sum_{i,j=1}^k|x_i-x_j|^p>0$ if and only if $k^{p-1}\sum_{i=1}^k|x_i|^p-|\sum_{i=1}^kx_i|^p>0$. For any vector $\mathbf{x}$ satisfying $\max\limits_i x_i-\min\limits_i x_i=\delta>0$, $\delta^p\le \sum_{i,j=1}^k|x_i-x_j|^p\le k^2 \delta^p$. Let $g(\mathbf{x},t,p)=k^{p-1}\sum_{i=1}^k|x_i+t|^p-|\sum_{i=1}^kx_i+kt|^p$, for $\mathbf{x}\bot\mathbf{1}$ with $\mathbf{x}\ne \mathbf{0}$, $t\in\ensuremath{\mathbb{R}}$ and $p\ge1$. Since $\mathbf{x}$ is non-constant and $p>1$, by H\"older's inequality, we have $g(\mathbf{x},t,p)>0$. Note that $\partial_t g(\mathbf{x},t,p)=pk^{p-1}\sum_{i=1}^k|x_i+t|^{p-1}\mathrm{sign}(x_i+t)-pk|\sum_{i=1}^kx_i+kt|^{p-1}\mathrm{sign}(\sum_{i=1}^kx_i+kt)$. If $t>k\delta$, by H\"older's inequality, $\partial_t g(\mathbf{x},t,p)>0$. Similarly, if $t<-k\delta$, $\partial_t g(\mathbf{x},t,p)<0$. 
Therefore, $t\mapsto g(\mathbf{x},t,p)$ reaches its minimum on some $t_p\in[-k\delta,k\delta]$. Therefore, $\min\limits_{t\in\ensuremath{\mathbb{R}}}g(\mathbf{x},t,p)=\min\limits_{-k\delta\le t\le k\delta}g(\mathbf{x},t,p)$ is a continuous function of $\mathbf{x}\in\{\mathbf{x}\in\ensuremath{\mathbb{R}}^n:\sum_{i=1}^k x_i=0,\max\limits_i x_i-\min\limits_i x_i=\delta\}$. Hence, $\min\limits_{\mathbf{x}\bot \mathbf{1},\max\limits_i x_i-\min\limits_i x_i=\delta}\min\limits_{t\in\ensuremath{\mathbb{R}}}g(\mathbf{x},t,p)>0$. Thus, \begin{align*} \inf\limits_{x\text{ non-constant}}\frac{k^{p-1}\sum_{i=1}^k|x_i|^p-|\sum_{i=1}^kx_i|^p}{\sum_{i,j=1}^k|x_i-x_j|^p}&=\inf\limits_{\mathbf{x}\bot\mathbf{1},\max\limits_i x_i-\min\limits_i x_i=\delta}\min\limits_{t\in\ensuremath{\mathbb{R}}}\frac{g(\mathbf{x},t,p)}{\sum_{i,j=1}^k|x_i-x_j|^p} \\&\ge \frac{1}{k^2\delta^p}\min\limits_{\mathbf{x}\bot\mathbf{1},\max\limits_i x_i-\min\limits_i x_i=\delta}\min\limits_{t\in\ensuremath{\mathbb{R}}}g(\mathbf{x},t,p)>0. \end{align*} Clearly, $g(\mathbf{x},t,2)=\sum_{\{i,j\}\subset\{1,\ldots,k\}}(x_i-x_j)^2$ and $g(\mathbf{x},t,1)\ge0$ Note that $\partial_p g(\mathbf{x},t,p)=\frac1k \sum_{i=1}^k|kx_i+kt|^p\ln|kx_i+kt|-|\sum_{i=1}^kx_i+kt|^p\ln|\sum_{i=1}^kx_i+kt|$. Since $s\mapsto s^p\ln s$ is convex and increasing on $s\in(1,+\infty)$, by Jensen's inequality for convex functions, $\partial_p g(\mathbf{x},t,p)>0$ whenever $|t|>\delta+1/k$ and $p>1$. Therefore, $$g(\mathbf{x},t,1)< g(\mathbf{x},t,p)< \sum_{\{i,j\}\subset\{1,\ldots,k\}}(x_i-x_j)^2$$ whenever $|t|>\delta+1/k$ and $1<p<2$. Consequently, \begin{align*} \sup\limits_{x\text{ non-constant}}\frac{k^{p-1}\sum_{i=1}^k|x_i|^p-|\sum_{i=1}^kx_i|^p}{\sum_{i,j=1}^k|x_i-x_j|^p}&=\sup\limits_{\mathbf{x}\bot\mathbf{1},\max\limits_i x_i-\min\limits_i x_i=\delta}\max\limits_{t\in\ensuremath{\mathbb{R}}}\frac{g(\mathbf{x},t,p)}{\sum_{i,j=1}^k|x_i-x_j|^p} \\&\le \frac{1}{\delta^p}\max\limits_{\mathbf{x}\bot\mathbf{1},\max\limits_i x_i-\min\limits_i x_i=\delta}\max\limits_{t\in\ensuremath{\mathbb{R}}}g(\mathbf{x},t,p)<+\infty. \end{align*} The claim is proved. \vspace{0.3cm} Now we apply the above claim to estimate the spectral gap of $\lambda_n(\Delta^{up}_{d,p})$ from $(d+2)^{p-1}$. 
Note that \begin{align*} &(d+2)^{p-1}-\lambda_n(\Delta^{up}_{d,p})\\=~&(d+2)^{p-1}-\sup\limits_{f\ne0} \frac{\sum\limits_{\sigma\in \Sigma_{d+1}}\left|\sum_{\tau\in \Sigma_d,\tau\subset\sigma}\mathrm{sgn}([\tau],\partial[\sigma])f(\tau)\right|^p}{\sum_{\tau\in \Sigma_d} \deg\tau \cdot |f(\tau)|^p} \\=~&\inf\limits_{f\ne0} \frac{(d+2)^{p-1}\sum_{\tau\in \Sigma_d} \deg\tau \cdot |f(\tau)|^p-\sum\limits_{\sigma\in \Sigma_{d+1}}\left|\sum_{\tau\in \Sigma_d,\tau\subset\sigma}\mathrm{sgn}([\tau],\partial[\sigma])f(\tau)\right|^p}{\sum_{\tau\in \Sigma_d} \deg\tau \cdot |f(\tau)|^p} \\=~&\inf\limits_{f\ne0}\frac{(d+2)^{p-1}\sum\limits_{\sigma\in \Sigma_{d+1}}\sum\limits_{\tau\in \Sigma_d,\tau\subset \sigma}|f(\tau)|^p-\sum\limits_{\sigma\in \Sigma_{d+1}}\left|\sum_{\tau\in \Sigma_d,\tau\subset\sigma}\mathrm{sgn}([\tau],\partial[\sigma])f(\tau)\right|^p}{\sum_{\tau\in \Sigma_d} \deg\tau \cdot |f(\tau)|^p} \\=~&\inf\limits_{f\ne0}\frac{\sum\limits_{\sigma\in \Sigma_{d+1}}\left((d+2)^{p-1}\sum\limits_{\tau\in \Sigma_d,\tau\subset \sigma}|f(\tau)|^p- \left|\sum_{\tau\in \Sigma_d,\tau\subset\sigma}\mathrm{sgn}([\tau],\partial[\sigma])f(\tau)\right|^p\right)}{\sum_{\tau\in \Sigma_d} \deg\tau \cdot |f(\tau)|^p} \\\le~&\inf\limits_{f\ne0}\frac{\sum\limits_{\sigma\in \Sigma_{d+1}}M_{p,d+2}\sum\limits_{{\tau,\tau'\in \Sigma_d,\tau,\tau'\subset\sigma}} |\mathrm{sgn}([\tau],\partial[\sigma])f(\tau)-\mathrm{sgn}([\tau'],\partial[\sigma])f(\tau') |^p}{\sum_{\tau\in \Sigma_d} \deg\tau \cdot |f(\tau)|^p} \\=~&M_{p,d+2}(d+1)\inf\limits_{f\ne0}\frac{\sum\limits_{{\tau,\tau'\in \Sigma_d,\tau\sim^d\tau'}} |f(\tau)-s([\tau'],[\tau])f(\tau') |^p}{\sum_{\tau\in \Sigma_d} \widetilde{\deg}\tau \cdot |f(\tau)|^p} \\=~&M_{p,d+2}(d+1) \lambda_1 (\Delta_p(\Gamma_d,s))\le M_{p,d+2}(d+1)2^{p-1} h(\Gamma_d,s)=M_{p,d+2}2^{p-1} h_1(\Sigma_d) \end{align*} where $\widetilde{\deg}\,\tau=(d+1)\deg\tau$ is the degree of $\tau$ in $(\Gamma_d,s)$. Similarly, \begin{align*} (d+2)^{p-1}-\lambda_n(\Delta^{up}_{d,p})&\ge m_{p,d+2}(d+1)\lambda_1(\Delta_p(\Gamma_d,s)) \\&\ge m_{p,d+2}(d+1)2^{p-1}\frac{h^p(\Gamma_d,s)}{p^p}=m_{p,d+2} \frac{h^p}{p^p}\left(\frac{2}{d+1}\right)^{p-1} \end{align*} where we used a Cheeger inequality for the $p$-Laplacian on signed graphs from \cite{Amghibech,GLZ22}. Therefore, we can always take $$c_{p,d}= \frac{m_{p,d+2}2^{p-1}}{p^p(d+1)^{p-1}}\;\text{ and }\; C_{p,d}=2^{p-1}M_{p,d+2}.$$ While, for the case of $p=2$ (already treated in Theorem \ref{thm:anti-signed-Cheeger}), it follows from $$ k\sum_{i=1}^kx_i^2-(\sum_{i=1}^kx_i)^2=\sum_{1\le i<j\le k}(x_i-x_j)^2$$ that $m_{2,k}=M_{2,k}=1$, for any $k\ge2$, and $c_{2,d}=\frac{1}{2(d+1)}$ and $C_{2,d}=2$ for any $d\ge0$. \end{proof} \begin{remark}\label{rem:k-way-p} In fact, we can further prove that there exist absolute constants $C_{p,d}'\ge c_{p,d}'>0$ such that for any simplicial complex, and for any $k\ge 1$, \begin{equation}\label{eq:Cheeger-k-complex-p} c_{p,d}'\frac{ h_k(\Sigma_d)^2}{k^6}\le (d+2)^{p-1}-\lambda_{n+1-k}'(\Delta^{up}_{d,p})\le C_{p,d}'h_k(\Sigma_{d}), \end{equation} where \[\lambda_{n+1-k}'(\Delta^{up}_{d,p})=\sup_{\gamma(S)\ge k}\inf_{f\in S} \frac{\sum\limits_{\sigma\in \Sigma_{d+1}}\left|\sum_{\tau\in \Sigma_d,\tau\subset\sigma}\mathrm{sgn}([\tau],\partial[\sigma])f(\tau)\right|^p}{\sum_{\tau\in \Sigma_d} \deg\tau \cdot |f(\tau)|^p}\] indicates the $(n+1-k)$-th max-min eigenvalue. Clearly, $\lambda_n'(\Delta^{up}_{d,p})=\lambda_n(\Delta^{up}_{d,p})$ for any $p$. For simplicity, and to avoid tedious processes, we just sketch the proof below. 
First, using the claim in the proof of Theorem \ref{thm:anti-signed-Cheeger-p}, we have \[(d+1)m_{p,d}\lambda_{k}(\Delta_p(\Gamma_d,s))\le(d+2)^{p-1}-\lambda_{n+1-k}'(\Delta^{up}_{d,p})\le (d+1)M_{p,d}\lambda_{k}(\Delta_p(\Gamma_d,s))\] where $\Delta_p(\Gamma_d,s)$ represents the $p$-Laplacian on the signed graph $(\Gamma_d,s)$. By a slightly modified variant of Theorem 1.4 in \cite{Zhang21}, and by Theorem \ref{thm:signed-Cheeger}, we can get \[2^{p-2}c\frac{1}{k^6}(\frac{h_k(\Sigma_d)}{d+1})^2\le 2^{p-2}\lambda_{k}(\Delta_{(\Gamma_d,s)})\le\lambda_{k}(\Delta_p(\Gamma_d,s))\le 2^{p-1}\frac{h_k(\Sigma_d)}{d+1}\] in a similar manner. The proof is then finished by combining the above two inequalities. \end{remark} \begin{remark} Theorem \ref{thm:anti-signed-Cheeger} can be recovered by taking $p=2$ in Theorem \ref{thm:anti-signed-Cheeger-p} and Remark \ref{rem:k-way-p}. Remarkably, for $p>2$, we have $$\sup\limits_{x\text{ non-constant}}\frac{k^{p-1}\sum_{i=1}^k|x_i|^p-|\sum_{i=1}^kx_i|^p}{\sum_{i,j=1}^k|x_i-x_j|^p}=+\infty,$$ and hence, we can only obtain a one-sided estimate $c_{p,d}h_1(\Sigma_d)^p\le (d+2)^{p-1}-\lambda_n(\Delta^{up}_{d,p})$ (or $c_{p,d}'\frac{ h_k(\Sigma_d)^2}{k^6}\le (d+2)^{p-1}-\lambda_{n+1-k}'(\Delta^{up}_{d,p})$ for all max-min eigenvalues) when $p>2$. \end{remark} The last result gives a nonlinear version of the main theorem in Section \ref{sec:gap-0}. We put $$\lambda_{I_d}(\Delta^{up}_{d,p})=\min\limits_{\mathbf{x}\bot \mathrm{Image}(B_d^\top)}\frac{\|B_{d+1}^\top\mathbf{x}\|_p^p}{\min\limits_{\mathbf{y}\in \mathrm{Image}(B_d^\top)}\|\mathbf{x}+\mathbf{y}\|_{p,\deg}^p}$$ which indicates the first nontrivial eigenvalue of $\Delta^{up}_{d,p}$. \begin{pro}\label{pro:rough-Cheeger-p} Suppose that $\deg_\tau>0$, $\forall \tau\in \Sigma_d$. Then, for any $p\ge 1$, $$\frac{h^p(\Sigma_d)}{|\Sigma_{d+1}|^{p-1}}\le \lambda_{I_d}(\Delta_{d,p}^{up})\le \vol(\Sigma_d)^{p-1}h(\Sigma_d).$$ \end{pro} \begin{proof} The proof is easy and very similar to that of Proposition \ref{pro:rough-Cheeger}. \end{proof} \begin{theorem}\label{thm:p-lap-Cheeger} Let $M$ be an orientable, compact, closed Riemannian manifold of dimension $(d+1)$. Let $\Sigma$ be a simplicial complex which is combinatorially equivalent to a uniform triangulation of $M$. Then, there is a Cheeger inequality $$ \frac{h^p(\Sigma_d)}{C}\le \lambda_{I_{d}}(\Delta_{d,p}^{up}) \le C\cdot h^{p-1}(\Sigma_d), $$ where $C$ is a uniform constant which is independent of the choice of $\Sigma$. \end{theorem} \begin{proof} The proof is essentially the same as that of Theorem \ref{thm:Cheeger-manifold-complex}, with only a small difference at Claim 1 in the proof of Theorem \ref{thm:Cheeger-manifold-complex}. In fact, we only need to use the following claim instead of Claim 1. Claim: For the down Cheeger constant, for any $p>1$, we have \[(d+2)^{p-1}(\frac{h_{down}}{p^*})^{p}\le \lambda_{I_d}\left(\Delta_{d,p}^{up}\right)\le (d+2)^{p-1}h_{down}^{p-1}\] and in particular, when $p$ tends to $+\infty$, we have $\lim\limits_{p\to+\infty}\lambda_{I_d}(\Delta_{d,p}^{up})^{\frac1p}=(d+2)h_{down}$. Proof: Since the down adjacency relation induces a graph on $\Sigma_{d+1}$, we can directly use the Cheeger inequality for the $p$-Laplacian on graphs to derive \begin{equation}\label{eq:down-Cheeger} \frac{2^{p-1}h^p_{down}(\Sigma_{d+1})}{p^p}\le \lambda_{2}(\Delta_{d+1,p}^{down}) \le 2^{p-1}h_{down}(\Sigma_{d+1}).
\end{equation} Since $|\Sigma|$ is a compact piecewise flat manifold without boundary, and since the dimension of $|\Sigma|$ is $d+1$, the normalized and unnormalized versions of $p$-Laplacian on $\Sigma$ satisfy $\lambda_{i}(L_{d+1,p}^{down})=(d+2)\lambda_{i}(\Delta_{d+1,p}^{down})$ and $\lambda_{i}(L_{d,p}^{up})=2\lambda_{i}(\Delta_{d,p}^{up})$ for any $i$. Together with the spectral duality $(\lambda_{I_d}(L_{d,p^*}^{up}))^{\frac{1}{p^*}}=(\lambda_{2}(L_{d+1,p}^{down}))^{\frac1p}$ derived by Proposition \ref{pro:nonzero-p-Lap}, we immediately obtain the duality equality \begin{equation}\label{eq:dual-equality}(2\lambda_{I_d}(\Delta_{d,p^*}^{up}))^{\frac{1}{p^*}}=((d+2)\lambda_{2}(\Delta_{d+1,p}^{down}))^{\frac1p}.\end{equation} Then substituting the duality equality \eqref{eq:dual-equality} into the above down Cheeger inequality \eqref{eq:down-Cheeger}, we finally deduce that \[(d+2)^{p^*-1}(\frac{h_{down}}{p})^{p^*}\le \lambda_{I_d}(\Delta_{d,p^*}^{up})\le (d+2)^{p^*-1}h_{down}^{p^*-1}\] The proof of the claim is completed by exchanging the positions of $p$ and $p^*$. Finally, combining the above claim with Claim 2 in the proof of Theorem \ref{thm:Cheeger-manifold-complex}, we derive the desired Cheeger-type inequality stated in Theorem \ref{thm:p-lap-Cheeger}. \end{proof} \vspace{0.6cm} {\bf Acknowledgements.} This research was supported by grants from Fundamental Research Funds for the Central Universities (No. 7101303046).
\section{Introduction} The central object of study in this article is the following conjecture. \begin{conj}[Andr\'e-Pink-Zannier] \label{APZ} Let $S$ be a Shimura variety and $\Sigma$ a subset of a generalised Hecke orbit in $S$ (as in~\cite[\S3]{RY}). Then the irreducible components of the Zariski closure of $\Sigma$ are weakly special subvarieties. \end{conj} This conjecture is an important special case of the Zilber-Pink conjectures for Shimura varieties, which has recently been and continues to be a subject of active research. A special case of Conjecture~\ref{APZ} was first formulated in 1989 by Y.~André in~\cite[\S{}X 4.5, p.\,216 (Problem 3)]{Andre}. Conjecture~\ref{APZ} was then stated in the introduction to the second author's 2000 PhD thesis~\cite{Y}\footnote{The statement there uses the terminology `totally geodesic subvarieties' instead of `weakly special', but Moonen had proved in~\cite{MoMo} that the two notions are equivalent.}, following discussions with Bas Edixhoven. Both statements refer to classical Hecke orbits, rather than \emph{generalised} Hecke orbits (cf.~\cite[\S3.4.1]{RY}). Zannier has considered questions of this type in the context of abelian schemes and tori. Richard Pink, in his 2005 paper~\cite{Pink}, has formulated and studied this question; he used a generalised notion of Hecke orbit, defined using auxiliary linear representations (cf.~\cite[\S3.4.2]{RY}). Pink proves it for ``Galois generic'' points of Shimura varieties\footnote{Roughly, the image of the corresponding Galois representation intersects the derived subgroup of the ambient group in an adélically open subgroup. This is too strong to hold in general. See the first author's 2009 PhD thesis~\cite[III.\S7, p.\,59]{R-PhDfull} for a weaker hypothesis that is sufficient and expected to hold.}: this implies in particular that such points are Hodge generic in their connected component. Pink uses equidistribution of Hecke points proved in~\cite{COU} (or in~\cite{EO}). We refer to the introduction of~\cite{RY} for further background on Conjecture~\ref{APZ}. In the Pila-Zannier approach and most other approaches to Zilber-Pink conjectures, one of the major difficulties is to obtain suitable lower bounds for Galois orbits of points in the ``unlikely locus'' (see~\cite{DR}). In~\cite{RY}, we develop a general approach to Conjecture~\ref{APZ} based on the Pila-Zannier strategy (o-minimality and functional transcendence): we define generalised Hecke orbits and a natural height function on these orbits, and we prove precise lower Galois bounds~\cite[Th.~7.4]{RY} under the ``weakly adélic Mumford-Tate conjecture''~\cite[\S7.1]{RY}. Let~$(G,X)$ be a Shimura datum, let~$K\leq G({\mathbb A}_f)$ be a compact open subgroup, and let~$S=Sh_K(G,X)$ be the associated Shimura variety. The main result of \cite{RY} is as follows. \begin{theorem}[Theorem 2.4 of \cite{RY}]\label{main theorem RY} Let $x_0 \in X$. Assume that~$x_0$ satisfies the weakly adélic Mumford-Tate conjecture. Then the conclusion of Conjecture~\ref{APZ} holds for any subset of the generalised Hecke orbit of $[x_0,1]$. \end{theorem} In the present article we prove the conclusions of this theorem \emph{unconditionally} for all Shimura varieties \emph{of abelian type}. This completely generalises the main result of~\cite{Orr} by M.~Orr. Our main result is as follows. \begin{theorem} \label{main theorem} Let~$s_0$ be a point in a Shimura variety~$Sh_K(G,X)$ of abelian type.
Let~$Z$ be a subvariety whose intersection with the generalised Hecke orbit of~$s_0$ is Zariski dense in~$Z$. Then~$Z$ is a finite union of weakly special subvarieties of~$S$. \end{theorem} We actually prove the more general statement below, which we believe to be of independent interest. Its assumption is weaker than the `weakly adélic Mumford-Tate conjecture' of~Th.~\ref{main theorem RY}. It is the `uniform integral Tate conjecture' assumption explained in~\S\ref{sec:Def:Tate}. We refer to~\cite[Def.~3.1]{RY} for the notion of geometric Hecke orbit. By~\cite[Th.~3.2]{RY}, a generalised Hecke orbit is a finite union of geometric Hecke orbits. \begin{theorem} \label{main theorem 2} Let~$s_0=[x_0,1]$ be a point in a Shimura variety~$Sh_K(G,X)$, and assume the uniform integral Tate conjecture for~$x_0$ in~$X$ in the sense of~Definition~\ref{defi:Tate bis}. Let~$Z$ be a subvariety whose intersection with the geometric Hecke orbit of~$s_0$ is Zariski dense in~$Z$. Then~$Z$ is a finite union of weakly special subvarieties of~$S$. \end{theorem} Using Faltings' theorems, we prove in~\S\ref{Tate:abelian type} that points on Shimura varieties of adjoint type and abelian type satisfy this `uniform integral Tate' assumption. Thus Theorem~\ref{main theorem}, in the adjoint type case, is a special case of Theorem~\ref{main theorem 2}. Because Conjecture~\ref{APZ} can be reduced to the adjoint case, we deduce Theorem~\ref{main theorem} for any Shimura variety of abelian type. At the heart of this article is obtaining polynomial lower bounds~\cite[Th.~7.4]{RY} which are unconditional for Shimura varieties of abelian type, or which hold in general under the assumption of the Tate hypothesis. We emphasize that Shimura varieties of abelian type constitute the most important class of Shimura varieties. The Tate hypothesis is used to compare the sizes of Galois orbits with those of the adélic orbits of~\cite[App.~B]{RY}. In our setting, we can easily recover the former results of~\cite{Orr}, which were only concerned with~$S$-Hecke orbits (involving a finite set~$S$ of primes). In order to work with whole Hecke orbits, and even geometric Hecke orbits, we use an ``integral and uniform'' refined version of the Tate conjecture. Using generalised Hecke orbits is important for our strategy to work, in particular for the reduction steps in~\cite[\S8]{RY}. Some of the new ideas in this article relate the notion of ``stability'' in Mumford's sense to the Tate hypothesis. The fine estimates we need use stability not only over the complex numbers, but also, in a broader context, over~$\mathbb{Z}_p$ and~$\mathbb{Z}$. This is where the ``uniformity and integrality'' in our Tate hypothesis is essential. These ideas originate from~\cite{R-PhD}, part of the first author's 2009 PhD thesis. This article also develops several results of independent interest. Theorem~\ref{pKN} is a~$p$-adic version of a theorem of Kempf-Ness~\cite{KN}. We expect it to be useful in other contexts, and we prove it in more generality than needed here. Theorem~\ref{thm:compare reductive} gives a precise and uniform comparison of norms along two closed orbits of reductive groups. \subsection*{Outline of the paper} We define the uniform integral Tate hypothesis in section~\ref{sec:Def:Tate}. In section~\ref{sec:proof}, we reduce Th.~\ref{main theorem 2} to the bounds on Galois orbits established in the rest of the paper, and to the functorial invariance properties of the Tate hypothesis of section~\ref{sec:functoriality}.
Since the formal strategy is almost identical to that of \cite[\S8]{RY}, we only give a sketch indicating the necessary adjustments and provide precise references to~\cite{RY}. In section~\ref{sec:functoriality} we also derive the refined version of Faltings' theorems that we use, using arguments of Serre and Noot. We deduce that the uniform integral Tate hypothesis holds in Shimura varieties which are of abelian type and also of adjoint type. The central and technically hardest parts of the paper are \S\S\ref{sec:bounds}--\ref{sec:pKN}. There we establish the lower bounds for the Galois orbits of points in geometric Hecke orbits as in \cite{RY} under the assumptions of Th.~\ref{main theorem 2}. The main result~Th.~\ref{thm:compare reductive} of section~\ref{sec:reductive} is essential to the proofs in section~\ref{sec:bounds}. We derive it in section~\ref{sec:reductive} from the results of sections~\ref{sec:pKN} and~\ref{sec:slopes}. Section~\ref{sec:pKN} gives a~$p$-adic analogue, Th.~\ref{pKN}, of a theorem of Kempf-Ness. We prove it in greater generality than required for Th.~\ref{thm:compare reductive}, as we believe it will be useful in other contexts. It involves good reduction properties of homogeneous spaces of reductive groups over ${\mathbb Z}_p$, and of closed orbits in linear representations over ${\mathbb Z}_p$. The ideas behind the convexity and slope estimates in \S\ref{sec:slopes} can be better understood in the context of Bruhat-Tits buildings as in~\cite{R-PhD}. The height functions which are central in our implementation of the Pila-Zannier strategy give examples of the type of functions studied in~\S\ref{sec:slopes}. \subsubsection*{Acknowledgements} We would like to express our greatest gratitude to Laurent Moret-Bailly for discussions and suggestions regarding the content of section~\ref{sec:pKN}. Both authors were supported by Leverhulme Trust Grant RPG-2019-180. The support of the Leverhulme Trust is gratefully acknowledged. The first author is grateful to the IHÉS for its invitation during the preparation of this article. \section{Uniform integral Tate conjecture}\label{sec:Def:Tate} In this section, we define in~Def.~\ref{defi:Tate bis} our main assumption in this paper, the `uniform integral Tate conjecture' property. This is an extension of the conclusions of Faltings' theorem, in the form given in Th.~\ref{Faltings}, to all Shimura varieties. \subsection{Uniform integral Tate conjecture} In~\S\ref{defTate1} and~\S\ref{defTate2} we consider an abstract setting. In~\S\ref{Shimura applied def} we specialise it to the context of Shimura varieties. \subsubsection{}\label{defTate1} Let~$M\leq G$ be (connected) reductive algebraic groups over~${\mathbb Q}$. We identify~$G$ with its image by a faithful representation \[ \rho:G\to GL(d). \] Def.~\ref{defi:Tate} and Theorem~\ref{main theorem 2} will not depend on this choice. The Zariski closures in~$GL(d)_{\mathbb{Z}}$ of the algebraic groups~$M$,~$G$ and~$Z_{G}(M)$ define models over~$\mathbb{Z}$. We write~$G_{\mathbb{F}_p}$ for the special fibre\footnote{For almost all primes~$p$ the group~$G({\mathbb Z}_p)$ is hyperspecial and ~$G_{{\mathbb F}_p}$ is a connected reductive algebraic group over~${\mathbb F}_p$.} and \[ G({\mathbb Z}_p)=G({\mathbb Q}_p)\cap GL(d,{\mathbb Z}_p)\text{ and }G(\widehat{{\mathbb Z}})=\prod_p G({\mathbb Z}_p)=G({\mathbb A}_f)\cap GL(d,\widehat{{\mathbb Z}}). \] We also have a reduction map~$G({\mathbb Z}_p)\to G_{{\mathbb F}_p}({\mathbb F}_p)$. These constructions apply to~$M$ and~$Z_G(M)$ as well.
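To fix ideas, here is a minimal illustrative instance of this setup (the choice of groups in this example is ours and is not used elsewhere in the paper): take $\rho$ to be the identity of $G=GL(2)$ and let $M\leq G$ be the diagonal torus. The induced $\mathbb{Z}$-models are $GL(2)_{\mathbb{Z}}$ and the diagonal torus over~$\mathbb{Z}$, one has $Z_G(M)=M$, and for every prime~$p$ the group $M({\mathbb Z}_p)$ consists of the diagonal matrices with entries in~${\mathbb Z}_p^{\times}$ and reduces onto $M_{{\mathbb F}_p}({\mathbb F}_p)$, the group of diagonal matrices in $GL(2,{\mathbb F}_p)$.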
\subsubsection{} \label{defTate2} Let~$U\leq M({\mathbb A}_f)$ be a compact subgroup. For every prime~$p$, we define~$U_p=M({\mathbb Z}_p)\cap U$. We denote by~$U(p)$ the image of~$U_p$ in~$G({\mathbb F}_p)$. We define~${U_p}^0=U_p\cap {H_p}^0({\mathbb Q}_p)$ where~$H_p=\overline{U_p}^{Zar}\leq G_{{\mathbb Q}_p}$ the Zariski closure as a~${\mathbb Q}_p$-algebraic subgroup, and~${H_p}^0$ its neutral Zariski connected component. \begin{definition}[{\bf Uniform integral Tate property}]\label{defi:Tate} We say that a compact subgroup~$U\leq M({\mathbb A}_f)$ \emph{``satisfies the uniform integral Tate'' property with respect to~$M$,~$G$ and~$\rho$} if: \begin{enumerate} \item \label{defi:Tate1} For every~$p$, \begin{subequations} \begin{equation}\label{defi:tate eq1} Z_{G_{\mathbb{Q}_p}}(U_p)= Z_{G_{\mathbb{Q}_p}}({U_p}^0)= Z_{G}(M)_{\mathbb{Q}_p}. \end{equation} and \begin{equation}\label{defi:tate eq 1.2} \text{the action of $U_p$ on ${\mathbb{Q}_p}^d$ is semisimple.} \end{equation} (This~\eqref{defi:tate eq 1.2} is equivalent to: $H_p$ is reductive.) \end{subequations} \item \label{defi:Tate2} For every~$D$, there exists an integer~$M(D)$ such that for every~$p \geq M(D)$ and every~$U'\leq U_p$ of index~$[U_p:U']\leq D$, we have \begin{subequations} \begin{equation}\label{defi:tate eq 2} Z_{G_{\mathbb{F}_p}}(U'(p))=Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p}) \end{equation} and \begin{equation}\label{defi:tate eq 2.2} \text{the action of $U'(p)$ on $\overline{\mathbb{F}_p}^d$ is semisimple.} \end{equation} \end{subequations} (When~$p>d$,~\eqref{defi:tate eq 2.2} is equivalent to: the Nori group, defined below, of~$U'(p)$ is semisimple.) \end{enumerate} \end{definition} In our terminology, \emph{integrality} refers to the second property over~$\mathbb{F}_p$ on~$U(p)$ and \emph{uniformity} to the fact that the integer~$M(D)$ depends on~$D$ only. \subsubsection{Remarks} \label{rem:Tate} We collect here some facts that will be used throughout this article. \begin{enumerate} \item \label{rem2} For~$p$ large enough, in terms of~$d$, we can use Nori theory~\cite{N}. For a subgroup~$U'(p)\leq G(\mathbb{F}_p)$, the group ${U'(p)}^{\dagger}$ defined in~\eqref{defi daggers}, is of the form~$H({\mathbb F}_p)^\dagger$ for a \textbf{reductive} algebraic group~$H\leq G_{{\mathbb F}_p}$ over ${\mathbb F}_p$. We call this~$H$ the \textbf{Nori group} of~$U'(p)$. The property~\eqref{defi:tate eq 2.2} is then equivalent to the fact that~$H$ is a \textbf{reductive} group~$H\leq G_{{\mathbb F}_p}$ over ${\mathbb F}_p$(see~\cite[Th.~5.3]{SCR}). We also note that~$[H({\mathbb F}_p):H({\mathbb F}_p)^\dagger]$ can be bounded in terms of~$\dim(G)$ (see.~\cite[3.6(v)]{N}). \item \label{rem2.2} If~$U'\leq U$ has index~$[U:U']\leq p$, then~$U'(p)^\dagger= U(p)^\dagger$. \item \label{rem3} This ``uniform integral Tate'' property does not depend\,\footnote{Indeed, Def.~\ref{defi:Tate} does not involve~$\rho$ itself, but only the induced models of~$G$ and~$M$. The algebraic groups~$G_{\mathbb{Q}_p}$ and~$M_{\mathbb{Q}_p}$ do not depend on the integral models, and two models, for almost all~$p$, induce the same local models~$G_{\mathbb{Z}_p}$ and~$M_{\mathbb{Z}_p}$.} on the choice of a faithful representation~$\rho$. \item \label{rem4} The semisimplicity of the action over~$\overline{\mathbb{F}_p}$ is equivalent to the semisimplicity over~$\mathbb{F}_p$. 
\item \label{passage aux Up} The group~$U\leq M({{\mathbb A}_f})$ ``satisfies the uniform integral Tate'' property with respect to~$M$,~$G$ and~$\rho$ if and only if the subgroup~$\prod_p {U_p}\leq U$ does so. \item Part~\eqref{defi:Tate1} of Def.~\ref{defi:Tate} is satisfied for~$U$ if and only if it is satisfied for some subgroup of finite index in~$U$. \item Property~\eqref{defi:tate eq1} of part~\eqref{defi:Tate1} of Def.~\ref{defi:Tate} is satisfied for~$U_p$ if and only if it is satisfied for a subgroup~$U'$ of~$U_p$: we will have~$U'^0\leq {U_p}^0\leq U \leq M$, and~$Z_G(M)=Z_G(U'^0)\geq Z_G({U_p}^0)\geq Z_G({U_p})\geq Z_G(M)$. \end{enumerate} In view of Remark~\ref{rem:Tate}~(\ref{rem3}) we will, from now on, just say ``satisfies the uniform integral Tate conjecture'' without referring to a particular faithful representation~$\rho$. We deduce from the above facts the following. \begin{lemma}\label{lem U ast} Let~$U''\leq U\leq M(\widehat{\mathbb{Z}})$ be such that~$U''$ satisfies the uniform integral Tate property with respect to~$M$,~$G$ and~$\rho$. Then~$U$ satisfies the ``uniform integral Tate'' property with respect to~$M$,~$G$ and~$\rho$. \end{lemma} \subsubsection{}\label{Shimura applied def} We denote by~$(G,X)$ a Shimura datum, by~$K\leq G(\mathbb{A}_f)$ a compact open subgroup, and by~$S=Sh_K(G,X)$ the associated Shimura variety. Fix~$x_0\in X$ and let~$M\leq G$ be the Mumford-Tate group of~$x_0$. Let~$E$ be a field of finite type over $\mathbb{Q}$ such that~$s_0=[x_0,1]\in S(E)$ (such an~$E$ always exists). We denote by~$\rho_{x_0}:{\rm Gal}(\overline{E}/E)\to M(\mathbb{A}_f)$ the representation associated to $x_0$ (see Section 4 of \cite{RY}), and by~$U\leq M(\mathbb{A}_f)\cap K$ its image. The main hypothesis in Theorem~\ref{main theorem 2} is the following. \begin{definition}\label{defi:Tate bis}We say that~$x_0$ ``satisfies the uniform integral Tate conjecture'' if~$U=\rho_{x_0}({\rm Gal}(\overline{E}/E))$ ``satisfies the uniform integral Tate'' property with respect to~$M$,~$G$ in the sense of Def.~\ref{defi:Tate}. \end{definition} \subsubsection{} We will make use of the following terminology. \begin{definition}\label{defi:indep} We say that a subgroup~$U\leq M({\mathbb A}_f)$ satisfies the~$\ell$-independence property if it is of the form \[ U=\prod_p U_p \] with~$U_p\leq M({\mathbb Q}_p)$ for every prime~$p$. \end{definition} \section{Proof of the main result}\label{sec:proof} \addtocontents{toc}{\protect\setcounter{tocdepth}{1}} The structure of the proof of Th.~\ref{main theorem 2} is essentially the same as in~\cite{RY}. The main difference is that our hypothesis is the uniform integral Tate property instead of the ``weakly adélic Mumford-Tate conjecture''. Using the results of \S\S\ref{sec:functoriality},\ref{sec:bounds}, we may follow the same proof as in~\cite[\S8]{RY}, making the following changes. \subsection{} In the step ``reduction to the Hodge generic case''~\cite[8.1.1]{RY} we make the following changes. Since we work with geometric Hecke orbits~$\mathcal{H}^{g}(x_0)$ instead of generalised Hecke orbits~$\mathcal{H}(x_0)$, we use~\cite[Cor.~3.5]{RY} to remark that, with~$\Sigma^g=\mathcal{H}^g([x_0,1])\cap Z$, the following set is a finite union \[ \Sigma'^g:=\stackrel{-1}{\Psi}(\Sigma^g)=\mathcal{H}^g([x'_1,1])\cup\ldots\cup \mathcal{H}^g([x'_k,1]) \] of geometric Hecke orbits in~$Sh_{K\cap G'({\mathbb A}_f)}(G',X')$.
We replace~``On the other hand, the Mumford-Tate hypothesis [...]'' by the observation that if the geometric Hecke orbit~$\mathcal{H}^g([x_0,1])$ in $Sh(G,X)$ satisfies the Tate conjecture (relative to~$M$ and~$G$), then, by Prop.~\ref{Tate:subdatum}, each of the geometric Hecke orbits~$\mathcal{H}^g([x'_1,1]),\ldots,\mathcal{H}^g([x'_k,1])$ satisfies the Tate conjecture (relative to~$M$ and~$G'$). \subsection{} In the step ``reduction to the adjoint datum''~\cite[8.1.2]{RY} we make the following changes. Instead of~``Using § 4, the Mumford-Tate hypothesis will still be valid even [...]'', we use~Prop.~\ref{Tate:invariance}. Instead of~``In view of § 7, the Mumford-Tate hypothesis [...]'' we use Prop.~\ref{Tate:subdatum}. \subsection{} In~\cite[8.1.3]{RY}, ``Induction argument for factorable subvarieties'', we make the following changes. Instead of~``As explained in § 7, the Mumford-Tate hypothesis [...]'' we use Prop.~\ref{Tate:products}. \subsection{} The last change from~\cite[\S 8]{RY} is in~\cite[8.2.3]{RY} where we use our Th.~\ref{Galois bounds}, instead of the lower bound on the size of Galois orbits~\cite[Th.~7.4]{RY}. We may apply Th.~\ref{Galois bounds} to the Galois image~$U$, because: the hypothesis on~$M^{ab}$ is satisfied for Galois images (cf.~\cite[Lem.~\S7.11]{RY}); the other hypotheses are satisfied by assumption. (In the case of Shimura data of abelian type, see~\S\ref{Tate:abelian type}.) \section{Functoriality of the Tate condition and independence condition}\label{sec:functoriality} \addtocontents{toc}{\protect\setcounter{tocdepth}{1}} In this section, we verify that the conditions in Definitions \ref{defi:Tate} and~\ref{defi:indep} are preserved by various natural operations. This is necessary to make simplifying assumptions in the proof of the main theorems (cf. \cite[\S8.1]{RY}). We also show that the conditions of Definitions~\ref{defi:Tate} and~\ref{defi:indep} hold for all Shimura varieties of abelian type. According to~Remark~\ref{rem:Tate}, Definition~\ref{defi:Tate} does not depend on~$\rho$. It follows from Definition~\ref{defi:indep} that the property that the Galois image satisfies the~$\ell$-independence condition depends neither on~$G$ nor on~$\rho$. \subsection{Invariance on the geometric Hecke orbit} \begin{proposition}\label{Tate:invariance} Let~$x_\phi\in \mathcal{H}^g(x_0)$. If~$x_0$ ``satisfies the uniform integral Tate conjecture'', then~$x_\phi$ ``satisfies the uniform integral Tate conjecture''. If the image~$U$ of~$\rho_{x_0}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}, then the image~$U'$ of~$\rho_{x_\phi}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}. \end{proposition} \begin{proof}Let~$g\in G(\overline{{\mathbb Q}})$ be such that~$\phi=g\phi_0g^{-1}$, let~$L$ be a number field such that~$g\in G(L)$, and let~${\mathbb A}_{f,L}={\mathbb A}_f\otimes_{\mathbb Q} L$ be the ring of ad\`eles of the number field~$L$. Denote by~$U'$ the image of~$\rho_{x_\phi}$. According to~\cite[Prop.~4.3]{RY} we have \[ U'=\phi(U). \] We first prove the last assertion. Assume~$U=\prod_p U_p$. Since $\phi$ is defined over ${\mathbb Q}$, we have \[ U'=\prod_p \phi(U_p). \] This proves the last assertion. We treat the semisimplicity over~$\mathbb{Q}_p$ in Def.~\ref{defi:Tate}. Assume that the action of~$U_p$ is semisimple. Equivalently, the Zariski closure~$\overline{U_p}^{Zar}$ is reductive.
As~$\phi$ is defined over~${\mathbb Q}$, the algebraic group~$\overline{\phi(U_p)}^{Zar}=\phi(\overline{U_p}^{Zar})$ is reductive, or equivalently, the action of~$U'_p=\phi(U_p)$ is semisimple. We now treat the centraliser property of part~\ref{defi:Tate1} of Def.~\ref{defi:Tate}. For every prime~$p$, we have \begin{multline*} Z_{G_{\mathbb{Q}_p}}(U'_p)=gZ_{G_{\mathbb{Q}_p}}(U_p)g^{-1}\\ =gZ_{G_{\mathbb{Q}_p}}({U_p}^0)g^{-1}= gZ_{G}(M)_{\mathbb{Q}_p}g^{-1}=Z_{G}(gMg^{-1})_{\mathbb{Q}_p}=Z_{G}(\phi(M))_{\mathbb{Q}_p} \end{multline*} As~$\phi(M)$ is the Mumford-Tate group of~$x_\phi$ we have proved~(1) of~Def.~\ref{defi:Tate} for~$x_\phi$. We now treat part \ref{defi:Tate2} of Def.~\ref{defi:Tate}. Note that the component~$g_p$ of~$g$ as an adélic element is in~$G(O_{L\otimes\mathbb{Q}_p})$ for~$p$ large enough. For~$p$ large enough, the group~$\phi(M)({\mathbb Z}_p)=gM({\mathbb Z}_p) g^{-1}$ is hyperspecial and the reduction map~ \[ g\mapsto \overline{g}:\phi(M)(\overline{{\mathbb Z}_p})\to \phi(M)(\overline{{\mathbb F}_p}) \] is well defined. Let~$m_0\in\mathbb{Z}_{\geq1}$ be such that the above apply for~$p\geq m_0$. Let~$D$ and~$M(D)$ be as in (2) of Def.~\ref{defi:Tate}. Then, for~$p\geq M'(D):=\max\{m_0;M(D)\}$ we have \[ U'(p)=\overline{g_p}U(p)\overline{g_p}^{-1} \] with~$\overline{g}$ the reduction of~$g$ in~$G(\kappa_L)\leq G(\overline{{\mathbb F}_p})$ , where~$\kappa_L$ is the residue field of~$L$ at a prime above~$p$. The semisimplicity follows. For~$p\geq M'(D)$, we also have \begin{multline*} Z_{G_{{\mathbb F}_p}}(U'(p))=\overline{g_p}Z_{G_{{\mathbb F}_p}}(U(p))\overline{g_p}^{-1}\\ =\overline{g_p}Z_{G_{{\mathbb F}_p}}({U(p)}^0)\overline{g_p}^{-1}= \overline{g_p}Z_{G_{{\mathbb F}_p}}(M)_{{\mathbb F}_p}\overline{g_p}^{-1}\\ =Z_{G_{{\mathbb F}_p}}(\overline{g_p}M\overline{g_p}^{-1})=Z_{G_{{\mathbb F}_p}}(\phi(M)).\qedhere \end{multline*} \end{proof} \subsection{Passage to a subdatum}\label{passage to sub} \begin{proposition}\label{Tate:subdatum} Let~$\Psi:(G',X')\to (G,X)$ be an injective morphism of Shimura data. Let~$x_\phi\in \mathcal{H}^g(x_0)$ be such that there exists~$x'_\phi$ such that~$x_\phi=\Psi\circ x'_\phi$. If~$x_0$ ``satisfies the uniform integral Tate conjecture'', then~$x'_\phi$ ``satisfies the uniform integral Tate conjecture''. If the image~$U$ of~$\rho_{x_0}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}, then the image~$U'$ of~$\Psi\circ \rho_{x_\phi}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}. \end{proposition} \begin{proof} By Prop.~\ref{Tate:invariance} we may assume~$x_\phi=x_0$. We identify~$G'$ with its image in~$G$. By~\cite[Prop. 3.5]{RY} we have \[ U=\Psi(U')=U' \] where we denote by~$U'$ the image of~$\rho_{x'_0}'$ in $G'(\mathbb{A}_f)$. The semisimplicity of the action of $U'$ is automatic. It follows readily from the definitions and the remark that \[ Z_{G'}(U'_p)=Z_{G}(U'_p)\cap G'=Z_{G}(M)\cap G'=Z_{G'}(M). \] and similarly for~${\mathbb F}_p$ for~$p$ big enough so that~$G'$ is hyperspecial at~$p$. The last statement follows from \[ U'=\Psi(U')=U=\prod_p U_p.\qedhere \] \end{proof} \subsection{Passage to quotients by central subgroups}\label{passage to quotients} \begin{proposition} Let $F \subset Z(G)$ ($Z(G)$ is the centre of $G$) be a subgroup and let $G'$ be the quotient $G/F$. Let~$\Psi:(G,X)\to (G',X')$ be the morphism of Shimura data induced by the quotient $G\longrightarrow G'$. 
If~$x_0$ ``satisfies the uniform integral Tate conjecture'', then~$x'_0=\Psi\circ x_0$ ``satisfies the uniform integral Tate conjecture''. If the image~$U$ of~$\rho_{x_0}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}, then the image~$U'$ of~$\rho_{x_0'}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}. \end{proposition} \begin{proof} The arguments are similar to those of Prop.~\ref{Tate:invariance}. Firstly, by~\cite[Prop.~4.3]{RY} we have \[ U'=\Psi(U). \] Remarking that~$\Psi(M)$ is the Mumford-Tate group of~$x'_0$, we use \[ Z_{G^{ad}}(U'_p)=Z_{G}(U_p)/F=Z_{G}(M)/F=Z_{G^{ad}}(M/Z(G))=Z_{G^{ad}}(\Psi(M)). \] (For~$p$ big enough~$\Psi$ will be compatible with the integral models.) The semisimplicity over~$\mathbb{Q}_p$ is treated as in Prop.~\ref{Tate:invariance}. Finally, if~$U=\prod_p U_p$, then~$U'=\prod_p \Psi(U_p)$. This proves the assertion about~$\ell$-independence. For the semisimplicity property over~$\mathbb{F}_p$ we can use, for~$p$ large enough, part~\ref{rem2} of Remark~\ref{rem:Tate} in order to apply Nori theory, and the remark below. \end{proof} \begin{rem} The subgroup~$U(p)^\dagger$ used by Nori is the~$p$-Sylow of~$U(p)$. From~Lemma~\ref{Sylow} one can deduce that the Nori group of~$\Psi(U(p))$ is~$\Psi(H)$, where~$H$ is the Nori group of~$U(p)$. \end{rem} \begin{rem} This proposition in particular shows that we can restrict ourselves to the case of Shimura varieties where $G$ is semisimple of adjoint type (by taking $F = Z(G)$ in this proposition). \end{rem} \subsection{Compatibility with products} \begin{proposition}\label{Tate:products} Assume $G$ to be of adjoint type and not simple. Let $$ (G,X) = (G_1, X_1) \times (G_2, X_2) $$ be a decomposition of $(G,X)$ as a product. We denote by~$\pi_1:G\to G_1$ and~$\pi_2:G\to G_2$ the projection maps. If~$x_0$ ``satisfies the uniform integral Tate conjecture'', then~$x_i=\pi_i\circ x_0$ ``satisfies the uniform integral Tate conjecture''. If the image~$U$ of~$\rho_{x_0}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}, then the image~$U_i$ of~$\rho_{x_i}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}. \end{proposition} The proof is the same as above. We recall that~$\rho_{x_i}=\pi_i\circ \rho_{x_0}$ by~\cite[Prop.~4.3]{RY}. We also recall that the Mumford-Tate group of~$x_i$ is~$M_i=\pi_i(M)$, and that \[ G_i\cap Z_{G}(M)=Z_{G_i}(M_i). \] \subsection{Shimura varieties of abelian type}\label{Tate:abelian type} \begin{proposition} All Shimura varieties of abelian type and adjoint type satisfy the conditions of Definitions~\ref{defi:Tate} and \ref{defi:indep}. More precisely, let~$(G,X)$ be a Shimura datum of abelian type with~$G$ of adjoint type, and~$S$ be an associated Shimura variety. Then for every point~$s_0=[x_0,1]$ of~$S$, \begin{itemize} \item the point~$x_0$ satisfies the uniform integral Tate conjecture, \item and there exists a field~$E$ of finite type over~${\mathbb Q}$ such that~$U=\rho_{x_0}(Gal(\overline{E}/E))$ satisfies the~$\ell$-independence condition. \end{itemize} \end{proposition} \begin{proof} By definition of abelian type Shimura data (\cite[\S3.2]{Upr}, \cite[Prop.~2.3.10]{Deligne}), there exists an isomorphism of Shimura data \[ (G,X)\simeq({G'}^{ad},{X'}^{ad}) \] with~$(G',X')$ of Hodge type. Using Prop.~\ref{Tate:invariance} we may replace~$s_0=[x_0,1]$ by any point of its geometric Hecke orbit, and assume~$x_0$ belongs to the image of~$X'$ in~$X\simeq {X'}^{ad}$: there exists~$x_0'\in X'$ such that~$x_0={x'_0}^{ad}$. According to~\S\ref{passage to quotients}, we may substitute~$(G,X)$ with~$(G',X')$ and~$x_0$ by~$x_0'$.
By definition of Hodge type data, there exists an injective morphism of Shimura data~$(G,X)\to (\mathfrak{H}_{g},GSp(2g))$, the latter being the Shimura datum of the moduli space~$\mathcal{A}_g$. Let~$\tau_0$ be the image of~$x_0$ in~$\mathfrak{H}_{g}$. According to~\S\ref{passage to sub}, we may assume~$(G,X)=(\mathfrak{H}_{g},GSp(2g))$ and~$x_0=\tau_0$. The conclusion then follows from Th.~\ref{Faltings} and Cor.~\ref{coro Faltings}. \end{proof} \subsection{Uniform integral Faltings' theorem over fields of finite type} \begin{theorem}[Faltings]\label{Faltings} Let~$K$ be a field of finite type over~${\mathbb Q}$, and let~$A/K$ be an abelian variety. Fix an algebraic closure~$\overline{K}$ of~$K$, and denote by \[ T_A\approx \widehat{\mathbb{Z}}^{2\dim(A)} \] the~$\widehat{{\mathbb Z}}$-linear Tate module, on which we have a continuous~$\widehat{\mathbb{Z}}$-linear representation \begin{equation}\label{Faltings rep} \rho=\rho_A:Gal(\overline{K}/K)\to GL_{\widehat{\mathbb{Z}}}(T_A). \end{equation} We assume that~${\rm End}_K(A)={\rm End}_{\overline{K}}(A)$ and we let~${\rm End}(A/K)$ act on~$T_A$ and denote by \[ Z:=\{b\in{\rm End}_{\widehat{{\mathbb Z}}}(T_A)|\forall a\in{\rm End}(A/K),[b,a]=0\} \] the $\widehat{{\mathbb Z}}$-algebra which is the centraliser of~${\rm End}(A/K)$ in~${\rm End}_{\widehat{\mathbb{Z}}}(T_A)\approx Mat({2\dim(A)},\widehat{{\mathbb Z}})$. We denote the image of~$\rho$ by \begin{equation}\label{U in Falt} U:=\rho(Gal(\overline{K}/K)). \end{equation} Then, for every~$d\in\mathbb{Z}_{\geq1}$, there exists some~$M(A,K,d)\in \mathbb{Z}_{\geq1}$ such that: for every open subgroup~$U'\leq U$ of index at most~$d$, the subalgebra \[ \widehat{{\mathbb Z}}[U']\leq Z \] is open of index at most~$M(A,K,d)$. \end{theorem} This statement follows from Faltings' theorems. A reference in the case where~$K$ is a number field is~\cite[Th.~1, Cor.~1.1]{MW}. Because we lack a reference, we give a specialisation argument which reduces the theorem for general fields of finite type over~${\mathbb Q}$ to the case of number fields. In view of~Def.~\ref{defi:Tate} we will prove the following refinement of Th.~\ref{Faltings}. \begin{proposition}\label{prop Falt refined} The same conclusion holds if we replace~\eqref{U in Falt} by \begin{equation}\label{U in Falt prod} U:=\prod_p {U_p}^0, \end{equation} with~$U_p:=\rho({\rm Gal}(\overline{K}/K))\cap GL_{\mathbb{Z}_p}(T_A\otimes\mathbb{Z}_p)$. \end{proposition} The refinement will use the following results. \begin{theorem}[Serre]\label{Serre} In the same situation, assume moreover that~$K$ is a number field. Then there exists a finite extension~$L/K$ such that the Galois image~$U:=\rho({\rm Gal}(\overline{L}/L))$ \begin{enumerate} \item is such that~$U$ satisfies the~$\ell$-independence condition in the sense of Def.~\ref{defi:indep}, (\cite[136. Th.~1, p.34]{S4}, \cite[\S3.1]{SCrit}) \item and such that the~$U_p$ are Zariski connected, (\cite[133.~p.\,15 ;135.~2.2.3 p.\,31]{S4}, \cite[6.14, p\,623]{LP}) \item and such that~$U\leq M(\widehat{\mathbb{Z}})$ where~$M$ is the Mumford-Tate group of~$A$. (cf.~\cite[\S4]{RY} and~\S\ref{Tate:abelian type}.) \end{enumerate} \end{theorem} \begin{proof}Let~$\eta=\mbox{Spec}(K)$ and~$\overline{\eta}=\mbox{Spec}(\overline{K})$.
Following~\cite[\S1.2 and Cor.~1.5]{Noot}, there exists a number field~$F\leq K$, and an abelian variety~$A_F$ over~$F$ such that \begin{itemize} \item We have an identification of Tate modules~$T:=T_A\simeq T_{A_F}$, \item We have an identity (cf.~\cite[Cor.~1.5]{Noot}) \begin{equation}\label{Noot End} {{\rm End}}_K(A)\simeq {{\rm End}}_F(A_F) \end{equation} as subalgebras of~$B:={\rm End}_{\widehat{{\mathbb Z}}}(T)$, \item we have a diagram \[ \begin{tikzcd} Gal(\overline{K}/K)\arrow{d}{\rho}\arrow[hookleftarrow]{r} & D_F \arrow[twoheadrightarrow]{r} & Gal(\overline{F}/F)\arrow{d}{\rho'}\\ {\rm End}(T_A)&\arrow[equals]{l}{\rm End}(T)\arrow[equals]{r}& {\rm End}(T_{A_F}). \end{tikzcd} \] \end{itemize} The commutativity implies that~$U_F:=\rho(D_F)$ satisfies \[ U_F=\rho'(Gal(\overline{F}/F)) \] and \[ U_F\leq U:=\rho(Gal(\overline{K}/K)). \] By~Th.~\ref{Serre}, after possibly passing to a finite extension of~$F$ and the corresponding finite extension of~$K$, we may assume \[ U_F=\prod_p {(U_F)_p}^0. \] We note that~$(U_F)_p\leq U_p$ and thus~${(U_F)_p}^0\leq {U_p}^0$. We deduce \[ U_F\leq \widetilde{U}:=\prod_p {U_p}^0. \] We will prove the refinement Prop.~\ref{prop Falt refined} of Th.~\ref{Faltings} with \[ M(A,K,d)=M(A_F,F,d). \] Fix~$d$ and an open subgroup~$U'\leq \widetilde{U}$ of index at most~$d$. We denote \[ U'_F=U'\cap U_F. \] We first note that~$U'_F\leq U_F$ is a subgroup of index at most~$d$. We have, as~$\widehat{\mathbb{Z}}$-subalgebras of~$B:=Mat({2\dim(A)},\widehat{{\mathbb Z}})$, \[ \widehat{{\mathbb Z}}[U'_F]\leq \widehat{{\mathbb Z}}[U']. \] From~\eqref{Noot End} we have \[ Z=Z_F:= \{b\in{\rm End}_{\widehat{{\mathbb Z}}}(T_A)|\forall a\in{\rm End}(A_F/F),[b,a]=0\}. \] We use the number field case of the theorem (see~\cite[Th.~1, Cor.~1.1]{MW}) for~$A_F$ and~$d$ and~$U'_F$ and get \[ \left[Z_F:\widehat{{\mathbb Z}}[U'_F]\right]\leq M(A_F,F,d). \] We note that~$\widehat{{\mathbb Z}}[U'_F]\leq Z$ because~$U'_F\leq U$ and~$U$ commutes with the action of~${\rm End}(A)$ (all the endomorphisms are rational over~$K$). Finally \[ \widehat{{\mathbb Z}}[U'_F]\leq \widehat{{\mathbb Z}}[U']\leq Z \] hence \[ \left[Z:\widehat{{\mathbb Z}}[U']\right]\leq \left[Z:\widehat{{\mathbb Z}}[U'_F]\right] = \left[Z_F:\widehat{{\mathbb Z}}[U'_F]\right] \leq M(A_F,F,d). \]\end{proof} \begin{corollary}\label{coro Faltings} Choose an isomorphism~$H^1(A;\mathbb{Z})\simeq \mathbb{Z}^{2\dim(A)}$ and denote by~$\rho:GL(H^1(A;\mathbb{Z}))\to GL(2\dim(A))$ the corresponding isomorphism. There exists~$c(A)$ such that the following holds. The subgroup~$U\leq M(\widehat{\mathbb{Z}})$ satisfies the uniform integral Tate property with respect to~$M$,~$GL(2g)$ and~$\rho$ in the sense of Def.~\ref{defi:Tate}, with~$M(D):=\max\{c(A);M(A,K,D)\}$. The subgroup~$U\leq M(\widehat{\mathbb{Z}})$ satisfies the uniform integral Tate property with respect to~$M$,~$GSp(2g)$ and~$\rho$ in the sense of Def.~\ref{defi:Tate} with~$M(D):=\max\{c(A);M(A,K,D)\}$. \end{corollary} We only treat the case of~$GL(2g)$, as the case of~$GSp(2g)$ follows directly. \begin{proof}Thanks to Lemma~\ref{lem U ast}, we may assume~$U=\prod_p {U_p}^0$. We have then \[ \widehat{\mathbb{Z}}[U]=\prod_p\mathbb{Z}_p[{U_p}^0]. \] We use Prop.~\ref{prop Falt refined}. Then \begin{itemize} \item for every~$p$, the algebra~${\mathbb{Q}}_p[U_p]={\mathbb{Q}}_p[{U_p}^0]$ is the commutant of~${\rm End}(A/K)\otimes\mathbb{Q}_p=Z_{{\rm End}_{\mathbb{Q}_p}(H^1(A;\mathbb{Q}_p))}(M)$. This implies~\eqref{defi:tate eq1}.
Because~${\rm End}(A/K)\otimes\mathbb{Q}_p$ is a semisimple algebra, so is its commutant in~${\rm End}_{\mathbb{Q}_p}(H^1(A;\mathbb{Q}_p))$, and thus the action of~$U_p$ is semisimple. \item for every~$D$, and every~$p\geq M(A,K,D)$, and every~$U'\leq U(p)$ of index at most~$D$, the algebra~${\mathbb{F}}_p[U']$ is the commutant of~${\rm End}(A/K)\otimes\mathbb{F}_p$, and is equal to~${\mathbb{F}}_p[M(\mathbb{F}_p)]$. For~$p\gg 0$, depending only on~${\rm End}(A/K)$ and~$M$, the action of~${\mathbb{F}}_p[M(\mathbb{F}_p)]$ is semisimple.\qedhere \end{itemize} \end{proof} We deduce the following, using the well-known relation between Galois representations on the Tate module, and Galois action on isogeny classes in the Siegel modular variety~$\mathcal{A}_g$ (see~\cite{UY}). \begin{corollary} Let~$s_0$ be a point in~$\mathcal{A}_g=Sh_{GL(2g,\widehat{\mathbb{Z}})}(\mathfrak{H}_g,GSp(2g))$. Then~$s_0$ satisfies Def.~\ref{defi:Tate bis}. \end{corollary} \section{Polynomial Galois bounds}\label{sec:bounds} This section is at the heart of this paper. We obtain suitable lower bounds for Galois orbits of points in generalised Hecke orbits under much weaker assumptions than those made in \cite{RY} (in particular, as seen above, they are satisfied by all Shimura varieties of abelian type). \subsection{Statement} We use the notations~$\succcurlyeq$ and~$\approx$ of~\cite[Def.~6.1]{RY} for polynomial domination and polynomial equivalence of functions. For the definition of~$H_f(\phi)$ we refer to~\cite[App. B]{RY}. For the MT property we refer to~\cite[\S7, Def. 7.1]{RY}, and refer to~\cite[\S 7.4]{RY} for the fact that~\eqref{Galois bound 2} is satisfied for Galois images. \begin{theorem}\label{Galois bounds} Let~$M \leq G$ be connected reductive $\mathbb{Q}$-groups. Let~$U\leq M({\mathbb A}_f)$ be a subgroup satisfying the following. \begin{enumerate} \item \label{Galois bound 1} The image of~$U$ in~$M^{ab}$ is MT in $M^{ab}$. \label{thm:galois bounds H1} \item \label{Galois bound 2} The group~$U$ satisfies the~$\ell$-independence and uniform integral Tate conditions as in Def.~\ref{defi:indep} and~\ref{defi:Tate}.\label{thm:galois bounds H2} \item \label{Galois bound 3} For every prime~$p$, $U_p$ is Zariski connected.\label{thm:galois bounds H3} \end{enumerate} Denote by~$\phi_0:M\to G$ the identity homomorphism and~$W=G\cdot \phi_0$ its conjugacy class. Then, as~$\phi$ varies in~$W(\mathbb{A}_f)$, we have, for any compact open subgroup~$K\leq G(\mathbb{A}_f)$, with~$K_M:=K\cap M(\mathbb{A}_f)$, \[ [\phi(U):\phi(U)\cap K]\approx [\phi(K_M):\phi(K_M)\cap K]\succcurlyeq H_f(\phi) \] as functions~$W(\mathbb{A}_f)\to \mathbb{Z}_{\geq1}$. \end{theorem} \subsubsection{Reduction to a local problem} From~\cite[Th.~B.1]{RY} we already have \[ [\phi(K_M):\phi(K_M)\cap K]\succcurlyeq H_f(\phi), \] and because~$U\leq K_M$, we have \[ [\phi(U):\phi(U)\cap K]\leq [\phi(K_M):\phi(K_M)\cap K]. \] Thus it will be enough to prove \begin{equation}\label{to prove} [\phi(U):\phi(U)\cap K]\succcurlyeq [\phi(M(\widehat{\mathbb{Z}})):\phi(M(\widehat{\mathbb{Z}}))\cap K]. \end{equation} Since~$K$ and~$G(\widehat{\mathbb{Z}})$ are commensurable, we may replace~$K$ by~$G(\widehat{\mathbb{Z}})$. In view of Remark~\ref{rem:Tate}~\eqref{passage aux Up}, we may replace~$U$ by~$\prod_pU_p$, which is smaller.
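\subsubsection{Remark} The replacement of~$K$ by~$G(\widehat{\mathbb{Z}})$ only changes the indices involved by a bounded factor; the following elementary estimate, recorded here only for convenience, makes this precise. Set~$c:=\max\{[K:K\cap G(\widehat{\mathbb{Z}})];[G(\widehat{\mathbb{Z}}):K\cap G(\widehat{\mathbb{Z}})]\}$, which is finite by commensurability. Then, for every subgroup~$V\leq G(\mathbb{A}_f)$, \[ \frac{1}{c}\cdot[V:V\cap G(\widehat{\mathbb{Z}})]\leq [V:V\cap K]\leq c\cdot[V:V\cap G(\widehat{\mathbb{Z}})]. \] Applied with~$V=\phi(U)$ and~$V=\phi(M(\widehat{\mathbb{Z}}))$, this shows that both sides of~\eqref{to prove} are unchanged up to the equivalence~$\approx$.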
Then the required inequality~\eqref{to prove} can be rewritten in the product form \[ \prod_p[\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\succcurlyeq \prod_p[\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)] \] and thus the problem can be studied prime by prime. More precisely, it will be enough to prove \begin{itemize} \item that there exists~$c\in\mathbb{R}_{>0}$ such that, for almost all primes~$p$, \begin{multline}\label{precise-bound} \forall \phi\in W(\mathbb{Q}_p), [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq\\ [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]^{c}. \end{multline} \item and, for the finitely many remaining primes, that we have the polynomial domination, as functions~$W(\mathbb{Q}_p)\to \mathbb{R}_{\geq 0}$, \[ [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\succcurlyeq [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]. \] Namely, that there exist~$a(p),c(p)\in\mathbb{R}_{>0}$ such that \begin{multline}\label{imprecise-bound} \forall \phi\in W(\mathbb{Q}_p), [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq\\ a(p)\cdot [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]^{c(p)}. \end{multline} \end{itemize} By the argument from~\cite[Proof of Cor.~B.2]{RY}, it will be sufficient, instead of~\eqref{precise-bound}, to prove: there exist~$a,c\in\mathbb{R}_{>0}$ such that, for almost all primes~$p$, \begin{multline}\label{precise with a} \forall \phi\in W(\mathbb{Q}_p), [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq\\ a\cdot [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]^{c}. \end{multline} This is the following statement. \begin{theorem}[Local Galois bounds]\label{thm:local Galois} In the setting of Th.~\ref{Galois bounds}, there exist~$a,c\in{\mathbb R}_{>0}$, and for each~$p$, there exists~$b(p)\in{\mathbb R}_{>0}$ such that \[ [\phi(U_p):\phi(U_p)\cap K_p]\geq b(p)\cdot [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap K_p]^{c} \] and such that~$b(p)\geq a$ for almost all~$p$. \end{theorem} We prove~\eqref{imprecise-bound} in~\S\ref{every prime}. It is deduced from the functoriality of heights. We prove~\eqref{precise-bound}, for almost all primes, in~\S\ref{almost all primes}. It requires new tools developed in this article. For reference, we rephrase ``The image of~$U$ in~$M^{ab}$ is MT in $M^{ab}$'' as follows. We denote by~$ab_M:M\to M^{ab}:=M/M^{der}$ the abelianisation map. Then there exists~$C_{MT}\in \mathbb{Z}_{\geq 1}$ such that \begin{equation}\label{defi CMT} \forall p, [M^{ab}(\mathbb{Z}_p):ab_M(U_p)]\leq C_{MT}. \end{equation} Because~$\exp(p\mathfrak{m}^{ab}_{\mathbb{Z}_p})$ is a pro-$p$-group, its image in~$M^{ab}(\mathbb{Z}_p)/ab_M(U_p)$ is trivial when~$p>C_{MT}$: we have \begin{equation}\label{defi CMT 2} \forall p>C_{MT},\exp(p\mathfrak{m}^{ab}_{\mathbb{Z}_p}) \leq ab_M(U_p). \end{equation} \subsection{For every prime}\label{every prime} We fix a prime~$p$. Let~$f_1,\ldots,f_{k}$ be a basis of the~$\mathbb{Q}_p$-Lie algebra~$\mathfrak{u}_p$ of~$U_p$. Replacing each~$f_i$ by a sufficiently small scalar multiple, we may assume that each~$u_i=\exp(f_i)$ converges and belongs to~$U_p$. By~\eqref{Galois bound 2} of~Th.~\ref{Galois bounds} and~\eqref{defi:tate eq1} of~Def.~\ref{defi:Tate}, we have \[ Z_{G_{\mathbb{Q}_p}}({U_p})=Z_{G_{\mathbb{Q}_p}}({U_p}^0)=Z_{G_{\mathbb{Q}_p}}(\mathfrak{u}_p)=Z_{G_{\mathbb{Q}_p}}(\{f_1,\ldots,f_k\}). \] We define \[ v=(f_1,\ldots,f_k)\in \mathfrak{g}^k\stackrel{d\rho}{\hookrightarrow} E:={\rm End}(V_{\mathbb{Q}_p})^k.
\] For the induced action of~$G$ on~$E$ we have \[ Z_{G_{\mathbb{Q}_p}}(U_p)=Stab_{G_{\mathbb{Q}_p}}(v). \] By our assumption, \[ Z_{G_{\mathbb{Q}_p}}(U_p)=Stab_{G_{\mathbb{Q}_p}}(\phi_0)=Z_G(M)_{\mathbb{Q}_p}. \] As a consequence, we have a well-defined isomorphism of homogeneous varieties~$W\to G\cdot v$ over~$\mathbb{Q}_p$, given by \[ g\cdot Z_G(M)_{\mathbb{Q}_p}\mapsto g\cdot v. \] From~\eqref{defi:Tate1}, the Zariski closure~$\overline{U_p}^{Zar}$ is reductive. We may thus apply~\cite{Ri-Conj}, and deduce that the induced map \[ \iota:W\to E \] is a closed affine embedding. We use the standard norm on~$E\simeq {\mathbb{Q}_p}^{\dim(V)^2\cdot k}$. We denote by~$H_\iota$ the local Weil height associated to this embedding, which is given by \begin{multline} H_\iota:\phi\mapsto H_p(g\cdot v):=\max\{1;\norm{g\cdot v}\}\\=\max\{1;\norm{g\cdot f_1};\ldots;\norm{g\cdot f_k}\}. \end{multline} By functoriality properties of height functions, the function~$H_\iota$ and~$\phi\mapsto H_p(\phi)$ are polynomially equivalent. In particular, there are~$a(p)$ and~$c(p)$ such that \[ H_\iota\geq a(p)\cdot H_p(\phi)^{c(p)}. \] We denote by~$U'\leq U_p$ the~$p$-adic Lie subgroup generated by \[\{\exp(f_1);\ldots;\exp(f_k)\}.\] We have \[ [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq [\phi(U'):\phi(U')\cap G(\mathbb{Z}_p)]. \] Using~\cite[Th.~A3]{RY}, we also have \[ [\phi(U'): \phi(U')\cap K_p]\geq H_\iota(\phi). \] \subsubsection{Remark} Note that the above bound already implies the Andr\'e-Pink-Zannier conjecture for $S$-Hecke orbits. This is more general than the result of Orr (unpublished) for Shimura varieties of abelian type and less precise than \cite{RY2} which proves a strong topological form under a weaker hypothesis. The method of Orr relies on Masser-W\"ustholz bounds, and~\cite{RY2} relies ultimately on $S$-adic Ratner theorems through the work of~\cite{RZ}. \subsection{For almost all primes}\label{almost all primes} \subsubsection{Construction of tuples} We denote by \[ Y\mapsto \overline{Y}:\mathfrak{m}_{\mathbb{Z}_p}\to\mathfrak{m}_{\mathbb{F}_p}\text{ and }\pi_p: U_p\to M_{\mathbb{F}_p}(\mathbb{F}_p) \] the reduction modulo~$p$ maps, and define \[ U(p):=\pi_p(U_p) \] the image of~$U_p$ in~$M_{\mathbb{F}_p}(\mathbb{F}_p)$. We denote the subgroup of~$U(p)$ generated by its unipotent elements by \begin{equation}\label{defi daggers} U(p)^\dagger\text{ and by }U_p^\dagger:={\pi_p}^{-1}(U(p)^\dagger) \end{equation} its inverse image in~$U_p$. We define \begin{equation}\label{defi nu} \nu=\mathfrak{m}^{der}_{\mathbb{Z}_p}+p\cdot \mathfrak{z(m)}_{\mathbb{Z}_p}. \end{equation} \begin{proposition}\label{prop X Y} We consider the setting of Theorems~\ref{thm:local Galois} and~\ref{Galois bounds}. For almost all~$p$, there exist \[ X_1,\ldots,X_k,Y_1,\ldots,Y_l\in\mathfrak{m}_{{\mathbb Z}_p} \] satisfying the following: \begin{enumerate} \item \label{XY1} The exponentials~$\exp(X_1),\ldots,\exp(X_k)$ converge and topologically generate~$U_p^\dagger$. \item \label{XY2} We have \[ Y_1,\ldots,Y_l\in \mathfrak{u}:=\mathbb{Z}_p\cdot X_1+\ldots+\mathbb{Z}_p\cdot X_k. \] \item \label{XY3} We have \[ \frac{1}{p}\cdot Y_1\cdot {\mathbb Z}_p+\ldots+\frac{1}{p}\cdot Y_l\cdot {\mathbb Z}_p = \mathfrak{z(m)}_{\mathbb{Z}_p}\pmod{p\cdot\mathfrak{m}_{\mathbb{Z}_p}}. \] \item \label{XY4} We have \begin{equation}\label{PropNori:Centalisateurs} Z_{G_{{\mathbb F}_p}}(M_{{\mathbb F}_p})= Z_{G_{{\mathbb F}_p}}\left(\left\{\pi_p(X_1),\ldots,\pi_p(X_k),\overline{\frac{1}{p}Y_1},\ldots,\overline{\frac{1}{p}Y_l}\right\}\right).
\end{equation} \end{enumerate} \end{proposition} \begin{proof} Let~$u_1,\ldots,u_i$ be unipotent generators of~$U(p)^\dagger$. Because~$U(p)^\dagger\leq U(p)=\pi_p(U_p)$, we may write \[ u_1=\pi_p(x_1),\ldots,u_i=\pi_p(x_i) \] with~$x_1,\ldots,x_i\in U_p$. By definition of~$U_p^\dagger$, we have~$x_1,\ldots,x_i\in U^\dagger_p$. The compact group~$\ker(\pi_p)\leq U_p\leq M(\mathbb{Z}_p)$ is topologically of finite type. We choose~$x_{i+1},\ldots,x_k$ a topologically generating family of~$\ker(\pi_p)$. By construction \begin{equation}\label{defi xs} x_1,\ldots,x_k\text{ topologically generate~$U_p^\dagger$.} \end{equation} Moreover, the~$\pi_p(x_i)$ are unipotent. By~\cite[Prop.~A.1]{RY}, the series~$X_1=\log(x_1),\ldots,X_k=\log(x_k)$ converge, and, for~$p>d$, we have~$X_1,\ldots,X_k\in \mathfrak{m}_{\mathbb{Z}_p}$. By~\cite[Lem. A.2]{RY} we have \begin{equation}\label{defi X} x_1=\exp(X_1),\ldots,x_k=\exp(X_k). \end{equation} We define \begin{equation}\label{defi u} \mathfrak{u}=\mathbb{Z}_p\cdot X_1+\ldots+\mathbb{Z}_p\cdot X_k\qquad u=(X_1,\ldots,X_k). \end{equation} For~$X\in \mathfrak{m}_{\mathbb{Z}_p}\smallsetminus\nu$, its reduction in~$\mathfrak{m}_{\mathbb{F}_p}$ is not nilpotent. Thus, by~\cite[Prop.~A.1]{RY}, the series~$\exp(X)$ does not converge. Consequently we have \[ X_1,\ldots,X_k\in\nu\text{ and thus }\mathfrak{u}\leq \nu. \] We define \[ \pi_\nu:\nu\to \overline{\nu}:=\nu\otimes\mathbb{F}_p, \] and denote the image of~$\mathfrak{u}$ by \[ \overline{\mathfrak{u}}\leq \overline{\nu}. \] From~\eqref{defi nu}, we have \begin{equation}\label{defi nubar} \overline{\nu}=\mathfrak{m}^{der}~(\bmod~p\nu)+p\cdot \mathfrak{z(m)}~(\bmod~{p\nu}). \end{equation} We notice that \begin{equation}\label{same rep} \mathfrak{m}_{\mathbb{F}_p}=\mathfrak{m}^{der}_{\mathbb{F}_p}+\mathfrak{z(m)}_{\mathbb{F}_p}\text{ and }\overline{\nu}\simeq \mathfrak{m}^{der}_{\mathbb{F}_p}+\frac{p\cdot \mathfrak{z(m)}}{p^2\cdot\mathfrak{z(m)}} \end{equation} are isomorphic $\mathbb{F}_p$-linear representations of~$M(\mathbb{Z}_p)$ and~$M(\mathbb{F}_p)$, and thus as representations of~$U_p$ and~$U(p)$ as well. We consider \[ ab_{\mathfrak{m}}:\mathfrak{m}\to \mathfrak{m}^{ab}:=\mathfrak{m}/\mathfrak{m}^{der}. \] Let us prove the claim \begin{equation}\label{reciprocite p p2} p\cdot\mathfrak{m}^{ab}\leq ab_{\mathfrak{m}}(\mathfrak{u}). \end{equation} \begin{proof} Let~$Z\in p\cdot\mathfrak{m}^{ab}$. Let~$z=\exp(Z)\in M^{ab}(\mathbb{Z}_p)$. From~\eqref{defi CMT 2}, when~$p>C_{MT}$, there exists~$y\in U_p$ with~${\rm ab}_M(y)=z$. Assume~$p\gg0$, so that the algebraic tori~$Z(M)_{\mathbb{Z}_p}$ and~$M^{ab}$ have good reduction, and assume furthermore~$p>\#\ker(Z(M)\to M^{ab})$ so that the differential of the isogeny~$Z(M)\to M^{ab}$ induces a $\mathbb{Z}_p$-isomorphism~$\mathfrak{z(m)}\to\mathfrak{m}^{ab}$. Thus, there exists~$Z'\in p\cdot\mathfrak{z(m)}$ with~$ab_{\mathfrak{m}}(Z')=Z$. Let~$z'=\exp(Z')$. We have~$\overline{z'}\in Z(M)(\mathbb{F}_p)^\dagger$ and, because~$Z(M)_{\mathbb{Z}_p}$ has good reduction, we have~$Z(M)(\mathbb{F}_p)^\dagger=\{1\}$. We also have~$y\in M^{der}(\mathbb{Z}_p)\cdot z'$. Thus \[ \overline{y}\in U(p)\cap M^{der}(\mathbb{F}_p). \] Let~$\gamma=c(\dim(G))$ be as in Lem.~\ref{lem:4.1}. Then~$\overline{y}^{\gamma}\in U(p)^\dagger$, and thus~$y^{\gamma}\in U_p^\dagger$. Assume~$p>\gamma$, so that~$\gamma\in\mathbb{Z}_p^\times$.
Because~$Z$ is arbitrary, we have \[ ab_M(U_p^\dagger)\geq\exp(p\cdot\mathfrak{m}_{\mathbb{Z}_p}^{ab})^\gamma=\exp(\gamma\cdot p\cdot\mathfrak{m}_{\mathbb{Z}_p}^{ab}) = \exp(p\cdot\mathfrak{m}^{ab}). \] Conversely~$ab_M(U_p^\dagger)\leq \exp(p\cdot\mathfrak{m}_{\mathbb{Z}_p}^{ab})=\ker M^{ab}(\mathbb{Z}_p)\to M^{ab}(\mathbb{F}_p)$ because~$ab_{M_{\mathbb{F}_p}}(U(p)^\dagger)\leq ab_{M_{\mathbb{F}_p}}(M^{der}(\mathbb{F}_p)^\dagger)=\{1\}$. The group~$U_p^\dagger$ is topologically generated by~$\exp(X_1),\ldots,\exp(X_k)$, and thus $ab_M(U_p^\dagger)$ is topologically generated by~$\exp(Z_1),\ldots,\exp(Z_k)$ with~$Z_i=ab_{\mathfrak{m}}(X_i)$. Thus the logarithms \[ Z_i=\log(z_i)=ab_{\mathfrak{m}}(X_i) \] topologically generate~$\log(\exp(p\cdot\mathfrak{m}^{ab}))= p\cdot\mathfrak{m}^{ab}$. The conclusion follows. \end{proof} We let \begin{equation}\label{defi Zs} Z_1,\ldots,Z_l\text{ be a basis of } \frac{p\mathfrak{m}^{ab}}{p^2\mathfrak{m}^{ab}}\simeq \nu/\mathfrak{m}^{der}_{\mathbb{F}_p} \end{equation} Pick an arbitrary~$Z\in\{Z_1;\ldots;Z_l\}$, and define \[ A=\{\overline{Y}\in\overline{\mathfrak{u}}| \overline{Y}\equiv Z\pmod{\mathfrak{m}^{der}_{\mathbb{F}_p}}\}. \] From~\eqref{reciprocite p p2}, this~$A$ is non empty. It is thus an affine subspace of~$\nu$, and, for any~$\overline{Y}_0\in A$, we have \[ A=\overline{Y}_0+V\text{ where }V=\overline{\mathfrak{u}}\cap \mathfrak{m}^{der}_{\mathbb{F}_p} \] is the ``direction'' of~$A$. The $\mathbb{F}_p$-linear vector subspace~$V\leq \overline{\nu}$ is invariant under~$U(p)$, and because the action of~$U(p)$ is semisimple on~$\mathfrak{m}_{\mathbb{F}_p}$, and thus, by~\eqref{same rep}, on~$\overline{\nu}$, there exists a supplementary $U(p)$-invariant $\mathbb{F}_p$-linear subspace \[ W\leq \overline{\nu}. \] The following intersection is an affine space of dimension~$0$, hence it is a singleton \[ A\cap W=\{\overline{Y}\}. \] It is also invariant under~$U(p)$. Thus the line \[ \mathbb{F}_p\cdot \overline{Y} \] is fixed by~$U(p)$. But the centraliser of~$U(p)$ and~$M_{\mathbb{F}_p}$ in~$M_{\mathbb{F}_p}$ are the same. For~$p\gg0$, these centralisers are smooth as group schemes (cf. Lem.~\ref{conj orbit lemma}), and thus have the same Lie algebra \[ \mathfrak{z}_{\mathfrak{m}_{\mathbb{F}_p}}(U(p)) = \mathfrak{z(m)}_{\mathbb{F}_p}. \] Thus \begin{equation}\label{Ybar in Z} \overline{Y}\in{p\cdot \mathfrak{z}(M)}~(\bmod~p\nu). \end{equation} We finally choose a representative~$Y\in \mathfrak{u}$ of~$\overline{Y}\in\overline{\mathfrak{u}}$. Thus~$Y\in p\cdot \mathfrak{z(m)}+p\nu=p\mathfrak{m}_{\mathbb{Z}_p}$ and~$Y\in\mathfrak{u}$. We define \[ \widetilde{\mathfrak{m}}:=\frac{p\mathfrak{m}_{\mathbb{Z}_p}}{p^2\mathfrak{m}_{\mathbb{Z}_p}}=(p\mathfrak{m}_{\mathbb{Z}_p})\otimes \mathbb{F}_p, \] and denote the image of~$Y\in \mathfrak{u}\cap p\mathfrak{m}\leq p\mathfrak{m}$ by \[ \widetilde{Y}\in\widetilde{\mathfrak{u}}\leq \widetilde{\mathfrak{m}}. \] Again~$\widetilde{\mathfrak{m}}\simeq\mathfrak{m}_{\mathbb{F}_p}$ as a representation. We define \[ A=\{\overline{Y}\in\widetilde{\mathfrak{u}}| \overline{Y}\equiv \widetilde{Y}\pmod{\widetilde{\mathfrak{m}^{der}}}\}. \] and similarly, there exists~$\overline{Y}\in A$ which is fixed by~$U(p)$ and thus is in~$\widetilde{\mathfrak{z(m)}}$. We choose a lift~$Y$ of~$\widetilde{Y}$ in~$\mathfrak{u}$. Repeating the process for each~$Z\in\{Z_1;\ldots;Z_l\}$ we define \begin{equation}\label{Ys in u} Y_1,\ldots,Y_l\in\mathfrak{u}. 
\end{equation} The assertion~\ref{XY1} follows from~\eqref{defi xs}, \eqref{defi X}, \eqref{defi u}. The assertion~\ref{XY2} follows from~\eqref{defi u}, \eqref{Ys in u}. The assertion~\ref{XY3} follows from~\eqref{Ybar in Z}, \eqref{defi Zs}, \eqref{defi nu}, \eqref{defi nubar}. We will now prove the assertion~\ref{XY4}. We define \[ Z:=Z_{G_{{\mathbb F}_p}}\left(\left\{\pi_p(X_1),\ldots,\pi_p(X_k),\overline{\frac{1}{p}Y_1},\ldots,\overline{\frac{1}{p}Y_l}\right\}\right) \] and \[ U':=(U(p)\cap Z(M)^0_{\mathbb{F}_p}(\mathbb{F}_p)) \text{ and } U'':=U(p)^\dagger\cdot U'. \] We first note that~$\pi_p(X_1),\ldots,\pi_p(X_k)$ generate the group~$U(p)^\dagger$ and that~$\overline{Y_1/p},\ldots,\overline{Y_l/p}$ generate the Lie algebra~$\mathfrak{z(m)}_{\mathbb{F}_p}$. Thus \begin{equation} Z=Z_{G_{{\mathbb F}_p}}(U(p)^\dagger)\cap Z_{G_{{\mathbb F}_p}}(\mathfrak{z(m)}_{\mathbb{F}_p}). \end{equation} We have\footnote{We use that~$Z(M_{\mathbb{F}_p})$ is connected and, for~$p\gg 0$, smooth as a group scheme.} \[ Z_{G_{{\mathbb F}_p}}(\mathfrak{z(m)}_{\mathbb{F}_p})=Z_{G_{{\mathbb F}_p}}(Z(M)^0). \] Applying Cor.~\ref{coro big dans Z} with~$\delta=C_{MT}$ from~\eqref{defi CMT}, we have, with~$U':=(U(p)\cap Z(M)^0_{\mathbb{F}_p}(\mathbb{F}_p))$, \[ [U(p): U(p)^\dagger\cdot U']\leq D:=C_{MT}\cdot \gamma(\dim(G)). \] For~$p>M(D)$, with~$M(D)$ as in Def.~\ref{defi:Tate}, we have \[ Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p})= Z_{G_{\mathbb{F}_p}}(U(p)^\dagger\cdot U') = Z_{G_{{\mathbb F}_p}}(U(p)^\dagger)\cap Z_{G_{{\mathbb F}_p}}(U'). \] We may thus apply Lemma~\ref{Lemma bounded and centraliser}, and deduce \[ \forall p\gg 0, Z_{G_{\mathbb{F}_p}}(U')=Z_{G_{\mathbb{F}_p}}(Z(M)^0_{\mathbb{F}_p}). \] From~$U'':=U(p)^\dagger\cdot U'\leq U(p)^\dagger\cdot Z(M)^0_{\mathbb{F}_p}\leq M_{\mathbb{F}_p}$ we get \[ Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p}) = Z_{G_{\mathbb{F}_p}}(U'')\leq Z_{G_{\mathbb{F}_p}}(U(p)^\dagger)\cap Z_{G_{\mathbb{F}_p}}(Z(M)^0_{\mathbb{F}_p}) \leq Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p}). \] Finally \[ Z=Z_{G_{\mathbb{F}_p}}(U(p)^\dagger)\cap Z_{G_{\mathbb{F}_p}}(Z(M)^0_{\mathbb{F}_p})= Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p}).\qedhere \] \end{proof} \subsubsection{Conjugacy classes of tuples} The following will be used to check, for almost all primes, one of the hypotheses of Th.~\ref{thm:compare reductive}. \begin{lemma}\label{conj orbit lemma} Let~$p$ be a prime, let~$G\leq GL(n)_{\mathbb{F}_p}$ be a reductive algebraic subgroup, and consider~$v_1,\ldots,v_k\in G(\mathbb{F}_p)$. Denote by~$U$ the group generated by~$\{v_1;\ldots;v_k\}$, and define~$v=(v_1,\ldots,v_k)$. Assume that \begin{equation}\label{U ss g} \text{the action of~$U$ on~$\mathfrak{g}_{\mathbb{F}_p}$ is semisimple.} \end{equation} If~$p>2\cdot \dim(G)$ then the simultaneous conjugacy class~$G\cdot v$ is Zariski closed in~$G^k$. If~$p>c_3(\dim(G))$ then the centraliser of~$v$ in~$G$, as a group scheme over~$\mathbb{F}_p$, is smooth. \end{lemma} The quantity~$c_3$ is from~\cite[\S4. Th.~E]{N}. \begin{proof} From~\cite[\S5.1]{SCR}, we have~$h(G)\leq \dim(G)$. From~\cite[Cor. 5.5]{SCR}, if~$p>2h(G)-2$, the assumption~\eqref{U ss g} implies that~$U$ is~$G$-cr, or ``strongly reductive'' in~$G$ in the sense of Richardson. The first assertion follows from~\cite[Th.~3.7]{SCR} (cf. \cite[\S16]{Ri-Conj}). Thanks to~\eqref{U ss g} and the condition~$p>c_3(\dim(G))$ we may apply~\cite[\S4. Th.~E]{N} (cf. also~\cite[137. p.\,40]{S4}). Thus the hypothesis of~\cite[II, \S5.2, 2.8, p.\,240]{DG} is satisfied and we conclude. (cf.
also~\cite{BMR10} and~\cite{H13} on the subject, beyond the semi-simplicity assumption.) \end{proof} \setcounter{secnumdepth}{3} \subsubsection{Consequences for height bounds} We denote by~$\norm{~}:\mathfrak{g}_{\mathbb{Q}_p}\to\mathbb{R}_{\geq0}$ the~$p$-adic norm associated to the~$\mathbb{Z}_p$-structure~$\mathfrak{g}_{\mathbb{Z}_p}$. We denote~$\norm{\Sigma}=\max\{\norm{s}:s\in \Sigma\}$ for a bounded subset~$\Sigma\subseteq \mathfrak{g}_{\mathbb{Q}_p}$. We recall that~$H_f(\phi)=\prod_p H_p(\phi)$ with~$H_p(\phi)$ given for instance by \[ H_p(\phi)=\max\{1;\norm{\phi(\mathfrak{m}_{{\mathbb Z}_p})}\}. \] We also define \[ H'(\phi)=\max\{1;\norm{\phi(\nu)}\},\qquad H''(\phi)=\max\{1;\norm{\phi(\mathfrak{m}^{der}_{\mathbb{Z}_p})}\}. \] We then have~$p\cdot \mathfrak{m}_{\mathbb{Z}_p}\leq \nu\leq \mathfrak{m}_{\mathbb{Z}_p}$ and thus \[ \frac{1}{p}\cdot H'(\phi)\leq H_p(\phi)\leq H'(\phi). \] Using the tools of~\cite{RZ} we deduce the following. \begin{proposition} Define \[ v=(X_1,\ldots,X_k,Y_1,\ldots,Y_l)\text{ and } v'=(X_1,\ldots,X_k,\frac{1}{p}Y_1,\ldots,\frac{1}{p}Y_l) \] and \[ H_v(\phi)= \max\{1;\norm{g\cdot v}\}=H_p(g\cdot v)\text{ and }H_{v'}(\phi)=H_p(g\cdot v'). \] Then we have \begin{equation}\label{galois exp bound} [\phi(U):\phi(U)\cap G({\mathbb Z}_p)]\geq H_v(\phi)\geq H_{v'}(\phi)/p \end{equation} and \begin{equation}\label{eq} H_{v'}(\phi)\geq H_p(\phi)^{c(\rho)}. \end{equation} \end{proposition} \begin{proof} The inequality \[ [\phi(U):\phi(U)\cap G({\mathbb Z}_p)]\geq H_v(\phi)= \max\{1;\norm{g\cdot v}\}=H_p(g\cdot v) \] follows from the Lemma of the exponential and the subgroup principle (\cite[Th. A.3, \S B.0.1]{RY}). The inequality~$H_v(\phi)\geq H_{v'}(\phi)/p$ follows from the definitions. We prove~\eqref{eq}. Let~$m_1,\ldots,m_d$ be a generating set for~$\mathfrak{m}_{{\mathbb Z}_p}$ and define~$w=(m_1,\ldots,m_d)$. We recall that by construction, we have \[ H_p(\phi)=\max\{1;\norm{g\cdot m_1},\ldots,\norm{g\cdot m_d}\}. \] Using~\eqref{PropNori:Centalisateurs} and Lem.~\ref{conj orbit lemma} for~$v'$, we may apply Theorem~\ref{thm:compare reductive}, and we deduce \[ H_p(g\cdot v')\geq H_p(\phi)^{c(\rho)}, \] where the exponent~$c(\rho)$ depends only on the set of weights~$\Sigma(\rho)$ and does not depend on~$p$.\qedhere \end{proof} \begin{corollary}In particular, if~$H_{v'}(\phi)\notin\{1;p\}$ we have \[ [\phi(U):\phi(U)\cap G({\mathbb Z}_p)]\geq H_p(\phi)^{c(\rho)/2}. \] \end{corollary} \begin{proof} We recall that, because~$H_{v'}(\phi)\in p^{{\mathbb Z}}$, we have~$H_{v'}(\phi)\geq p^2$ as soon as~$H_{v'}(\phi)\notin\{1;p\}$. It follows \[ H_v(\phi)\geq H_{v'}(\phi)/p\geq H_{v'}(\phi)^{1/2}\geq H_p(\phi)^{c(\rho)/2}.\qedhere \] \end{proof} In proving Th.~\ref{thm:local Galois} we may now assume that~$H_{v'}(\phi)\leq p$. We define \[ H_X(\phi)=\max\{1;\norm{\phi(X_1)};\ldots;\norm{\phi(X_k)}\} \] and \[ H_Y(\phi)=\max\{1/p;\norm{\phi(Y_1)};\ldots;\norm{\phi(Y_l)}\}, \] so that \[ H_{v'}(\phi)=\max\{H_X(\phi);p\cdot H_Y(\phi)\} \] and \[ H_{v}(\phi)=\max\{H_X(\phi); H_Y(\phi)\}. \] If~$H_X(\phi)=p$, we have, by~\eqref{galois exp bound}, \[ [\phi(U):\phi(U)\cap G({\mathbb Z}_p)]\geq H_v(\phi)=H_X(\phi)= H_{v'}(\phi)\geq H_p(\phi)^{c(\rho)}. \] We now assume~$H_X(\phi)=1$. We have~$p\cdot H_Y(\phi)=H_{v'}(\phi)\in\{1;p\}$. \subsubsection{The case~$p\cdot H_Y(\phi)=H_X(\phi)=1$} In this case we have~$H_{v'}(\phi)=1$, and by~\eqref{eq}, \[ H_p(\phi)=1. \] Obviously \[ [\phi(U_p):\phi(U_p)\cap G({\mathbb Z}_p)]\geq H_p(\phi).
\] \subsubsection{The case~$p\cdot H_Y(\phi)=H_{v'}(\phi)=p$} \label{last case} From~\eqref{XY3} of Prop.~\ref{prop X Y}, for every~$Y_i$, there exists~$Z_i\in \mathfrak{z(m)}_{\mathbb{Z}_p}$ such that \[ Y_i\equiv Z_i\pmod{p\cdot\mathfrak{m}_{\mathbb{Z}_p}}. \] Define~$v''=(X_1,\ldots,X_k,Z_1,\ldots,Z_l)$. Then the reductions modulo~$p$ are equal \[ \overline{v'}=\overline{v''}\text{ in }{\mathfrak{m}_{\mathbb{F}_p}}^{k+l}\leq{\mathfrak{g}_{\mathbb{F}_p}}^{k+l}. \] Thus \begin{itemize} \item The orbit~$G_{\mathbb{F}_p}\cdot \overline{v'}$ is equal to~$G_{\mathbb{F}_p}\cdot \overline{v''}$ and is closed; \item and~$Stab_{G_{\mathbb{F}_p}}(\overline{v'})=Stab_{G_{\mathbb{F}_p}}(\overline{v''})=Z_{G_{{\mathbb F}_p}}(M_{{\mathbb F}_p})$ (cf.~\eqref{XY4} of Prop.~\ref{prop X Y}). \end{itemize} Applying Th.~\ref{pKN}, we have \[ \phi(v')\in{\mathbb{Z}_p}^{k+l}\text{ if and only if } \phi(v'')\in{\mathbb{Z}_p}^{k+l}. \] We have \[ H_{v''}=\max\{H_X;H_Z\}\text{ with }H_Z(\phi):=\max\{1;\norm{\phi(Z_1)};\ldots;\norm{\phi(Z_l)}\}. \] Because~$H_X(\phi)=1$ and~$H_{v'}(\phi)=p\neq 1$ in the case of~\S\ref{last case}, we have~$\phi(v')\notin{\mathbb{Z}_p}^{k+l}$, hence~$\phi(v'')\notin{\mathbb{Z}_p}^{k+l}$, and therefore \[ H_Z(\phi)\neq 1. \] For~$p$ big enough, we can apply~\cite[4.3.9]{EdYa} with the torus~$Z(M)^0$: we have, for some~$c\in\mathbb{R}_{>0}$ that does not depend on~$p$, \[ [\phi(Z(M)(\mathbb{Z}_p)):\phi(Z(M)(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]\geq p/c. \] Using Prop.~\ref{prop4.7} we deduce \[ [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq p/(\gamma(n)\cdot c)=H_{v'}(\phi)/(\gamma(n)\cdot c). \] Using~\eqref{eq} we conclude \[ [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq H_{p}(\phi)^{c(\rho)}/(\gamma(n)\cdot c). \] This proves~\eqref{precise with a} with~$c=c(\rho)$ and~$a=1/(\gamma(n)\cdot c)$. We have proven Th.~\ref{thm:local Galois} and Th.~\ref{Galois bounds}. \subsection{Some Structure Lemmas} We consider the situation of Theorem~\ref{thm:local Galois}. We identify~$G$ with its image under a faithful representation in~$GL(n)$ such that~$G(\mathbb{Z}_p)=GL(n,\mathbb{Z}_p)\cap G$, and we denote by~$U(p)$ the image of~$U_p$ in~$G(\mathbb{F}_p)\leq GL(n,\mathbb{F}_p)$. We denote by~$\overline{M}=M_{\mathbb{F}_p}$ the~$\mathbb{F}_p$-algebraic group from the model of~$M$ over~$\mathbb{Z}_p$ induced by~$M\leq GL(n)$, and we denote by~$Z(M_{\mathbb{F}_p})$ the centre of~$M_{\mathbb{F}_p}$. In Lem.~\ref{lem:4.1}, Prop.~\ref{prop4.7} and Cor.~\ref{coro big dans Z}, the quantities depending on~$n$ also depend implicitly on the function~$D\mapsto M(D)$ in~\ref{defi:Tate2} of~Def.~\ref{defi:Tate}. \begin{proposition}\label{prop4.7} There exists~$\gamma(n)$ such that \[ [Z(M_{{\mathbb F}_p})({\mathbb F}_p):Z(M_{{\mathbb F}_p})({\mathbb F}_p)\cap U(p)]\leq \gamma(n)\cdot [M_{{\mathbb F}_p}^{ab}({\mathbb F}_p):ab_{M_{{\mathbb F}_p}}(U(p))]. \] \end{proposition} By Hypothesis~\ref{thm:galois bounds H1} of Th.~\ref{Galois bounds} we may use~\eqref{defi CMT} and deduce the following. \begin{corollary}\label{coro big dans Z} With~$C_{MT}$ as in~\eqref{defi CMT}, we have \[ [Z(M_{{\mathbb F}_p})({\mathbb F}_p):Z(M_{{\mathbb F}_p})({\mathbb F}_p)\cap U(p)]\leq \gamma(n)\cdot C_{MT}. \] \end{corollary} We prove Proposition~\ref{prop4.7}. \begin{proof}We define~$\overline{M}:=M_{\mathbb{F}_p}$. We consider the isogeny \[ (ad_{\overline{M}},ab_{\overline{M}}):\overline{M}\to \overline{M}^{ad}\times \overline{M}^{ab}. \] We write~$U=U(p)$ and denote by~$\widetilde{U}$ its image in~$\overline{M}^{ad}({\mathbb F}_p)\times \overline{M}^{ab}({\mathbb F}_p)$.
We denote by~$\widetilde{U}_1$ and~$\widetilde{U}_2$ its images under the projections to the two factors. From Lemma~\ref{lem:4.1}, we have \[ [\widetilde{U}_1:\widetilde{U}\cap \overline{M}^{ad}({\mathbb F}_p)]\leq [\widetilde{U}_1:\widetilde{U}^\dagger]\leq c(n). \] By Goursat's Lemma~\ref{Goursat} \[ [\widetilde{U}_2:\widetilde{U}\cap \overline{M}^{ab}({\mathbb F}_p)]=[\widetilde{U}_1:\widetilde{U}\cap \overline{M}^{ad}({\mathbb F}_p)]\leq c(n). \] Let~$U'$, resp.~$U''$ be the inverse image in~$U_p$ of~$\widetilde{U}\cap \overline{M}^{ad}({\mathbb F}_p)$, resp.~$\overline{M}({\mathbb F}_p)$. Because~$Z(\overline{M})$ is the $(ad_{\overline{M}},ab_{\overline{M}})$-inverse image of~$\overline{M}^{ab}$ in~$\overline{M}$, we have \[ U'\leq U''\leq Z(\overline{M})({\mathbb F}_p). \] Define~$F:=Z(\overline{M})\cap \overline{M}^{der}$, which is a finite~${\mathbb F}_p$-algebraic group of degree at most~$c_2(\dim(M))\leq c_2(n)$. As we have~$U''\leq F(\overline{{\mathbb F}_p})\cdot U'$, we have \[ [U'':U']\leq \# F\leq c_2(n). \] On the other hand, we have \[ [Z(\overline{M})({\mathbb F}_p):U'']\leq [ \overline{M}^{ab}({\mathbb F}_p):\widetilde{U}\cap \overline{M}^{ab}({\mathbb F}_p)]. \] It follows \begin{multline} [Z(\overline{M})({\mathbb F}_p):U']\leq c_2(n)\cdot[ \overline{M}^{ab}({\mathbb F}_p):\widetilde{U}\cap \overline{M}^{ab}({\mathbb F}_p)]\\ \leq c(n)\cdot c_2(n)\cdot[ \overline{M}^{ab}({\mathbb F}_p):\widetilde{U}_2].\qedhere \end{multline} \end{proof} \begin{lemma}\label{lem:4.1} Let~$\hat{U}:=ad_{M_{{\mathbb F}_p}}(U(p))$~be the image of~$U(p)$ in~$M_{{\mathbb F}_p}^{ad}({\mathbb F}_p)$. Then~$\hat{U}^\dagger=ad_{M_{{\mathbb F}_p}}(U(p)^\dagger)$, and there exists~$c(n)$ such that \[ [\hat{U}:\hat{U}^\dagger]\leq c(n). \] \end{lemma} \begin{proof}The equality~$\hat{U}^\dagger=ad_{\overline{M}}(U(p)^\dagger)$ follows from Lemma~\ref{Sylow}. By construction, \[ Z_{\overline{M}^{ad}}(\hat{U})=Z_{\overline{M}}(U)/Z(\overline{M})\leq \overline{M}^{ad}. \] We let~$j(N)$ be the Jordan constant (cf.~\cite[\S5]{SCrit}). From Jordan's theorem there exists~$\hat{U}^\dagger\leq \hat{U}'\leq \hat{U}$ of index~$[\hat{U}:\hat{U}']\leq C(n):=j(d(n))$ such that \[ \hat{U}'/\hat{U}^\dagger \] is abelian, where~$d(n)$ is as in~\cite[134. \S7 p.25 about~$GL_d$]{S4}. We assume~$p\geq M(C(n))$ as in~\ref{defi:Tate2} of the Tate hypothesis Def.~\ref{defi:Tate}. We use~$c(n)$ and~$m(n)$ from Lemma~\ref{lem:4.2}, and assume~$p>m(n)$ and~$p>M(C(n)\cdot c(n))$. Then, by part~\ref{defi:Tate2} of the Tate hypothesis (Def.~\ref{defi:Tate}, the uniform integral Tate conjecture), Lemma~\ref{lem:4.2} applies to~$\hat{U}'\leq \overline{M}^{ad}({\mathbb F}_p)$. The conclusion follows. \end{proof} \begin{lemma}\label{lem:4.2} For every~$n\in\mathbb{Z}_{\geq1}$, there exist~$c(n)$, $c'(n)$, $m(n)$ such that the following holds. Let~$p>m(n)$ be a prime, let~$M\leq GL(n)$ be adjoint over~${\mathbb F}_p$, and let \[ \hat{U}\leq M({\mathbb F}_p) \] be a subgroup \begin{itemize} \item such that~$\hat{U}/\hat{U}^\dagger$ is abelian, \item and such that for every~$U'\leq \hat{U}$ of index at most $c(n)$: \begin{enumerate} \item \label{cond 1} we have~$Z_M(U')=1$; \item \label{cond 3} the action of~$U'$ is semisimple. \end{enumerate} \end{itemize} Then we have~$[\hat{U}:\hat{U}^\dagger]\leq c'(n)$. \end{lemma} \subsubsection{Remark}\label{Remark Lemma} In proving the Lemma, we may substitute~$\hat{U}$ with~$U'$ if~$\hat{U}^\dagger\leq U'\leq \hat{U}$ and~$[\hat{U}:U']\leq f(n)$, where~$f(n)$ depends only on~$n$.
We then have to change~$c(n)$ into~$c(n)\cdot f(n)$ accordingly. \begin{proof}We assume~$p>n+1$, so that we can apply Nori theory~\cite{N}. We denote by~$S\leq M$ the~$\mathbb{F}_p$-algebraic associated by Nori to~$\hat{U}^\dagger$, and denote by~$N=N_M(S)$ the normaliser of~$S$ in~$M$. We deduce from~\eqref{cond 3} that~$S$ is semisimple, and thus~$N^0=S\cdot Z_M(S)^0$. We recall that the semisimple Lie subalgebras~$\mathfrak{m}\leq \mathfrak{gl}(N)_{\overline{\mathbb{F}_p}}$ can assume finitely many types, independently from~$p$. We deduce uniform bounds \begin{equation} \#N/N^0\leq c_1(n)\text{ and }\#Z(S)\leq c_2(n). \end{equation} We have~$\hat{U}\leq N$, and~$U^\dagger\leq N^\dagger\leq N^0$. If~$U'=\hat{U}\cap N^0$ we have~$[\hat{U}:U']\leq c_1(N)$ and~$U^\dagger\leq U'$. Using the Remark~\ref{Remark Lemma}, we may replace~$\hat{U}$ by~$U'=\hat{U}\cap N^0$. We denote~$Z(S)=Z_M(S)\cap S$ and we consider \[ N^0\to S^{ad}\times Z_M(S)^0/Z(S). \] We denote~$\widetilde{U}$ the image of~$\hat{U}$, by \[ \widetilde{U}_1\leq S^{ad}(\mathbb{F}_p)\text{ and~}\widetilde{U}_2\leq (Z_M(S)^0/Z(S))(\mathbb{F}_p) \] the projections of~$\widetilde{U}$ and define \[U'_1:=\widetilde{U}\cap (S^{ad}({\mathbb F}_p)\times\{1\}) \text{ and } U'_2:=\widetilde{U}\cap(\{1\}\times Z_M(S)/Z(S)).\] From Lemma~\ref{Sylow}, the image of~$S(\mathbb{F}_p)^\dagger\leq \hat{U}$ is~$S^{ad}({\mathbb F}_p)^\dagger$. Thus \[ S^{ad}({\mathbb F}_p)^\dagger\times\{1\} \leq U'_1\leq \widetilde{U}_1\times\{1\}\leq S^{ad}({\mathbb F}_p)\times\{1\}. \] With~$r(n)$ given by~\cite[3.6(iv-v) and p.\,270]{N}, we have \begin{equation}\label{rN} [U'_1:\widetilde{U}_1\times\{1\}]\leq [S^{ad}({\mathbb F}_p):S^{ad}({\mathbb F}_p)^\dagger]\leq r(n)=2^{n-1}. \end{equation} By Goursat's Lemma~\ref{Goursat} and~\eqref{rN} we have \[ [U'_1:\widetilde{U}_1\times\{1\}]=[\{1\}\times\widetilde{U}_2:U'_2]\leq r(n). \] Thus, with~$U':=U'_1\cdot U'_2\simeq U'_1\times U'_2$, we have \[ [\widetilde{U}:U']\leq [\widetilde{U}_1\times\widetilde{U}_2:U']\leq r(n)^2. \] Because~$\widetilde{U}^\dagger=S(\mathbb{F}_p)^\dagger$ is sent to~$S^{ad}(\mathbb{F}_p)^\dagger\times\{1\}$ (cf. Lemma~\ref{Sylow}) and because~$S^{ad}(\mathbb{F}_p)^\dagger\times\{1\}\leq U'_1\leq U'$ we may use the Remark~\ref{Remark Lemma}, and replace~$\hat{U}$ by the inverse image of~$U'$. We denote by \[ \hat{U}_1,\quad\hat{U}_2 \] the inverse images of~$U'_1$ and~$U'_2$. Because~$\hat{U}_1\leq Z_M(S)$ and~$\hat{U}_2\leq S$, the groups~$\hat{U}_1$ and~$\hat{U}_2$ commute with each other. We reduce the situation to the case where~$\hat{U}_2$ is abelian. We know that~$\hat{U}/\hat{U}^\dagger$ is abelian and that~$\hat{U}^\dagger\leq S(\mathbb{F}_p)$. It follows that~$\hat{U}_2/(\hat{U}_2\cap S(\mathbb{F}_p))$ is abelian. We have~$F:=\hat{U}_2\cap S(\mathbb{F}_p)\leq Z_M(S)\cap S=Z(S)$, and thus~$\abs{F}\leq c_2(n)$, and~$\hat{U}_2$ is an extension of the abelian group~$U'_2=\hat{U}_2/(\hat{U}_2\cap S)=\widetilde{U}\cap Z_M(S)/Z(S)$ by a finite group~$F$ of order at most~$c_2(n)$. Moreover,~$U'_2$ is of order prime to~$p$, and thus is diagonalisable in~$Z_M(S)/Z(S)(\overline{\mathbb{F}_p})$. It follows we can find a monomorphism~$U_2'\leq (\overline{\mathbb{F}_p}^\times)^r$ where~$r(Z_M(S))$ is the rank of~$Z_M(S)$. We have~$r(Z_M(S))\leq r(M)\leq N$. From Cor.~\ref{coro extension}, there exists an abelian subgroup~$U''_2\leq \hat{U}_2$ of index at most \[ c_3(n)=e(c_2(n))^n. \] Using the remark we may replace~$\hat{U}=\hat{U}_1\cdot \hat{U}_2$ by~$U'=\hat{U_1}\cdot U'_2$. 
Then~$U'_2$ commutes to~$\hat{U}_1$ and to itself:~$U'_2$ is in the centre of~$\hat{U}$. By Hypothesis~\eqref{cond 1}, we have~$U'_2=1$. Thus~$\hat{U}=\hat{U_2}\leq S({\mathbb F}_p)$ and \[ [\hat{U}:\hat{U}^\dagger]\leq [S^{ad}({\mathbb F}_p):S^{ad}({\mathbb F}_p)^\dagger]\leq r(n).\qedhere \] \end{proof} \subsubsection{Other lemmas} \begin{lemma}\label{Lemma bounded and centraliser} Let~$H\leq G\leq GL(d)$ be algebraic groups over~$\mathbb{Q}$ with~$H$ Zariski connected. There exists~$\lambda\in\mathbb{Z}_{\geq 1}$, and~$N\in\mathbb{Z}_{\geq0}$ such that: for all prime~$p\geq N$, and for all subgroup~$U\leq H_{\mathbb{F}_p}(\mathbb{F}_p)$ such that \[ [H_{\mathbb{F}_p}(\mathbb{F}_p):U]\leq \lambda \] we have \[ Z_{G_{\mathbb{F}_p}}(U)=Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p}). \] \end{lemma} \begin{proof}Without loss of generality we may assume~$G=GL(d)$. We define a scheme~$X\leq Y\times Y$, with~$Y=GL(d)_{\mathbb{Z}}$ by \[ X=\{(g,h)\in GL(d)_{\mathbb{Z}}\times GL(d)_{\mathbb{Z}}|[g,h]=1, h\in H\} \] and denote by~$\phi: X\to Y$ the first projection. According to the Lemma~\ref{lemma schemes}, for every prime~$p$, and every~$g\in G(\overline{\mathbb{F}_p})$, \[ \pi_0(H\cap Z_{G_{\overline{\mathbb{F}_p}}}(\{g\}))\leq \gamma. \] For~$p\gg0$, the~$\mathbb{F}_p$ group~$H_{\mathbb{F}_p}$ will be Zariski connected. (\citestacks[Lem. 37.26.5.]{055H}). From~\cite[Lem.~3.5]{N}, we have \[ \#H'(\mathbb{F}_p)\leq (p+1)^{\dim(H')}\cdot \gamma \] and \[ \#H(\mathbb{F}_p)\geq (p-1)^{\dim(H)}. \] Let~$\lambda=\gamma\cdot (\frac{p-1}{p+1})^{\dim(G)}$ and let \[ U\leq H(\mathbb{F}_p) \] be such that \[ Z_{G_{\mathbb{F}_p}}(U)\neq Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p}). \] We have~$Z_{G_{\mathbb{F}_p}}(U)\geq Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p})$. We remark that~$Z_{G_{\mathbb{F}_p}}(U)$ and~$Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p})$ are Zariski connected because they are non empty Zariski open subsets~$A\cap GL(d)$ in a subalgebra of~$A\leq End({\mathbb{F}_p}^d)$. For~$p> \left(\frac{p-1}{p+1}\right)^{\dim(G)}$ (cf.~\cite[Lem.~3.5]{N}), we will have \[ \#Z_{G_{\mathbb{F}_p}}(U)({\mathbb{F}_p})< \#Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p})({\mathbb{F}_p}) \] and there exists \[ g\in Z_{G_{\mathbb{F}_p}}(U)({\mathbb{F}_p})\smallsetminus \#Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p})({\mathbb{F}_p}) \] We have then, with~$H'=X_g=Z_{G_{\mathbb{F}_p}}(\{g\})$, which is defined over~$\mathbb{F}_p$, \[ H'<H. \] Because~$H$ is connected, we have~$\dim(H')<\dim(H)$ and \[ \#U\leq \#H'(\mathbb{F}_p)\leq \frac{1}{\lambda}\cdot \#H(\mathbb{F}_p). \] The Lemma follows. \end{proof} \begin{lemma}\label{lemma schemes} Let~$\phi:X\to Y$ be a morphism of schemes of finite type over~$\mathbb{Z}$. Then there exists~$\gamma$ such that: for every field~$K$, and every~$y\in Y(K)$, the number of geometric connected components~$\#\pi_0(X_y)$ of the fibre~$X_y$ satisfies \[ \#\pi_0(X_y)\leq \gamma. \] If~$\phi$ is flat, then~$y\mapsto \dim(X_y)$ is lower semicontinuous on~$Y$. \end{lemma} \begin{proof}If~$Y$ is non-empty, there exists a proper closed subset outside of which the function~$y\mapsto\#\pi_0(X_y)$ is constant, according to\footnote{Applied to the generic point of an irreducible component of~$Y$. The latter exist because~$Y$ is noetherian.} \citestacks[Lemma 37.26.5.]{055H}. We conclude by noetherian induction. The second assertion, on dimensions, is~\citestacks[Lemma 37.28.4.]{0D4H} (compare \cite[15.5.1]{EGA42}.) 
\end{proof} \begin{lemma}\label{Sylow} Let~$\phi:U\to U'$ be an epimorphism of finite groups and, for some prime~$p$, denote by~$U^\dagger$, resp.~$U'^\dagger$ be the subgroup generated by elements of order a power of~$p$. Then \[ \phi(U^\dagger)=U'^{\dagger}. \] \end{lemma} \begin{proof} If~$u$ is of order a power of~$p$, then so is~$\phi(u)$. Thus \[ \phi(U^\dagger)\leq U'^{\dagger}. \] The subgroup~$U'^\dagger$ is normal and is the smallest normal subgroup such that~$U/U^\dagger$ does not contain a non trivial element of order a power of~$p$; equivalently~$\#U/U^\dagger$ is prime to~$p$. Because~$\phi$ is an epimorphism,~$\phi(U^\dagger)$ is normal in~$U'$ and we have a group isomorphism \[ U/U^\dagger\to U'/\phi(U^\dagger). \] Thus~$\#U'/\phi(U^\dagger)$ is prime to~$p$, and the mentioned minimality property implies \[ \phi(U^\dagger)\geq U'^{\dagger}.\qedhere \] \end{proof} \begin{lemma}\label{Lemma extension} Let \[ 1\to N\to G \xrightarrow{\pi} H\to 1 \] be a short exact sequence of finite groups such that~$H$ is abelian. There exist~$e(\#N)\in\mathbb{Z}_{\geq1}$ and~$H'\leq H$ and~$H''\leq G$ such that~$\pi|_H$ is an isomorphism onto~$H''$ and \[ H'\geq e(\#N)\cdot H. \] \end{lemma} \begin{proof} Let~$\rho:G\to\operatorname{Aut}(N)$ be the adjoint action on its normal subgroup~$N$. Then~$G'=\ker \rho$ has index at most~$\#\operatorname{Aut}(N)\leq (\#N)!$ and we may replace~$G$ with~$G'$ and~$H$ with~$p(H)$, that is: we assume the extension of~$H$ by~$N$ is a central extension. Let~$\gamma=\#N$, and, for~$h=\gamma\cdot h'$ in~$H':=\gamma\cdot H$, choose~$g'$ such that~$p(g')=h'$. We claim that \[ h\mapsto \sigma(h):=g'^\gamma \] is a well defined section of~$p$ on~$H'$. This would prove the Lemma for~$e(\#N)=(\#N)!\cdot \#N$ and~$H''=\sigma(H')$. We prove that~$\sigma(h)$ does not depend on the choice of~$g'$. Let~$g''=n\cdot g'$ with~$n\in N$. Then, because~$N$ is central, we have \[ g''^\gamma=(g'\cdot n)\cdot\ldots\cdot (g'\cdot n)=g'^{\gamma}\cdot n^\gamma=g'^\gamma. \] We prove~$\sigma(h_1\cdot h_2)=\sigma(h_1)\cdot \sigma(h_2)$. We pick lifts~$g_1,g_2$ of~$h_1,h_2$. Because~$H$ is commutative, we have \[ [g'_1,g'_2]\in N \] that is~$g_1\cdot g_2=n\cdot g_2\cdot g_1$ for some~$n\in N$. We have, for~$i,j\in\mathbb{Z}_{\geq0}$, \[ g_1^{i+1}\cdot g_2^{j+1}=g_1^{i}\cdot(n\cdot g_2\cdot g_1)\cdot g_2^{j}= (g_1^{i}g_2\cdot g_1\cdot g_2^{j})\cdot n \] and by induction \[ g_1^{i+1}\cdot g_2^{j}=g_2\cdot g_1^{i+1}\cdot g_2^{j}\cdot n^{i+1}. \] We deduce by induction, for~$i=j=\gamma$, \[ g_1^{\gamma}g_2^{\gamma}=g_2^{\gamma}g_1^{\gamma}n^{\gamma^2} =g_2^{\gamma}g_1^{\gamma}. \] The Lemma is proved. \end{proof} \begin{corollary}\label{coro extension} If~$H$ is generated by~$k$ elements, we have \[[H':H]\leq e(\#N)^k.\] \end{corollary} We used the following form of Goursat's Lemma. \begin{lemma}[Goursat's Lemma]\label{Goursat} Let~$U\leq G_1\times G_2$ be a subgroup, and~$U_1$, $U_2$ be its projections, and define~$U'_1=U\cap(G_1\times\{1\})$ and~$U'_2=U\cap(\{1\}\times G_2)$. Then~$(U_1\times\{1\})/U'_1$ and~$(\{1\}\times U_2)/U'_2$ are isomorphic, and hence \[ \abs{(U_1\times\{1\})/U'_1}=\abs{(\{1\}\times U_2)/U'_2}. \] \end{lemma} \section{Reductive norm estimates from residual stability}\label{sec:reductive} \subsection{Standing hypotheses}\label{standing hyp} Let~$F\leq G\to GL(d)$ be reductive groups over~$\mathbb{Q}_p$. 
The ultrametric absolute value is denoted by~$\abs{~}:\mathbb{C}_p\to \mathbb{R}_{\geq0}$ and the norm on~${\mathbb{C}_p}^d$ is denoted by \[ \norm{(v_i)_{i=1}^d}=\max\{\abs{v_1};\ldots;\abs{v_d}\}. \] The $\mathbb{Q}_p$-algebraic group~$GL(d)$ has a model~$GL(d)_{\mathbb{Z}_p}$, which induces models~$F_{\mathbb{Z}_p}$ and~$G_{\mathbb{Z}_p}$ over~$\mathbb{Z}_p$. We denote by~$F_{\mathbb{F}_p}$ and~$G_{\mathbb{F}_p}$ their special fibres, which are algebraic groups over~$\mathbb{F}_p$. We assume that, in the sense\footnote{An equivalent property is that~$F_{\mathbb{F}_p}$ and~$G_{\mathbb{F}_p}$ are connected reductive algebraic groups.} of~\cite[\S3.8]{Tits}, \begin{equation}\label{hyp:hyp} \text{$F_{\mathbb{Z}_p}$ and~$G_{\mathbb{Z}_p}$ are ``hyperspecial''.} \end{equation} \setcounter{secnumdepth}{4} \subsubsection{Some consequences} We review some constructions and some properties that hold under hypothesis~\eqref{hyp:hyp}, and will be needed later. \paragraph{} We consider a maximal split torus~$T\leq G$, a basis~$\mathbb{Z}^d\simeq X(T)$. We denote the set of weights of the representation~$\rho:T\to G\to GL(d)$ by \[ \Sigma(\rho)\subseteq X(T) \] and the weight decomposition of~$V={\mathbb{Q}_p}^d$ under the action of~$T$, by \begin{equation}\label{eigen decomp} {\mathbb{Q}_p}^d=\bigoplus_{\chi\in\Sigma(\rho)} V_\chi\text{ where }V_\chi:=\{v\in{\mathbb{Q}_p}^d|\forall t\in T(\mathbb{Q}_p), t\cdot v=\chi(t)\cdot v\}. \end{equation} \paragraph{Remark}\label{rem set weights} For any other maximal torus~$T'$ there is a conjugation~$t\mapsto gtg^{-1}:T\to T'$ in~$G(\mathbb{C}_p)$. We deduce a set~$\Sigma(T')$ corresponding to~$\Sigma(T)$. The resulting set~$\Sigma(T')$ does not depend on the choice of the conjugating element. The weight spaces~$V_\chi$ in the decomposition~\eqref{eigen decomp} depend on~$T$. \paragraph{}\label{Good torus} From\footnote{With~$\Omega=\{x_0\}$ if~$x_0\in\mathcal{BT}(G/L)$ is the fixed point of~$G_{\mathbb{Z}_p}(\mathbb{Z}_p)$.}~\cite[\S3.5]{Tits} we know that the induced model~$T_{\mathbb{Z}_p}$ has good reduction, i.e.~$T_{\mathbb{F}_p}$ is a torus, and that we have \[ X(T)\simeq X(T_{\mathbb{F}_p}). \] This also implies, cf. e.g.~\cite[Prop.\,5]{Sesh}, that~\eqref{eigen decomp} is compatible with integral structures: \[ {\mathbb{Z}_p}^d=\bigoplus \Lambda_{\chi}\text{ where }\Lambda_{\chi}:={\mathbb{Z}_p}^d\cap V_\chi; \] and that we have a corresponding weight decomposition \[ {\mathbb{F}_p}^d=\bigoplus \overline{V}_{\chi}\text{ where }\overline{V}_{\chi}:=\Lambda_{\chi}\otimes\mathbb{F}_p. \] \paragraph{} There is a Cartan decomposition~\cite[\S4.6(i), 4.4.3]{BT72} (see also\footnote{See~\cite[\S3 and \S3.3]{Tits} for assumptions of~\cite[3.3.3]{Tits}.}~\cite[3.3.3]{Tits}), for~$L/\mathbb{Q}_p$ a finite extension, and~$T_L$ a maximally split torus of~$G/L$, \begin{equation}\label{Cartan} G(L)=G_{\mathbb{Z}_p}(O_L)T_L(L)G_{\mathbb{Z}_p}(O_L), \end{equation} and consequently over~$\overline{\mathbb{Q}_p}$, when~$T$ is a maximal torus, \begin{equation}\label{Cartanbar} G(\overline{\mathbb{Q}_p})=G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})T(\overline{\mathbb{Q}_p})G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p}). \end{equation} \subsection{Main statement} The following theorem can be seen as a more precise version of the functoriality of local heights used in~\S\ref{every prime}.
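Before stating it, we record a minimal example, not needed in the sequel, illustrating the role of the residual (mod~$p$) hypotheses appearing in the statement. Consider~$G=\mathbb{G}_m$ acting on~${\mathbb{Q}_p}^2$ by~$t\cdot(x,y)=(tx,t^{-1}y)$, and the vectors~$v=(1,1)$ and~$v'=(1,p)$ in~${\mathbb{Z}_p}^2$. Both vectors have trivial stabiliser and Zariski closed orbit over~$\mathbb{Q}_p$, but the orbit of~$\overline{v'}=(1,0)$ is not closed in~$\mathbb{A}^2_{\mathbb{F}_p}$. Accordingly, for~$t=p$ we have \[ \log\max\{1;\norm{t\cdot v}\}=\log\max\{1;\norm{(p,p^{-1})}\}=\log p>0 \quad\text{while}\quad \log\max\{1;\norm{t\cdot v'}\}=\log\max\{1;\norm{(p,1)}\}=0, \] so that no inequality of the form~$h_v\leq C\cdot h_{v'}$ can hold in this situation.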
\begin{theorem}[Local relative stability estimates]\label{thm:compare reductive} Under the hypotheses of~\S\ref{standing hyp}, let~$v,v'\in{\mathbb{Z}_p}^d$ be nonzero vectors, denote by~$\overline{v},\overline{v'}\in {{\mathbb F}_p}^d$ their reductions, and assume that \begin{enumerate} \item \label{test} the orbits~$G_{\mathbb{Q}_p}\cdot v,G_{\mathbb{Q}_p}\cdot v'\subseteq {\mathbb A}_{{\mathbb Q}_p}^d$ are closed subvarieties; \item the stabiliser groups~$F_v:=\Stab_G(v),F_{v'}:=\Stab_G(v')$ satisfy \[ F_v=F_{v'}=F; \] \end{enumerate} and that \begin{enumerate} \item the orbits~$G_{{\mathbb F}_p}\cdot \overline{v},G_{{\mathbb F}_p}\cdot \overline{v'}\subseteq {\mathbb A}_{{\mathbb F}_p}^d$ are closed subvarieties; \item the stabiliser groups~$F_{\overline{v}}:=\Stab_{G_{{\mathbb F}_p}}(\overline{v}),F_{\overline{v'}}:=\Stab_{G_{{\mathbb F}_p}}(\overline{v'})$ satisfy, as group schemes\footnote{It amounts to the property that~$F_{\overline{v}}$ and~$F_{\overline{v'}}$ are smooth.}, \begin{equation}\label{Hyp 51} F_{\overline{v}}=F_{\overline{v'}}=F_{{\mathbb F}_p}. \end{equation} \end{enumerate} We define two functions~$G(\mathbb{C}_p)\to \mathbb{R}$ by \[ H_{v}:g\mapsto \max\{1;\norm{g\cdot v}\}\text{ and } H_{v'}:g\mapsto \max\{1;\norm{g\cdot v'}\}. \] Then the functions~$h_v=\log H_v$ and~$h_{v'}=\log H_{v'}$ satisfy \begin{equation}\label{log equiv on reductive}\label{reductive theorem final estimate} h_v\leq C\cdot h_{v'}\text{ and }h_{v'}\leq C\cdot h_v, \end{equation} in which~$C=C(\Sigma(\rho))$ depends only on the set of weights of~$\rho$ (cf.~\ref{rem set weights}). \end{theorem} \noindent In our proof, the quantity~$C(\Sigma)$ will depend upon the choice of an invariant euclidean metric ``in the root system'' of~$G$, and there are canonical choices of such metrics. The hypothesis~\eqref{Hyp 51} can be replaced by the weaker hypothesis in~\eqref{pKN flat}. Several features of this statement are important to our strategy. \begin{itemize} \item The quantity~$C$ only depends on the weights of~$\rho$. Thus, when~$\rho$ comes from a representation defined over~${\mathbb Q}$, this~$C$ does not depend on the prime~$p$. \item The inequality does not need an additive constant: we have \[ H_v\leq A\cdot {H_{v'}}^C \] with~$A=1$. Thus, when we multiply the inequalities over infinitely many primes, we do not accumulate an uncontrolled multiplicative factor~$\prod_p A(p)$. \item The estimate~\eqref{reductive theorem final estimate} depends upon~$v$ only through its stabiliser group~$F$. This is precisely the information about the stabilisers that we deduce from the Tate conjecture. \end{itemize} \subsection{Proof} Because~$G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})\leq GL(d,\overline{\mathbb{Z}_p})$ acts isometrically on~$\overline{\mathbb{Z}_p}^d$, the functions~$h_v$ and~$h_{v'}$ are left~$G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})$-invariant. We denote the quotient functions by \[ h'_v,h'_{v'}: G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})\backslash G(\overline{\mathbb{Q}_p})\to \mathbb{R}. \] Choose an arbitrary~$g\in G(\overline{\mathbb{Q}_p})$. It is sufficient to prove~\eqref{CCL proof KN} with this element~$g$, as the other inequality in~\eqref{log equiv on reductive} can be deduced after swapping~$v$ and~$v'$. Let~$T\leq G$ be a maximal torus defined over~$\overline{\mathbb{Q}_p}$. We endow~$A_T$, defined in~\eqref{defi appartment}, with a canonical euclidean distance~$d(~,~)=d_G(~,~)$, invariant under~$N_G(T)$ and depending only on~$G$ (using e.g.~\cite[LIE VI.12]{BBK}). We denote by~$\Sigma(\rho)$ the set of weights of the action~$T\to G\xrightarrow{\rho} GL(n)$.
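For orientation only, we note the simplest instance: for~$G=GL(n)$ with~$\rho$ the standard representation and~$T$ the diagonal torus, the set of weights is \[ \Sigma(\rho)=\{\chi_1,\ldots,\chi_n\},\qquad \chi_i(\mathrm{diag}(t_1,\ldots,t_n))=t_i, \] and the weight spaces~$V_{\chi_i}$ are the coordinate lines.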
We denote by~$\gamma(\Sigma(\rho))$ the quantity from Cor.~\ref{coro slopes}, which does not depend on the maximal torus~$T$ (all of which are conjugate), but only on the weights of~$\rho$. Because~$G_{\mathbb{Z}_p}$ is hyperspecial, there is a Cartan decomposition~\eqref{Cartan}. Thus there are some~$t'\in T(\overline{\mathbb{Q}_p})$ and~$k\in G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})$ such that \[ G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})\cdot g=G(\overline{\mathbb{Z}_p})\cdot t \] with~$t=k t' k^{-1}$. We may thus assume~$g=t$. We may write, as in~\eqref{hv to hmu}, \[ h_v\restriction_{T(\mathbb{C}_p)}=h_{\mu}\circ a_T\text{ and }h_{v'}\restriction_{T(\mathbb{C}_p)}=h_{\mu'}\circ a_T. \] According to Proposition~\ref{prop slopes comparison}, we have \[ c(\Sigma(v))\cdot d(a,C_\mu)\leq h_{\mu}(a)\text{ and }h_{\mu'}(a)\leq c'(\Sigma(v'))\cdot d(a,C_{\mu'}). \] Thanks to the hypotheses of~Theorem~\ref{thm:compare reductive} we may apply\footnote{Here~\eqref{universal integral} would suffice to prove~$C_\mu=C_{\mu'}$.} Theorem~\ref{pKN}. Thus \begin{multline*} \set{t\in T(\overline{\mathbb{Q}_p})}{h_v(t)=0} \\=\set{t\in T(\overline{\mathbb{Q}_p})}{tF\in (G/F)(\overline{\mathbb{Z}_p})} \\=T(\overline{\mathbb{Q}_p})\cap (G(\overline{\mathbb{Z}_p})\cdot F(\overline{\mathbb{Q}_p})) \\=\set{t\in T(\overline{\mathbb{Q}_p})}{h_{v'}(t)=0}. \end{multline*} As the valuation group~$\Gamma(\overline{\mathbb{Q}_p})$ is~$\mathbb{Q}$, we deduce~$C^\mathbb{Q}_{\mu}=C^\mathbb{Q}_{\mu'}$, and, by Lemma~\ref{C Q density}, we deduce \[ C:=C_\mu=C_{\mu'}. \] Applying Corollary~\ref{coro slopes}, we conclude \begin{equation}\label{CCL proof KN} h_{v'}(g)=h_{v'}(t)=h_{\mu'}\circ a_T(t)\leq \gamma(\Sigma(\rho))^2\cdot h_\mu\circ a_T(t)=\gamma(\Sigma(\rho))^2\cdot h_{v}(t)=\gamma(\Sigma(\rho))^2\cdot h_{v}(g). \end{equation} \subsection{Norms on Toric orbits and the Apartment}\label{appartments} For a torus~$T$ over an ultrametric extension~$L/\mathbb{Q}_p$, the associated ``apartment'' is defined as \begin{equation}\label{defi appartment} A_T=A_{T/L}=Y(T/L)\otimes\mathbb{R}\simeq {\rm Hom}(X(T),\mathbb{R}) \end{equation} where~$Y(T)=Y(T/L):={\rm Hom}(GL(1)_L,T)$ and~$X(T)=X(T/L):={\rm Hom}(T,GL(1)_L)$ are the group of cocharacters and characters, and are~$\mathbb{Z}$-linear dual to each other. Then the pairing \[ (t,\chi)\mapsto \log_p\abs{\chi(t)}:T(L)\times X(T)\to \mathbb{R} \] induces a map \begin{equation}\label{T to A} a_T:T(L)\to A_T. \end{equation} Let~$\mathbb{Z}\leq \Gamma_L:=\log_p\abs{L^\times}\leq \mathbb{R}$ be the valuation group of~$L$. When~$T$ has a model over~$L$ which is a torus~$T_{O_L}$ over~$O_L$, the map~$a_T$ factors as \[ T(L)\twoheadrightarrow \frac{T(L)}{T_{O_L}(O_L)}\xrightarrow{\sim} Y(T)\otimes\Gamma_L\hookrightarrow A_T. \] For a character~$\chi\in X(T)$ the function \[ \log_p\abs{\chi}:T(L)\xrightarrow{\chi} L^\times\xrightarrow{\abs{~}} \mathbb{R}_{>0}\xrightarrow{\log_p} \mathbb{R} \] passes to the quotient~$\frac{T(L)}{T_{O_L}(O_L)}$ and extends to an~$\mathbb{R}$-linear form which we denote by \[ \omega_\chi:A_T\to \mathbb{R}, \] which is also the one deduced from~$A_T\simeq {\rm Hom}(X(T),\mathbb{R})$. Assume~$T\leq GL(n)$ is a torus over~$L$ with good reduction: denoting the eigenspace decomposition of~$L^n$ for the action of~$T$ by \[ L^n=\sum_{\chi \in X(T)} V_\chi \] we have (\ref{Good torus}, \cite[Prop.\,5]{Sesh}) \begin{equation}\label{integral eigen} {O_L}^n=\sum_{\chi \in X(T)} V_\chi\cap {O_L}^n.
\end{equation} It follows, denoting by~$\norm{~}$ the standard norm on~$L^n$, that, for~$v\in L^n$, \begin{equation}\label{norm tore} \norm{v}=\max\{0\}\cup\{\norm{v_\chi}~|~{\chi\in X(T)}\}. \end{equation} We denote by~$\Sigma(T)\subseteq X(T)$ the set of weights for the action of~$T$, and we set \[ \Sigma(v)=\{\chi\in X(T)~|~v_\chi\neq 0\}\subseteq \Sigma(T). \] If~$v\in{O_L}^n$, we define \[ \overline{\Sigma}(v)=\{\chi\in X(T)~|~\norm{v_\chi}=1\}\subseteq \Sigma(v) \] and a function~$\mu:\Sigma(v)\to \mathbb{R}_{\leq0}$ given by \[ \mu(\chi)=\log_p \norm{v_\chi}. \] The functions~$H_v:T(\overline{\mathbb{Q}_p})\to\mathbb{R}_{\geq 0}$ and~$h_v=\log(H_v)$ defined by \[ H_v(t)=\max\{1;\norm{t\cdot v}\} \] can be computed from the formula \begin{equation}\label{hv to hmu} h_v=h_\mu\circ a_T\text{ with }h_\mu(a):=\max\{0\}\cup\set{\omega_\chi(a)+\mu(\chi)}{\chi \in \Sigma(v)}. \end{equation} \begin{lemma}\label{C Q density} Define \[ C_\mu=\set{a\in A_T}{h_{\mu}(a)=0}\text{ and }A_T^\mathbb{Q}=Y(T)\otimes\mathbb{Q}\subseteq A_T. \] Then \[ C_\mu=\overline{C_\mu\cap A_T^\mathbb{Q}}. \] \end{lemma} We skip the proof of Lemma~\ref{C Q density}. The main point is that the convex set~$C_\mu$ is constructed from affine forms~$\omega_{\chi}+\mu(\chi)$ on~$A_T$ which are \emph{defined over~${\mathbb Q}$}, with respect to the~${\mathbb Q}$-structure~$A_T^{\mathbb Q}$. \section{Residual stability and {$p$-adic} Kempf-Ness Theorem} \label{sec:pKN} The estimates of Th.~\ref{thm:compare reductive} rely on the following result, of independent interest. This is an analogue of~\cite[Th. 0.1 b)]{KN} in the context~\cite{B92} of~$p$-adic Mumford stability. It relies on a careful analysis of the reduction of models of homogeneous spaces given by invariant theory~\cite{Sesh} or by closed orbits in a linear representation. \begin{theorem}[$p$-adic Kempf-Ness Theorem]\label{pKN} Let~$F_{\mathbb{Z}_p}\leq G_{\mathbb{Z}_p}\leq GL(n)_{\mathbb{Z}_p}$ be smooth reductive group schemes, such that~$F_{\mathbb{Z}_p}\to G_{\mathbb{Z}_p}\to GL(n)_{\mathbb{Z}_p}$ are closed immersions, and~$G_{\mathbb{Z}_p}$ is connected. Let~$v\in{\mathbb{Z}_p}^n$, denote by~$\overline{v}\in{\mathbb{F}_p}^n$ its reduction and assume that \begin{equation}\label{pKN flat} \Stab_{G_{\mathbb{Q}_p}}(v)=F_{\mathbb{Q}_p}\text{ and } \dim (\Stab_{G_{\mathbb{F}_p}}(\overline{v}))=\dim (F_{\mathbb{F}_p}), \end{equation} (using Krull dimensions) and assume that the orbits \begin{equation}\label{pKN stab} G_{\mathbb{Q}_p}\cdot v\subseteq \mathbb{A}^n_{\mathbb{Q}_p}\text{ and } G_{\mathbb{F}_p}\cdot\overline{v}\subseteq \mathbb{A}^n_{\mathbb{F}_p} \end{equation} are closed. Then, for all~$g\in G(\overline{\mathbb{Q}_p})$, we have, denoting by~$\mathbb{Z}_p[G/F]:=\mathbb{Z}_p[G]\cap \mathbb{Q}_p[G]^{F}$ the algebra of $F$-invariant functions~$G\to\mathbb{A}^1$ defined over~$\mathbb{Z}_p$, \begin{equation}\label{universal integral} g\cdot v \in \overline{\mathbb{Z}_p}^n~\text{ if and only if }~\forall f\in \mathbb{Z}_p[G/F], f(g)\in\overline{\mathbb{Z}_p}. \end{equation} Moreover,~$ \mbox{Spec}(\mathbb{Z}_p[G/F])$ is smooth over~$\mathbb{Z}_p$, and we have \begin{equation}\label{KN CCL} (G(\overline{\mathbb{Q}_p})\cdot v)\cap\overline{\mathbb{Z}_p}^n=G(\overline{\mathbb{Z}_p})\cdot v.
\end{equation} \end{theorem} A reformulation of~\eqref{universal integral} is: denoting by~$gF\in(G/F)(\overline{\mathbb{Q}_p})$ the image of~$g\in G(\overline{\mathbb{Q}_p})$, we have \begin{equation}\label{universal integral bis} g\cdot v \in \overline{\mathbb{Z}_p}^n~\text{ if and only if }~gF\in(G/F)(\overline{\mathbb{Z}_p}). \end{equation} \subsubsection*{Remarks} Some of the hypotheses can be rephrased as follows. The~$\mathbb{Q}_p$-algebraic groups~$F$ and~$G$ are reductive, the compact subgroups~$F_{\mathbb{Z}_p}(\mathbb{Z}_p)\leq F(\mathbb{Q}_p)$ and~$G_{\mathbb{Z}_p}(\mathbb{Z}_p)\leq G(\mathbb{Q}_p)$ are hyperspecial subgroups, and we have~$F_{\mathbb{Z}_p}(\mathbb{Z}_p)=F(\mathbb{Q}_p)\cap GL(n,\mathbb{Z}_p)$ and~$G_{\mathbb{Z}_p}(\mathbb{Z}_p)=G(\mathbb{Q}_p)\cap GL(n,\mathbb{Z}_p)$. The property~\eqref{pKN stab} is related to semi-stability and residual semi-stability of the vector~$v$ in the sense of~\cite{B92}. In~\eqref{pKN flat}, the hypothesis on dimensions means that~$\Stab_{G_{\mathbb{F}_p}}(\overline{v})^{0,red}$ (the reduced subgroup of the neutral component) is equal to~$(F_{\mathbb{F}_p})^0$. Equivalently,~$\Stab_{G_{\mathbb{F}_p}}(\overline{v})^{0}(\overline{\mathbb{F}_p})=F^0(\overline{\mathbb{F}_p})$. This is implied by the stronger condition \begin{equation}\label{red scheme identity} \Stab_{G(\overline{\mathbb{F}_p})}(\overline{v})=F(\overline{\mathbb{F}_p}) \end{equation} and by the even stronger condition \begin{equation}\label{Scheme stab identity} \Stab_{G_{\mathbb{F}_p}}(\overline{v})=F_{\mathbb{F}_p}\text{ as group schemes.} \end{equation} \subsubsection*{Proof of Theorem~\ref{pKN}} We first prove~\eqref{universal integral}. \begin{proof}Let~$S$ be the closure of~$G_{\mathbb{Q}_p}\cdot v$ in~$\mathbb{A}^n_{\mathbb{Z}_p}$ as in~\eqref{defi schematic closure S}: it is flat and we have \begin{equation}\label{flat S} S(\overline{\mathbb{Z}_p})=G(\overline{\mathbb{Q}_p})\cdot v \cap \overline{\mathbb{Z}_p}^n \qquad\text{and}\qquad S(\overline{\mathbb{F}_p})=\left\{\overline{x'}\middle|x'\in S(\overline{\mathbb{Z}_p})\right\}. \end{equation} Let~$a_1,\ldots,a_n\in A:=\mathbb{Z}_p[S]$ be the restrictions to~$S$ of the coordinate functions on~$\mathbb{A}^n_{\mathbb{Z}_p}$: by definition we have, for~$x=g\cdot v\in S(\overline{\mathbb{Q}_p})=G(\overline{\mathbb{Q}_p})\cdot v$, \[ x=g\cdot v\in\overline{\mathbb{Z}_p}^n\text{ if and only if }\forall i\in \{1;\ldots;n\}, a_i(x)\in\overline{\mathbb{Z}_p}. \] Because~$S$ is closed in~$\mathbb{A}^n_{\mathbb{Z}_p}$ the family~$(a_i)_{i\in\{1;\ldots;n\}}$ generates~$\mathbb{Z}_p[S]$. Denote by~$x':=g F$ the image of~$g$ in~$(G/F)(\overline{\mathbb{Q}_p})\simeq G(\overline{\mathbb{Q}_p})/F(\overline{\mathbb{Q}_p})$, and define~$B:=\mathbb{Z}_p[G/F]$. Applying Cor.~\ref{coro integral extension}, we may use Lem.~\ref{lemma integral extension}. We deduce~\eqref{universal integral}. \end{proof} We now prove~\eqref{KN CCL}. \begin{proof}Consider~$x=gF\in (G/F)(\overline{\mathbb{Z}_p})$ corresponding to \[ \xi:\mbox{Spec}(\overline{\mathbb{Z}_p})\to G/F. \] The reduction~$\overline{x}$ of~$x$, obtained by composing~$\xi$ with~$\mbox{Spec}(\overline{\mathbb{F}_p})\to\mbox{Spec}(\overline{\mathbb{Z}_p})$, is a point~$\overline{x}=\overline{gF}\in (G/F)(\overline{\mathbb{F}_p})$. Because~$(G/F)(\overline{\mathbb{F}_p})\simeq G(\overline{\mathbb{F}_p})/F(\overline{\mathbb{F}_p})$, there exists~$\overline{g}\in G(\overline{\mathbb{F}_p})$ whose image in~$(G/F)(\overline{\mathbb{F}_p})$ is~$\overline{x}$. From Lemma~\ref{platitude}, the map~$\omega:G\to G/F$ is flat.
Flatness is a property which is stable under base change. Thus the base change~$\omega\times_{G/F} \xi$ of~$\omega$ along~$\xi$ is a flat map, which we denote by \[ T_\xi\to \mbox{Spec}(\overline{\mathbb{Z}_p}). \] By construction~$T_\xi(\overline{\mathbb{Q}_p})$ is the transporter~$\set{g\in G(\overline{\mathbb{Q}_p})}{g\cdot v=x}$, and~$T_\xi(\overline{\mathbb{F}_p})$ is the transporter~$\set{\overline{g}\in G(\overline{\mathbb{F}_p})}{\overline{g}\cdot \overline{v}=\overline{x}}$. We have~$\overline{g}\in T_\xi(\overline{\mathbb{F}_p})$ by construction, and, by flatness of~$T_\xi$, there exists~$g$ in~$T_\xi(\overline{\mathbb{Z}_p})=T_\xi(\overline{\mathbb{Q}_p})\cap G(\overline{\mathbb{Z}_p})$. By definition \[ g\cdot v=x\text{ and }g\in G(\overline{\mathbb{Z}_p}). \] Because~$x$ is arbitrary, we have~$(G/F)(\overline{\mathbb{Z}_p})=\omega(G(\overline{\mathbb{Z}_p}))={G(\overline{\mathbb{Z}_p})}F/F$. The equation~\eqref{KN CCL} follows, using~\eqref{universal integral}. \end{proof} \subsubsection*{The Smooth case} The authors thank L. Moret-Bailly for helpful discussions, for the following addition and its proof, and for useful references. We denote by~$\underline{G}/\underline{F}$ the quotient of~$G$ by~$F$ as an ``algebraic space'' in the sense of Artin. Some references are~\cite{A},~\cite{Knu},~\cite{Ana},~\cite{Ray}. \begin{proposition}\label{LMB Prop} In the situation of Th.~\ref{pKN}, assume moreover that we have~\eqref{Scheme stab identity}. Then the morphism \begin{equation}\label{LMB map} \underline{G}/\underline{F}\to S \end{equation} is an isomorphism of schemes. In particular~$S$ is smooth and its reduced fibre is regular. \end{proposition} \begin{proof}Under assumption~\eqref{Scheme stab identity}, the morphism~\eqref{LMB map} is a monomorphism (e.g.~\cite[V Th.~10.1.2]{SGA31}). Since the morphism~\eqref{LMB map} is also finite (by Lemma\footnote{We may use the Lemma for~$\underline{G}/\underline{F}$ instead of~$G/F$ with the following changes in the proof of the Lemma. We replace Zariski's main theorem by the version~\citestacks{082K} for algebraic spaces. Using the monomorphism~$\underline{G}/\underline{F}\to G/F$ from~\cite[V Th.~10.1.2]{SGA31}, we can deduce from Lem.~\ref{Seshadri} the corresponding version for~$\underline{G}/\underline{F}$.}~\ref{Lemma integral}), and thus proper, we may invoke~\cite[18.12.6]{EGA44} (or~\citestacks{04XV}). We deduce that the morphism~\eqref{LMB map} is a closed immersion, that is, an isomorphism onto a closed subscheme. This image is closed and contains the generic fibre, thus it contains~$S^{red}$. Because the scheme~$S$ is reduced, as can be checked on the generic fibre (or see~\citestacks{01J2}), the morphism~\eqref{LMB map} is an isomorphism. We also know that~$\underline{G}/\underline{F}$ is smooth from~\cite[$\text{VI}_\text{B}$ Prop.~9.2 (xii) (and~V Th.~10.1.2)]{SGA31}. \end{proof} \begin{corollary}\label{LMB vs Sesh} The natural morphism~$\underline{G}/\underline{F}\to G/F$ is an isomorphism onto the quotient defined by invariant theory. \end{corollary} \begin{proof}It follows from Prop.~\ref{LMB Prop} once we realise~$G/F$ as an instance of~$S\subseteq \mathbb{A}^n$ as in Prop.~\ref{LMB Prop}. According to~\cite[Th.~2]{Sesh}, the algebra~$\mathbb{Z}_p[G/F]$ admits a finite generating family~$f_1,\ldots,f_k\in \mathbb{Z}_p[G]\subseteq \mathbb{Q}_p[G]$.
According to~\cite[Prop.~3]{Sesh}, there exists a finite dimensional subrepresentation~$V\subseteq \mathbb{Q}_p[G]$ of~$G$ containing~$\{f_1;\ldots;f_k\}$. We endow~$V$ with the integral structure~$V_{\mathbb{Z}_p}$ induced by~$\mathbb{Z}_p[G]$. We pick~$v=(f_1,\ldots,f_k)\in V_{\mathbb{Z}_p}^k\approx {\mathbb{Z}_p}^{n}$, with~$n:=\dim(V)\cdot k$. By construction~$G/F\to \mathbb{A}_{\mathbb{Z}_p}^n$ is a closed immersion. We only need to check~\eqref{pKN flat} and~\eqref{pKN stab}. Using Lemma~\ref{Seshadri} we get the following. \begin{itemize} \item We have~$\dim (G/F)_{\mathbb{F}_p}=\dim G_{\mathbb{F}_p}-\dim F_{\mathbb{F}_p}$, and~\eqref{pKN flat} follows. \item The morphism~$G_{\mathbb{F}_p}/F_{\mathbb{F}_p}\to (G/F)_{\mathbb{F}_p}$ is a closed immersion. Thus the images of~$f_1,\ldots,f_k$ generate~$\mathbb{F}_p[G_{\mathbb{F}_p}/F_{\mathbb{F}_p}]$, and the orbit~$G_{\mathbb{F}_p}\cdot \overline{v}\subseteq\mathbb{A}^n_{\mathbb{F}_p}$ is closed, proving~\eqref{pKN stab}.\qedhere \end{itemize} \end{proof} \subsection{Connectedness and Special fibre} \begin{lemma}\label{lemma KN connected} Let~$G\leq GL(n)$ be a hyperspecial group over~$\mathbb{Q}_p$, let~$v\in \overline{\mathbb{Z}_p}^n$ and let \begin{equation}\label{defi schematic closure S} S=\overline{G\cdot v}^{Zar(\mathbb{A}^n_{\overline{\mathbb{Z}_p}})} \end{equation} be the schematic closure of~$G\cdot v\subseteq \mathbb{A}^n_{\overline{\mathbb{Q}_p}}$ in~$\mathbb{A}^n_{\overline{\mathbb{Z}_p}}$. Then~$S_{\mathbb{F}_p}$ is connected. \end{lemma} We first treat the case~$T=G= GL(1)$. \begin{proof}If~$S_{\mathbb{F}_p}$ is a closed orbit of~$T$, then it is connected, as it is the image of~$GL(1)$, which is connected. We will show that, otherwise, we can decompose~$S_{\mathbb{F}_p}$ in the form \begin{equation}\label{decompo tore en + 0 -} S_{\mathbb{F}_p}(\overline{\mathbb{F}_p})=S^-\cup\{\overline{v_0}\}\cup S^+ \end{equation} where each of~$S^-$ and~$S^+$ is either empty or of the form~$X=T(\overline{\mathbb{F}_p})\cdot\overline{w}$ with~$\overline{v_0}\in \overline{X}^{Zar}$. For every~$\overline{w}\in\overline{\mathbb{F}_p}^n$, because~$T=GL(1)$ is connected, so is~$T\cdot \overline{w}$, and so is its Zariski closure. It follows that~$S^-$ and~$S^+$ are contained in the connected component of~$\overline{v_0}$, and finally that~$S_{\mathbb{F}_p}$ is connected. From~\eqref{flat S}, a point in~$S(\overline{\mathbb{F}_p})$ is of the form \[ \overline{x}\text{ with }x=t\cdot v\in\overline{\mathbb{Z}_p}^n\text{ and }t\in T(\overline{\mathbb{Q}_p}). \] We identify~$X(T):={\rm Hom}(GL(1),GL(1))$ with~$\mathbb{Z}$ and denote by \[ v=\sum_{k\in\mathbb{Z}} v_k \] the eigendecomposition of~$v$ for the action of~$T$. Then~$x=t\cdot v=\sum t^k\cdot v_k$, and, by~\eqref{norm tore}, \begin{equation}\label{eq t x entier} \norm{x}=\max_k\{\abs{t}^k\cdot \norm{v_k}\}\leq 1. \end{equation} Define \[ c=\max_{k< 0} \norm{v_k}^{-1/k}\in[0;1]\text{ and } c'=\min_{k> 0} \norm{v_k}^{-1/k}\in[1;+\infty]. \] For~$t\in T(\overline{\mathbb{Q}_p})$ we have~$t\cdot v\in\overline{\mathbb{Z}_p}^n$ if and only if~$c\leq \abs{t}\leq c'$. We define \begin{eqnarray*} T^-&=&\set{t\in T(\overline{\mathbb{Q}_p})}{\abs{t}=c}\\ T^0&=&\set{t\in T(\overline{\mathbb{Q}_p})}{c<\abs{t}<c'}\\ T^+&=&\set{t\in T(\overline{\mathbb{Q}_p})}{\abs{t}=c'}, \end{eqnarray*} and \[ S^-=\set{\overline{t\cdot v}}{t\in T^-}\text{ and } S^+=\set{\overline{t\cdot v}}{t\in T^+}, \] and \[ v_-=\sum_{k< 0} v_k\text{ and }v_+=\sum_{k> 0} v_k. \] If~$c=0$, then~$T^-=S^-=\emptyset$.
Otherwise, let us pick~$u\in T^-$. Assume first~$c\neq c'$. We then have~$\overline{u\cdot v_+}=0$ and \[ \overline{w}:=\overline{u\cdot v}=\overline{u\cdot v_-+v_0}. \] We then have \[ S^-=T(\overline{\mathbb{F}_p})\cdot \overline{w}. \] Because the weights of~$\overline{u\cdot v_-}$ are negative we have \[ \lim_{\overline{t}\to +\infty} \overline{t}\cdot\overline{u\cdot v_-+v_0}= \overline{u\cdot 0+v_0}=\overline{v_0}, \] where limits are understood in the sense of the Hilbert--Mumford criterion, as in~\cite[Lem.\,1.3]{Kempf}. Thus~$\overline{v_0}\in \overline{S^-}^{Zar}$. The case of~$S^+$ is treated similarly and we have obtained~\eqref{decompo tore en + 0 -} with the desired properties. We now treat the remaining case~$c=c'$. We then have \[ S_{\mathbb{F}_p}(\overline{\mathbb{F}_p})=S^+=S^-=T(\overline{\mathbb{F}_p})\cdot\overline{v}. \] (This is then a closed orbit of~$T_{\mathbb{F}_p}$, as~$S_{\mathbb{F}_p}$ is closed.) \end{proof} We now reduce the Lemma to the case of a torus~$GL(1)\simeq T\leq G$. \begin{proof}It is enough to prove that for an arbitrary~$\overline{x}\in S(\overline{\mathbb{F}_p})$, this~$\overline{x}$ and~$\overline{v}$ belong to the same connected component of~$S_{\mathbb{F}_p}$. We may find~$x\in S(\overline{\mathbb{Q}_p})\cap\overline{\mathbb{Z}_p}^n$ with reduction~$\overline{x}$. There exists~$g\in G(\overline{\mathbb{Q}_p})$ with~$g\cdot v=x$. From the Cartan decomposition~\eqref{Cartanbar}, there exist~$k\in G(\overline{\mathbb{Z}_p})$, a maximal torus~$T\leq G$ and~$t\in T(\overline{\mathbb{Q}_p})$ with~$k\cdot t=g$. The torus~$T$ has good reduction by~\ref{Good torus}. There exist~$y:GL(1)\to T$ defined over~$\overline{\mathbb{Q}_p}$, $u\in T(\overline{\mathbb{Z}_p})$ and~$t'\in \overline{\mathbb{Q}_p}^\times$ with~$y(t')=u\cdot t$. Because~$G_{\mathbb{F}_p}$ is connected and~${S}_{\mathbb{F}_p}$ is~$G_{\mathbb{F}_p}$-invariant, the orbit~$G_{\mathbb{F}_p}(\overline{\mathbb{F}_p})\cdot\overline{x}$ is connected and contained in~$S_{\mathbb{F}_p}$. Thus~$\overline{x}$ and~$\overline{x'}={\overline{(k\cdot u)}^{\phantom{l}}}^{-1}\cdot \overline{x}$ lie in the same connected component of~$S_{\mathbb{F}_p}$. We may thus replace~$\overline{x}$ by~$\overline{x'}$ and~$x$ by~$(k\cdot u)^{-1}\cdot x$, and~$g$ by~$y(t')$. We have~$x\in GL(1)(\overline{\mathbb{Q}_p})\cdot v\cap \overline{\mathbb{Z}_p}^n$ and thus \[ \overline{v},\overline{x}\in S_T(\overline{\mathbb{F}_p})\text{ with }S_T:=\overline{T\cdot v}^{Zar(\mathbb{A}^n_{\overline{\mathbb{Z}_p}})}\subseteq S. \] From the previous~$G=GL(1)$ case,~$S_T$ is connected. Thus~$\overline{x}$ and~$\overline{v}$ lie in the same connected component. \end{proof} \begin{lemma}\label{Lemma S S'} In the situation of Lemma~\ref{lemma KN connected}, we assume that \[ \text{ the orbit $G_{\mathbb{F}_p}\cdot \overline{v}$ is Zariski closed in~$\mathbb{A}^n_{\mathbb{F}_p}$} \] and denote by~$S'$ the corresponding reduced subscheme of~$\mathbb{A}^n_{\mathbb{F}_p}$. We assume furthermore that, using Krull dimension, \[ \dim \Stab_G(v)=\dim \Stab_{G_{\mathbb{F}_p}}(\overline{v}). \] Then \[ S'=(S_{\mathbb{F}_p})^{red.}. \] \end{lemma} \begin{proof}By construction~$S$ is flat over~$\mathbb{Z}_p$.
According to Lemma~\ref{lemma schemes} we have \begin{equation}\label{semi dim S} \dim S_{\mathbb{F}_p}\leq \dim S_{\mathbb{Q}_p}. \end{equation} From~\eqref{dim formula}, we have \begin{eqnarray} \dim S_{\mathbb{Q}_p} &= &\dim G_{\mathbb{Q}_p} - \dim \Stab_{G_{\mathbb{Q}_p}}(v),\\ \dim S'_{\phantom{\mathbb{Q}_p}} &= &\dim G_{\mathbb{F}_p} - \dim \Stab_{G_{\mathbb{F}_p}}(\overline{v}). \end{eqnarray} We deduce~$\dim(S')\geq \dim(S_{\mathbb{F}_p})$. Because~$S'\subseteq S_{\mathbb{F}_p}$ we have actually \[ \dim(S')=\dim(S_{\mathbb{F}_p}). \] Thus, (at the level of topological spaces,)~$S'$ contains a generic point of one irreducible component of~$S_{\mathbb{F}_p}$, and thus\footnote{This~$S'$ is of finite type over~$S_{\mathbb{F}_p}$ and its image will be constructible.} contains a non-empty open subset of~$S_{\mathbb{F}_p}$. Because~$S'$ is closed in~$\mathbb{A}^n_{\mathbb{F}_p}$, it is closed in~$S_{\mathbb{F}_p}$. Thus~$S'$ contains a connected component of~$S_{\mathbb{F}_p}$, and because~$S_{\mathbb{F}_p}$ is connected and~$S'$ is reduced, \[ S'=(S_{\mathbb{F}_p})^{red}.\qedhere \] \end{proof} We define, following~\cite{Sesh}, \begin{equation}\label{defi G sur F} \mathbb{Z}_p[G/F]=\mathbb{Z}_p[G]\cap \mathbb{Q}_p[G]^F\text{ and }G/F:= \mbox{Spec}(\mathbb{Z}_p[G/F]). \end{equation} By~\cite[\S{II}.4, Th.~2]{Sesh} (cf. also~\citestacks[Prop. 10.162.16.]{0335}) \[ \text{ $\mathbb{Z}_p[G/F]$ is of finite type over~$\mathbb{Z}_p$. } \] Since~$G$ is flat we have an isomorphism~$\mathbb{Z}_p[G]\otimes\mathbb{F}_p\simeq\mathbb{F}_p[G_{\mathbb{F}_p}]$. Thus there exists a homomorphism~$\mathbb{Z}_p[G]^{F}\otimes\mathbb{F}_p\to\mathbb{F}_p[G_{\mathbb{F}_p}]^{F_{\mathbb{F}_p}}$, and hence a morphism \[G_{\mathbb{F}_p}/F_{\mathbb{F}_p}\to G/F.\] \begin{lemma}\label{Seshadri} The map \[ G_{\mathbb{F}_p}/F_{\mathbb{F}_p}\to G/F \] induces an isomorphism with the reduced subscheme of the special fibre \[ G_{\mathbb{F}_p}/F_{\mathbb{F}_p}\simeq ((G/F)_{\mathbb{F}_p})^{red}. \] \end{lemma} \begin{proof}We apply~\cite[Prop. 6, \S{II.1}]{Sesh} to the closed affine embedding onto an~$F$-invariant subscheme \[ X:=G\subseteq GL(n)\subseteq SL(n+1)\subseteq V:=\mathbb{A}^{(n+1)^2}, \] where the acting group (denoted~$G$ in~\cite{Sesh}) is our~$F$, which is flat over~$\mathbb{Z}_p$ and acts on the right on~$X$. In our instance, every geometric point~$x$ of~$X$ is ``stable'' in the sense of~\cite[\S{II}.1, Def.~1]{Sesh}, every geometric orbit~$g\cdot F$ is closed, and the closures of two distinct orbits are disjoint. With~$B\subseteq \mathbb{Z}_p[X]^F$ the image of~$\mathbb{Z}_p[V]^F$ in~$\mathbb{Z}_p[X]$, we have, at the level of the schemes, \[ \mbox{Spec}(\mathbb{Z}_p[X])\to X/F:= \mbox{Spec}(\mathbb{Z}_p[X]^F)\to T:= \mbox{Spec}(B). \] From~\cite[Prop. 6, \S{II.1}]{Sesh} we have, on geometric points in an algebraically closed field~$k$ over~$\mathbb{Z}_p$, \[ X(k)\to X(k)/F(k)\simeq T(k). \] In our terms, this implies that we have bijections \[ (G/F)(\overline{\mathbb{F}_p})\to G(\overline{\mathbb{F}_p})/F(\overline{\mathbb{F}_p})\to T(\overline{\mathbb{F}_p}). \] It follows that the~$G_{\mathbb{F}_p}$-equivariant map \[ S'\simeq (G_{\mathbb{F}_p}/F_{\mathbb{F}_p})\to (G/F)_{\mathbb{F}_p} \] induces bijections \[ (G_{\mathbb{F}_p}/F_{\mathbb{F}_p})(\overline{\mathbb{F}_p})\simeq G(\overline{\mathbb{F}_p})/F(\overline{\mathbb{F}_p})\simeq (G/F)_{\mathbb{F}_p}(\overline{\mathbb{F}_p}). \] Because~$S':=G_{\mathbb{F}_p}/F_{\mathbb{F}_p}$ is reduced, we have a factorisation \begin{equation}\label{S' to Sesh} S'\to ((G/F)_{\mathbb{F}_p})^{red}\to (G/F)_{\mathbb{F}_p}.
\end{equation} Let~$x$ be a geometric generic point of~$((G/F)_{\mathbb{F}_p})^{red}$ and let~$k$ be its residue field. Then there is a unique inverse image~$x'\in S'(k)$, and~$S'\to ((G/F)_{\mathbb{F}_p})^{red}$ will be an isomorphism from a neighbourhood~$U'$ of~$x'$ onto a neighbourhood~$U$ of~$x$ (cf. e.g.~\citestacks{0BXP}). Using the action of~$G_{\mathbb{F}_p}$ we may assume~$U$ and~$U'$ are~$G_{\mathbb{F}_p}$-invariant. We have necessarily~$U'=S'$, and~\eqref{S' to Sesh} is an open immersion. It is also surjective on~$\overline{\mathbb{F}_p}$-points, hence surjective. \end{proof} \subsection{Normalisation and Integrality} \begin{lemma}\label{Lemma integral} We keep the situation of Lemma~\ref{Lemma S S'} and the notations from Lemma~\ref{Seshadri}. The morphism~$G/F\to S$ is integral, and finite. \end{lemma} \begin{proof} We claim the map \[ G/F\to S \] is bijective on geometric points. It is bijective on geometric points of characteristic~$0$. Indeed, on the generic fibres we have the isomorphisms~$G_{\mathbb{Q}_p}\cdot v\simeq G_{\mathbb{Q}_p}/F_{\mathbb{Q}_p} \simeq (G/F)_{\mathbb{Q}_p}= \mbox{Spec}(\mathbb{Q}_p[G]^{F_{\mathbb{Q}_p}})$. It is also bijective on geometric points of characteristic~$p$. Indeed, from Lem.~\ref{Lemma S S'} and~\ref{Seshadri}, we have \[ S'\simeq (S_{\mathbb{F}_p})^{red.}\simeq ((G/F)_{\mathbb{F}_p})^{red.}. \] This proves the claim, and it follows that~$G/F\to S$ is quasi-finite and bijective. We define~$\overline{S}= \mbox{Spec}(\overline{\mathbb{Z}_p[S]})$ where we denote by~$\overline{\mathbb{Z}_p[S]}$ the integral closure of~$\mathbb{Z}_p[S]$ in~$\mathbb{Z}_p[G/F]$. According to Zariski's Main Theorem in the form~\citestacks[Th. 10.123.12 (Zariski's Main Theorem)]{00Q9}, for any point~$x$, say of characteristic~$p$, in~$G/F$, there is an open subset~$U\subseteq \overline{S}$ containing its image in~$\overline{S}$ such that the map \[ \overline{\pi}:G/F\to \overline{S} \] induces an isomorphism~$\overline{\pi}^{-1}(U)\to U$ above~$U$. Let~$Z=G/F\smallsetminus \overline{\pi}^{-1}(U)$ and~$Z'=\pi(Z)$ and~$U'=S\smallsetminus Z'$. This~$U'$ is a non-empty subset of~$S$ containing the image~$s:=\pi(x)\in S_{\mathbb{F}_p}$ and such that \[ \pi:G/F\to S \] is integral above~$U'$. By homogeneity, for every~$g\in G(\overline{\mathbb{Z}_p})$, the morphism~$\pi':(G/F)_{\overline{\mathbb{Z}_p}}\to S_{\overline{\mathbb{Z}_p}}$ will be integral above~$gU'\subseteq S_{\overline{\mathbb{Z}_p}}$. Because integrality is a local property, the morphism~$\pi'$ will be integral over the neighbourhood~$U''=G(\overline{\mathbb{Z}_p})\cdot U'$ of the closed fibre~$S_{\overline{\mathbb{F}_p}}$. We also know~$\pi'$ is an isomorphism, hence integral, over the open subset~$U'''=S_{\overline{\mathbb{Q}_p}}$ (the generic fibre). It is then integral over~$U''\cup U'''=S_{\overline{\mathbb{Z}_p}}$. We use~\citestacks[Lem. 10.36.5.]{02JJ} to deduce finiteness from integrality. \end{proof} The following is a reformulation. \begin{corollary}\label{coro integral extension} The normalisations of~$S$ and~$G/F$ in~$G_{\mathbb{Q}_p}$ are the same. The ring~$\mathbb{Z}_p[G/F]$ is integral, and even finite, over~$\mathbb{Z}_p[S]$. \end{corollary} \begin{lemma}\label{lemma integral extension} Let~$A\subseteq B$ be~$\mathbb{Z}_p$-algebras with~$B$ integral over~$A$, and let~$(a_i)_{i\in I}$ be a generating set of~$A$. For~$x\in \mbox{Spec}(B)(\overline{\mathbb{Q}_p})$ the following are equivalent.
\begin{eqnarray} \forall i\in{I},& a_i(x)&\in\overline{\mathbb{Z}_p}.\label{integral lemma generators}\\ \forall a\in{A},& a(x)&\in\overline{\mathbb{Z}_p}.\\ \forall b\in{B},& b(x)&\in\overline{\mathbb{Z}_p}.\label{integral lemma extension} \end{eqnarray} \end{lemma}\label{integral sur Zpbar} \begin{proof}It suffices to prove that for an arbitrary~$b$, assuming~\eqref{integral lemma generators}, we have \begin{equation}\label{eq integral sur Zpbar} b (x)\in\overline{\mathbb{Z}_p}. \end{equation} Because~$b$ is integral over~$A=\mathbb{Z}_p[(a_i)_{i\in I}]$, its image~$b(x)\in\overline{\mathbb{Q}_p}$ is integral over \[ \mathbb{Z}_p[(a_i(x))_{i\in I}]. \] (If~$b^{d+1}=a_{(0)}+\ldots+a_{(d)}\cdot b^d$, then~$b(x)^{d+1}=a_{(0)}(x)+\ldots+a_{(d)}(x)\cdot b(x)^d$.) By assumption~$\mathbb{Z}_p[(a_i(x))_{i\in I}]\subseteq\overline{\mathbb{Z}_p}$. But~$\overline{\mathbb{Z}_p}$ is integrally closed in~$\overline{\mathbb{Q}_p}$. We deduce~\eqref{eq integral sur Zpbar}. \end{proof} The above is sufficient for proving~\eqref{universal integral}, and for the proof of our main result, Th.~\ref{main theorem 2}. Below we address the smoothness of~$G/F$, which we use for~\eqref{KN CCL}. \subsection{Flatness and Smoothness} \begin{lemma}\label{platitude} Assume that~$(G/F)_{\mathbb{F}_p}$ is reduced, for instance that~$G/F$ is smooth over~$\mathbb{Z}_p$. If~$F$ is flat, resp. smooth, then the maps \begin{equation}\label{univ orbit map} \omega:G\to G/F\text{ and }G/F\to \mbox{Spec}(\mathbb{Z}_p) \end{equation} are flat, resp. smooth. \end{lemma} \begin{proof} We know that~$G_{\mathbb{Z}_p}$ is flat and smooth over~$\mathbb{Z}_p$ by hypothesis. If~$F$ is smooth, so are~$F_{\mathbb{F}_p}$ and~$F_{\mathbb{Q}_p}$. From Lemma~\ref{Flat orbit lemma}, we know that \[ G_{\overline{\mathbb{Q}_p}}\to G_{\overline{\mathbb{Q}_p}}\cdot v=S_{\overline{\mathbb{Q}_p}}\text{ and~}G_{\overline{\mathbb{F}_p}}\to G_{\overline{\mathbb{F}_p}}/F_{\overline{\mathbb{F}_p}} \] are flat, resp. smooth morphisms of algebraic varieties. Because~$(G/F)_{\mathbb{F}_p}$ is reduced we have, by Lem.~\ref{Seshadri}, an isomorphism~$G_{\overline{\mathbb{F}_p}}/F_{\overline{\mathbb{F}_p}}\simeq (G/F)_{\overline{\mathbb{F}_p}}$. We may conclude with~\cite[Part 2, \S5.6, Lem.~5.21, p.\,132]{FGA} or~\citestacks[Lem. 37.16.3.]{039D} that~\eqref{univ orbit map} is flat, resp. with~\cite[17.11.1 d)]{EGA44} that~\eqref{univ orbit map} is smooth. (See also~\cite[$\text{VI}_\text{B}$ Prop.~9.2 (xii) (and~V Th.~10.1.2)]{SGA31}.) \end{proof} \begin{lemma}\label{Flat orbit lemma}Over a field~$\kappa$ let~$G\leq GL(n)_\kappa$ be an algebraic subgroup (smooth closed group subscheme), and choose~$v\in\kappa^n$. Then the ``orbit through~$v$'' map \begin{equation}\label{omega smooth?} \omega:G\to G\cdot v \end{equation} is flat, where~$G\cdot v\simeq G/\Stab_G(v)$ is locally closed and given the reduced scheme structure. We have, using Krull dimension, \begin{equation}\label{dim formula} \dim(G\cdot v)=\dim(G)-\dim(\Stab_G(v)). \end{equation} If~$\Stab_G(v)$ is smooth as a group scheme\footnote{In practice~$\dim \Stab_{\mathfrak{g}}(v)=\dim \Stab_{G}(v)$.}, then~$\omega$ is a smooth map and~$G\cdot v$ is smooth (regular). \end{lemma} \begin{proof} According to the Orbit Lemma~\cite[\S{I} 1.8]{BorelLAG}, the orbit~$G\cdot v$ is locally closed. Because~$G\cdot v$ is reduced, by~\citestacks[Prop. 29.27.2]{052B} the flatness locus of the map~$\omega$ is a non-empty (dense open) subset of~$G\cdot v$. But this subset is~$G$-invariant. Thus the map is flat everywhere.
We deduce~\eqref{dim formula} from the flat case of~\citestacks[Lem. 29.28.2.]{02JS} and~\citestacks[Lem. 29.29.3.]{02NL} (using Krull dimension, cf.~\citestacks[Def. 5.10.1.]{0055}). (We can also find~\eqref{dim formula} in~\cite[p.\,7]{GIT}.) Concerning the smoothness, see for instance Prop.~\ref{LMB Prop} and Cor.~\ref{LMB vs Sesh} or~\cite[$\text{VI}_\text{B}$ Prop.~9.2 (xii) (and~V Th.~10.1.2)]{SGA31}. \end{proof} \section{Slopes and weights estimates}\label{sec:slopes} We consider an integer~$n\in\mathbb{Z}_{\geq0}$ and a euclidean distance~$d(~,~)$ on~$\mathbb{R}^n$. The quantities~$c,c'$ and~$\gamma$ will implicitly also depend on~$d(~,~)$. \begin{lemma}\label{lemma cvx 1} Let~$\Sigma$ be a finite set of linear forms on~$\mathbb{R}^n$, and let the function~$h_{\Sigma}:\mathbb{R}^n\to\mathbb{R}_{\geq0}$ be given by \[ h_{\Sigma}(x)=\max_{\lambda\in\{0\}\cup\Sigma}\lambda(x), \] and define~$C=C(\Sigma):=\{x\in\mathbb{R}^n|h_{\Sigma}(x)=0\}$. Then there exist~$c(\Sigma),c'(\Sigma)\in\mathbb{R}_{>0}$ such that: for all~$x\in\mathbb{R}^n$ satisfying \begin{equation}\label{mini to C} d(x,C)=d(x,0) \end{equation} we have \begin{equation}\label{cvx c c'} c(\Sigma)\cdot d(0,x)\leq h_{\Sigma}(x)\leq c'(\Sigma)\cdot d(0,x). \end{equation} \end{lemma} \begin{proof}We may assume that~$x\neq 0$ and, by homogeneity of~\eqref{cvx c c'}, that \begin{equation}\label{sphere} d(0,x)=1. \end{equation} We can rewrite the condition~\eqref{mini to C} as \begin{equation}\label{mini to c} \forall c\in C,~d(0,x)\leq d(c,x). \end{equation} The set~$C^\bot:=\set{x\in\mathbb{R}^n}{d(0,x)=d(C,x)}$ is an intersection of affine half-spaces, and is a closed set~$C^\bot\subseteq \mathbb{R}^{n}$ (it is the \emph{polar dual cone} to~$C$). The intersection~$K$ of~$C^\bot$ with the unit sphere~$\set{x\in\mathbb{R}^n}{d(0,x)=1}$ is thus a compact set. We have~$x\in K$, by~\eqref{mini to C} and~\eqref{sphere}. The continuous function~$h_{\Sigma}$ has a minimum value and maximum value on the compact set~$K$, which we denote by \[ c(\Sigma):=\min_{k\in K}h_{\Sigma}(k)\text{ and }c'(\Sigma):=\max_{k\in K}h_{\Sigma}(k). \] By definition,~\eqref{cvx c c'} is satisfied and we have~$0\leq c(\Sigma)\leq c'(\Sigma)<+\infty$. It will be enough to prove~$0<c(\Sigma)$. Assume by contradiction that~$c(\Sigma)= 0$ and choose~$k\in K$ such that~$h_{\Sigma}(k)=0$. Then~$k\in C$. From~\eqref{mini to c} for~$x=c=k$, we deduce~$d(0,k)\leq d(k,k)=0$, contradicting~\eqref{sphere}. \end{proof} \begin{proposition}\label{prop slopes comparison} We keep the setting of Lemma~\ref{lemma cvx 1}. Let us fix a map~$\mu:\Sigma\to\mathbb{R}_{\leq 0}$, let~$h_{\mu}:\mathbb{R}^n\to \mathbb{R}_{\geq0}$ be defined by \[ h_{\mu}(x)=\max\{0;\max_{\lambda\in\Sigma}\lambda(x)+\mu(\lambda)\}. \] Define~$C=C_{\mu}=\{x\in\mathbb{R}^n|h_{\mu}(x)=0\}$. Then \begin{equation}\label{cvx affine} \forall x\in \mathbb{R}^n, c(\Sigma)\cdot d(C,x)\leq h_{\mu}(x)\leq c'(\Sigma)\cdot d(C,x). \end{equation} \end{proposition} \begin{proof} Define~$\overline{\Sigma}:=\{\lambda\in\Sigma|\mu(\lambda)=0\}$ and~$\overline{C}=\{x\in\mathbb{R}^n|h_{\overline{\Sigma}}(x)=0\}$. We have~$h_{\overline{\Sigma}}\leq h_{\mu}$ and thus \[ C\subseteq \overline{C}. \] In a first step we prove~\eqref{cvx affine} with the extra condition \begin{equation}\label{cvx extra} d(x,C)=d(x,0)\text{ (that is:~$\forall c\in C, d(x,c)\geq d(x,0)$)}. \end{equation} Let~$\norm{~}$ denote the euclidean norms induced by~$d(~,~)$ on~$\mathbb{R}^n$ and its dual.
For~$a\in \mathbb{R}^n$ we have \[ \max_{\sigma\in\Sigma}\sigma(a)\leq \norm{a}\cdot \max_{\sigma\in\Sigma}\norm{\sigma}. \] Define~$\mu_0:=\max\set{\mu(\sigma)}{\sigma\in\Sigma\smallsetminus\overline{\Sigma}}<0$. Then, if~$a\in\mathbb{R}^n$ satisfies \begin{equation}\label{petit a} \norm{a}\cdot \max_{\sigma\in\Sigma}\norm{\sigma}\leq -\mu_0, \end{equation} we have \begin{equation}\label{petit a et negatif} h_{\mu}(a)-h_{\overline{\Sigma}}(a)\leq 0,\text{ and thus }h_{\mu}(a)=h_{\overline{\Sigma}}(a). \end{equation} Let us prove that~$d(x,\overline{C})=d(x,0)$. \begin{proof}We want to prove that for an arbitrary~$b\in \overline{C}$ we have \begin{equation}\label{cvx claim 1} d(x,b)\geq d(x,0). \end{equation} Let~$\lambda\in\mathbb{R}_{>0}$ be sufficiently small so that~$a:=\lambda\cdot b$ satisfies~\eqref{petit a}. We deduce from~\eqref{petit a et negatif} that~$a\in C$ (indeed~$h_{\overline{\Sigma}}(a)=\lambda\cdot h_{\overline{\Sigma}}(b)=0$, hence~$h_\mu(a)=0$), and from~\eqref{cvx extra} that \[ d(x,a)\geq d(x,0). \] Denoting by~$(~,~)$ the euclidean scalar product, this means~$2(a,x)\leq\norm{a}^2$, that is~$2\lambda\,(b,x)\leq\lambda^2\norm{b}^2$. As this holds for every sufficiently small~$\lambda>0$, we get~$(b,x)\leq 0$. It follows that~$d(x,b)^2=d(x,0)^2-2(b,x)+\norm{b}^2\geq d(x,0)^2$, that is,~\eqref{cvx claim 1}. \end{proof} Applying Lemma~\ref{lemma cvx 1}, we deduce~\eqref{cvx affine} under the assumption~\eqref{cvx extra}. We now reduce the general case to the first step, by a translation of the origin of~$\mathbb{R}^n$. Let~$x_0\in C$ be such that \[ d(x,C)=d(x,x_0) \] and define \[ \mu'(\lambda)=\lambda(x_0)+\mu(\lambda), \] so that \[ h_{\mu'}(y)=h_\mu(y+x_0) \] and~$C_{\mu'}=C_{\mu}-x_0$. Thus \[ d(x-x_0,C_{\mu'})=d(x-x_0,C_{\mu}-x_0)=d(x,C_{\mu})=d(x,x_0)=d(x-x_0,0). \] From~$x_0\in C_\mu$, we deduce~$h_{\mu'}(0)=h_\mu(x_0)=0$ and~$\forall\sigma\in\Sigma,\mu'(\sigma)\leq 0$. Then~\eqref{cvx affine} for~$x$ follows from the first step applied to~$x-x_0$. \end{proof} Defining \[ \gamma(\Sigma_0)= \frac{\max\set{c'(\Sigma)}{\Sigma\subseteq \Sigma_0}} {\hspace{2.2pt}\min\set{\phantom{{}'}c(\Sigma)}{\Sigma\subseteq \Sigma_0}} \] we deduce the following. \begin{corollary}\label{coro slopes} Let~$\Sigma_0$ be a finite set of linear forms on~$\mathbb{R}^n$. There exists~$\gamma(\Sigma_0)\in\mathbb{R}_{>0}$ such that, for~$\Sigma,\Sigma'\subseteq \Sigma_0$, $\mu:\Sigma\to\mathbb{R}_{\leq0}$ and~$\mu':\Sigma'\to\mathbb{R}_{\leq0}$ with~$C_{\mu}=C_{\mu'}$, we have \[ \forall x\in \mathbb{R}^n, h_{\mu}(x)\leq \gamma(\Sigma_0)\cdot h_{\mu'}(x). \]
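\paragraph{Example} The following elementary one-dimensional illustration is only meant to make the constants explicit; it is not used in the sequel. Take~$n=1$ with the standard euclidean distance, and~$\Sigma_0=\{\omega;2\omega\}$ where~$\omega(x)=x$. For~$\Sigma=\{\omega\}$ one finds~$C(\Sigma)=\mathbb{R}_{\leq0}$, $K=\{1\}$ and~$c(\Sigma)=c'(\Sigma)=1$; for~$\Sigma'=\{2\omega\}$ one finds~$c(\Sigma')=c'(\Sigma')=2$, so that~$\gamma(\Sigma_0)=2$. Choosing~$\mu(\omega)=-m\leq 0$ and~$\mu'(2\omega)=-2m$ gives \[ h_\mu(x)=\max\{0;x-m\}\text{ and }h_{\mu'}(x)=2\cdot\max\{0;x-m\}, \] hence~$C_\mu=C_{\mu'}=\,]-\infty;m]$, $h_\mu(x)=d(C_\mu,x)$, and~$h_{\mu'}=2\cdot h_\mu=\gamma(\Sigma_0)\cdot h_\mu$, so the constant of Corollary~\ref{coro slopes} is attained in this case.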
\section{Introduction} A key challenge in the molecular simulation of soft matter is posed by the separation of length-scales between its microscopic description and the existence or emergence of mesoscopic structure. In such cases, one often relies on coarse-grained (CG) descriptions of the system that (approximately) integrate out microscopic degrees of freedom \cite{likos2001effective,noid2008multiscale,peter2009multiscale}, to yield a tractable simplified model. Examples in the soft-matter context include polymers~\cite{Praprotnik2007,Pierleoni2007}, biomolecules\cite{Ouldridge2011,Mladek2013,Pak2018}, and colloidal systems~\cite{asakura1954interaction,roth2000depletion}. Such CG descriptions are essential for multi-scale modelling approaches~\cite{Karplus2014,Warshel2014}. However, they are not usually exact, and the associated coarse-graining errors are often difficult to assess. Such CG models have been studied extensively in colloidal systems with depletion interactions~\cite{asakura1954interaction,Lekkerkerker1992,Poon2002}. The typical example is a mixture of relatively large colloidal particles with much smaller non-adsorbing polymers which generate effective attractions between the colloids. This can drive de-mixing, crystallisation, or gelation, depending on the context. Model systems in this context include the Asakura-Oosawa (AO) model \cite{asakura1954interaction} where the CG model can even be exact, if the disparity between colloid and polymer radii is large enough. The theoretical tractability of the AO model arises from a simplified modelling assumption, that polymer particles act as spheres that can interpenetrate. Alternatively, one may consider a mixture where both the colloids and the depletant are modelled as hard spheres. From a theoretical perspective, this is an interesting model in its own right as it undergoes a fluid-fluid de-mixing phase separation for large enough size-disparities and concentrations~\cite{biben1991phase,dijkstra1999phase,roth2000depletion,kobayashi2021critical}. This happens despite the lack of attractive forces between the particles in the model, and can be attributed to geometric packing effects of the big and small spheres. Direct simulation of such mixtures is very challenging, because of the large number of small particles. Accurate CG models are available in this context too~\cite{roth2000depletion}, but the CG representations are not exact: their errors can be detected by accurate computer simulation of the full (FG) mixtures. Hence, such models are natural testing grounds for theories and simulation methods associated with coarse-graining. In this context, we recently developed a method \cite{kobayashi2019correction,kobayashi2021critical} that links a CG description with the underlying fine-grained (FG) description. We call this the \emph{two-level} method, because the CG and FG models describe the same system, with different levels of detail. The method was validated by computations on the AO system~\cite{kobayashi2019correction}, where it provided numerically-exact results for the FG model, even in the regime where the CG description is not quantitatively accurate. The methodology was also applied to the hard sphere mixture~\cite{kobayashi2021critical}, where it provided a quantitative analysis of the critical point associated with de-mixing of the large and small particles. 
These previous results rely on the idea that properties of the FG model can be estimated in terms of some CG quantity, with an additive correction that accounts for the coarse-graining error. This is an importance sampling method, familiar in equilibrium statistical mechanics from free-energy perturbation theory\cite{zwanzig1954high}, which involves reweighting between two thermodynamic ensembles. In the present context, the reweighting factors depend on the free energy of the small spheres, computed in a system where the large particles are held fixed. This free energy can be estimated by an annealing process based on Jarzynski's equality~\cite{jarzynski1997nonequilibrium,crooks2000path,neal2001annealed} that slowly introduces small particles to fixed CG configurations. In this paper, we present an extension of the two-level method that incorporates additional intermediate levels to improve the overall performance. Specifically, we introduce a step in the annealing process where small particles are partially inserted in regions close to big particles. Before finishing the small-particle insertion, we then replace weighted sets of configurations with unweighted ones, duplicating configurations with large weight and deleting ones with low weight. This resampling step allows us to make optimal use of the information available at the intermediate stage, focusing our subsequent computations on configurations that matter. This general approach fits in the framework of sequential Monte Carlo (SMC) \cite{gordon1993novel,del2004feynman,doucet2009tutorial}. Such algorithmic ideas have been successfully applied in applications across disciplines under various names, including population Monte Carlo\cite{iba2001population} or the go-with-the-winners strategy\cite{grassberger2002go}. Examples in computational physics include the pruned-enriched Rosenbluth method for polymers\cite{hsu2011review}, the cloning method for rare events\cite{giardina2011simulating}, and diffusion quantum Monte Carlo\cite{reynolds1982fixed}. We combine the SMC method with an additional variance reduction strategy. Instead of estimating the FG average directly, we combine a CG estimate with estimates of subsequent level differences, using the previous levels as control variate. This is the idea behind multilevel Monte Carlo methods\cite{giles2008multilevel,hoang2013complexity,dodwell2015hierarchical}. The combination of a difference estimate with SMC has been previously investigated for example in Refs.~\onlinecite{jasra2017multilevel,beskos2017multilevel,del2017multilevel}. As in Ref.~\onlinecite{kobayashi2021critical}, we develop a general method alongside its application to highly size-asymmetric binary hard-sphere mixtures, which provide a challenging but well understood example to benchmark our algorithm. This paper is organised as follows: In Section \ref{sec:HSMixture}, we introduce the hard-sphere mixture model. Then in Section \ref{sec:method}, we summarise the setup of the two-level method before presenting our extension to three (or more) levels. The three-level method requires an intermediate level for the hard-sphere mixture, whose details we discuss in Section \ref{sec:IntermediateLevel}. In Section \ref{sec:NumericalResults}, we present a numerical test of the method and compare its performance against the two-level method, and in Section \ref{sec:ConvergenceResults} we present convergence results. We conclude in Section \ref{sec:conclusions}. 
\section{Hard-sphere mixture} \label{sec:HSMixture} Throughout this work, we illustrate the multilevel method with an example system, which is a mixture of large and small hard spheres at size ratio $10 : 1$. This system is challenging for simulation because the big particles may display interesting collective behaviour (in particular, a critical point), but the dominant computational cost for simulation comes from the large number of small particles. However, despite our focus on this single example, we emphasise that the multilevel method is presented in a general way, which should be applicable also in other systems with a separation of length scales. \subsection{Hard sphere mixture} \begin{figure*} \includegraphics[width=\textwidth]{images/Overview.png} \caption{ An overview of the levels of the example system from Section \ref{sec:example} and the structure of the three-level method. (a) A sample of the CG model $p_{\text{C}}$ with $N = 11$ big particles. (b) The two-body potential $V_{\text{RED}}$ used for the CG model. (c) A sample of the full FG model $p_{\text{F}}$ which has the big particle configuration as (a). This system contains $n = 8842$ small particles. (d) A sample of the partially inserted model $p_{\text{I}}$ used in the three-level algorithm. The small particles are primarily inserted around big particles, reducing the number to $n = 4473$. (e) A sketch of the two- and three-level method: starting with a population of CG configurations, we can directly compute importance weights by simulating an annealing process introducing the small particles (upper path, two-level method). Alternatively, we introduce a partially-inserted intermediate level where we interrupt this annealing process and resample to boost relevant configurations (lower path, three-level method). } \label{fig:OverviewMethod} \end{figure*} The example system is a mixture of big and small particles, whose diameters are $\sigma_{\rm B}$ and $\sigma_{\rm S}$ respectively. We consider a periodic box $[0,L]^3$ of linear size $L$ and we work in the grand canonical ensemble. (This choice is particularly relevant for analysis of de-mixing, where the number density of large particles is a suitable order parameter\cite{bruce2003computational}.) In a given configuration, the numbers of big and small particles are $N$ and $n$ respectively; the position of the $i$th big particles is $\mathbf{R}_i$ while the position of the $j$th small particle is $\mathbf{r}_j$. We denote the configurations of big and small particles by $\mathcal{C} = (N; \mathbf{R}_1, \dots, \mathbf{R}_N)$ and $\mathcal{F} = (n; \mathbf{r}_1, \dots, \mathbf{r}_n)$ respectively, and the full configuration is denoted $\mathcal{X}=(\mathcal{C}, \mathcal{F})$.% Since the particles are hard, the temperature plays no role in the following so we set the temperature as $k_{\rm B}T=1$ without any loss of generality. The equilibrium distribution of the mixture is described by a probability density \begin{equation} p_{\text{F}}(\mathcal{C}, \mathcal{F}) = \frac{1}{\Xi_{\text{F}}}e^{\mu_{\rm B} N + \mu_{\rm S} n - U_{\text{F}}(\mathcal{C}, \mathcal{F})} \label{equ:pf} \end{equation} where the subscript $\text{F}$ indicates that we refer to the FG model, $\mu_{\rm B},\mu_{\rm S}$ are the chemical potentials for the large and small particles and $\Xi_{\text{F}}$ is the grand canonical partition function. 
The particles are hard (non-overlapping) so the potential energy is \begin{equation} U_{\text{F}}(\mathcal{C}, \mathcal{F}) = \begin{cases} \infty, & \text{if any particles overlap,} \\ 0, & \text{otherwise}. \end{cases} \end{equation} This $p_{\text{F}}$ is normalised as $1 = \int p_{\text{F}}(\mathcal{C}, \mathcal{F}) {\rm d}\mathcal{C} {\rm d}\mathcal{F}$; the precise meaning of these integrals is given in Appendix \ref{app:DetailsModel}. Within this setting, the dimensionless parameters of the model are the ratio of the particle diameters $\sigma_{\rm B} / \sigma_{\rm S}$, the system size parameter $L/\sigma_{\rm B}$, and the two chemical potentials $\mu_{\rm S},\mu_{\rm B}$. In practice, $\mu_{\rm S}$ is more naturally parametrised by the associated reservoir volume fraction $\eta_{\rm S}^{\rm r}$, which we relate to $\mu_{\rm S}$ via an accurate equation of state \cite{kolafa2004accurate}. Our multi-level method is designed for accurate estimates of properties of the large particles. Specifically, we consider observable quantities of interest $A = A(\mathcal{C})$ that only depend on the large particles. (Examples are discussed in the next Section, see also Fig.~\ref{fig:ExampleProperties}.) Our aim is to compute the equilibrium average of $A$, that is \begin{equation} \langle A \rangle_{\text{F}} = \int A(\mathcal{C}) p_{\text{F}}(\mathcal{C}, \mathcal{F}) {\rm d}\mathcal{C} {\rm d}\mathcal{F}. \label{eqn:FineAverage} \end{equation} Since $A(\mathcal{C})$ does not depend on $\mathcal{F}$, it is natural to define the marginal distribution for the big particles \begin{equation}\label{eqn:marginal} p_{\text{F}}(\mathcal{C}) = \int p_{\text{F}}(\mathcal{C}, \mathcal{F}) \mathrm{d} \mathcal{F}, \end{equation} so that $\langle A \rangle_{\text{F}} = \int A(\mathcal{C}) p_{\text{F}}(\mathcal{C}) {\rm d}\mathcal{C}$. A similar situation occurs in the context of statistics, where one seeks to analyse the behaviour of a few quantities of interest in a high-dimensional system: in that context, the small-particle degrees of freedom in \eqref{eqn:FineAverage} would be referred to as \emph{nuisance parameters}. This means that their values are not required to compute the quantity of interest, but their statistical properties strongly affect the average of this quantity. \subsection{Coarse-grained model} If samples for the marginal distribution $p_{\text{F}}(\mathcal{C})$ could be generated by an MC method for the big particles alone, this would make the system much more tractable by simulation. This is a central idea in coarse-grained modelling\cite{noid2008multiscale}. However, the complexity of packing of the small hard spheres means that $p_{\text{F}}(\mathcal{C})$ is a complex distribution, and it is not possible to sample it exactly. A great deal of effort has gone into developing CG models that approximate this distribution with high accuracy\cite{dijkstra1999phase,roth2000depletion,ashton2011depletion}. A suitable CG model is an equilibrium distribution with probability density \begin{equation} \label{eqn:CGSystem} p_{\text{C}}(\mathcal{C}) = \frac{1}{\Xi_{\text{C}}}e^{\mu_{\rm B} N - U_{\text{C}}(\mathcal{C})} \end{equation} where $\Xi_{\text{C}}$ is the partition function, and the CG (effective) interaction energy is \begin{equation} \label{equ:UC} U_{\text{C}}(\mathcal{C}) = N \Delta\mu + \sum_{i=1}^{N-1}\sum_{j=i+1}^{N} V_2(|\mathbf{R}_i - \mathbf{R}_j|), \end{equation} where $V_2$ is a pairwise interaction potential.
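To make \eqref{equ:UC} concrete, the following minimal sketch (in Python) evaluates $U_{\text{C}}$ for a big-particle configuration in the periodic box, using the minimum-image convention. It is an illustration only, not the production code of this work: the callable \texttt{v2} stands for any vectorised evaluation of the pair potential $V_2$ (for example a tabulation of $V_{\text{RED}}$), and \texttt{delta\_mu} for the constant $\Delta\mu$; these names are placeholders.
\begin{verbatim}
import numpy as np

def cg_energy(R, L, sigma_B, v2, delta_mu):
    # U_C = N*delta_mu + sum of v2(r) over all big-particle pairs (see text);
    # returns infinity if two big (hard) particles overlap.
    R = np.asarray(R, dtype=float).reshape(-1, 3)
    N = len(R)
    U = N * delta_mu
    for i in range(N - 1):
        d = R[i + 1:] - R[i]
        d -= L * np.round(d / L)          # minimum-image convention
        r = np.sqrt((d ** 2).sum(axis=1))
        if np.any(r < sigma_B):           # hard-core overlap of two big particles
            return np.inf
        U += v2(r).sum()
    return U
\end{verbatim}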
Averages with respect to the CG model are denoted as \begin{equation} \langle A \rangle_{\rm C} = \int A(\mathcal{C}) p_{\text{C}}(\mathcal{C}) \mathrm{d}\mathcal{C}. \end{equation} For a suitably chosen $V_2$, the coarse distribution $p_{\text{C}}(\mathcal{C})$ can be an accurate approximation to $p_{\text{F}}(\mathcal{C})$. For the CG model in this work, we take the accurate potential $V_2 = V_{\text{RED}}$, developed by Roth, Evans, and Dietrich \cite{roth2000depletion}. Following Ref.~\onlinecite{kobayashi2021critical}, we choose $\Delta\mu$ such that the distributions of $N$ coincide for FG and CG models. \subsection{Benchmark system: parameters and observables}\label{sec:example} Throughout the paper, we benchmark our numerical methods by considering the hard-sphere mixture with fixed parameters, as follows. We take the ratio of particle sizes $(\sigma_{\rm B}/\sigma_{\rm S})=10$; the linear size of the periodic system is $L=31\sigma_{\rm S}$, and the small-particle (reservoir) volume fraction is $\eta_{\rm S}^{\rm r}=0.2$. This volume fraction is large enough to generate a significant depletion attraction between the large particles, but not strong enough to cause de-mixing of the large and small particles\cite{kobayashi2021critical}. Aspects of the CG and FG models are illustrated in Fig.~\ref{fig:OverviewMethod}(a-c), for these parameters. In particular, we show representative configurations of the CG and FG models, as well as a plot of the RED potential. While direct GCMC sampling of the full mixture is possible in principle, it should be apparent from Fig.~\ref{fig:OverviewMethod}(c) that this would be intractable, because insertion of large particles in such a fluid is hardly ever possible. Advanced MC methods\cite{ashton2011grand,ashton2011depletion} might be applicable but these tend to struggle when the volume fraction gets large. This motivates the development of two-level and multi-level methods. \begin{figure} \centering \includegraphics[width=\columnwidth]{images/plot-mu_b.pdf} \caption{ Properties of a big-particle-only hard sphere model with RED potential when varying the effective chemical potential $\mu_{B}^{\text{eff}}$ as defined in the main text. (a) The average number of big particles $\langle N\rangle $. (b) The variance of the number of big particles $\var(N)$, which is maximised around $\mu_B^{\text{eff}} = -7$. } \label{fig:ChoiceOfMu} \end{figure} Fig.~\ref{fig:ChoiceOfMu} highlights properties of the distribution of the number of big particles for the CG model when varying the effective large-particle chemical potential $\mu_{\rm B}^{\text{eff}} = \mu_{\rm B} - \Delta\mu$. In particular, Fig.~\ref{fig:ChoiceOfMu}(b) shows that increasing $\mu_{\rm B}^{\text{eff}}$ in the CG model leads to a non-monotonic behaviour in the variance of the particle number $N$ (analogous to the compressibility of the model). This maximum indicates that the system has a tendency for de-mixing at larger $\eta_{\rm S}^{\rm r}$ (one expects a divergent compressibility at the critical point, if one exists). In the following, we fix $\mu_{\rm B}$ at the value corresponding to this maximum -- the relatively large fluctuations at this point are challenging for the multi-level method, because the distributions $p_{\text{C}}(\mathcal{C})$ and $p_{\text{F}}(\mathcal{C})$ are broader, requiring good sampling. The corresponding CG system has an average of $N \approx 11.6$ big particles, occupying around $20\%$ of the available volume.
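We note in passing that scans such as Fig.~\ref{fig:ChoiceOfMu} can in principle be obtained from a single grand-canonical run by standard histogram reweighting in $N$: shifting the chemical potential from $\mu$ to $\mu'$ only tilts the distribution of $N$ by a factor $e^{(\mu'-\mu)N}$ (recall $k_{\rm B}T=1$). The sketch below illustrates this; it is given for orientation only and is not necessarily the procedure used to produce the figure.
\begin{verbatim}
import numpy as np

def reweight_N(N_samples, dmu_values):
    # Reweight grand-canonical samples of N, collected at a reference
    # chemical potential, to shifted potentials mu + dmu:
    # weights are proportional to exp(dmu * N).
    N = np.asarray(N_samples, dtype=float)
    means, variances = [], []
    for dmu in dmu_values:
        logw = dmu * N
        w = np.exp(logw - logw.max())     # avoid overflow
        w /= w.sum()
        m = (w * N).sum()
        means.append(m)
        variances.append((w * (N - m) ** 2).sum())
    return np.array(means), np.array(variances)
\end{verbatim}
As usual for reweighting, the estimates are reliable only for shifts \texttt{dmu} small enough that the sampled histogram of $N$ still covers the reweighted distribution.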
For the specific quantities that we will compute for this mixture, Fig.~\ref{fig:ExampleProperties} shows the expectations of the big-particle pair correlation function $g(r)$ and the distribution of the number of big particles $P(N)$. Results are shown for both CG and FG models (in the FG case, results are computed using the two-level method). For both quantities of interest, the CG model provides an accurate but not exact description of the model. In particular, the CG model underestimates the pair correlation at the point where two big particles are in contact. The distributions of the number of big particles in Figure \ref{fig:ExampleProperties}(b) are both unimodal: both the FG and CG systems are well below the critical point of demixing. Compared to the critical hard-sphere mixture discussed in Ref.~\onlinecite{kobayashi2021critical}, the system we consider here is smaller and has a lower volume fraction $\eta_{\rm S}^{\rm r}$ of the small particles. This is still challenging for conventional Monte Carlo algorithms, but can be simulated fast enough to evaluate the performance and compare the computational methods discussed here. Furthermore, the lower small-particle volume fraction helps with the construction of the intermediate level in Section \ref{sec:IntermediateLevel}, whose underlying approximation decays as $\eta_{\rm S}^{\rm r}$ increases, see Appendix \ref{app:DetailsIntermediateLevel}. \begin{figure} \centering \includegraphics[width=\columnwidth]{images/plot-CG-FG.pdf} \caption{ Two quantities of interest for the binary hard-sphere system, computed for the CG and FG model from Section \ref{sec:example}. (a) The big-particle pair correlation function $g(r)$. Apart from underestimating its value at the touch of two big particles, the CG approximation captures the behaviour accurately. (b) The distribution of the number of big particles $N$. By the choice of $\Delta \mu$, the average number of big particles of the CG and FG models coincide. Both models are clearly well below the critical point of demixing. } \label{fig:ExampleProperties} \end{figure} \section{Multilevel simulation} \label{sec:method} \subsection{Overview} This section reviews the two-level method of Refs.~\onlinecite{kobayashi2019correction,kobayashi2021critical}, and then lays out its three-level extension. The presentation of the method is intended to be generic and applicable to a variety of systems. However we first introduce the key ideas using the example and illustrations of Fig.~\ref{fig:OverviewMethod}, for the hard-sphere mixture. The two-level method is constructed with the scale separation of the mixture in mind: it splits the simulation of the big and small spheres into two stages by first simulating a CG system of large particles alone, and computing $\langle A \rangle_{\text{C}}$. Then, differences between $\langle A \rangle_{\text{C}}$ and $\langle A \rangle_{\text{F}}$ are computed by a reweighting (importance sampling) method. The weight factors for this computation are obtained by an annealing step, where the small particles are slowly inserted into the system, with the large particles held fixed (see Fig.~\ref{fig:OverviewMethod}(e)). The advantage of this procedure is that large particle motion only happens in the CG simulation where the small particles are absent -- there is no scale separation in this case so simulations are tractable. 
Similarly, insertion of the small particles happens in a background of fixed large particles, so these annealing simulations do not suffer long time scales associated with large-particle motion. This makes for tractable simulations in scale-separated systems, as long as the CG model is sufficiently accurate: see Refs.~\onlinecite{kobayashi2019correction,kobayashi2021critical} for further discussion. In practice, the simulation effort for two-level computations is dominated by the annealing step. The weighting factors are required to high accuracy, which means that the annealing must be done gradually. Moreover, the weights are subject to numerical uncertainties that tend to be large in systems with many small particles. This limits the method to systems of moderate size, with moderate $\eta_{\rm S}^{\rm r}$, see Ref.~\onlinecite{kobayashi2021critical}. We show in this work that such problems can be reduced by breaking the annealing process into several stages -- this is the idea of the three-level method (Fig.~\ref{fig:OverviewMethod}(e)). Specifically, we start (as before) with a population of configurations of the CG model. We perform a first annealing step where the small particles are added in regions that are close to large ones. The information from this step is used in a resampling process, which partially corrects the coarse-graining error by discarding some of the configurations from the population, and duplicating others. (This idea is similar to go-with-the-winners~\cite{grassberger2002go}.) Finally, the second annealing step inserts the small particles in the remaining empty regions, arriving at configurations of the FG model. Hence the end point is the same as the two-level method, but the annealing route is different. In practice, the effectiveness of the three-level method relies on a clear physical understanding of the intermediate (partially-inserted) system, in order to decide which configurations to discard in the resampling step. That issue will be discussed in Sec.~\ref{sec:IntermediateLevel}; the remainder of this Section describes the two- and three-level methods in more detail. \subsection{Two-level method} \label{sec:TwoLeve} We review the two-level method of Refs.~\onlinecite{kobayashi2019correction,kobayashi2021critical}. For a general presentation, we assume that CG and FG models exist with configurations $\mathcal{C}$ and $\mathcal{X}=(\mathcal{C},\mathcal{F})$ respectively. In the case of hard spheres, $\mathcal{C}$ and $\mathcal{F}$ correspond to configurations of the large and small spheres respectively. The two-level method is an importance sampling\cite{robert2004monte} (or reweighting) computation, closely related to the free-energy perturbation method of Zwanzig\cite{zwanzig1954high}. We use the grand canonical Monte Carlo (GCMC) method to sample $M_{\text{C}}$ configurations from $p_{\rm C}$; these are denoted by $\mathcal{C}^1,\mathcal{C}^2,\dots,\mathcal{C}^{M_{\text{C}}}$. Then, the CG average can be estimated as \begin{equation} \label{eqn:CGEstimator} \hat A_{\text{C}} = \frac{1}{M_{\text{C}}} \sum_{j=1}^{M_{\text{C}}} A(\mathcal{C}^j). \end{equation} As the sampling is increased ($M_{\text{C}} \to \infty$) we have $\hat A_{\text{C}} \to \langle A \rangle_{\text{C}}$.
However, if the coarse-graining error \begin{equation} \label{eqn:CGError} \Delta = \langle A \rangle_{\text{F}} - \langle A \rangle_{\text{C}} \end{equation} is significant then $\hat A_{\text{C}}$ does not provide an accurate estimate of $\langle A \rangle_{\rm F} $. To address this problem, we use an annealing procedure based on Jarzynski's equality \cite{jarzynski1997nonequilibrium} that starts from a coarse configuration $\mathcal{C}$ and populates the fine degrees of freedom $\hat \mathcal{F}$; at the same time, it generates a random weight $\hat{W}(\mathcal{C})$ with the property that \begin{equation} \label{eqn:unnormalisedWeight} \langle \hat W(\mathcal{C}) \rangle_{\rm J} = \frac{\xi p_{\rm F}(\mathcal{C})}{p_{\rm C}(\mathcal{C})} , \end{equation} where the angle brackets with subscript J indicate an averaging over the annealing process (analogous to Jarzynski's equality\cite{jarzynski1997nonequilibrium}), and $\xi$ is a constant (independent of $\mathcal{C}$). The details of the annealing process are given in Appendix~\ref{app:FreeEnergy}. It is applied to a set of $M_{\text{F}}$ coarse configurations, again denoted by $\mathcal{C}^1, \mathcal{C}^2, \dots, \mathcal{C}^{M_{\text{F}}}$, which are typically a subset of the $M_{\text{C}}$ CG configurations above. For later convenience, we define \begin{equation} \hat{W}^{\rm n}(\mathcal{C}) = \hat{W}(\mathcal{C})/\xi \; . \end{equation} In practical applications, the constant $\xi$ is not known but its effect can be controlled by defining the self-normalised weight \begin{equation} \label{eqn:NormalisedWeight} \hat{w}(\mathcal{C}^j) = \frac{\hat{W}(\mathcal{C}^j)}{\frac{1}{M_{\text{F}}} \sum_{i=1}^{M_{\text{F}}} \hat{W}(\mathcal{C}^i)}. \end{equation} Since the $\mathcal{C}^j$ are representative of $p_{\rm C}$, the denominator in $\hat{w}$ converges to $\xi$ as $M_{\text{F}}\to\infty$ and so $\hat{w}(\mathcal{C}^j) \to \hat{W}^{\rm n}(\mathcal{C}^j)$. Then, the estimator \begin{equation} \label{eqn:ISEstimator} \hat A_{\text{F}} = \frac{1}{M_{\text{F}}} \sum_{j=1}^{M_{\text{F}}} \hat{w}(\mathcal{C}^j) A(\mathcal{C}^j) \end{equation} converges to $\langle A \rangle_{\rm F}$ as $M_{\text{F}} \to \infty$. (In the case that $\hat{W}$ is not random then this procedure recovers the free energy perturbation theory of Zwanzig\cite{zwanzig1954high}.) The annealing process has one useful additional property: Let the joint probability density for the weight and the fine degrees of freedom be $\kappa(\hat{W},\hat{\mathcal{F}} | \mathcal{C})$, which is normalised as $\int \kappa(\hat W, \hat \mathcal{F} | \mathcal{C}) \mathrm{d} \hat \mathcal{F} \mathrm{d} \hat W = 1$. We show in Appendix~\ref{app:FreeEnergy} that \begin{equation} \int \hat W \kappa(\hat W, \hat \mathcal{F} \mid \mathcal{C}) \mathrm{d} \hat W = \frac{ \xi p_{\text{F}}(\mathcal{C} , \hat \mathcal{F}) }{ p_{\text{C}}(\mathcal{C}) }. \label{equ:jarz-fine-property} \end{equation} This formula is the essential property of the annealing procedure, which is required for the operation of the method. Additionally integrating over $\hat \mathcal{F}$ shows that \eqref{equ:jarz-fine-property} ensures that \eqref{eqn:unnormalisedWeight} also holds. This means in turn that if $B = B(\mathcal{C}, \mathcal{F})$ is an observable quantity that depends on both coarse and fine degrees of freedom then \begin{equation} \hat{B}_{\rm F} = \frac{1}{M_{\text{F}}} \sum_{j=1}^{M_{\text{F}}} \hat w(\mathcal{C}^j)B(\mathcal{C}^j,\hat \mathcal{F}^j). 
\label{equ:hatB-2} \end{equation} converges to $\langle B \rangle_{\rm F}$ as $M_{\text{F}} \to \infty$. This method can be easily improved without extra computational effort. The key idea~\cite{giles2008multilevel,hoang2013complexity,dodwell2015hierarchical} is to estimate the FG average as the sum of the CG average and the coarse-graining error \eqref{eqn:CGError} \begin{equation} \langle A \rangle_{\text{F}} = \langle A \rangle_{\text{C}} + \Delta. \end{equation} Importance sampling is then used to estimate $\Delta$, as \begin{equation}\label{eqn:ErrorEstimator} \hat \Delta = \frac{1}{M_{\text{F}}} \sum_{j=1}^{M_{\text{F}}} \left( \hat w(\mathcal{C}^j) - 1 \right) A(\mathcal{C}^j). \end{equation} Finally, a suitable estimator for the FG average is obtained by combining the estimate of the coarse-graining error with the corresponding CG quantity: \begin{equation} \hat A_{\text{F}, \Delta} = \hat A_{\text{C}} + \hat \Delta. \label{equ:A-TML} \end{equation} This estimator converges to $\langle A \rangle_{\rm F}$ in the limit where $M_{\text{C}}, M_{\text{F}} \to \infty$. As discussed in Ref.~\onlinecite{kobayashi2019correction}, the variance of the estimate $\hat \Delta$ is typically smaller than that of $\hat A_{\text{F}}$, and the CG estimate $\hat A_{\text{C}}$ is cheap to compute accurately. Thus, the combined difference estimator $\hat A_{\text{F}, \Delta}$ is typically more accurate at fixed computational cost. The importance sampling methodology has a useful physical interpretation, which we explain for the example of the hard-sphere mixture. If we consider a fixed configuration of the large particles, then the grand canonical partition function for the small particles is \begin{equation} \Xi[\mathcal{C}, \mu_S] = \int e^{\mu_{\rm S} n - U_{\text{F}}(\mathcal{C}, \mathcal{F})} \mathrm{d} \mathcal{F}. \label{eqn:GCPotential} \end{equation} As the system is annealed (the small particles are inserted), we estimate \eqref{eqn:GCPotential} by a free-energy method based on Jarzynski's equality \cite{jarzynski1997nonequilibrium}, see Appendix~\ref{app:FreeEnergy} for details. Since the annealing is stochastic, this yields an estimate of the partition function, which we denote by $\hat \Xi[\mathcal{C}, \mu_S]$. Moreover, this estimate is unbiased: $\langle \hat \Xi[\mathcal{C}, \mu_S] \rangle_{\rm J} = \Xi[\mathcal{C}, \mu_S]$. Hence we can take \begin{equation} \label{eqn:defUnnormalisedWeight} \hat W(\mathcal{C}) = \hat \Xi[\mathcal{C}, \mu_{S}] e^{U_{\text{C}}(\mathcal{C})} \end{equation} and using (\ref{eqn:marginal},\ref{eqn:CGSystem}), we see that \eqref{eqn:unnormalisedWeight} holds, with $\xi = (\Xi_{\rm C} / \Xi_{\rm F})$. Physically, the CG model is constructed so that the Boltzmann factor $e^{-U_{\text{C}}(\mathcal{C})}$ is a good estimate of the small-particle partition function $\Xi[\mathcal{C}, \mu_{S}]$. If this is the case then the model is accurate. The two-level methodology uses estimates of the small-particle partition function (or, equivalently, its free energy) and compares them with the assumptions that were made about this quantity in the CG model. By analysing the differences between these quantities, the differences between CG and FG models can be quantified. The effectiveness of this method for numerical simulation of the mixtures of large and small particles was discussed in Refs.~\onlinecite{kobayashi2019correction,kobayashi2021critical}.
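To make the two-level recipe concrete, the following sketch assembles the self-normalised weights of Eq.~(\ref{eqn:NormalisedWeight}) and the estimators of Eqs.~(\ref{eqn:ISEstimator}), (\ref{eqn:ErrorEstimator}) and (\ref{equ:A-TML}) from raw annealing weights. It is an illustration only: the arrays of observable values and raw weights are hypothetical placeholders for the output of the CG sampling and annealing runs, not part of our production code.
\begin{verbatim}
import numpy as np

def self_normalised_weights(W_raw):
    # Eq. (eqn:NormalisedWeight): w_j = W_j / [(1/M_F) sum_i W_i]
    return np.asarray(W_raw) / np.mean(W_raw)

def two_level_estimates(A_coarse, A_annealed, W_raw):
    """Two-level estimates of <A>_F.
    A_coarse   : A(C) evaluated on the M_C coarse-only samples
    A_annealed : A(C^j) on the M_F coarse samples that were annealed
    W_raw      : raw annealing weights hat W(C^j)
    Returns the plain estimator (eqn:ISEstimator) and the difference
    estimator (equ:A-TML)."""
    w = self_normalised_weights(W_raw)
    A_F = np.mean(w * A_annealed)              # Eq. (eqn:ISEstimator)
    Delta = np.mean((w - 1.0) * A_annealed)    # Eq. (eqn:ErrorEstimator)
    A_F_Delta = np.mean(A_coarse) + Delta      # Eq. (equ:A-TML)
    return A_F, A_F_Delta
\end{verbatim}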
\begin{figure} \centering \includegraphics[width=0.9\columnwidth]{images/plot-distr-w.pdf} \caption{ The empirical distribution of weights $\hat w(\mathcal{C})$ of the two-level method for $18000$ coarse samples $\mathcal{C} \sim p_{\text{C}}$ applied to the example system from Section \ref{sec:example}. } \label{fig:Weights} \end{figure} The distribution of the importance weights $\hat w(\mathcal{C})$ impacts the accuracy of the resulting FG estimate $\hat A_{\text{F}}$. Additionally, it serves as a useful indicator of the accuracy of the CG model and the variance of the free energy computation. To give an example, we apply the two-level method to the example problem from Section~\ref{sec:example}. In Figure \ref{fig:Weights}, we show the empirical distribution of $18000$ weights for the example system, computed using an accurate annealing process; we use these computations as the reference solution in Section \ref{sec:NumericalResults}. This illustrates a situation where the two-level method is applicable, where no single sample dominates and only very few samples have a weight larger than $10$. If one considers less accurate CG models, the variance of the weights increases, and the tail of their distribution gets heavier. Eventually, one would reach a situation where a few samples dominate the weighted sum \eqref{eqn:ISEstimator}. For accurately computed weights $\hat w(\mathcal{C})$, such a breakdown of the two-level method indicates that the CG model is not sufficiently accurate. This behaviour provides a useful feedback loop which can be used to iterate on the CG model itself \cite{kobayashi2019correction}. \subsection{Three-level method}\label{sec:threelevel} We now present the three-level method for estimation of $\langle A(\mathcal{C}) \rangle_{\text{F}}$. \subsubsection{Coarse level} \label{par:CoarseLevel} We start by generating $M_{0}$ samples of the CG model, denoted by $\mathcal{C}^1_0, \dots, \mathcal{C}_0^{M_{0}}$. The subscript indicates the step within the algorithm (which is $0$ for the initial sampling of coarse configurations). The CG average of $A$ can be estimated similarly to (\ref{eqn:CGEstimator}): \begin{equation} \hat A_{\text{C}}^{\text{3L}} = \frac{1}{M_{0}} \sum_{j=1}^{M_{0}} A(\mathcal{C}^j_0) \; . \end{equation} \subsubsection{Intermediate level} \label{sec:3-int} \begin{figure} \centering \includegraphics[width=\columnwidth]{images/ThreeLevelAIS.png} \caption{Schematic representation of the small particle insertion process during the two stages of the three-level method. } \label{fig:AISStages} \end{figure} In addition to the CG and FG models, the three-level method also relies on an intermediate set of configurations, which correspond in the hard-sphere mixture to the system where the small particles have been inserted in regions close to the large ones; see Fig.~\ref{fig:AISStages}. This state is described by an equilibrium probability distribution \begin{equation}\label{eqn:AbstractIntermediateDistribution} p_{\text{I}}(\mathcal{C}, \mathcal{F}) = \frac{1}{\Xi_{\text{I}}} e^{\mu_{\rm B} N + \mu_{\rm S} n - U_{\text{I}}(\mathcal{C}, \mathcal{F})}, \end{equation} where $U_{\text{I}}(\mathcal{C}, \mathcal{F})$ is an interaction energy. Its construction for the hard-sphere mixture will be discussed in Sec.~\ref{sec:IntermediateLevel}, below. The first annealing step of the three-level algorithm applies the two-level method, with the FG distribution $p_{\rm F}$ replaced by $p_{\rm I}$.
This part of the algorithm closely follows the previous section; we give a brief discussion which mostly serves to fix notation. We start with a set of $M_{1}$ coarse configurations which are samples of $p_{\rm C}$; they are denoted by $\mathcal{C}^1_1, \mathcal{C}^2_1, \dots, \mathcal{C}_1^{M_{1}}$, where now the subscript $1$ indicates the intermediate stage of the three-level method. (These will typically be a subset of the configurations that were generated on the coarse level.) For each coarse configuration $\mathcal{C}_1$, we anneal the fine degrees of freedom of the system $\hat \mathcal{F}_1$ to arrive at the intermediate level and generate a random weight $\hat W_1(\mathcal{C}_1)$ with the property \begin{equation} \langle \hat W_1(\mathcal{C}_1) \rangle_{\rm J} = \frac{\xi_1 p_{\text{I}}(\mathcal{C}_1)}{p_{\text{C}}(\mathcal{C}_1)}, \label{eqn:AvgW1} \end{equation} with a constant $\xi_1$ independent of $\mathcal{C}_1$. (For the hard spheres, we recall that particles are inserted preferentially in regions close to large ones; this is illustrated in the top row of Figure \ref{fig:AISStages}.) As before, we define $ \hat W^{\rm n}_1(\mathcal{C}_1) = \hat W_1(\mathcal{C}_1) / \xi_1. $ Again the constant $\xi_1$ is generally not known, so we define the self-normalised weight \begin{equation} \hat w_1(\mathcal{C}_1^j) = \frac{\hat W_1(\mathcal{C}_1^j)}{\frac{1}{M_{1}} \sum_{i=1}^{M_1} \hat W_1(\mathcal{C}_1^i)}, \end{equation} which converges to $\hat W^{\rm n}_1(\mathcal{C}_1^j)$ as $M_{1} \to \infty$. Then, the estimator \begin{equation} \hat A^{\text{3L}}_{\text{I}} = \frac{1}{M_1} \sum_{j=1}^{M_1} \hat w_1(\mathcal{C}_1^j) A(\mathcal{C}_1^j) \end{equation} converges to $\langle A \rangle_{\text{I}}$ as $M_1 \to \infty$. Similar to (\ref{equ:jarz-fine-property}), the joint probability density $\kappa_1(\hat W_1, \hat \mathcal{F}_1 \mid \mathcal{C}_1)$ of the weight and fine degrees of freedom at the intermediate level, defined by the annealing process, fulfils \begin{equation} \int \hat W_1 \kappa_{1}(\hat W_1, \hat \mathcal{F}_1 \mid \mathcal{C}_1) \mathrm{d} \hat W_1 = \frac{ \xi_1 p_{\text{I}}(\mathcal{C}_1, \hat \mathcal{F}_1) }{ p_{\text{C}}(\mathcal{C}_1) } \label{equ:jarz-fine-property-int}. \end{equation} Hence, similar to \eqref{equ:hatB-2} we also obtain \begin{equation} \hat{B}^{\rm 3L}_{\text{I}} = \frac{1}{M_1} \sum_{j=1}^{M_1} \hat w_1(\mathcal{C}_1^j)B(\mathcal{C}_1^j, \hat \mathcal{F}_1^j) \end{equation} which converges to $\langle B \rangle_{\text{I}}$ as $M_1 \to \infty$. \subsubsection{Fine level} \label{sec:3-fine} At the end of the intermediate level, we have $M_1$ large-particle configurations. For each configuration $\mathcal{C}_1^j$, the process of annealing to the intermediate level also provided the weight $\hat w_{1}(\mathcal{C}_1^j)$ and the small-particle configuration $\hat \mathcal{F}_1^j$. This information can be used to build a set of configurations that are representative of $p_{\rm I}$. This procedure is called \emph{resampling}; its validity in this example relies on the property \eqref{equ:jarz-fine-property-int} of the annealing procedure. This is the part of the method that is similar to population-based sampling approaches such as SMC~\cite{hsu2011review} or go-with-the-winners~\cite{grassberger2002go}. The idea is that one should focus the effort of the annealing process onto coarse configurations which are typical of the full system, and to discard those which are atypical; see Fig.~\ref{fig:Resampling} for a visualisation of this step.
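As a concrete illustration of this step, the sketch below implements the two resampling schemes discussed in the next paragraph (multinomial and residual resampling). It is a minimal sketch only: the weight array stands for the self-normalised weights $\hat w_1$, and the population is an arbitrary list of configurations; neither is tied to our actual data structures.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def multinomial_resample(weights, M2):
    """Copy configurations at random, with probability proportional to weight."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=M2, p=p)

def residual_resample(weights, M2):
    """Residual resampling: deterministic copies for the integer part of
    M2*p_j, remaining slots filled multinomially (lower variance)."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    counts = np.floor(M2 * p).astype(int)
    idx = np.repeat(np.arange(len(p)), counts)
    n_left = M2 - counts.sum()
    if n_left > 0:
        resid = M2 * p - counts
        extra = rng.choice(len(p), size=n_left, p=resid / resid.sum())
        idx = np.concatenate([idx, extra])
    return idx

# The resampled population is then [population[i] for i in indices].
\end{verbatim}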
We write $\hat \mathcal{X}_1^j=(\mathcal{C}_1^j,\hat \mathcal{F}_1^j)$ for the full configuration that is obtained by the annealing procedure at the intermediate level. The resampled configurations will be denoted by $\mathcal{X}_2^1,\mathcal{X}_2^2,\dots,\mathcal{X}_2^{M_2}$; they are representative of the intermediate level $p_{\text{I}}$. There are $M_2$ of them, and the subscript $2$ indicates the final stage of the three-level method. The simplest resampling method (\emph{multinomial resampling}) is that each $\mathcal{X}^i_2$ is obtained by copying one of the $\hat \mathcal{X}_1^j$, chosen at random with probability proportional to $\hat{w}_{1}(\mathcal{C}^j_1)$. In applications, one typically replaces this by a lower-variance resampling scheme such as residual resampling; see Ref.~\onlinecite{douc2005comparison} for a comparison of commonly used variants. \begin{figure} \centering \includegraphics[width=\columnwidth]{images/Resampling.png} \caption{ Visualisation of the resampling step. We start with a population of weighted configurations (top row), where the weighting is depicted by a star rating. The goal of the resampling step is to randomly transform the weighted population into an unweighted one that has, on average, the same empirical distribution. We achieve this by duplicating large-weight configurations and deleting small-weight configurations, yielding an unweighted population of configurations (bottom row). } \label{fig:Resampling} \end{figure} We then perform the second annealing step that starts from an intermediate level configuration $\mathcal{X}_2 = (\mathcal{C}_2, \mathcal{F}_2)$ and anneals the fine degrees of freedom from the intermediate to the fine level, yielding $\hat \mathcal{F}_2$ and a weight $\hat W_2(\mathcal{X}_2)$; details are given in Appendix \ref{app:FreeEnergy}. For the hard-sphere system, this involves further insertion of small particles, to fill the system and generate realistic configurations of the full mixture. This procedure is shown in the bottom row of Figure \ref{fig:AISStages}. Since the starting point of the annealing procedure is $\mathcal{X}_2$, the joint probability density of the annealing process $\kappa_2(\hat W_2, \hat \mathcal{F}_2 \mid \mathcal{X}_2)$ depends on both large and small particles. Therefore, the analogue of (\ref{equ:jarz-fine-property-int}) requires an additional average over the small particles of the starting configuration: \begin{multline} \int \hat W_2 \kappa_2(\hat W_2, \hat \mathcal{F}_2 \mid \mathcal{C}_2, \mathcal{F}_2) p_{\text{I}}(\mathcal{F}_2 \mid \mathcal{C}_2) \mathrm{d} \hat W_2 \mathrm{d} \mathcal{F}_2 \\ \quad = \frac{\xi_2 p_{\rm F}(\mathcal{C}_2, \hat \mathcal{F}_2) }{ p_{\rm I}(\mathcal{C}_2)} \label{equ:WF-ave} \end{multline} for some constant $\xi_2$. Note that $p_{\rm I}(\mathcal{F} | \mathcal{C}) = p_{\rm I}(\mathcal{C},\mathcal{F})/p_{\rm I}(\mathcal{C})$. Similar to \eqref{eqn:AvgW1}, the weights $\hat W_2(\mathcal{X}_2)$ have the property \begin{equation} \int \langle \hat W_2(\mathcal{C}_2, \mathcal{F}_2) \rangle_{\rm J} p_{\text{I}}(\mathcal{F}_2 \mid \mathcal{C}_2) \mathrm{d} \mathcal{F}_2 = \frac{\xi_2 p_{\text{F}}(\mathcal{C}_2)}{p_{\text{I}}(\mathcal{C}_2)}. \label{eqn:MarginalIntegralW2} \end{equation} From here, we proceed as before.
We define the normalised weight $ \hat{W}^{\rm n}_2(\mathcal{X}_2) = \hat W_2(\mathcal{X}_2) / \xi_2 $ and its self-normalised estimate \begin{equation} \hat w_{2}(\mathcal{X}_2^j) = \frac{\hat W_2(\mathcal{X}^j_2)}{\frac{1}{M_2} \sum_{i=1}^{M_2} \hat W_2(\mathcal{X}^i_2)}. \end{equation} Since the $\mathcal{X}_2^j$ are representative of $p_{\rm I}$, it follows from \eqref{equ:WF-ave} that the FG average of a coarse observable $A(\mathcal{C})$ can be estimated as \begin{equation} \label{eqn:IntermediateEstimate} \hat A_{\text{F}}^{\text{3L}} = \frac{1}{M_2} \sum_{j=1}^{M_2} \hat{w}_2(\mathcal{X}_2^j) A(\mathcal{C}_2^j), \end{equation} which converges to $\langle A \rangle_{\text{F}}$ as $M_2 \to \infty$. Similar to \eqref{equ:hatB-2}, we can also obtain consistent FG estimates of observable quantities $B$ that depend on both coarse and fine degrees of freedom: \begin{equation} \hat B_{\text{F}}^{\text{3L}} = \frac{1}{M_2} \sum_{j=1}^{M_2} \hat w_2(\mathcal{X}_2^j) B(\mathcal{C}_2^j, \hat \mathcal{F}_2^j). \label{eqn:IntermediateEstimateB} \end{equation} Following the same variance reduction strategy as in Section \ref{sec:TwoLeve}, we can define a difference estimator of the FG average, which is expected to have lower statistical uncertainty: let \begin{align} \hat \Delta_{\text{I}}^{\text{3L}} &= \frac{1}{M_{1}} \sum_{j=1}^{M_{1}} \left( \hat w_1(\mathcal{C}_1^j) - 1 \right) A(\mathcal{C}_1^j), \label{eqn:3LDiffEst1}\\ \hat \Delta_{\text{F}}^{\text{3L}} &= \frac{1}{M_{2}} \sum_{j=1}^{M_{2}} \left( \hat w_{2}(\mathcal{X}^j_2) - 1 \right) A(\mathcal{C}_2^j). \label{eqn:3LDiffEst2} \end{align} Then \begin{equation} \label{eqn:Multilevel3LEstimator} \hat A_{\text{F}, \Delta}^{\text{3L}} = \hat A_{\text{C}}^{\text{3L}} + \hat \Delta_{\text{I}}^{\text{3L}} + \hat \Delta_{\text{F}}^{\text{3L}} \end{equation} is a consistent estimator of $\langle A\rangle_{\rm F}$, analogous to (\ref{equ:A-TML}). \subsubsection{General features of the three-level method} \label{sec:properties} A few comments on the three-level method are in order. First, there is a simple generalisation to four or more levels by splitting the annealing procedure into more than two stages. As such, the method is an example of a sequential Monte Carlo (SMC) algorithm (which is sometimes more descriptively referred to as sequential importance sampling and resampling~\cite{cappe2005inference,del2006sequential,ionides2006inference,hsu2011review}). We note from (\ref{eqn:unnormalisedWeight}) that the weights obtained from the annealing step are random; this is not the standard situation in SMC, but similar ideas have been studied previously in Refs.~\onlinecite{fearnhead2008particle,fearnhead2010random,naesseth2015nested,rohrbach2022convergence}. Combining an SMC algorithm with a difference estimate as in \eqref{eqn:Multilevel3LEstimator} has been investigated in Refs.~\onlinecite{jasra2017multilevel,beskos2017multilevel,del2017multilevel}. Second, we observe that the key distinction between the two- and three-level algorithms is the resampling step at the intermediate level. Without this, the three-level method reduces to a simple two-level method with an arbitrary stop in the middle of the annealing process. As noted above, the resampling process is designed to partially correct differences between the CG and FG models. This relies on good accuracy of the intermediate level (otherwise the wrong configurations might be discarded, which hinders numerical accuracy).
On the other hand, we note that for sufficiently large numbers of samples $M_{0},M_{1},M_{2}$, the method does provide accurate FG estimates, even if the CG and intermediate-level models are not extremely accurate. The distinction between the different methods comes through the number of samples that are required to obtain accurate FG results. Third, note that the ideal situation for difference estimation is that the three terms in (\ref{eqn:Multilevel3LEstimator}) get successively smaller. That is, the coarse estimate is already close to $\langle A\rangle_{\text{F}}$, the intermediate-level estimate provides a large part of the correction, and the fine-level correction is small. In this case, it is natural to use a tapering strategy where the number of samples used at each level decreases \begin{equation} M_{0} > M_{1} > M_{2}. \end{equation} This allows a fixed computational budget to be distributed between the various levels in a way that minimises the total error. \section{Construction of the intermediate level} \label{sec:IntermediateLevel} As noted above, the intermediate probability distribution $p_{\rm I}$ must be designed carefully, in order for the resampling part of the three-level method to be effective. We now describe how this is achieved for the hard-sphere mixture. We start by analysing the small particles, so we fix the large particles in some configuration $\mathcal{C}$. The idea of the intermediate level is to first insert small particles only in a region close to the large particles $\mathcal{C}$, and then use this information to make the intermediate marginal distribution $p_{\text{I}}(\mathcal{C})$ match the FG marginal $p_{\text{F}}(\mathcal{C})$ as closely as possible. The structure of the intermediate level is depicted in the bottom row of Fig.~\ref{fig:OverviewMethod}(e), and an example configuration is shown in Fig.~\ref{fig:OverviewMethod}(d). We implement this idea by introducing an effective (one-body) potential that acts on the small particles. We first define \begin{equation} \text{dist}(\mathbf{r}, \mathcal{C}) = \min_{j=1, \dots, N} |\mathbf{r} - \mathbf{R}_j| \end{equation} to be the distance from the point $\mathbf{r}$ to the nearest large particle. Small-particle insertion is suppressed in regions far from large particles by a potential energy term \begin{equation} \label{eqn:tildeU} \tilde U(\mathcal{C}, \mathcal{F}) = \sum_{j=1}^{n} E_{\mathcal{C}}(\mathbf{r}_j) \end{equation} where \begin{equation} \label{eqn:SmallParticlePotential} E_{\mathcal{C}}(\mathbf{r}) = \varepsilon(\text{dist}(\mathbf{r}, \mathcal{C})) \end{equation} and the function \begin{equation} \label{eqn:CosinePotential} \varepsilon(r) = \begin{cases} 0, & r < \delta_{\text{free}}, \\ s \sin^2 \left[ \frac{(r - \delta_{\text{free}})\pi}{2l} \right] , & \delta_{\text{free}} \leq r < \delta_{\text{free}}+l, \\ s, & r \geq \delta_{\text{free}}+l \end{cases} \end{equation} interpolates from zero (for small distances $r$) to the value $s$ at large $r$. This function acts as a smoothed-out step function, where $\delta_{\rm free}$ is the position of the step and $l$ its width. In Figs.~\ref{fig:OverviewMethod}(e) and \ref{fig:AISStages}, areas where $E_{\mathcal{C}}(\mathbf{r}) > 0$ are indicated by blue shaded regions, in which the insertion of small particles is suppressed.
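For concreteness, the one-body potential of Eqs.~(\ref{eqn:SmallParticlePotential},\ref{eqn:CosinePotential}) can be evaluated as in the sketch below. The periodic minimum-image treatment of the distance and the array layout are assumptions of this illustration (they are not prescribed by the equations above), and the default parameter values anticipate the choice quoted later in Eq.~(\ref{eqn:ILParameters}), in units of $\sigma_{\rm S}$.
\begin{verbatim}
import numpy as np

def epsilon(r, delta_free=0.5, s=4.4, l=3.5):
    """Smoothed step of Eq. (eqn:CosinePotential); the clip reproduces all
    three branches (0 below delta_free, the sin^2 ramp, the plateau s)."""
    x = np.clip(np.asarray(r, dtype=float) - delta_free, 0.0, l)
    return s * np.sin(x * np.pi / (2.0 * l)) ** 2

def E_C(r_point, R_big, L):
    """One-body potential of Eq. (eqn:SmallParticlePotential): epsilon of the
    distance from r_point to the nearest big particle; R_big is an array of
    shape (N, 3), in a periodic cubic box of side L (minimum image)."""
    d = np.asarray(r_point) - np.asarray(R_big)  # displacements to big particles
    d -= L * np.round(d / L)
    dist = np.min(np.linalg.norm(d, axis=1))
    return float(epsilon(dist))
\end{verbatim}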
Then define a grand-canonical probability distribution for the small particles in the partially-inserted (intermediate) system as \begin{equation} \label{eqn:IntermediateEnsemble} \tilde p_{\text{I}}(\mathcal{F} \mid \mathcal{C} ) = \frac{1}{\tilde \Xi_{\text{I}}[\mathcal{C},\mu_{\rm S}]} e^{\mu_{\rm S} n - U_{\text{F}}(\mathcal{C}, \mathcal{F}) - \tilde U(\mathcal{C}, \mathcal{F})}. \end{equation} This distribution is normalised as $\int \tilde p_{\text{I}}(\mathcal{F} \mid \mathcal{C} ) \mathrm{d} \mathcal{F}=1$. It depends on the three parameters $s,\delta_{\rm free},l$, as well as the underlying parameters of the hard sphere mixture model. The next step is to construct the weights $\hat{W}_1(\mathcal{C})$. For consistency with (\ref{eqn:AbstractIntermediateDistribution}), we write the intermediate-level distribution in the form \begin{equation} \label{equ:int-cor} p_{\text{I}}(\mathcal{C},\mathcal{F}) = \frac{1}{\Xi_{\rm I}} e^{ \mu_{\rm B} N + \mu_{\rm S} n - U_{\text{F}}(\mathcal{C}, \mathcal{F}) - \tilde U(\mathcal{C}, \mathcal{F}) - \Phi^{\rm corr}(\mathcal{C}) }. \end{equation} As discussed above, the term $\Phi^{\rm corr}$ should be designed so that the respective coarse-particle marginals $p_{\rm I}(\mathcal{C})$ and $p_{\text{F}}(\mathcal{C})$ match as closely as possible. Using (\ref{eqn:marginal},\ref{eqn:GCPotential},\ref{eqn:IntermediateEnsemble}), we can show that a perfect match requires $\Phi^{\rm corr}(\mathcal{C}) = \Phi^{\rm ex}(\mathcal{C})$ with \begin{equation} \Phi^{\rm ex}(\mathcal{C}) = \log \frac{ \tilde \Xi_{\text{I}}[\mathcal{C},\mu_{\rm S}] }{ \Xi_{\text{F}}[\mathcal{C},\mu_{\rm S}] } - \phi_0, \label{equ:Phiex} \end{equation} where $\phi_0$ is an irrelevant constant. Since the $\Xi$s in \eqref{equ:Phiex} are partition functions, determination of $\Phi^{\rm ex}$ reduces to computation of the free energy difference between the non-homogeneous small particle distributions of the partially- and fully-inserted system. We now explain how $\Phi^{\rm corr}$ is defined, as an approximation to $\Phi^{\rm ex}$. \subsection{Square-gradient approximation of a non-homogeneous hard sphere fluid} \label{sec:SGApprox} \label{sec:sg} As a preliminary step for estimating $\Phi^{\rm ex}$, we first consider the grand potential $\Phi$ for the small particles, in a system with no large particles, where the small particles feel an (arbitrary) smooth potential $\mathcal{E}=\mathcal{E}(\mathbf{r})$. The grand potential of this system is \begin{equation} \Phi[\mathcal{E}; \mu_{\rm S}] = - \log \int e^{\mu_{\rm S} n - U_{\text{F}}({\bf 0}, \mathcal{F}) - \sum_{j=1}^{n} \mathcal{E}(\mathbf{r}_j)} \mathrm{d} \mathcal{F}, \label{eqn:PerturbedSmallSphereFluid} \end{equation} where ${\bf 0}$ indicates the large-particle configuration with no particles at all ($N=0$). If $\mathcal{E}$ varies slowly in space, a simple approach to this integral is to assume that the system is locally the same as a homogeneous system in equilibrium -- similar to the local density approximation~\cite{evans1979nature}. In this case \begin{equation}\label{eqn:eqintegral} \Phi[\mathcal{E}; \mu_{\rm S}] \approx -\int_V \mathfrak{p}(\mu_{\rm S} - \mathcal{E}(\mathbf{r})) \mathrm{d} \mathbf{r} \end{equation} where $\mathfrak{p}$ is the pressure, expressed as a function of the chemical potential.
However, this approximation is not sufficiently accurate for the current application. To improve it, we include a correction that accounts for inhomogeneities, in the form of a squared-gradient term: $\Phi[\mathcal{E}; \mu_{\rm S}] \approx \Phi^{\rm sq}[\mathcal{E}; \mu_{\rm S}] $ with \begin{equation}\label{eqn:sgapprox} \Phi^{\rm sq}[\mathcal{E}; \mu_{\rm S}] = - \int_V \left[ \mathfrak{p}(\mu_{\rm S} - \mathcal{E}(\mathbf{r})) + \mathfrak{g}(\mu_{\rm S} - \mathcal{E}(\mathbf{r})) |\nabla \mathcal{E}(\mathbf{r})|^2 \right] \mathrm{d} \mathbf{r}. \end{equation} (Within a gradient expansion, this is the first correction that is consistent with rotational and inversion symmetry.) We show in Appendix \ref{app:DetailsApprox} that $\mathfrak{g}$ can be estimated as \begin{equation} \mathfrak{g}(\mu) = \frac{3\eta}{2\pi \sigma_{S}^3} \frac{\partial^2}{\partial q^2} S(\mu; q) \Big\rvert_{q=0}, \label{equ:gam} \end{equation} where $S(\mu;q)$ is the structure factor of the small hard-sphere system. For a numerical estimate of this $\Phi^{\rm sq}$, we estimate the pressure $\mathfrak{p}$ by the accurate equation of state from Ref.~\onlinecite{kolafa2004accurate}, and $\mathfrak{g}$ is estimated from (\ref{equ:gam}) using the structure factor from Ref.~\onlinecite{de2004structure}. A numerical example demonstrating the accuracy of this second-order approximation for a non-homogeneous hard-sphere fluid can be found in Appendix \ref{app:exampleSG}. \subsection{Definition of $\Phi^{\rm corr}$} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{images/Free_Energy_Approximation.png} \caption{ Illustration of the computation of $\Phi^{\rm corr}$, which is an estimate of the free energy difference $\Phi^{\rm ex}$ between panels (a,b), see \eqref{eqn:DefDeltaPhi}. As described in the text, this difference is computed as a sum of three differences, using the integration path (a,c,d,b). The free energy difference between (c,d) is $\Phi[ 0 ; \mu_{\rm S} ] - \Phi[ E_\mathcal{C} ; \mu_{\rm S} ]$. We make the approximation that the differences between (a,c) and (d,b) are equal and opposite; this should be accurate if the shaded blue regions in panel (a) are well-separated in space from the large particles. Combining this assumption with the square gradient approximation (\ref{eqn:sgapprox}) yields $ \Phi^{\rm corr}(\mathcal{C})$ in (\ref{eqn:PhiCorr1}) as a numerically tractable estimate of $\Phi^{\rm ex}$. } \label{fig:FreeEnergyApproximation} \end{figure} We are now in a position to approximate $\Phi^{\rm ex}$ in terms of $\Phi^{\rm sq}$. This (analytical) calculation is illustrated in Fig.~\ref{fig:FreeEnergyApproximation}. We require an estimate of $\Phi^{\rm ex}$, which is the free-energy difference between the partially-inserted and fully-inserted systems in panels (a,b). This is achieved as a sum of three free-energy differences. In the first step, the large particles are removed and the small-particle fluid is re-equilibrated, to fill up the remaining space, leading to panel (c). Then, the confining potential $\tilde U$ is removed and the small particles fully inserted, leading to (d). Finally, the large particles are re-inserted and the small particles re-equilibrated again, leading to (b). To make this precise, define $\Phi_\mathcal{C}[ \mathcal{E} ; \mu_{\rm S} ]$ as the grand potential of the small particles in the potential $\mathcal{E}$, where the large particles are also included, with configuration $\mathcal{C}$.
Then the desired free energy difference between panels (a,b) is \begin{equation} \label{eqn:DefDeltaPhi} \Phi^{\rm ex}(\mathcal{C}) = \Phi_\mathcal{C}[ 0 ; \mu_{\rm S} ] - \Phi_\mathcal{C}[ E_\mathcal{C} ; \mu_{\rm S} ] \end{equation} where we took $\phi_0=0$. From the definitions in Sec.~\ref{sec:sg}, the free energy difference between panels (c,d) is $\Phi[ 0 ; \mu_{\rm S} ] - \Phi[ E_\mathcal{C} ; \mu_{\rm S} ]$, from (\ref{eqn:tildeU},\ref{eqn:PerturbedSmallSphereFluid}). Our central approximation is that the free energy difference between panels (a,c) is (approximately) equal and opposite to the difference between (d,b), because the local environment of the large particles is the same in both cases. (The only differences are in regions far from any large particles.) At this level of approximation, the free energy differences between (a,b) and (c,d) are equal: \begin{equation} \label{eqn:ApproxOfFEWithC} \Phi^{\rm ex}(\mathcal{C}) \approx \Phi[ 0 ; \mu_{\rm S} ] - \Phi[ E_\mathcal{C} ; \mu_{\rm S} ] \; . \end{equation} Finally, the right hand side can be estimated by the square gradient approximation (\ref{eqn:sgapprox}), yielding $\Phi^{\rm ex}(\mathcal{C}) \approx \Phi^{\rm corr}(\mathcal{C}) $ with \begin{equation} \label{eqn:PhiCorr1} \Phi^{\rm corr}(\mathcal{C}) = \Phi^{\rm sq}[ 0 ; \mu_{\rm S} ] - \Phi^{\rm sq}[ E_\mathcal{C} ; \mu_{\rm S} ] \; . \end{equation} The integral in (\ref{eqn:sgapprox}) is computed using the \texttt{cuhre} algorithm of the \texttt{cuba} library\cite{hahn2005cuba}. While the choice of the numerical integrator influences the intermediate level, errors in estimation of this integral will be corrected by the second annealing step, so very high accuracy is not essential. Given this choice of $\Phi^{\rm corr}(\mathcal{C}) $, the intermediate level distribution $p_{\rm I}$ of (\ref{equ:int-cor}) has been completely defined, although it still depends on the three parameters $\delta_{\rm free},s,l$ that appear in the function $\varepsilon(r)$. We also note that given the approximations made, it is not expected that this $p_{\rm I}$ is optimal (its marginal $p_{\text{I}}(\mathcal{C})$ does not match $p_{\text{F}}(\mathcal{C})$ perfectly). The next subsection discusses the parameter choices, and some possibilities for correction factors that can be added to $\Phi^{\rm corr}$, in order to address specific sources of error. \subsection{Variants of the intermediate-level distribution} \begin{figure} \centering \includegraphics[width=\columnwidth]{images/plot-p_i.pdf} \caption{ Estimated accuracy of the CG and the intermediate levels from the main text for the example from Section \ref{sec:example}. We show the difference between the respective CG and intermediate level estimates and the true FG estimate for the pair correlation function $g(r)$ in (a) and the distribution of big particles in (b). The FG estimates of these quantities of interest are shown in Figure \ref{fig:ExampleProperties}. } \label{fig:AccuracyIntermediateLevel} \end{figure} In fixing the parameters $\delta_{\rm free},s,l$, several considerations are relevant. First, if $s$ is too small or $\delta_{\rm free}$ is too large, the potential $E_\mathcal{C}$ has little effect on the system and the small particles are not restricted to be close to the large ones. In this case $p_{\rm I}$ ends up close to $p_{\rm F}$ and there is little benefit from the intermediate level. 
On the other hand, the accuracy of $\Phi^{\rm sq}$ is greatest when the gradient of the potential $E_\mathcal{C}$ is small; this favours small $s$ and large $l,\delta_{\rm free}$. In practice, it is also convenient if the two annealing stages insert similar numbers of particles, so that their computational costs are similar. For the example system of Section \ref{sec:example}, we will present results for a suitable parameter set \begin{equation}\label{eqn:ILParameters} \delta_{\text{free}} = 0.5\sigma_{\rm S}, \;\;\; s = 4.4, \;\;\; l = 3.5\sigma_{\rm S}. \end{equation} We have also tested other values; a few comments are given below. We will consider several variants of the intermediate level. We denote by $p_{\rm I}^{(1)}$ the distribution defined by (\ref{equ:int-cor},\ref{eqn:PhiCorr1}), with parameters (\ref{eqn:ILParameters}). Fig.~\ref{fig:AccuracyIntermediateLevel} shows how the quantities of interest differ between the CG and FG models, and the corresponding differences between the intermediate level and the FG model. Here $\Delta g(r)$ is the difference between $g(r)$ for the FG model and $g(r)$ for the distribution of interest (which is either the CG distribution $p_{\rm C}$ or one of the variants of the intermediate distribution); $\Delta P(N)$ is the corresponding difference in the probability that the system has $N$ large particles. For the value of $g(r)$ at contact, we see that the intermediate level $p_{\rm I}^{(1)}$ corrects around half of the deviation between CG and FG models. However, the probability distribution of $N$ shows the opposite behaviour: the intermediate level is \emph{less} accurate than the CG model. [This is partly attributable to the fact that $\Delta\mu$ in Eq.~(\ref{equ:UC}) has been chosen to make the CG model accurate.] \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{images/plot-gr.pdf} \caption{The layering of small particles ($\sigma_S = 1$) around one big particle ($\sigma_B = 10$) at volume fraction $\eta_{S} = 0.2$. The pair correlation function $g_{\rm BS}(r)$, plotted as a function of the distance $r$ from the centre of the big particle, shows that small particles form layers of higher and lower concentration that fade with increasing distance from the big particle.} \label{fig:Layering} \end{figure} To explore the behaviour of the intermediate level, we constructed two variants of $p_{\rm I}$. The aim is to understand why $ p_{\rm I}^{(1)}$ has inaccuracies, and to (partially) correct for them. There are two main approximations in the intermediate level $p_{\rm I}^{(1)}$: the first is (\ref{eqn:ApproxOfFEWithC}) and the second is that $\Phi[E_{\mathcal{C}};\mu_{\rm S}]$ can be approximated by the square-gradient approximation (\ref{eqn:sgapprox}). The first approximation neglects a significant physical phenomenon in these systems, which is a layering effect of the small particles around the large ones. This is illustrated in Fig.~\ref{fig:Layering} by the radial distribution function $g_{\rm BS}^0$ between large and small particles (measured in a system with a single large particle). One sees that there is typically an excess of small particles close to the large ones, followed by a deficit ($g_{\rm BS}^0(r)<1$), and a (weak) second layer. For (\ref{eqn:ApproxOfFEWithC}) to be accurate, the intermediate level should have enough small particles to capture this layering, so that the particles being inserted in the second annealing stage are not strongly affected by the presence of the large particles.
However, computational efficiency requires that $\delta_{\rm free}$ is not too large, so these layers are not fully resolved at the intermediate level. To partially account for this effect, we make an ad hoc replacement of $\mu_{\rm S}$ in \eqref{eqn:sgapprox} by an effective chemical potential $\mu_{\rm lay}(\textbf{r})$, which is chosen such that the corresponding reservoir volume fraction $\eta^{\rm lay}_{\rm S}(\textbf{r})$ satisfies \begin{equation} \frac{ \eta^{\rm lay}_{\rm S}(\textbf{r}) }{ \eta_{\rm S}^{\rm r} } = g_{\rm BS}^0( \text{dist}(\mathbf{r}, \mathcal{C})). \end{equation} In estimating the free energy of the small particles that are inserted in the second level of annealing, this adjustment to $\Phi^{\rm sq}$ helps to counteract the error made in (\ref{eqn:ApproxOfFEWithC}), leading to an updated potential $\Phi^{\rm corr, 2}$. The intermediate level constructed in this way is denoted by $p_{\rm I}^{(2)}$. The results of Fig.~\ref{fig:AccuracyIntermediateLevel} show that this variant is (somewhat) more accurate than $p_{\rm I}^{(1)}$. However, the intermediate level still tends to have a smaller number of large particles than the full (FG) mixture. \begin{figure} \centering \includegraphics[width=\columnwidth]{images/plot-lin-correction.pdf} \caption{The error of the free energy prediction for $800$ configurations sampled from the coarse distribution $p_{\text{C}}$, grouped by their number of big particles. Each dot represents the difference between an estimate of the predicted small-particle free energy used in the intermediate level $p_{\text{I}}^{(2)}$ and an estimate of the full free energy. We correct for the noticeable trend in the error with a linear ad-hoc correction term, displayed in black.} \label{fig:AdHocCorrection} \end{figure} To investigate this further, we take 800 representative CG configurations. For each one, we estimate the error associated with the approximation (\ref{eqn:PhiCorr1}) \begin{equation} \Delta\Phi^{\rm corr} = \Phi^{\rm ex} - \Phi^{\rm corr, 2}. \end{equation} Results are shown in Fig.~\ref{fig:AdHocCorrection}. One sees that the errors are of order unity (note that $\Phi^{\rm ex}$ itself is of order $10^4$ so this is a small relative error, see below); there is a systematic trend: $\Phi^{\rm corr, 2}$ underestimates $\Phi^{\rm ex}$ when $N$ is large. To correct this error, we introduce an additional correction term to $\Phi^{\rm corr,2}$ \begin{equation} \Phi^{\rm corr,3}(\mathcal{C}) = \Phi^{\rm corr,2}(\mathcal{C}) + \alpha_{\rm corr} N \end{equation} and denote the intermediate level constructed in this way by $p_{\rm I}^{(3)}$. A least-squares fit to Fig.~\ref{fig:AdHocCorrection} suggests taking $\alpha_{\rm corr}=0.076$; in practice this tends to over-correct the error in $\Phi^{\rm corr,2}(\mathcal{C})$ and we find better performance with a smaller value \begin{equation}\label{eqn:ILParameters2} \alpha_{\text{corr}} = 0.058. \end{equation} However, the performance of the method depends only weakly on the specific choice of $\alpha_{\text{corr}}$; this is discussed in Appendix \ref{app:AdHocCorrection}. For all following results, we define the intermediate level $p_{\text{I}} = p_{\text{I}}^{(3)}$ to use the potential $\Phi^{\rm corr,3}$. \subsection{Discussion of intermediate level} An important aspect of the three-level method is the self-consistency of the general approach. The intermediate level variants $p_{\rm I}^{(1)}$ and $p_{\rm I}^{(2)}$ were constructed on a purely theoretical basis.
The corresponding results in Fig.~\ref{fig:AccuracyIntermediateLevel} indicated good performance, but also that the distribution of $N$ had a systematic error. This error was quantified precisely in Fig.~\ref{fig:AdHocCorrection}, which enabled an improvement to the intermediate level. In principle, this procedure could be repeated to develop increasingly accurate variants of $p_{\rm I}$. That approach would be useful if (for example) one wanted to consider increasingly large systems, where the requirements for the accuracy of $p_{\rm I}$ become increasingly demanding. One way to see the effect of system size is to note that Fig.~\ref{fig:AdHocCorrection} required the estimation of $\Phi_{\mathcal{C}}[E_\mathcal{C};\mu_{\rm S}]$ and $\Phi_{\mathcal{C}}[0;\mu_{\rm S}]$, whose values are of order \num{1e4}. Since the free energies are exponentiated in the weights for resampling, an absolute error of $\pm1$ is required on these free energies, while their absolute values are extensive in the system size. Hence one sees that accurate estimates of the free energy are required: their relative error must be of the order of the inverse volume of the system. \section{Numerical tests}\label{sec:NumericalResults} In this Section, we apply the three-level method to the example from Section \ref{sec:example} using the intermediate level from Section \ref{sec:IntermediateLevel}, with the parameters defined in \eqref{eqn:ILParameters} and \eqref{eqn:ILParameters2}. The parameters and the annealing schedules are chosen such that, on average, the first and second steps have the same computational effort; see Appendix \ref{app:FreeEnergy} for details. It can be proven~\cite{rohrbach2022convergence} that the three-level method provides accurate results, in the limit where the population sizes $ M_{0}, M_1, M_2$ are all large. In particular, we expect the estimators $\hat{A}^{\rm 3L}_{\text{F}}, \hat{A}^{\rm 3L}_{\text{F},\Delta}$ to obey central limit theorems (CLTs); the two-level estimators $\hat{A}_{\text{F}}, \hat{A}_{\text{F},\Delta}$ behave similarly. Detailed results are given in Sec.~\ref{sec:ConvergenceResults}. The important fact is that for large populations, the variances of the estimators behave as \begin{equation} \operatorname{Var}(\hat{A}) \approx \frac{1}{M} \Sigma \label{equ:clt-gen} \end{equation} where $M$ is the relevant population size and $\Sigma$ is called the asymptotic variance (it depends on the observable $A$ and on which specific estimator is used). In general, the estimators may have a bias, which is also of order $1/M$. This means that the uncertainty in our numerical computations is dominated by the random error, whose typical size is $\sqrt{\Sigma/M}$. We note that the theoretical results for convergence do not require that the coarse or intermediate levels are accurate. However, one easily sees~\cite{kobayashi2019correction} that serious inaccuracies in these levels lead to very large $\Sigma$. In such cases, one may require prohibitively large populations to obtain accurate results. In this section, we demonstrate (for the example of Sec.~\ref{sec:example}) that we do not require very large populations for the three-level method, and that the numerical results are consistent with (\ref{equ:clt-gen}). After that, we estimate the asymptotic variances for the two-level and three-level methods. We will find that introducing the third level improves the numerical performance, corresponding to a reduction in $\Sigma$.
To this end, we investigate the pair correlation $g(r)$ of the big particles. As seen in Figure \ref{fig:AccuracyIntermediateLevel}(a), the coarse approximation of $g(r)$ has a substantial error, especially when two big particles are in contact. To quantify this specific effect, we define the coordination number $N_c$, which is the number of large particles within a distance $r_1$ of a given large particle. (For a given configuration, this quantity is estimated as an average over the large particles. We take $r_1 \approx 10.73\sigma_{\rm S} $ to be the first minimum of $g(r)$ of the CG model.) For our example, the coordination numbers for the FG and CG systems are given by \begin{equation} \langle N_c \rangle_{\text{F}} \approx 1.61, \qquad\langle N_c \rangle_{\text{C}} \approx 1.56. \end{equation} \subsection{Accuracy of method} \label{sec:convergence} \begin{figure} \centering \includegraphics[width=\columnwidth]{images/plot-convergence.pdf} \caption{ Estimating the pair-correlation function $g(r)$ from Figure \ref{fig:ExampleProperties}(a) with the two- and three-level method. (a) The difference between three-level estimates $\hat g(r)$, for increasing population sizes $M_{1} = M_{2}$, and a reference value of $g(r)$ computed with the two-level method. (b) The error of the binned values in (a) as defined in \eqref{eqn:BinningError}. The dotted black line displays the expected asymptotic Monte Carlo convergence rate of $M_{2}^{-1/2}$. } \label{fig:Convergence} \end{figure} To illustrate the reliable performance of the method, we take a simple example with $M_0 = \num{4e5}$ and $M_{1} = M_{2}$ (no tapering) and we focus on the difference estimator $\hat A_{\text{F}, \Delta}^{\text{3L}}$, which we expect to be the most accurate. The corresponding numerical estimate of $g(r)$ is denoted by $\hat g(r)$, binned using $40$ equidistant bins at positions $r_{j}$ between $r=10$ and $r=12$. Figure \ref{fig:Convergence}(a) shows estimates of the difference between $\hat g(r)$ and its true value, as the population size increases. (The FG result was estimated independently by the two-level method, using a large value of $M_{\text{F}}=\num{18000}$.) A population $M_{2}$ of several thousand is sufficient for an accuracy better than $0.5$ in each bin of $g(r)$. For smaller $M_{2}$, fluctuations in the measured $\hat g(r)$ are apparent in Figure \ref{fig:Convergence}(a). To estimate their size, we define the error for a single run of the three-level method by summing over the bins: \begin{equation} ({\rm error})^2 = \sum_{j=1}^{40} |\hat g(r_j) - g(r_j)|^2. \label{eqn:BinningError} \end{equation} Hence, one expects from (\ref{equ:clt-gen}) that this error decays with increasing population, proportional to $M_{2}^{-1/2}$. Figure \ref{fig:Convergence}(b) shows an estimate of \eqref{eqn:BinningError}, which is consistent with this expected scaling. \subsection{Measurements of variances $\Sigma$} \label{sec:variance} We now investigate whether the three-level method does indeed improve on the performance of the (simpler) two-level method of Refs.~\onlinecite{kobayashi2019correction,kobayashi2021critical}. The key question is whether the resampling step is effective in focussing the computational effort on the most important configurations of the big particles. We recall from above that removing the resampling step from the three-level method leads to a two-level method, where the annealing process is paused at the intermediate level, and then restarted again.
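For reference, the two quantities used in these tests, the coordination number $N_c$ and the binned error of Eq.~(\ref{eqn:BinningError}), can be evaluated as in the following sketch. The periodic cubic box of side $L$, the minimum-image convention and the array names are assumptions of this illustration; the default cutoff corresponds to $r_1 \approx 10.73\sigma_{\rm S}$ quoted above.
\begin{verbatim}
import numpy as np

def coordination_number(R_big, L, r1=10.73):
    """Average number of big-particle neighbours within r1 (the first
    minimum of the CG g(r)), for one configuration R_big of shape (N, 3)."""
    N = len(R_big)
    if N < 2:
        return 0.0
    d = R_big[:, None, :] - R_big[None, :, :]
    d -= L * np.round(d / L)                      # minimum-image convention
    dist = np.linalg.norm(d, axis=-1)
    neighbours = (dist < r1) & ~np.eye(N, dtype=bool)
    return float(neighbours.sum(axis=1).mean())

def binned_error(g_hat, g_ref):
    """Eq. (eqn:BinningError): square root of the summed squared
    differences of the binned pair-correlation estimates."""
    diff = np.asarray(g_hat) - np.asarray(g_ref)
    return float(np.sqrt(np.sum(diff ** 2)))
\end{verbatim}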
In order to test the effect of resampling, we compare these two schemes, keeping the other properties of the algorithm constant, including the annealing schedule. (To test the overall performance, one might also optimise separately the annealing schedules for the two-level and three-level algorithms, and compare the total computational time for the two methods to obtain a result of fixed accuracy. However, such an optimisation would be very challenging, so instead we focus on the role of resampling.) As a very simple quantity of interest, we take the coordination number $N_c$. We run the whole algorithm $N_{\rm runs}$ independent times and we estimate $N_c$ for each run. This can be done using several different estimates of $N_c$. These are: (i) the two-level estimates $\hat{A}_{\rm F}$ and $\hat{A}_{\text{F},\Delta}$ from (\ref{eqn:ISEstimator},\ref{equ:A-TML}); (ii) the corresponding three-level estimates $\hat{A}^{\rm 3L}_{\rm F}$ and $\hat{A}^{\rm 3L}_{\text{F},\Delta}$ of (\ref{eqn:IntermediateEstimate},\ref{eqn:Multilevel3LEstimator}), in which we also vary the ratio $M_{1}:M_{2}$, to see the effects of tapering. All comparisons are done with a fixed total computational budget. We have chosen parameters such that the first and second annealing stages have the same (average) computational cost. This means we need to hold $M_{\rm T} = (M_{1}+M_{2})/2$ constant during tapering. The two-level method takes $M_{\text{F}} = M_{\rm T}$ (because the single step of annealing in the two-level method has the same cost as the two annealing steps of the three-level method). For the coarse level estimates ${\hat A}_{\text{C}}$ and ${\hat A}_{\text{C}}^{\rm 3L}$ (which are used in computation of $\hat{A}_{\text{F},\Delta}$ and $\hat{A}^{\rm 3L}_{\text{F},\Delta}$), the CG computations are cheap so we take $M_0 = M_{\text{C}}=\num{6e6}$. This is large enough that the numerical errors on these coarse estimates are negligible in comparison to the errors from higher levels. For each version of the algorithm, we measure the sample variance of the $N_{\rm runs}$ estimates. Results are shown in Figure \ref{fig:ResultsStdDev} for $N_{\rm runs}=60$ and $M_{\rm T} = 500$. The error bars are computed by the bootstrap method\cite{efron1981nonparametric}. It is useful that the variances of all these estimators are expected to be proportional to $1/M_{\rm T}$: this means that reducing the variance by a factor of $\alpha$ requires that the computational effort is increased by the same factor. Hence the ratio of variances of two estimators is a suitable estimate for the ratio of their computational costs. When carrying out these runs, each estimator was computed by performing annealing on the same set of coarse configurations, to ensure direct comparability. (More precisely: we take a set of $700$ representative configurations which are used for the method with $M_{1}:M_{2}=7:3$; other versions of the method used a subset of these $700$.) In addition, it is possible to share some of the annealing runs when computing the different estimators (while always keeping the 60 different runs completely independent). This freedom was exploited as far as possible, which reduces the total computational effort. However, it does mean that the calculations of the different estimators are not at all independent of each other. \begin{figure} \centering \includegraphics[width=\columnwidth]{images/plot-var-new.pdf} \caption{ The sample variance of $N_{\rm runs} = 60$ independent estimates of the coordination number $N_c$.
We compare results using a two-level method as well as three-level methods, with and without tapering. Further, we give the results for the final-level (left) and difference (right) variants of the estimator. The error bars are computed via bootstrap; their interpretation is, however, not obvious because the different estimators are highly correlated, see the main text. } \label{fig:ResultsStdDev} \end{figure} \subsection{Performance: discussion} \label{sec:variance-discuss} All three-level estimators have a reduced standard deviation compared to their two-level equivalents, demonstrating the usefulness of the intermediate resampling step. In all cases, the difference estimate outperforms its equivalent final-level estimate; this effect is stronger for the three-level estimate, providing evidence that the intermediate stop additionally improves the quality of the control variate in the difference estimates. The effect of introducing tapering from $M_{1}=600$ to $M_{2}=400$ is difficult to assess, given the statistical uncertainties in this example. Nevertheless, the improvement of the final-level estimator $\hat A_{\text{F}}^{\text{3L}}$ provides some evidence that tapering has improved the overall sampling of the FG model while at the same time having fewer samples on the final level. This is possible since we start with more samples from the CG model, which improves the sampling at the intermediate step, where we then resample to keep relevant configurations. As the results for the $700$ to $300$ tapering show, the tapering rate needs to be chosen carefully, as an overly aggressive rate can quickly degrade the performance. Overall, the numerical tests in this section provide strong evidence of the benefit of the intermediate resampling. For our example, switching from a two-level to a three-level difference estimator substantially reduces the variance, from around $0.0029$ for the two-level method to $0.0017$ at a fixed computational budget. \section{Convergence of the multilevel method} \label{sec:ConvergenceResults} In Section \ref{sec:NumericalResults}, we have seen that the three-level method outperforms the two-level method in numerical tests, both for the final-level and the difference version of the estimator. In this section, we provide convergence results for both algorithms, and compare their asymptotic performance as the number of configurations goes to infinity. The underlying convergence proof is general, but it does require some assumptions about the models of interest. First, for every allowed CG configuration (that is, configurations $\mathcal{C}$ with $p_{\rm C}(\mathcal{C})>0$), we assume that the quantity of interest $A$ is bounded. Also, the probability density $p_{\rm C}(\mathcal{C})$ must be non-zero whenever $p_{\text{I}}(\mathcal{C})$ is non-zero, and similarly $p_{\text{I}}(\mathcal{C}, \mathcal{F})$ must be non-zero whenever $p_{\text{F}}(\mathcal{C}, \mathcal{F})$ is non-zero. \subsection{Two-level method} The two-level method has been previously analysed in Ref.~\onlinecite{kobayashi2019correction}. We summarise its key properties. It was noted in Sec.~\ref{sec:TwoLeve} that $\hat{A}_{\rm F}\to \langle A\rangle_{\rm F}$ as $M_{\text{F}}\to\infty$ (specifically, this is convergence in probability\cite{williams1991probability}).
We also expect a CLT for this quantity: as in Eq.~\ref{equ:clt-gen}, the distribution of the error $(\hat{A}_{\rm F}- \langle A\rangle_{\rm F})$ converges to a Gaussian with mean zero and variance $\Sigma_{\text{F}}/M_{\rm F}$. We will derive a formula for this variance, which will be compared later with the corresponding quantity for the three-level model. For compact notation, it is convenient to define the recentred quantity of interest \begin{equation} A^{\rm r}(\mathcal{C}) = A(\mathcal{C}) - \langle A \rangle_{\rm F}. \end{equation} A significant contribution to $\Sigma_{\text{F}}$ comes from the randomness of the annealing procedure; this can be quantified as \begin{equation} v(\mathcal{C}) = \operatorname{Var}_{\rm J} \big[\hat{W}^{\rm n}(\mathcal{C})\big] \label{equ:vC2} \end{equation} where the variance is again with respect to the annealing procedure (from coarse to fine). Then, following Ref.~\onlinecite{kobayashi2019correction}, it can be shown that \begin{equation} \Sigma_{\text{F}} = \langle A^{\rm r}(\mathcal{C})^2 [w(\mathcal{C})^2+v(\mathcal{C})] \rangle_{\rm C} \label{equ:clt-Af} \end{equation} where $w(\mathcal{C})=\langle \hat{W}^{\rm n}(\mathcal{C})\rangle_{\rm J} = p_{\rm F}(\mathcal{C})/p_{\rm C}(\mathcal{C})$, so one identifies $w(\mathcal{C})^2+v(\mathcal{C})$ as the mean square weight obtained from the annealing procedure. Similarly, the estimator $\hat\Delta$ that appears in the difference estimate $\hat{A}_{\text{F},\Delta}$ also obeys a CLT, with variance $\operatorname{Var}(\hat{\Delta}) \approx \Sigma_{\text{F},\Delta}/M_{\text{F}}$, where \begin{multline} \Sigma_{\text{F},\Delta} = \Big\langle A^{\rm r}(\mathcal{C})^2 \big[w(\mathcal{C})^2+v(\mathcal{C})-1\big] \Big\rangle_{\rm C} \\ + \operatorname{Var}_{\rm C}(A) - \operatorname{Var}_{\rm F}(A) \, . \label{equ:clt-Delta} \end{multline} As discussed in Ref.~\onlinecite{kobayashi2019correction}, if the computational cost of the coarse model is low then $M_{\text{C}}$ can be taken large enough that the variance of the coarse estimator $\hat{A}_{\text{C}}$ is negligible, in which case (\ref{equ:A-TML}) implies $\operatorname{Var}(\hat{A}_{\text{F},\Delta}) \approx \operatorname{Var}(\hat{\Delta})$, and hence \begin{equation} \operatorname{Var}(\hat{A}_{\text{F},\Delta}) \approx \frac{1}{M_{\text{F}}} \Sigma_{\text{F},\Delta} \; . \label{equ:clt-Afml} \end{equation} Comparing (\ref{equ:clt-Af}) and (\ref{equ:clt-Delta}) -- which give the variances of $\hat{A}_{\text{F}}$ and $\hat{A}_{\text{F},\Delta}$ respectively -- the term $v(\mathcal{C})$ in (\ref{equ:clt-Af}) is replaced by $v(\mathcal{C})-1$ in \eqref{equ:clt-Delta}, which reduces the variance of the estimator. We expect in general that $ \operatorname{Var}_{\rm C}(A)$ and $\operatorname{Var}_{\rm F}(A)$ should be similar in magnitude, in which case these terms in (\ref{equ:clt-Delta}) should have little effect. Hence one expects that the estimator $\hat{A}_{\text{F},\Delta}$ has lower variance than $\hat{A}_{\text{F}}$. This is consistent with the results of Fig.~\ref{fig:ResultsStdDev}. \subsection{Three-level method} The results (\ref{equ:clt-Af},\ref{equ:clt-Afml}) are based on the property that each estimator is a sum of (nearly) independent random variables, which means that we can immediately apply standard Monte Carlo convergence results \cite{robert2004monte}. This is not possible for the three-level method, since the resampling step correlates the configurations.
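To make the source of these correlations concrete, the sketch below shows a generic multinomial resampling step of the kind used in SMC algorithms. This is an illustrative stand-in only: the function name, the choice of plain multinomial (rather than, e.g., residual or systematic) resampling, and the toy inputs are assumptions, not a description of our implementation.
\begin{verbatim}
import numpy as np

def multinomial_resample(configs, weights, M_out, rng):
    # Draw M_out configurations with probability proportional to their
    # (non-negative) importance weights.  After this step the weights are
    # reset to unity, but duplicated entries share a common history and
    # are therefore no longer statistically independent.
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(len(configs), size=M_out, p=w / w.sum())
    return [configs[i] for i in idx]

rng = np.random.default_rng(0)
# toy example: five coarse configurations with unequal weights
survivors = multinomial_resample(["c0", "c1", "c2", "c3", "c4"],
                                 [0.1, 0.5, 1.0, 0.2, 0.2],
                                 M_out=5, rng=rng)
\end{verbatim}
The duplicates produced by such a draw are subsequently evolved by independent annealing runs, but their shared history is what breaks the independence assumed in the standard error analysis.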
This makes the analysis of SMC-type algorithms challenging, but widely applicable results are available \cite{cappe2005inference,douc2008limit,chan2013general}. The three-level method in Section \ref{sec:threelevel} is an implementation of a random-weight SMC method which has been analysed in Ref.~\onlinecite{rohrbach2022convergence}. To analyse the variance of the three-level method, we require results analogous to (\ref{equ:clt-Af}), which depend on the mean square weights associated with the annealing procedure. To this end, define the average of the final level weight \begin{equation} w_2(\mathcal{X}_2) = \langle \hat{W}^{\rm n}_2(\mathcal{X}_2) \rangle_{\rm J} \label{eqn:w2X} \end{equation} which fulfils \eqref{eqn:MarginalIntegralW2}. Similar to \eqref{equ:vC2}, the variance of this weight is \begin{equation} v_2(\mathcal{X}_2) = \operatorname{Var}_{\rm J} \big[ \hat{W}^{\rm n}_{2}(\mathcal{X}_2)\big]. \label{eqn:v2X} \end{equation} The averages in these equations are with respect to the second annealing step (from intermediate to fine level), starting at configuration $\mathcal{X}_2$; see Sec.~\ref{sec:3-fine}. For the contribution of the first annealing step to the asymptotic variance, it is important to consider a product of weight factors: $\hat{W}^{\rm n}_1(\mathcal{C}_1) w_2(\hat \mathcal{X}_1)$. The first factor in this product is the random weight $\hat{W}^{\rm n}_1$ that is obtained by annealing from the coarse to the intermediate level, leading to the intermediate configuration $\hat \mathcal{X}_1 = (\mathcal{C}_1, \hat \mathcal{F}_1)$. The second factor is the averaged weight $w_2(\hat \mathcal{X}_1)$ from \eqref{eqn:w2X} associated with the second (subsequent) annealing step. Combining \eqref{equ:jarz-fine-property-int} and \eqref{eqn:MarginalIntegralW2}, the average of the product is \begin{equation} \langle \hat{W}^{\rm n}_1(\mathcal{C}_1) w_2(\hat \mathcal{X}_1) \rangle_{\rm J} = p_{\text{F}}(\mathcal{C}_1) / p_{\text{C}}(\mathcal{C}_1) = w(\mathcal{C}_1) \end{equation} and the corresponding variance is \begin{equation} v_1(\mathcal{C}_1) = \operatorname{Var}_{\rm J} \big[\hat{W}^{\rm n}_1(\mathcal{C}_1) w_2(\hat \mathcal{X}_1) \big] \; . \label{equ:vi} \end{equation} Hence $w(\mathcal{C}_1)^2+v_1(\mathcal{C}_1)$ is the mean square value of $\hat{W}^{\rm n}_1(\mathcal{C}_1) w_2(\hat \mathcal{X}_1)$ with respect to the annealing process: this turns out to be a relevant quantity for the asymptotic variance. The numbers of configurations $M_1, M_2$ can be varied between steps of the three-level method. We formulate the asymptotic variance in terms of the average number of configurations \begin{equation} M_{\rm T} =\frac12(M_1 + M_2). \end{equation} If the two annealing steps have comparable cost, we can then directly compare the variances for different tapering rates at fixed $M_{\rm T}$. Define also \begin{equation} c=\frac{M_{1}}{2M_{\rm T}}, \qquad \bar c = 1-c = \frac{M_{2}}{2 M_{\rm T}}.
\end{equation} Then, a direct application of Theorem 2.1 of Ref.~\onlinecite{rohrbach2022convergence} gives a CLT for $\hat A_{\text{F}}^{\text{3L}}$: for large $M_{\rm T}$ we have \begin{equation} \operatorname{Var}(\hat{A}_{\text{F}}^{\text{3L}}) \approx \frac{1}{M_{\rm T}} \Sigma_{\text{F}} ^{\text{3L}} \label{equ:clt-Afml3L} \end{equation} with asymptotic variance \begin{equation} \Sigma_{\text{F}} ^{\text{3L}} = \frac{1}{2 c} \Sigma_{\rm F,1}^{\rm{3L}} +\frac{1}{2 \bar c} \Sigma_{\rm F,2}^{\rm{3L}} \label{equ:SigTF3} \end{equation} with \begin{align} \Sigma_{\rm F,1}^{\rm{3L}} & = \Big\langle A^{\rm r}(\mathcal{C})^2 \big[w(\mathcal{C})^2+v_{1}(\mathcal{C})\big] \Big\rangle_{\rm C} \;, \nonumber \\ \Sigma_{\rm F,2}^{\rm{3L}} & = \Big\langle A^{\rm r}(\mathcal{C})^2 \big[w_{2}(\mathcal{X})^2+v_{2}(\mathcal{X})\big] \Big\rangle_{\rm I} \; . \label{equ:SigF-SigF} \end{align} The physical interpretation of these formulae will be discussed in the next subsection. Computing the asymptotic variance of the three-level difference estimator $\hat A_{\text{F}, \Delta}^{\text{3L}}$ is more difficult, since it involves differences of non-trivially correlated samples. For some examples of multilevel difference estimators, upper bounds on the asymptotic variance have been developed in Refs.~\onlinecite{beskos2017multilevel, del2017multilevel}. A detailed analysis of these bounds in the context of our algorithm is beyond the scope of this paper. \subsection{Discussion of CLTs} To understand the differences between the two- and three-level methods, we compare the asymptotic variances of their corresponding final-level estimators, $\Sigma_{\text{F}}$ in \eqref{equ:clt-Af} and $\Sigma_{\text{F}}^{\text{3L}}$ in \eqref{equ:SigTF3}. The variance of the three-level method has two contributions, $\Sigma_{\text{F},1}^{\rm 3L}$ and $\Sigma_{\text{F},2}^{\rm 3L}$; they are the variances of two-level methods going from the coarse to the fine model and from the intermediate to the fine model, respectively. The first term $\Sigma_{\text{F},1}^{\text{3L}}$ is therefore directly related to $\Sigma_{\text{F}}$, where the variance of the importance weight $v(\mathcal{C})$ has been replaced by $v_1(\mathcal{C})$. In order to make quantitative comparisons, we again consider the three-level method without intermediate resampling. As discussed in Sec.~\ref{sec:variance}, this is a two-level method with a specific annealing process that consists of the concatenation of the two annealing processes of the three-level method. For the concatenated annealing process, we have \begin{equation} \hat{W}^{\rm n}(\mathcal{C}) = \hat{W}^{\rm n}_1(\mathcal{C}) \hat{W}^{\rm n}_2(\hat \mathcal{X}), \end{equation} where $\hat \mathcal{X} = (\mathcal{C}, \hat \mathcal{F})$ is generated by the first annealing stage. This means that \begin{equation} v(\mathcal{C}) = \operatorname{Var}_{\rm J} \Big[ \hat{W}^{\rm n}_1(\mathcal{C}) \hat{W}^{\rm n}_2(\hat \mathcal{X}) \Big], \label{eqn:VarConcatAnnealing} \end{equation} where the variance is now over the randomness of both annealing processes. Comparing \eqref{eqn:VarConcatAnnealing} to \eqref{equ:vi}, we see that $v_1(\mathcal{C})$ computes the variance of the same importance weight, but after averaging over the second annealing stage in \eqref{eqn:w2X}. We can apply Jensen's inequality\cite{williams1991probability} to show that \begin{equation} v(\mathcal{C}) \geq v_1(\mathcal{C}) .
\end{equation} By the definitions (\ref{equ:clt-Af}, \ref{equ:SigTF3}), this directly implies \begin{equation} \Sigma_{\text{F},1}^{\text{3L}} \leq \Sigma_{\text{F}}. \end{equation} For the case without tapering ($c = 1/2$), the three-level method therefore trades a reduction in the variance of the importance weights from coarse to fine in $\Sigma_{\text{F},1}^{\rm 3L}$ for the addition of a term $\Sigma_{\text{F},2}^{\rm 3L}$ that corresponds to the variance of a two-level method going from the intermediate to the fine level. The possibility of tapering, i.e.~$c \neq 1/2$, further allows us to optimise the distribution of computational effort between the two stages, which is particularly useful if $\Sigma_{\text{F},2}^{\rm 3L} \ll \Sigma_{\text{F}, 1}^{\rm 3L}$. For our application to the hard-sphere mixture example in Sec.~\ref{sec:example}, the annealing process is computationally expensive and the resulting weights are noisy. We are therefore in the situation where the variance $v(\mathcal{C})$ contributes substantially to the overall variance, and where we have constructed an intermediate level in Sec.~\ref{sec:IntermediateLevel} that improves on the CG model. Following the discussion above, this is the setting where we expect the three-level method to improve upon a two-level method, which is confirmed by the numerical results in Sec.~\ref{sec:NumericalResults}. Further discussion of the effect of resampling on random-weight SMC methods can be found in Ref.~\onlinecite{rohrbach2022convergence}. \section{Conclusions} \label{sec:conclusions} We have introduced a three- and multilevel extension of the two-level simulation method first discussed in Ref.~\onlinecite{kobayashi2019correction}. We have applied this method to a highly size-asymmetric binary hard-sphere system. As shown by the numerical tests in Section \ref{sec:NumericalResults} and the theoretical results in Section \ref{sec:ConvergenceResults}, the introduction of intermediate resampling that distinguishes the two- from the three-level method can lead to substantial improvements in performance by reducing the variance in importance weights and by allowing efficient allocation of resources between levels via tapering. In the application to binary hard-sphere systems, this required us to construct a semi-analytic intermediate level that consists of a system with partially inserted small particles, where the remaining particles are inserted analytically via a perturbative approximation of the free energy. For this, we have combined a highly accurate square-gradient theory with pre-computed ad hoc corrections, yielding an intermediate level that substantially improves the accuracy of the investigated quantities of interest compared to the initial coarse level. Furthermore, as we show in Appendix \ref{app:AdHocCorrection}, the three-level method appears robust with respect to slight deviations of the intermediate level. Compared to our numerical example, Ref.~\onlinecite{kobayashi2021critical} applied the two-level method to larger and denser systems than considered here, to investigate the critical point of demixing. This was achieved by replacing the two-body RED-potential CG model used in this publication with a highly accurate two- and three-body potential. The computation of accurate effective potentials entails a substantial upfront computational cost (compared to our construction of the intermediate level), but for the hard-sphere mixtures this results in a CG level that is more accurate than our intermediate level.
Despite the challenges of keeping the variance of the importance weights under control for large systems, this turned out to be more efficient overall. Another limitation of the multilevel methods is that the population of unique coarse configurations is fixed from the start and decreases with each subsequent resampling step. This is closely related to the sample depletion effect commonly observed in particle filtering, and in SMC methods in general~\cite{crisan2000convergence,doucet2009tutorial}. For the multilevel method, we can address this by following each resampling step with a number of MCMC steps, to decorrelate duplicated configurations and further explore the system at the current level of coarse-graining\cite{crisan2000convergence}. While such an approach is not feasible for the hard-sphere system, where intermediate MCMC is limited by the cost of computing the required approximations, we expect this to be beneficial, for example, whenever intermediate physical systems are described in terms of effective, few-body interactions. To conclude, our results show that the multilevel method can effectively make use of intermediate levels when available, leading to improvements in performance at fixed computational cost. This could be particularly useful in multi-scale systems that admit a true hierarchy of possible coarse-grainings, like long-chain polymers where various numbers of monomers could be grouped together to yield a CG representation. We look forward to further applications of multilevel methods in physical simulations. \section*{Acknowledgements} This work was supported by the Leverhulme Trust through research project Grant No. RPG–2017–203.
1,108,101,562,671
arxiv
\section{\label{sec:level1intro}Introduction} Germanium telluride (GeTe) is a metal chalcogenide that exhibits three structural phases at room temperature and one phase at high temperature.\cite{Schlieper1999,Bletskan2005} The room-temperature phases are amorphous GeTe and crystalline rhombohedrally distorted $\alpha$- as well as orthorhombic $\gamma$-GeTe. The high-temperature crystalline cubic phase is known as $\beta$-GeTe. While the atoms of such a phase-change material are covalently bonded in its amorphous (A) state, metavalent bonding can be found in the crystalline (C) states.\cite{Raty2019} These two types of bonds lead to very different properties. The amorphous phase has a relatively low optical reflectivity $R_A$ and high electrical resistivity $\rho_A$. Upon crystallization, the resistivity decreases by five orders of magnitude with $\rho_A$\,$\approx$\,10$^2$\,$\Omega$cm and $\rho_C$\,$\approx$\,10$^{-3}$\,$\Omega$cm.\cite{Jost2015} Simultaneously, the reflectance contrast $\Delta R$ defined by $[(R_C$\,-\,$R_A)$\,/\,$R_C]$\,$\times$\,100 is about 43\% in the near-infrared spectral range (wavelengths near 1\,\textmu m).\footnote{The reflectances $R_A$ and $R_C$ were calculated via Fresnel coefficients for a 100\,nm GeTe film between a silicon and an air half-space, under normal incidence, and with the dielectric function of GeTe according to Ref.~\onlinecite{Shportko2009}. The definition of the reflectance contrast $\Delta R$ was taken from Ref.~\onlinecite{Schlich2015}.} The room-temperature amorphous phase of GeTe crystallizes at $T_{C,1}$\,=\,185$^{\circ}$C.\cite{Jost2015} Thermal annealing, optical pulses, or electrical pulses can be used to induce this structural relaxation. However, only laser or electrical pulses allow for quenching of the GeTe melt and thus, re-amorphization. The melting temperature of GeTe is $T_M$\,=\,723\,$^{\circ}$C.\cite{Schlieper1996} The strong property contrast between the non-volatile phases can be exploited for optical data storage and memristive memories.\cite{Raoux2010,Wright2012} The latter are one of the most promising candidates for neuromorphic computing.\cite{Boybat2018} While ternary and quaternary phase-change materials, such as germanium antimony telluride or silver indium antimony telluride, have been applied in phase-change memories, germanium telluride has gained interest for active photonics.\cite{Wuttig2017,Carrillo2019,Hail2019,Michel2020b} Furthermore, GeTe exhibits ferromagnetism with a Curie temperature of about 920$^{\circ}$C, while doping GeTe with Mn, Fe, or Cr leads to a Curie temperature $\leq$\,420$^{\circ}$C.\cite{Kriener2016} Moreover, GeTe has recently been identified as a Rashba ferroelectric.\cite{Slawinska2020} All of the aforementioned applications and effects have been investigated either for epitaxially grown or sputtered GeTe films ranging from several tens of nanometers to several micrometers in thickness. Recently, spatially confined phase-change materials have been studied, mainly due to two opportunities. First, nanowires and nanoparticles offer an alternative approach to fabricate films of phase-change material or patterned arrays.\cite{Milliron2007,Agarwal2009,Caldwell2010,Yarema2018} Thereby, the purchase of dedicated expensive equipment (e.g. magnetron sputtering tool) can be avoided. In addition, preformed, high-aspect-ratio voids or patterns can be filled. Second, nanoscale phase-change materials allow for studying size-dependent properties of these compounds. 
For example, localized surface plasmon resonances have been reported for crystalline GeTe nanoparticles,\cite{Polking2013} and a bandgap increase has been observed.\cite{Michel2020,Yarema2020} Furthermore, several studies on the size-dependent shift of the crystallization temperature $T_{C,1}$ have been published, as shown for GeTe in Tab.~\ref{tab:temp}. The listed values for $T_{C,1}$ refer to the transition from the amorphous phase to the rhombohedrally distorted $\alpha$-GeTe and reveal that $T_{C,1}$ increases with decreasing particle diameter $d$. We also note that the observed crystallization temperature depends not only on the material dimensions but also on the characterization technique. For example, the drop in resistivity associated with crystallization does not require a phase change of the entire GeTe volume; a conductive crystalline channel in the film is sufficient. Another important factor is the applied heating rate $\vartheta$ throughout the measurement. A well-known example is the shift of the peak temperature to higher $T$ upon increasing $\vartheta$ in differential scanning calorimetry (DSC). All of the aforementioned effects have to be taken into account when comparing the values for $T_{C,1}$ and drawing conclusions about size-dependent effects. While $T_{C,1}$ has been reported for spatially confined GeTe, the high-temperature crystalline $\beta$ phase has so far been observed neither for ultra-small nor for initially amorphous nanoparticles (\textit{cf.} Tab.~\ref{tab:temp}). Here we study the reversible crystalline-to-crystalline phase transition from $\alpha$- to $\beta$-GeTe at $T_{C,2}$ and back to the $\alpha$-phase for sub-10\,nm GeTe nanoparticles, which were initially amorphous after synthesis. This is realized by collecting X-ray diffraction (XRD) patterns for repeated heating and cooling cycles of drop-casted particles, which are synthesized in their amorphous phase. \begin{table} \caption{\label{tab:temp}Comparison of crystallization temperatures $T_{C,1}$ and $T_{C,2}$ for different GeTe samples, determined by different characterization methods (XRD - X-ray diffraction, at a synchrotron ($_{\textnormal{s}}$) if applicable, $\rho(T)$ - resistivity measurement during heating and cooling, and DSC - differential scanning calorimetry) with varied heating rates given in $^{\circ}$C/min. The samples have either been synthesized (approx. spherical particle diameter $d$) or sputtered (approx. film thickness $t$). Initially, the GeTe has either been in its amorphous ($_A$) or crystalline ($_C$) state. The sputtered thin films are given for reference and separated by a horizontal line.
$^{\dagger}$ and $^{\ddagger}$ mark a surface-oxidized and TaN-capped GeTe film, respectively.} \begin{ruledtabular} \begin{tabular}{llllll} Size\,[nm] & T$_{C,1}$\,[$^{\circ}$C] & T$_{C,2}$\,[$^{\circ}$C] & Method & $\vartheta$\,[$^{\circ}$C/min] & Ref.\\ \hline $d_A$\,=\,1.8 & 400 & - & \textit{in-situ} XRD$_{\textnormal{s}}$ & 60 & \onlinecite{Caldwell2010}\\ $d_A$\,=\,2.6 & 350 & - & \textit{in-situ} XRD$_{\textnormal{s}}$ & 60 & \onlinecite{Caldwell2010}\\ $d_A$\,=\,3.4 & 320 & - & \textit{in-situ} XRD$_{\textnormal{s}}$ & 60 & \onlinecite{Caldwell2010}\\ $d_A$\,=\,3.5 & 340 & - & $\rho(T)$ & 300\,-\,1.800 & \onlinecite{Caldwell2010}\\ $d_A$\,=\,6.0 & 227 & - & \textit{in-situ} XRD & 7 &\onlinecite{Yarema2018}\\ & 170 & - & $\rho(T)$ & - & \onlinecite{Yarema2018}\\ & 223\,-\,240 & - & DSC & 2.5\,-\,30 & \onlinecite{Yarema2018}\\ $d_A$\,=\,8.7 & 237 & - & DSC & 5 &\onlinecite{Arachchige2011}\\ $d_A$\,=\,10.6 & 224 & - & DSC & 5 &\onlinecite{Arachchige2011}\\ $d_A$\,=\,18.5 & 209 & - & DSC & 5 &\onlinecite{Arachchige2011}\\ $d_C$\,=\,17.0 & - & 355 & \textit{in-situ} XRD$_{\textnormal{s}}$ & 60 &\onlinecite{Polking2011}\\ $d_C$\,=\,100 & - & 360 & \textit{in-situ} XRD$_{\textnormal{s}}$ & 60 &\onlinecite{Polking2011}\\ $d_C$\,=\,500 & - & 370 & \textit{in-situ} XRD$_{\textnormal{s}}$ & 60 &\onlinecite{Polking2011}\\ \hline $t_A$\,=\,50 & 170 & 350 & \textit{in-situ} XRD$_{\textnormal{s}}$ & 180 & \onlinecite{Raoux2009}\\ & 175 & - & $\rho(T)$ & 60 & \onlinecite{Raoux2009}\\ $t_A$\,=\,80 & 185 & - & $\rho(T)$ & 5 &\onlinecite{Jost2015}\\ $t_A$\,=\,100$^{\dagger}$ & 180 & - & $\rho(T)$ & 10 &\onlinecite{Kolb2019}\\ $t_A$\,=\,100$^{\ddagger}$ & 230 & - & $\rho(T)$ & 10 &\onlinecite{Kolb2019}\\ $t_A$\,=\,150 & 180 & - & $\rho(T)$ & 10 & \onlinecite{Mantovan2017}\\ \end{tabular} \end{ruledtabular} \end{table} \section{\label{sec:level1exper}Experimental methods} We prepared colloidal dispersions of amorphous monodisperse GeTe nanoparticles following two different protocols. All studied spherical nanoparticles had a diameter $d$\,<\,10\,nm since size-dependent crystallization had previously been identified for this size regime (Tab.~\ref{tab:temp}). \subsection{\label{sec:level2Synth}Nanoparticle synthesis} The syntheses of the amorphous (A-)GeTe nanoparticles followed protocols adapted from Caldwell \textit{et al}.\cite{Caldwell2010} and reported by Yarema \textit{et al}.\cite{Yarema2018} The first batch was synthesized through a hot-injection route as schematically shown in Fig.~\ref{fig:synthesis}(a). Anhydrous germanium(II) iodide (GeI$_2$, 163\,mg) was dissolved in 2\,ml trioctylphosphine (TOP) in a glove box and stirred overnight. The following day, 2\,g of trioctylphosphine oxide (TOPO) were added and the yellow solution was transferred to a reaction flask which was purged with nitrogen beforehand. After heating the solution to 235$^{\circ}$C, 240\,\textmu l dodecanethiol and 30\,s later 667\,\textmu l 0.75\,M Te-TOP solution (previously prepared) were injected. About 40\,s later, the color of the solution in the flask changed from yellow to dark brown, indicating nucleation of nanoparticles. After 4\,min, the reaction was terminated and the flask was cooled rapidly by acetone mist and, later, with pressurized air. The crude solution of GeTe nanoparticles was transferred air-free to the glove box where anhydrous ethanol was added (3:1). The black precipitate was separated by centrifugation (4000\,rpm, 10\,min) and dispersed in 1\,ml anhydrous chloroform. 
After centrifuging (4000\,rpm, 10\,min), ethanol was added to the dark brown solution (2:1). Another centrifugation step resulted in a clear liquid and a black precipitate. The latter was dispersed in 1\,ml toluene, forming a colloid that remained stable for multiple weeks. We refer to this synthetic protocol below as \textit{synthesis 1}. The alternative synthetic approach, \textit{synthesis 2}, led to several batches with different GeTe particle sizes. This amide-promoted synthesis is schematically shown in Fig.~\ref{fig:synthesis}(b) and described in detail in Ref.~\onlinecite{Yarema2018}. While the A-GeTe nanoparticles obtained from synthesis 1 were covered by TOP ligands [\textit{cf.} Fig.~\ref{fig:synthesis}(c)], the particles from synthesis 2 were covered with an oleate shell. From transmission electron microscopy (TEM) observations, the average particle size of each synthesis was estimated. The A-GeTe particle size available from synthesis 1 was 5.5\,$\pm$\,1.6\,nm; synthesis 2 provided A-GeTe particles with diameters 4.8\,$\pm$\,0.6\,nm, Fig.~\ref{fig:synthesis}(d), and 6.9\,$\pm$\,0.9\,nm, Fig.~\ref{fig:synthesis}(e). It has to be noted that synthesis 2 led to particles with a much narrower size distribution, as visible in Fig.~\ref{fig:synthesis}(f): the green size distribution refers to the particles from synthesis 1 and the blue and red size distributions refer to the particles from synthesis 2. Upon annealing, the A-GeTe particles will relax into the crystalline phase if $\Delta T\,>\,T_{C,1}$. However, due to the large surface-to-volume ratio of small nanoparticles, coalescence is energetically favorable. Indeed, coalescence of sub-10\,nm particles has been reported either at temperatures above $T_{C,1}$,\cite{Yarema2018} or at lower temperatures; it thus occurs prior to or during crystallization [\textit{cf.} Fig.~\ref{fig:synthesis}(g)].\cite{Keitel2016} \begin{figure} \includegraphics{reaction_scheme_v05-01.png} \caption{\label{fig:synthesis} Sketch of the hot-injection methods 1, (a), and 2, (b), used for the synthesis of amorphous (A) sub-10\,nm GeTe nanoparticles, covered by organic ligands, such as trioctylphosphine (TOP), shown in (c). TEM images of the particles with an average diameter of 4.8\,nm, (d), and 6.9\,nm, (e), allow for the determination of the particle size distributions shown in (f). The top (blue) and bottom (red) distribution relate to particles from synthesis 2, (b); the middle (green) distribution is for synthesis 1, (a). (g) Heating of the A-GeTe particles with $\Delta T$\,$\geq$\,$T_{C,1}$ results in crystalline (C) GeTe particles which are likely to coalesce prior to or during the crystallization process (dashed and solid arrow, respectively).} \end{figure}
The samples were characterized with a PANalytical Empyrean diffractometer equipped with an X’Celerator Scientific ultrafast line detector and Bragg-Brentano HD incident beam optics. The instrument was operated at 45\,kV and 40\,mA using Cu K$\alpha$ radiation (1.54060\,\AA). The temperature was measured and controlled in the vicinity of the sample using a type K thermocouple; separate control measurements with a second thermocouple placed at the exact position of the sample indicated that the temperature difference between the two thermocouples was <\,5$^{\circ}$C for temperatures $T$\,<\,800$^{\circ}$C. Fig.~\ref{fig:tempcurve}(a) shows the applied temperature curve. The XRD chamber temperature $T$ is plotted as a function of the time $t$ during two heating and cooling cycles. The samples were heated and cooled at a rate of $\vartheta$\,=\,10$^{\circ}$C/min. At each $T$, the chamber was held for 6\,min total, which includes 1\,min for equilibration and 5\,min for the actual XRD scan. In the case of bulk stoichiometric compounds, we expect crystallization of the initially amorphous GeTe into the $\alpha$ phase. $\alpha$-GeTe will then remain stable up to $T_{C,2}$\,=\,357$^{\circ}$C.\cite{Bletskan2005} Above this temperature, the $\beta$ phase becomes stable. If GeTe is rich in tellurium (>\,50.9\%), A-GeTe crystallizes to the $\gamma$ phase. At elevated temperatures, a $\gamma$-to-$\beta$ transition can be observed.\cite{Bletskan2005} An overview of the crystalline structures and the corresponding reference patterns of GeTe are given in Tab.~\ref{tab:gete} and Fig.~\ref{fig:tempcurve}(b), respectively. Additionally, the reference patterns of crystalline Te and Ge, which are known impurities observed in GeTe,\cite{Caldwell2010,Yarema2018,Jost2015} are shown. \begin{table} \caption{\label{tab:gete}Overview of the crystalline phases observed for bulk GeTe. As mentioned in Sec.~\ref{sec:level1intro}, $\alpha$- and $\gamma$-GeTe are stable at room temperature, while $\beta$-GeTe is the high-temperature phase.} \begin{ruledtabular} \begin{tabular}{lllll} Phase & Crystal system & Space group no. & Space group & Ref.\\ \hline $\alpha$ & trigonal & 160 & R3m & \onlinecite{Goldak1966}\\ $\beta$ & cubic & 225 & Fm$\bar{3}$m & \onlinecite{Shelimova1965}\\ $\gamma$ & orthorhombic & 62 & Pnma & \onlinecite{Karbanov1968}\\ \end{tabular} \end{ruledtabular} \end{table} During the first heating, we collected an XRD pattern every 25$^{\circ}$C for $T$\,$\leq$\,200$^{\circ}$C and every 10$^{\circ}$C for 200$^{\circ}$C\,<\,$T$\,$\leq$\,450$^{\circ}$C. This was based on prior knowledge of the crystallization with $T_{C,1}$\,>\,200$^{\circ}$C observed for small nanoparticles (\textit{cf}.\,Tab.~\ref{tab:temp}) and the $\alpha$-to-$\beta$ transition $T_{C,2}$\,<\,400$^{\circ}$C as reported for bulk GeTe.\cite{Bletskan2005} Since we focused on monitoring the reversible crystalline-to-crystalline transition and no further events were expected for GeTe at lower temperatures during repeated cooling and heating, we adapted our temperature intervals accordingly. Thus, we chose $\Delta T$\,=\,10$^{\circ}$C for 450$^{\circ}$C\,$\geq$\,$T$\,$\geq$\,350$^{\circ}$C and $\Delta T$\,=\,25$^{\circ}$C for $T$\,<\,350$^{\circ}$C. For a reference value regarding the $\alpha$-to-$\beta$ transition temperature of bulk GeTe, we characterized flakes of a crystalline GeTe sputter target with \textit{in-situ} XRD as described above. The temperature-dependent diffractograms are shown in Fig.~\ref{fig:tempcurve}(c).
The transitions from a peak doublet to a single peak for both 2$\theta$\,=\,24\,-\,27$^{\circ}$ and 2$\theta$\,=\,41\,-\,44$^{\circ}$ allow for the confirmation of the $\beta$ phase of GeTe. Based on the XRD scans taken every 10$^{\circ}$C, $T_{C,2}$ is extracted as 380$^{\circ}$C [Fig.~\ref{fig:tempcurve}(d)]. This matches the transition temperature of GeTe with a Te content between 50.2 and 50.5\% given by the phase diagram in Ref.~\onlinecite{Bletskan2005}. Hence, we can conclude that the reference sample is stoichiometric. \begin{figure} \includegraphics{xrd_spectra_schemes_v02-01.png} \caption{\label{fig:tempcurve} (a) XRD chamber temperature $T$ as a function of the time $t$ during two heating and cooling cycles with $\vartheta$\,=\,10$^{\circ}$C/min for heating and cooling. (b) Reference XRD patterns for all crystalline GeTe phases, tellurium, and germanium.\cite{Shelimova1965,Karbanov1968,Goldak1966,Bradley1924,Cooper1962} The insets show the angular ranges and Miller indices which allow for a distinction between $\alpha$- and $\beta$-GeTe. (c) The \textit{in-situ} XRD pattern of flakes from a crystalline GeTe sputter target shows the transition from $\alpha$- to $\beta$-GeTe during the first heating cycle and back to $\alpha$-GeTe during the following cooling (the relevant temperature ranges are marked by white dashed rectangles). (d) The individual XRD patterns corresponding to the 2$\theta$- and $T$-range of the $\alpha$-to-$\beta$ transition show the peak doublets and singlets with the reflexes marked in green and red, respectively.} \end{figure} \section{\label{sec:level1result}Results and discussion} In the following, we will discuss the temperature-dependent XRD patterns for three nanoparticle-based samples. First, we will focus on the structural evolution of the GeTe particles from synthesis 1, which showed a broad size distribution. Second, we will analyze the diffractograms obtained for the particles from synthesis 2, which had a narrower size distribution. Also, it provided two samples with sizes smaller and larger than the average size of the particles from synthesis 1. All diffractograms were normalized by dividing the intensity values by the maximum intensity for each particular diffractogram, meaning $I$/$I_{\textnormal{max}}$, which allows for a better graphical representation. Additionally, the patterns are displayed with a constant offset [($I$/$I_{\textnormal{max}}$)+\,0.25 between each diffractogram] to show the structural evolution of the sample over time. Such waterfall plots facilitate the interpretation of time-dependent XRD data since interpolation as used in 2D contour plots [\textit{cf.} Fig.~\ref{fig:tempcurve}(c)] is avoided. Nevertheless, it has to be noted that the time axis has been adapted to follow the XRD patterns. Thus, the spacing is not necessarily equal. This is due to the fact that we chose $\Delta T$\,=\,25$^{\circ}$C for $T$\,$\leq$\,200$^{\circ}$C between each measurement in heating 1 as well as for $T$\,$\leq$\,350$^{\circ}$C in cooling 1, heating 2, and cooling 2. For higher temperatures in each cycle, we chose $\Delta T$\,=\,10$^{\circ}$C for a better resolution. These temperature choices were defined by the expected structural transitions (\textit{cf.} Tab.~\ref{tab:temp}). 
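For readers who wish to reproduce this presentation, the normalization and offset convention described above amounts to only a few lines of plotting code. The following sketch is purely illustrative (the array names and the use of \texttt{matplotlib} are assumptions and not part of the measurement pipeline):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# two_theta: 1D array of diffraction angles (degrees)
# patterns:  list of 1D intensity arrays, one per temperature step
def waterfall(two_theta, patterns, offset=0.25):
    for k, intensity in enumerate(patterns):
        i_norm = intensity / np.max(intensity)    # normalize to I/I_max
        plt.plot(two_theta, i_norm + k * offset)  # constant offset per pattern
    plt.xlabel(r"$2\theta$ (degrees)")
    plt.ylabel(r"$I/I_{\mathrm{max}}$ + offset")
    plt.show()
\end{verbatim}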
In the discussion of all time-dependent XRD diffractograms, we focus on three angular ranges of 2$\theta$ to investigate $T_{C,1}$ and $T_{C,2}$: \begin{itemize}[noitemsep] \item 24-27$^{\circ}$: transition from (003)/(021)-doublet to (111)-singlet marks $\alpha$-to-$\beta$ transition (and \textit{vice versa}), \item 29-30$^{\circ}$: appearance of (202)-peak marks crystallization, and \item 41-44$^{\circ}$: transition from (024)/(220)-doublet to (220)-singlet marks $\alpha$-to-$\beta$ transition (and \textit{vice versa}). \end{itemize} Since the doublet-to-singlet transition is more pronounced between 41 and 44$^{\circ}$ [\textit{cf.} reference in Fig.~\ref{fig:tempcurve}(b)], we use this angular range to identify $T_{C,2}$. The transition temperatures are marked by dashed rectangles in Figs.~\ref{fig:xrdhanbing}-\ref{fig:xrdy877} and the values for $T_{C,1}$, $T_{C,2}$, and $T_{C,2'}$ are noted next to the pattern. $T_{C,2}$ refers to the $\alpha$-to-$\beta$ transition during heating and $T_{C,2'}$ refers to the reverse transition, $\beta$ to $\alpha$, during cooling. For orientation, all diffractograms we will discuss below are color-coded with respect to the heating curve and the temperature scaling. In addition to the transition temperatures $T_{C,1}$, $T_{C,2}$, and $T_{C,2'}$, we mark the transition points between each cycle, \textit{i.e.} $T_{\textnormal{min}}$\,=\,75$^{\circ}$C and $T_{\textnormal{max}}$\,=\,450$^{\circ}$C, on the right of the XRD patterns. \subsection{\label{sec:level2resultHB}\textit{In-situ} XRD on polydisperse GeTe nanoparticles} The diffractograms of the GeTe nanoparticles from synthesis 1, which led to a broad size distribution [\textit{cf.} Fig.~\ref{fig:synthesis}(f)], are shown in Fig.~\ref{fig:xrdhanbing}. The XRD patterns start at $T$\,=\,100$^{\circ}$C in the first heating cycle. During heating 1, a narrowing of the intensity peak close to 2$\theta$\,=\,30$^{\circ}$ as well as a convergence of the (024)/(220)-doublet can be seen. First, we focus on the interpretation of the width of the (202)- and (200)-peaks of the $\alpha$ and $\beta$ phases, respectively. In a general and simplified consideration, the full width at half maximum (FWHM or $w$) of an XRD peak can be related to the lattice strain and the crystallite size $D$. It has to be noted that $D$ is not necessarily identical to the particle size $d$. In the case of coalescence, for example, $D$ can be larger than the initial $d$. Hence, without a high-resolution TEM investigation it is difficult to judge whether the particle or domain sizes are determined via XRD. This ambiguity has to be kept in mind when the term \textit{crystallite} is used for $D$.\cite{Girgsdies2015} Nevertheless, the crystallite size can theoretically be estimated from the well-known Scherrer equation: \[D = \frac{K \lambda}{w \cos(\theta)}\] with $K$ being the shape factor or Scherrer constant (often approximated by 0.9), $\lambda$ being the X-ray wavelength in nm, and $\theta$ being the angle of diffraction in rad.\cite{Girgsdies2015} The observed peak width $w_{\textnormal{obs}}$ in rad has to be corrected by subtracting the instrumental broadening $w_{\textnormal{instr}}$ in rad: \[w = w_{\textnormal{obs}} - w_{\textnormal{instr}}.\] In our case, $w_{\textnormal{instr}}$\,$\approx$\,0.1$^{\circ}$, which amounts to up to 72\% of $w_{\textnormal{obs}}$. Therefore, it would be necessary to collect the diffractograms with a much higher resolution (\textit{e.g.} at a synchrotron) to obtain a better estimate of $D$.
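For orientation, the Scherrer estimates used in the following paragraph can be reproduced with a short script. This is a minimal sketch only: the function names are ours, and the evaluation at the (202) reflection near $2\theta\,\approx\,29.6^{\circ}$ with the parameters quoted above is an illustrative choice.
\begin{verbatim}
import numpy as np

K = 0.9                        # Scherrer shape factor
LAM = 0.15406                  # Cu K-alpha wavelength in nm
W_INSTR = np.deg2rad(0.1)      # instrumental broadening in rad

def expected_width_deg(D_nm, two_theta_deg):
    """Observed FWHM (degrees) expected for a crystallite size D (nm)."""
    theta = np.deg2rad(two_theta_deg) / 2.0
    w = K * LAM / (D_nm * np.cos(theta))    # Scherrer equation, w in rad
    return np.rad2deg(w + W_INSTR)

def crystallite_size_nm(w_obs_deg, two_theta_deg):
    """Crystallite size D (nm) implied by an observed FWHM (degrees)."""
    theta = np.deg2rad(two_theta_deg) / 2.0
    w = np.deg2rad(w_obs_deg) - W_INSTR     # remove instrumental broadening
    return K * LAM / (w * np.cos(theta))

# (202) reflection near 2*theta = 29.6 deg; D = d = 5.5 nm
# gives an expected observed width of about 1.6 degrees.
print(expected_width_deg(5.5, 29.6))
\end{verbatim}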
Nevertheless, a qualitative approximation of $D$ based on the XRD patterns is possible. In a simplified picture, the initial particle diameter $d$\,$\approx$\,5.5\,nm determined by TEM would imply $w_{\textnormal{obs}}$\,$\approx$\,1.6$^{\circ}$ (assuming $D$\,=\,$d$, $K$\,=\,0.9, $w_{\textnormal{instr}}$\,=\,0.1$^{\circ}$, and $\lambda$\,=\,0.15406\,nm). In contrast, Fig.~\ref{fig:xrdhanbing} shows 0.14$^{\circ}$\,$\leq$\,$w_{\textnormal{obs}}$\,$\leq$\,0.2$^{\circ}$. Thus, it seems very likely that the GeTe nanoparticles coalesced during the initial crystallization. It has to be emphasized, however, that XRD patterns represent only an averaged signal, and no conclusion on the individual particles can be drawn. Between $T$\,=\,240 and 300$^{\circ}$C the peak width decreases further, indicating continued growth of $D$. This could relate to further coalescence or growth of larger grains at the expense of smaller ones.\cite{Polking2011,Yarema2018} Further thermal treatment showed no influence on the width of the (202)- or (200)-peak of $\alpha$- or $\beta$-GeTe, respectively. Thus, it can be assumed that grain growth stopped. Apart from the narrowing of the (202)-peaks, a slight shift to smaller angles upon heating can be seen. During cooling, the peak position shifts back to its initial value. A similar behavior can be seen for the second heating and cooling. This can be rationalized by the relaxation of the distorted $\alpha$-phase to the cubic $\beta$ phase.\cite{Polking2011,Boschker2017} For $T$\,=\,260$^{\circ}$C, the (024)/(220)-doublet becomes clearly visible and converges smoothly towards the (220)-singlet for increasing $T$\,$\geq$\,360$^{\circ}$C. This indicates a slow transition from the rhombohedrally distorted $\alpha$-phase to the relaxed cubic $\beta$-phase. Something similar has been observed for larger GeTe nanoparticles, which were crystalline after synthesis.\cite{Polking2011} During the first cooling cycle, the gradual splitting of the (220)-peak towards the (024)/(220)-doublet can be seen. The related $\beta$-to-$\alpha$-transition temperature was $T_{C,2'}$\,=\,400$^{\circ}$C according to the diffractogram. A similar behavior can be found for heating and cooling 2, where the (024)/(220)-doublet transitions into the (220)-singlet at $T_{C,2}$\,=\,420$^{\circ}$C and the high-temperature phase transitions back into the room-temperature phase at $T_{C,2'}$\,=\,400$^{\circ}$C. Thus, the transition between the two C-GeTe phases is reversible for the sample based on nanoparticles from synthesis 1. \begin{figure} \includegraphics{xrd_spectra_nps-hanbing-181113_v02-01.png} \caption{\label{fig:xrdhanbing} Structural evolution of GeTe nanoparticles from synthesis 1 with a diameter of 5.5\,$\pm$\,1.6\,nm [green histogram in Fig.~\ref{fig:synthesis}(f)]. Each XRD pattern is normalized to the maximum peak intensity ($I$/$I_{\textnormal{max}}$). The colors represent the scan temperature $T$, as shown on the left side. Upon heating, amorphous GeTe (A-GeTe) crystallizes into $\alpha$-GeTe, $T_{C,1}$\,=\,240$^{\circ}$C, and relaxes into the $\beta$-phase. The transition between $\alpha$- and $\beta$-GeTe continues during further heating and cooling with $T_{C,2}$\,=\,420$^{\circ}$C and $T_{C,2'}$\,=\,400$^{\circ}$C.
The small peak at 2$\theta$\,$\approx$\,27$^{\circ}$ (black asterisk) could indicate a small amount of crystalline Te or Ge impurity, \textit{cf.} Fig.~\ref{fig:tempcurve}(b).} \end{figure} \begin{figure} \includegraphics{xrd_spectra_nps-y1215-01.png} \caption{\label{fig:xrdy1215} Structural evolution of A-GeTe nanoparticles prepared by synthesis 2 and with a diameter of 4.8\,$\pm$\,0.6\,nm [blue histogram in Fig.~\ref{fig:synthesis}(f)]. Normalization, temperature profile, and color code are similar to Fig.~\ref{fig:xrdhanbing}. Upon heating the A-GeTe nanoparticles, they crystallize into $\alpha$-GeTe at $T_{C,1}$\,=\,240$^{\circ}$C and relax into the $\beta$-phase at $T_{C,2}$\,=\,400$^{\circ}$C. The transition between $\alpha$- and $\beta$-GeTe continues during further heating and cooling with $T_{C,2}$\,=\,$T_{C,2'}$\,=\,390$^{\circ}$C. We ascribe the small peak at 2$\theta$\,$\approx$\,27.2$^{\circ}$ (black asterisk) to a small amount of crystalline Te or Ge impurity, \textit{cf.} Fig.~\ref{fig:tempcurve}(b). The small peak at 2$\theta$\,$\approx$\,25.7$^{\circ}$ (black rhombus) could not be matched to any of the given reference diffractograms (\textit{cf.} Appendix~\ref{sec:App2}). However, carbon shows a strong peak at this angle and could have contaminated the sample.} \end{figure} The aforementioned reversible change is less evident for the (003)/(021)-doublet to (111)-singlet transition at 2$\theta$\,=\,24$\ldots$27$^{\circ}$. This was expected due to the low intensity of these peaks in the reference [\textit{cf.} Fig.~\ref{fig:tempcurve}(b)]. Nevertheless, the transition is visible, but does not allow for a determination of the transition temperatures $T_{C,2}$ and $T_{C,2'}$. \subsection{\label{sec:level2resultMY}\textit{In-situ} XRD on nanoparticles with narrower size distribution} In Figs.~\ref{fig:xrdy1215} and \ref{fig:xrdy877} the diffractograms of the nanoparticles from synthesis 2 are shown. These two samples had a smaller and larger average size and a narrower size distribution than the GeTe nanoparticles discussed in the previous section. While the XRD patterns were collected, normalized, and plotted similarly to Fig.~\ref{fig:xrdhanbing}, it is obvious that the diffractograms in Figs.~\ref{fig:xrdy1215} and \ref{fig:xrdy877} for the amorphous samples with 100$^{\circ}$C\,$\leq$\,$T$\,$\leq$\,250$^{\circ}$C are noisier. One possible reason could be that the total amount of sample from synthesis 2 was lower compared to samples from synthesis 1 due to different sample contributions from the TOP and oleic acid ligands (\textit{cf.} Section \ref{sec:level2Synth}). Another reason could be the normalization $I$/$I_{\textnormal{max}}$. This leads to a pronounced increase of the signal-to-noise ratio (SNR) for XRD patterns above $T_{C,1}$, where $I_{\textnormal{max}}$\,$\gg$\,$I$, but does not affect the SNR of amorphous diffractograms (\textit{cf.} Appendix~\ref{sec:App1}). In Fig.~\ref{fig:xrdy1215}, the first sign of crystallization into $\alpha$-GeTe can be observed for $T$\,=\,240$^{\circ}$C, which matches what we already observed in Fig.~\ref{fig:xrdhanbing}. Again, the peak width indicates a coalescence of particles prior to or during crystallization. The XRD pattern for the next temperature step already shows a clear (024)/(220)-doublet. Further heating leads to a quick divergence of the two peaks, while the high-angle peak seems to overlap with another small peak.
At the same time, the (202) peak shifts quite strongly to lower angles and a weak (003)/(021)-doublet becomes visible. The latter shows a prompt divergence and an overlapping additional peak (for $T$\,=\,370\,-\,390$^{\circ}$C) as well. The described trends remain for all three diffractive signatures of the $\alpha$-phase until $T$\,=\,400$^{\circ}$C. For this temperature, a prompt shift of the central peak occurs and both doublet-to-singlet transitions are observed. Thus, we define the onset of the $\alpha$-to-$\beta$-transition at $T_{C,2}$\,=\,400$^{\circ}$C. Further heating leads to a relaxation of the peaks and the subsequent cooling shows the smooth and reversible transition between $\beta$- and $\alpha$-GeTe. This behavior is similar to what was seen in Fig.~\ref{fig:xrdhanbing}. Only the transition temperatures are slightly lower with $T_{C,2'}$\,=\,390$^{\circ}$C and $T_{C,2}$\,=\, $T_{C,2'}$ from there on. It has to be noted that the (024)/(220)-doublet shows very small split peaks. This could be related to the fact that we used CuK$_{\alpha,1}$ and CuK$_{\alpha,2}$. Since the peaks are sharper compared to what we found for the diffractograms in Fig.~\ref{fig:xrdhanbing}, this effect might appear more strongly for the sample investigated here. Furthermore, the narrower peaks shown in Fig.~\ref{fig:xrdy1215} indicate a larger crystallite size $D$; thus, more pronounced coalescence than observed for the GeTe particles from synthesis 1 can be assumed. This is surprising since oleic acid ligands are used for the particles from synthesis 2. These molecules are much longer than the TOP ligands, which are shown in Fig.~\ref{fig:synthesis}(c), and thus, a more pronounced nanoparticle separation and potentially less coalescence could be expected.\cite{Yarema2018} Similar to the diffractograms in Fig.~\ref{fig:xrdhanbing}, a small additional peak not matching the $\alpha$- or $\beta$-lattice was found. It appears at 2$\theta$\,$\approx$\,27.2$^{\circ}$ for $T$\,$\geq$\,310$^{\circ}$C, but it disappears for $T$\,$\geq$\,380$^{\circ}$C and does not reoccur throughout the following temperature treatment. Due to the angular position of this peak, it could indicate traces of crystalline Te or Ge in the sample. Since the most pronounced peaks for both crystalline patterns are almost overlapping [\textit{cf.} Fig.~\ref{fig:tempcurve}(b)], an unambiguous identification is not possible. Nevertheless, Te impurities have been reported for annealed initially amorphous GeTe nanoparticles.\cite{Caldwell2010} Additionally, segregated Te has been observed as a result of surface oxidation of GeTe (\textit{cf.} Appendix~\ref{sec:App2}).\cite{Kolb2019} In Fig.~\ref{fig:xrdy877}, a very weak crystalline signal can already be identified for $T$\,=\,100$^{\circ}$C. The (202)-peak is very broad and flat, thus indicating a much smaller $D$ than what we found for the smaller GeTe nanoparticles (\textit{cf.} Figs.~\ref{fig:xrdhanbing} and \ref{fig:xrdy1215}). Due to this initial (partial) crystallization of the particles, the determination of $T_{C,1}$ is difficult. Therefore, we use the enhancement of this hump at 2$\theta$\,$\approx$\,29.6$^{\circ}$ and the simultaneous onset of a crystalline signal in the high-angle range, which we use to define $T_{C,1}$\,=\,210$^{\circ}$C. However, this is less reliable than $T_{C,1}$ defined for the samples based on the smaller GeTe nanoparticles discussed above. Further heating leads to more pronounced peaks for the three considered angular ranges.
Thereby, the (202)-peaks narrow, while the peak width $w_{\textnormal{obs}}$ remains relatively broad compared to what is shown in Figs.~\ref{fig:xrdhanbing} and \ref{fig:xrdy1215}. This would imply that coalescence did not progress as far as for these samples. If we were to apply the simplified logic that longer ligands result in a lower degree of particle sintering, we would expect $D_{\textnormal{5.5}}$\,>\,$D_{\textnormal{6.9}}$ (particles with an average diameter $d$\,$\approx$\,5.5\,nm and TOP ligands \textit{versus} particles with $d$\,$\approx$\,6.9\,nm and oleate ligands). Similar to what we observed before, the (202)- and (200)-peaks, respectively, shift smoothly. However, during the first heating, we cannot determine $T_{C,2}$. Instead, during cooling a broadening and subsequent peak splitting from the (220)-singlet to the (024)/(220)-doublet can be observed, starting at $T_{C,2'}$\,=\,350$^{\circ}$C. The following $\alpha$-to-$\beta$-transition sets in at $T_{C,2}$\,=\,370$^{\circ}$C during the second heating cycle, with the transition back to $\alpha$ at the same temperature during the subsequent cooling. The (003)/(021)-doublet cannot be observed before the end of the last cooling, \textit{i.e.} $T$\,=\,75$^{\circ}$C. \begin{figure} \includegraphics{xrd_spectra_nps-y877-01.png} \caption{\label{fig:xrdy877} Structural evolution of A-GeTe nanoparticles from synthesis 2 with a diameter of 6.9\,$\pm$\,0.9\,nm [red histogram in Fig.~\ref{fig:synthesis}(f)]. Normalization, temperature profile, and color code are similar to Figs.~\ref{fig:xrdhanbing} and \ref{fig:xrdy1215}. Upon heating, A-GeTe crystallizes at $T_{C,1}$\,=\,210$^{\circ}$C. Presumably, $\alpha$-GeTe relaxes into $\beta$-GeTe since the transition back to the $\alpha$ phase can be found at $T_{C,2'}$\,=\,350$^{\circ}$C. During further heating and cooling the crystalline-to-crystalline transitions can be found at $T_{C,2}$\,=\,$T_{C,2'}$\,=\,370$^{\circ}$C.} \end{figure} \section{\label{sec:level1outlook}Conclusion} We synthesized ultrasmall nanoparticles of the phase-change material GeTe with diameters below 10\,nm. In the literature, these sizes have been identified as showing size-dependent material properties. We studied the crystallization behavior of drop-casted nanoparticle films using \textit{in-situ} XRD while heating the films under a nitrogen atmosphere. All nanoparticle-based samples showed crystallization to $\alpha$-GeTe followed by a crystalline-to-crystalline transition to the high-temperature $\beta$-phase of GeTe. During cooling, this transition was reversible and could be repeated for a second heating and cooling cycle. All samples showed increased crystallization temperatures $T_{C,1}$ and $T_{C,2}$ compared to bulk GeTe. \begin{figure} \includegraphics{overview_transition-temps_v03-01.png} \caption{\label{fig:Toverview} (a) Comparison of the GeTe transition temperatures $T_{C,1}$ (disk) and $T_{C,2}$ (diamonds) determined for different particle sizes $d$ and film thicknesses $t$. The colors represent the different characterization techniques and samples. The larger the symbol area, the higher the applied heating rate $\vartheta$. Exemplary values of $\vartheta$ are noted in the plot. (b) A magnified plot for the section in (a) where most $T_{C,1}$ were determined for small $d$.} \end{figure} Fig.~\ref{fig:Toverview} compares our results to the values obtained by previous studies (\textit{cf.} Tab.~\ref{tab:temp}).
While we observed coalescence, similar to the literature, we list the transition temperatures as a function of the initial particle size $d$. Fig.~\ref{fig:Toverview} reveals that the literature values follow a general trend of an increasing $T_{C,1}$ and $T_{C,2}$ for decreasing $d$ below 10\,nm. However, quantification of the size-dependent $T_{C,1}$ and $T_{C,2}$ is complicated by coalescence. We propose to perform a similar study with separated nanoparticles, \textit{e.g.} by using atomic layer deposition. Furthermore, \textit{in-situ} TEM, \textit{in-situ} Raman spectroscopy, and ultrafast DSC could give further insights into the crystallization of individual nanoparticles.\cite{Chen2016,Polking2011,Pries2019} This would be especially interesting for GeTe, since a decrease of $T_{C,1}$ has been found for decreasing particle size $d$ for Ge$_2$Sb$_2$Te$_5$, which is one of the most prominent phase-change materials.\cite{Shportko2009,Wuttig2017} In contrast, Ref.~\onlinecite{Raoux2008} reported an increased $T_{C,1}$ for sputter-deposited doped GeSb, Sb$_2$Te, and Ge$_2$Sb$_2$Te$_5$ films with thicknesses $t$\,<\,10\,nm. Nevertheless, Ref.~\onlinecite{Simpson2010} showed for Ge$_2$Sb$_2$Te$_5$ that this behavior can be ascribed to (capping-dependent) strain in the thin films. The crystallization of nanoparticles will likely be influenced by the large surface-to-volume ratio and presumably, significant strain. Moreover, Kolb \textit{et al.} reported on the nucleation of non-oxidized bulk GeTe at 230$^{\circ}$C. In this context, the crystallization at 180$^{\circ}$C was found to be induced by surface oxidation and related elemental segregation, which led to Te serving as nucleation sites.\cite{Kolb2019} Apart from the initial crystallization, our study focused on the reversible $\alpha$-to-$\beta$-transition which we observed for the three nanoparticle samples. These particles were initially amorphous, synthesized following two different protocols, and had different diameters. The transition temperatures were higher compared to the thin films and crystalline nanoparticles discussed in the literature (\textit{cf.} Tab.~\ref{tab:temp}), even if particle coalescence is assumed. It is promising that samples based on very small solution-deposited amorphous nanoparticles still show reversible phase change behavior along with an increased bandgap and tunable refractive index.\cite{Michel2020} This is potentially useful for the scalability of active photonic, phase-change random access memory, and optical data storage. \begin{acknowledgments} This project was funded by the European Research Council under the European Union's Seventh Framework Program (FP/2007-2013)/ERC Grant Agreement Number 339905 (QuaDoPS Advanced Grant). A.-K.U.M. acknowledges funding from the ETH Zurich Postdoctoral Fellowship Program and the Marie Curie Actions for People COFUND Program (Grant 17-1 FEL-51). M.Y. acknowledges funding from the SNF Ambizione Fellowship (No.\,161249). The authors thank H.~Rojo Sanz, S.~Meyer, and I.~Giannopoulos for technical assistance and P.~Knüsel, M.D.~Wörle, and J.~Schawe for fruitful discussions. TEM measurements were performed at the Scientific Center for Optical and Electron Microscopy (ScopeM) at ETH Zurich. \end{acknowledgments} \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
1,108,101,562,672
arxiv
\section{Introduction} Reconfigurable intelligent surface (RIS) has received considerable attention owing to its ability to programmably change the propagation characteristics of the signal~\cite{9140329}. When it comes to millimeter-wave communications, RIS can make the signal bypass the blockage by reflecting the signal from the base station (BS) to the user equipment (UE)~\cite{9119122}. The advent of RIS has generated various new research topics, and one of them is channel estimation for RIS-aided systems. One of the major concerns in channel estimation for RIS-aided systems is that the RIS is added to the link between the BS and the UE, and this addition causes excessive beam training overhead for channel estimation. To address the issue of large overhead, channel estimation algorithms based on compressive sensing (CS)~\cite{9103231,9354904,chen2019channel} and atomic norm minimization (ANM)~\cite{he2021channel} have been introduced, assuming that signal paths are sparse. Channel estimation algorithms for RIS-aided systems in~\cite{9103231,9354904,he2021channel} commonly estimate angles-of-departure (AoDs) and angles-of-arrival (AoAs), and then estimate channel gains to construct the channel. Algorithms in~\cite{9103231,9354904} exploit CS for AoD/AoA estimation, but in this case, a grid-mismatch limits the estimation accuracy~\cite{5710590}. In~\cite{he2021channel}, the grid-mismatch is handled by ANM; however, the channel estimation becomes inaccurate when AoAs and AoDs are closely separated. The channel estimation for RIS-aided multi-user systems is proposed in~\cite{chen2019channel}, where the algorithm in~\cite{chen2019channel} jointly estimates multiple channels by using a CS-based multi-user joint channel estimator. Although there have been various studies on low-overhead channel estimation, the problem induced by the shortage of training beams and how it affects the channel estimation have not been discussed properly. In this paper, we propose an ANM-based low-overhead channel estimation for RIS-aided multiple-input-multiple-output (MIMO) systems. When the beam training is reduced, an erroneous multipath reception may occur and induce channel estimation failure. To tackle this issue, a training beamwidth adaptation is proposed to widen the beamwidth when there is less beam training. Then, the atomic norm of the channel for RIS-aided MIMO systems is defined, where defining the atomic norm is feasible when pilot signals received during beam training are compiled in a specified manner. A detailed explanation of the proposed algorithm is presented in the following sections. $\textit{Notations:}$ We use lower-case and upper-case bold characters to respectively represent vectors and matrices throughout this paper. $(\cdot)^{T}$, $(\cdot)^{H}$, and $(\cdot)^{*}$ respectively denote transpose, conjugate transpose, and complex conjugation. $\textrm{Tr}(\cdot)$ denotes the trace of a matrix, and $\textrm{diag}(\cdot)$ denotes the diagonal matrix whose diagonal entries equal the entries of a given vector. $\textrm{vec}(\cdot)$ denotes the vectorization of a given matrix. $\lVert \cdot \rVert_{\textrm{F}}$ denotes the Frobenius norm. The curled inequality symbol $\succeq$ denotes matrix inequality. If $\mathbf{A} \succeq \mathbf{B}$, the matrix $\mathbf{A}-\mathbf{B}$ is positive semidefinite. $\otimes$ and $\diamond$ respectively denote the Kronecker product and the Khatri-Rao product. $\mathbf{I}_{N}$ denotes the $N \times N$ identity matrix.
\section{Channel and Signal Model}\label{system model} We consider a downlink RIS-aided MIMO system, in which the BS transmits the signal to a RIS and the RIS reflects the signal to the UE. The BS, the RIS, and the UE are equipped with $M_{\textrm{B}}$, $M_{\textrm{R}}$, and $M_{\textrm{U}}$ antennas, respectively. The antenna arrays at the BS, the RIS, and the UE are uniform linear arrays (ULAs) with half-wavelength spacing. In this paper, the BS and the UE employ a full-complexity hybrid beamforming structure~\cite{hybrid}, where the BS and the UE are respectively equipped with $N_{\textrm{B}}$ and $N_{\textrm{U}}$ RF chains. The steering vector of a ULA with half-wavelength spacing, $\mathbf{a}(\theta)$, is \begin{equation}\label{steer_vec} \mathbf{a}(\theta) = [1,e^{j\pi\cos\theta},\ldots,e^{j\pi(M-1)\cos\theta}]^T \in \mathbb{C}^{M\times 1}, \end{equation} where $\theta$ denotes the steering direction, and $M$ denotes the number of antennas. A schematic of the RIS-aided MIMO system considered in this paper is given in Fig.~\ref{RIS}. Assuming all direct signal paths between the BS and the UE are blocked, the channel of the RIS-aided MIMO system can be represented as a cascade of two separate channels: the BS-to-RIS channel and the RIS-to-UE channel. The BS-to-RIS channel $\mathbf{H}_{\textrm{BR}}$ can be given by \begin{equation}\label{HBR} \begin{split} \mathbf{H}_{\textrm{BR}} &=\sum_{l=1}^{L_{\textrm{BR}}} \alpha_{\textrm{BR}}^{l} \mathbf{a}(\phi_{\textrm{BR}}^{l}) \mathbf{a}(\theta_{\textrm{BR}}^{l})^{H} \\ &= \mathbf{A}(\bm{\phi}_{\textrm{BR}}) \textrm{diag}(\bm{\rho}_{\textrm{BR}}) \mathbf{A}(\bm{\theta}_{\textrm{BR}})^{H} \in \mathbb{C}^{M_{\textrm{R}} \times M_{\textrm{B}}}, \end{split} \end{equation} where $L_{\textrm{BR}}$ denotes the number of signal paths between the BS and the RIS. $\alpha_{\textrm{BR}}^{l}$, $\phi_{\textrm{BR}}^{l}$, and $\theta_{\textrm{BR}}^{l}$ respectively denote the channel gain, the BS-to-RIS AoA, and the BS-to-RIS AoD of the $l$-th signal path. $\bm{\phi}_{\textrm{BR}}= \{ {\phi}^{1}_{\textrm{BR}},\ldots,{\phi}^{L_{\textrm{BR}}}_{\textrm{BR}} \}$ and $\bm{\theta}_{\textrm{BR}}= \{ {\theta}^{1}_{\textrm{BR}},\ldots,{\theta}^{L_{\textrm{BR}}}_{\textrm{BR}} \}$. $\mathbf{A}(\bm{\phi}_{\textrm{BR}})=[\mathbf{a}(\phi_{\textrm{BR}}^{1}),\ldots,\mathbf{a}(\phi_{\textrm{BR}}^{L_{\textrm{BR}}}) ] \in \mathbb{C}^{M_{\textrm{R}} \times L_{\textrm{BR}}}$, $\mathbf{A}(\bm{\theta}_{\textrm{BR}})=[\mathbf{a}(\theta_{\textrm{BR}}^{1}),\ldots,\mathbf{a}(\theta_{\textrm{BR}}^{L_{\textrm{BR}}}) ] \in \mathbb{C}^{M_{\textrm{B}} \times L_{\textrm{BR}}}$, and $\bm{\rho}_{\textrm{BR}}=[\alpha_{\textrm{BR}}^{1},\ldots,\alpha_{\textrm{BR}}^{L_{\textrm{BR}}}]^{T}$. The RIS-to-UE channel $\mathbf{H}_{\textrm{RU}}$ can be given by \begin{equation}\label{HRU} \begin{split} \mathbf{H}_{\textrm{RU}} &=\sum_{l=1}^{L_{\textrm{RU}}} \alpha_{\textrm{RU}}^{l} \mathbf{a}(\phi_{\textrm{RU}}^{l}) \mathbf{a}(\theta_{\textrm{RU}}^{l})^{H} \\ &= \mathbf{A}(\bm{\phi}_{\textrm{RU}}) \textrm{diag}(\bm{\rho}_{\textrm{RU}}) \mathbf{A}(\bm{\theta}_{\textrm{RU}})^{H} \in \mathbb{C}^{M_{\textrm{U}} \times M_{\textrm{R}}}, \end{split} \end{equation} where $L_{\textrm{RU}}$ denotes the number of signal paths between the RIS and the UE. $\alpha_{\textrm{RU}}^{l}$, $\phi_{\textrm{RU}}^{l}$, and $\theta_{\textrm{RU}}^{l}$ respectively denote the channel gain, the RIS-to-UE AoA, and the RIS-to-UE AoD of the $l$-th signal path.
$\bm{\phi}_{\textrm{RU}}= \{ {\phi}^{1}_{\textrm{RU}},\ldots,{\phi}^{L_{\textrm{RU}}}_{\textrm{RU}} \}$ and $\bm{\theta}_{\textrm{RU}}= \{ {\theta}^{1}_{\textrm{RU}},\ldots,{\theta}^{L_{\textrm{RU}}}_{\textrm{RU}} \}$. $\mathbf{A}(\bm{\phi}_{\textrm{RU}})=[\mathbf{a}(\phi_{\textrm{RU}}^{1}),\ldots,\mathbf{a}(\phi_{\textrm{RU}}^{L_{\textrm{RU}}}) ] \in \mathbb{C}^{M_{\textrm{U}} \times L_{\textrm{RU}}}$, $\mathbf{A}(\bm{\theta}_{\textrm{RU}})=[\mathbf{a}(\theta_{\textrm{RU}}^{1}),\ldots,\mathbf{a}(\theta_{\textrm{RU}}^{L_{\textrm{RU}}}) ] \in \mathbb{C}^{M_{\textrm{R}} \times L_{\textrm{RU}}}$, and $\bm{\rho}_{\textrm{RU}}=[\alpha_{\textrm{RU}}^{1},\ldots,\alpha_{\textrm{RU}}^{L_{\textrm{RU}}}]^{T}$. \begin{figure}[!t] \begin{center} \includegraphics[width=0.9\columnwidth]{./figure/figure1_v2.pdf} \caption{A scheme of the RIS-aided MIMO system. The signal path between the BS and the UE is blocked.} \label{RIS} \end{center} \end{figure} The RIS control matrix $\bm{\Omega}$ can be given by \begin{equation}\label{Omega} \bm{\Omega}= \begin{bmatrix} \beta_{1} e^{j \vartheta_{1}} & 0 & \ldots & 0 \\ 0 & \beta_{2} e^{j \vartheta_{2}} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \beta_{M_{\textrm{R}}} e^{j \vartheta_{M_{\textrm{R}}}} \end{bmatrix} \in \mathbb{C}^{M_{\textrm{R}} \times M_{\textrm{R}}}, \end{equation} where $\beta_{m}$ and $\vartheta_{m}$ respectively denote the reflection coefficient and the phase shift of the $m$-th antenna in the RIS. $\vartheta_{m} \in [0,2\pi)$, and $\beta_{m}$ can be either $0$ or $1$, where $0$ and $1$ respectively denote the deactivation and the activation of the $m$-th antenna in the RIS. For a convenient representation of $\bm{\Omega}$, a RIS control vector $\bm{\omega}$ is defined as \begin{equation}\label{vecOmega} \bm{\omega}=\left[\beta_{1} e^{j \vartheta_{1}},\beta_{2} e^{j \vartheta_{2}},\ldots,\beta_{M_{\textrm{R}}} e^{j \vartheta_{M_{\textrm{R}}}} \right]^{T} \in \mathbb{C}^{M_{\textrm{R}} \times 1}. \end{equation} Note that $\bm{\Omega}=\textrm{diag}(\bm{\omega})$. The cascaded channel of the RIS-aided MIMO system, $\mathbf{H}$, can be given by \begin{equation}\label{Cascade} \mathbf{H}=\mathbf{H}_{\textrm{RU}} \bm{\Omega} \mathbf{H}_{\textrm{BR}} \in \mathbb{C}^{M_{\textrm{U}} \times M_{\textrm{B}}}. \end{equation} The frame structure for the beam training procedure in RIS-aided MIMO systems is depicted in Fig.~\ref{Fig1}. To simplify notation, $M_{\textrm{B}}/N_{\textrm{B}}$ and $M_{\textrm{U}}/N_{\textrm{U}}$ are respectively defined as $P_{\textrm{B}}$ and $P_{\textrm{U}}$. There are $P_{\textrm{B}}$ precoding matrices and $P_{\textrm{U}}$ combining matrices, and $\mathbf{F}_{i} \in \mathbb{C}^{M_{\textrm{B}} \times N_{\textrm{B}}}$ and $\mathbf{C}_{j} \in \mathbb{C}^{M_{\textrm{U}} \times N_{\textrm{U}}}$ respectively denote the $i$-th precoding matrix and the $j$-th combining matrix. The RIS control matrix changes frame by frame, and the BS and the UE perform a total of $P_{\textrm{B}}P_{\textrm{U}}$ beam trainings in each frame. Letting $B$ denote the number of frames, the total number of beam trainings is $P = B P_{\textrm{B}}P_{\textrm{U}}$.
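The channel model above is purely algebraic and can be prototyped directly. The following is a minimal sketch in Python/NumPy of the steering vector \eqref{steer_vec}, the geometric channels \eqref{HBR}--\eqref{HRU}, and the cascaded channel \eqref{Cascade}; the array sizes, path counts, and the random draws of angles and gains are illustrative assumptions rather than part of the system model.
\begin{verbatim}
import numpy as np

def steer(theta, M):
    # ULA steering vector with half-wavelength spacing
    return np.exp(1j * np.pi * np.arange(M) * np.cos(theta))

def geometric_channel(M_rx, M_tx, L, rng):
    # H = A(phi) diag(rho) A(theta)^H with L paths
    phi   = rng.uniform(np.deg2rad(30), np.deg2rad(150), L)   # AoAs (assumed range)
    theta = rng.uniform(np.deg2rad(30), np.deg2rad(150), L)   # AoDs (assumed range)
    rho   = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    A_rx  = np.stack([steer(p, M_rx) for p in phi], axis=1)
    A_tx  = np.stack([steer(t, M_tx) for t in theta], axis=1)
    return A_rx @ np.diag(rho) @ A_tx.conj().T

rng = np.random.default_rng(0)
M_B, M_R, M_U = 8, 32, 4                     # illustrative antenna numbers
H_BR = geometric_channel(M_R, M_B, L=2, rng=rng)
H_RU = geometric_channel(M_U, M_R, L=2, rng=rng)
omega = np.exp(1j * rng.uniform(0, 2 * np.pi, M_R))   # all RIS antennas active
H = H_RU @ np.diag(omega) @ H_BR             # cascaded channel
\end{verbatim}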
The received pilot signal at the $b$-th frame that uses the $i$-th precoding matrix and the $j$-th combining matrix, $\mathbf{X}^{i,j}_{b}$, can be given by \begin{equation}\label{rxsignal1} \mathbf{X}^{i,j}_{b}=\mathbf{C}_{j}^{H} \mathbf{H}_{\textrm{RU}} \bm{\Omega}_{b} \mathbf{H}_{\textrm{BR}} \mathbf{F}_{i} \mathbf{S} + \mathbf{N}^{i,j}_{b} \in \mathbb{C}^{N_{\textrm{U}} \times D}, \end{equation} where $\bm{\Omega}_{b}$ denotes the RIS control matrix at the $b$-th frame, and $D$ denotes the number of signal samples per beam training. $\mathbf{S}=\left[\mathbf{s}_{1},\ldots,\mathbf{s}_{N_{\textrm{B}}} \right]^{T} \in \mathbb{C}^{N_{\textrm{B}} \times D}$, where $\mathbf{s}_{n}$ is the $n$-th unit-energy pilot signal satisfying $\mathbf{s}^{H}_{n}\mathbf{s}_{n}/D=1$. Note that the pilot signals are orthogonal to each other so that $\mathbf{S}\mathbf{S}^{H}/D=\mathbf{I}_{N_{\textrm{B}}}$. $\mathbf{N}^{i,j}_{b} \in \mathbb{C}^{N_{\textrm{U}} \times D}$ is a noise matrix whose entries follow a circularly-symmetric complex Gaussian distribution with mean $0$ and variance $\sigma^{2}$. \begin{figure}[!t] \includegraphics[width=1\columnwidth]{./figure/figure2_v2.pdf} \caption{A frame structure for the beam training procedure in RIS-aided MIMO communication.} \label{Fig1} \end{figure} \begin{figure*}[!t] \begin{center} \captionsetup[subfigure]{justification=centering} \begin{subfigure}[t]{0.82\columnwidth} \includegraphics[width=\columnwidth,center]{./figure/no_BW_widening-eps-converted-to.pdf} \caption{Radiation patterns without training beamwidth adaptation}\label{2a} \end{subfigure} \begin{subfigure}[t]{0.82\columnwidth} \includegraphics[width=\columnwidth,center]{./figure/BW_widening-eps-converted-to.pdf} \caption{Radiation patterns with training beamwidth adaptation}\label{2b} \end{subfigure} \caption{Radiation patterns of beams created by the RIS when $L_{\textrm{BR}}=1$, $L_{\textrm{RU}}=3$, $M_{\textrm{R}}=16$, and $B=10$. For successful multipath signal reception, every RIS-to-UE AoD has to be captured within one of the beams.} \label{all} \end{center} \end{figure*} After receiving $\mathbf{X}^{i,j}_{b}$, it is multiplied by $\mathbf{S}^{H}$ to filter the noise. We define this filtered signal $\mathbf{Y}^{i,j}_{b}$ as \begin{equation}\label{rxsignal2} \mathbf{Y}^{i,j}_{b}=\frac{\mathbf{X}^{i,j}_{b}\mathbf{S}^{H}}{D}=\mathbf{C}_{j}^{H} \mathbf{H}_{\textrm{RU}} \bm{\Omega}_{b} \mathbf{H}_{\textrm{BR}} \mathbf{F}_{i} + \frac{\mathbf{N}^{i,j}_{b}\mathbf{S}^{H}}{D} \in \mathbb{C}^{N_{\textrm{U}} \times N_{\textrm{B}}}. \end{equation} The $P_{\textrm{B}}P_{\textrm{U}}$ filtered signals received at the $b$-th frame are organized as \begin{equation}\label{merged} \mathbf{Y}_{b}= \begin{bmatrix} \mathbf{Y}^{1,1}_{b} & \mathbf{Y}^{2,1}_{b} & \cdots & \mathbf{Y}^{P_{\textrm{B}},1}_{b} \\ \mathbf{Y}^{1,2}_{b} & \mathbf{Y}^{2,2}_{b} & \cdots & \mathbf{Y}^{P_{\textrm{B}},2}_{b} \\ \vdots & \vdots & \cdots & \vdots \\ \mathbf{Y}^{1,P_{\textrm{U}}}_{b} & \mathbf{Y}^{2,P_{\textrm{U}}}_{b} & \cdots & \mathbf{Y}^{P_{\textrm{B}},P_{\textrm{U}}}_{b} \end{bmatrix} \in \mathbb{C}^{M_{\textrm{U}} \times M_{\textrm{B}}}, \end{equation} where $\mathbf{Y}_{b}$ is a compilation of all filtered signals at the $b$-th frame.
$\mathbf{Y}_{b}$ can also be represented as \begin{equation}\label{merged2} \mathbf{Y}_{b}=\mathbf{C}^{H} \mathbf{H}_{\textrm{RU}} \bm{\Omega}_{b} \mathbf{H}_{\textrm{BR}} \mathbf{F} + \mathbf{V}_{b}, \end{equation} where $\mathbf{F} \in \mathbb{C}^{M_{\textrm{B}} \times M_{\textrm{B}}}$ and $\mathbf{C} \in \mathbb{C}^{M_{\textrm{U}} \times M_{\textrm{U}}}$ respectively denote a full-rank precoding matrix and a full-rank combining matrix. $\mathbf{V}_{b} \in \mathbb{C}^{M_{\textrm{U}} \times M_{\textrm{B}}}$ is a matrix that represents the remaining noise. \section{The Proposed Low-Overhead Channel Estimation for RIS-aided MIMO Systems} \subsection{Compilation of Filtered Signals and Its Representation via Kronecker Product and Khatri-Rao Product} To define the atomic norm of the channel, we compile the filtered signals and present an organized representation of the compiled signals. For the compilation, the following two properties are employed. \begin{itemize} \item Property 1: $\textrm{vec}\left(\mathbf{A}\, \textrm{diag}(\mathbf{b})\, \mathbf{C} \right)=(\mathbf{C}^{T} \diamond \mathbf{A})\mathbf{b}.$ \item Property 2: $(\mathbf{AB} \diamond \mathbf{CD})=(\mathbf{A} \otimes \mathbf{C})(\mathbf{B} \diamond \mathbf{D}).$ \end{itemize} Definitions and properties of the Kronecker product and the Khatri-Rao product are well explained in~\cite{KR}. Letting the column vector $\mathbf{y}_{b}$ equal $\textrm{vec}(\mathbf{Y}_{b})$, $\mathbf{y}_{b}$ can be represented as follows by using Property 1: \begin{equation} \mathbf{y}_{b}=\textrm{vec}(\mathbf{Y}_{b})=\left(\mathbf{F}^{T} \mathbf{H}^{T}_{\textrm{BR}} \diamond \mathbf{C}^{H} \mathbf{H}_{\textrm{RU}}\right) \bm{\omega}_{b} + \mathbf{v}_{b} \in \mathbb{C}^{M_{\textrm{B}}M_{\textrm{U}} \times 1}, \end{equation} where $\bm{\omega}_{b}$ denotes the RIS control vector at the $b$-th frame, and $\mathbf{v}_{b}=\textrm{vec}(\mathbf{V}_{b})$. Then, we form a matrix $\bm{\mathcal{Y}}$ by stacking $\mathbf{y}_{b}$ for $b=1,\ldots,B$ as follows: \begin{equation} \begin{split} \bm{\mathcal{Y}}&=\left[\mathbf{y}_{1},\ldots,\mathbf{y}_{B} \right]\\&=\left(\mathbf{F}^{T} \mathbf{H}^{T}_{\textrm{BR}} \diamond \mathbf{C}^{H} \mathbf{H}_{\textrm{RU}}\right) \mathbf{W} + \bm{\mathcal{V}} \in \mathbb{C}^{M_{\textrm{B}}M_{\textrm{U}} \times B}, \end{split} \end{equation} where $\mathbf{W}=[\bm{\omega}_{1},\ldots,\bm{\omega}_{B}] \in \mathbb{C}^{M_{\textrm{R}} \times B}$ and $\bm{\mathcal{V}}=[\mathbf{v}_{1},\ldots,\mathbf{v}_{B}] \in \mathbb{C}^{M_{\textrm{B}}M_{\textrm{U}} \times B}$. By using Property 2, $\bm{\mathcal{Y}}$ can also be represented as \begin{equation} \bm{\mathcal{Y}}=\left(\mathbf{F}^{T} \otimes \mathbf{C}^{H}\right) \left( \mathbf{H}^{T}_{\textrm{BR}} \diamond \mathbf{H}_{\textrm{RU}} \right) \mathbf{W} + \bm{\mathcal{V}}. \end{equation} Here, $\mathbf{H}^{T}_{\textrm{BR}} \diamond \mathbf{H}_{\textrm{RU}} \in \mathbb{C}^{M_{\textrm{B}}M_{\textrm{U}} \times M_{\textrm{R}}}$ contains channel information that is independent of $\mathbf{F}$, $\mathbf{C}$, and $\mathbf{W}$. Once $\mathbf{H}^{T}_{\textrm{BR}} \diamond \mathbf{H}_{\textrm{RU}}$ is successfully estimated, the optimal RIS control matrix that maximizes the SNR can be derived by applying the singular value decomposition (SVD) to $\mathbf{H}^{T}_{\textrm{BR}} \diamond \mathbf{H}_{\textrm{RU}}$~\cite{he2021channel}.
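Property 1 and Property 2 are the only algebraic identities used in the above compilation, and both are easy to check numerically. The following is a minimal sketch in Python/NumPy; the dimensions are arbitrary test values chosen for illustration, and the matrices in the Property 2 check are renamed to avoid clashing with those of the Property 1 check.
\begin{verbatim}
import numpy as np

def khatri_rao(A, B):
    # column-wise Kronecker product (Khatri-Rao product)
    return np.stack([np.kron(A[:, k], B[:, k]) for k in range(A.shape[1])], axis=1)

def crandn(*shape, rng):
    # complex standard normal array
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

rng = np.random.default_rng(1)
m, n, p, q, r = 4, 5, 3, 6, 2          # arbitrary test dimensions
A = crandn(m, n, rng=rng)
C = crandn(n, p, rng=rng)
b = crandn(n, rng=rng)

# Property 1: vec(A diag(b) C) = (C^T <> A) b   (column-major vectorization)
lhs1 = (A @ np.diag(b) @ C).flatten(order='F')
rhs1 = khatri_rao(C.T, A) @ b
print(np.allclose(lhs1, rhs1))          # True

# Property 2 (renamed matrices): (A B) <> (D E) = (A kron D)(B <> E)
B_ = crandn(n, q, rng=rng)
D  = crandn(p, r, rng=rng)
E  = crandn(r, q, rng=rng)
lhs2 = khatri_rao(A @ B_, D @ E)
rhs2 = np.kron(A, D) @ khatri_rao(B_, E)
print(np.allclose(lhs2, rhs2))          # True
\end{verbatim}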
Throughout this paper, we define $\mathbf{H}^{T}_{\textrm{BR}} \diamond \mathbf{H}_{\textrm{RU}}$ as the \textit{effective channel} $\mathbf{H}_{\textrm{eff}}$, which is the goal of channel estimation for RIS-aided MIMO systems. With (\ref{HBR}), (\ref{HRU}), and Property 2, $\bm{\mathcal{Y}}$ can be fully unfolded as \begin{equation} \begin{split} \bm{\mathcal{Y}}&=\left(\mathbf{F}^{T} \otimes \mathbf{C}^{H}\right) \left( \mathbf{A}(\bm{\theta}_{\textrm{BR}})^{*} \otimes \mathbf{A}(\bm{\phi}_{\textrm{RU}}) \right)\\ & \left(\textrm{diag}(\bm{\rho}_{\textrm{BR}}) \otimes \textrm{diag}(\bm{\rho}_{\textrm{RU}}) \right) \left( \mathbf{A}(\bm{\phi}_{\textrm{BR}})^{T} \diamond \mathbf{A}(\bm{\theta}_{\textrm{RU}})^{H} \right) \mathbf{W} + \bm{\mathcal{V}}. \end{split} \end{equation} To simplify $\mathbf{A}(\bm{\phi}_{\textrm{BR}})^{T} \diamond \mathbf{A}(\bm{\theta}_{\textrm{RU}})^{H} \in \mathbb{C}^{L_{\textrm{BR}}L_{\textrm{RU}} \times M_{\textrm{R}}}$, $\bm{\varphi}$ is defined as \begin{equation} \begin{split} \bm{\varphi}= \{ \varphi_{i,j}&=\cos^{-1} (\cos \theta^{j}_{\textrm{RU}} - \cos \phi^{i}_{\textrm{BR}} ): \\ & i=1,\ldots,L_{\textrm{BR}},\ j=1,\ldots,L_{\textrm{RU}}\}. \end{split} \end{equation} Then, $\mathbf{A}(\bm{\phi}_{\textrm{BR}})^{T} \diamond \mathbf{A}(\bm{\theta}_{\textrm{RU}})^{H}$ can be rewritten as \begin{equation} \begin{split} &\mathbf{A}(\bm{\phi}_{\textrm{BR}})^{T} \diamond \mathbf{A}(\bm{\theta}_{\textrm{RU}})^{H}=\mathbf{A}(\bm{\varphi})^{H}\\&=\left[\mathbf{a}(\varphi_{1,1}),\ldots,\mathbf{a}(\varphi_{1,L_{\textrm{RU}}}),\ldots,\mathbf{a}(\varphi_{L_{\textrm{BR}},1}),\ldots,\mathbf{a}(\varphi_{L_{\textrm{BR}},L_{\textrm{RU}}}) \right]^{H}. \end{split} \end{equation} \subsection{Robust Multipath Signal Reception When Using Fewer Beam Trainings via Training Beamwidth Adaptation} One common way to control the RIS is to use a discrete Fourier transform (DFT) matrix~\cite{5707050}. The $N \times N$ DFT matrix $\bm{\Psi}_{N}$ can be given by \begin{equation} \bm{\Psi}_{N}= \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & e^{j \frac{2\pi}{N}} & \cdots & e^{j \frac{2\pi (N-1)}{N}} \\ 1 & e^{j \frac{4\pi}{N}} & \cdots & e^{j \frac{4\pi (N-1)}{N}} \\ \vdots & \vdots & \cdots & \vdots \\ 1 & e^{j 2\pi} & \cdots & e^{j 2\pi(N-1)} \end{bmatrix} \in \mathbb{C}^{N \times N}. \end{equation} For the mainlobes of the beams to cover the entire angular domain, $\mathbf{W}$ should be equal to $\bm{\Psi}_{M_{\textrm{R}}}$, so that $B=M_{\textrm{R}}$. However, considering $P = B P_{\textrm{B}}P_{\textrm{U}}$, $B$ should be reduced in order to prevent the beam training overhead from becoming excessively large. In this subsection, we discuss the erroneous multipath signal reception that occurs when $B < M_{\textrm{R}}$ and how to resolve it. The simplest way of determining the RIS control vectors when $B < M_{\textrm{R}}$ is to select $B$ columns from $\bm{\Psi}_{M_{\textrm{R}}}$ as in~\cite{9354904}, but the lack of beams can cause erroneous multipath signal reception, as illustrated in Fig.~\ref{2a}. Fig.~\ref{2a} shows the case of erroneous multipath signal reception in which one of the RIS-to-UE signal paths does not fall within the mainlobes of the $B$ beams. In this case, the channel estimation fails, since signals from all paths are required for perfect channel estimation. To address the issue of erroneous multipath signal reception, a training beamwidth adaptation is proposed to make the multipath signal reception robust when $B<M_{\textrm{R}}$.
The beamwidth of the RIS beams can be widened by deactivating part of the RIS, and the matrix $\mathbf{W}$ that contains $B$ widened beams can be given by \begin{equation} \mathbf{W}= \begin{bmatrix} \bm{\Psi}_{B} \\ \mathbf{O}_{M_{\textrm{R}}-B,B} \end{bmatrix} \in \mathbb{C}^{M_\textrm{R} \times B}, \end{equation} where $\mathbf{O}_{M,N}$ denotes an $M \times N$ zero matrix. Fig.~\ref{2b} shows the beams that are widened by the training beamwidth adaptation when $M_{\textrm{R}}=16$ and $B=10$. In Fig.~\ref{2b}, the mainlobes of the $B$ beams cover the entire angular domain so that every RIS-to-UE signal path is captured within one of the $B$ beams, although the beam gain decreases. \subsection{Atomic Norm Minimization-based Channel Estimation for RIS-aided MIMO Systems} The simplest approach to estimating the effective channel is to use the least squares (LS) estimator. If $\mathbf{F}^{T} \otimes \mathbf{C}^{H}$ and $\mathbf{W}$ are both full-rank matrices, the effective channel can be estimated via the LS estimator as follows: \begin{equation} \hat{\mathbf{H}}_{\textrm{eff}}^{\textrm{LS}}=\left(\mathbf{F}^{T} \otimes \mathbf{C}^{H}\right)^{-1} \bm{\mathcal{Y}} \mathbf{W}^{-1}, \end{equation} where $\hat{\mathbf{H}}_{\textrm{eff}}^{\textrm{LS}}$ is the effective channel estimated by the LS estimator. The LS estimator requires at least $M_{\textrm{R}} P_{\textrm{B}}P_{\textrm{U}}$ beam trainings, since $B$ should be no smaller than $M_{\textrm{R}}$ to make $\mathbf{W}$ a full-rank matrix. $\mathbf{F}^{T} \otimes \mathbf{C}^{H}$ is full-rank since $\mathbf{F}$ and $\mathbf{C}$ are both full-rank matrices. It is worth noting that the beam training overhead of the LS estimator is generally large, considering that $M_{\textrm{R}}$ can be several hundreds in practice~\cite{9086766}. On the other hand, the effective channel can be estimated by ANM even when $B<M_{\textrm{R}}$, provided that the signal paths are sparse and all multipath signals are received properly. In this subsection, the atomic norm of the effective channel is defined; defining a proper atomic norm leads to accurate channel estimation. \begin{figure}[!t] \includegraphics[width=1\columnwidth]{./figure/figure3_v3.pdf} \caption{A summary of the proposed ANM-based low-overhead channel estimation for RIS-aided MIMO systems.} \label{Fig3} \end{figure} We assume the signal paths are sparse so that $B>L_{\textrm{BR}}L_{\textrm{RU}}$. To simplify the equations and notation, we define $\mathbf{G}$ and $\mathbf{Z}$ as follows: \begin{equation} \begin{split} \mathbf{G}=\left\{ \left(\mathbf{F}^{T} \otimes \mathbf{C}^{H}\right)^{-1} \bm{\mathcal{Y}} \right\}^{H}=\mathbf{W}^{H} \mathbf{Z} + \mathbf{E} \in \mathbb{C}^{B \times M_{\textrm{B}}M_{\textrm{U}}}, \end{split} \end{equation} \begin{equation} \begin{split} \mathbf{Z}&=\mathbf{H}_{\textrm{eff}}^{H}=\left( \mathbf{H}^{T}_{\textrm{BR}} \diamond \mathbf{H}_{\textrm{RU}} \right)^{H}\\ &=\mathbf{A}(\bm{\varphi}) \left(\textrm{diag}(\bm{\rho}_{\textrm{BR}}) \otimes \textrm{diag}(\bm{\rho}_{\textrm{RU}}) \right)^{H} \left( \mathbf{A}(\bm{\theta}_{\textrm{BR}})^{*} \otimes \mathbf{A}(\bm{\phi}_{\textrm{RU}}) \right)^{H}. \end{split} \end{equation} Here, $\mathbf{E}=\{ \left(\mathbf{F}^{T} \otimes \mathbf{C}^{H}\right)^{-1} \bm{\mathcal{V}} \}^{H}$. For the estimation of $\mathbf{Z}$, an atomic set $\mathcal{A}$ is defined as follows.
\begin{equation}\label{atom} \mathcal{A}=\left\{\mathbf{a}(\theta)\mathbf{b}^{T} \in \mathbb{C}^{M_{\textrm{R}} \times M_{\textrm{B}}M_{\textrm{U}}}: 0^{\circ} < \theta < 180^{\circ}, \lVert \mathbf{b} \rVert_{2}=1 \right\}, \end{equation} where $\mathbf{a}(\theta)\mathbf{b}^{T}$ is defined as an atom, and $\mathbf{Z}$ can be represented as a linear combination of atoms. Properties of the atom defined in~(\ref{atom}) and its atomic norm have been studied in seminal works on ANM~\cite{9016105,7484756,7313018}. The atomic norm of $\mathbf{Z}$, $\lVert \mathbf{Z} \rVert_{\mathcal{A}}$, can be represented by the following semidefinite program (SDP)~\cite{9016105}: \begin{equation} \begin{split} \lVert \mathbf{Z} \rVert_{\mathcal{A}}=&\min_{\mathbf{u},\mathbf{T}} \; \frac{1}{2M_{\textrm{R}}} \textrm{Tr}(\textrm{Toep}(\mathbf{u}))+ \frac{1}{2} \textrm{Tr}(\mathbf{T}) \\ &\; \textrm{s.t.} \begin{bmatrix} \textrm{Toep}(\mathbf{u}) & \mathbf{Z} \\ \mathbf{Z}^{H} & \mathbf{T} \end{bmatrix} \succeq 0, \end{split} \end{equation} where $\textrm{Toep}(\mathbf{u})$ denotes the Hermitian Toeplitz matrix whose first column equals $\mathbf{u}$. With the ANM denoising theorem studied in~\cite{7313018}, $\mathbf{Z}$ can be estimated from $\mathbf{G}$ by \begin{equation}\label{final} \hat{\mathbf{Z}}=\argmin_{\bar{\mathbf{Z}}} \; \tau \lVert \bar{\mathbf{Z}} \rVert_{\mathcal{A}} + \frac{1}{2} \lVert \mathbf{G}-\mathbf{W}^{H}\bar{\mathbf{Z}} \rVert_{\textrm{F}}^{2}, \end{equation} where $\hat{\mathbf{Z}}$ and $\bar{\mathbf{Z}}$ respectively denote the estimate of $\mathbf{Z}$ and the optimization variable for the estimation of $\mathbf{Z}$. The regularization parameter $\tau$ is set as in~\cite{7313018}: \begin{equation}\label{tau} \begin{split} \tau=\frac{\sigma}{\sqrt{D}}&\Big(1+\frac{1}{\log M_{\textrm{R}}} \Big)^{\frac{1}{2}} \Big( M_{\textrm{B}}M_{\textrm{U}} + \log(\alpha M_{\textrm{B}}M_{\textrm{U}}) + \\ &\sqrt{2M_{\textrm{B}}M_{\textrm{U}} \log(\alpha M_{\textrm{B}}M_{\textrm{U}})} + \sqrt{\frac{\pi M_{\textrm{B}}M_{\textrm{U}}}{2}} +1 \Big)^{\frac{1}{2}}, \end{split} \end{equation} where $\alpha=8\pi M_{\textrm{R}} \log M_{\textrm{R}}$. Problem (\ref{final}) can be fully unfolded as \begin{equation}\label{final2} \begin{split} &\{\hat{\mathbf{u}},\hat{\mathbf{T}},\hat{\mathbf{Z}} \}=\\ & \argmin_{\mathbf{u},\mathbf{T},\bar{\mathbf{Z}}} \; \frac{\tau}{2M_{\textrm{R}}} \textrm{Tr}(\textrm{Toep}(\mathbf{u}))+ \frac{\tau}{2} \textrm{Tr}(\mathbf{T}) + \frac{1}{2} \lVert \mathbf{G}-\mathbf{W}^{H}\bar{\mathbf{Z}} \rVert_{\textrm{F}}^{2} \\ &\; \textrm{s.t.} \begin{bmatrix} \textrm{Toep}(\mathbf{u}) & \bar{\mathbf{Z}} \\ \bar{\mathbf{Z}}^{H} & \mathbf{T} \end{bmatrix} \succeq 0. \end{split} \end{equation} Finally, $\mathbf{H}_{\textrm{eff}}$ can be approximated as $\hat{\mathbf{Z}}^{H}$. The proposed ANM-based low-overhead channel estimation for RIS-aided MIMO systems is summarized in Fig.~\ref{Fig3}. \section{Simulation Results and Discussions}\label{simulation} In this section, the proposed algorithm is compared with \cite{9354904} and \cite{he2021channel}, and the LS estimator is added as a benchmark when $B=M_{\textrm{R}}$. Note that \cite{9354904} and \cite{he2021channel} also employ the training beamwidth adaptation in this simulation. The algorithm in \cite{9103231} is excluded from the comparison since its computation time becomes excessively high when it is adapted to RIS-aided MIMO systems.
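Before specifying the simulation parameters, we note that the regularization parameter $\tau$ in \eqref{tau} depends only on the array sizes, the noise level, and the number of pilot samples. A minimal sketch of its evaluation in Python/NumPy is given below; the values of $\sigma$ and $D$ are illustrative assumptions, while the array sizes match the simulation setup described next.
\begin{verbatim}
import numpy as np

def regularization_tau(sigma, D, M_B, M_U, M_R):
    # regularization parameter for the ANM denoising problem
    MbMu  = M_B * M_U
    alpha = 8 * np.pi * M_R * np.log(M_R)
    inner = (MbMu + np.log(alpha * MbMu)
             + np.sqrt(2 * MbMu * np.log(alpha * MbMu))
             + np.sqrt(np.pi * MbMu / 2) + 1)
    return sigma / np.sqrt(D) * np.sqrt(1 + 1 / np.log(M_R)) * np.sqrt(inner)

# sigma and D are assumed for illustration
tau = regularization_tau(sigma=1.0, D=16, M_B=8, M_U=4, M_R=32)
print(tau)
\end{verbatim}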
For the simulation, $M_{\textrm{B}}$, $M_{\textrm{R}}$, and $M_{\textrm{U}}$ are respectively set to 8, 32, and 4, and $N_{\textrm{B}}$ and $N_{\textrm{U}}$ are set to 4 and 2, so that there are 4 beam trainings per frame. $\alpha^{l}_{\textrm{BR}} \sim \mathcal{CN}(0,1)$ for $l=1,\ldots,L_{\textrm{BR}}$ and $\alpha^{l}_{\textrm{RU}} \sim \mathcal{CN}(0,1)$ for $l=1,\ldots,L_{\textrm{RU}}$. All AoAs and AoDs, i.e., $\phi_{\textrm{BR}}^{l}$, $\theta_{\textrm{BR}}^{l}$, $\phi_{\textrm{RU}}^{l}$, and $\theta_{\textrm{RU}}^{l}$, are chosen randomly from the interval $\left[30^{\circ},150^{\circ}\right]$. The SNR is defined as \begin{equation}\label{SNR} \textrm{SNR}=10 \log_{10} \frac{\left| \left( \sum_{l=1}^{L_{\textrm{RU}}} \alpha_{\textrm{RU}}^{l} \right) \left( \sum_{l=1}^{L_{\textrm{BR}}} \alpha_{\textrm{BR}}^{l} \right) \right|^{2}}{\sigma^{2}} (\textrm{dB}). \end{equation} The SNR defined in~(\ref{SNR}) is the ratio of the signal power to the noise power when $M_{\textrm{B}}=M_{\textrm{R}}=M_{\textrm{U}}=1$. Note that the defined SNR does not depend on the precoding matrix, the combining matrix, or the RIS control matrix. The normalized mean squared error (NMSE) is defined as \begin{equation} \textrm{NMSE}=\frac{1}{Q}\sum_{q=1}^{Q} \frac{\lVert \hat{\mathbf{H}}^{q}_{\textrm{eff}} - \mathbf{H}^{q}_{\textrm{eff}} \rVert^{2}_{\textrm{F}}}{\lVert \mathbf{H}^{q}_{\textrm{eff}} \rVert^{2}_{\textrm{F}}}, \end{equation} where $Q$ is the number of Monte Carlo trials for the NMSE calculation and is set to 300. $\hat{\mathbf{H}}^{q}_{\textrm{eff}}$ and $\mathbf{H}^{q}_{\textrm{eff}}$ respectively denote the estimated effective channel and the actual effective channel in the $q$-th Monte Carlo trial. \begin{figure}[!t] \includegraphics[width=1\columnwidth]{./figure/NMSEvsDiff_0415_total-eps-converted-to.pdf} \caption{NMSE versus the interval between the two RIS-to-UE AoAs. The SNR is set to 0 dB. $L_{\textrm{BR}}=1$, $L_{\textrm{RU}}=2$, and $B=16$.} \label{NMSEDiff} \end{figure} Fig.~\ref{NMSEDiff} shows the NMSE versus the interval between the two RIS-to-UE AoAs when $L_{\textrm{BR}}=1$, $L_{\textrm{RU}}=2$, and $B=16$. The SNR is set to 0 dB. Considering that AoDs and AoAs are uniformly distributed over all angles~\cite{7501500}, the AoDs and AoAs of different signal paths may be closely separated. In \cite{9354904} and \cite{he2021channel}, the AoD/AoA estimation can be inaccurate when AoDs and AoAs are closely separated, and inaccurate AoD/AoA estimation leads to channel estimation failure. To show the correlation between the channel estimation accuracy and the separation between AoAs or AoDs, we analyze the NMSE with respect to the interval between the two RIS-to-UE AoAs. In Fig.~\ref{NMSEDiff}, the NMSEs of \cite{9354904} and \cite{he2021channel} are high when the two RIS-to-UE AoAs are closely separated, since the estimation of the RIS-to-UE AoAs fails. On the other hand, the NMSE of the proposed algorithm remains relatively low even when the interval between the two RIS-to-UE AoAs is small. When the interval between the two RIS-to-UE AoAs is $4^{\circ}$, the NMSEs of the proposed algorithm, \cite{9354904}, and \cite{he2021channel} are $0.014$, $0.29$, and $0.66$, respectively. \begin{figure}[!t] \includegraphics[width=1\columnwidth]{./figure/NMSEvsRISTrain_0415_total-eps-converted-to.pdf} \caption{NMSE versus the number of frames. The SNR is set to 0 dB. $L_{\textrm{BR}}=2$ and $L_{\textrm{RU}}=2$.
The number of frames equals the number of beams created by the RIS during beam training.} \label{NMSERISTrain} \end{figure} Fig.~\ref{NMSERISTrain} shows the NMSE versus the number of frames when $L_{\textrm{BR}}=2$ and $L_{\textrm{RU}}=2$. The SNR is set to 0 dB. To show that the training beamwidth adaptation ensures robust channel estimation when there is less beam training, the NMSE of the proposed algorithm without the training beamwidth adaptation is also presented in Fig.~\ref{NMSERISTrain}. Since a deficiency of beam training may cause erroneous multipath signal reception, the proposed algorithm without the training beamwidth adaptation shows a high NMSE when $B<M_{\textrm{R}}$. On the other hand, the NMSE of the proposed algorithm is lower than those of the other algorithms for every $B$. When $B$ reaches $M_{\textrm{R}}$, there is no need to broaden the beamwidth, so the NMSE of the proposed algorithm without the training beamwidth adaptation becomes equal to that of the proposed algorithm. Since the channel estimation is inaccurate when AoDs or AoAs are closely separated, the NMSEs of \cite{9354904} and \cite{he2021channel} remain high even when sufficient beam training is performed. \begin{figure}[!t] \includegraphics[width=1\columnwidth]{./figure/NMSEvsSNR_0415_total-eps-converted-to.pdf} \caption{NMSE versus SNR. $L_{\textrm{BR}}=1$, $L_{\textrm{RU}}=2$, and $B=16$.} \label{NMSESNR} \end{figure} Fig.~\ref{NMSESNR} shows the NMSE versus the SNR when $L_{\textrm{BR}}=2$, $L_{\textrm{RU}}=2$, and $B=16$. The NMSE of the proposed algorithm is lower than those of the other algorithms at every SNR. On the other hand, the NMSEs of \cite{9354904} and \cite{he2021channel} do not improve as the SNR increases and remain relatively high. As with the previous results, this is because the estimation in \cite{9354904} and \cite{he2021channel} fails when AoDs or AoAs are closely separated, and this failure is independent of the SNR. From Fig.~\ref{NMSERISTrain} and Fig.~\ref{NMSESNR}, we can conclude that the proposed algorithm shows the best channel estimation accuracy when the same amount of beam training and the same SNR are given. As shown in Fig.~\ref{NMSEDiff}, the superiority of the proposed algorithm is attributed to its robustness against closely separated AoDs and AoAs. Also, Fig.~\ref{NMSERISTrain} supports that the training beamwidth adaptation keeps the channel estimation accurate when there is less beam training. However, the channel estimation becomes inaccurate as the number of active RIS antennas decreases; this inaccuracy is induced by the decrease of the beam gain and the decrease of the number of detectable signal paths. Thus, how to maintain high channel estimation accuracy while drastically lowering the beam training overhead, as well as the relationship between the channel estimation accuracy and the number of active RIS antennas, needs to be discussed in further work. \section{Conclusions} In this paper, we propose a low-overhead channel estimation algorithm for RIS-aided MIMO systems. When there is less beam training, some multipath signals may not be received, which causes channel estimation failure. To address this issue, the beamwidth of the RIS beams is adaptively widened so that the beamwidth is inversely proportional to the number of beams created by the RIS. The atomic norm of the effective channel for RIS-aided MIMO systems is defined, where defining the atomic norm requires the compilation of the pilot signals received during beam training.
The effective channel is estimated by solving the SDP that represents the atomic norm. Simulation results show that the proposed channel estimation algorithm achieves the lowest NMSE among the compared algorithms for RIS-aided MIMO systems when the same amount of beam training and the same SNR are given. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{ieeetr}
\section{Introduction} \label{Intro} \numberwithin{equation}{section} The motion of a three-dimensional (3D) compressible viscous fluid without heat conductivity in the presence of a uniform gravitational field in a bounded domain $\Omega\subset {\mathbb R}^3$ with smooth boundary is governed by the following Navier-Stokes equations: \begin{equation}\label{0101}\left\{\begin{array}{l} \rho_t+\mathrm{div}(\rho{v})=0,\\[1mm] \rho v_t+\rho {v}\cdot\nabla {v}+\nabla p=\mu\Delta v+\mu_0\nabla\mathrm{div} v-\rho g{e}_3,\\[1mm] \rho e_t+\rho v\cdot\nabla e+p\mathrm{div}v= {\mu}|\nabla v+\nabla v^\mathrm{T}|^2/2+\lambda(\mathrm{div}v)^2. \end{array}\right.\end{equation} Here the unknowns $\rho:=\rho(t, x)$, $v:= v(t, x)$, $ e:= e(t, x)$ and $p=a\rho e$ denote the density, velocity, specific internal energy and pressure of the fluid respectively, $\mu_0=\mu+\lambda$ and $a=\gamma-1$. The known constants $\lambda$, $\mu$ and $\gamma$ are the viscosity coefficients and the ratio of specific heats satisfying the natural restrictions: $$\mu>0,\quad 3\lambda+2\mu\geq 0;\quad\gamma>1. $$ $g>0$ is the gravitational constant, ${e}_3=(0,0,1)^{\mathrm{T}}$ is the vertical unit vector, and $-g{e}_3$ is the gravitational force. In this paper we consider the problem of the Rayleigh-Taylor (RT) instability for the system \eqref{0101}. Thus, we choose a RT (steady-state) density profile $\bar{\rho}:=\bar{\rho}(x_3)$ which is independent of $(x_1,x_2)$ and satisfies \begin{eqnarray}\label{0102} \bar{\rho}\in C^{4}(\bar{\Omega}),\quad \inf_{ x\in \Omega}\bar{\rho}>0,\quad \bar{\rho}'(x_3^0)>0\; \mbox{ for some }x_3^0\in \{x_3~|~(x_1,x_2,x_3)\in \Omega\}, \end{eqnarray} where $\bar{\rho}':=\mathrm{d}\bar{\rho}/\mathrm{d}x_3$. We remark that the first condition in \eqref{0102} guarantees that the steady density profile belongs to some $C^0([0,T),H^3(\Omega))$, the second one in \eqref{0102} prevents us from treating vacuum in the construction of unstable solutions, while the third one in \eqref{0102} assures that there is at least a region in which the RT density profile has larger density with increasing $x_3$ (height), thus leading to the classical RT instability as will be shown in Theorem \ref{thm:0102} below. By the theory of first-order linear ODE, for given $\bar{\rho}$ in \eqref{0102} we can find a corresponding steady internal energy $\bar{e}$ that only depends on $x_3$ and is unique up to a constant divided by $\bar{\rho}$, i.e., $$\bar{e}=-g(a{\bar{\rho}})^{-1}{\int\bar{\rho}(x_3)\mathrm{d}x_3},$$ such that \begin{equation}\label{0104} 0<\bar{e}\in C^4(\bar{\Omega})\;\; \mbox{ and }\;\; \nabla \bar{p}=-\bar{\rho}g {e}_3\;\; \mbox{ in }\Omega, \end{equation} where $\bar{p}:=a\bar{\rho}\bar{e}$. Clearly, the RT density profile $(\bar{\rho},v\equiv 0,\bar{e})$ gives a steady state to the system \eqref{0101}. Now, we define the perturbation of $(\rho ,v,e)$ by $$ \varrho=\rho -\bar{\rho},\quad u= v- {0},\quad \theta=e-\bar{e}. $$ Then, the triple $(\varrho , u,\theta)$ satisfies the perturbed equations \begin{equation}\label{0105}\left\{\begin{array}{l} \varrho_t+\mathrm{div}((\varrho+\bar{\rho}){ u})=0, \\[1mm] (\varrho+\bar{\rho}){ u}_t+(\varrho+\bar{\rho}){ u}\cdot\nabla { u}+a\nabla [ {({\varrho}+\bar{\rho})(\theta+\bar{e})}-{\bar{\rho}\bar{e}}]=\mu\Delta{ {{u}}} +\mu_0\nabla\mathrm{div} u-g\varrho {e}_3,\\[1mm] \theta_t+ u\cdot\nabla (\theta+\bar{e})+a(\theta+\bar{e})\mathrm{div} u=\{ \mu |\nabla u+\nabla u^\mathrm{T}|^2/2 +\lambda(\mathrm{div} u)^2\}/(\varrho+\bar{\rho}). \end{array}\right. 
\end{equation} To complete the statement of the perturbed problem, we specify the initial and boundary conditions: \begin{equation}\label{0106} (\varrho,{ u},\theta)|_{t=0}=(\varrho_0,{ u}_0,\theta_0)\quad\mbox{in } \Omega \end{equation} and \begin{equation}\label{0107} { u}(t, {x})|_{\partial\Omega}={ 0}\quad \mbox{ for any }t>0. \end{equation} Moreover, the initial data should satisfy the compatibility condition $$ \{(\varrho_0+\bar{\rho}){u_0}\cdot\nabla {u_0}+a\nabla [ {({\varrho_0}+\bar{\rho})(\theta_0 +\bar{e})}-{\bar{\rho}\bar{e}}]\}|_{\partial\Omega}=(\mu\Delta{u_0} +\mu_0\nabla\mathrm{div} u_0 -g\varrho_0 {e}_3)|_{\partial\Omega}. $$ If we linearize the equations (\ref{0105}) around the steady state $(\bar{\rho}, {0},\bar{e})$, then the resulting linearized equations read as \begin{equation}\label{0108} \left\{\begin{array}{ll} \varrho_t+\mathrm{div}( \bar{\rho} u)=0, \\[1mm] \bar{\rho} u_t +a\nabla(\bar{e} {\varrho} +\bar{\rho}\theta)=\mu\Delta{ {{u}}}+\mu_0\nabla \mathrm{div} u-g\varrho {e}_3,\\[1mm] \theta_t+ \bar{e}'u_3+a\bar{e}\mathrm{div} u = 0. \end{array}\right.\end{equation} The RT instability is a well-known gravity-driven instability in fluid dynamics, which arises when a heavy fluid is on top of a light one. Instability of the linearized problem (i.e. linear instability) for an incompressible fluid was first studied by Rayleigh in 1883 \cite{RLAP}. In recent years, the study of the mathematical theory of the RT instability for fluid dynamics and magnetohydrodynamics (MHD), based on the (generalized) variational method, has attracted much attention, and some progress has been made. In 2003, Hwang and Guo \cite{HHJGY} first proved the nonlinear RT instability of $\|(\varrho, u)\|_{L^2(\Omega)}$ in the sense of Hadamard for a 2D nonhomogeneous incompressible inviscid fluid with boundary condition $ u\cdot {n}|_{\partial\Omega}=0$, where $\Omega=\{(x_1,x_2)\in \mathbb{R}^2~|~-l<x_2<m\}$ and $ {n}$ denotes the outer normal vector to $\partial\Omega$. Later, Jiang, Jiang and Ni \cite{NJTSC2} showed the nonlinear RT instability of $\|{u}_3\|_{L^2(\mathbb{R}^3)}$ for the Cauchy problem of nonhomogeneous incompressible viscous flows in the sense of the Lipschitz structure, and further gave the nonlinear RT instability of $\|u_3\|_{L^2(\Omega)}$ in \cite{JFJSWWWN} in the sense of Hadamard in an unbounded horizontally periodic domain $\Omega$. In addition, similar results on the nonlinear RT instability were established for two-layer incompressible viscous fluids with a free interface (so-called stratified fluids), where the RT steady-state solution is a denser fluid lying above a lighter one separated by a free interface and the domain is also a flat domain (such as $\mathbb{R}^3$ or a horizontally periodic domain); see \cite{PJSGOI5,wang2011viscous}. We mention that an analogue of the RT instability arises when the fluids are electrically conducting and a magnetic field is present, and the growth of the instability will be influenced by the magnetic field due to the generated electromagnetic induction and the Lorentz force. The aforementioned partial results on the RT instability have been extended to the case of MHD fluids by circumventing the additional difficulties induced by the presence of the magnetic field; see \cite{JFJSWWWOA,DRJFJS,JFJSWWWN} for examples.
All the results mentioned above are obtained for a flat or horizontally periodic domain, because in such a case one can apply the method of the Fourier transform (or the discrete modes $e^{i\xi\cdot x}$) to analyze the properties of the spectra of the associated linearized problems. This basic technique has also been applied to the instability study of other problems, for example, for the periodic BGK equilibria \cite{GYSWIC}, for the space periodic quasi-geostrophic equation \cite{FSNPVVNC}, for an ideal space periodic fluid \cite{FSSWVMNA,GECPAMO25,VMFSNC22} and for the space periodic and whole space forced incompressible MHD equations \cite{BIIS413,GDOS2}. Recently, Guo and Tice \cite{GYTI2} used a modified variational method to investigate an ODE problem arising in the study of the linear RT instability for compressible stratified flows. Motivated by their work, Jiang and Jiang \cite{JFJSO2014} adapted the modified variational method to avoid the Fourier transform and constructed unstable linear solutions of a nonhomogeneous incompressible viscous flow in a general bounded domain $\Omega$, and they proved the nonlinear RT instability by developing a new energy functional to overcome the difficulty induced by the compatibility conditions on the boundary under the restriction \begin{equation}\label{lowbounded} \inf_{x\in\Omega}\{\bar{\rho}'(x)\} >0. \end{equation} In contrast to the incompressible fluid case, there are very few results on the nonlinear RT instability for compressible flows, which are much more complicated and involved to deal with mathematically due to the difficulties induced by compressibility; hence, new techniques have to be employed (see Remarks \ref{strongconden}, \ref{rem:w0102} and the paragraph below Remark \ref{rem:n0103} for more comments). In \cite{HHVQ} Hwang investigated the nonlinear RT instability of a compressible inviscid MHD fluid in a periodic domain. We also mention that there are some articles studying the role of compressibility effects on the linear RT instability; we refer to \cite{GYTI1,HYHXWJZHCC,lafay2007compressibility,gauthier2010compressibility,LDCP1,livescu2005comment} for more details. The above-mentioned nonlinear RT instability results are concerned either with incompressible flows or with compressible isentropic flows in a spatially periodic domain. To the best of our knowledge, there is no result on the nonlinear RT instability for compressible non-isentropic flows in a general bounded domain. In this paper we shall prove the nonlinear RT instability for the initial-boundary value problem (\ref{0105})--(\ref{0107}) of a compressible non-isentropic flow without heat diffusion in a general bounded domain in the sense of Hadamard. Moreover, we shall show that the sharp growth rate of solutions to the linearized problem \eqref{0108} is not less than that of the solutions in the corresponding incompressible fluid case \cite{JFJSO2014}; this means that the compressibility does not have a stabilizing effect in the linearized problem \eqref{0106}--\eqref{0108} (see also Remark \ref{rem:0101}). Besides, the condition \eqref{lowbounded} is not needed in the proof of the nonlinear instability. The current work is a further continuation of our previous study \cite{JFJSO2014}, where incompressible fluids were investigated. Before stating the main result of this paper, we explain the notation used throughout this paper.
For simplicity, we drop the domain $\Omega$ in Sobolev spaces and the corresponding norms as well as in integrands over $\Omega$, for example, \begin{equation*} \begin{aligned}& L^p:=L^p(\Omega),\quad {H}^1_0:=W^{1,2}_0(\Omega),\;\; {H}^k:=W^{k,2}(\Omega),\;\; \int:=\int_\Omega . \end{aligned}\end{equation*} In addition, a product space $(X)^n$ of vector functions is still denoted by $X$; for example, the vector function $ u\in (H^2)^3$ is denoted by $ u\in H^2$ with norm $\| u\|_{H^2}:=(\sum_{k=1}^3\|u_k\|_{H^2}^2)^{1/2}$. We shall use the abbreviations: $$D^k:=\{\partial_{x_{1}}^{k_1}\partial_{x_{2}}^{k_2}\partial_{x_{3}}^{k_3}\}_{k_1+k_2+k_3=k},\quad |\|D^k f|\|:=\sum_{k_1+k_2+k_3=k}|\|\partial_{x_{1}}^{k_1}\partial_{x_{2}}^{k_2}\partial_{x_{3}}^{k_3}f|\| \;\mbox{ for some norm }|\|\cdot|\|.$$ Now we are able to state our main result on the nonlinear RT instability of the problem \eqref{0105}--\eqref{0107}. \begin{thm}\label{thm:0102} Assume that the RT density profile $\bar{\rho}$ and the steady internal energy $\bar{e}$ satisfy \eqref{0102}--\eqref{0104}. Then, the steady state $(\bar{\rho}, 0,\bar{e})$ of the system (\ref{0105})--(\ref{0107}) is unstable in the Hadamard sense, that is, there are positive constants $\Lambda$, $m_0$, $\varepsilon$ and $\delta_0$, and functions $(\bar{\varrho}_0,\bar{ u}_0,\bar{\theta}_0,{ u}_\mathrm{r})\in H^3$, such that for any $\delta\in (0,\delta_0)$ and the initial data $$ (\varrho_0, u_0,\theta_0):=\delta(\bar{\varrho}_0,\bar{u}_0,\bar{\theta}_0) +\delta^2(\bar{\varrho}_0,u_\mathrm{r},\bar{\theta}_0)\in H^3, $$ there is a unique solution $({\varrho},u,\theta)\in C^0([0,T^{\max}),H^3)$ of \eqref{0105}--\eqref{0107} satisfying the compatibility condition and \begin{equation}\label{0115} \|(u_1,u_2)(T^\delta)\|_{L^2},\ \|{u}_3(T^\delta)\|_{L^2}\geq {\varepsilon}\; \end{equation} for some escape time $T^\delta:=\frac{1}{\Lambda}\mathrm{ln}\frac{2\varepsilon}{m_0\delta}\in (0,T^{\max})$, where $T^{\max}$ denotes the maximal time of existence of the solution $(\varrho, u,\theta)$, and $u_i$ denotes the $i$-th component of $u=(u_1,u_2,u_3)^\mathrm{T}$. \end{thm} \begin{rem}\label{strongconden} Under the assumption of Theorem \ref{thm:0102}, if we further assume that \begin{equation}\label{dengeq0} \bar{\rho}'\geq 0, \end{equation} then we can get the instability of the perturbed density, i.e., Theorem \ref{thm:0102} holds with $\|\varrho(T^\delta)\|_{L^2}\geq {\varepsilon}$. The additional condition \eqref{dengeq0} is used to show $\tilde{ \varrho}_0:=\mathrm{div}(\bar{\rho}\tilde{v}_0)\not\equiv 0$ in the construction of a linear unstable solution (cf. \eqref{qh0209}), where $(\tilde{\varrho}_0,\tilde{ v}_0)$ is a solution to the time-independent system \eqref{0109}. It is not clear to the authors whether one could get $\tilde{ \varrho}_0\not\equiv 0$ without the condition \eqref{dengeq0}. In the incompressible fluid case, however, we can obtain $\tilde{ \varrho}_0\not\equiv 0$ without \eqref{dengeq0}. \end{rem} \begin{rem}\label{rem:0101} The constant $\Lambda >0$ in Theorem \ref{thm:0102} is called the sharp growth rate, since any solution $(\hat\varrho,\hat u,\hat\theta)$ to \eqref{0106}--\eqref{0108} satisfies $\|(\hat\varrho,\hat u,\hat\theta)(t)\|_{H^2}^2\leq Ce^{2\Lambda t}\|(\varrho_0,u_0,\theta_0)\|_{H^2}^2$ for some constant $C$ (see Appendix). Moreover, it is uniquely defined by the relation \eqref{sharprate}.
Recently, we proved the nonlinear RT instability in nonhomogeneous incompressible viscous fluids in \cite{JFJSO2014}, where the sharp growth rate $\Lambda_\mathrm{inc}$ is defined by \begin{equation*} \Lambda^2_\mathrm{inc}:=\sup_{\tilde{ v}\in \{H_0^1~|~\mathrm{div}\tilde{v}=0,\ \int \bar{\rho}|\tilde{v}|^2=1\}} \left\{g\int \bar{\rho}'\tilde{{v}}_3^2\mathrm{d}x-\Lambda_\mathrm{inc}\mu\int|\nabla \tilde{ v}|^2\mathrm{d} x\right\}. \end{equation*} If we consider the incompressible fluid case corresponding to \eqref{0106}--\eqref{0108}, we easily find that $\Lambda_\mathrm{inc}$ is also the sharp growth rate in that case. On the other hand, by the relation \eqref{sharprate}, one easily gets $\Lambda\geq \Lambda_\mathrm{inc}$ by contradiction. Hence, we can conclude that the compressibility does not have a stabilizing effect in the linearized problem for compressible non-isentropic flows without heat conductivity. \end{rem} \begin{rem}\label{rem:w0102} Let $\bar{e}>0$ be a constant and $\bar{\rho}$ satisfy $$\sup_{x\in \Omega}\bar{\rho}'<0\;\;\mbox{ and }\;\;\nabla \bar{p}=-\bar{\rho}ge_3,$$ then the linearized system \eqref{0106}--\eqref{0108} around $(\bar{\rho},0,\bar{e})$ is stable; more precisely, any solution to the system \eqref{0106}--\eqref{0108} satisfies $$\int\left(\frac{\varrho^2}{-\bar{\rho}'}+\frac{\bar{\rho}u^2}{g}+ \frac{\bar{\rho}\theta^2}{g\bar{e}}\right)\mathrm{d}x+ \int_0^t\int\left(\frac{2\mu}{g}|\nabla u|^2 +\frac{2\mu_0}{g}|\mathrm{div}u|^2\right)\mathrm{d}x\mathrm{d}t=\int\left(\frac{\varrho^2_0}{-\bar{\rho}'} +\frac{\bar{\rho}u^2_0}{g}+ \frac{\bar{\rho}\theta^2_0}{g\bar{e}}\right)\mathrm{d}x.$$ However, it is not clear to the authors whether the corresponding nonlinear system \eqref{0105}--\eqref{0107} around the state $(\bar{\rho},0,\bar{e})$ is stable, even if $\bar{\rho}'$ is a positive constant. We mention that the stability of a nonhomogeneous incompressible viscous flow around some steady state $(\bar{\rho},0)$ with $\bar{\rho}'$ being a positive constant was shown by making use of the incompressible condition $\mathrm{div}u=0$; see \cite[Theorem 1.2]{JFJSO2014} for details. \end{rem} \begin{rem}\label{rem:n0103} We remark that our results cannot be generalized to the case with heat conduction, i.e., adding the term $\kappa_\nu \Delta e$ to the right-hand side of the equation \eqref{0105}$_3$, where $\kappa_\nu=\kappa/c_\nu$, and $\kappa$ is the heat conductivity coefficient and $c_\nu$ is the specific heat at constant volume, since there does not exist a steady solution $(\bar{\rho},0,\bar{e})$ satisfying \eqref{0102}, \eqref{0104} and $\Delta \bar{e}=0$. In fact, if such a steady solution existed, then $\bar{e}$ would have the form $$\bar{e}=c_1\int 1\mathrm{d}x_3=-g(a{\bar{\rho}}(x_3))^{-1}{\int\bar{\rho}(x_3)\mathrm{d}x_3}>0\quad\mbox{in } \Omega ,\quad \mbox{for some constant }c_1>0. $$ Thus, one has $ -{g\int\bar{\rho}(x_3)\mathrm{d}x_3}=ac_1{\bar{\rho}}(x_3) \int 1\mathrm{d}x_3,$ whence, $$0>-g\bar{\rho}(x_3)=ac_1{\bar{\rho}}(x_3)+ ac_1{\bar{\rho}}'(x_3)\int 1\mathrm{d}x_3= ac_1{\bar{\rho}}(x_3)+a{\bar{\rho}}'\bar{e} (x_3)>0\;\;\mbox{ for }\; x_3=x_3^0 ,$$ which obviously is a contradiction. \end{rem} Next, we sketch the main idea in the proof of Theorem \ref{thm:0102}. The proof is broken up into three steps.
Firstly, as in \cite{JFJSO2014} we make the following ansatz of growing mode solutions to the linearized problem: \begin{equation}\label{ansatzmode} (\varrho (x,t),u(x,t),\theta (x,t))=e^{\Lambda t}(\tilde{\rho}(x),\tilde{v}(x),\tilde{\theta}(x)) \quad\mbox{for some }\Lambda>0 \end{equation} and thus reduce \eqref{0108} to a time-independent PDE system for the unknown function $\tilde{ v}$. Then we adapt and modify the modified variational method in \cite{GYTI2} for the time-independent system to get a non-trivial solution $\tilde{ v}$ with a sharp growth rate $\Lambda$, which immediately implies that the linearized problem has an unstable solution of the form \eqref{ansatzmode}. This idea was probably first used by Guo and Tice to deal with an ODE problem arising in constructing unstable linear solutions, and it was later adapted by other researchers to treat other linear instability problems of viscous fluids; see \cite{JFJSWWWOA,JJTIIA}. Here we directly adapt this idea to the time-independent PDE system to avoid the use of the Fourier transform and to relax the restriction on domains. Secondly, we establish Gronwall-type energy estimates in the $H^3$-norm. Similar (global in time) estimates were obtained for the non-isentropic compressible Navier-Stokes equations with heat conductivity under the condition of small initial data and external forces \cite{MANTIC481,MANTIJ321}. Here we have to modify the arguments in \cite{MANTIC481,MANTIJ321} to deal with the compressible Navier-Stokes equations without heat conductivity. Namely, we control the sum $\bar{e}\varrho+\bar{\rho}\theta$ as one term (see \eqref{asnoenet}) instead of dividing it into two terms as in \cite{MANTIC481}; and we use the equations \eqref{0105}$_1$ and \eqref{0105}$_2$ independently to bound $\|\varrho\|_{H^3}$ and $\|\theta\|_{H^3}$ (i.e. Lemma \ref{lem:0301}), rather than coupling the equations together to bound $\|\varrho\|_{H^3}$ as in \cite{MANTIC481}. With these slight modifications in techniques, we can get the desired estimates. Finally, we use the framework of bootstrap arguments in \cite{GYHCSDDC} to show Theorem \ref{thm:0102}, and we have to circumvent two additional difficulties due to the presence of a boundary, which do not appear for the spatially periodic problems considered in \cite{GYHCSDDC}: (i) The idea of Duhamel's principle on the linear solution operator in \cite{GYHCSDDC} cannot be directly applied to our boundary problem, since the nonlinear term in \eqref{0105}$_2$ does not vanish on the boundary. To overcome this difficulty, we employ some specific energy estimates to replace Duhamel's principle (see Lemma \ref{erroestimate} on the error estimate for $\|(\varrho^{\mathrm{d}},u^{\mathrm{d}},\theta^{\mathrm{d}})\|^2_{L^2}$). (ii) At the boundary, the initial data of the linearized problem may not satisfy the compatibility condition imposed on the initial data of the corresponding nonlinear system \eqref{0105}--\eqref{0107}. To circumvent this difficulty, we use the elliptic theory to construct initial data for \eqref{0105}--\eqref{0107} that satisfy the compatibility condition and are close to the initial data of the linearized problem. We also mention that in \cite{JFJSO2014} the authors got around a similar problem of compatibility conditions for incompressible flows by imposing the condition \eqref{lowbounded} and introducing a new energy functional to show that the initial data of the linearized problem can be used as the initial data of the corresponding nonlinear incompressible system.
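To make the first step more transparent, we record a sketch of the elementary computation behind the elimination (the resulting system is derived again in Section \ref{sec:02}). Substituting the ansatz \eqref{ansatzmode} into \eqref{0108}$_1$ and \eqref{0108}$_3$ gives $$\Lambda\tilde{\rho}=-\mathrm{div}(\bar{\rho}\tilde{v}),\qquad \Lambda\tilde{\theta}=-(\bar{e}'\tilde{v}_3+a\bar{e}\,\mathrm{div}\tilde{v}),$$ and inserting these two identities into $\Lambda\times$\eqref{0108}$_2$, using $\bar{p}=a\bar{\rho}\bar{e}$ and $\bar{p}'=-\bar{\rho}g$, one arrives at $$\Lambda^2 \bar{\rho}\tilde{v}+\nabla\big( g\bar{\rho}\tilde{{v}}_3-(1+a)\bar{p}\,\mathrm{div}\tilde{v}\big) =\Lambda\mu\Delta\tilde{v}+\Lambda\mu_0\nabla \mathrm{div}\tilde{v}+g\big( \bar{\rho}'\tilde{{v}}_3 + \bar{\rho}\,\mathrm{div}\tilde{v} \big){e}_3,$$ which involves $\tilde{v}$ alone.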
The rest of this paper is organized as follows. In Section \ref{sec:02} we construct unstable linear solutions, while in Section \ref{sec:0312} we deduce the nonlinear energy estimates. Section \ref{sec:04} is dedicated to the proof of Theorem \ref{thm:0102}, and finally, in the Appendix we give a proof of the sharp growth rate of solutions to the linearized problem in the $H^2$-norm. \section{Linear instability}\label{sec:02} In this section, we adapt the modified variational method in \cite{GYTI2} to construct a solution to the linearized equations \eqref{0108} that has growing $H^3$-norm in time. We first make the growing normal mode ansatz \eqref{ansatzmode}. Substituting this ansatz into (\ref{0108}), one obtains the following time-independent system: \begin{equation}\label{0109} \left\{ \begin{array}{ll} \Lambda\tilde{\rho}+\mathrm{div}(\bar{\rho}\tilde{v})=0,\\[1mm] \Lambda \bar{\rho}\tilde{v}+a\nabla (\bar{e} \tilde{\rho} +\bar{\rho}\tilde{\theta})=\mu\Delta\tilde{ v}+\mu_0\nabla \mathrm{div}\tilde{v}-g \tilde{\rho}e_3,\\[1mm] \Lambda {\tilde{\theta}}+ \bar{e}'\tilde{v}_3 +a\bar{e}\mathrm{div}\tilde{v}= 0,\\[1mm] \tilde{ v}|_{\partial\Omega}= 0. \end{array} \right. \end{equation} Eliminating $\tilde{\rho}$ and $\tilde{\theta}$, one has \begin{equation}\label{0113} \left\{ \begin{array}{ll} \Lambda^2 \bar{\rho}\tilde{v}+\nabla( g\bar{\rho}\tilde{{v}}_3-(1+a)\bar{p}\mathrm{div}\tilde{v} )=\Lambda\mu\Delta\tilde{ {{v}}}+\Lambda\mu_0\nabla \mathrm{div}\tilde{v}+( g\bar{\rho}'\tilde{{v}}_3 +g \bar{\rho}\mathrm{div}\tilde{v} ){e}_3,\\[1mm] \tilde{ v}|_{\partial\Omega}=0, \end{array} \right. \end{equation} where $\tilde{v}_3$ denotes the third component of $\tilde{v}$. Following the basic idea of the modified variational method, we modify the boundary value problem \eqref{0113} as follows: \begin{equation}\label{nnn0113} \left\{ \begin{array}{ll} \Lambda^2 \bar{\rho}\tilde{v}+\nabla( g\bar{\rho}\tilde{{v}}_3-(1+a)\bar{p}\mathrm{div}\tilde{v} )=s\mu\Delta\tilde{ v}+s\mu_0\nabla \mathrm{div}\tilde{v}+ (g\bar{\rho}'\tilde{{v}}_3+g \bar{\rho}\mathrm{div}\tilde{v}) e_3 ,\\[1mm] \tilde{ v}|_{\partial\Omega}= 0. \end{array} \right. \end{equation} We remark that if $s=\Lambda$ (fixed point), then the problem (\ref{nnn0113}) becomes (\ref{0113}). Now, multiplying \eqref{nnn0113}$_1$ by $\tilde{ v}$ and integrating the resulting identity, we get \begin{equation}\begin{aligned} \label{js1} \Lambda^2 \int \bar{\rho}|\tilde{v}|^2\mathrm{d} x=&\int \{g\bar{\rho}'\tilde{{v}}_3^2 +[2g\bar{\rho}\tilde{{v}}_3-(1+a)\bar{p}\mathrm{div}\tilde{v}]\mathrm{div}\tilde{v}\}\mathrm{d} x\\ &-s\int\left(\mu|\nabla \tilde{ v}|^2+\mu_0 |\mathrm{div}\tilde{v}|^2\right)\mathrm{d} x. \end{aligned}\end{equation} We define $$ E_1(\tilde{ v})=\int \{g\bar{\rho}'\tilde{{v}}_3^2 +[2g\bar{\rho}\tilde{{v}}_3-(1+a)\bar{p}\mathrm{div}\tilde{v}]\mathrm{div}\tilde{v}\} \mathrm{d} x,$$ and $$E_2(\tilde{v})= \int(\mu|\nabla \tilde{v}|^2+\mu_0|\mathrm{div} \tilde{v}|^2)\mathrm{d} x.$$ Then the standard energy functional for the problem \eqref{nnn0113} is given by \begin{equation}\label{0204}E(\tilde{v})=E_1(\tilde{ v})- sE_2(\tilde{v})\end{equation} with an associated admissible set \begin{equation}\label{0205} \mathcal{A}:=\left\{\tilde{ v}\in H^1_0~\bigg|~J(\tilde{v}):=\int \bar{\rho}\tilde{ v}^2\mathrm{d} x=1\right\}.
\end{equation} Recalling (\ref{js1}), we can thus find $\Lambda$ by maximizing \begin{equation}\label{0206} \Lambda^2:=\sup_{\tilde{ v}\in \mathcal{A}}E(\tilde{ v}).\end{equation} Obviously, $\sup_{\tilde{ v}\in\mathcal{A}}E(\tilde{ v})<\infty$ for any $s\geq 0$. In order to emphasize the dependence of $E(\tilde{ v})$ upon $s>0$, we shall sometimes write \begin{equation*}E(\tilde{v},s):=E(\tilde{v})\quad\mbox{ and }\quad \alpha(s):=\sup_{\tilde{ v}\in \mathcal{A}}E(\tilde{ v},s). \end{equation*} Next we show that a maximizer of (\ref{0206}) exists and that the corresponding Euler-Lagrange equations are equivalent to (\ref{nnn0113}). \begin{pro}\label{pro:0201} Assume that $(\bar{\rho},\bar{e})$ satisfies \eqref{0102}--\eqref{0104}. Then for any fixed $s>0$, the following assertions hold. \begin{enumerate} \item[(1)] $E({\tilde{ v}})$ achieves its supremum on $\mathcal{A}$. \item[(2)] Let $\tilde{ v}_0$ be a maximizer and $\Lambda:=\sqrt{\sup_{\tilde{ v}\in\mathcal{A}}E(\tilde{ v})}>0$. Then $\tilde{ v}_0\in H^4$, and it satisfies the boundary value problem (\ref{nnn0113}) as well as \begin{equation}\label{qh0208} \tilde{ v}_{01}^2+\tilde{v}_{02}^2\not\equiv 0 . \end{equation} In addition, \begin{equation}\label{qh0209} \mathrm{div}(\bar{\rho}\tilde{ v}_0) \not\equiv 0, \mbox{ provided } \bar{\rho}'\geq 0. \end{equation} \end{enumerate} \end{pro} \begin{pf} (1) Let $\tilde{ v}_n\in \mathcal{A}$ be a maximizing sequence; then $E(\tilde{ v}_n)$ is bounded from below. This fact together with (\ref{0205}) implies that $\tilde{ v}_n$ is bounded in $H^1$. So, there exist a $\tilde{ v}_0\in H^1\cap\mathcal{A}$ and a subsequence (still denoted by $\tilde{ v}_n$ for simplicity), such that $\tilde{ v}_n\rightarrow \tilde{ v}_0$ weakly in $H^1$ and strongly in $L^2$. Moreover, by the lower semi-continuity, one has \begin{equation*} \begin{aligned} \sup_{\tilde{ v}\in \mathcal{A}}E(\tilde{ v}) =& \limsup_{n\rightarrow \infty}E(\tilde{ v}_n)\\ = &\lim_{n\rightarrow \infty} \int (g\bar{\rho}'\tilde{v}_{n3}^2 +2g\bar{\rho}\tilde{{v}}_{n3}\mathrm{div}\tilde{v}_n)\mathrm{d} x \\ & -\liminf_{n\rightarrow \infty} \int [(1+a)\bar{p}\mathrm{div}\tilde{v}_n\mathrm{div}\tilde{v}_n+s\left(\mu|\nabla \tilde{v}_n|^2+\mu_0 |\mathrm{div}\tilde{v}_n|^2\right)]\mathrm{d} x \\ \leq & E(\tilde{ v}_0)\leq \sup_{\tilde{ v}\in\mathcal{A}}E(\tilde{ v}), \end{aligned}\end{equation*} which shows that $E(\tilde{ v})$ achieves its supremum on $\mathcal{A}$. (2) To show the second assertion, we notice that since $E(\tilde{ v})$ and $J(\tilde{ v})$ are homogeneous of degree $2$, (\ref{0206}) is equivalent to \begin{equation}\label{0227} \Lambda^2=\sup_{\tilde{ v}\in {H}^1_0}\frac{E(\tilde{ v})}{J(\tilde{ v})}. \end{equation} For any $\tau\in \mathbb{R}$ and $ w\in {H}^1_0$, we take $\tilde{ w}(\tau):=\tilde{ v}_0+\tau w$. Then (\ref{0227}) gives \begin{equation*}E(\tilde{ w}(\tau))-\Lambda^2J(\tilde{ w}(\tau))\leq 0. \end{equation*} If we set $I(\tau)=E(\tilde{ w}(\tau))-\Lambda^2J(\tilde{ w}(\tau))$, then we see that $I(\tau)\in C^1(\mathbb{R})$, $I(\tau)\leq 0$ for all $\tau\in \mathbb{R}$ and $I(0)=0$. This implies $I'(0)=0$.
Hence, rearranging and integrating by parts (recall that $w$ vanishes on $\partial\Omega$), we arrive at \begin{equation}\label{weakform}\begin{aligned} & \int_\Omega \{s\mu\nabla\tilde{v}_0:\nabla w +[s\mu_0+(1+a)\bar{p}]\mathrm{div}\tilde{v}_0\mathrm{div} w\}\mathrm{d} x\\ &=\int_\Omega [g\bar{\rho}{\mathrm{div}} \tilde{ v}_0 e_3 + g \bar{\rho}'\tilde{{v}}_{03} e_3 -\nabla(g\bar{\rho}\tilde{v}_{03})-\Lambda^2\bar{\rho}\tilde{ v}_0]\cdot w \mathrm{d} x,\end{aligned} \end{equation} which shows that $\tilde{ v}_0$ is a weak solution to the boundary value problem \eqref{nnn0113}. Recalling that $0<\bar{p}\in C^4(\bar{\Omega})$, $\bar{\rho}\in C^4(\bar{\Omega})$ and $\tilde{v}_0\in H^1(\Omega)$, by a bootstrap argument and the classical elliptic theory, we infer from the weak form \eqref{weakform} that $\tilde{v}_0\in H^4(\Omega)$. Next we turn to the proof of \eqref{qh0208} and \eqref{qh0209} by contradiction. Suppose that $\tilde{ v}_{01}^2+\tilde{v}_{02}^2\equiv 0$ or $\mathrm{div}(\bar{\rho}\tilde{v}_{0})\equiv 0$; then \begin{equation}\label{horizve}\begin{aligned}0<\Lambda^2 =&\int \{g\bar{\rho}'\tilde{{v}}_{03}^2 +[2g\bar{\rho}\tilde{{v}}_{03}-(1+a)\bar{p}\partial_{x_3}\tilde{v}_{03}] \partial_{x_3}\tilde{v}_{03}\}\mathrm{d} x-s\int\left(\mu|\nabla \tilde{ v}_{03}|^2+\mu_0 |\partial_{x_3}\tilde{v}_{03}|^2\right)\mathrm{d} x\\ =&-\int (1+a)\bar{p}|\partial_{x_3}\tilde{v}_{03}|^2\mathrm{d} x-s\int\left(\mu|\nabla \tilde{ v}_{03}|^2+\mu_0 |\partial_{x_3}\tilde{v}_{03}|^2\right)\mathrm{d} x< 0, \end{aligned} \end{equation} or \begin{equation}\begin{aligned} \label{densityproe} 0<\Lambda^2 =&\int \{g\bar{\rho}'\tilde{{v}}_{03}^2 +[2g\bar{\rho}\tilde{{v}}_{03}-(1+a)\bar{p}\mathrm{div}\tilde{v}_{0}]\mathrm{div}\tilde{v}_0\}\mathrm{d} x-s\int\left(\mu|\nabla \tilde{ v}_0|^2+\mu_0 |\mathrm{div}\tilde{v}_0|^2\right)\mathrm{d} x\\ =&-\int[g\bar{\rho}'\tilde{{v}}_{03}^2 +(1+a)\bar{p}|\mathrm{div}\tilde{v}_0|^2]\mathrm{d} x-s\int\left(\mu|\nabla \tilde{ v}_0|^2+\mu_0 |\mathrm{div}\tilde{v}_0|^2\right)\mathrm{d} x<0 , \end{aligned}\end{equation} which is a contradiction. Therefore, \eqref{qh0208} and \eqref{qh0209} hold. This completes the proof. \hfill $\Box$ \end{pf} Next, we want to show that there is a fixed point such that $\Lambda=s>0$. To this end, we first give some properties of $\alpha(s)$ as a function of $s> 0$. \begin{pro}\label{pro:0202} Assume that $(\bar{\rho},\bar{e})$ satisfies \eqref{0102}--\eqref{0104}. Then the function $\alpha(s)$ defined on $(0,\infty)$ enjoys the following properties: \begin{enumerate}[\quad \ (1)] \item $\alpha(s)\in C_{\mathrm{loc}}^{0,1}(0,\infty)$ is nonincreasing. \item There are constants $c_1$, $c_2>0$ which depend on $g$, $\bar{\rho}$ and $\mu$, such that \begin{equation}\label{0210}\alpha(s)\geq c_1-sc_2 .\end{equation} \end{enumerate} \end{pro} \begin{pf} (1) Let $\{\tilde{ v}^n_{s_i}\}\subset\mathcal{A}$ be a maximizing sequence of $\sup_{\tilde{ v}\in\mathcal{A}}E(\tilde{ v},s_i)=\alpha(s_i)$ for $i=1$ and $2$. Then \begin{equation*} \alpha(s_1)\geq \limsup_{n\rightarrow\infty}E(\tilde{ v}_{s_2}^n,s_1) \geq \liminf_{n\rightarrow\infty}E(\tilde{ v}_{s_2}^n,s_2)=\alpha(s_2)\; \mbox{ for any }0<s_1<s_2<\infty. \end{equation*} Hence $\alpha(s)$ is nonincreasing on $(0,\infty)$. Next we use this fact to show the continuity of $\alpha(s)$. Let $I:=[b,c]\subset (0,\infty)$ be a bounded interval.
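In the estimates below we shall repeatedly use, besides the constraint $J(\tilde{v})=1$ on $\mathcal{A}$, the elementary pointwise bound $$2g\bar{\rho}\tilde{v}_3\,\mathrm{div}\tilde{v}\leq (1+a)\bar{p}\,|\mathrm{div}\tilde{v}|^2+\frac{g^2\bar{\rho}^2}{(1+a)\bar{p}}\,\tilde{v}_3^2,$$ which is the form of the Cauchy-Schwarz (Young) inequality invoked in the next display.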
Noting that, by Cauchy-Schwarz's inequality, \begin{equation*} \label{} \begin{aligned} E(\tilde{v}) \leq &\int (g\bar{\rho}'\tilde{{v}}_3^2 +2g\bar{\rho}\tilde{{v}}_3\mathrm{div}\tilde{v})\mathrm{d} x- (1+a)\int\bar{p}|\mathrm{div}\tilde{v}|^2\mathrm{d} x \\ \leq &{g}\left[\left\|\frac{\bar{\rho}'}{\bar{\rho}} \right\|_{L^\infty}+\frac{g}{(1+a)}\left\|\frac{\bar{\rho}}{\bar{p}}\right\|_{L^\infty}\right]. \end{aligned}\end{equation*} Hence, by the monotonicity of $\alpha (s)$ we have \begin{equation} \label{02321} |\alpha(s)|\leq \max\left\{|\alpha(b)|,{g}\left[\left\|\frac{\bar{\rho}'}{\bar{\rho}} \right\|_{L^\infty}+\frac{g}{(1+a)}\left\|\frac{\bar{\rho}}{\bar{p}}\right\|_{L^\infty}\right]\right\}:= L<\infty\quad\mbox{ for any }s\in I. \end{equation} On the other hand, for any $s\in I$, there exists a maximizing sequence $\{\tilde{ v}^n_{s}\}\subset\mathcal{A}$ of $\sup_{\tilde{ v}\in \mathcal{A}}E(\tilde{ v},s)$, such that \begin{equation}\label{0232}\begin{aligned}|\alpha(s)-E(\tilde{ v}_{s}^n,s)|<1 \end{aligned}.\end{equation} Making use of (\ref{0204}), (\ref{02321}) and (\ref{0232}), we infer that \begin{equation*}\begin{aligned}\label{0234}0\leq& \int\left(\mu |\nabla \tilde{ v}|^2+\mu_0 |\mathrm{div} \tilde{ v}|^2\right)\mathrm{d} x \\=&\frac{1}{s}\int \{g\bar{\rho}'|\tilde{{v}}_{s3}^n|^2 +[2g\bar{\rho}\tilde{{v}}_{s3}^n -(1+a)\bar{p}\mathrm{div}\tilde{v}_s^n]\mathrm{div}\tilde{v}_s^n\}\mathrm{d} x -\frac{E(\tilde{ v}_s^n,s)}{s} \\ \leq & \frac{1+L}{b}+\frac{g}{b}\left[\left\|\frac{\bar{\rho}'}{\bar{\rho}} \right\|_{L^\infty}+\frac{g}{(1+a)}\left\|\frac{\bar{\rho}}{\bar{p}}\right\|_{L^\infty}\right] :=K. \end{aligned}\end{equation*} Thus, for $s_i\in I$ ($i=1,2$), we further find that \begin{equation}\begin{aligned}\label{0235}\alpha(s_1)= \limsup_{n\rightarrow \infty}E(\tilde{ v}_{s_1}^n,s_1)\leq & \limsup_{n\rightarrow \infty}E(\tilde{ v}_{s_1}^n,s_2)\\ &+ |s_1-s_2|\limsup_{n\rightarrow \infty}\int_{\Omega}(\mu|\nabla \tilde{ v}_{s_1}^n|^2+\mu_0|\mathrm{div} \tilde{ v}_{s_1}^n|^2)\mathrm{d} x\\ \leq & \alpha(s_2)+K|s_1-s_2|. \end{aligned}\end{equation} Reversing the role of the indices $1$ and $2$ in the derivation of the inequality (\ref{0235}), we obtain the same boundedness with the indices switched. Therefore, we deduce that \begin{equation*}\begin{aligned}|\alpha(s_1)-\alpha(s_2)|\leq K|s_1-s_2|, \end{aligned}\end{equation*} which yields $\alpha(s)\in C_{\mathrm{loc}}^{0,1}(0,\infty)$. (2) We turn to prove \eqref{0210}. First we construct a function ${v}\in H_0^1$, such that \begin{equation}\label{0214} \mathrm{div}v=0,\quad \int\bar{\rho}'{ {v}}_{3}^2 \mathrm{d} x>0. \end{equation} Noting that since $\bar{\rho}'( x^0_3)>0$ for some point $ x^0_3\in \{x_3~|~(x_1,x_2,x_3)\in \Omega\}$, there is a ball $B_{ x_0}^{\delta}:=\{ x ~|~| x- x_0|<\delta\}\subset \Omega$, such that $\bar{\rho}'>0$ on $B_{ x_0}^{\delta}$. 
Now, choose a function $f(r)\in C^1(\mathbb{R})$, such that \begin{equation*}f(r)=-f(-r),\ |f(r)|>0\hbox{ if }0<|r|<{\delta}/{4},\ \mbox{ and }f(r)=0\mbox{ if }|r|\geq{\delta}/{4}, \end{equation*} and then define \begin{equation*}\bar{v}( x):=f(x_1)\left(0,- f(x_3)\int_{-{\delta/4}}^{x_2} f(r )\mathrm{d}r,f(x_2)\int_{-{\delta/4}}^{x_3}f(r)\mathrm{d}r\right).\end{equation*} A direct computation gives $\mathrm{div}\bar{v}=-f(x_1)f(x_2)f(x_3)+f(x_1)f(x_2)f(x_3)=0$, and it is easy to check that the non-zero function $\bar{v}( x)$ belongs to $H_0^1(B_{ 0}^{\delta})$; thus ${ v}:= \bar{v}( x- x_0)\in H_0^1(\Omega) $ satisfies \eqref{0214}. With \eqref{0214} to hand, one has \begin{equation*}\begin{aligned} \alpha(s)=& \sup_{\tilde{ v}\in \mathcal{A}}E(\tilde{ v},s)=\sup_{\tilde{ v}\in {H}^1_0}\frac{E(\tilde{ v},s)}{J(\tilde{ v})} \\ &\geq \frac{E({ { v}},s)}{J({ { v}})}= \frac{g \int\bar{\rho}' {v}_{3}^2\mathrm{d} x}{\int\bar{\rho}|{ v}|^2\mathrm{d} x} -s\frac{ \mu \int|\nabla { v}|^2\mathrm{d} x}{\int\bar{\rho}|{ v}|^2\mathrm{d} x}:= c_1-sc_2 \end{aligned}\end{equation*} for two positive constants $c_1:=c_1(g,\bar{\rho})$ and $c_2:=c_2(g,\mu,\bar{\rho})$. This completes the proof of Proposition \ref{pro:0202}. \hfill $\Box$ \end{pf} Next we show that there exists a function $\tilde{{v}}$ satisfying \eqref{0113} with a growth rate $\Lambda$. Let \begin{equation*}\label{}\mathfrak{S} :=\sup\{s~|~\alpha(\tau)>0\;\mbox{ for any }\tau\in (0,s)\}. \end{equation*} By virtue of Proposition \ref{pro:0202}, $\mathfrak{S}>0$; and moreover, $\alpha(s)>0$ for any $s<\mathfrak{S}$. Since $\alpha(s)=\sup_{\tilde{ v}\in\mathcal{A}}E(\tilde{ v},s)<\infty$, we make use of the monotonicity of $\alpha(s)$ to deduce that \begin{equation}\label{zero} \lim_{s\rightarrow 0}\alpha(s)\mbox{ exists and the limit is a positive constant.} \end{equation} On the other hand, by virtue of Poincar\'e's inequality, there is a constant $c_3$ depending on $g$, $\bar{\rho}$ and $\Omega$, such that $$ \begin{aligned} g\int( \bar{\rho}'\tilde{v}_3^2+2\bar{\rho}\tilde{{v}}_3\mathrm{div}\tilde{v})\mathrm{d} x\leq c_3 \int|\nabla\tilde{v}|^2\mathrm{d} x\quad\mbox{ for any }\tilde{ v}\in\mathcal{A}. \end{aligned}$$ Thus, if $s>c_3/\mu$, then $$g\int(\bar{\rho}'\tilde{{v}}_3^2 +2\bar{\rho}\tilde{{v}}_3\mathrm{div}\tilde{v})\mathrm{d} x-s\mu\int|\nabla \tilde{ v}|^2\mathrm{d} x<0\quad\mbox{ for any }\tilde{ v}\in\mathcal{A},$$ which implies that $$\alpha(s)\leq 0\quad \mbox{ for any } s>c_3/\mu. $$ Hence $\mathfrak{S}<\infty$, and moreover, \begin{equation}\label{zerolin} \lim_{s\rightarrow \mathfrak{S}}\alpha(s)=0. \end{equation} Now, employing a fixed-point argument and exploiting \eqref{zero}, \eqref{zerolin} and the continuity of $\alpha(s)$ on $(0,\mathfrak{S})$, we find that there exists a unique $\Lambda\in(0,\mathfrak{S})$, such that \begin{equation} \label{growth} \Lambda=\sqrt{\alpha(\Lambda)}=\sqrt{\sup_{\tilde{ w}\in \mathcal{A}}E(\tilde{ w}, \Lambda)}>0. \end{equation} In view of Proposition \ref{pro:0201}, there is a solution $\tilde{v}\in H^4$ to the boundary value problem (\ref{nnn0113}) with $\Lambda$ constructed in \eqref{growth}. Moreover, $\Lambda^2=E(\tilde{v},\Lambda)$, $\tilde{ v}_{1}^2+\tilde{v}_{2}^2\not\equiv 0$ and $\tilde{{v}}_3\not\equiv 0$ by \eqref{growth}, \eqref{0204} and Proposition \ref{pro:0201}. In addition, $\mathrm{div}(\bar{\rho}\tilde{v})\not\equiv 0$ provided $\bar{\rho}'\geq 0$. Thus we have proved \begin{pro}\label{pro:nnn0203} Assume that $(\bar{\rho},\bar{e})$ satisfies \eqref{0102}--\eqref{0104}.
Then there exists a $\tilde{v}\in H^{4}$ satisfying the boundary value problem \eqref{0113} with a growth rate $\Lambda>0$ defined by \begin{equation} \label{sharprate} \Lambda^2=\sup_{\tilde{ w}\in {H}_0^1(\Omega)}\frac{E_1(\tilde{ w}) -\Lambda E_2(\tilde{ w})}{\int\bar{\rho}|\tilde{ w}|^2\mathrm{d} x}. \end{equation} Moreover, $\tilde{v}$ satisfies $\tilde{ v}_{1}^2+\tilde{v}_{2}^2\not\equiv 0$ and $\tilde{{v}}_3\not\equiv 0$, as well as $\mathrm{div}(\bar{\rho}\tilde{v})\not\equiv 0$ provided $\bar{\rho}'\geq 0$. In particular, setting $(\tilde{\rho},\tilde{\theta}):=-(\mathrm{div}(\bar{\rho}\tilde{v}),\bar{\rho}\bar{e}'\tilde{v}_3 +\bar{p}\mathrm{div}\tilde{v})/\Lambda$, the triple $(\tilde{\rho},\tilde{v},\tilde{\theta})\in H^3$ satisfies \eqref{0109}. In addition, $\tilde{\rho}\not\equiv 0$ provided $\bar{\rho}'\geq 0$. \end{pro} As a result of Proposition \ref{pro:nnn0203}, one immediately gets the following linear instability. \begin{thm}\label{thm:0101} Assume that $(\bar{\rho},\bar{e})$ satisfies (\ref{0102})--(\ref{0104}). Then the steady state $(\bar{\rho}, {0},\bar{e})$ of the linearized system (\ref{0106})--(\ref{0108}) is linearly unstable. That is, there exists an unstable solution $$({\varrho}, u,\theta):=e^{\Lambda t}(\tilde{\rho},\tilde{v},\tilde{\theta})$$ to \eqref{0106}--\eqref{0108}, such that $(\tilde{\rho},\tilde{v},\tilde{\theta})\in H^3$ and \begin{equation*} \|({u}_1,u_2)(t)\|_{L^2}\to\infty\mbox{ and } \|{u}_3(t)\|_{L^2}\to \infty\mbox{ as }t\to\infty , \end{equation*} where the constant growth rate $\Lambda$ is the same as in Proposition \ref{pro:nnn0203}. Moreover, $\tilde{\rho}\not\equiv 0$ provided $\bar{\rho}'\geq 0$. \end{thm} \section{Nonlinear energy estimates}\label{sec:0312} In this section, we derive some nonlinear energy estimates for the perturbed problem \eqref{0105}--\eqref{0107}, together with an estimate of Gronwall type in the $H^3$-norm, which will be used in the proof of Theorem \ref{thm:0102} in the next section. To this end, let $(\varrho,{u},\theta)$ be a solution of the perturbed problem \eqref{0105}--\eqref{0107}, such that \begin{equation}\label{enerdienf} \mathcal{E}(t):=\mathcal{E}(\varrho,u,\theta)(t):=\|(\varrho, u,\theta)(t)\|_{H^3} \end{equation} is sufficiently small (the smallness depends on the physical parameters in \eqref{0105}), and \begin{equation*}\label{}0<\underline{\rho} \leq (\varrho+\bar{\rho})(t, x)\leq \rho^{+}<\infty\mbox{ for any }t\geq 0,\ x\in \Omega, \end{equation*} where $\underline{\rho}$ and $\rho^{+}$ are two positive constants. We remark here that these assumptions will be used repeatedly in what follows. Moreover, we assume that the solution $(\varrho,{u},\theta)$ possesses enough regularity, so that the formal calculations below make sense. For simplicity, we only sketch the outline and omit the detailed calculations, in which we shall repeatedly use the Sobolev embedding theorem \cite[Subsection 1.3.5.8]{NASII04}, Young's, H\"older's and Poincar\'e's inequalities, and the following interpolation inequality \cite[Chapter 5]{ARAJJFF}: $$\|f\|_{H^j}\lesssim \|f\|_{L^2}^{\frac{1}{j+1}}\|f\|_{H^{j+1}}^{\frac{j}{j+1}} \leq C_\epsilon\|f\|_{L^2} +\epsilon\|f\|_{H^{j+1}}\mbox{ for any constant }\epsilon>0. $$ In addition, we shall use the following abbreviations throughout.
\begin{eqnarray} &&\mathcal{E}_0={\mathcal{E}}(\varrho_0,{u}_0,{N}_0),\nonumber\\ &&\frac{d}{dt}:=\partial_t+u\cdot\nabla \;\mbox{ denotes the material derivative},\nonumber\\ &&\label{masssimply} L^\varrho\equiv L^\varrho(\varrho, u):= \varrho_t+\bar{\rho}'{u}_3 + \bar{\rho}\mathrm{div} u= -\mathrm{div}(\varrho u) :=N^\mathbf{\varrho}(\varrho, u)\equiv N^\varrho, \\[1mm] &&\label{mometursim} \begin{array}{ll} {L}^u\equiv {L}^u(\varrho, u,\theta):= \bar{\rho} u_t +a\nabla(\bar{e} {\varrho} +\bar{\rho}\theta )-\mu \Delta u-\mu_0\nabla \mathrm{div} u+g\varrho e_3 \\ \qquad = -( \varrho+\bar{\rho}) u\cdot\nabla u- \varrho u_t -a\nabla(\varrho \theta) :={N}^u(\varrho,\theta, u)\equiv {N}^u, \end{array}\\[1mm] &&\label{masssimply2} \begin{array}{ll} L^\theta\equiv L^\theta(\varrho, u,\theta):=\theta_t +\bar{e}'{u}_3+a\bar{e}\mathrm{div} u= - u\cdot\nabla\theta-a\theta\mathrm{div} u \\[1mm] \qquad\quad+[{\mu}|\nabla u+\nabla ( u)^\mathrm{T}|^2/2+ \lambda(\mathrm{div} u)^2]/(\varrho+\bar{\rho}) :=N^\theta(\varrho,u,\theta)\equiv N^\theta, \end{array} \\ && \mathcal{R}(t):= \left\|\left(\varrho,\theta,u_t,\frac{\mathrm{d}}{\mathrm{d}t}\left(\bar{e}\varrho+\bar{\rho} \theta\right)\right) \right\|_{H^2}^2+\mathcal{E}(\|u\|_{H^3}+\|u\|_{H^4}^2+\mathcal{E}^2),\nonumber\end{eqnarray} and the symbol $a\lesssim b$ means that $a\leq Cb$ for some constant $C>0$ which may depend on some physical parameters in the perturbed equations \eqref{0105}. Now, we start to establish a series of lemmas which imply a priori estimates for the perturbed density, velocity and temperature. Firstly, from the following identities \begin{eqnarray*} && \int_0^t\int D^k L^{\varrho}D^k\varrho\mm{d} x\mm{d}\tau=\int_0^t\int D^k N^\varrho D^k \varrho\mm{d} x\mm{d}\tau, \\ && \int_0^t\int D^k L^{\theta}D^k\theta\mm{d} x\mm{d}\tau=\int_0^t\int D^k N^\theta D^k \theta\mm{d} x\mm{d}\tau\quad\mbox{ for } 0\leq k\leq 3, \end{eqnarray*} the following estimate on the perturbed density and temperature follows. \begin{lem}\label{lem:0301} For $0\leq k\leq 3$, it holds that $$\| (\varrho,\theta)\|_{H^k}^2\lesssim \|(\varrho,\theta)(0)\|_{H^k}^2+\int_0^t\mathcal{E}(\|u\|_{H^{k+1}} +\mathcal{E}^2)\mathrm{d}\tau.$$ \end{lem} Secondly, we control the perturbed velocity. Since the viscosity term of \eqref{mometursim} defines a strongly elliptic operator on $u$, we have for $u\in H^k\cap H_0^1$ ($1\leq k\leq 3$) that \begin{equation}\label{ellioper} \|u\|_{H^k}^2\lesssim \|\mu \Delta u+\mu_0\nabla \mathrm{div} u\|_{H^{k-2}}^2. \end{equation} Thus, applying \eqref{ellioper} to the system \begin{equation}\label{ellipicequation} -\mu \Delta u-\mu_0\nabla \mathrm{div} u ={N}^u-\bar{\rho} u_t-g\varrho e_3-a\nabla(\bar{e} {\varrho} +\bar{\rho}\theta ), \end{equation} one concludes that \begin{lem}\label{lem:0303} It holds that \begin{equation*}\label{}\begin{aligned}\|u\|_{H^{3}}^2 \lesssim \| u_t\|_{H^{1}}^2+ \|(\varrho,\theta)\|_{H^2}^2+\mathcal{E}^4. \end{aligned}\end{equation*} \end{lem} Thirdly, we bound the time-derivative of the perturbed velocity. \begin{lem}\label{lem:0304} It holds that \begin{eqnarray} && \label{detetim1} \|(\varrho,\theta)_t\|_{H^k}^2\lesssim\| u\|_{H^{k+1}}^2+\mathcal{E}^4 \lesssim \mathcal{E}^2\quad \mbox{ for }0\leq k\leq 2, \\ && \label{momentum2} \| u_t(t)\|_{H^1}^2+\int_0^t\|u_{tt}\|^2_{L^2} \mathrm{d}\tau \lesssim \|Du_t(0)\|_{L^{2}}^2+ \int_0^t( \|u\|_{H^{2}}^2 +\mathcal{E}^4)\mathrm{d}\tau , \\ && \label{utt} \| u_t(t)\|_{H^{2}}^2\lesssim \| u_{tt}\|_{L^2}^2+ \|u\|_{H^2}^2+\mathcal{E}^4. 
\end{eqnarray} \end{lem} \begin{pf} The inequality (\ref{detetim1}) follows directly from \eqref{masssimply} and \eqref{masssimply2}. By \eqref{mometursim}, we see that \begin{equation}\label{detetim2}\|u_t\|_{H^1}^2\lesssim \|(\varrho,\theta)\|_{H^2}^2+\| u\|_{H^{3}}^2+\mathcal{E}^4\lesssim \mathcal{E}^2.\end{equation} Hence, using \eqref{detetim1} with $k=1$, \eqref{detetim2} and Poincar\'e's inequality, we get \eqref{momentum2} from $$\int_0^t\int {L}^{ u}_t\cdot u_{tt}\mm{d} x\mm{d}\tau =\int_0^t\int {N}^{ u}_t\cdot u_{tt}\mm{d} x\mm{d}\tau. $$ Finally, applying \eqref{ellioper} to $\partial_t$\eqref{ellipicequation} and making use of \eqref{detetim1} and \eqref{detetim2}, we obtain \eqref{utt}. \hfill$\Box$ \end{pf} Fourthly, we establish the interior estimates of higher-order mass derivatives of $\bar{e}\varrho+\bar{\rho}\theta$. Let $\chi_0$ be an arbitrary but fixed function in $C_0^\infty(\Omega)$. Then, recalling the equation $$\begin{aligned} &\int_0^t\int\left( \frac{a\chi_0^2\bar{e}}{\bar{\rho}}D^kL^\varrho D^k\varrho+ \chi_0^2D^k {L}^{u}\cdot D^k u+\frac{\chi_0^2\bar{\rho}}{\bar{e}}D^kL^\theta D^k\theta\right) \mm{d} x\mm{d}\tau\\ &=\int_0^t\int\left(\frac{\chi_0^2\bar{e}}{\bar{\rho}}D^kN^\varrho D^k\varrho+ \chi_0^2D^k {N}^{ u}\cdot D^k u+\frac{\chi_0^2\bar{\rho}}{\bar{e}}D^kN^\theta D^k\theta \right)\mm{d} x\mm{d}\tau , \end{aligned}$$ one obtains \begin{lem}\label{lem:0305} For $1\leq k\leq 3$, it holds that $$\begin{aligned} &\|\chi_0D^k(\varrho, u,\theta)(t)\|_{L^2}^2+\int_0^t\left(\|\chi_0D^{k+1} u\|_{L^2}^2 +\left\|\chi_0D^k\frac{\mathrm{d}}{\mathrm{d}t}\left(\bar{e}\varrho+\bar{\rho} \theta\right)\right\|_{L^2}^2\right)\mathrm{d}\tau \\ &\lesssim \mathcal{E}_0^2+\int_0^t \mathcal{R}\mathrm{d}\tau . \end{aligned}$$ \end{lem} Fifthly, let us establish the estimates near the boundary. Similarly to that in \cite{MANTIC481,MANTTP351}, we choose a finite number of bounded open sets $\{O_j\}_{j=1}^N$ in $\mathbb{R}^3$, such that $\cup_{j=1}^NO_j\supset \partial\Omega$. In each open set $O_j$ we choose the local coordinates $(\psi,\phi,r)$ as follows: \begin{enumerate} \item[(1)] The surface $O_j\cap \partial\Omega$ is the image of a smooth vector function $y=(y^1,y^2,y^3)(\psi,\phi)$ (e.g., take the local geodesic polar coordinate), satisfying $|y_\psi|=1$, $y_\psi\cdot y_\phi=0$, and $|y_\phi|\geq \delta>0$, where $\delta$ is some positive constant independent of $1\leq j\leq N$. \item[(2)] Any $ x\in O_j$ is represented by \begin{equation}\label{transform}{x}^i={x}^i(\psi,\phi,r)= rn^i(\psi,\phi)+y^i(\psi,\phi),\end{equation} where $n=(n^1,n^2,n^3)(\psi,\phi)$ represents the internal unit normal vector at the point of the surface coordinated $(\psi,\phi)$. \end{enumerate} For the simplicity of presentation, we omit the subscript $j$ in what follows. For $k=1,2$, we define the unit vector $\tilde{e}_k=(\tilde{e}_k^1,\tilde{e}_k^2,\tilde{e}_k^3)$ by $\tilde{e}_1^i=y_\psi^i$, $\tilde{e}_2^i=y_\phi^i/|y_\phi|$. 
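For orientation, one may keep in mind the model case in which the portion $O_j\cap\partial\Omega$ is flat, say contained in the plane $\{x_3=0\}$ with $\Omega$ locally on the side $x_3>0$: there one may take $y(\psi,\phi)=(\psi,\phi,0)$ and $n=(0,0,1)$, so that \eqref{transform} reduces to $x=(\psi,\phi,r)$ and $\tilde{e}_1$, $\tilde{e}_2$ are simply the first two Cartesian unit vectors. The general case differs from this flat model case only by curvature terms.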
Then Frenet-Serret's formula implies that there are smooth functions $\alpha,\beta,\gamma,\alpha',\beta',\gamma'$ of $(\psi,\phi)$ satisfying \begin{eqnarray*} &&\frac{\partial}{\partial \psi}\left(\begin{array}{c} \tilde{e}_1^i \\ \tilde{e}_2^i \\ n^i \end{array}\right)=\left( \begin{array}{ccc} 0 & -\gamma & -\alpha \\ \gamma & 0 & -\beta \\ \alpha & \beta & 0 \end{array} \right)\left(\begin{array}{c} \tilde{e}_1^i \\ \tilde{e}_2^i \\ n^i \end{array}\right), \\[2mm] && \frac{\partial}{\partial \phi}\left(\begin{array}{c} \tilde{e}_1^i \\ \tilde{e}_2^i \\ n^i \end{array}\right)=\left(\begin{array}{ccc} 0 & -\gamma' & -\alpha' \\ \gamma' & 0 & -\beta' \\ \alpha' & \beta' & 0 \end{array}\right)\left(\begin{array}{c} \tilde{e}_1^i \\ \tilde{e}_2^i \\ n^i \end{array}\right). \end{eqnarray*} An elementary calculation shows that the Jacobian $J$ of the transform \eqref{transform} is \begin{equation}\label{jajobit} J=| x_\psi\times x_\phi|=|y_\phi|+(\alpha|y_\phi|+\beta')r +(\alpha\beta'-\beta\alpha')r^2.\end{equation} By \eqref{jajobit}, we find the transform \eqref{transform} is regular by choosing $r$ small if needed. Therefore, the functions $(\psi,\phi,r)_{x_i}( x)$ make sense and can be expressed by, using a straightforward calculation, \begin{equation}\label{relations}\displaystyle \left\{\begin{array}{l} \displaystyle \psi_{x_i}=\frac{1}{J}( x_\phi\times x_r)_i=\frac{1}{J}(A\tilde{e}_1^i+Be_2^i), \\[0.8em] \displaystyle\phi_{x_i}=\frac{1}{J}( x_r\times x_\phi)_i=\frac{1}{J}(C\tilde{e}_1^i+\tilde{D} \tilde{e}_2^i), \\[0.8em] \displaystyle r_{x_i}=\frac{1}{J} (x_\psi\times x_\phi)_i=n_i, \end{array}\right. \end{equation} where $A=|y_\phi|+\beta'r$, $B=-r\alpha'$, $C=-\beta r$, $\tilde{D}=1+\alpha r$ and $J=A\tilde{D}-BC>0$. Hence, \eqref{relations} gives $$\frac{\partial}{\partial x_i}=\frac{1}{J}(A\tilde{e}_1^i+B \tilde{e}_2^i)\frac{\partial}{\partial \psi}+\frac{1}{J}(C \tilde{e}_1^i+\tilde{D} \tilde{e}_2^i)\frac{\partial }{\partial \phi}+n^i\frac{\partial}{\partial r}.$$ Thus, in each $O_j$, we can rewrite the equations \eqref{masssimply}--\eqref{masssimply2} in the local coordinates $(\psi,\phi,r)$ as follows: $$\left\{\begin{array}{l} \displaystyle \tilde{L}^\varrho:={\varrho}_t+\mbox{ zero order terms of }u_3+\frac{\bar{\rho}}{J}[(A\tilde{e}_1+B\tilde{e}_2)\cdot u_\psi+ (C\tilde{e}_1+\tilde{D}\tilde{e}_2)\cdot u_\phi+Jn\cdot u_r]=N^\varrho \\[0.8em] \displaystyle \tilde{L}^u:=\bar{\rho}{u}_t-\frac{\mu}{J^2}[(A^2+ B^2)u_{\psi\psi}+2(AC+B\tilde{D})u_{\psi\phi}+ (C^2+\tilde{D}^2)u_{\phi\phi}+J^2u_{rr} ] \\[0.8em] \displaystyle \quad +\mbox{ less two order terms of }u+g\varrho e_3+ \frac{1}{J}(A\tilde{e}_1+B\tilde{e}_2)\left[\frac{\mu_0}{\bar{\rho}\bar{e}+\bar{p}} \frac{d}{dt}(\bar{e}\rho+\bar{\rho}\theta)+a\bar{e}\varrho+a\bar{\rho}\theta\right]_\psi \\[0.8em] \displaystyle \quad + \frac{1}{J}(C\tilde{e}_1+\tilde{D}\tilde{e}_2)\left[\frac{\mu_0}{\bar{\rho}\bar{e}+\bar{p}} \frac{d}{dt}(\bar{e}\rho+\bar{\rho}\theta)+a\bar{e}\varrho+ a\bar{\rho}\theta\right]_\phi +n\left[\frac{\mu_0}{\bar{\rho}\bar{e}+\bar{p}}\frac{d}{dt}(\bar{e}\rho+\bar{\rho}\theta)+a\bar{e}\varrho+ a\bar{\rho}\theta\right]_r \\[0.8em] \displaystyle\quad =N^u+\mu_0\nabla \{[\bar{\rho}(N^\theta +u\cdot \nabla \theta)-\bar{e}\varrho\mathrm{div}u+(\bar{e}'\varrho+\bar{\rho}'\theta )u_3 ]/ (\bar{\rho}\bar{e}+\bar{p})\}:=\tilde{N}^u,\\[0.8em] \displaystyle \tilde{L}^\theta:= \theta_t+\mbox{ zero order terms of }u_3+\frac{a\bar{e}}{J}[(A\tilde{e}_1+B\tilde{e}_2)\cdot u_\psi+ (C\tilde{e}_1+\tilde{D}\tilde{e}_2)\cdot 
u_\phi+Jn\cdot u_r]=N^\theta, \end{array}\right.$$ where we note that $J^2=(A^2+B^2)(C^2+\tilde{D}^2)-(AC+B\tilde{D})^2$. Let $\chi_j$ be an arbitrary but fixed function in $C_0^\infty(O_j)$. Estimating the integral $$\begin{aligned} &\int_0^t\int\left( \frac{a\chi_j^2\bar{e}}{\bar{\rho}}D^k_{\psi,\phi}\tilde{L}^\varrho D^k_{\psi,\phi}\varrho+ \chi_j^2D^k_{\psi,\phi} \tilde{L}^{ u}\cdot D^k_{\psi,\phi} u+\frac{\chi_j^2\bar{\rho}}{\bar{e}}D^k_{\psi,\phi}\tilde{L}^\theta D^k_{\psi,\phi}\theta \right)\mm{d} x\mm{d}\tau\\ &=\int_0^t\int\left(\frac{a\chi_j^2\bar{e}}{\bar{\rho}}D^k_{\psi,\phi}N^\varrho D^k_{\psi,\phi}\varrho+ \chi_j^2D^k_{\psi,\phi} \tilde{N}^{ u}\cdot D^k_{\psi,\phi} u+\frac{\chi_j^2\bar{\rho}}{\bar{e}}D^k_{\psi,\phi}N^\theta D^k_{\psi,\phi}\theta \right)\mm{d} x\mm{d}\tau\end{aligned}$$ in a way similar to that in Lemma \ref{lem:0305}, we obtain the following estimates on tangential derivatives: \begin{lem}\label{lem:0306} For $1\leq k\leq 3$, it holds that $$\begin{aligned} &\|\chi_jD^k_{\psi,\phi}(\varrho, u,\theta)(t)\|_{L^2}^2+\int_0^t\left(\|\chi_jD^{k}_{\psi,\phi}D u\|_{L^2}^2 +\left\|\chi_jD^k_{\psi,\phi}\frac{\mathrm{d}}{\mathrm{d}t}\left(\bar{e}\rho+\bar{\rho}\theta \right)\right\|_{L^2}^2\right)\mathrm{d}\tau\\ & \lesssim \mathcal{E}_0^2 +\int_0^t \mathcal{R}\mathrm{d}\tau , \end{aligned}$$ where $\|\chi_jD^k_{\psi,\phi}f\|_{L^2}^2 :=\sum_{k_1+k_2=k} \|\chi_j\partial_{\psi}^{k_1}\partial_{\phi}^{k_2}f\|_{L^2}^2$. \end{lem} In order to bound the normal derivatives, we use the equations $D_r(\bar{e}\tilde{L}^\varrho+\bar{\rho}\tilde{L}^\theta- \bar{e}{N}^\varrho-\bar{\rho}{N}^\theta)=0$ and $n\cdot(\tilde{L}^u-\tilde{N}^u)=0$, which have the form $$\begin{aligned} &\left[\frac{d}{dt}\left(\bar{e}\rho+\bar{\rho}\theta\right)\right]_r+\frac{ \bar{\rho}\bar{e}+\bar{p}}{J}[(A\tilde{e}_1+B\tilde{e}_2)\cdot u_{r\psi}+ (C\tilde{e}_1+\tilde{D}\tilde{e}_2)\cdot u_{r\phi}+Jn\cdot u_{rr}]\\ &\quad +\mbox{less than second order terms of }u =[\bar{\rho}(N^\theta+u \cdot\nabla \theta)-\bar{e}\varrho\mathrm{div}u +(\bar{e}'\varrho+\bar{\rho}'\theta )u_3]_r \end{aligned}$$ and \begin{equation}\label{moetnromal}\begin{aligned} &\displaystyle \bar{\rho}n\cdot{u}_t- \frac{\mu n}{J^2}\cdot[(A^2+B^2)u_{\psi\psi}+2(AC+B\tilde{D})u_{\psi\phi}+ (C^2+\tilde{D}^2)u_{\phi\phi}+J^2u_{rr} ] \\ & \displaystyle +\mbox{less than second order terms of }u +g\varrho e_3\cdot n +\left[\frac{\mu_0}{\bar{\rho}\bar{e}+\bar{p}}\frac{d}{dt}(\bar{e}\rho+\bar{\rho}\theta)+ a\bar{e}\varrho+ a\bar{\rho}\theta\right]_r \\ & = n\cdot\tilde{N}^u. \end{aligned} \end{equation} Eliminating $\mu n\cdot u_{rr}$ from \eqref{moetnromal} by means of the first identity, we get \begin{eqnarray} && \hspace{-12mm} \displaystyle \left[\frac{(\mu+\mu_0)}{\bar{\rho}\bar{e}+\bar{p}}\frac{d}{dt}(\bar{e}\rho+\bar{\rho}\theta)+ a\bar{e}\varrho+a\bar{\rho}\theta\right]_r =- \bar{\rho}n\cdot{u}_t+ \frac{\mu n}{J^2}[(A^2+B^2)u_{\psi\psi}+2(AC+B\tilde{D})u_{\psi\phi} \nonumber \\ && \hspace{-8mm} + (C^2+\tilde{D}^2)u_{\phi\phi} ]-\frac{\mu}{J}[(A\tilde{e}_1+B\tilde{e}_2)\cdot u_{r\psi}+ (C\tilde{e}_1+\tilde{D}\tilde{e}_2)\cdot u_{r\phi}] +\mbox{less than second} \label{moetnromal2} \\ && \hspace{-10mm} \mbox{order terms of }u \displaystyle +g\varrho e_3\cdot n =n\cdot\tilde{N}^u +\frac{\mu}{\bar{\rho}\bar{e}+\bar{p}}[\bar{\rho}(N^\theta+u \cdot\nabla \theta)-\bar{e}\varrho\mathrm{div}u +(\bar{e}'\varrho+\bar{\rho}'\theta )u_3]_r.
\nonumber \end{eqnarray} If we apply $D_{\psi,\phi}^kD_r^l$ ($k+l=0$, $1$, $2$) to \eqref{moetnromal2}, multiply then by $\chi_j^2 D_{\psi,\phi}^kD_r^l[d(\bar{e}\rho +\bar{\rho}\theta)/dt]_r$ and $\chi_j^2 D_{\psi,\phi}^kD_r^l(\bar{e}\varrho +\bar{\rho}\theta)_r$ respectively, and integrate them, we can bound the derivatives in the normal direction to the boundary as follows. \begin{lem}\label{lem:0307} For $0\leq k+l\leq 2$, it holds that $$\begin{aligned} &\|\chi_j D_{\psi,\phi}^k D_r^{l+1}(\bar{e}\rho+\bar{\rho}\theta)(t)\|_{L^2}^2\\ &\quad +\int_0^t\left\{\|\chi_j D_{\psi,\phi}^k D_r^{l+1}(\bar{e}\rho+\bar{\rho}\theta)\|_{L^2}^2+\left\|\chi_j D_{\psi,\phi}^k D_r^{l+1}\left[\frac{d}{dt}(\bar{e}\rho+\bar{\rho}\theta)\right]\right\|_{L^2}^2\right\}\mathrm{d}\tau \\ &\lesssim \|(\varrho_0,\theta_0)\|_{H^{3}}^2+\int_0^t(\|D_{\psi,\phi}^{k+1}D_r^lDu\|_{L^2}^2+ \mathcal{R})\mathrm{d}\tau. \end{aligned}$$ \end{lem} Finally, we introduce the following lemma on the stationary Stokes equations to get the estimates on the tangential derivatives of both $u$ and $\bar{e}\varrho+\bar{\rho}\theta$. \begin{lem}\label{lem:0302} Consider the problem $$\left\{\begin{array}{l} -\mu\Delta u+a\nabla \sigma=g, \\ \mathrm{div}u=f, \\ u|_{\partial \Omega}=0, \end{array}\right.$$ where $f\in H^{k+1}$ and $g\in H^k$ ($k\geq 0$). Then the above problem has a solution $(\sigma,u)\in H^{k+1}\times H^{k+2}\cap H_0^{1}$ which is unique modulo a constant of integration for $\sigma$. Moreover, this solution satisfies $$\|u\|_{H^{k+2}}^2+\|D\sigma\|_{H^k}^2\lesssim \|f\|_{H^{k+1}}^2+\|g\|_{H^k}^2.$$ \end{lem} Now, taking $\chi_j D_{\psi,\phi}^k$ ($k=1,2$) to the Stokes problem: \begin{equation*}\label{stokesu}\left\{\begin{array}{ll} -\mu \Delta u+a\nabla(\bar{e} {\varrho} +\bar{\rho}\theta ) ={N}^u-\bar{\rho} u_t-g\varrho e_3+\mu_0\nabla \mathrm{div} u, \\ \displaystyle ({\bar{\rho}\bar{e}+\bar{p}})\mathrm{div} u =\bar{\rho}(N^\theta+u\cdot\nabla \theta)-{\bar{e}\varrho\mathrm{div}u+(\bar{e}'\varrho+\bar{\rho}'\theta)u_3-\frac{d}{dt}(\bar{e}\varrho+\bar{\rho}\theta)}-(\bar{\rho}\bar{e})'{u}_3, \\ u|_{\partial\Omega}=0, \end{array}\right.\end{equation*} we obtain \begin{equation*}\label{}\left\{\begin{array}{ll} -\mu \Delta (\chi_jD^k_{\psi,\phi}u)+a\nabla[\chi_jD^k_{\psi,\phi}(\bar{e} {\varrho} +\bar{\rho}\theta )] =\mbox{less than fourth order of }u \\ \qquad \qquad \qquad +\mbox{ less than third order of }(\varrho,\theta) +\chi_jD^k_{\psi,\phi}({N}^u-\bar{\rho} u_t-g\varrho e_3+\mu_0\nabla \mathrm{div} u),\\[2mm] \displaystyle \mathrm{div} (\chi_j D^k_{\psi,\phi}u) =\mbox{less than third order of }u+\chi_jD^k_{\psi,\phi} \left\{\left[{\bar{\rho}(N^\theta+u\cdot\nabla \theta)}\right.\right. \\ \qquad \qquad \qquad \qquad \left.\left.{{-\bar{e}\varrho\,\mathrm{div}u+(\bar{e}'\varrho+\bar{\rho}'\theta)u_3- \frac{d}{dt}(\bar{e}\varrho+\bar{\rho}\theta)}-(\bar{\rho}\bar{e})'{u}_3}\right] ({\bar{\rho}\bar{e}+\bar{p}})^{-1}\right\}, \\[1mm] \chi_j D_{\psi,\phi}u|_{\partial\Omega}=0, \end{array}\right.\end{equation*} Applying Lemma \ref{lem:0302} to the above problem, we obtain \begin{lem}\label{lem:0308} For $0\leq l+k\leq 2$, we have $$\begin{aligned} \|\chi_j D^{2+l}D_{\psi,\phi}^k u\|_{L^2}^2+ \|\chi_j D^{1+l}D_{\psi,\phi}^k (\bar{e}\rho+\bar{\rho}\theta)\|_{L^2}^2 \lesssim \left\|\chi_j D^{1+l} D_{\psi,\phi}^{k}\frac{d}{dt} (\bar{e}\rho+\bar{\rho}\theta)\right\|_{L^2}^2 +\mathcal{R}(t). \end{aligned}$$ \end{lem} Now, we are able to establish the desired energy estimate. 
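Since the cut-off functions $\chi_0\in C_0^\infty(\Omega)$ and $\chi_j\in C_0^\infty(O_j)$ are arbitrary and $\Omega\cup\bigcup_{j=1}^N O_j\supset\overline{\Omega}$, we may, as is standard in this type of localization argument, fix them so that $\chi_0^2+\sum_{j=1}^N\chi_j^2\geq 1$ on $\overline{\Omega}$; with such a choice the localized bounds of Lemmas \ref{lem:0305}--\ref{lem:0308} can be summed into bounds for the full norms.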
Putting Lemmas \ref{lem:0306}--\ref{lem:0308} together, we conclude that $$ \sum_{k=0}^3\int_0^t\left\{\|\chi_j D^{k+1} u\|_{L^2}^2+\left\|\chi_j D^k\left[\frac{d}{dt}(\bar{e}\rho+\bar{\rho}\theta)\right]\right\|_{L^2}^2\right\}\mathrm{d}\tau \lesssim \mathcal{E}_0^2 +\int_0^t\mathcal{R}\mathrm{d}\tau, $$ which, together with Lemma \ref{lem:0305}, yields that \begin{equation*}\label{}\begin{aligned} \int_0^t\left\{\| u\|_{H^4}^2+\left\| \frac{d}{dt}(\bar{e}\rho+\bar{\rho}\theta)\right\|_{H^3}^2\right\}\mathrm{d}\tau \lesssim \mathcal{E}_0^2 +\int_0^t\mathcal{R}\mathrm{d}\tau. \end{aligned}\end{equation*} Noting that, by Lemma \ref{lem:0304}, the interpolation inequality (for $j=4$) and Young's inequality, one has $$\begin{aligned}\int_0^t\mathcal{R}\mathrm{d}\tau\lesssim \mathcal{E}_0^2+ \int_0^t[ \|(\varrho,\theta)\|_{H^2}^2 +\mathcal{E}(\|u\|_{H^3}+\mathcal{E}^2)]\mathrm{d}\tau \end{aligned},$$ whence \begin{equation} \label{asnoenet} \begin{aligned} \int_0^t\left\{\|u\|_{H^4}^2+\left\| \frac{d}{dt}(\bar{e}\rho+\bar{\rho}\theta)\right\|_{H^3}^2\right\}\mathrm{d}\tau \lesssim \mathcal{E}_0^2+ \int_0^t\big[ \|(\varrho,\theta)\|_{H^2}^2 +\mathcal{E}(\|u\|_{H^3}+\mathcal{E}^2)\big] \mathrm{d}\tau . \end{aligned} \end{equation} On the other hand, by Lemmas \ref{lem:0301}--\ref{lem:0303}, and \eqref{momentum2} in Lemma \ref{lem:0304}, we find that $$\begin{aligned}\mathcal{E}^2(t)+\|(\varrho,\theta)_t\|_{H^2}^2+\|u_t(t)\|_{H^1}^2+\int_0^t\|u_{tt}\|^2_{L^2} \mathrm{d}\tau\lesssim \mathcal{E}^2_0+\int_0^t[\|u\|_{H^2}^2+ \mathcal{E}(\|u\|_{H^{4}} +\mathcal{E}^2)]\mathrm{d} \tau . \end{aligned}$$ Consequently, in view of the above inequality and (\ref{asnoenet}), and the interpolation inequality, we obtain \begin{equation}\label{ernygfor}\begin{aligned} &\mathcal{E}^2(t)+\|(\varrho,\theta)_t\|_{H^2}^2+\|u_t(t)\|_{H^1}^2+\int_0^t\|u_{tt}\|^2_{L^2} \mathrm{d}\tau\\ & \quad \lesssim \mathcal{E}_0^2 + \int_0^t\big[ C_\epsilon\|(\varrho,u,\theta)\|_{L^2}^2 +\mathcal{E}^2(\epsilon+C_\epsilon\mathcal{E})\big] \mathrm{d}\tau, \end{aligned} \end{equation} where the constant $C_\epsilon$ depends on $\epsilon$ and some physical parameters in \eqref{0105}. In particular, we shall take $\epsilon=\Lambda$ later on. Now, let us recall that the local existence and uniqueness of solutions to the perturbed equations \eqref{0105} have been established in \cite[Remark 6.1]{kawashima1984systems} for $\bar{\rho}$ and $\bar{e}$ being constants, while the global existence and uniqueness of small solutions to the perturbed equations \eqref{0105} with heat conductivity have been shown in \cite{MANTIJ321} for $(\bar{\rho},\bar{e})$ being close to a constant state. By a slight modification in the proof of the local existence in \cite{kawashima1984systems,MANTIJ321}, one can easily obtain the existence and uniqueness of a local solution $(\rho,v,\theta)\in C^0([0,T],H^3)$ to the perturbed problem (\ref{0105})--(\ref{0107}) for some $T>0$. Moreover, this local solution satisfies the above \emph{a priori} estimate (\ref{ernygfor}). Therefore, we arrive at the following conclusion: \begin{pro} \label{pro:0401} Assume that $(\bar{\rho},\bar{e})$ satisfies \eqref{0102}--\eqref{0104}. 
For any given initial data $(\varrho_0, u_0,\theta_0)\in H^3$ satisfying the compatibility condition and $$\inf_{x \in\Omega}\{\varrho_0 +\bar{\rho},\ {\theta}_0+\bar{e}\}>0,$$ there exist a $T>0$ and a unique solution $(\varrho, u,\theta)\in C^0([0,T],H^3)$ to the perturbed problem \eqref{0105}--\eqref{0107} satisfying $$\inf_{(0,T)\times\Omega}\{\varrho+\bar{\rho},\theta+\bar{e}\}>0.$$ Moreover, there is a sufficiently small constant ${\delta}^0_1\in (0,1]$, such that if $\mathcal{E}(t)\leq {\delta}^0_1$ on $[0,T]$, then the solution $(\varrho, u,\theta)$ satisfies \begin{equation}\label{energyinequality} \begin{aligned} &\mathcal{E}^2(t)+\|(\varrho,\theta)_t(t)\|_{H^2}^2+\|u_t(t)\|_{H^1}^2+\int_0^t\|u_{tt}(\tau)\|^2_{L^2} \mathrm{d}\tau\\ &\quad \leq C\mathcal{E}_0^2+ \int_0^t(C\|(\varrho, u,\theta)(\tau)\|_{L^2}^2 +\Lambda\mathcal{E}^2(\tau)) \mathrm{d}\tau , \end{aligned}\end{equation} where the constant $C$ only depends on $ {\delta}_1^0$, $\Lambda$, $\Omega$ and the known physical parameters in \eqref{0105}. \end{pro} \section{Nonlinear instability}\label{sec:04} Now we are in a position to prove Theorem \ref{thm:0102} by adopting and modifying the ideas in \cite{JFJSO2014,JJTIIA,GYHCSDDC}. In view of Theorem \ref{thm:0101}, we can construct a (linear) solution \begin{equation}\label{0501} \left(\varrho^\mathrm{l}, { u}^\mathrm{l},\theta^\mathrm{l}\right)=e^{{\Lambda t}} \left(\bar{\varrho}_0, \bar{ u}_0,\bar{\theta}_0\right) \end{equation} to the linearized problem \eqref{0106}--\eqref{0108} with the initial data $(\bar{\varrho}_0,\bar{ u}_0,\bar{\theta}_0)\in H^3 $. Furthermore, this solution satisfies \begin{equation}\label{wangweiwe16} \|({\bar{u}}_{01},{\bar{u}}_{02})\|_{L^2}\|{\bar{u}}_{03}\|_{L^2}>0, \end{equation} where $\bar{u}_{0i}$ stands for the $i$-th component of $\bar{ u}_0$ for $i=1$, $2$ and $3$. In what follows, $C_1,\cdots ,C_7$ will denote generic constants that may depend on $(\bar{\varrho}_0, \bar{ u}_0,\bar{\theta}_0)$, $ {\delta}_1^0$, $\Lambda$, $\Omega$ and the known physical parameters in \eqref{0105}, but are independent of $\delta$. Obviously, we cannot directly use the initial data of the linearized equations \eqref{0106}--\eqref{0108} as initial data for the associated nonlinear problem, since the linearized and nonlinear equations enjoy different compatibility conditions at the boundary. A similar problem also arises in \cite{JJTIIA}, where Jang and Tice studied the instability of the spherically symmetric Navier-Stokes-Poisson equations. To get around this obstacle, Jang and Tice used the implicit function theorem to produce a curve of initial data that satisfy the compatibility conditions and are close to the linear growing modes. Since our problem is set in a higher-dimensional domain, we instead use elliptic theory to construct initial data for the nonlinear problem which are close to the linear growing modes. \begin{lem}\label{lem:modfied} Let $(\bar{\varrho}_0,\bar{ u}_0, \bar{ {\theta}}_0)$ be the same as in \eqref{0501}.
Then there exists a $ {\delta}^0_2\in (0,1)$ depending on $(\bar{\varrho}_0,\bar{ u}_0, \bar{ {\theta}}_0)$, such that for any $\delta\in (0, {\delta}^0_2)$, there is a $u_\mathrm{r}$ which may depend on $\delta$ and enjoys the following properties: \begin{enumerate} \item[(1)] The modified initial data \begin{equation}\label{mmmode}( {\varrho}_0^\delta,{ u}_0^\delta,{{\theta}}_0^\delta ) =\delta (\bar{\varrho}_0,\bar{ u}_0, \bar{ {\theta}}_0) + \delta^2( \bar{\varrho}_0,{ u}_\mathrm{r}, \bar{ {\theta}}_0) \end{equation} satisfy ${ u}_0^\delta|_{\partial\Omega}= 0$ and the compatibility condition: $$\big\{(\varrho_0^\delta+\bar{\rho}){ u}_0^\delta\cdot\nabla {u}_0^\delta+a\nabla [{({\varrho}_0^\delta +\bar{\rho})(\theta_0^\delta+\bar{e})} -{\bar{\rho}\bar{e}}]-\mu\Delta{ u}_0^\delta -\mu_0\nabla\mathrm{div} u_0^\delta+g\varrho_0^\delta e_3\big\}|_{\partial\Omega}=0. $$ \item[(2)] $( {\varrho}_\mathrm{r} ,{ u}_\mathrm{r}, {{\theta}}_\mathrm{r} )$ satisfies the following estimate: $$ \| { u}_\mathrm{r}\|_{H^{3}} \leq C_1, $$ where the constant $C_1$ depends on $\|(\bar{\varrho}_0,\bar{ u}_0, \bar{ {\theta}}_0)\|_{H^3}$ and other physical parameters, but is independent of $\delta$. \end{enumerate} \end{lem} \begin{pf} Notice that $(\bar{\varrho}_0,\bar{ u}_0, \bar{ {\theta}}_0)$ satisfies $$\bar{ u}_0|_{\partial\Omega}= 0,\quad [a\nabla(\bar{e} \bar{\varrho}_0 +\bar{\rho}\bar{\theta}_0)-\mu\Delta\bar{ u}_0-\mu_0\nabla \mathrm{div}\bar{ u}_0+g\bar{\varrho}_0 e_3]|_{\partial\Omega}=0.$$ Hence, if the modified initial data satisfy \eqref{mmmode}, then we expect ${u}_\mathrm{r}$ to satisfy the following problem: \begin{equation} \left\{\begin{array}{l} \mu\Delta{ u}_\mathrm{r} +\mu_0\nabla\mathrm{div} u_\mathrm{r} -\delta^2\varrho_0^{**}{ u}_\mathrm{r} \cdot\nabla {u}_\mathrm{r}-\delta\varrho_0^{**}(\bar{ u}_0\cdot\nabla {u}_\mathrm{r}+{ u}_\mathrm{r}\cdot\nabla\bar{ u}_0) \\ \quad=a\nabla(\bar{e} \bar{\varrho}_0 +\bar{\rho}\bar{\theta}_0)+g\bar{\varrho}_0 e_3+ \varrho_0^{**}\bar{ u}_0\cdot\nabla \bar{ u}_0-a\nabla ({\varrho}_0^*{\theta}_0^*):=F(\bar{\varrho}_0,\bar{u}_0,\bar{\theta}_0), \\[1mm] u_\mathrm{r}|_{\partial\Omega} = 0 \end{array} \right. \label{js2} \end{equation} where $\varrho_0^{*}:=(1 +\delta )\bar{\varrho}_0$, $\theta_0^{*}=(1 +\delta )\bar{\theta}_0$ and $\varrho_0^{**}:=(\varrho_0^\delta+\bar{\rho})=(\delta +\delta^2 )\bar{\varrho}_0+\bar{\rho}$. Thus the modified initial data naturally satisfy the compatibility condition. Next we shall look for a solution $u_\mathrm{r}$ to the boundary problem (\ref{js2}) when $\delta$ is sufficiently small. We begin with the linearization of (\ref{js2}) which reads as \begin{equation}\label{elliequation} \begin{aligned}&\mu\Delta{ u}_\mathrm{r} +\mu_0\nabla\mathrm{div} u_\mathrm{r} =F(\bar{\varrho}_0,\bar{ u}_0,\bar{\theta}_0)+\delta^2\varrho_0^{**}{v} \cdot\nabla {v}+\delta\varrho_0^{**}(\bar{ u}_0\cdot\nabla {v}+{v}\cdot\nabla\bar{ u}_0) \end{aligned} \end{equation} with boundary condition \begin{equation}\label{boundery0} { u}_\mathrm{r}|_{\Omega}= 0. 
\end{equation} Let $v\in H^{3}$, then it follows from the elliptic theory that there is a solution ${u}_\mathrm{r}$ of \eqref{elliequation}--\eqref{boundery0} satisfying $$ \begin{aligned} \|{ u}_\mathrm{r}\|_{H^{3}} \leq & \|F(\bar{\varrho}_0,\bar{u}_0,\bar{\theta}_0) +\delta^2\varrho_0^{**}{v}\cdot\nabla {v}+\delta\varrho_0^{**}(\bar{u}_0\cdot\nabla {v}+{v}\cdot\nabla\bar{u}_0)\|_{H^1}\\[1mm] \leq & C_{\mathrm{m}}(1+\|(\bar{\varrho}_0, \bar{u}_0, \bar{\theta}_0)\|_{H^{2}}^2+\delta^2 \|v\|_{H^{2}}^2). \end{aligned} $$ Now, we take $C_1=C_{\mathrm{m}}(2+\|(\bar{\varrho}_0,\bar{ u}_0,\bar{\theta}_0)\|_{H^{2}}^2)$ and $\delta\leq \min\{C^{-1}_1,1\}$. Then for any $\| v\|_{H^{3}}^2\leq C_1$, one has $$\begin{aligned}\|{ u}_\mathrm{r}\|_{H^{3}} \leq C_1. \end{aligned} $$ Therefore we can construct an approximate function sequence ${u}_\mathrm{r}^n$, such that \begin{equation*} \begin{aligned} & \mu\Delta{ u}_\mathrm{r}^{n+1} +\mu_0\nabla\mathrm{div} u_\mathrm{r}^{n+1} -\delta^2\varrho_0^{**}{ u}^n_\mathrm{r} \cdot\nabla {u}^n_\mathrm{r}-\delta\varrho_0^{**}(\bar{ u}_0\cdot\nabla {u}^n_\mathrm{r} +{u}^n_\mathrm{r}\cdot\nabla\bar{ u}_0)=F(\bar{\varrho}_0, \bar{u}_0, \bar{\theta}_0), \end{aligned} \end{equation*} and for any $n$, $$\|{ u}_\mathrm{r}^n\|_{H^{3}} \leq C_1,\quad \|{ u}_\mathrm{r}^{n+1}-u_\mathrm{r}^{n}\|_{H^{3}} \leq C_2\delta\|{ u}_\mathrm{r}^{n}-u_\mathrm{r}^{n-1}\|_{H^{3}}$$ for some constant $C_2$ independent of $\delta$ and $n$. Finally, we choose a $\delta$ sufficiently small so that $C_2\delta<1$, and then use a compactness argument to get a limit function which solves the nonlinear boundary problem (\ref{js2}). Moreover $\|{u}_\mathrm{r}\|_{H^{3}} \leq C_1$. Thus we have proved Lemma \ref{lem:modfied}. \hfill$\Box$ \end{pf} Let $({\varrho}_0^\delta,{ u}_0^\delta,{{\theta}}_0^\delta )$ be constructed as in Lemma \ref{lem:modfied}. Then there is a constant $$C_3\geq \max\{1, \|\left(\bar{\varrho}_0, \bar{ u}_0,\bar{\theta}_0\right)\|_{L^2}\}$$ depending on $(\bar{\varrho}_0,\bar{u}_0,\bar{{\theta}}_0)$, such that for any $\delta\in (0,{\delta}_2^0)\subset (0,1)$, $$\mathcal{E}({\varrho}_0^\delta,{ u}_0^\delta,{{\theta}}_0^\delta )\leq C_3\delta, $$ where $\mathcal{E}$ is defined by \eqref{enerdienf}. Recalling $\inf_{ x\in\Omega}\{\bar{\rho},\bar{e}\}>0$ and the embedding theorem $H^2\hookrightarrow L^\infty$, we can choose a sufficiently small $\delta$, such that \begin{equation}\label{inferfds} \inf_{ x\in\Omega}\{\varrho_0^\delta+\bar{\rho}, \theta_0^\delta+\bar{e}\}>0.\end{equation} Hence, by virtue of Proposition \ref{pro:0401}, there is a $ {\delta}^0_3\in (0, {\delta}^0_2)$, such that for any $\delta<{\delta}^0_3$, there exists a unique local solution $(\varrho^\delta, u^\delta, \theta^\delta)\in C([0,T],H^3)$ to \eqref{0105} and \eqref{0107}, emanating from the initial data $(\varrho_0^\delta, u_0^\delta,\theta_0^\delta)$. Moreover, \eqref{inferfds} holds for any $\delta$ satisfying $\mathcal{E}({\varrho}_0^\delta,{u}_0^\delta,{{\theta}}_0^\delta )\leq C_3{\delta}^0_3$. Let $C>0$ and ${\delta}^0_1>0$ be the same constants as in Proposition \ref{pro:0401} and $\delta_0=\min\{{\delta}^0_3, {\delta}^0_1/C_3\}$. Let $\delta\in (0,\delta_0)$ and \begin{equation}\label{times} T^{\delta}=\frac{1}{\Lambda}\mathrm{ln}\frac{2\varepsilon_0}{\delta}>0,\;\quad\mbox{i.e.,}\;\; \delta e^{\Lambda T^\delta}=2\varepsilon_0, \end{equation} where $\varepsilon_0\leq 1$, independent of $\delta$, is sufficiently small and will be fixed later. 
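We remark that, by \eqref{times}, the escape time $T^\delta=\Lambda^{-1}\ln(2\varepsilon_0/\delta)$ grows only logarithmically as the initial perturbation is reduced; for instance, halving $\delta$ merely postpones $T^\delta$ by $\ln 2/\Lambda$. This is the time scale on which the nonlinear instability below develops.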
In what follows, we denote $\mathcal{E}_\delta(t):={\mathcal{E}}(\varrho^\delta,{ u}^\delta,{\theta}^\delta )(t)$. Define \begin{equation*} T^*=\sup\left\{t\in (0,T^{\max})\left|~{\mathcal{E}}_\delta(t)\leq C_3{\delta_0}\right.\right\}\end{equation*} and \begin{equation*} T^{**}=\sup\left\{t\in (0,T^{\max})\left|~\left\|\left(\varrho^\delta, {u}^\delta,\theta^\delta\right)(t)\right\|_{{L}^2}\leq 2\delta C_3e^{\Lambda t}\right.\right\}, \end{equation*} where $T^{\mathrm{max}}$ denotes the maximal time of existence of the solution $(\varrho^\delta,{u}^\delta,\theta^\delta)$. Obviously, $T^*,\ T^{**}>0$, and furthermore, \begin{eqnarray}\label{0502n1} &&\mathcal{E}_\delta(T^*)=C_3{\delta_0}\quad\mbox{ if }T^*<\infty , \\[1mm] \label{0502n111} && \left\|\left(\varrho^\delta, { u}^\delta,\theta^\delta\right)(T^{**})\right\|_{{L}^2} =2\delta C_3e^{\Lambda T^{**}}\quad\mbox{ if }T^{**}<T^{\max}. \end{eqnarray} Then for all $t\leq \min\{T^\delta,T^*,T^{**}\}$, we deduce from the estimate \eqref{energyinequality} and the definitions of $T^*$ and $T^{**}$ that \begin{equation*}\begin{aligned} &{\mathcal{E}}^2_\delta(t) +\|(\varrho^\delta,\theta^\delta)_t(t)\|_{H^2}^2 +\|u_t^\delta(t)\|_{H^1}^2 +\int_0^t\|u_{tt}^\delta\|^2_{L^2} \mathrm{d}\tau \\ & \leq C[ \mathcal{E}^2({\varrho}_0^\delta,{ u}_0^\delta,{{\theta}}_0^\delta ) +2 C_3^2\delta^2e^{2\Lambda t}/\Lambda] +\Lambda\int_0^t{\mathcal{E}}^2_\delta(\tau) \mathrm{d}\tau \\ & \leq C_4\delta^2e^{2\Lambda t} +\Lambda\int_0^t{\mathcal{E}}^2_\delta(\tau) \mathrm{d}\tau \end{aligned} \end{equation*} for some constant $C_4>0$. Thus, applying Gronwall's inequality, one concludes \begin{equation}\begin{aligned}\label{0503} {\mathcal{E}}^2_\delta(t) +\|(\varrho^\delta,\theta^\delta)_t(t)\|_{H^2}^2 + \|u_t^\delta(t)\|_{H^1}^2 +\int_0^t\|u_{tt}^\delta\|^2_{L^2} \mathrm{d}\tau \leq C_5\delta^2e^{2\Lambda t} \end{aligned} \end{equation} for some constant $C_5>0$. Let $(\varrho^{\mathrm{d}}, {u}^{\mathrm{d}}, {\theta}^{\mathrm{d}})=(\varrho^{\delta}, {u}^{\delta},{\theta}^{\delta})-\delta(\varrho^{\mathrm{l}},{u}^{\mathrm{l}},{\theta}^{\mathrm{l}})$.
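The difference function $(\varrho^{\mathrm{d}},u^{\mathrm{d}},\theta^{\mathrm{d}})$ measures the deviation of the nonlinear solution from the rescaled linear growing mode. The point of the next lemma is that, as long as $\delta e^{\Lambda t}\leq 2\varepsilon_0$, this deviation is of higher order: its $L^2$-norm is bounded by $\sqrt{C_6}\,\delta^{3/2}e^{3\Lambda t/2}=\sqrt{C_6}\,(\delta e^{\Lambda t})^{1/2}\,\delta e^{\Lambda t}$, which is small compared with the linear part of size $\delta e^{\Lambda t}$ once $\varepsilon_0$ is chosen small.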
Noting that $(\varrho^\mathrm{a}_\delta, u^{\mathrm{a}}_\delta, \theta^{\mathrm{a}}_\delta):= \delta(\varrho^{\mathrm{l}}, u^{\mathrm{l}}, \theta^{\mathrm{l}})$ is also a solution to the linearized problem \eqref{0106}--\eqref{0108} with the initial data $\delta(\bar{\varrho}_0, \bar{u}_0, \bar{\theta}_0)\in H^3$, we find that $(\varrho^{\mathrm{d}}, {u}^{\mathrm{d}}, {\theta}^{\mathrm{d}})$ satisfies the following non-homogeneous equations: \begin{equation}\label{h0407}\left\{\begin{array}{ll} \varrho_t^{\mathrm{d}} + \mathrm{div}(\bar{\rho} u^{\mathrm{d}})= -\mathrm{div}(\varrho^{\delta} u^{\delta}) :=N^{\varrho}(\varrho^\delta, u^{\delta}):=N^\varrho_\delta, \\[2mm] \bar{\rho} u_t^{\mathrm{d}} + a\nabla(\bar{e} {\varrho}^{\mathrm{d}} +\bar{\rho}\theta^{\mathrm{d}})-\mu\Delta u^{\mathrm{d}}-\mu_0\nabla\mathrm{div} u^{\mathrm{d}}+g\varrho^{\mathrm{d}} e_3 \\ \qquad = -( \varrho^{\delta}+\bar{\rho}) u^{\delta}\cdot\nabla u^{\delta} -\varrho^\delta u^{\delta}_t -a\nabla (\varrho^\delta \theta^{\delta}):=N^u(\varrho^\delta,u^{\delta},\theta^{\delta}):=N^u_\delta, \\[2mm] \theta_t^{\mathrm{d}}+\bar{e}'u^{\mathrm{d}}_3 +a\bar{e}\mathrm{div}u^{\mathrm{d}}= [{\mu}|\nabla u^{\delta}+\nabla (u^{\delta})^\mathrm{T}|^2/2 + \lambda(\mathrm{div}u^{\delta})^2]/(\varrho^{\delta}+\bar{\rho}) \\[2mm] \qquad\quad\qquad\quad\qquad \qquad -u^{\delta}\cdot\nabla \theta^{\delta} -a\theta^{\delta}\mathrm{div}u^{\delta}:=N^\theta(\varrho^\delta,u^{\delta}, \theta^{\delta}):=N^\theta_\delta, \end{array}\right.\end{equation} with initial data $(\varrho^{\mathrm{d}}(0),{u}^{\mathrm{d}}(0),\theta^{\mathrm{d}}(0))= \delta^2(\bar{\varrho}_0,{u}_\mathrm{r},\bar{{\theta}}_0)$ and boundary condition $u^{\mathrm{d}}|_{\partial\Omega}=0$. Next, we shall establish the error estimate for $(\varrho^{\mathrm{d}},u^{\mathrm{d}},\theta^{\mathrm{d}})$ in the $L^2$-norm. \begin{lem}\label{erroestimate} There is a constant $C_6$, such that for all $t\leq \min\{T^\delta,T^*,T^{**}\}$, \begin{equation}\label{ereroe} \begin{aligned} \| (\varrho^{\mathrm{d}},u^{\mathrm{d}},\theta^{\mathrm{d}})(t)\|^2_{L^2} \leq C_6\delta^3e^{3\Lambda t}. \end{aligned} \end{equation} \end{lem} \begin{pf} We differentiate the momentum equations \eqref{h0407}$_{2}$ in time, multiply the resulting equations by $u_t^{\mathrm{d}}$ in $L^2(\Omega)$, and use the equations \eqref{h0407}$_{1}$ and \eqref{h0407}$_{3}$ to deduce \begin{equation}\label{nnn0314P} \begin{aligned} &\frac{d}{dt} \int \left\{\bar{\rho}| u_t^{\mathrm{d}}|^2 -g\bar{\rho}'({u}_3^{\mathrm{d}})^2+ [(1+a)\bar{p}\mathrm{div} u^{\mathrm{d}} -2g\bar{\rho}{u}_3^{\mathrm{d}}]\mathrm{div}{ u}^{\mathrm{d}}\right\}\mathrm{d} x\\ &=- 2\mu\int |\nabla u^{\mathrm{d}}_t|^2\mathrm{d} x- 2\mu_0\int |\mathrm{div} u_t^{\mathrm{d}}|^2 \mathrm{d} x\\ &\quad+ 2\int [\partial_tN^u_\delta- gN^\varrho_\delta e_3- a\nabla(\bar{e}N^\varrho_\delta +\bar{\rho} N^\theta_\delta)]\cdot u_t^{\mathrm{d}}\mathrm{d} x . \end{aligned}\end{equation} Thanks to \eqref{sharprate}, one has \begin{equation*}\label{0302}\begin{aligned} &\int \{g\bar{\rho}'({u}^{\mathrm{d}}_3)^2 +[2g\bar{\rho}u_3^{\mathrm{d}}-(1+a)\bar{p} \mathrm{div}{ u}^{\mathrm{d}}]\mathrm{div}{ u}^{\mathrm{d}}\} \mathrm{d} x\\ & \quad \leq\Lambda\int\left(\mu|\nabla {u}^{\mathrm{d}}|^2+\mu_0|\mathrm{div} {u}^{\mathrm{d}}|^2\right)\mathrm{d}x +\Lambda^2{\int\bar{\rho}|{ u}^{\mathrm{d}}|^2\mathrm{d} x}.
\end{aligned}\end{equation*} Thus, integrating (\ref{nnn0314P}) in time from $0$ to $t$, we get \begin{equation}\label{0314} \begin{aligned} &\|\sqrt{\bar{\rho}} u_t^\mathrm{d}(t)\|^2_{L^2}+2\int_0^t(\mu\|\nabla u_\tau ^\mathrm{d}\|^2_{L^2} +\mu_0\|\mathrm{div} u_\tau^\mathrm{d}\|^2_{L^2}) \mathrm{d}\tau \\ & \leq I_1^0 + {\Lambda^2} \|\sqrt{\bar{\rho}} u^\mathrm{d}(t)\|_{L^2} + {\Lambda}\mu\|\nabla u^\mathrm{d}(t)\|^2_{L^2}+ {\Lambda}\mu_0\|\mathrm{div} u^\mathrm{d}(t)\|^2_{L^2}\\ & \quad + 2\int_0^t\int [\partial_\tau N^u_\delta - gN^\varrho_\delta e_3 - a\nabla(\bar{e}N^\varrho_\delta +\bar{\rho} N^\theta_\delta)]\cdot u^{\mathrm{d}}_\tau\mathrm{d} x\mathrm{d}\tau, \end{aligned}\end{equation} where $$I_1^0=\left\{\int \left\{\bar{\rho}| u_t^{\mathrm{d}}|^2 -g\bar{\rho}'({u}_3^{\mathrm{d}})^2+ [(1+a)\bar{p}\mathrm{div} u^{\mathrm{d}} -2g\bar{\rho}{u}_3^{\mathrm{d}}]\mathrm{div}{ u}^{\mathrm{d}}\right\}\mathrm{d} x\right\}\bigg|_{t=0}.$$ Using Newton-Leibniz's formula and Cauchy-Schwarz's inequality, we find that \begin{equation}\begin{aligned}\label{0316} & \Lambda (\mu\|\nabla u^\mathrm{d}(t)\|_{L^2}^2+\mu_0\|\mathrm{div} u^\mathrm{d}(t)\|^2_{L^2}) \\ & =I_2^0+ 2\Lambda\int_0^t\int_{\Omega}\left(\mu\sum_{1\leq i,j\leq 3}\partial_{x_i} u_{j\tau}^\mathrm{d}\partial_{x_i} u_{j\tau}^\mathrm{d} \mathrm{d} x\mathrm{d}\tau +\mu_0\mathrm{div} u_\tau^\mathrm{d}\mathrm{div} u^\mathrm{d}\right)\mathrm{d} x\mathrm{d}\tau \\ & \leq I_2^0+\int_0^t(\mu\|\nabla u_\tau^\mathrm{d}\|_{L^2}^2 +\mu_0\|\mathrm{div} u_\tau^\mathrm{d}\|^2_{L^2}) \mathrm{d}\tau +\Lambda^2\int_0^t(\mu\|\nabla u^\mathrm{d}\|_{L^2}^2+\mu_0\|\mathrm{div} u^\mathrm{d} \|^2_{L^2})\mathrm{d}\tau, \end{aligned}\end{equation} where $I_2^0=\Lambda (\mu\|\nabla u^\mathrm{d}(0)\|_{L^2}^2+\mu_0\|\mathrm{div} u^\mathrm{d}(0)\|^2_{L^2})$ and $u_{j\tau}^\mathrm{d}$ denotes the $j$-th component of $u_{\tau}^\mathrm{d}$ . On the other hand, \begin{equation}\begin{aligned}\label{0317} \Lambda\partial_t\|\sqrt{\bar{\rho}} u^\mathrm{d}(t)\|^2_{L^2}=2\Lambda\int_{\Omega} \bar{\rho} u^\mathrm{d}(t)\cdot u^\mathrm{d}_t(t)\mathrm{d} x\leq\|\sqrt{\bar{\rho}} u_t^\mathrm{d}(t)\|^2_{L^2} +\Lambda^2\|\sqrt{\bar{\rho}} u^\mathrm{d}(t)\|^2_{L^2}. \end{aligned}\end{equation} Hence, putting (\ref{0314})--(\ref{0317}) together, we obtain the differential inequality \begin{equation}\label{12safd} \begin{aligned} & \partial_t \|\sqrt{\bar{\rho}} u^\mathrm{d}(t)\|^2_{L^2}+ \mu\| \nabla u^\mathrm{d}(t)\|_{L^2}^2+ \mu_0\| \mathrm{div} u^\mathrm{d}(t)\|^2_{L^2} \\ &\leq2\Lambda\left[ \|\sqrt{\bar{\rho}} u^\mathrm{d}\|^2_{L^2} +\int_0^t(\mu\| \nabla u^\mathrm{d}\|_{L^2}^2 + {\mu}_0\|\mathrm{div} u^\mathrm{d}\|_{L^2}^2) \mathrm{d}s\right] \\ & \quad +\frac{I_1^0+2I^0_2}{\Lambda}+ \frac{2}{\Lambda}\int_0^t \int [\partial_\tau N^u_\delta- gN^\varrho_\delta e_3- a\nabla(\bar{e}N^\varrho_\delta +\bar{\rho} N^\theta_\delta)]\cdot u^{\mathrm{d}}_\tau\mathrm{d} x\mathrm{d}\tau. \end{aligned} \end{equation} Next, we control the last two terms on the right hand of \eqref{12safd}. 
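Both contributions will turn out to be of order $\delta^3e^{3\Lambda t}$: the initial terms $I_1^0$ and $I_2^0$ because the difference function starts from data of size $\delta^2$ (see \eqref{mmmode}), and the nonlinear terms because $N^\varrho_\delta$, $N^u_\delta$ and $N^\theta_\delta$ are at least quadratic in the solution, whose size is controlled by \eqref{0503}.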
Noting that \begin{equation}\label{timesd} \delta e^{\Lambda t}\leq 2\varepsilon_0\leq 2 \quad\mbox{for any }t\leq \min\{T^\delta,T^*,T^{**}\}, \end{equation} we utilize \eqref{0503} and \eqref{0501}, H\"oldear's inequality and Sobolev's embedding theorem to infer that \begin{equation}\label{esitmaeintial1} \begin{aligned} & \left|2\int_0^t \int [\partial_\tau N^u_\delta - gN^\varrho_\delta e_3 - a\nabla(\bar{e}N^\varrho_\delta +\bar{\rho} N^\theta_\delta)]\cdot u^{\mathrm{d}}_\tau\mathrm{d} x\mathrm{d}\tau\right| \\ & \lesssim \int_0^t (\|(N^\varrho_\delta,N_\delta^\theta)\|_{H^1}+\|\partial_\tau N^u_\delta\|_{L^2})(\|u_\tau^\mathrm{a} \|_{L^2}+\|u_\tau^\delta\|_{L^2})\mathrm{d}\tau \\ & \lesssim \int_0^t (\delta^3e^{3\Lambda \tau}+\delta^2e^{2\Lambda \tau} +\delta e^{\Lambda \tau}\|u_{\tau\tau}^\delta\|_{L^2})\delta e^{\Lambda \tau}\mathrm{d}\tau \\ & \lesssim \delta^3e^{3\Lambda t}+ \delta^4e^{4\Lambda t}\lesssim \delta^3e^{3\Lambda t}, \end{aligned} \end{equation} and \begin{equation}\label{esitmaeintial2} \begin{aligned} ({I_1^0 +2I^0_2})/{\Lambda}\lesssim&(\|\sqrt{\bar{\rho}} u_t^\mathrm{d}\|^2_{L^2}+\|\nabla u^\mathrm{d}_0\|_{L^2}^2+\|u^\mathrm{d}_3\|_{L^2}^2)|_{t=0}\\ \lesssim& [\|(\varrho^\mathrm{d},\theta^\mathrm{d})\|_{H^1}^2+\|u^\mathrm{d}\|_{H^2}^2+\mathcal{E}_\delta^2 (\mathcal{E}_\delta^2+\mathcal{E}_\delta^4+\|u_t^\delta\|_{L^2}^2)]|_{t=0} \\ \lesssim & \delta^4(\|(\bar{\varrho}_\mathrm{0},\bar{\theta}_{0})\|_{H^1}^2 +\|u_{\mathrm{r}}\|_{H^2}^2)+\delta^2 e^{2\Lambda t}(\delta^2 e^{2\Lambda t}+\delta^4 e^{4\Lambda t})\lesssim \delta^3 e^{3\Lambda t} . \end{aligned} \end{equation} Thus, substituting (\ref{esitmaeintial2}) and (\ref{esitmaeintial1}) into (\ref{12safd}), we obtain $$ \begin{aligned} & \partial_t \|\sqrt{\bar{\rho}} u^{\mathrm{d}}(t)\|^2_{L^2}+ \mu\|\nabla u^{\mathrm{d}}(t)\|_{L^2}^2 + {\mu_0}\|\mathrm{div} u(t)\|^2_{L^2} \\ & \leq 2\Lambda\left[ \|\sqrt{\bar{\rho}} u^{\mathrm{d}}(t)\|^2_{L^2} +\int_0^t(\mu\| \nabla u^\mathrm{d}\|_{L^2}^2 + {\mu}_0\|\mathrm{div} u^{\mathrm{d}}\|^2_{L^2}) \mathrm{d}\tau\right]+C_7\delta^3e^{3\Lambda t}. \end{aligned}$$ Applying Gronwall's inequality to the above inequality, one obtains \begin{equation}\label{estimerrvelcoity} \begin{aligned} \|\sqrt{\bar{\rho}} u^{\mathrm{d}}(t)\|^2_{L^2}+ \int_0^t({\mu}\|\nabla u^{\mathrm{d}}\|^2_{L^2}+{\mu}_0\|\mathrm{div} u^{\mathrm{d}}\|^2_{L^2})\mathrm{d}\tau \lesssim \delta^3e^{3\Lambda t}+\delta^4\|\sqrt{\bar{\rho}}u_{\mathrm{r}}\|_{L^2}^2\lesssim \delta^3e^{3\Lambda t} \end{aligned} \end{equation} for all $t\leq \min\{T^\delta,T^*,T^{**}\}$. Thus, making use of \eqref{0314}, \eqref{0316} and \eqref{esitmaeintial1}--\eqref{estimerrvelcoity}, we deduce that \begin{equation}\label{inequalemee}\begin{aligned} &\frac{1}{\Lambda}\|\sqrt{\bar{\rho}} u_t^{\mathrm{d}}(t)\|^2_{L^2}+ {\mu}\|\nabla u^{\mathrm{d}}(t)\|_{L^2}^2 +\mu_0\|\mathrm{div} u^{\mathrm{d}} (t)\|^2_{L^2}\\ & \leq {\Lambda}\|\sqrt{\bar{\rho}} u^{\mathrm{d}}(t)\|^2_{L^2}+2 {\Lambda}\int_0^t({\mu}\| \nabla u^{\mathrm{d}}\|_{L^2}^2+ {\mu}_0\|\mathrm{div} u^{\mathrm{d}}\|^2_{L^2})\mathrm{d}\tau\\ &\quad +\frac{I_1^0+2I^0_2}{\Lambda}+ \frac{2}{\Lambda}\int_0^t \int [\partial_\tau N^u_\delta- gN^\varrho_\delta e_3- a\nabla(\bar{e}N^\varrho_\delta +\bar{\rho} N^\theta_\delta)]\cdot u^{\mathrm{d}}_\tau\mathrm{d} x\mathrm{d}\tau\lesssim \delta^3e^{3\Lambda t}. 
\end{aligned}\end{equation} This, together with Poincar\'e's inequality and the estimate \eqref{estimerrvelcoity}, yields \begin{eqnarray}\label{uestimate1n} \| u^{\mathrm{d}}(t)\|_{H^1 }^2+\| u_t^{\mathrm{d}}(t)\|^2_{L^2 }+ \int_0^t\|\nabla u^{\mathrm{d}}\|^2_{L^2}\mathrm{d}\tau \lesssim \delta^3e^{3\Lambda t}. \end{eqnarray} Finally, using the equations \eqref{h0407}$_1$ and \eqref{h0407}$_3$, and the estimates \eqref{timesd} and \eqref{uestimate1n}, we find that \begin{equation*}\begin{aligned} \|(\varrho^{\mathrm{d}},\theta^{\mathrm{d}})(t)\|_{L^2}\leq & \delta^2\|(\bar{\varrho}_0,\bar{\theta}_0)\|_{L^2}+\int_0^t \|(\varrho^{\mathrm{d}},\theta^{\mathrm{d}})_\tau\|_{L^2}\mathrm{d}\tau \\ \lesssim &\delta^{2}+\int_0^t(\| u^{\mathrm{d}}\|_{H^1}+\|(N^\varrho_\delta,N^\theta_\delta)\|_{L^2})\mathrm{d}\tau \\ \lesssim & \delta^2+\int_0^t(\delta^\frac{3}{2}e^{\frac{3\Lambda}{2}\tau}+\mathcal{E}_\delta^2(\tau))\mathrm{d}\tau\lesssim \delta^\frac{3}{2}e^{\frac{3\Lambda}{2} t}. \end{aligned}\end{equation*} Putting the previous estimates together, we get \eqref{ereroe} immediately. This completes the proof of Lemma \ref{erroestimate}. \hfill$\Box$ \end{pf} Now, we claim that \begin{equation}\label{n0508} T^\delta=\min\left\{T^\delta,T^*,T^{**}\right\}, \end{equation} provided that the small constant $\varepsilon_0$ is taken to be \begin{equation}\label{defined} \varepsilon_0=\min\left\{\frac{{C_3\delta_0}} {4\sqrt{C_5}},\frac{C_3^2}{8C_6},\frac{m_0^2}{C_6},1 \right\}>0, \end{equation}where $m_0=\min\{\|(\bar{u}_{01},\bar{u}_{02})\|_{L^2},\|\bar{u}_{03}\|_{L^2}\}>0$ due to \eqref{wangweiwe16}. Indeed, if $T^*=\min\{T^{\delta},T^*,T^{**}\}$, then $T^*<\infty$. Moreover, from \eqref{0503} and \eqref{times} we get \begin{equation*} {\mathcal{E}}_\delta(T^*)\leq \sqrt{C_5}\delta e^{\Lambda T^*} \leq \sqrt{C_5}\delta e^{\Lambda T^\delta}=2\sqrt{C_5}\varepsilon_0<C_3{\delta_0}, \end{equation*} which contradicts \eqref{0502n1}. On the other hand, if $T^{**}=\min\{T^{\delta},T^*,T^{**}\}$, then $T^{**}<T^{\mathrm{max}}$. Moreover, in view of \eqref{0501}, \eqref{times} and \eqref{ereroe}, we see that \begin{equation*}\begin{aligned} \left\|\left(\varrho^\delta, { {u}}^\delta,{ {\theta}}^\delta \right)(T^{**})\right\|_{L^2} \leq & \left\|\left(\varrho^\mathrm{a}_{\delta}, { {u}}^\mathrm{a}_{\delta},{ {\theta}}^{\mathrm{a}}_{\delta} \right)(T^{**})\right\|_{L^2} +\left\|\left(\varrho^{\mathrm{d}}, { {u}}^{\mathrm{d}},{ {\theta}}^{\mathrm{d}} \right)(T^{**})\right\|_{L^2} \\ \leq &\delta \left\|\left(\varrho^\mathrm{l}, { {u}}^{\mathrm{l}},{ {\theta}}^{\mathrm{l}} \right)(T^{**})\right\|_{L^2}+\sqrt{C_6}\delta^{3/2}e^{3\Lambda T^{**}/2} \\ \leq & \delta C_3e^{\Lambda T^{**}}+\sqrt{C_6}\delta^{3/2} e^{3\Lambda T^{**}/2} \\ \leq & \delta e^{\Lambda T^{**}}(C_3+\sqrt{2C_6\varepsilon_0}) <2\delta C_3 e^{\Lambda T^{**}}, \end{aligned} \end{equation*} which also contradicts \eqref{0502n111}. Therefore, \eqref{n0508} holds.
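(In the two displays above we have used the choice \eqref{defined} of $\varepsilon_0$: indeed, $2\sqrt{C_5}\,\varepsilon_0\leq C_3\delta_0/2<C_3\delta_0$ and $\sqrt{2C_6\varepsilon_0}\leq\sqrt{2C_6\cdot C_3^2/(8C_6)}=C_3/2<C_3$.)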
Finally, we again use \eqref{defined} and \eqref{ereroe} to deduce that \begin{equation*}\begin{aligned} \|u_3^{\delta}(T^\delta)\|_{L^2}\geq & \|u^{\mathrm{a}}_{3\delta}(T^{\delta})\|_{L^2}-\|u_3^{\mathrm{d}}(T^{\delta})\|_{L^2} = \delta e^{\Lambda T^\delta}\|\bar{u}_{03}\|_{L^2}-\|u_3^{\mathrm{d}}(T^{\delta})\|_{L^2} \\ \geq & \delta e^{\Lambda T^\delta}\|\bar{u}_{03}\|_{L^2}-\sqrt{C_6}\delta^{3/2}e^{3\Lambda^* T^{\delta}/2} \geq 2m_0\varepsilon_0-\sqrt{C_6}\varepsilon_0^{3/2} \geq m_0\varepsilon_0, \end{aligned} \end{equation*} where $u^{\delta}_{3}(T^{\delta})$ denotes the third component of $u^{\delta}(T^{\delta})$. Similarly, we also have $$\|(u_1^{\delta},u_2^{\delta})(T^\delta)\|_{L^2}\geq m_0\varepsilon_0.$$ This completes the proof of Theorem \ref{thm:0102} by defining $\varepsilon=m_0\varepsilon_0$. In addition, if $\bar{\rho}'\geq 0$, then the function $\bar{\rho}_0$ constructed in \eqref{0501} satisfies $\|\bar{\rho}_0\|_{L^2}>0$. Thus we also obtain $\|\varrho^\delta(T^\delta)\|_{L^2}\geq m_0\varepsilon_0$, if we define $m_0=\min\{\|\bar{\varrho}_{0}\|_{L^2}, \|(\bar{u}_{01}, \bar{u}_{02})\|_{L^2},\|\bar{u}_{03}\|_{L^2}\}>0$. Hence, the assertion in Remark \ref{strongconden} holds. \setcounter{equation}{0} \section*{Appendix}\label{sec:03}
1,108,101,562,674
arxiv
\section{Introduction} Over the last few decades, the phase structure of QCD has been one of the main concerns in the physics of the strong interaction. For baryon chemical potentials lower than the nucleon mass minus the binding energy per baryon in nuclei, the phase realized in nature is the QCD vacuum, where the chiral symmetry is spontaneously broken. The chiral symmetry breaking is responsible for the mass generation of hadrons as well as for the mass splittings of chiral partners. At sufficiently large chemical potential and/or temperature, the chiral symmetry is expected to be restored. An interesting possibility arises when one allows the chiral condensate to vary in space; % Nakano and Tatsumi demonstrated in \cite{Nakano:2004cd} using the NJL model that the symmetry restoration may take place via several steps; going up in density from the vacuum, the system first goes into an intriguing state named the dual chiral density wave (DCDW), that is, a particular type of inhomogeneous chiral phase in which the condensate makes a spiral in the $(\sigma_0,\pi_0)$ chiral plane along the $z$ direction. Up to the present, various inhomogeneous chiral phases have been discussed. These include the real kink crystal (RKC) phase, which belongs to another class of inhomogeneous phases \cite{Nickel:2009ke}. In both cases, the chiral symmetry is partially restored in either the momentum space (DCDW) or the real space (RKC). Such inhomogeneous chiral phases may be realized in neutron stars and may lead to some interesting astrophysical implications \cite{Tatsumi:2014cea,Buballa:2015awa}. There are a number of approaches to chiral inhomogeneous phases. One of the major strategies is to apply the mean-field approximation \cite{Nickel:2009wj,Carignano:2010ac,Karasawa:2013zsa,Adhikari:2017ydi} or the Ginzburg-Landau (gradient) expansion \cite{Abuki:2011pf,Abuki:2013pla,Carignano:2017meb} to quark-based models such as NJL-type models or the quark-meson model \cite{Buballa:2014tba}. Recently a self-consistent mean-field framework has also been applied \cite{Lee:2017yea}. One of the advantages of this kind of approach is that these models are capable of realizing the QCD vacuum properties as well as the color-flavor locked phase of quark matter, which is known to be the densest phase of QCD \cite{Alford:2007xm}. On the other hand, the main disadvantage is the lack of the ability to reproduce normal nuclear matter, the QCD phase next to the vacuum phase, realized right after the liquid-gas phase transition. A quite different approach was taken recently in \cite{Heinz:2013hza}. Using the parity doublet hadron model tuned to reproduce the bulk properties of normal nuclear matter, the authors have shown that the DCDW phase appears at densities several times larger than the normal nuclear density. In the present paper, we adopt a hadronic model with parity doublet structure (mirror assignment) \cite{Detar:1988kn,Jido:2001nt}, with vector mesons included in a manner guided by the hidden local symmetry \cite{Bando:1987br,Harada:2003jx}. With the six-point scalar interaction included, the model is known to successfully reproduce the bulk properties of normal nuclear matter for a wide range of the chiral invariant mass \cite{Motohiro:2015}. Our main concerns here are: 1) whether the inhomogeneous chiral phase is possible within our model; 2) how the phase transition points, if any, as a function of $\mu_B$ change with the chiral invariant mass; and 3) what the effect of the current quark mass is on the inhomogeneous chiral phase.
In particular, point 3) was missed in \cite{Heinz:2013hza}. In order to incorporate this in our analysis, we extend the ansatz for the DCDW phase so as to take into account the effect of the explicit symmetry breaking. The extended ansatz smoothly interpolates between the DCDW phase and a nearly symmetry-restored phase. With this setup, we construct the effective potential by diagonalizing the Bogoliubov-de Gennes (BdG) Hamiltonian for nucleons, and determine the phases by numerically minimizing the potential. Our main finding is the emergence of another type of DCDW phase which occupies the lower density region depending on the value of the chiral invariant mass. The paper is organized as follows. In Sec.~\ref{sec:model}, we describe our model setup and approximation scheme. In Sec.~\ref{sec:PhaseStructure}, we present our numerical results for the phases and discuss the phase structure in the plane of $\mu_B$ and the chiral invariant mass. Sec.~\ref{sec:summary} summarizes the present work. \section{Model}\label{sec:model} In our analysis, we introduce $N^\ast(1535)$ as the chiral partner to the ordinary nucleon based on the parity doublet structure~\cite{Detar:1988kn,Jido:2001nt}. For constructing a relativistic mean field model to describe nuclear matter, following Ref.~\cite{Motohiro:2015}, we include the $\omega$ meson using the hidden local symmetry~\cite{Bando:1987br,Harada:2003jx} in addition to the scalar and pseudoscalar mesons. The baryon part of the Lagrangian is expressed as~\cite{Motohiro:2015} \begin{align} {\cal L}_N=&\bar\psi_{1r}i\gamma^\mu D_\mu\psi_{1r}% +\bar\psi_{1l}i\gamma^\mu D_\mu\psi_{1l}&\nonumber \\ &+\bar\psi_{2r}i\gamma^\mu D_\mu\psi_{2r}% +\bar\psi_{2l}i\gamma^\mu D_\mu\psi_{2l}&\nonumber \\ &-m_0[\bar\psi_{1l}\psi_{2r}-\bar\psi_{1r}\psi_{2l}% -\bar\psi_{2l}\psi_{1r}+\bar\psi_{2r}\psi_{1l}]&\nonumber\\ &-g_1[\bar\psi_{1r}M^\dagger\psi_{1l}% +\bar\psi_{1l}M\psi_{1r}]&\nonumber\\ &-g_2[\bar\psi_{2r}M\psi_{2l}% +\bar\psi_{2l}M^\dagger\psi_{2r}]&\\ &+a_{\rho NN}[\bar\psi_{1l}\gamma^\mu\xi_L^\dagger% \hat\alpha_{\parallel\mu}\xi_L\psi_{1l} +\bar\psi_{1r}\gamma^\mu\xi_R^\dagger% \hat\alpha_{\parallel\mu}\xi_R\psi_{1r}]&\nonumber\\ &+a_{\rho NN}[\bar\psi_{2l}\gamma^\mu\xi_R^\dagger% \hat\alpha_{\parallel\mu}\xi_R\psi_{2l}% +\bar\psi_{2r}\gamma^\mu\xi_L^\dagger% \hat\alpha_{\parallel\mu}\xi_L\psi_{2r}]&\nonumber\\ &+a_{0NN}{\rm tr}[\hat\alpha_{\parallel\mu}]% (\bar\psi_{1r}\gamma^\mu\psi_{1r}+\bar\psi_{1l}% \gamma^\mu \psi_{1l}&\nonumber\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ % +\bar\psi_{2r}\gamma^\mu \psi_{2r}% +\bar\psi_{2l}\gamma^\mu \psi_{2l})&\nonumber \label{eq:LN} \end{align} The part for the scalar and pseudoscalar mesons is given by \begin{align} {\cal L}_M=&\frac{1}{4}{\rm tr}\left[% \partial_\mu M\partial^\mu M^\dagger% \right]+\frac{1}{4}\bar\mu^2{\rm tr}\left[% MM^\dagger\right]&\nonumber \\ &-\frac{1}{16}\lambda_4\left({\rm tr}\left[% MM^\dagger\right]\right)^2+\frac{1}{48}% \lambda_6\left({\rm tr}\left[% MM^\dagger\right]\right)^3&\nonumber \\ &+\frac{1}{4}m_\pi^2 f_\pi{\rm tr}\left[% M+M^\dagger\right]& \end{align} In this paper, we omit the kinetic and mass terms for vector mesons. Details of the above Lagrangian terms can be found in \cite{Motohiro:2015}.
In the present analysis, we adopt the following extended DCDW ansatz \begin{align} \braket{M}=M(z)\equiv\delta\sigma+\sigma_0e^{2ifz\tau^3} \label{eq:ansatz} \end{align} where $\delta \sigma$, $\sigma_0$ and $f$ are parameters with mass dimension one, and $\tau^a$ ($a=1,2,3$) are the Pauli matrices. The space-independent part $\delta\sigma$ accommodates the possibility that the space average of the DCDW condensate acquires a nonvanishing shift in the $\sigma$-direction due to the explicit chiral symmetry breaking. Applying the mean-field approximation, the Lagrangian for the nucleons is cast into \begin{align} {\cal L}_N=&\bar\psi_1\left[i\slash{\partial}-g_1\left(% \delta\sigma+\sigma_0e^{2ifz\tau^3\gamma_5}% \right)+\gamma^0\mu_B^*\right]\psi_1&\nonumber \\ &+\bar\psi_2\left[i\slash{\partial}-g_2\left(% \delta\sigma+\sigma_0e^{-2ifz\tau^3\gamma_5}% \right)+\gamma^0\mu_B^*\right]\psi_2&\nonumber \\ &-m_{0}\left(\bar\psi_1\gamma_5\psi_2% -\bar\psi_2\gamma_5\psi_1\right)& \end{align} where $\mu_B^*$ is the effective chemical potential, which includes the $\omega$ contribution as \begin{equation*} \mu_B^*=\mu_B-g_{\omega NN}\omega_0\, . \end{equation*} The nucleon contribution to the effective potential can be written as \begin{align} \Omega_{N}=\frac{i}{V_4}\mathrm{Tr}\,% \mathrm{Log}(i\partial_0-({\mathcal H}(z)-\mu_B^*)), \label{eq:omegaB} \end{align} where $V_4$ is the space-time volume and ${\mathcal H}(z)$ is the single particle Bogoliubov-de Gennes (BdG) Hamiltonian defined in the space of the fermion bispinor $\psi=(\psi_1,\psi_2)$ as \begin{equation*} {\mathcal H}=\left(% \begin{array}{cc} i\gamma^0\bm{\gamma}\cdot\nabla+g_1\gamma^0M(z) & m_{0}\gamma^0\gamma_5 \\ -m_{0}\gamma^0\gamma_5 & i\gamma^0\bm{\gamma}\cdot\nabla+g_2\gamma^0M(z)^* \\ \end{array} \right) \end{equation*} This is nothing but the Dirac Hamiltonian in the presence of a periodic potential field $M(z)(=M(z+\frac{\pi}{f}))$. Then the functional trace in Eq.~(\ref{eq:omegaB}) can be evaluated by finding the eigenvalues of the operator ${\mathcal H}(z)$ \cite{Nickel:2008ng}. The eigenvalues carry a discrete label as well as a continuous three-momentum ${\bf p}$ in addition to internal quantum numbers; this is because of the Bloch theorem, which states that the eigenfunctions in the presence of a periodic potential are modified plane waves, i.e., plane waves distorted by periodic functions. To be specific, we decompose the bispinor as \begin{equation*} \psi({\bf x})=\sum_{\ell=-\infty}^{\infty}\sum_{\bf p}% \psi_{{\bf p},\ell}\,e^{i\left({\bm K}_\ell+{\bf p}\right)\cdot{\bf x}} \end{equation*} where ${\bm K}_\ell=(0,0,2f\ell)$ is the reciprocal lattice vector.
Moving on to the quasimomentum basis $\{\psi_{{\bf p},\ell}\}$, the BdG Hamiltonian for the proton $(I_3=+1/2)$ sector is cast into the following block-diagonalized form: \begin{align} H_{\ell\ell'}({\bf p})=% &\left( % \begin{array}{@{\,}cccc@{\,}} H^{1}_{\ell\ell'}&\gamma^0\gamma_5m_0\delta_{\ell\ell'}\\ -\gamma^0\gamma_5m_0\delta_{\ell\ell'}&H^{2}_{\ell\ell'} \end{array}% \right)&\nonumber \\ H^{1}_{\ell\ell'}=% &\left[\left({\bf p}+{\bm K}_\ell\right)\cdot\gamma^0% {\bm\gamma}+g_1\delta\sigma\gamma^0\right]\delta_{\ell\ell'}% &\nonumber \\ &+g_1\sigma_0\gamma^0\left[P_r\delta_{\ell\ell'+1}% +P_l\delta_{\ell\ell'-1}\right]% &\nonumber \\ H^{2}_{\ell\ell'}=&\left[\left({\bf p}+{\bm K}_\ell\right)% \cdot{\gamma^0\bm \gamma}+g_2\delta\sigma\gamma^0\right]% \delta_{\ell\ell'}&\nonumber \\ &+g_2\sigma_0\gamma^0\left[P_r\delta_{\ell\ell'-1}% +P_l\delta_{\ell\ell'+1}\right]% &\nonumber \end{align} where $P_{r,l}$ are the projection operators defined as \begin{equation*} P_{r}=\frac{1+\gamma_5}{2} \ \ , \ \ P_{l}=\frac{1-\gamma_5}{2}. \end{equation*} Since the isospin remains a good quantum number, we can simply double the proton contribution in the full effective potential. Then, omitting the antiprotons, which would not contribute at zero temperature, the diagonalization of $H_{\ell\ell^\prime}({\bf p})$ results in an infinite tower of eigenvalues at each ${\bf p}$, which repeatedly appears for every Brillouin % Zone (BZ), ${\bf p}\to {\bf p}+\bm{K}_\ell$ ($\ell=\cdots,-1,0,1,\cdots$): \begin{equation*} \sum_{\ell^\prime}% H_{\ell\ell'}({\bf p})\psi^{(i)}_{n,{\bf p},\ell^\prime}% =E_{n,\bf{p}}^{(i)}\psi^{(i)}_{n,\bf{p},\ell}\,% \quad(n=0,1,\cdots,\infty), \end{equation*} with $i(=1, 2, 3, 4)$ labeling the internal quantum number $(p,p^*)\otimes(\uparrow,\downarrow)$, where $p^\ast$ denotes the $I_3=+1/2$ part of $N^\ast(1535)$. Equation~(\ref{eq:omegaB}) is now evaluated as \begin{align} \Omega_N=\sum_{n=0}^{\infty}% \sum_{i=1}^4\int_{-f}^f\frac{dp_z}{\pi}% \!\!\int\frac{d{\bf p}_\perp}{(2\pi)^2}% (E^{(i)}_{n,{\bf p}}-\mu_B^*)\theta(\mu_B^*-E^{(i)}_{n,{\bf p}}) \label{eq:OmegaN} \end{align} with ${\bf p}_\perp=(p_x,p_y,0)$. The following meson contributions are added to obtain the full expression of the thermodynamic potential: \begin{align} \Omega_M=&-\frac{1}{2}m_\omega^2\omega_0^2% -\frac{1}{2}\bar\mu^2\left(% \delta\sigma^2+\sigma_0^2% \right)+2\sigma_0^2f^2&\nonumber \\ &+\frac{1}{4}\lambda_4\left[% \left(\delta\sigma^2+\sigma_0^2\right)^2% +2\delta\sigma^2\sigma_0^2% \right]&\nonumber \\ &-\frac{1}{6}\lambda_6\left[% \left(\delta\sigma^2+\sigma_0^2\right)^3% +6\left(\delta\sigma^2+\sigma_0^2\right)% \delta\sigma^2\sigma_0^2\right]&\nonumber \\ &-m_\pi^2f_\pi\delta\sigma.& \label{eq:Omega} \end{align} The feedback from the explicit chiral symmetry breaking is taken care of by the last term. Assuming for the moment the existence of normal nuclear matter within the model, the model parameters other than the chiral invariant mass of the nucleon are determined from the pion decay constant $\sigma_0=f_\pi=92.2$~MeV in vacuum, the baryon and meson masses shown in Table \ref{table:input-mass}, and the normal nuclear matter properties shown in Table~\ref{table:input-normal-nuclear-density}. In the homogeneous phase the baryon masses are calculated as \begin{align} m_{\pm}=\frac{1}{2}\left[% \sqrt{\left(g_1+g_2\right)^2\sigma_0^2+4m_0^2}% \mp\left(g_1-g_2\right)\sigma_0% \right]. \end{align} The determined parameters are summarized in Table \ref{table:model-para}.
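To make the numerical procedure concrete, the following is a minimal Python/NumPy sketch (an illustration, not the production code used for the results below) of how the truncated block Hamiltonian $H_{\ell\ell'}({\bf p})$ can be assembled and diagonalized. The couplings are quoted from the $m_{0}=800$~MeV column of Table~\ref{table:model-para}, while the cutoff $\ell_{\rm max}$, the sample momentum, and the trial values of $(f,\sigma_0,\delta\sigma)$ are arbitrary choices for illustration. As a sanity check, a single block ($\ell_{\rm max}=0$) at ${\bf p}=0$ with the homogeneous condensate fed through the diagonal ($\delta\sigma$) entry reproduces the vacuum masses, roughly $939$ and $1538$~MeV, consistent with the mass formula above up to the rounding of the tabulated couplings.
\begin{verbatim}
# Minimal sketch: assemble and diagonalize the truncated BdG block
# Hamiltonian H_{ll'}(p) for the proton sector (Dirac-basis gamma matrices).
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.kron(sz, I2)                                 # gamma^0
g5 = np.kron(sx, I2)                                 # gamma^5
alpha = [np.kron(sx, s) for s in (sx, sy, sz)]       # alpha^i = gamma^0 gamma^i
Pr, Pl = (np.eye(4) + g5) / 2, (np.eye(4) - g5) / 2

def bdg_hamiltonian(p, f, sigma0, dsigma, g1, g2, m0, lmax):
    """H_{ll'}(p) for l = -lmax..lmax; returns a Hermitian matrix."""
    n = 2 * lmax + 1
    H1 = np.zeros((4 * n, 4 * n), dtype=complex)
    H2 = np.zeros_like(H1)
    for a in range(n):
        l = a - lmax
        k = (p[0], p[1], p[2] + 2.0 * f * l)         # p + K_l
        kin = sum(k[i] * alpha[i] for i in range(3))
        H1[4*a:4*a+4, 4*a:4*a+4] = kin + g1 * dsigma * g0
        H2[4*a:4*a+4, 4*a:4*a+4] = kin + g2 * dsigma * g0
        if a + 1 < n:                                 # hopping between l and l+1
            H1[4*(a+1):4*(a+2), 4*a:4*a+4] = g1 * sigma0 * g0 @ Pr
            H1[4*a:4*a+4, 4*(a+1):4*(a+2)] = g1 * sigma0 * g0 @ Pl
            H2[4*(a+1):4*(a+2), 4*a:4*a+4] = g2 * sigma0 * g0 @ Pl
            H2[4*a:4*a+4, 4*(a+1):4*(a+2)] = g2 * sigma0 * g0 @ Pr
    off = np.kron(np.eye(n), m0 * g0 @ g5)
    return np.block([[H1, off], [-off, H2]])

g1, g2, m0, fpi = 7.00, 13.5, 800.0, 92.2            # m_0 = 800 MeV column
# Homogeneous sanity check (single block, p = 0): the positive eigenvalues
# come out near 939 and 1538 MeV, i.e. the vacuum masses up to rounding.
H0 = bdg_hamiltonian((0.0, 0.0, 0.0), 0.0, 0.0, fpi, g1, g2, m0, lmax=0)
print(np.round(np.linalg.eigvalsh(H0), 1))
# A DCDW-like configuration: lowest bands |E_n(p)| from a truncated tower.
H = bdg_hamiltonian((0.0, 0.0, 50.0), 100.0, 60.0, 10.0, g1, g2, m0, lmax=8)
print(np.sort(np.abs(np.linalg.eigvalsh(H)))[:4])
\end{verbatim}
The eigenvalues obtained in this way are then inserted into Eq.~(\ref{eq:OmegaN}) and integrated over the first BZ and the transverse momenta.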
However, we will show later that the normal nuclear matter exists only as a metastable state as another type of DCDW phase dominates over it once the chiral invariant mass becomes smaller than some critical value, $m_{0}\alt 800$~MeV. \begin{table}[thbp] \caption{Values of the baryon and meson masses in units of MeV} \label{table:data_type} \centering \begin{tabular}{cccc} \hline \hline $m_+$&$m_- $&$m_\pi$&$m_\omega$\\ \ \ 939\ \ &\ \ 1535\ \ &\ \ 140\ \ &\ \ 783\ \ \\ \hline\hline \end{tabular} \label{table:input-mass} \end{table} \begin{table}[thbp] \caption{Physical inputs at normal nuclear density} \label{table:input-normal-nuclear-density} \centering \begin{tabular}{cc|c} \hline\hline \ \ Saturation density\ \ & \ \ $\rho_0$ \ \ &% \ \ 0.16 [${\rm fm}^{-3}$] \ \ \\ \ \ Binding energy \ \ & \ \ $\frac{E}{A}-939$ \ \ &% \ \ $-16$ [MeV] \ \ \\ \ \ Incompressibility \ \ &$ \ \ K \ \ $& \ \ 240 [MeV] \ \ \\ \hline\hline \end{tabular} \end{table} \begin{table}[h] \caption{Determined parameters for given values of the chiral invariant mass $m_0$ (in MeV)} \label{table:model-para} \centering \begin{tabular}{c|cccccc} \hline\hline \ $m_{0}$ \ & \ $500$ \ & \ $600$ \ & \ $700$ \ & \ $800$ \ & 900 \ \\ \hline $g_1$&$9.03$&8.49&7.82&7.00&5.97\\ $g_2$&$15.5$&15.0&14.3&13.5&12.4\\ $g_{\omega NN}$&11.3&9.13&7.30&5.66&3.52\\ $\bar\mu\,[\rm{MeV}]$&441&437&406&320&114\\ $\lambda_4$&42.2&40.6&35.7&23.2&4.47\\ $\lambda_6\cdot f_\pi^2$&17.0&15.8&14.0&8.94&0.644\\ \hline\hline \end{tabular} \end{table} \section{Phase structure}\label{sec:PhaseStructure} The phase diagram can be obtained by numerically solving the stationary conditions of the thermodynamic potential with respect to $f$, $\delta\sigma$, $\sigma_0$ and $\omega_0$. In Figure~\ref{fig:muB-mN0}, we show the phase structure in the $\mu_B$--$m_{0}$ plane. We find that there exist two kinds of DCDW phase: the ordinary DCDW phase indicated as ``DCDW'' and a new DCDW phase indicated as ``sDCDW''. \begin{figure}[tbp] \begin{center} \vspace{-80mm} \includegraphics[bb=0 0 480 600,width=10cm,clip]{DCDW-phase-muBm0.pdf} \caption{Phase structure in the $\mu_B$--$m_{0}$ plane. } \label{fig:muB-mN0} \end{center} \end{figure} In the following, we discuss the phases and the associated phase transitions in detail for two typical cases (a)~$m_{0}=800$~MeV and (b)~$m_{0}=700$~MeV. \paragraph{$m_{0}=800$~MeV.} Figure~\ref{fig:muB-pres-800-a} shows the pressure for the homogeneous ($f=0$, a dashed curve) and DCDW states ($f\ne0$, a solid curve) as a function of $\mu_B$ in this case. \begin{figure}[tbp] \begin{center} \includegraphics[width=7cm,clip]{DCDW-muB-pres-800-pointed.pdf} \end{center} \caption{ Relation between chemical potential and pressure for $m_{0}=800$~MeV. The blue solid curve and red dashed curve show the DCDW phase and the homogeneous phase, respectively. The point of phase transition from the homogeneous phase to the DCDW phase is expressed by the black square. } \label{fig:muB-pres-800-a} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=7cm,clip]{DCDW-muB-M-800.pdf} \end{center} \caption{ Relation between chemical potential and $M_1$ for $m_{0}=800$~MeV. The blue solid curve and magenta dot-dashed curve show the configuration of the extended ansatz with $\delta\sigma \neq0$ and the ordinary ansatz with $\delta\sigma = 0$, respectively. } \label{fig:muB-pres-800-b} \end{figure} These two states constitute the only two stationary solutions of $\delta\Omega=0$ for a given value of $\mu_B$.
The DCDW state found here corresponds to the ordinary DCDW state in nuclear matter \cite{Heinz:2013hza}, but here the chiral condensate $M(z)$ has a finite offset $\delta\sigma$ due to an explicit symmetry breaking. The ordinary DCDW state takes over the homogeneous nuclear matter at $\mu_B\sim1200$~MeV, denoted in the figure by the square. In what follows, for the comparison with the ordinary ansatz, we define the thermodynamic potential with the condition $\delta\sigma=0$ enforced by \begin{equation*} \Omega_0=\Omega\mid_{\delta\sigma=0}. \end{equation*} Figure~\ref{fig:muB-pres-800-b} shows one of the order parameters, $M_1$, defined by \begin{align} M_1\equiv&\frac{1}{2V}\int_{-\infty}^{\infty}d^3x% {\rm tr}\left[\braket{M}\right]&\nonumber \\ =&\begin{cases} \delta\sigma+\sigma_0 & (f=0) \\ \delta\sigma & (f\neq0)\\ \end{cases} \label{eq:M1} \end{align} This quantity may be regarded roughly as a guide for the strength of the chiral symmetry breaking. We see that the solid curve in the region of the DCDW phase is nonvanishing, indicating that the effect of explicit symmetry breaking is properly taken into account in our extended ansatz. Depicted in Figure~\ref{fig:muB-pres-800-c} is the wave number $f$ as a function of $\mu_B$. This quantity provides a guide for the translational symmetry breaking. We note that, even though $\delta\sigma$ has only a minor effect on the magnitude of $f$, it brings about a sizable shift of the critical chemical potential, making the onset point several tens of MeV lower than in the case of the ordinary ansatz. \begin{figure}[tbp] \begin{center} \includegraphics[width=7cm,clip]{DCDW-muB-f-800.pdf} \end{center} \caption{ Relation between chemical potential and $f$ for $m_{0}=800$~MeV. The blue solid curve and magenta dot-dashed curve show the configuration of the extended ansatz with $\delta\sigma \neq 0$ and the ordinary ansatz with $\delta \sigma =0$, respectively. } \label{fig:muB-pres-800-c} \end{figure} Figure~\ref{fig:muB-density-800} shows the baryon density in units of the normal nuclear density as a function of $\mu_B$. \begin{figure}[tbp] \begin{center} \includegraphics[width=7.cm,clip]{DCDW-muB-density-800.pdf} \caption{ Relation between chemical potential and baryon number density for $m_{0}=800$~MeV. The blue solid curve and magenta dot-dashed curve show the configuration of the extended ansatz with $\delta \sigma \neq0$ and the ordinary ansatz with $\delta \sigma=0$, respectively. } \label{fig:muB-density-800} \end{center} \end{figure} We read the DCDW onset density as $\rho_B^c\sim 4.7\rho_0$. This value is larger than the one in Ref.~\cite{Heinz:2013hza}. We would like to stress that the stable DCDW phase, on the other hand, appears already at $\rho_B \sim 4.8\rho_0$, while the stable phase appears at $\rho_B \sim 10.4\rho_0$ in Ref.~\cite{Heinz:2013hza}. This means that the magnitude of the density jump from the uniform state to the DCDW phase is smaller and therefore the strength of the first order phase transition is weaker in our case. We think that one of the reasons is that the chiral invariant mass is independent of $\mu_B$, while it does in the model used in Ref.~\cite{Heinz:2013hza}. \paragraph{$m_{0}=700$~MeV.} Next, we show the $\mu_B$ dependence of the pressure for $m_{0}=700$~MeV in Fig.~\ref{fig:muB-pres-700-a}. \begin{figure}[tbp] \includegraphics[width=7.cm,clip]{DCDW-muB-pres-700-pointed.pdf} \caption{ Relation between chemical potential and pressure for $m_{0}=700$~MeV.
The blue solid curve and red dashed curve show the DCDW phase and the homogeneous phase, respectively. The green dotted curve shows the sDCDW phase, which is another solution with $f\neq0$. The point of phase transition from the homogeneous phase to the DCDW phase is expressed by the black square, while that from the sDCDW phase to the homogeneous phase is indicated by the black circle. } \label{fig:muB-pres-700-a} \end{figure} We first notice that in this case the normal nuclear matter exists only as a metastable state. This is due to the emergence and the stabilization of a new DCDW state:~% in addition to the ordinary DCDW phase, we find another solution with $f\neq0$ which is shown by the dotted curve in Fig.~\ref{fig:muB-pres-700-a}. To distinguish the two solutions with $f\neq0$, we call this phase the shifted DCDW (sDCDW) phase for a reason described shortly. We plot the $\mu_B$ dependence of $M_1$ defined in Eq.~(\ref{eq:M1}) in Fig.~\ref{fig:muB-pres-700-b}, and that of $f$ in the DCDW phase in Fig.~\ref{fig:muB-pres-700-c}. \begin{figure}[tbp] \includegraphics[width=7.cm]{DCDW-muB-M-700.pdf} \caption{ Relation between chemical potential and $M_1$ for $m_{0}=700$~MeV. The blue solid curve and magenta dot-dashed curve show the configuration of the extended ansatz with $\delta \sigma \neq 0$ and the ordinary ansatz with $\delta\sigma =0$, respectively. } \label{fig:muB-pres-700-b} \end{figure} \begin{figure}[tbp] \includegraphics[width=7.cm,clip]{DCDW-muB-f-700.pdf} \caption{ Relation between chemical potential and wavenumber $f$ for $m_{0}=700$~MeV. The blue solid curve shows the solution under the extended ansatz with $\delta\sigma\ne 0$, while the magenta dot-dashed curve corresponds to the solution with the ordinary ansatz, $\delta\sigma=0$. } \label{fig:muB-pres-700-c} \end{figure} From the former, we see that, going up in density, the chiral symmetry is restored via several steps. From the latter, we clearly see that there are two regions of the DCDW phase: the sDCDW state with $f \sim 50\,$--$\,100$~MeV for $900\lesssim \mu_B \lesssim 1020$~MeV, and the ordinary DCDW state with $f \gtrsim220$~MeV for $\mu_B \gtrsim 1070$~MeV. Figure~\ref{fig:muB-pres-700-b} shows that the value of $M_1$ for $\mu_B\gtrsim 1070$~MeV is less than $10$~MeV, which is close to the one in the DCDW phase shown in Fig.~\ref{fig:muB-pres-800-b}, and so is the value of $f$ ($f \gtrsim220$~MeV). In fact, this phase is smoothly connected to the ordinary DCDW phase realized for $m_{0} = 800$~MeV as is seen from the phase diagram Fig.~\ref{fig:muB-mN0}. On the other hand, Fig.~\ref{fig:muB-pres-700-c} shows that the value of $f$ ($f\sim 50\,$--$\,100$~MeV) in the sDCDW phase is less than half of the value in the ordinary DCDW phase shown in Fig.~\ref{fig:muB-pres-800-c}. Furthermore, Figure~\ref{fig:muB-pres-700-b} shows that the value of $M_1$ for $900 \lesssim \mu_B \lesssim 1020$~MeV is $M_1 \sim 50\,$--$\,70$~MeV, which is much larger than $10$~MeV in the ordinary DCDW phase shown in Fig.~\ref{fig:muB-pres-700-b}. Since $M_1 = \delta \sigma$ in the DCDW phase, the large difference of $M_1$ is caused by the difference of the values of $\delta \sigma$: $\delta \sigma \lesssim 10$~MeV in the ordinary DCDW phase realized for $\mu_B \gtrsim 1070$~MeV, while $\delta \sigma \sim 50$~MeV for $900 \lesssim \mu_B \lesssim 1020$~MeV. This implies that the center of the chiral spiral in the $(\sigma,\pi^0)$ chiral plane, which is near the origin in the ordinary DCDW phase, is shifted to the $\sigma$ direction.
That is why we called the phase realized for $900 \lesssim \mu_B \lesssim 1020$~MeV the shifted-DCDW (sDCDW) phase. We would like to stress that, in contrast to the ordinary DCDW phase, the solution of the stationary condition for $\delta\sigma$ does not become zero even in the chiral limit. The relation between chemical potential and baryon number density is shown in Fig.~\ref{fig:muB-density-700}. From the figure, we see that the QCD phase next to the vacuum is the sDCDW phase for $m_{0}=700$~MeV. \begin{figure}[tbp] \begin{center} \vspace{-80mm} \includegraphics[bb=0 0 480 600,width=10cm,clip]{DCDW-muB-density-700.pdf} \caption{ Relation between chemical potential and baryon number density for $m_{0}=700$~MeV. The blue solid curve and magenta dot-dashed curve show the configuration of the extended ansatz with $\delta\sigma \neq0$ and the ordinary ansatz with $\delta\sigma=0$, respectively. } \label{fig:muB-density-700} \end{center} \end{figure} \section{A summary and discussions}\label{sec:summary} We studied the inhomogeneous phase structure in nuclear matter using a nucleon-based model with parity doublet structure where $N^\ast(1535)$ is introduced as the chiral partner of $N(939)$. Adopting the extended ansatz, Eq.~(\ref{eq:ansatz}), we studied the effect of $\delta\sigma$ and found that, depending on the value of the chiral invariant mass $m_{0}$, the sDCDW phase exists in addition to the ordinary DCDW phase. In the ordinary DCDW phase for large $m_{0}$, the space average of the chiral condensate $M_1$, Eq.~(\ref{eq:M1}), becomes less than 10~MeV, implying that this phase is smoothly connected to the DCDW phase obtained with the familiar ansatz $\delta\sigma=0$. For $m_{0}=800$~MeV, the critical density from the homogeneous phase to the DCDW phase is $4.7\rho_0$. The wave number $f$ has a value of $200$--$300$~MeV, which is in fair agreement with the result obtained in Ref.~\cite{Heinz:2013hza}. On the other hand, when the chiral invariant mass is $m_{0}\lesssim780$~MeV, the sDCDW phase appears at low density. This phase is characterized by a smaller wave number $f$ and a large shift of the chiral condensate, $\delta\sigma$. It is noteworthy that it is not the effect of explicit chiral symmetry breaking but the dynamical symmetry breaking that produces this large shift of the chiral condensate. So we expect that this sDCDW phase survives in the chiral limit. In the parameter range of the chiral invariant mass where the sDCDW phase is stabilized, the model fails to realize normal nuclear matter, as the pressure of the sDCDW phase is so strong that it diminishes the liquid-gas phase transition structure. Then, one might think that the present model for $m_{0}$ less than 780~MeV is ruled out. However, the chiral invariant mass $m_{0}$ can have density dependence as in Ref.~\cite{Heinz:2013hza}, which shows that $m_{0}$ decreases with increasing density. In such a case, the sDCDW phase may be realized in high density nuclear matter in the real world. Exploring the elementary excitations in the sDCDW phase deserves further investigation in the future. In the ordinary DCDW phase, apparently both the chiral symmetry and the translational invariance along the $z$-direction are spontaneously broken, but a particular combination of them is left invariant \cite{Lee:2015bva,Hidaka:2015xza}. As a result, the number of spontaneously broken generators is three, which implies that no extra Nambu-Goldstone boson appears other than the three pions. In contrast, the combination is also spontaneously broken in the sDCDW phase.
Then, we expect that a phonon mode appears in the sDCDW phase, which may signal the phase. On the other hand, heavy hadrons may also serve as some interesting hard probes in the sDCDW background \cite{Suenaga:2015daa}. Another interesting extension of the current work is to include an external magnetic field. For quark matter, several studies were already devoted to the topic of inhomogeneous phases under a magnetic field \cite{Frolov:2010wn,Nishiyama:2015fba,Yoshiike:2015tha,% Cao:2016fby,Yoshiike:2015wud,Abuki:2016zpv}. In \cite{Nishiyama:2015fba}, a new DCDW phase was found to occupy the low density region in an arbitrarily small magnetic background. They called the phase ``weak'' DCDW since it has a smaller value of $f$. Since they did not consider the possible shift of the chiral condensate, $\delta\sigma$, an analysis within the nucleon-based model including both $\delta\sigma$ and a magnetic field would be an interesting subject worth exploring. \smallskip \noindent {\it Acknowledgement.}~This work was partially supported by JSPS KAKENHI Grant Numbers JP16K05346 (H.A.) and 16K05345 (M.H.).
1,108,101,562,675
arxiv
\section*{Acknowledgements} The authors acknowledge K. Ishii and K. Tomiyasu for their useful remarks. The experiments at the Materials and Life Science Experimental Facility at J-PARC were performed under a user program (Proposal No. 2013A0052). M.F. is supported by a Grant-in-Aid for Scientific Research (A) (16H02125). \begin{figure}[t] \begin{center} \includegraphics[width=75mm]{Fig3_dispersion_v1.pdf} \caption{(Color online)~Momentum dependence of (a) the peak position and (b) the intensity in Sr$_3$Ir$_2$O$_7$. The gray lines are the results from RIXS [\ref{Kim2012}]. The intensity is not corrected for $|{\rm f}({\bf Q})|^2$ or the absorption coefficient.} \label{dispersion} \end{center} \end{figure}
1,108,101,562,676
arxiv
\section{Classification Framework Generalization} \label{app: binary} While outside the scope of our work, we note that there are two natural ways to extend our approach to a multiclass setting with one sensitive class. Let $\classes = \{1,2,\dots,c\}$, with \cs~being the sensitive class for which we aim to generate certificates. One approach involves a two-step architecture, where a feature-convex classifier first distinguishes between the sensitive \cs~and all other classes $\{2,3,\dots,c\}$ and an arbitrary second classifier distinguishes between the classes $\{2,3,\dots,c\}$. The first classifier could then be used to generate \cs~certificates, as described in Section~\ref{sec: certified_robustness}. Alternatively, we could define $\g$ to map directly to $c$ output logits, with the first logit convex in the input and the other logits concave in the input. Concavity can be easily achieved by negating the output of a convex network. Let the $i$th output logit then be denoted as $\g_i$ and consider an input $x$ where the classifier predicts \cs~(i.e., $\g_1(\feat(x)) \geq \g_i(\feat(x))$ for all $i \in\{2,3,\dots,c\}$); since the difference of a convex and a concave function is convex, we can generate a certificate for the nonnegativity of each convex decision function $\g_1\circ\feat - \g_i\circ\feat$ around $x$. Minimizing these certificates over all $i \in\{2,3,\dots,c\}$ yields a robustness certificate for the sensitive class. Note that $\g$ mapping to $2$ or more logits, all convex in the input, would not yield any tractable certificates. This is because the classifier decision function would now be the difference of two convex functions and have neither convex nor concave structure. We therefore choose to instantiate our binary classification networks with a single convex output logit for clarity. \subsection{Malimg Multiclass Extension} \label{app: malimg_muli} As a proof of concept, we provide a concrete realization of the first scheme above on the Malimg dataset. Namely, consider the setting where we want to distinguish between ``clean'' binaries and $24$ classes of malware. A malware designer seeks to maliciously perturb the bytes in their binary to fool a classifier into falsely predicting that the malware is ``clean.'' We therefore consider a cascading architecture where first a feature-convex classifier answers the ``clean or malware'' question, and then a subsequent classifier (not necessarily feature-convex) predicts the particular class of malware in the case that the feature-convex classifier assigns a ``malware'' prediction. Note that, in the initial step, we can either certify the ``clean'' binaries or the collection of all $24$ malware classes, simply by negating the feature-convex classifier output logit. We logically choose to certify the malware classes as done in our experiments of Section~\ref{sec: experiments}; these certificates provide guarantees against a piece of malware going undetected. We use the same feature-convex architecture and training details as described in Appendix~\ref{app: experimental_setup}. For the cascaded malware classifier, we use a ResNet-18 architecture trained with Adam for $150$ epochs with a learning rate of $10^{-3}$. The confusion plot for the multiclass classifier is provided in Figure~\ref{fig: malimg_multi_confusion}, with an overall accuracy of $96.5\%$. With the exception of a few classes that are challenging to distinguish, the classifier achieves reasonable performance despite the unbalanced class sizes.
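For concreteness, the prediction logic of this cascading scheme can be sketched as follows; the function and variable names, and the convention that a positive convex logit indicates the certified ``malware'' side, are illustrative assumptions rather than a description of the exact implementation.
\begin{verbatim}
import torch

@torch.no_grad()
def cascaded_predict(x, feature_convex, malware_resnet):
    """Two-step prediction on a single (batched) image: a certified
    malware-vs-clean gate, followed by a standard classifier over the 24
    malware families.  `feature_convex` is assumed to return one scalar logit
    that is convex in the input, positive on the certified (malware) side."""
    gate_logit = feature_convex(x).squeeze()
    if gate_logit.item() <= 0.0:
        return "clean"
    family = malware_resnet(x).argmax(dim=-1).item()
    return f"malware family {family}"
\end{verbatim}
Only the gate carries certificates: the downstream family label may change under perturbation without affecting the malware-versus-clean guarantee.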
\begin{figure}[ht] \centering \resizebox{0.7\textwidth}{!}{ \includegraphics{figs_gen/malimg-multiclass/confusion.pdf} } \caption{ \label{fig: malimg_multi_confusion} The row-normalized confusion plot for the Malimg multiclass classifier. The overall accuracy of the composite classifier is $96.5\%$. The various malware classes ($1$-$24$) are circumscribed with a black rectangle. These are certified against the class of ``clean'' binaries. See Section~\ref{sec: experiments} for more details on the mock clean binaries. } \end{figure} Figure~\ref{fig: malimg_multi_violin} visualizes the distribution of certified radii for the four most common malware classes in the dataset, excluding the ``Yuner.A'' class, which featured duplicated images. Note that certification performance varies between classes, with high correlation across different norms for a particular malware class. Classes which tend to have larger certificates can be interpreted as clustering further away from the clean binaries, requiring larger perturbations to fool the classifier. \begin{figure}[ht] \centering \resizebox{0.8\textwidth}{!}{ \includegraphics{figs_gen/malimg-multiclass/violin.pdf} } \caption{ \label{fig: malimg_multi_violin} Certified radii distributions for four malware classes in the Malimg dataset. } \end{figure} \section{Conclusion} \label{sec: conclusions} This work introduces the problem of asymmetric certified robustness, which we show naturally applies to a number of practical adversarial settings. We define feature-convex classifiers in this context and theoretically characterize their representation power from geometric, approximation theoretic, and statistical lenses. Closed-form sensitive-class certified robust radii for the feature-convex architecture are provided for arbitrary $\ell_p$-norms. We find that our $\ell_1$-robustness certificates in particular match or outperform those of the current state-of-the-art methods, with our $\ell_2$- and $\ell_{\infty}$-radii also competitive with methods tailored for a particular norm. Unlike smoothing and bound propagation baselines, we accomplish this with a completely deterministic and near-immediate computation scheme. We also show theoretically that significant performance improvements should be realizable for natural image datasets such as CIFAR-10 cats-versus-dogs. Possible directions for future research include bridging the gap between the theoretical power of feature-convex models and their practical implementation, as well as exploring more sophisticated choices of the feature map $\feat$. \section{Experiments} \label{sec: experiments} We first describe our baseline methods, feature-convex architecture, and class accuracy balancing procedure. Our results are then reported across a variety of datasets, with further experimental setup details deferred to Appendix~\ref{app: experimental_setup}. \inlinesubsectiontight{Baseline methods} We consider several state-of-the-art randomized and deterministic baselines. For all datasets, we evaluate the randomized smoothing certificates of \citet{yang2020randomized} for the Gaussian, Laplacian, and uniform distributions trained with noise augmentation (denoted RS Gaussian, RS Laplacian, and RS Uniform, respectively), \hl{as well as the deterministic bound propagation framework $\alpha,\beta$-CROWN \citep{wang2021beta}, which is scatter plotted since certification is only reported as a binary answer at a given radius}. We also evaluate, when applicable, deterministic certified methods for each norm ball.
These include the splitting-noise $\ell_1$-certificates from \citet{levine2021improved} (denoted Splitting), the orthogonality-based $\ell_2$-certificates from \citet{trockman2021orthogonalizing} (denoted Cayley), and the $\ell_{\infty}$-distance-based $\ell_{\infty}$-certificates from \citet{zhang2021boosting} (denoted $\ell_\infty$-Net). The last two deterministic methods are not evaluated on the large-scale Malimg dataset due to their prohibitive runtime. Furthermore, the $\ell_{\infty}$-Net was unable to significantly outperform a random classifier on the CIFAR-10 cats-dogs dataset, and is therefore only included in the MNIST 3-8 and Fashion-MNIST shirts experiments. \inlinesubsectiontight{Feature-convex architecture} Our simple experiments (MNIST 3-8 and Malimg) require no feature map to achieve high accuracy ($\feat=\id$); the Fashion-MNIST shirts dataset also benefited minimally from the feature map inclusion. For the CIFAR-10 cats-dogs task, we let our feature map be the concatenation $\feat(x)=(x-\mu,|x-\mu|)$, where $\mu$ is the channel-wise dataset mean (e.g., size $3$ for an RGB image) broadcasted to the appropriate dimensions. Our MNIST 3-8 and Malimg architecture then consists of a simple two-hidden-layer input-convex multilayer perceptron with $(n_1,n_2)=(200,50)$ hidden features, $\relu$ nonlinearities, and passthrough weights. For the more challenging datasets, we use various instantiations of a convex ConvNet where successive layers have a constant number of channels and image size. This allows for the addition of identity residual connections to each convolution and lets us remove the passthrough connections altogether. Convexity is enforced by projecting relevant weights onto the nonnegative orthant after each epoch and similarly constraining BatchNorm $\gamma$ parameters to be positive. We initialize positive weight matrices to be drawn uniformly from the interval $[0, \epsilon]$, where $\epsilon=0.003$ for linear weights and $\epsilon=0.005$ for convolutional weights. Jacobian regularization is also used to improve our certified radii \citep{hoffman2019robust}. \inlinesubsectiontight{Class accuracy balancing} Since we consider \emph{asymmetric} certified robustness, care must be taken to ensure a fair comparison of \cs~certificates. Indeed, a constant classifier that always outputs \cs~would achieve perfect \cs~accuracy and infinite \cs~certified radii---yet it would not be a particularly interesting classifier as its accuracy on \cns~inputs would be poor. We therefore post-process the decision threshold of each classifier such that the clean \cs~and \cns~accuracies are equivalent, allowing for a direct comparison of the certification performance for \cs. \subsection{Datasets} \label{sec: datasets} We now introduce the various datasets considered in this work. MNIST 3-8 and Malimg are relatively simple classification problems where near-perfect classification accuracy is attainable; the Malimg dataset falls in this category despite containing relatively large images. Our more challenging settings consist of a Fashion-MNIST shirts dataset as well as CIFAR-10 cats-versus-dogs dataset. Data augmentation details are deferred to Appendix~\ref{app: data}. \inlinesubsectiontight{MNIST 3-8} For our MNIST binary classification problem, we choose the problem of distinguishing between $3$ and $8$ \citep{lecun1998mnist}. These were selected as $3$ and $8$ are generally more visually similar and challenging to distinguish than other digit pairs. 
Images are $28 \times 28$ pixels and greyscale. \inlinesubsectiontight{Malimg} Our malware classification experiments use greyscale, bytewise encodings of raw malware binaries \citet{nataraj2011malware}. Each image pixel corresponds to one byte of data, in the range of $0$--$255$, and successive bytes are added horizontally from left to right on the image until wrapping at some predetermined width. We use the extracted malware images from the seminal dataset \citet{nataraj2011malware}, padding and cropping images to be $512 \times 512$. Note that licensing concerns generally prevent the distribution of ``clean'' executable binaries. As this work is focused on providing a general approach to robust classification, in the spirit of reproducibility we instead report classification results between different kinds of malware. Namely, we distinguish between malware from the most numerous ``Allaple.A'' class ($2949$ samples) and an identically-sized random subset of all other $24$ malware classes. To simulate a scenario where we must provide robustness against evasive malware, we provide certificates for the latter collection of classes. \inlinesubsectiontight{Fashion-MNIST shirts} The hardest classes to distinguish in the Fashion-MNIST dataset are T-shirts vs shirts, which we take as our two classes \citep{kayed2020classification,xiao2017fashion}. Images are $28 \times 28$ pixels and greyscale. \inlinesubsectiontight{CIFAR-10 cats-dogs} We take as our two CIFAR-10 classes the cat and dog classes since they are relatively difficult to distinguish \citep{giuste2020cifar, liu2018unsupervised, ho2018cifar10}. Other classes (e.g., ships) are typically easier to classify since large background features (e.g., blue water) are strongly correlated with the target label. Samples are $32 \times 32$ RGB images. \subsection{Discussion} \label{sec: discussion} Experimental results for $\ell_1$-norm balls are reported in Figure~\ref{fig: results_l1}, where our feature-convex classifier radii are similar or better than all other baselines across all datasets. Due to space constraints, we defer the corresponding plots for $\ell_2$- and $\ell_{\infty}$-norm balls to Appendix~\ref{app: othernorms}, \hl{where our certified radii are not dominant but still comparable to methods tailored specifically for a particular norm. We accomplish this while maintaining completely deterministic, closed-form certificates with orders-of-magnitude faster computation time than competitive baselines}. \begin{figure*}[ht] \centering \input{figs_combined/l1.tex} \caption{ \Cs~certified radii curves for the $\ell_1$-norm. Note the $\log$-scale on the Malimg plot. } \label{fig: results_l1} \end{figure*} For the MNIST 3-8 and Malimg datasets (Figures~\ref{fig: results_l1_mnist} and \ref{fig: results_l1_malimg}), all methods achieve high clean test accuracy. Our $\ell_1$-radii scale exceptionally well with the dimensionality of the input, with two orders of magnitude improvement over smoothing baselines for the Malimg dataset. The Malimg certificates in particular have an interesting concrete interpretation. As each pixel corresponds to one byte in the original malware file, an $\ell_1$-certificate of radius $r$ provides a robustness certificate for up to $r$ bytes in the file. Namely, even if a malware designer were to arbitrarily change $r$ malware bytes, they would be unable to fool our classifier into returning a false negative. 
This may not have an immediate practical impact as small semantic changes (e.g., reordering unrelated instructions) could induce large $\ell_p$-norm shifts. However, as randomized smoothing was extended from pixel-space to semantic transformations \citep{li2021tss}, we expect that similar extensions can produce practical certifiably robust malware classifiers. While our method produces competitive robustness certificates for $\ell_2$- and $\ell_{\infty}$-norms (Appendix~\ref{app: othernorms}), it offers the largest improvement for $\ell_1$-certificates in the high-dimensional image spaces considered. This is likely due to the characteristics of the subgradient dual norm factor in the denominator of Theorem~\ref{thm: false_negative_robustness}. The dual of the $\ell_1$-norm is the $\ell_{\infty}$-norm, which selects the largest magnitude element in the gradient of the output logit with respect to the input pixels. As the input image size grows, it is natural for the classifier to become less dependent on any one specific pixel, shrinking the denominator in Theorem~\ref{thm: false_negative_robustness}. Conversely, when certifying for the $\ell_{\infty}$-norm, one must evaluate the $\ell_1$-norm of the gradient, which scales proportionally to the input size. Nevertheless, we find in Appendix~\ref{app: othernorms} that our $\ell_2$- and $\ell_{\infty}$-radii are generally comparable to those of the baselines while maintaining speed and determinism. Our feature-convex neural network certificates are almost immediate, requiring just one forward pass and one backward pass through the network. This certification procedure requires less than $10$ milliseconds per sample on our hardware and scales well with network size. This is substantially faster than the runtime for randomized smoothing, which scales from several seconds per CIFAR-10 image to minutes for an ImageNet image \citep{cohen2019certified}. The only method that rivaled our $\ell_1$-norm certificates was $\alpha,\beta$-CROWN; however, such bound propagation frameworks suffer from exponential computational complexity in network size, and even for small CIFAR-10 ConvNets typically take on the order of minutes to certify nontrivial radii. Unlike the randomized smoothing baselines, our method is completely deterministic in both prediction and certification. Randomized prediction poses a particular problem for randomized smoothing certificates: even for a perturbation of a ``certified'' magnitude, repeated evaluations at the perturbed point will eventually yield misclassification for any nontrivial classifier. While the splitting-based certificates of \citet{levine2021improved} are deterministic, they only certify quantized (not continuous) $\ell_1$-perturbations, which scale poorly to $\ell_2$- and $\ell_{\infty}$-certificates (Appendix~\ref{app: othernorms}). Furthermore, the certification runtime grows linearly in the smoothing noise $\sigma$; evaluating the certified radii at the $\sigma$ used for the Malimg experiment takes several minutes per sample. Ablation tests examining the impact of Jacobian regularization, the feature map $\feat$, and data augmentation are included in Appendix~\ref{app: ablation}. We illustrate the certification performance of our method across all combinations of MNIST classes in Appendix~\ref{app: mnist_sweep}.
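To emphasize how lightweight this certification is, the following is a minimal sketch of the forward-plus-backward computation. The helper name, the zero decision threshold, and the omission of the feature-map constant factors appearing in Theorem~\ref{thm: false_negative_robustness} are simplifying assumptions; the sketch only illustrates the logit-over-dual-norm structure discussed above.
\begin{verbatim}
import torch

def certified_radius(model, x, p=1.0):
    """Sketch of the closed-form sensitive-class certificate: one forward and
    one backward pass.  Assumes `model` maps an input to a single scalar logit
    (positive => sensitive class) and returns a radius of the simplified form
    logit / ||grad||_q, with q the dual exponent of p."""
    x = x.detach().clone().requires_grad_(True)
    logit = model(x).squeeze()
    if logit.item() <= 0.0:
        return 0.0            # only sensitive-class predictions are certified
    (grad,) = torch.autograd.grad(logit, x)
    if p == 1.0:
        q = float("inf")
    elif p == float("inf"):
        q = 1.0
    else:
        q = p / (p - 1.0)
    return (logit / grad.flatten().norm(q)).item()
\end{verbatim}
Under the same simplification, a decision threshold shifted by the class-balancing procedure would simply replace the numerator by the logit margin to that threshold.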
\begin{toappendix} \section{Experimental Setup} \label{app: experimental_setup} We include a detailed exposition of our experimental setup in this section, beginning with general details on our choice of epochs and batch size. We then discuss baseline methods, architecture choices for our method, class balancing, and data processing. \inlinesubsectiontight{Epochs and batch size} Exempting the randomized smoothing baselines, for the MNIST 3-8 and Fashion-MNIST shirts experiments, we use $60$ epochs for all methods. This is increased to $150$ epochs for the Malimg dataset and CIFAR-10 cats-dogs experiments. The batch size is $64$ for all datasets besides the $512 \times 512$ Malimg dataset, where it is lowered to $32$. \hl{ To ensure a fair comparison, the randomized smoothing baseline epochs are scaled larger than the aforementioned methods according to the noise value specified in the sweeps in Section~\ref{app: smoothing_noise}. The final epochs and smoothing noise values used are reported in Table~\ref{tab: smoothparams}. Note that as classifiers are typically more robust to the noise from splitting smoothing, larger values of $\sigma$ are used for only this smoothing method in the MNIST 3-8 and Malimg datasets. For Malimg, we find experimentally that even noise values of up to $\sigma=100$ are tractable for the splitting method, outside the sweep range considered in Section~\ref{app: smoothing_noise}. As verification at that $\sigma$ already takes several minutes per sample and runtime scales linearly with $\sigma$, we do not explore larger values of $\sigma$. } \begin{table}[ht] \centering \caption{Randomized smoothing final noise and epoch hyperparameters.} \hl{ \begin{tabular}{lcc} \toprule Dataset & Laplacian, Uniform, Gaussian Parameters & Splitting Parameters \\ \midrule MNIST 3-8 & $(\sigma,n)=(0.75,60)$ & $(\sigma,n)=(0.75\cdot 4,60\cdot 4)$ \\ Malimg & $(\sigma,n)=(3.5\cdot 4,150\cdot 4)$ & $(\sigma,n)=(100,150\cdot 4)$ \\ Fashion-MNIST shirts & $(\sigma,n)=(0.75,60)$ & $(\sigma,n)=(0.75,60)$ \\ CIFAR-10 cats-dogs & $(\sigma,n)=(0.75\cdot 2,600\cdot 2)$ & $(\sigma,n)=(0.75\cdot 2,600\cdot 2)$ \\ \bottomrule \end{tabular} } \label{tab: smoothparams} \end{table} \inlinesubsectiontight{Hardware} All experiments were conducted on a single Ubuntu 20.04 instance with an Nvidia RTX A6000 GPU. Complete reproduction of the experiments takes approximately $0.08$ GPU-years. \subsection{Baseline Methods} \label{app: baseline} We provide additional details on each of the baseline methods below. \inlinesubsectiontight{Randomized smoothing} Since the certification runtime of randomized smoothing is large, especially for the $512 \times 512$ pixel Malimg images, we evaluate the randomized smoothing classifiers over $10^4$ samples and project the certified radius to $10^5$ samples by scaling the number fed into the Clopper-Pearson confidence interval, as described in \cite{cohen2019certified}. This allows for a representative and improved certified accuracy curve while dramatically reducing the method's runtime. We take an initial guess for the certification class with $n_0=100$ samples and set the incorrect prediction tolerance parameter $\alpha=0.001$. For CIFAR-10 we use a depth-40 Wide ResNet base classifier, mirroring the choices from \citet{cohen2019certified, yang2020randomized}; for all other datasets we use a ResNet-18. 
All networks are trained using SGD with an initial learning rate of $0.1$, Nesterov momentum of $0.9$, weight decay of $10^{-4}$, and cosine annealing scheduling as described in \citet{yang2020randomized}. Final smoothing noise values are selected as in Table~\ref{tab: smoothparams}, and are determined from the noise level comparison sweeps in Appendix~\ref{app: smoothing_noise}. \inlinesubsectiontight{Splitting noise} As this method is a deterministic derivative of randomized smoothing, it avoids the many aforementioned hyperparameter choices. We use the same architectures described above for the other randomized smoothing experiments. \inlinesubsectiontight{Cayley convolutions} To maintain consistency, we use a two-hidden-layer multilayer perceptron with $(n_1,n_2)=(200,50)$ hidden features, CayleyLinear layers, and GroupSort activations for the MNIST experiment. For the more challenging Fashion-MNIST and CIFAR-10 experiments, we use the ResNet-9 architecture implementation from \cite{trockman2021orthogonalizing}. Following the authors' suggestions, we train these networks using Adam with a learning rate of $0.001$. \inlinesubsectiontight{\texorpdfstring{$\ell_\infty$}{l\_infinity}-distance nets} As the architecture of the $\ell_{\infty}$-distance net \citep{zhang2021boosting} is substantially different from traditional architectures, we use the authors' $5$-layer MNIST/Fashion-MNIST architecture and $6$-layer CIFAR-10 architecture with $5120$ neurons per hidden layer. Unfortunately, the classification accuracy on the CIFAR-10 cats-dogs experiment remained near $50\%$ throughout training. This was not the case when we tested easier classes, such as planes-versus-cars, where large features (e.g., blue sky) can be used to discriminate. We therefore only include this model in the MNIST and Fashion-MNIST experiments, and use the training procedure directly from the aforementioned paper's codebase. \inlinesubsectiontight{\texorpdfstring{$\alpha,\beta$}{alpha-beta}-CROWN} \hl{As $\alpha,\beta$-CROWN certification time scales exponentially with the network size, we keep the certified networks small in order to improve the certification performance of the baseline. For all datasets, we train and certify a one-hidden-layer network with $200$ hidden units and $\relu$ activations. All networks are adversarially trained for a $\ell_{\infty}$-perturbation radius starting at $0.001$ and linearly scaling to the desired $\epsilon$ over the first $20$ epochs, as described in \citet{kayed2020classification}, which trained the models used in \citet{wang2021beta}. The desired final $\epsilon$ is set to $0.3$ for MNIST, $0.1$ for Fashion-MNIST and Malimg, and $2/255$ for CIFAR-10. The adversarial training uses a standard PGD attack with $50$ steps and step size $2 \epsilon / 50$. Other optimizer training details are identical to \citet{wang2021beta}. The branch-and-bound timeout is set to $30$ seconds to maintain comparability to other methods, and robustness is evaluated over a dataset-dependent range of discrete radii for each adversarial norm.} \subsection{Convex ConvNet Architecture and Training} \label{app: architecture} The convex ConvNet architecture consists of a sequence of convolutional layers, BatchNorms, and $\relu$ nonlinearities. The first convolutional layer is unconstrained, as the composition of a convex function with an affine function is still convex \citep{amos2017input}. 
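To make the convexity constraint concrete, the following is a minimal sketch of the weight projection referred to in Section~\ref{sec: experiments} and detailed in the rest of this subsection; the module traversal, the attribute used to exempt the first convolution, and the small positive floor for the BatchNorm $\gamma$ parameters are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch

def project_convex_constraints(model, bn_floor=1e-6):
    """Sketch: clamp constrained weights onto the nonnegative orthant after an
    optimizer update (or once per epoch).  Everything except the first, signed
    convolution is constrained; BatchNorm gammas are kept strictly positive."""
    with torch.no_grad():
        for module in model.modules():
            if getattr(module, "is_first_conv", False):
                continue                      # first layer may keep signed weights
            if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
                module.weight.clamp_(min=0.0)   # nonnegativity preserves convexity
            elif isinstance(module, torch.nn.BatchNorm2d):
                if module.weight is not None:
                    module.weight.clamp_(min=bn_floor)  # keep the scaling positive
\end{verbatim}
In training, such a projection would be invoked right after each optimizer step (or at the end of each epoch, matching the description in Section~\ref{sec: experiments}).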
All subsequent convolutions and the final linear readout layer are uniformly initialized from some small positive weight interval ($[0,0.003]$ for linear weights, $[0,0.005]$ for convolutional weights) and projected to have nonnegative weights after each gradient step. We found this heuristic initialization choice helps to stabilize network training, as standard Kaiming initialization assumptions are violated when weights are constrained to be nonnegative instead of normally distributed with mean zero. More principled weight initialization strategies for this architecture would form an exciting area of future research. Before any further processing, inputs into the network are fed into an initial BatchNorm---this enables flexibility with different feature augmentation maps. Since the first convolutional layer is permitted negative weights, we generally attain better performance by enlarging the first convolution kernel size (see Table~\ref{tab: architecture}). For subsequent convolutions, we set the stride to $1$, the input and output channel counts to the output channel count from the first convolution, and the padding to half the kernel size, rounded down. This ensures that the output of each of these deeper convolutions has equivalent dimension to its input, allowing for an identity residual connection across each convolution. If $C_i(z)$ is a convolutional operation on a hidden feature $z$, this corresponds to evaluating $C_i(z) + z$ instead of just $C_i(z)$. The final part of the classifier applies MaxPool and BatchNorm layers before a linear readout layer with output dimension $1$. See Figure~\ref{fig: convex_convnet} for a diagram depicting an exemplar convex ConvNet instantiation. For training, we use a standard binary cross entropy loss, optionally augmented with a Jacobian regularizer on the Frobenius norm of the network Jacobian scaled by $\lambda > 0$ \citep{hoffman2019robust}. As our certified radii in Theorem~\ref{thm: false_negative_robustness} vary inversely to the norm of the Jacobian, this regularization helps boost our certificates at a minimal loss in clean accuracy. We choose $\lambda=0.0075$ for CIFAR-10, $\lambda=0.075$ for Malimg and $\lambda=0.01$ for MNIST and Fashion-MNIST. Further ablation tests studying the impact of regularization are reported in Appendix~\ref{app: ablation}. All feature-convex networks are trained using SGD with a learning rate of $0.001$, momentum $0.9$, and exponential learning rate decay with $\gamma=0.99$. \begin{table}[ht] \centering \caption{Convex ConvNet architecture parameters. $C_1$ denotes the first convolution, with $C_{2,\dots}$ denoting all subsequent convolutions. The ``Features'' column denotes the number of output features of $C_1$, which is held fixed across $C_{2,\dots}$. The ``Pool'' column refers to the size of the final MaxPool window before the linear readout layer. The MNIST and Malimg architectures are simple multilayer perceptrons and are therefore not listed here.} \begin{tabular}{lccccccc} \toprule Dataset & Features & Depth & $C_1$ size & $C_1$ stride & $C_1$ dilation & $C_{2,\dots}$ size & Pool \\ \midrule Fashion-MNIST & 4 & 3 & 5 & 1 & 1 & 3 & 1 \\ CIFAR-10 & 16 & 5 & 11 & 1 & 1 & 3 & 1 \\ \bottomrule \end{tabular} \label{tab: architecture} \end{table} \begin{figure}[ht] \centering \resizebox{1.0\textwidth}{!}{ \includegraphics{figs/convex_convnet.pdf} } \caption{ \label{fig: convex_convnet} An example convex ConvNet of depth $4$ with a $C_1$ stride of $2$, pool size of $4$, and $32 \times 32$ RGB images. 
There are $6$ input channels from the output of the feature map $\feat\colon x\mapsto (x - \mu, |x - \mu|)$. } \end{figure}
\subsection{Class Accuracy Balancing} \label{app: balancing} As discussed in Section~\ref{sec: experiments}, a balanced \cs~and \cns~test accuracy is essential for a fair comparison of different methods. For methods where the output logits can be directly balanced, this is easily accomplished by computing the ROC curve and choosing the threshold that minimizes $|\mathrm{TPR} - (1 - \mathrm{FPR})|$. This includes both our feature-convex classifiers with one output logit and the Cayley orthogonalization and $\ell_{\infty}$-Net architectures with two output logits. Randomized smoothing classifiers are more challenging, as the relationship between the base classifier threshold and the smoothed classifier prediction is indirect. We address this using a binary search balancing procedure. Namely, on each iteration, the classifier's prediction routine is executed over the test dataset and the ``error'' between the \cs~accuracy and the \cns~accuracy is computed. The sign of the error then provides the binary signal for whether the threshold should be shifted higher or lower in the standard binary search implementation. This procedure is continued until the error drops below $1\%$.
\subsection{Data Processing} \label{app: data} For consistency with \cite{zhang2021boosting}, we augment the MNIST and Fashion-MNIST training data with $1$-pixel padding and random cropping. The CIFAR-10 dataset is augmented with $3$-pixel edge padding, horizontal flips, and random cropping. The Malimg dataset is augmented with $20$-pixel padding and random $512 \times 512$ cropping. For CIFAR-10, MNIST, and Fashion-MNIST, we use the preselected test sets. For Malimg we hold out a random $20\%$ test dataset, although it may not be used in its entirety during testing. The training set is further subdivided by an $80\%$-$20\%$ validation split. For all experiments, we use the first $1000$ test samples to evaluate our methods.
\section{\texorpdfstring{$\ell_2$}{l\_2}- and \texorpdfstring{$\ell_{\infty}$}{l\_infinity}-Certified Radii} \label{app: othernorms} This section reports the counterpart to Figure~\ref{fig: results_l1} for the $\ell_2$- and $\ell_{\infty}$-norms. Across all experiments, we attain substantial $\ell_2$- and $\ell_{\infty}$-radii without relying on computationally expensive sampling schemes or nondeterminism. Radii certified for another norm $\|\cdot\|_p$ are converted to $\ell_q$-radii by leaving them unchanged if $p > q$ and dividing them by $d^{1/p - 1/q}$ otherwise. Certified $\ell_2$-radii are reported in Figure~\ref{fig: results_l2}. Our $\ell_2$-radii are moderate, generally slightly smaller than those produced by Gaussian randomized smoothing. Certified $\ell_{\infty}$-radii are reported in Figure~\ref{fig: results_linf}. For the MNIST 3-8 experiment, the $\ell_{\infty}$-distance nets produce exceptional certified radii. Likewise, the $\ell_{\infty}$-distance net certificates are dominant for the Fashion-MNIST dataset, despite achieving slightly inferior clean accuracy. We note, however, that the applicability of $\ell_{\infty}$-distance nets for sophisticated vision tasks is uncertain, as the method is unable to achieve better-than-random performance for CIFAR-10 cats-dogs (Appendix~\ref{app: baseline}). Our method is comparable to randomized smoothing and $\alpha,\beta$-CROWN in all $\ell_{\infty}$ experiments.
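For concreteness, the radii reported for our classifiers in these figures follow directly from Theorem~\ref{thm: false_negative_robustness}: the subgradient is obtained by automatic differentiation and the appropriate dual norm is applied, while baseline radii certified for a different norm are converted as described above. The snippet below is a minimal PyTorch-style sketch of both computations; the function names are ours, and the code is illustrative rather than the exact evaluation script.

\begin{verbatim}
import torch

def dual_exponent(p):
    # Dual exponent p* satisfying 1/p + 1/p* = 1 (p = 1 pairs with inf).
    if p == 1.0:
        return float("inf")
    if p == float("inf"):
        return 1.0
    return p / (p - 1.0)

def certified_radius(g, feat_x, lip_phi, p):
    # l_p certified radius r(x) at an input with features feat_x = phi(x);
    # valid whenever the logit g(feat_x) is positive (a class 1 prediction).
    feat_x = feat_x.detach().clone().requires_grad_(True)
    logit = g(feat_x).squeeze()
    (grad,) = torch.autograd.grad(logit, feat_x)
    dual_norm = torch.linalg.vector_norm(grad, ord=dual_exponent(p))
    return (logit / (lip_phi * dual_norm)).item()

def convert_radius(r, p, q, d):
    # Sound conversion of an l_p certified radius to an l_q radius in R^d.
    return r if p > q else r / d ** (1.0 / p - 1.0 / q)
\end{verbatim}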
\begin{figure}[ht] \centering \input{figs_combined/l2.tex} \caption{ \Cs~certified radii curves for the $\ell_2$-norm. } \label{fig: results_l2} \end{figure} \begin{figure}[ht] \centering \input{figs_combined/linf.tex} \caption{ \Cs~certified radii curves for the $\ell_{\infty}$-norm. } \label{fig: results_linf} \end{figure} \section{Ablation Tests} \label{app: ablation} We conduct a series of ablation tests on the CIFAR-10 cats-dogs dataset, examining the impact of regularization, feature maps, and data augmentation. \subsection{Regularization} Figure~\ref{fig: regularization} examines the impact of Jacobian regularization over a range of regularization scaling factors $\lambda$, with $\lambda=0$ corresponding to no regularization. As is typical, we see a tradeoff between clean accuracy and certified radii. Further increases in $\lambda$ yield minimal additional benefit. \begin{figure}[ht] \centering \input{figs_gen/cifar10_catsdogs-ablation/cert_L1_ablation_reg.pgf} \caption{Impact of the Jacobian regularization parameter $\lambda$ on CIFAR-10 cats-dogs classification.} \label{fig: regularization} \end{figure} \subsection{Feature Map} In this section, we investigate the importance of the feature map $\feat$. Figure~\ref{fig: featuremap} compares our standard feature-convex classifier with $\feat(x) = (x - \mu, |x - \mu|)$ against an equivalent architecture with $\feat = \id$. Note that the initial layer in the convex ConvNet is a BatchNorm, so even with $\feat = \id$, features still get normalized before being passed into the convolutional architecture. We perform this experiment across both the standard cats-dogs experiment (cats are certified) in the main text and the reverse dogs-cats experiment (dogs are certified). As expected, the clean accuracies for both datasets are lower for $\feat = \id$, while the certified radii are generally larger due to the Lipschitz scaling factor in Theorem~\ref{thm: false_negative_robustness}. Interestingly, while the standard $\feat$ produces comparable performance in both experiments, the identity feature map classifier is more effective in the dogs-cats experiment, achieving around $7\%$ greater clean accuracy. This reflects the observation that convex separability is an asymmetric condition and suggests that feature maps can mitigate this concern. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \input{figs_gen/cifar10_catsdogs-ablation/cert_L1_ablation_feature_map.pgf} \phantomcaption{} \label{fig: featuremap_catsdogs} \end{subfigure} \begin{subfigure}[t]{0.4\linewidth} \input{figs_gen/cifar10_dogscats-ablation/cert_L1_ablation_feature_map.pgf} \phantomcaption{} \label{fig: featuremap_dogscats} \end{subfigure} \caption{ (\subref{fig: featuremap_catsdogs}) Certification performance with cats as \cs~and dogs as \cns. (\subref{fig: featuremap_dogscats}) Certification performance with dogs as \cs~and cats as \cns. } \label{fig: featuremap} \end{figure} \subsection{Unaugmented Accuracies} \label{app: unaugmented_accuracies} Table~\ref{tab: unaugmented_accuracies} summarizes the experimental counterpart to Section~\ref{sec: representation_power}. Namely, Corollary~\ref{cor: cats_dogs_separability} proves that there exists an input-convex classifier ($\feat=\id$) that achieves perfect training accuracy on the CIFAR-10 cats-dogs dataset with no dataset augmentations (random crops, flips, etc.). 
Our practical experiments are far from achieving this theoretical guarantee, with just $73.4\%$ accuracy for cats-dogs and $77.2\%$ for dogs-cats. Improving the practical performance of input-convex classifiers to match their theoretical capacity is an exciting area of future research. \begin{table}[ht] \centering \caption{CIFAR-10 accuracies with no feature augmentation ($\feat=\id$) and no input augmentation.} \begin{tabular}{lcc} \toprule \Cs-\cns~data & Training accuracy & Test accuracy (balanced) \\ \midrule Cats-dogs & $73.4\%$ & $57.3\%$ \\ Dogs-cats & $77.2\%$ & $63.9\%$ \\ \bottomrule \end{tabular} \label{tab: unaugmented_accuracies} \end{table}
\section{MNIST Classes Sweep} \label{app: mnist_sweep} \hl{ For our comparison experiments, we select a specific challenging MNIST class pair ($3$ versus $8$). For completeness, this section includes certification results for our method over all combinations of class pairs in MNIST. As this involves training models for all $90$ ordered class pairs, we lower the number of epochs from $60$ to $10$, maintaining all other architectural details described in Appendix~\ref{app: experimental_setup}. \begin{figure}[ht] \begin{subfigure}[t]{0.32\linewidth} \resizebox{1.0\textwidth}{!}{ \includegraphics{figs_gen/mnist_38-sweep/sweep_Norm.L1.pdf} } \caption{$\ell_1$-certificates.} \label{fig: l1sweep} \end{subfigure} \begin{subfigure}[t]{0.32\linewidth} \resizebox{1.0\textwidth}{!}{ \includegraphics{figs_gen/mnist_38-sweep/sweep_Norm.L2.pdf} } \caption{$\ell_2$-certificates.} \label{fig: l2sweep} \end{subfigure} \begin{subfigure}[t]{0.32\linewidth} \resizebox{1.0\textwidth}{!}{ \includegraphics{figs_gen/mnist_38-sweep/sweep_Norm.LInf.pdf} } \caption{$\ell_{\infty}$-certificates.} \label{fig: linfsweep} \end{subfigure} \caption{ Median certified radii for the MNIST feature-convex architecture over a range of class combinations. The horizontal axis is the class being certified. The MNIST 3-8 experiment considered throughout therefore corresponds to the cell $(3,8)$ in each plot. } \label{fig: mnistsweep} \end{figure} Our certified radii naturally scale with the complexity of the classification problem. As expected, $3$ and $8$ are among the most challenging digits to distinguish, along with 2-8, 5-8, 4-9, and 7-9. Particularly easy combinations to classify typically include $0$ or $1$. The certification performance is remarkably symmetric across the diagonal despite the asymmetry in our convex architectures. In other words, when classifying between digits $i$ and $j$, if a convex classifier exists which generates strong certificates for $i$, then we can generally train an asymmetric classifier that generates strong certificates for $j$. A few exceptions to this can be seen in Figure~\ref{fig: mnistsweep}; the most notable are the 1-9 versus 9-1 pairs and the 4-8 versus 8-4 pairs. A deeper understanding of how class characteristics affect asymmetric certification is an exciting avenue of future research. }
\section{Randomized Smoothing Noise Level Sweeps} \label{app: smoothing_noise} \hl{ In this section, we reproduce the performance of randomized smoothing classifiers under different noise distributions for a range of noise parameters $\sigma$. Namely, we sweep over multiples of the base values of $\sigma$ reported in the subcaptions of Figures \ref{fig: randsmooth_results_l1}, \ref{fig: randsmooth_results_l2}, and \ref{fig: randsmooth_results_linf}.
The base values of $\sigma$ were set to $\sigma=0.75$ for the MNIST 3-8, Fashion-MNIST, and CIFAR-10 cats-dogs experiments. For the higher-resolution Malimg experiment, we increase the base noise to $\sigma=3.5$, matching the highest noise level examined in \citet{levine2021improved}. The epochs used for training were similarly scaled by $n$, starting from the base values provided in Section~\ref{app: experimental_setup}, with the exception of the CIFAR-10 base epochs being increased to $600$ epochs. } \begin{figure}[ht] \centering \input{figs_combined/l1_randsmooth.tex} \caption{ Randomized smoothing certified radii sweeps for the $\ell_1$-norm. Line shade indicates value of the integer noise multiplier $n$, with $n$ ranging from $1$ (darkest line) to $4$ (lightest line). } \label{fig: randsmooth_results_l1} \end{figure} \begin{figure}[ht] \centering \input{figs_combined/l2_randsmooth.tex} \caption{ Randomized smoothing certified radii sweeps for the $\ell_2$-norm. Line shade indicates value of the integer noise multiplier $n$, with $n$ ranging from $1$ (darkest line) to $4$ (lightest line). For higher-dimensional inputs (Malimg and CIFAR-10) methods which certify to a different norm and convert are uncompetitive. } \label{fig: randsmooth_results_l2} \end{figure} \begin{figure}[ht] \centering \input{figs_combined/linf_randsmooth.tex} \caption{ Randomized smoothing certified radii sweeps for the $\ell_\infty$-norm. Line shade indicates value of the integer noise multiplier $n$, with $n$ ranging from $1$ (darkest line) to $4$ (lightest line). } \label{fig: randsmooth_results_linf} \end{figure} \end{toappendix} \section{Introduction} \label{sec: introduction} Although neural networks achieve state-of-the-art performance across a range of machine learning tasks, researchers have shown that they can be highly sensitive to adversarial inputs that are maliciously designed to fool the model \citep{biggio2013evasion,szegedy2014intriguing,nguyen2015deep}. For example, the works \citet{eykholt2018robust} and \citet{liu2019perceptual} show that small physical and digital alterations of vehicle traffic signs can cause image classifiers to fail. In safety-critical applications of neural networks, such as autonomous driving \citep{bojarski2016end,wu2017squeezedet} and medical diagnostics \citep{amato2013artificial,yadav2019deep}, this sensitivity to adversarial inputs is clearly unacceptable. A line of heuristic defenses against adversarial inputs has been proposed, only to be defeated by stronger attack methods \citep{carlini2017adversarial,kurakin2017adversarial,athalye2018obfuscated,uesato2018adversarial,madry2018towards}. This has led researchers to develop certifiably robust methods that provide a provable guarantee of safe performance. The strength of such certificates can be highly dependent on network architecture; general off-the-shelf models tend to have large Lipschitz constants, leading to loose Lipschitz-based robustness guarantees \citep{hein2017formal,fazlyab2019efficient,yang2020closer}. 
Consequently, lines of work that impose certificate-amenable structures onto networks have been popularized, e.g., \hl{specialized model layers \citep{trockman2021orthogonalizing, zhang2021boosting}}, randomized smoothing-based networks \citep{li2019certified,cohen2019certified,zhai2020macer,yang2020randomized,anderson2022certified}, and $\relu$ networks that are certified using convex optimization and mixed-integer programming \citep{wong2018provable,weng2018towards,raghunathan2018semidefinite,anderson2020tightened,ma2021sequential}. \hl{The first category only directly certifies against one specific choice of norm, producing poorly scaled radii for other norms in high dimensions.} The latter two method families incur serious computational challenges: randomized smoothing typically requires the classification of thousands of randomly perturbed samples per input, while optimization-based solutions scale poorly to large networks. Despite the moderate success of these certifiable classifiers, conventional assumptions in the literature are unnecessarily restrictive for most practical adversarial settings. Specifically, most works consider a multiclass setting where certificates are desired for inputs of any class. By contrast, many real-world adversarial attacks involve a binary setting with only one \textit{sensitive class} that must be made robust to adversarial perturbations. Consider the representative problem of spam classification; a malicious adversary crafting a spam email will always attempt to fool the classifier toward the ``not-spam'' class---never conversely \citep{kuchipudi2020adversarial}. Similar logic applies for a range of applications, including malware detection \citep{grosse2017adversarial}, malicious network traffic filtering \citep{sadeghzadeh2021adversarial}, fake news and social media bot detection \citep{cresci2021adversarial}, hate speech removal \citep{grolman2022hateversarial}, insurance claims filtering \citep{finlayson2019adversarial}, and financial fraud detection \citep{cartella2021adversarial}. These applications motivate us to introduce a narrower, asymmetric robustness problem and develop a novel classifier architecture to address this challenge. \subsection{Problem Statement and Contributions} \label{sec: contributions} This work considers the problem of \textit{asymmetric robustness certification}. Specifically, we assume a classification setting wherein one class is ``sensitive'' and seek to certify that, if some input is classified into this sensitive class, then adversarial perturbations of sufficiently small magnitude cannot change the prediction. To tackle the asymmetric robustness certification problem and attain state-of-the-art certified radii, we propose \emph{feature-convex neural networks}, and achieve the following contributions in doing so: \begin{enumerate} \item We provide easily-computable \cs~certified robust radii for feature-convex classifiers with respect to arbitrary $\ell_p$-norms. \item We characterize the decision region geometry of feature-convex classifiers, extend the universal approximation theorem for input-convex $\relu$ neural networks to the classification setting, and show that, in high dimensions, feature-convex classifiers can perfectly fit even unstructured, uniformly distributed datasets. 
\item We evaluate against several baselines on MNIST 3-8 \citep{lecun1998mnist}, Malimg malware classification \citep{nataraj2011malware}, \hl{Fashion-MNIST shirts \citep{xiao2017fashion}}, and CIFAR-10 cats-dogs \citep{krizhevsky2009learning}, and show that our classifiers yield state-of-the-art certified robust radii. \end{enumerate} \subsection{Related Works} \label{sec: related_works} \paragraph{Certified adversarial robustness.} Three of the most popular approaches for generating robustness certificates are Lipschitz-based bounds, randomized smoothing, and convex optimization. Successfully bounding the Lipschitz constant of a neural network can give rise to an efficient certified radius of robustness, e.g., via the methods proposed in \citet{hein2017formal}. However, in practice such Lipschitz constants are too large to yield meaningful certificates, or it is computationally burdensome to compute or bound the Lipschitz constants in the first place \citep{virmaux2018lipschitz,fazlyab2019efficient,yang2020closer}. To overcome these computational limitations, certain methods impose special structures on their model layers to provide immediate Lipschitz guarantees. \hl{Specifically, \citet{trockman2021orthogonalizing} uses the Cayley transform to derive convolutional layers with immediate $\ell_2$-Lipschitz constants, and \citet{zhang2021boosting} introduces a $\ell_{\infty}$-distance neuron that provides similar Lipschitz guarantees with respect to the $\ell_{\infty}$-norm. We compare with both these approaches in our experiments.} Randomized smoothing, popularized by \citet{lecuyer2019certified,li2019certified,cohen2019certified}, uses the expected prediction of a model when subjected to Gaussian input noise. These works derive $\ell_2$-norm balls around inputs on which the smoothed classifier remains constant, but suffer from nondeterminism and high computational burden. Follow-up works generalize randomized smoothing to certify input regions defined by different metrics, e.g., Wasserstein, $\ell_1$-, and $\ell_\infty$-norms \citep{levine2020wasserstein,teng2019ell_1,yang2020randomized}. Other works focus on enlarging the certified regions by optimizing the smoothing distribution \citep{zhai2020macer,eiras2021ancer,anderson2022towards}, incorporating adversarial training into the base classifier \citep{salman2019provably,zhang2020black}, and employing dimensionality reduction at the input \citep{pfrommer2022projected}. Convex optimization-based certificates seek to derive a convex over-approximation of the set of possible outputs when the input is subject to adversarial perturbations, and show that this over-approximation is safe. Various over-approximations have been proposed, e.g., based on linear programming and bounding \citep{wong2018provable,weng2018towards}, semidefinite programming \citep{raghunathan2018semidefinite}, and branch-and-bound approaches \citep{anderson2020tightened,ma2021sequential,wang2021beta}. \hl{The $\alpha,\beta$-CROWN method \citep{wang2021beta} uses an efficient bound propagation to linearly bound the neural network output in conjunction with a per-neuron branching heuristic to achieve state-of-the-art certified radii, winning both the 2021 and the 2022 VNN certification competitions \citep{bak2021second,muller2022third}. 
In contrast to these optimization-based methods, our approach in this paper is to directly exploit the convex structure of input-convex neural networks to derive closed-form robustness certificates for our proposed architecture, altogether avoiding the common efficiency-tightness tradeoffs of prior methods, which we find to compete with and even outperform the state-of-the-art $\alpha,\beta$-CROWN in several settings.} \paragraph{Input-convex neural networks.} Input-convex neural networks, popularized by \citet{amos2017input}, are a class of parameterized models whose input-output mapping is convex (in at least a subset of the input variables). In \citet{amos2017input}, the authors develop tractable methods to learn an input-convex neural network $f\colon \Rd \times \mathbb{R}^n \to \mathbb{R}$ and show that utilizing it for the convex optimization-based inference $x\mapsto \argmin_{y\in \mathbb{R}^n} f(x,y)$ yields state-of-the-art results in a variety of domains. Subsequent works propose novel applications of input-convex neural networks in areas such as optimal control and reinforcement learning \citep{chen2019optimal,zeng2022convex}, optimal transport \citep{makkuva2020optimal}, and optimal power flow \citep{chen2020data,zhang2021convex}. Other works have generalized input-convex networks to input-invex networks \citep{nesterov2022learning} and global optimization networks \citep{zhao2022global} so as to maintain the benign optimization properties of input-convexity. The authors of \citet{siahkamari2022faster} present algorithms for efficiently learning convex functions, while \citet{chen2019optimal,kim2022parameterized} derive universal approximation theorems for input-convex neural networks in the convex regression setting. The work \citet{sivaprasad2021curious} shows that input-convex neural networks do not suffer from overfitting, and generalize better than multilayer perceptrons on common benchmark datasets. In this work, we incorporate input-convex neural networks as a part of our overall feature-convex architecture, and we leverage convexity properties to derive our novel robustness guarantees. \subsection{Notations} \label{sec: notations} The sets of natural numbers and real numbers are denoted by $\N$ and $\mathbb{R}$, respectively. The $d\times d$ identity matrix is written as $I_d \in \mathbb{R}^{d\times d}$, and the identity map on $\Rd$ is denoted by $\id\colon x\mapsto x$. For $A\in\mathbb{R}^{n\times d}$, we define $|A|\in\mathbb{R}^{n\times d}$ by $|A|_{ij} = |A_{ij}|$ for all $i,j$, and we write $A \ge 0$ if and only if $A_{ij} \ge 0$ for all $i,j$. The $\ell_p$-norm on $\Rd$ is given by $\|\cdot\|_p \colon x \mapsto \left(|x_1|^p+\cdots+|x_d|^p\right)^{1 / p}$ for $p\in[1,\infty)$ and by $\|\cdot\|_p \colon x \mapsto \max\{|x_1|,\dots,|x_d|\}$ for $p=\infty$. The dual norm of $\|\cdot\|_p$ is denoted by $\|\cdot\|_{p,*}$. The convex hull of a set $X\subseteq\Rd$ is denoted by $\conv(X)$. The subdifferential of a convex function $g\colon\Rd\to\mathbb{R}$ at $x\in\Rd$ is denoted by $\partial g (x)$. If $\epsilon\colon \Omega \to \Rd$ is a random variable on a probability space $(\Omega,\mathcal{B},\prob)$ and $P$ is a predicate defined on $\Rd$, then we write $\prob(P(\epsilon))$ to mean $\prob(\{\omega\in\Omega : P(\epsilon(\omega))\})$. Lebesgue measure on $\Rd$ is denoted by $m$. We define $\relu\colon\mathbb{R}\to\mathbb{R}$ as $\relu(x) = \max\{0,x\}$, and if $x\in\Rd$, $\relu(x)$ denotes $(\relu(x_1),\dots,\relu(x_d))$. 
For a function $\feat\colon \Rd \to \Rq$ and $p\in[1,\infty]$, we define $\lip_p(\feat) = \inf\{K\ge 0 : \text{$\|\feat(x)-\feat(x')\|_p \le K \|x-x'\|_p$ for all $x,x'\in\Rd$}\}$, and if $\lip_p(\feat) < \infty$ we say that $\feat$ is Lipschitz continuous with constant $\lip_p(\feat)$ (with respect to the $\ell_p$-norm). \section{Feature-Convex Classifiers} \label{sec: method} Let $d,q\in\N$ and $p\in[1,\infty]$ be fixed, and consider the task of classifying inputs from a subset of $\Rd$ into a fixed set of classes $\classes \subseteq \N$. In what follows, we restrict to the binary setting where $\classes = \{1,2\}$ and \cs~is the sensitive class for which we desire robustness certificates (Section~\ref{sec: introduction}). In Appendix~\ref{app: binary}, we briefly discuss avenues to generalize our framework to multiclass settings using one-versus-all and sequential classification methodologies and provide a proof-of-concept example for the Malimg dataset. \begin{toappendix} \input{binary} \end{toappendix} We now formally define the classifiers considered in this work. \begin{definition} \label{def: feature-convexity} Let $\f\colon\Rd \to \{1,2\}$ be defined by \begin{equation*} \f(x) = \begin{aligned} \begin{cases} 1 & \text{if $\g(\feat(x)) > 0$}, \\ 2 & \text{if $\g(\feat(x)) \le 0$}, \end{cases} \end{aligned} \end{equation*} for some $\feat\colon\Rd \to\Rq$ and some $\g\colon\Rq\to\mathbb{R}$. Then $\f$ is said to be a \emph{feature-convex classifier} if the \emph{feature map} $\feat$ is Lipschitz continuous with constant $\lip_p(\feat)<\infty$ and $\g$ is a convex function. \end{definition} We denote the class of all feature-convex classifiers by $\F$. Furthermore, for $q=d$, the subclass of all feature-convex classifiers with $\feat=\id$ is denoted by $\FId$. As we will see in Section~\ref{sec: certified_robustness}, defining our classifiers using the composition of a convex classifier with a Lipschitz feature map enables the fast computation of certified regions in the input space. This naturally arises from the global underestimation of convex functions by first-order Taylor approximations. Since sublevel sets of such $\g$ are restricted to be convex, the feature map $\feat$ is included to increase the representation power and practical performance of our architecture (see Appendix~\ref{app: feature_map} for a motivating example). In practice, we find that it suffices to choose $\feat$ to be a simple map with a small closed-form Lipschitz constant. For example, in our experiments that follow with $q=2d$, we choose $\feat(x) = (x - \mu, |x - \mu|)$ with a constant channel-wise dataset mean $\mu$, yielding $\lip_1(\feat) \le 2$, $\lip_2(\feat) \le \sqrt{2}$, and $\lip_\infty(\feat) \le 1$. Although this particular choice of $\feat$ is convex, the function $\g$ need not be monotone, and therefore the composition $\g \circ \feat$ is nonconvex in general. The prediction and certification of feature-convex classifiers are illustrated in Figure \ref{fig: method}. \begin{toappendix} \section{Feature Map Motivation} \label{app: feature_map} This section examines the importance of the feature map $\feat$ with a low-dimensional example. Consider the binary classification setting where one class $X_2 \subseteq \Rd$ is clustered around the origin and the other class $X_1 \subseteq \Rd$ surrounds it in a ring. 
Here, the pair $(X_1, X_2)$ is convexly separable (see Definition \ref{def: convexly_separable_sets}) as an $\ell_2$-norm ball decision region covering $X_2$ is convex (Figure~\ref{fig: c1cert}). Note that the reverse pair $(X_2, X_1)$ is \textit{not} convexly separable, as there does not exist a convex set containing $X_1$ but excluding $X_2$. A standard input-convex classifier with $\feat=\id$ would therefore be unable to discriminate between the classes in this direction (Proposition~\ref{prop: convex_decision_regions}), i.e., we would be able to learn a classifier that generates certificates for points in $X_1$, but not $X_2$. \begin{figure} \begin{subfigure}[t]{0.49\linewidth} \resizebox{1.0\textwidth}{!}{ \includegraphics{figs_gen/circles-c1/decision_convex_noreg.pdf} } \caption{} \label{fig: c1cert} \end{subfigure} \hfil \begin{subfigure}[t]{0.49\linewidth} \resizebox{1.0\textwidth}{!}{ \includegraphics{figs_gen/circles-c2/decision_convex_noreg.pdf} } \caption{} \label{fig: c2cert} \end{subfigure} \caption{ Experiments demonstrating the role of the feature map $\feat=(x,|x|)$ in $\mathbb{R}^2$, with the output logit shaded. Certified radii from our method are shown as black rings. (\subref{fig: c1cert}) Certifying the outer class (dark red points). This is possible using an input-convex classifier as a convex sublevel set contains the inner class (dark blue points). (\subref{fig: c2cert}) Certifying the inner class (dark red points). This would not be possible with $\feat=\id$ as there is no convex set containing the outer class (dark blue points) but excluding the inner. The feature map $\feat$ enables this by permitting convex separability in the higher dimensional space. Note that although the shaded output logit is not convex in the input, we still generate certificates. } \label{fig: circlecerts} \end{figure} The above problem is addressed by choosing the feature map to be the simple concatenation ${\feat(x) = (x, |x|)}$ mapping from $\Rd$ to $\Rq=\mathbb{R}^{2d}$, with associated Lipschitz constants $\lip_1(\feat) \le 2$, $\lip_2(\feat) \le \sqrt{2}$, and $\lip_\infty(\feat) \le 1$. In this augmented feature space, $X_1$ and $X_2$ are convexly separable in both directions, as they are each contained in a convex set (specifically, a half-space) whose complement contains the other class. We are now able to learn a classifier that takes $X_2$ as the sensitive class for which certificates are required (Figure~\ref{fig: c2cert}). This parallels the motivation of the support vector machine ``kernel trick,'' where inputs are augmented to a higher-dimensional space wherein the data is linearly separable (instead of convexly separable as in our case). \end{toappendix} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs/method.tikz} \caption{Illustration of feature-convex classifiers and their robustness certification. Since $\g$ is convex, it can be globally underapproximated by its tangent plane at $\feat(x)$, yielding certified sets for all norm balls in the higher-dimensional feature space. Lipschitzness of $\feat$ then yields appropriately scaled certificates in the original input space. } \label{fig: method} \end{figure} In practice, we implement feature-convex classifiers using parameterizations of $\g$, which we now make explicit. Following \citet{amos2017input}, we instantiate $\g$ as a neural network with nonnegative weight matrices and nondecreasing convex nonlinearities. 
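As a concrete illustration, such a $\g$ can be realized by keeping the first affine layer unconstrained and projecting all subsequent weights to be nonnegative after each optimizer step, mirroring the training procedure of Appendix~\ref{app: architecture}. The sketch below is ours and is intentionally minimal: it omits the passthrough weights $C^{(l)}$ of Definition~\ref{def: icnn} and uses illustrative layer sizes.

\begin{verbatim}
import torch
import torch.nn as nn

class InputConvexMLP(nn.Module):
    # Convex map g : R^q -> R. Only the first layer carries signed
    # weights; all later weights are kept nonnegative, so the scalar
    # output remains a convex function of the network input.
    def __init__(self, q, hidden=(200, 50)):
        super().__init__()
        dims = (q,) + tuple(hidden) + (1,)
        self.layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
        )

    def project_weights(self):
        # Call after every optimizer step to restore convexity.
        for layer in self.layers[1:]:
            layer.weight.data.clamp_(min=0.0)

    def forward(self, z):
        z = torch.relu(self.layers[0](z))
        for layer in self.layers[1:-1]:
            z = torch.relu(layer(z))
        return self.layers[-1](z).squeeze(-1)
\end{verbatim}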
Specifically, we consider $\relu$ nonlinearities, which is not restrictive, as our universal approximation result in Theorem \ref{thm: uat} proves. \begin{definition} \label{def: icnn} A \emph{feature-convex $\relu$ neural network} is a function $\fhat \colon \Rd \to \{1,2\}$ defined by \begin{equation*} \fhat(x) = \begin{aligned} \begin{cases} 1 & \text{if $\ghat(\feat(x)) > 0$}, \\ 2 & \text{if $\ghat(\feat(x)) \le 0$}, \end{cases} \end{aligned} \end{equation*} with $\feat\colon\Rd\to\Rq$ Lipschitz continuous with constant $\lip_p(\feat)<\infty$ and $\ghat \colon \Rq\to\mathbb{R}$ defined by \begin{align*} x^{(1)} &= \relu\left(A^{(1)}x^{(0)} + b^{(1)}\right), \\ x^{(l)} &= \relu\left(A^{(l)}x^{(l-1)} + b^{(l)} + C^{(l)}x^{(0)}\right), \\ \ghat(x^{(0)}) &= A^{(L)}x^{(L-1)} + b^{(L)} + C^{(L)}x^{(0)}, \end{align*} for all $l\in\{2,3,\dots,L-1\}$ for some $L\in\N$, $L>1$, and for some consistently sized matrices $A^{(l)},C^{(l)}$ and vectors $b^{(l)}$ satisfying $A^{(l)} \ge 0$ for all $l\in\{2,3,\dots,L\}$. \end{definition} Going forward, we denote the class of all feature-convex $\relu$ neural networks by $\Fhat$. Furthermore, if $q=d$, the subclass of all feature-convex $\relu$ neural networks with $\feat=\id$ is denoted by $\FIdhat$, which corresponds to the input-convex $\relu$ neural networks proposed in \citet{amos2017input}. For every $\fhat\in \Fhat$, it holds that $\ghat$ is a convex function due to the rules for composition and nonnegatively weighted sums of convex functions \citep[Section 3.2]{boyd2004convex}, and therefore $\Fhat\subseteq\F$ and $\FIdhat \subseteq \FId$. The ``passthrough'' weights $C^{(l)}$ were originally included by \citet{amos2017input} to improve the practical performance of the architecture. In some of our more challenging experiments that follow, we remove these passthrough operations and instead add residual identity mappings between hidden layers, which also preserves convexity. We note that the transformations defined by $A^{(l)}$ and $C^{(l)}$ can be taken to be convolutions, which are nonnegatively weighted linear operations and thus preserve convexity \citep{amos2017input}. \section{Certification and Analysis of Feature-Convex Classifiers} \label{sec: theory} We begin by deriving asymmetric robustness certificates for our feature-convex classifier in Section~\ref{sec: certified_robustness}. In Section~\ref{sec: representation_power}, we introduce convexly separable sets and theoretically analyze the clean performance of our classifiers through this lens. Namely, we show that there exists a feature-convex classifier with $\feat=\id$ that perfectly classifies the CIFAR-10 cats-dogs training dataset. We show that this strong learning capacity generalizes by proving that feature-convex classifiers can perfectly fit high-dimensional uniformly distributed data with high probability. Proofs are deferred to the appendices. \subsection{Certified Robustness Guarantees} \label{sec: certified_robustness} In this section, we address the asymmetric certified robustness problem by providing \cs~robustness certificates for feature-convex classifiers $\f\in\F$. Such robustness corresponds to proving the absence of false negatives in the case that \cs~represents positives and \cns~represents negatives. For example, if in a malware detection setting \cs~represents malware and \cns~represents non-malware, the following certificate gives a lower bound on the magnitude of the malware file alteration needed in order to misclassify the file as non-malware. 
\begin{theoremrep} \label{thm: false_negative_robustness} Let $\f\in\F$ be as in Definition~\ref{def: feature-convexity} and let $x\in \f^{-1}(\{1\}) = \{x'\in\Rd : \f(x')=1\}$. If $\nabla\g(\feat(x))\in\Rq$ is a nonzero subgradient of the convex function $\g$ at $\feat(x)$, then $\f(x+\delta) = 1$ for all $\delta\in \Rd$ such that \begin{equation*} \|\delta\|_p < r(x) \coloneqq \frac{\g(\feat(x))}{\lip_p(\feat) \| \nabla \g (\feat(x)) \|_{p,*}}. \end{equation*} \end{theoremrep} \begin{proof} Suppose that $\nabla\g(\feat(x)) \in\Rq$ is a nonzero subgradient of $\g$ at $\feat(x)$, so that $\g(y) \ge \g(\feat(x)) + \nabla\g(\feat(x))^\top (y-\feat(x))$ for all $y\in \Rq$. Let $\delta\in\Rd$ be such that $\|\delta\|_p < r(x)$. Then it holds that \begin{align*} \g(\feat(x+\delta)) &\ge \g(\feat(x)) + \nabla\g(\feat(x))^\top (\feat(x+\delta)-\feat(x)) \\ &\ge \g(\feat(x)) - \|\nabla\g(\feat(x))\|_{p,*} \|\feat(x+\delta)-\feat(x)\|_p \\ &\ge \g(\feat(x)) - \|\nabla\g(\feat(x))\|_{p,*} \lip_p(\feat)\|\delta\|_p \\ &> 0, \end{align*} so indeed $\f(x+\delta) = 1$. \end{proof} \begin{remark} For $\f\in\F$ and $x\in \f^{-1}(\{1\})$, a subgradient $\nabla\g(\feat(x))\in\Rq$ of $\g$ always exists at $\feat(x)$, since the subdifferential $\partial \g(\feat(x))$ is a nonempty closed bounded convex set, as $\g$ is a finite convex function on all of $\Rq$---see Theorem 23.4 in \citet{rockafellar1970convex} and the discussion thereafter. Furthermore, if $\f$ is not a constant classifier, such a subgradient $\nabla\g(\feat(x))$ must necessarily be nonzero, since, if it were zero, then $\g(y) \ge \g(\feat(x))+\nabla\g(\feat(x))^\top(y-\feat(x)) = \g(\feat(x)) > 0$ for all $y\in\Rq$, implying that $\f$ identically predicts \cs, which is a contradiction. Thus, the certified radius given in Theorem \ref{thm: false_negative_robustness} is always well-defined in practical settings. \end{remark} Theorem~\ref{thm: false_negative_robustness} is derived from the fact that a convex function is globally underapproximated by any tangent plane. The nonconstant terms in Theorem~\ref{thm: false_negative_robustness} afford an intuitive interpretation: the radius scales proportionally to the confidence $\g(\feat(x))$ and inversely with the input sensitivity $\| \nabla\g(\feat(x)) \|_{p,*}$. In practice, $\lip_p(\feat)$ can be made quite small as mentioned in Section \ref{sec: method}, and furthermore the subgradient $\nabla\g(\feat(x))$ is easily evaluated as the Jacobian of $\g$ at $\feat(x)$ using standard automatic differentiation packages. This provides fast, deterministic \cs~certificates for any $\ell_p$-norm without modification of the feature-convex network's training procedure or architecture. \subsection{Representation Power Characterization} \label{sec: representation_power} We now restrict our analysis to the class $\FId$ of feature-convex classifiers with an identity feature map. This can be equivalently considered as the class of classifiers for which the input-to-logit map is convex. We therefore refer to models in $\FId$ as \emph{input-convex classifiers}. While the feature map $\feat$ is useful in boosting the practical performance of our classifiers, the theoretical results in this section suggest that there is significant potential in using input-convex classifiers as a standalone solution. \inlinesubsection{Classifying convexly separable sets} We begin by introducing the notion of convexly separable sets, which are intimately related to decision regions representable by the class $\FId$. 
\begin{definition} \label{def: convexly_separable_sets} Let $X_1,X_2\subseteq \Rd$. The ordered pair $(X_1,X_2)$ is said to be \emph{convexly separable} if there exists a nonempty closed convex set $X\subseteq\Rd$ such that $X_2 \subseteq X$ and $X_1 \subseteq \Rd\setminus X$. \end{definition} Notice that it may be the case that a pair $(X_1,X_2)$ is convexly separable yet the pair $(X_2,X_1)$ is not. Although low-dimensional intuition may cause concerns regarding the convex separability of sets of binary-labeled data, we will soon see in Theorem \ref{thm: convex_separation_of_high-dimensional_data} that, even for relatively unstructured data distributions, binary datasets are actually convexly separable in high dimensions with high probability. We now show that convexly separable datasets possess the property that they may always be perfectly fit by input-convex classifiers. \begin{toappendix} \begin{lemma} \label{lem: convex_sets_are_sublevel_sets_of_convex_functions} For any nonempty closed convex set $X \subseteq \Rd$, there exists a convex function $g\colon \Rd\to\mathbb{R}$ such that $X = g^{-1}((-\infty,0]) = \{ x\in \Rd : g(x) \le 0\}$. \end{lemma} \begin{proof} Let $X\subseteq \Rd$ be a nonempty closed convex set. We take the distance function $g = d_X$ defined by $d_X(x) = \inf_{y\in X} \|y-x\|_2$. Since $X$ is closed and $y\mapsto \|y-x\|_2$ is coercive for all $x\in \Rd$, it holds that $y\mapsto \|y-x\|_2$ attains its infimum over $X$ \citep[Proposition A.8]{bertsekas2016nonlinear}. Let $x^{(1)},x^{(2)}\in\Rd$ and let $\theta\in[0,1]$. Then there exist $y^{(1)},y^{(2)}\in X$ such that $g(x^{(1)})=\|y^{(1)}-x^{(1)}\|_2$ and $g(x^{(2)}) = \|y^{(2)}-x^{(2)}\|_2$. Since $X$ is convex, it holds that $\theta y^{(1)}+(1-\theta)y^{(2)}\in X$, and therefore \begin{align*} g(\theta x^{(1)}+(1-\theta)x^{(2)}) &= \inf_{y\in X} \|y-(\theta x^{(1)}+(1-\theta) x^{(2)}) \|_2 \\ &\le \| \theta y^{(1)} + (1-\theta)y^{(2)} - (\theta x^{(1)} + (1-\theta)x^{(2)}) \|_2 \\ &\le \theta \|y^{(1)} - x^{(1)}\|_2 + (1-\theta)\|y^{(2)}-x^{(2)}\|_2 \\ &= \theta g(x^{(1)}) + (1-\theta)g(x^{(2)}). \end{align*} Hence, $g=d_X$ is convex. Since $X = \{x\in \Rd : \inf_{y\in X}\|y-x\|_2 = 0\} = \{x\in \Rd : d_X(x) = 0 \} = \{x\in\Rd : d_X(x) \le 0\} = \{x\in\Rd : g(x) \le 0\}$ by nonnegativity of $d_X$, the lemma holds. \end{proof} \end{toappendix} \begin{propositionrep} \label{prop: convex_sets_are_decision_regions_of_input-convex_classifiers} For any nonempty closed convex set $X\subseteq \Rd$, there exists $\f\in\FId$ such that $X = \f^{-1}(\{2\}) = \{x\in\Rd : \f(x) = 2\}$. In particular, this shows that if $(X_1,X_2)$ is a convexly separable pair of subsets of $\Rd$, then there exists $\f\in\FId$ such that $\f(x)=1$ for all $x\in X_1$ and $\f(x)=2$ for all $x\in X_2$. \end{propositionrep} \begin{proof} Let $X\subseteq\Rd$ be a nonempty closed convex set. By Lemma \ref{lem: convex_sets_are_sublevel_sets_of_convex_functions}, there exists a convex function $g \colon \Rd \to \mathbb{R}$ such that $X = \{x\in\Rd : g(x) \le 0\}$. Define $\f\colon \Rd \to \{1,2\}$ by $\f(x) =1$ if $g(x)>0$ and $\f(x)=2$ if $g(x)\le 0$. Clearly, it holds that $\f\in\FId$. Furthermore, for all $x\in X$ it holds that $g(x)\le 0$, implying that $\f(x) = 2$ for all $x\in X$. Conversely, if $x\in\Rd$ is such that $\f(x) = 2$, then $g(x) \le 0$, implying that $x\in X$. Hence, $X = \{x\in\Rd : \f(x) = 2\}$. 
If $(X_1,X_2)$ is a convexly separable pair of subsets of $\Rd$, then there exists a nonempty closed convex set $X\subseteq\Rd$ such that $X_2\subseteq X$ and $X_1 \subseteq \Rd\setminus X$, and therefore there exists $\f\in\FId$ such that $X_2 \subseteq X = \f^{-1}(\{2\})$ and $X_1 \subseteq \Rd\setminus X = \f^{-1}(\{1\})$, implying that indeed $f(x)=1$ for all $x\in X_1$ and $f(x)=2$ for all $x\in X_2$. \end{proof} We also show that the converse of Proposition \ref{prop: convex_sets_are_decision_regions_of_input-convex_classifiers} holds: the geometry of the decision regions of classifiers in $\FId$ consists of a convex set and its complement. \begin{propositionrep} \label{prop: convex_decision_regions} Let $\f \in \FId$. The decision region under $\f$ associated to \cns, namely $X \coloneqq \f^{-1}(\{2\}) = \{x\in \Rd : \f(x) = 2\}$, is a closed convex set. \end{propositionrep} \begin{proof} For all $x\in \Rd$, it holds that $\f(x)=2$ if and only if $\g(x)\le 0$. Since $\f\in\FId$, $\g$ is convex, and hence, $X = \{x\in \Rd : \g(x) \le 0\}$ is a (nonstrict) sublevel set of a convex function and is therefore a closed convex set. \end{proof} Note that this is not necessarily true for our more general feature-convex architectures with $\feat \neq \id$. We continue our theoretical analysis of input-convex classifiers by extending the universal approximation theorem for regressing upon real-valued convex functions (given in \citet{chen2019optimal}) to the classification setting. In particular, Theorem \ref{thm: uat} below shows that any input-convex classifier $\f\in\FId$ can be approximated arbitrarily well on any compact set by $\relu$ neural networks with nonnegative weights. Here, ``arbitrarily well'' means that the set of inputs where the neural network prediction differs from that of $\f$ can be made to have arbitrarily small Lebesgue measure. \begin{toappendix} In order to apply the universal approximation results in \citet{chen2019optimal}, we now introduce their parameterization of input-convex $\relu$ neural networks. Note that it imposes the additional constraint that the first weight matrix $A^{(1)}$ is elementwise nonnegative. \begin{definition} Define $\FIdtil$ to be the class of functions $\ftil\colon\Rd\to\{1,2\}$ given by \begin{equation*} \ftil(x) = \begin{aligned} \begin{cases} 1 & \text{if $\gtil(x) > 0$}, \\ 2 & \text{if $\gtil(x) \le 0$}, \end{cases} \end{aligned} \end{equation*} with $\gtil\colon\Rd\to\mathbb{R}$ given by \begin{align*} x^{(1)} &= \relu \left(A^{(1)}x + b^{(1)}\right), \\ x^{(l)} &= \relu \left(A^{(l)} x^{(l-1)} + b^{(l)} + C^{(l)} x\right), ~ l\in\{2,3,\dots,L-1\}, \\ \gtil(x) &= A^{(L)}x^{(L-1)} + b^{(L)} + C^{(L)}x, \end{align*} for some $L\in\N$, $L>1$, and some consistently sized matrices $A^{(1)},C^{(1)},\dots,A^{(L)},C^{(L)}$, all of which have nonnegative elements, and some consistently sized vectors $b^{(1)},\dots,b^{(L)}$. \end{definition} The following preliminary lemma relates the class $\FIdhat$ from Definition~\ref{def: icnn} to the class $\FIdtil$ above. \begin{lemma} \label{lem: equivalent_network_classes} It holds that $\FIdtil \subseteq \FIdhat$. \end{lemma} \begin{proof} Let $\ftil\in\FIdtil$. Then certainly $A^{(l)} \ge 0$ for all $l\in\{2,3,\dots,L\}$, so indeed $\ftil\in\FIdhat$. Hence, $\FIdtil\subseteq \FIdhat$. \end{proof} Theorem~1 in \citet{chen2019optimal} shows that a Lipschitz convex function can be approximated within an arbitrary tolerance. 
We now provide a technical lemma adapting Theorem~1 in \citet{chen2019optimal} to show that convex functions can be \textit{underapproximated} within an arbitrary tolerance on a compact convex subset. \begin{lemma} \label{lem: icnn_lower_bound} For any convex function $g\colon\Rd\to\mathbb{R}$, any compact convex subset $X$ of $\Rd$, and any $\epsilon>0$, there exists $\fhat\in \FIdhat$ such that $\ghat(x) < g(x)$ for all $x\in X$ and $\sup_{x\in X}\left(g(x) - \ghat(x)\right) < \epsilon$. \end{lemma} \begin{proof} Let $g\colon\Rd\to\mathbb{R}$ be a convex function, let $X$ be a compact convex subset of $\Rd$, and let $\epsilon>0$. Since $g-\epsilon / 2$ is a real-valued convex function on $\Rd$ (and hence is proper), its restriction to the closed and bounded set $X$ is Lipschitz continuous \citep[Theorem 10.4]{rockafellar1970convex}, and therefore Lemma \ref{lem: equivalent_network_classes} together with Theorem 1 in \citet{chen2019optimal} gives that there exists $\fhat\in \FIdtil\subseteq \FIdhat$ such that $\sup_{x\in X}\left| \left(g(x) - \epsilon / 2\right) - \ghat(x)\right| < \epsilon / 2$. Thus, for all $x\in X$, \begin{align*} g(x) - \ghat(x) &= \left(g(x) - \frac{\epsilon}{2} \right) - \ghat(x) + \frac{\epsilon}{2} \\ &> \left(g(x) - \frac{\epsilon}{2}\right) - \ghat(x) + \sup_{y\in X} \left|\left(g(y) - \frac{\epsilon}{2}\right) - \ghat(y)\right| \\ &\ge \left(g(x) - \frac{\epsilon}{2}\right) - \ghat(x) + \left|\left(g(x) - \frac{\epsilon}{2}\right) - \ghat(x)\right| \\ &\ge 0. \end{align*} Furthermore, \begin{align*} \sup_{x\in X}\left(g(x) - \ghat(x)\right) &= \sup_{x\in X}\left|g(x) - \ghat(x)\right| \\ &= \sup_{x\in X}\left|\left(g(x) - \frac{\epsilon}{2}\right) - \ghat(x) + \frac{\epsilon}{2}\right| \\ &\le \sup_{x\in X}\left|\left(g(x) - \frac{\epsilon}{2}\right) - \ghat(x)\right| + \frac{\epsilon}{2} \\ &< \epsilon, \end{align*} which proves the lemma. \end{proof} We leverage Lemma~\ref{lem: icnn_lower_bound} to construct a uniformly converging sequence of underapproximating functions. \begin{lemma} \label{lem: monotone_sequence_of_icnns} For all $\f\in\FId$ and all compact convex subsets $X$ of $\Rd$, there exists a sequence $\{\fhat_n \in \FIdhat : n\in\N\} \subseteq \FIdhat$ such that $\ghat_n(x) < \ghat_{n+1}(x) < \g(x)$ for all $x\in X$ and all $n\in\N$ and $\ghat_n$ converges uniformly to $\g$ on $X$ as $n\to\infty$. \end{lemma} \begin{proof} Let $\f\in\FId$ and let $X$ be a compact convex subset of $\Rd$. Let $\{\epsilon_n > 0 : n\in\N\}$ be a sequence such that $\epsilon_{n+1}<\epsilon_n$ for all $n\in \N$ and $\epsilon_n \to 0$ as $n\to\infty$. Such a sequence clearly exists, e.g., by taking $\epsilon_n = 1 / n$ for all $n\in\N$. Now, for all $n\in\N$, the function $\g - \epsilon_{n+1}$ is convex, and therefore by Lemma \ref{lem: icnn_lower_bound} there exists $\fhat_n\in \FIdhat$ such that $\ghat_n(x) < \g(x) - \epsilon_{n+1}$ for all $x\in X$ and $\sup_{x\in X} \left((\g(x) - \epsilon_{n+1}) - \ghat_{n}(x)\right) < \epsilon_n - \epsilon_{n+1}$. Fixing such $\fhat_n, \ghat_n$ for all $n\in\N$, we see that $\sup_{x\in X} \left((\g(x) - \epsilon_{n+2}) - \ghat_{n+1}(x)\right) < \epsilon_{n+1}-\epsilon_{n+2}$, which implies that \begin{equation*} \ghat_{n+1}(x) > \g(x) - \epsilon_{n+1} > \ghat_n(x) \end{equation*} for all $x\in X$, which proves the first inequality. The second inequality comes from the fact that $\ghat_{n+1}(x) < \g(x) - \epsilon_{n+2} < \g(x)$ for all $x\in X$. 
Finally, since $\g(x) - \ghat_n(x) > \epsilon_{n+1} > 0$ for all $x\in X$ and all $n\in\N$, we see that \begin{equation*} \sup_{x\in X}\left|\g(x) - \ghat_n(x)\right| = \sup_{x\in X}\left(\g(x) - \ghat_n(x)\right) < \epsilon_n \to 0 ~ \text{as $n\to\infty$}, \end{equation*} which proves that $\lim_{n\to\infty} \sup_{x\in X} \left|\g(x) - \ghat_n(x)\right| = 0$, so indeed $\ghat_n$ converges uniformly to $\g$ on $X$ as $n\to\infty$. \end{proof} With all the necessary lemmas in place, we now present our main theoretical results. \end{toappendix} \begin{theoremrep} \label{thm: uat} For any $\f\in\FId$, any compact convex subset $X$ of $\Rd$, and any $\epsilon>0$, there exists $\fhat\in \FIdhat$ such that $m(\{x\in X : \fhat(x) \ne \f(x)\}) < \epsilon$. \end{theoremrep} \begin{proof} Let $\f\in\FId$ and let $X$ be a compact convex subset of $\Rd$. By Lemma \ref{lem: monotone_sequence_of_icnns}, there exists a sequence $\{\fhat_n \in \FIdhat : n\in\N\} \subseteq \FIdhat$ such that $\ghat_n(x) < \ghat_{n+1}(x) < \g(x)$ for all $x\in X$ and all $n\in\N$ and $\ghat_n$ converges uniformly to $\g$ on $X$ as $n\to\infty$. Fix this sequence. For all $n\in\N$, define \begin{equation*} E_n = \{x\in X : \fhat_n(x) \ne \f(x)\}, \end{equation*} i.e., the set of points in $X$ for which the classification under $\fhat_n$ does not agree with that under $\f$. Since $\ghat_n(x) < \g(x)$ for all $x\in X$ and all $n\in \N$, we see that \begin{gather*} E_n = \{x\in X : \text{$\ghat_n(x)>0$ and $\g(x) \le 0$}\} \cup \{x\in X : \text{$\ghat_n(x) \le 0$ and $\g(x) > 0$}\} \\ = \{x\in X : \text{$\ghat_n(x) \le 0$ and $\g(x) > 0$}\}. \end{gather*} Since $\g$ is a real-valued convex function on $\Rd$, it is continuous \citep[Corollary 10.1.1]{rockafellar1970convex}, and therefore $\g^{-1}((0,\infty)) = \{x\in \Rd : \g(x) > 0\}$ is measurable. Similarly, $\ghat_n^{-1}((-\infty,0]) = \{x\in\Rd : \ghat_n(x)\le 0\}$ is also measurable for all $n\in\N$ since $\ghat_n$ is continuous. Furthermore, $X$ is measurable as it is compact. Therefore, $E_n$ is measurable for all $n\in\N$. Now, since $\ghat_n(x) < \ghat_{n+1}(x)$ for all $x\in X$ and all $n\in \N$, it holds that $E_{n+1}\subseteq E_n$ for all $n\in\N$. It is clear that to prove the result, it suffices to show that $\lim_{n\to\infty}m(E_n)=0$. Therefore, if we show that $m\left(\bigcap_{n\in\N} E_n\right) = 0$, then the fact that $m(E_1) \le m(X) < \infty$ together with Lebesgue measure's continuity from above yields that $\lim_{n\to\infty} m(E_n) = 0$, thereby proving the result. It remains to be shown that $m\left(\bigcap_{n\in\N} E_n\right) = 0$. To this end, suppose for the sake of contradiction that $\bigcap_{n\in\N} E_n \ne \emptyset$. Then there exists $x\in \bigcap_{n\in\N} E_n$, meaning that $\g(x) > 0$ and $\ghat_n(x) \le 0$ for all $n\in\N$. Thus, for this $x\in X$, we find that $\limsup_{n\to\infty} \ghat_n(x) \le 0 < \g(x)$, which contradicts the fact that $\ghat_n$ uniformly converges to $\g$ on $X$. Therefore, it must be that $\bigcap_{n\in\N} E_n = \emptyset$, and thus $m\left(\bigcap_{n\in\N}E_n\right) = 0$, which concludes the proof. \end{proof} An extension of the proof of Theorem \ref{thm: uat} combined with Proposition~\ref{prop: convex_sets_are_decision_regions_of_input-convex_classifiers} yields that input-convex $\relu$ neural networks can perfectly fit convexly separable sampled datasets. 
\begin{theoremrep} \label{thm: relu_nets_fit_finite_data} If $(X_1,X_2)$ is a convexly separable pair of finite subsets of $\Rd$, then there exists $\fhat \in \FIdhat$ such that $\fhat(x)=1$ for all $x\in X_1$ and $\fhat(x) = 2$ for all $x\in X_2$. \end{theoremrep} \begin{proof} Throughout this proof, we denote the complement of a set $Y\subseteq\Rd$ by $Y^c = \Rd\setminus Y$. Suppose that $X_1=\{x^{(1)},\dots,x^{(M)}\}\subseteq \Rd$ and $X_2 = \{y^{(1)},\dots,y^{(N)}\}\subseteq\Rd$ are such that $(X_1,X_2)$ is convexly separable. Then, by definition of convex separability, there exists a nonempty closed convex set $X'\subseteq\Rd$ such that $X_2\subseteq X'$ and $X_1\subseteq \Rd\setminus X'$. Let $X = X' \cap \conv(X_2)$. Since $X_2\subseteq X'$ and both sets $X'$ and $\conv(X_2)$ are convex, the set $X$ is nonempty and convex. By finiteness of $X_2$, the set $\conv(X_2)$ is compact, and therefore by closedness of $X'$, the set $X$ is compact and hence closed. By Proposition \ref{prop: convex_sets_are_decision_regions_of_input-convex_classifiers}, there exists $\f\in\FId$ such that $\f^{-1}(\{2\}) = X$. Since $\conv(X_1\cup X_2)$ is compact and convex, Lemma \ref{lem: monotone_sequence_of_icnns} gives that there exists a sequence $\{\fhat_n\in\FIdhat : n\in\N\} \subseteq \FIdhat$ such that $\ghat_n(x)<\ghat_{n+1}(x) < \g(x)$ for all $x\in \conv(X_1\cup X_2)$ and all $n\in\N$ and $\ghat_n$ converges uniformly to $\g$ on $\conv(X_1\cup X_2)$ as $n\to\infty$. Fix this sequence. Let $x\in X_2$. Then, since $X_2\subseteq X'$ and $X_2\subseteq\conv(X_2)$, it holds that $x\in X'\cap\conv(X_2)=X=\f^{-1}(\{2\})$, implying that $\f(x)=2$ and hence $\g(x)\le 0$. Since $\ghat_n(x) < \g(x)$ for all $n\in\N$, this shows that $\fhat_n(x)=2$ for all $n\in\N$. On the other hand, let $i\in\{1,\dots,M\}$ and consider $x=x^{(i)}\in X_1$. Since $X_1 \subseteq \Rd \setminus X' = \Rd \cap (X')^c \subseteq \Rd \cap(X'\cap \conv(X_2))^c = \Rd \cap X^c = \Rd \cap \f^{-1}(\{1\})$, it holds that $\f(x) = 1$ and thus $\g(x) > 0$. Suppose for the sake of contradiction that $\fhat_n(x)=2$ for all $n\in\N$. Then $\ghat_n(x) \le 0$ for all $n\in\N$. Therefore, for this $x\in X_1$, we find that $\limsup_{n\to\infty} \ghat_n(x) \le 0 < \g(x)$, which contradicts the fact that $\ghat_n$ uniformly converges to $g$ on $\conv(X_1\cup X_2)$. Therefore, it must be that there exists $n_i\in\N$ such that $\fhat_{n_i}(x)=1$, and thus $\ghat_{n_i}(x)>0$. Since $\ghat_n(x) < \ghat_{n+1}(x)$ for all $n\in\N$, this implies that $\ghat_n(x) > 0$ for all $n \ge n_i$. Hence, $\fhat_n(x)=\fhat_n(x^{(i)})=1$ for all $n\ge n_i$. Let $n^\star$ be the maximum of all such $n_i$, i.e., $n^\star = \max\{n_i : i\in\{1,\dots,M\}\}$. Then the above analysis shows that $\fhat_{n^\star}(x)=2$ for all $x\in X_2$ and that $\fhat_{n^\star}(x)=1$ for all $x\in X_1$. Since $\fhat_{n^\star}\in \FIdhat$, the claim has been proven. \end{proof} Theorems \ref{thm: uat} and \ref{thm: relu_nets_fit_finite_data} theoretically justify the particular parameterization in Definition \ref{def: icnn} for learning feature-convex classifiers to fit convexly separable data. \inlinesubsection{Empirical convex separability} Interestingly, we find empirically that high-dimensional image training data is convexly separable. 
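Concretely, convex separability of the two training classes can be probed by checking whether any image of one class can be written as a convex combination of the images of the other class. The following is a minimal sketch of such a check using \texttt{cvxpy} (illustrative only, and not necessarily the exact procedure of Appendix~\ref{app: convex_comb}), where the rows of \texttt{D} hold the flattened images of one class and \texttt{c} is a flattened image of the other.

\begin{verbatim}
import cvxpy as cp

def convex_hull_distance(D, c):
    # Smallest l2 error attainable when approximating the flattened
    # image c by a convex combination of the rows of D. A strictly
    # positive value (beyond solver tolerance) certifies that c lies
    # outside conv(D).
    w = cp.Variable(D.shape[0], nonneg=True)
    problem = cp.Problem(
        cp.Minimize(cp.norm(D.T @ w - c, 2)),
        [cp.sum(w) == 1],
    )
    problem.solve()
    return problem.value
\end{verbatim}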
We illustrate this in Appendix~\ref{app: convex_comb} by attempting to reconstruct a CIFAR-10 cat image from a convex combination of the dogs and vice versa; the error is significantly positive for \emph{every} sample in the training dataset, and image reconstruction is visually poor. This fact, combined with Theorem~\ref{thm: relu_nets_fit_finite_data}, immediately yields the following result. \begin{corollary} \label{cor: cats_dogs_separability} There exists $\fhat \in \FIdhat$ such that $\fhat$ achieves perfect training accuracy for the unaugmented CIFAR-10 cats-versus-dogs dataset. \end{corollary} The gap between this theoretical guarantee and our practical performance is large; without the feature map, our CIFAR-10 cats-dogs classifier achieves just $73.4\%$ training accuracy (Table~\ref{tab: unaugmented_accuracies}). While high training accuracy may not necessarily imply strong test set performance, Corollary~\ref{cor: cats_dogs_separability} demonstrates that the typical deep learning paradigm of overfitting to the training dataset is attainable and that there is at least substantial room for improvement in the design and optimization of input-convex classifiers \citep{nakkiran2021deep}. We leave the challenge of overfitting to the CIFAR-10 cats-dogs training data with an input-convex classifier as an open research problem for the field. \inlinesubsection{Convex separability in high dimensions} We conclude by investigating \emph{why} the convex separability property that allows for Corollary~\ref{cor: cats_dogs_separability} may hold for natural image datasets. We argue that dimensionality facilitates this phenomenon by showing that data is easily separated by some $\f\in\FIdhat$ when $d$ is sufficiently large. In particular, although it may seem restrictive to rely on models in $\FIdhat$ with convex \cns~decision regions, we show in Theorem \ref{thm: convex_separation_of_high-dimensional_data} below that even uninformative data distributions that are seemingly difficult to classify may be fit by such models with high probability as the dimensionality of the data increases. \begin{theoremrep} \label{thm: convex_separation_of_high-dimensional_data} \hl{ Consider $M,N\in\N$. Let $X_1 = \{x^{(1)},\dots,x^{(M)}\}\subseteq\Rd$ and $X_2 = \{y^{(1)},\dots,y^{(N)}\}\subseteq\Rd$ be samples with all elements $x^{(i)}_k,y^{(j)}_l$ drawn independently and identically from the uniform probability distribution on $[-1,1]$. Then, it holds that \begin{equation} \begin{multlined} \prob\big( \text{$(X_1,X_2)$ is convexly separable} \big) \\ \ge \begin{aligned} \begin{cases} 1 - \left(1-\frac{M!N!}{(M+N)!}\right)^d & \text{for all $d\in \N$}, \\ 1 & \text{if $d \ge M+N$}. \end{cases} \end{aligned} \end{multlined} \label{eq: convex_separation_of_high-dimensional_data} \end{equation} In particular, $\FIdhat$ contains an input-convex $\relu$ neural network that classifies all $x^{(i)}$ into \cs~and all $y^{(j)}$ into \cns~almost surely for sufficiently large dimensions $d$. } \end{theoremrep} \begin{proof} \hl{Throughout the proof, we denote the cardinality of a set $S$ by $|S|$. For the reader's convenience, we also recall that, for $n\in\N$, the symmetric group $S_n$ consists of all permutations (i.e., bijections) on the set $\{1,2,\dots,n\}$, and that $|S_n| = n!$. 
If $\sigma\colon \{1,2,\dots,n\}\to\{1,2,\dots,n\}$ is a permutation in $S_n$, we denote the restriction of $\sigma$ to the domain $I\subseteq\{1,2,\dots,n\}$ by $\sigma|_I \colon I \to \{1,2,\dots,n\}$, which we recall is defined by $\sigma|_I (i) = \sigma(i)$ for all $i\in I$, and is not necessarily a permutation on $I$ in general. Consider first the case where $d \ge M+N$. Let $b\in\mathbb{R}^{M+N}$ be the vector defined by $b_i = 1$ for all $i\in\{1,\dots,M\}$ and $b_i = -1$ for all $i\in\{M+1,\dots,M+N\}$. Then, since $x_k^{(i)},y_l^{(j)}$ are independent uniformly distributed random variables on $[-1,1]$, it holds that the matrix \begin{equation*} \begin{bmatrix} {x^{(1)}}^\top \\ \vdots \\ {x^{(M)}}^\top \\ {y^{(1)}}^\top \\ \vdots \\ {y^{(N)}}^\top \end{bmatrix} \in \mathbb{R}^{(M+N)\times d} \end{equation*} has rank $M+N$ almost surely, and therefore the linear system of equations \begin{equation*} \begin{bmatrix} {x^{(1)}}^\top \\ \vdots \\ {x^{(M)}}^\top \\ {y^{(1)}}^\top \\ \vdots \\ {y^{(N)}}^\top \end{bmatrix} a = b \end{equation*} has a solution $a\in\Rd$ with probability $1$, and we note that from this solution we find that $X_2$ is a subset of the nonempty closed convex set $\{x\in\Rd : a^\top x \le 0\}$ and that $X_1$ is a subset of its complement. Hence, $(X_1,X_2)$ is convexly separable with probability $1$ in this case. Now let us consider the general case: $d \in\N$ and in general it may be the case that $d < M+N$.} For notational convenience, let $P$ be the probability of interest: \begin{equation*} P = \prob\big(\text{$(X_1,X_2)$ is convexly separable}\big). \end{equation*} Suppose that there exists a coordinate $k\in\{1,2,\dots,d\}$ such that $x^{(i)}_k < y^{(j)}_k$ for all pairs $(i,j)\in\{1,2,\dots,M\}\times\{1,2,\dots,N\}$ and that $a \coloneqq \min \{y^{(1)}_k,\dots,y^{(N)}_k\} < \max\{y^{(1)}_k,\dots,y^{(N)}_k\} \eqqcolon b$. Then, let $X = \{x\in \Rd : x_k\in[a,b]\}$. That is, $X$ is the extrusion of the convex hull of the projections $\{y^{(1)}_k,\dots,y^{(N)}_k\}$ along all remaining coordinates. The set $X$ is a nonempty closed convex set, and it is clear by our supposition that $X_2 \subseteq X$ and $X_1 \subseteq \Rd\setminus X$. Therefore, the supposition implies that $(X_1,X_2)$ is convexly separable, and thus \begin{align*} P &\ge \prob\big(\text{there exists $k\in\{1,2,\dots,d\}$ such that $x^{(i)}_k < y^{(j)}_k$ for all pairs $(i,j)$}\\ &\quad \text{and that $\min\{y^{(1)}_k,\dots,y^{(N)}_k\} < \max\{y^{(1)}_k,\dots,y^{(N)}_k\}$}\big) \\ &= 1 - \prob\big(\text{for all $k\in\{1,2,\dots,d\}$, it holds that $x_k^{(i)} \ge y_k^{(j)}$ for some pair $(i,j)$}\\ &\quad \text{or that $\min\{y_k^{(1)},\dots,y_k^{(N)}\} = \max\{y^{(1)}_k,\dots,y_k^{(N)}\}$} \big) \\ &= 1 - \prod_{k=1}^d \prob\big( \text{$x_k^{(i)} \ge y_k^{(j)}$ for some pair $(i,j)$ or $\min\{y_k^{(1)},\dots,y_k^{(N)}\} = \max\{y^{(1)}_k,\dots,y_k^{(N)}\}$} \big), \end{align*} where the final equality follows from the independence of the coordinates of the samples. 
Since $\min\{y_k^{(1)},\dots,y_k^{(N)}\} < \max\{y_k^{(1)},\dots,y_k^{(N)}\}$ almost surely, we find that \begin{equation} \begin{aligned} P &\ge 1 - \prod_{k=1}^d \bigg( \prob(\text{$x_k^{(i)} \ge y_k^{(j)}$ for some pair $(i,j)$}) \\ &\quad + \prob(\min\{y_k^{(1)},\dots,y_k^{(N)}\} = \max\{y_k^{(1)},\dots,y_k^{(N)}\}) \bigg) \\ &= 1 - \prod_{k=1}^d \prob( \text{$x_k^{(i)} \ge y_k^{(j)}$ for some pair $(i,j)$}) \\ &= 1- \prod_{k=1}^d \left( 1 - \prob(\text{$x_k^{(i)} < y_k^{(j)}$ for all pairs $(i,j)$}) \right) \\ &= 1- \prod_{k=1}^d \left( 1 - \prob\left(\max_{i\in\{1,2,\dots,M\}}x_k^{(i)} < \min_{j\in\{1,2,\dots,N\}}y_k^{(j)}\right) \right) \\ &= 1 - \prod_{k=1}^d \left(1-\prob \left((x_k^{(1)},\dots,x_k^{(M)},y_k^{(1)},\dots,y_k^{(N)})\in \bigcup_{\sigma \in S} E_\sigma\right)\right), \end{aligned} \label{eq: high-dimensions_proof-intermediate_bound_1} \end{equation} \hl{where we define $S$ to be the set of permutations on $\{1,\dots,M+N\}$ whose restriction to $\{1,\dots,M\}$ is also a permutation; \begin{equation*} S = \left\{\sigma\in S_{M+N} : \sigma|_{\{1,\dots,M\}}\in S_M\right\}, \end{equation*} and where, for a permutation $\sigma\in S_{M+N}$, $E_\sigma$ is the event where an $(M+N)$-vector has indices ordered according to $\sigma$; \begin{equation*} E_\sigma = \{z\in\mathbb{R}^{M+N} : z_{\sigma(1)} < \cdots < z_{\sigma(M+N)}\}. \end{equation*} We note that the final equality in \eqref{eq: high-dimensions_proof-intermediate_bound_1} relies on the fact that $\prob(x_k^{(i)}=x_k^{(i')})=\prob(y_k^{(j)}=y_k^{(j')})=0$ for all $i'\ne i$ and all $j'\ne j$, which is specific to our uniform distribution at hand. Now, since $E_{\sigma},E_{\sigma'}$ are disjoint for distinct permutations $\sigma,\sigma' \in S_{M+N}$, the bound \eqref{eq: high-dimensions_proof-intermediate_bound_1} gives that \begin{equation} P \ge 1 - \prod_{k=1}^d \left(1-\sum_{\sigma \in S} \prob((x_k^{(1)},\dots,x_k^{(M)},y_k^{(1)},\dots,y_k^{(N)})\in E_\sigma)\right). \label{eq: high_dimensions_proof-intermediate_bound_2} \end{equation} Since $x_k^{(1)},\dots,x_k^{(M)},y_k^{(1)},\dots,y_k^{(N)}$ are independent and identically distributed samples, they define an exchangeable sequence of random variables, implying that $\prob((x_k^{(1)},\dots,x_k^{(M)},y_k^{(1)},\dots,y_k^{(N)})\in E_\sigma) = \prob(x_k^{(1)} < \dots < x_k^{(M)} < y_k^{(1)} < \dots < y_k^{(N)})$ for all permutations $\sigma\in S_{M+N}$. Since, under the uniform distribution at hand, $(x_k^{(1)},\dots,x_k^{(M)},y_k^{(1)},\dots,y_k^{(N)})\in E_\sigma$ for some $\sigma\in S_{M+N}$ almost surely, it holds that \begin{align*} 1 &= \prob\left((x_k^{(1)},\dots,x_k^{(M)},y_k^{(1)},\dots,y_k^{(N)})\in\bigcup_{\sigma\in S_{M+N}} E_\sigma\right) \\ &= \sum_{\sigma\in S_{M+N}} \prob((x_k^{(1)},\dots,x_k^{(M)},y_k^{(1)},\dots,y_k^{(N)})\in E_\sigma) \\ &= |S_{M+N}| \prob(x_k^{(1)}<\cdots < x_k^{(M)}<y_k^{(1)}<\cdots <y_k^{(N)}). \end{align*} This implies that \begin{equation*} \prob((x_k^{(1)},\dots,x_k^{(M)},y_k^{(1)},\dots,y_k^{(N)})\in E_\sigma) = \frac{1}{|S_{M+N}|} = \frac{1}{(M+N)!} \end{equation*} for all permutations $\sigma\in S_{M+N}$. Hence, our bound \eqref{eq: high_dimensions_proof-intermediate_bound_2} becomes \begin{equation*} P \ge 1- \prod_{k=1}^d \left(1-\frac{|S|}{(M+N)!}\right) = 1 - \left(1-\frac{|S|}{(M+N)!}\right)^d.
\end{equation*} Finally, we immediately see that the map $\Gamma \colon S_M\times S_N \to S_{M+N}$ defined by \begin{equation*} \Gamma(\sigma,\sigma')(i) = \begin{aligned} \begin{cases} \sigma(i) & \text{if $i\in\{1,\dots,M\}$}, \\ \sigma'(i-M) + M & \text{if $i\in\{M+1,\dots,M+N\}$}, \end{cases} \end{aligned} \end{equation*} is injective and has image $S$, implying that $|S| = |S_M\times S_N| = |S_M| |S_N| = M!N!$. Thus, \begin{equation*} P \ge 1 - \left(1-\frac{M!N!}{(M+N)!}\right)^d, \end{equation*} which proves \eqref{eq: convex_separation_of_high-dimensional_data-apx}. The unit probability of $\FIdhat$ containing a classifier that classifies all $x^{(i)}$ into \cs~and all $y^{(j)}$ into \cns~for large $d$ follows immediately from Theorem \ref{thm: relu_nets_fit_finite_data}.} \end{proof} Although the uniformly distributed data in Theorem \ref{thm: convex_separation_of_high-dimensional_data} is unrealistic in practice, the result demonstrates that the class $\FIdhat$ of input-convex $\relu$ neural networks has sufficient complexity to fit even the most unstructured data in high dimensions. Despite this ability, researchers have found that current input-convex neural networks tend not to overfit in practice, yielding small generalization gaps relative to conventional neural networks \citep{sivaprasad2021curious}. Achieving the modern deep learning paradigm of overfitting to the training dataset with input-convex networks is an exciting open challenge \citep{nakkiran2021deep}. \begin{toappendix} \section{CIFAR-10 Cats-versus-Dogs Convex Separability} \label{app: convex_comb} In order to establish that the cat and dog images in CIFAR-10 are convexly separable, we experimentally attempt to reconstruct an image from one class using a convex combination of all images in the other class (without augmentation such as random crops, flips, etc.). Namely, if $x$ is drawn from one class and $y^{(1)},\dots,y^{(N)}$ represent the entirety of the other class, we form the following optimization problem: \begin{equation*} \begin{aligned} & \underset{\alpha \in \mathbb{R}^N}{\text{minimize}} && \Big \| x - \sum_{j=1}^N \alpha_j y^{(j)} \Big \|_2 \\ & \text{subject to} && \alpha \geq 0, \\ &&& \sum_{j=1}^N \alpha_j = 1. \end{aligned} \end{equation*} The reverse experiment for the other class follows similarly. We solve the optimization using MOSEK \citep{mosek}, and report the various norms of $x - \sum_{j=1}^N \alpha_j y^{(j)}$ in Figure~\ref{fig: reconstructions}. Reconstruction accuracy is generally very poor, with no reconstruction achieving better than an $\ell_1$-error of $52$. A typical reconstructed image is shown in Figure~\ref{fig: reconstruction_example}. \begin{figure}[ht] \centering \resizebox{0.8\textwidth}{!}{ \input{figs/reconstruction.pgf} } \caption{ \label{fig: reconstructions} Reconstructing CIFAR-10 cat and dog images as convex combinations. The label ``Dogs $\to$ cat'' indicates that a cat image was attempted to be reconstructed as a convex combination of all $5000$ dog images. } \end{figure} \begin{figure}[ht] \centering \resizebox{0.8\textwidth}{!}{ \includegraphics{figs/reconstruction_example.png} } \caption{ \label{fig: reconstruction_example} Reconstructing a CIFAR-10 cat image (left) from a convex combination of dog images (right). The reconstruction error norms are $294.57$, $6.65$, and $0.38$ for the $\ell_1$-, $\ell_2$-, and $\ell_{\infty}$-norms, respectively. These are typical, as indicated by Figure~\ref{fig: reconstructions}. } \end{figure} \end{toappendix}
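For concreteness, the reconstruction experiment of Appendix~\ref{app: convex_comb} amounts to a small convex program that is easy to reproduce. The following is a minimal sketch written with the \texttt{cvxpy} modeling package rather than the MOSEK interface used for the reported numbers; the arrays \texttt{x} and \texttt{Y} are placeholders for one flattened target image and the stacked flattened images of the opposite class, and the sizes are illustrative only.
\begin{verbatim}
import numpy as np
import cvxpy as cp

# Placeholder data standing in for CIFAR-10 images: one flattened target image x
# and N flattened images Y (one per row) of the opposite class.  Sizes are
# illustrative; the appendix experiment uses all 5000 images with d = 3072.
rng = np.random.default_rng(0)
N, d = 200, 64
x = rng.random(d)
Y = rng.random((N, d))

alpha = cp.Variable(N)                                # convex-combination weights
objective = cp.Minimize(cp.norm(x - Y.T @ alpha, 2))  # l2 reconstruction error
constraints = [alpha >= 0, cp.sum(alpha) == 1]
problem = cp.Problem(objective, constraints)
problem.solve()                                       # any installed conic solver

print("l2 reconstruction error:", problem.value)
\end{verbatim}
Solving this problem for every training image (and its mirror version with the classes swapped) yields the error distributions summarized in Figure~\ref{fig: reconstructions}.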
\section{Introduction\label{sec:Introduction}} Quantum coherences are at the heart of many nonlinear and quantum optical phenomena. They are responsible for the appearance of such effects as coherent population trapping \cite{Arimondo1996}, electromagnetically induced transparency \cite{Fleischhauer2005}, extremely slow light propagation \cite{Dutton2004} and its storage \cite{Fleischhauer2000} in a medium, etc. A possibility of controllable generation and modification of an arbitrary quantum state also lays at the very foundations of quantum-state engineering and quantum-information processing (see, for example, Ref.~\cite{Hammerer2009Quantum}). A specific example of quantum coherence phenomenon is nonlinear magneto-optical rotation (NMOR) \cite{NMORreview,Gawlik2009}. The effect consists of light-intensity-dependent rotation of the polarization plane of linearly polarized light upon its propagation through a medium placed in an external magnetic field. The effect is based on the generation, evolution, and detection of non-equilibrium population distribution and/or quantum coherences between Zeeman sublevels of a given atomic state. In a typical Faraday geometry, in which the magnetic field and light propagation direction are parallel, linearly polarized light generates coherences between Zeeman sublevels differing in the magnetic quantum number $m$ by even values, $\Delta m=2n_i$, where $n_i$ is an integer \cite{FootnoteQuantization}. Despite the fact that different types of coherences can be generated in atoms, it is not usually possible to separate contributions from coherences with particular $\Delta m$ to the NMOR signal that is observed around zero magnetic field, $B\approx 0$ \cite{Lobodzinski1996Multipole,Gateva2007Shape}. An important breakthrough in the study of NMOR was the application of frequency- \cite{Budker2002FMNMOR} and amplitude-modulated light \cite{Gawlik2006AMOR}, which resulted in the FM NMOR (frequency modulated nonlinear magneto-optical rotation) and AMOR (amplitude modulated optical rotation) techniques. Application of the modulation technique enables the generation of given types of atomic coherences, i.e., coherences between Zeeman sublevels with specific $m$. By exploiting spatial symmetries of the atomic angular-momentum distribution associated with a given quantum state, it is possible to selectively create superpositions between sublevels differing in a magnetic quantum number $m$ by 2, 4, or even more, if only the system supports such coherences \cite{Yashchuk2003Selective,Pustelny2006Pump}. Information about the system's quantum state may also be obtained from the angular momentum distribution by analyzing time-dependent rotation of the polarization plane of light measured at non-zero magnetic field. Thus, in addition to selective generation and detection of the coherences, the technique constitutes a powerful tool in analyzing the evolution of a quantum state of a system. In particular, it allowed detailed investigations of relaxation processes of ground-state coherences in atoms contained in a paraffin-coated vapor cell \cite{Budker2005Antirelaxation,Seltzer2008Testing,Pustelny2006Influence}. This article presents the investigations on the generation, evolution, and detection of long-living ground-state observables (non-equilibrium population distribution and ground-state coherences, the latter represented by non-diagonal density matrix elements between the ground-state sublevels) in atomic vapor subjected to a longitudinal magnetic field. 
The measurements are performed in the pump-probe arrangement with one beam used for the creation of atomic polarization and another beam employed for the detection of the system's quantum state. With this arrangement we create the $\Delta m=2$ Zeeman coherences in non-zero magnetic fields. By appropriate tuning of the pumping laser, the coherences are generated in the $F=1$ and $F=2$ hyperfine ground states of $^{87}$Rb. These coherences evolve in the external magnetic field with the frequency determined by the energy splitting between respective sublevels (the Larmor frequency), and are continuously probed with CW light. We study the generation and evolution of the coherences at various pumping and probing conditions. Using magnetic fields such that the nonlinear-Zeeman splitting of the ground-state sublevels is comparable to or exceeds the relaxation rate of this state coherences, we resolve and selectively analyze all three $\Delta m=2$ coherences generated in the $F=2$ state. The article is organized as follows. The next section presents a theoretical approach demonstrating the relation between ground-state Zeeman coherences and nonlinear magneto-optical rotation with amplitude-modulated light. Section \ref{sec:Experimental} describes experimental apparatus, while Sec.~\ref{sec:Results} presents the results and their analysis. Final remarks are collected in Sec.~\ref{sec:Conclusions}. \section{Theoretical relation between AMOR signal and ground-state coherences\label{sec:Theory}} The optical properties of a medium are characterized by the complex refractive index $\eta$ \begin{equation} \eta=n+i\kappa=\sqrt{1+\chi}, \label{eq:ComplexRefractiveIndex} \end{equation} where $n$, $\kappa$, $\chi$ denote, respectively, the refractive index, absorption coefficient, and medium electric susceptibility. Knowledge of $\eta$ for different polarizations allows one to determine anisotropic properties of the medium. In particular, the Faraday angle $\varphi$, which is the angle of polarization rotation upon transition through a medium, may be calculated using refractive indices $n_+$ and $n_-$ of two circular polarizations $\sigma^+$ and $\sigma^-$ \begin{equation} \varphi=\frac{\omega L}{2c}(n_+-n_-), \label{eq:Rotation} \end{equation} where $\omega$ is the light frequency, $L$ the length of the medium, and $c$ the speed of light. In order to calculate the susceptibility of a medium, and hence the refractive indices and Faraday angle, the density-matrix formalism may be used \cite{Boyd2003} \begin{equation} \chi_\pm=\frac{N}{E_0}\sum_{m=-F}^{F}d_{mm\pm 1}\rho_{m\pm 1m}, \label{eq:Chi} \end{equation} where $\rho_{m\pm 1m}$ is the optical coherence between the ground-state sublevel $m$ and excited state sublevel $m\pm 1$ and $d_{mm\pm 1}$ is the corresponding dipole matrix element, $N$ the number density of atoms, and $E_0$ is the amplitude of the electric field of the light. Substituting Eq.~(\ref{eq:Chi}) into Eq.~(\ref{eq:ComplexRefractiveIndex}) and expanding the equation into the power series with respect to $\chi$ allows one to link the Faraday angle with the density matrix elements \begin{widetext} \begin{equation} \varphi=\frac{\omega L N}{2cE_0}\text{Re}\left(\sum_{m=-F}^{F-1}d_{mm+1}\rho_{m+1m}-\sum_{m=-F+1}^{F}d_{mm-1}\rho_{m-1m}\right). 
\label{eq:RotationFinal} \end{equation} \end{widetext} In order to calculate the density matrix elements, the evolution of the density matrix $\rho$, governed by the Liouville equation, needs to be considered \begin{equation} \dot{\varrho}=-\frac{i}{\hbar}[H,\varrho]-\frac{1}{2}\{\Gamma,\varrho\}+\Lambda, \label{eq:LiouvilleEquation} \end{equation} where $H$ is the total Hamiltonian of the system, $\Gamma$ the relaxation operator, $\Lambda$ the repopulation operator describing $\rho$-independent mechanisms such as transit and wall relaxation \cite{FootnoteNeglectingDecay}, and the square and curly brackets denote the commutator and anticommutator \cite{Auzinsh2009Light}. In the considered case, the total Hamiltonian of the system is a sum of the unperturbed Hamiltonian $H_0$ and the Hamiltonians describing interactions of atoms with light $V_l$ and magnetic field $V_B$ \begin{equation} H=H_0+V_l+V_B. \label{eq:Hamiltonian} \end{equation} Assuming the $y$-polarized laser beam, the Hamiltonian describing the light-atom interaction may be written in the dipole approximation as \begin{equation} V_l=-\vect{E}\cdot\vect{d}=-E_0e^{-i\omega t}d_y=-\frac{1}{\sqrt{2}}E_0e^{-i\omega t}(d_-+d_+), \label{eq:HamiltonianLight} \end{equation} where $\vect{E}$ is the electric field of the light of amplitude $E_0$, $\vect{d}$ denotes the electric dipole moment operator, and $d_\pm$ are the dipole-matrix elements corresponding to the transitions between the ground- and excited-state Zeeman sublevels differing in the magnetic quantum number $m$ by $\pm 1$. The magnetic-field interaction Hamiltonian $V_B$ may be presented in the form \begin{equation} V_B=-\vect{\mu}\cdot\vect{B}, \label{eq:HamiltonianMagnetic} \end{equation} where $\vect{\mu}$ is the magnetic dipole moment operator, and $\vect{B}$ the magnetic-field induction. Since in our geometry the magnetic field is applied along the quantization axis, it changes the Zeeman-sublevel energies removing their degeneracy but does not mix them. For an alkali atom, one can calculate the energy shift $\hbar\omega_B$ of a given ground-state magnetic sublevel $m_F$ using the Breit-Rabi formula \begin{equation} \omega_B(m_F)=\frac{E_u}{\hbar}-\frac{\Delta_{HF}}{2(2I+1)}\pm\frac{\Delta_{HF}}{2}\sqrt{1+\frac{4m}{2I+1}x+x^2}, \label{eq:BreitRabi} \end{equation} where $x=\omega_L/\Delta_{HF}$ with $\omega_L=g_F\mu_BB/\hbar$ being the Larmor frequency, $g_F$ the Land\'e factor of the state with a total angular momentum $F$, $\mu_B$ the Bohr magneton, $\Delta_{HF}$ the energy splitting of the ground-state hyperfine doublet, $E_u$ the "center of mass" energy of the level with no hyperfine interaction, and the signs $\pm$ correspond to two hyperfine components $F=I\pm 1/2$. Combining Eqs.~(\ref{eq:Hamiltonian})-(\ref{eq:BreitRabi}) with Eq.~(\ref{eq:LiouvilleEquation}) allows one to formulate equations describing time evolution of a given density-matrix element $\rho_{a b}$ \begin{equation} \dot{\rho}_{a b}=-i\omega_{a b}\rho_{ab}+i\sum_j\left( \Omega_{aj}\rho_{j b}-\rho_{aj}\Omega_{j b}\right)- \Gamma_{ab}\left(\rho_{ab}-\rho_{ab}^{eq}\right), \label{eq:TimeExolutionDMelement} \end{equation} where $\Omega_{aj}=E_{aj}d_{aj}/\sqrt{2}\hbar$ is the Rabi frequency associated with the transition between $|a\rangle$ and $|j\rangle$ states, $\omega_{ab}=\omega_B(m_a)-\omega_B(m_b)$ denotes the frequency splitting of the levels, and $\rho_{ab}^{eq}$ the equilibrium value of $\rho_{ab}$. 
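To illustrate how the evolution equation (\ref{eq:TimeExolutionDMelement}) is used in practice, the following minimal numerical sketch (not the code used for the simulations reported below) integrates the density-matrix evolution for a toy three-level $\Lambda$ system in the rotating frame: two ground-state sublevels split by a frequency $\delta$ and one excited state, with purely illustrative Rabi and relaxation rates in scaled units.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Toy Lambda system: ground sublevels |1>, |2> split by delta, excited state |3>.
# All rates are illustrative (scaled units), not the experimental parameters.
delta, Delta = 10.0, 0.0     # ground-state splitting and optical detuning
Omega        = 20.0          # Rabi frequency of both optical couplings
Gamma_e      = 100.0         # relaxation rate of optical coherences/excited state
gamma_g      = 1.0           # ground-state relaxation rate

# Rotating-frame Hamiltonian (in units of hbar) and element-wise rates Gamma_ab
H = np.array([[0.0,    0.0,    -Omega],
              [0.0,    delta,  -Omega],
              [-Omega, -Omega, -Delta]], dtype=complex)
G = np.array([[gamma_g, gamma_g, Gamma_e],
              [gamma_g, gamma_g, Gamma_e],
              [Gamma_e, Gamma_e, Gamma_e]])
rho_eq = np.diag([0.5, 0.5, 0.0]).astype(complex)  # unpolarized ground state

def rhs(t, y):
    rho = y.reshape(3, 3)
    # rho-dot = -i[H, rho] - Gamma*(rho - rho_eq), element-wise relaxation
    drho = -1j * (H @ rho - rho @ H) - G * (rho - rho_eq)
    return drho.ravel()

sol = solve_ivp(rhs, (0.0, 5.0), rho_eq.ravel(), rtol=1e-8, atol=1e-10)
rho = sol.y[:, -1].reshape(3, 3)
print("ground-state coherence |rho_12| =", abs(rho[0, 1]))
print("ground-state populations:", rho[0, 0].real, rho[1, 1].real)
\end{verbatim}
The sketch only mimics the structure of Eqs.~(\ref{eq:LiouvilleEquation}) and (\ref{eq:TimeExolutionDMelement}); the analytical treatment used in this work proceeds perturbatively, as described next.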
In order to demonstrate the role of ground-state coherences in rotation of the polarization plane, the explicit formulae for optical coherences need to be written. To derive such analytical formulae, we use the perturbation approach, where the density matrix is expanded into the power series of the electric field of the light \begin{equation} \rho=\sum_{n=0}^\infty\rho^{(n)}E_0^n. \end{equation} In such a case, the relation for time evolution of a given density matrix element takes the form \begin{equation} \begin{split} \dot{\rho}_{a b}^{(n)}=-&i\omega_{a b}\rho_{ab}^{(n)}+i\sum_j\left( \Omega_{a j}\rho_{j b}^{(n-1)}-\rho_{a j}^{(n-1)}\Omega_{j b}\right)-\\-&\Gamma_{ab}\left(\rho_{ab}^{(n)}-\rho_{ab}^{eq}\right). \end{split} \label{eq:DMperturbationGeneral} \end{equation} Even though in this paper nonlinear magneto-optical rotation with amplitude-modulated light is studied, we first derive analytical formulae for populations, optical and Zeeman coherences when CW light is used (NMOR). Such an approach facilitates the understanding of the problem and provides an insight into the modulated case. By simple generalization of the CW formulae one can write relations for the density-matrix elements when AM light is applied. \subsection{Low magnetic field, unmodulated light} Application of the unmodulated light allows one to employ the steady-state approximation ($\dot{\rho}\equiv 0$). Within this approximation the density-matrix elements calculated in the first three orders of the expansion are given by \begin{equation} \begin{split} \rho^{(0)}_{aa}&=\rho^{eq}_{aa},\\ \sigma^{(1)}_{ab}&=\frac{\Omega_{ab}}{\Delta\omega_{ab}+i\Gamma_{ab}}\rho_{aa}^{(0)},\\ \rho^{(2)}_{aa}&=-i\frac{\Omega_{ab}\sigma^{(1)}_{ba}-\sigma_{ab}^{(1)}\Omega_{ba}}{\Gamma_{aa}},\\ \rho^{(2)}_{aa'}&=\frac{\Omega_{ab}\sigma^{(1)}_{ba'}-\sigma_{ab}^{(1)}\Omega_{ba'}}{-\omega_{aa'}+i\Gamma_{aa'}},\\ \sigma^{(3)}_{ab}&=\frac{\Omega_{ab}\rho_{aa}^{(2)}+\rho_{aa'}^{(2)}\Omega_{a'b}}{\Delta\omega_{ab}+i\Gamma_{ab}}, \end{split} \label{eq:DMperturbation} \end{equation} where $\rho_{aa}$ denotes the population of the ground-state sublevel $|a\rangle$, $\sigma_{ab}$ the amplitude of the optical coherence $\rho_{ab}$ ($\rho_{ab}=\sigma_{ab} e^{-i\omega t}$), $\rho_{aa'}$ the ground-state coherence, $\Delta\omega_{ab}=\omega-\omega_{ab}$ the light detuning from the transition between the ground state $|a\rangle$ and the excited state $|b\rangle$, and the superscript to $\rho$ denotes the order of the expansion \cite{FootnoteRWA}. From Eqs.~(\ref{eq:DMperturbation}), the amplitude of the third-order optical-coherence $\sigma_{ab}^{(3)}$ depends on the ground-state Zeeman coherence $\rho_{aa'}^{(2)}$ which is characterized by the ground-state relaxation rate $\Gamma_{aa'}$. Since the ground-state relaxation is much slower than the relaxation of the optical coherence, $\Gamma_{aa'}\ll \Gamma_{ab}$, the third-order optical coherence manifests in the absorption and dispersion via spectral features much narrower than those associated with the first-order coherence. Moreover, Eqs.~(\ref{eq:DMperturbation}) additionally show that not only the widths but also the light-intensity dependences differentiate between the two contributions; while the first-order optical coherence is linear in $\Omega$, and thus is the amplitude of the electric field $E_0$, the third-order coherence depends on $\Omega^3$. 
Thus, based on the intensity dependences, one demonstrates that the first-order optical coherence is responsible for linear optical phenomena, such as polarization rotation independent of light intensity, while the third-order coherence determines nonlinear phenomena like NMOR and EIT. As shown above, NMOR is associated with ground-state Zeeman coherences. In particular, for the $F=2$ state, three different ground-state coherences ($\rho_{-2,0}$, $\rho_{-1,1}$, $\rho_{0,2}$) contribute to NMOR. Thus, the effect may be used for investigation of the ground-state coherences, in particular, their generation and evolution under interaction with external fields. It is noteworthy, however, that at low magnetic fields independent studies of a given coherence are not possible because all contributions have the same dependence on the magnetic field and light intensity. Such a distinction would be possible at higher magnetic fields, where the Zeeman sublevels depend nonlinearly on the magnetic field; however, at such fields the NMOR signals are not observed. \subsection{Stronger magnetic field, modulated light} When the intensity of light is sinusoidally modulated, the electric field of the light takes the form \begin{equation} E=\frac{E_0e^{-i\omega t}}{\sqrt{2}}\sqrt{1-a_m\cos\omega_m t}, \label{eq:IntensityModulation} \end{equation} where $a_m$ is the modulation amplitude and $\omega_m$ the modulation frequency. It may be easily shown that for full modulation ($a_m=1$), relation (\ref{eq:IntensityModulation}) simplifies to $E=E_0\exp(-i\omega t)\sin(\omega_m t/2)$ \cite{FootnoteModulation}. This modification of the light spectrum results in a change of the light-atom interaction Hamiltonian \begin{equation} \begin{split} V_l=&-\left(e^{-i(\omega+\omega_m/2)t}+e^{-i(\omega-\omega_m/2)t}\right)\left(d_{-}+d_{+}\right). \end{split} \label{eq:HamiltonianLightModulated} \end{equation} Application of the modulated light rules out the standard steady-state approximation. This is caused by the appearance of a time-dependent Rabi frequency $\Omega(t)=\Omega\left(e^{i\omega_m t/2}+e^{-i\omega_m t/2}\right)$ that drives oscillation of the density-matrix elements at different frequencies. In order to solve Eq.~(\ref{eq:TimeExolutionDMelement}), the density matrix needs to be expanded into a Fourier series in half the modulation frequency, $\omega_m/2$, \begin{equation} \rho=\sum_{k=-\infty}^\infty\rho^{(k)}e^{ik\omega_m/2 t}, \label{eq:DMExpansion} \end{equation} where $\rho^{(k)}$ is the $k$-th Fourier coefficient. Introduction of the Fourier expansion (\ref{eq:DMExpansion}) into Eqs.~(\ref{eq:DMperturbation}) enables application of the steady-state approximation for a given Fourier coefficient of the density matrix.
In such a case, one can calculate the time-dependent density matrix elements $\rho^{(l,k)}$ [superscripts $(l,k)$ denote the $l$-th order of the perturbation expansion and the $k$-th order of the Fourier expansion in half the modulation frequency] \begin{widetext} \begin{equation} \begin{split} \rho^{(0,k)}_{aa}&=\rho^{eq}_{aa}\delta_{k,0},\\ \sigma^{(1,k)}_{ab}&=\Omega_{ab}\frac{\rho_{aa}^{(0,k-1)}+\rho_{aa}^{(0,k+1)}}{\Delta\omega_{ab}-k\omega_m/2+i\Gamma_{ab}},\\ \rho^{(2,k)}_{aa}&=\frac{\Omega_{ab}\left(\sigma^{(1,k-1)}_{ba}+\sigma^{(1,k+1)}_{ba}\right)-\Omega_{ba}\left(\sigma_{ab}^{(1,k-1)}+\sigma_{ab}^{(1,k+1)}\right)} {-k\omega_m/2+i\Gamma_{aa}},\\ \rho^{(2,k)}_{aa'}&=\frac{\Omega_{ab}\left(\sigma^{(1,k-1)}_{ba'}+\sigma^{(1,k+1)}_{ba'}\right)-\Omega_{ba'}\left(\sigma_{ab}^{(1,k-1)}-\sigma_{ab}^{(1,k+1)}\right)} {-\omega_{aa'}-k\omega_m/2+i\Gamma_{aa'}},\\ \sigma^{(3,k)}_{ab}&=\frac{\Omega_{ab}\left(\rho_{aa}^{(2,k-1)}+\rho_{aa}^{(2,k+1)}\right) +\Omega^{(1)}_{a'b}\left(\rho_{aa'}^{(2,k-1)}+\rho_{aa'}^{(2,k+1)}\right)}{\Delta\omega_{ab}-k\omega_m/2+i\Gamma_{ab}}, \end{split} \label{eq:OpticalCoherenceFourier} \end{equation} \end{widetext} where $\delta_{lm}$ is the Kronecker delta. Although the relations for CW and AM light [Eqs.~(\ref{eq:DMperturbation}) and (\ref{eq:OpticalCoherenceFourier})] are similar, there are some significant differences between them. The first is the appearance of cross terms that couple different orders of the Fourier expansion. For instance, the $k$-th order populations and Zeeman coherences couple to the $(k\pm 1)$-th orders of optical coherences. Since the largest density-matrix elements are those with low $k$ (in zeroth order the only non-zero elements are the ground-state populations), the coupling to the higher-order density-matrix elements is weaker. This enables truncation of the formally infinite series (\ref{eq:DMExpansion}) at some finite $k_c$ (usually no larger than 5). The $k$-$(k\pm 1)$ dependence additionally results in the vanishing of some density-matrix elements. It may be shown that populations and Zeeman coherences are nonzero only at even $k$ (for odd $k$, $\rho_{aa'}^{(l,k)}=0$), while the only non-zero optical coherences are those evaluated at odd $k$. The last difference manifests itself in the appearance of the $-k\omega_m/2$ term in the denominators of the formulae for the density-matrix elements, which leads to the generation of additional resonances of the density-matrix elements vs. the modulation frequency. Assuming that the whole energy splitting of the ground-state Zeeman sublevels is due to the magnetic field, the resonance arises at non-zero magnetic field; the time-dependent rotation of the polarization plane arises when the Larmor splitting of the levels coincides with a given multiple of half the modulation frequency ($k\omega_m/2=-\omega_{aa'}$). For $F>1$ and low magnetic field all $\Delta m=2$ ground-state coherences are generated with equal efficiency (same energy splitting of the levels) and a single AMOR resonance is observed. At stronger fields, i.e., when the nonlinear Zeeman splitting of the sublevels is comparable to or exceeds the ground-state relaxation rate, each $\Delta m=2$ coherence has a different resonance frequency. It is this difference which allows selective addressing of a given ground-state Zeeman coherence.
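As a rough illustration of this resonance condition, the sketch below evaluates the magnitude of the $\left(-\omega_{aa'}-k\omega_m/2+i\Gamma_{aa'}\right)$ denominator appearing in Eqs.~(\ref{eq:OpticalCoherenceFourier}) for three $\Delta m=2$ coherences whose splittings are assumed to differ by a few relaxation rates; all parameter values are purely illustrative and are not taken from the experiment.
\begin{verbatim}
import numpy as np

# Illustrative (assumed) parameters: ground-state relaxation rate, Larmor
# frequency, and small shifts of the three Delta m = 2 splittings.
gamma   = 2*np.pi*30.0                    # rad/s
omega_L = 2*np.pi*460.0e3                 # rad/s
splittings = 2*omega_L + np.array([-4*gamma, 0.0, 4*gamma])

k = 2                                     # k = 2 Fourier component of the coherence
omega_m = 2*omega_L + np.linspace(-10, 10, 4001)*gamma   # modulation-frequency scan

for w0, name in zip(splittings, ("lower pair", "central pair", "upper pair")):
    # resonant when k*omega_m/2 matches the magnitude of the sublevel splitting
    amp = 1.0 / np.abs(w0 - k*omega_m/2 + 1j*gamma)
    peak = omega_m[np.argmax(amp)] / (2*np.pi)
    print(f"{name}: resonance at omega_m/2pi = {peak/1e3:.3f} kHz")
\end{verbatim}
Each coherence thus produces a Lorentzian feature centered at its own splitting, which is the mechanism exploited below to address the coherences individually.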
In order to calculate dynamic nonlinear magneto-optical rotation, i.e., the AMOR signal, one needs to combine all Fourier coefficients of the density matrix and introduce them into Eq.~(\ref{eq:Chi}) \begin{equation} \chi_\pm(t)=\frac{N\text{Tr}\left(\sum_{k=-k_c}^{k_c}\rho^{(k)}e^{ik\omega_m/2t}d_\pm\right)}{E_0\left(e^{-i\omega_m/2 t}+e^{i\omega_m/2 t}\right)}. \label{eq:SusceptibilityDymanic} \end{equation} Multiplying Eq.~(\ref{eq:SusceptibilityDymanic}) by $\sin(m\omega_m t)$ or $\cos(m\omega_m t)$ and integrating over the modulation period gives \begin{equation} \begin{split} \chi_{\pm,in}^{(m)}&=\int_0^{2\pi/m\omega_m}\chi_\pm(t)\sin(m\omega_m t)\,dt,\\ \chi_{\pm,quad}^{(m)}&=\int_0^{2\pi/m\omega_m}\chi_\pm(t)\cos(m\omega_m t)\,dt, \end{split} \label{eq:SusceptibilityInQuad} \end{equation} where $m$ is the harmonic number. This allows one to find formulae for the in-phase and quadrature amplitudes of the electric susceptibility at a given harmonic of the modulation frequency. Substituting Eqs.~(\ref{eq:SusceptibilityInQuad}) into Eq.~(\ref{eq:ComplexRefractiveIndex}) and then into Eq.~(\ref{eq:RotationFinal}) enables calculation of the amplitude of the time-dependent nonlinear magneto-optical rotation (the AMOR signal). It should be noted that the AMOR signal measured in our experiment, i.e., at the first harmonic of the modulation frequency ($m=1$), is described by the third-order optical coherence and $k=1$. \section{Experimental apparatus\label{sec:Experimental}} The layout of the experimental apparatus is shown in Fig.~\ref{fig:Setup}. A paraffin-coated, buffer-gas-free cylindrical glass cell, 2~cm in diameter and 2~cm long, contains an isotopically enriched sample of $^{87}$Rb. The cell is heated to 50$^\circ$C by a non-magnetic resistive oven, providing an atomic density of about $5\times 10^{10}$~atoms/cm$^3$ \cite{FootnoteDesorption}. The cell is placed inside a three-layer mu-metal magnetic shield reducing the external, uncontrollable magnetic fields by a factor greater than $10^4$. The residual fields are compensated with two sets of orthogonal magnetic-field coils: one for the first-order magnetic-field gradients and another for the second-order gradients. An additional solenoid is used to generate a highly homogeneous and well-controlled magnetic field along the light-propagation direction, which is varied within a range of $\pm 1$~G. The rubidium atoms interact with two co-propagating, linearly polarized light beams: the pump and the probe. Both beams are generated with the same external-cavity diode laser, but their intensities are controlled independently. The laser-light frequency is monitored with a saturated-absorption-spectroscopy system and can be stabilized to a particular transition of the Rb $D1$ line (795~nm) with a dichroic-atomic-vapor laser lock \cite{Corwin1998DAVLL}. The intensity of the pump light is modulated with a single-pass acousto-optical modulator (AOM) optimized for the first-order diffraction. Application of the AOM enables modulation of the light with arbitrary frequency, amplitude, and waveform. It also leads to a frequency shift of the pump light relative to the probe by 80~MHz. After traversing the AOM, the pump light illuminates the atoms contained in the vapor cell. The atoms are simultaneously probed with the unmodulated light beam, split off from the main beam before the AOM. A balanced polarimeter situated after the shield is employed to analyze the polarization state of the probe. A small angle between the beams allows blocking of the pump before the polarimeter.
The polarimeter consists of a Glan polarizer and two photodiodes. The polarimeter differential signal is demodulated with a lock-in amplifier at the first harmonic of the modulation frequency. This signal is then electronically divided by twice the sum of the photodiode signals which, for not-too-large rotations, yields the amplitude of the polarization rotation [$\varphi\approx(I_1-I_2)/2(I_1+I_2)$, where $I_{1,2}$ are the respective light intensities in the first and second channel of the polarimeter]. Finally, the signal is stored on a computer, which also controls the light modulation and the magnetic-field strength. \begin{figure}[h] \includegraphics[width=\columnwidth]{Setup.eps} \caption{Experimental setup. D is the detector, IO the optical isolator, AOM the acousto-optical modulator, P the polarizer, BS the beam splitter, $\lambda/2$, $\lambda/4$ the half- and quarter-waveplates, respectively, and A is the iris.} \label{fig:Setup} \end{figure} \section{Results and discussion\label{sec:Results}} Figure~\ref{fig:AMORsignal} shows the amplitude and phase of the AMOR signal measured vs. the modulation frequency for the pump tuned to the center of the $F=2\rightarrow F'=1$ transition. \begin{figure}[h] \includegraphics[width=\columnwidth]{AMORsignal.eps} \caption{Amplitude and phase of the AMOR signal measured vs. the modulation frequency. For a magnetic field of about 640~mG the AMOR signal is split into three resonances due to the quadratic Zeeman effect. Each resonance corresponds to a different atomic superposition of the ground-state magnetic sublevels. The insets depict the unsplit resonance at $B=19$~mG, where the contributions from all superpositions superimpose. Both signals were measured for a pump power of 11~$\mu$W, light tuned to the center of the $F=2\rightarrow F'=1$ transition, and a probe power of 3~$\mu$W.} \label{fig:AMORsignal} \end{figure} For lower magnetic fields, the amplitude dependence is characterized by a single Lorentzian resonance and the phase is described by an asymmetric curve, both centered at twice the Larmor frequency (insets to Fig.~\ref{fig:AMORsignal}). For stronger fields, the single AMOR resonance splits into three resonances: the largest central resonance and two smaller resonances shifted symmetrically with respect to the central one. The splitting is also observed in the corresponding phase dependence. Each of the resonances corresponds to a different atomic superposition of an individual pair of ground-state magnetic sublevels. As discussed in Sec.~\ref{sec:Theory}, the appearance of the AMOR resonance(s) is associated with the generation of ground-state Zeeman coherences. The process is most efficient when the modulation frequency matches the frequency splitting of the magnetic sublevels differing in the magnetic quantum number $m$ by 2, $\Delta m=2$. In order to calculate the splitting of the sublevels, we expand Eq.~(\ref{eq:BreitRabi}) in a power series in $x$ up to second order, which for the $F=2$ state of $^{87}$Rb ($I=3/2$) gives \begin{equation} \omega_{m,m'}\approx (m-m')\omega_L-(m^2-m'^2)\frac{\omega_L^2}{\Delta_{HF}}. \label{eq:MageneticSplitting} \end{equation} For the three pairs of $\Delta m=2$ sublevels in the $F=2$ state, one obtains \begin{equation} \begin{split} \omega_{-2,0}\approx&\ 2\omega_L-4\frac{\omega_L^2}{\Delta_{HF}},\\ \omega_{-1,1}\approx&\ 2\omega_L,\\ \omega_{0,2}\approx&\ 2\omega_L+4\frac{\omega_L^2}{\Delta_{HF}}.
\end{split} \label{eq:MageneticSplittingThree} \end{equation} The first terms in Eqs.~(\ref{eq:MageneticSplittingThree}) arise from the linear Zeeman effect, while the second ones appear due to the quadratic Zeeman effect. For weak magnetic fields, the contribution from the nonlinear effect is significantly smaller than the ground-state relaxation rate ($4\omega_L^2/\Delta_{HF}\ll \gamma$). In that case, the AMOR resonances associated with the individual coherences overlap and appear as a single resonance (insets to Fig.~\ref{fig:AMORsignal}). For stronger magnetic fields, the sublevel-splitting frequencies differ sufficiently and the separation of the resonances is observed, as seen in Fig.~\ref{fig:AMORsignal}. The amplitude of a given resonance is determined by the amplitude of the corresponding coherence and the appropriate dipole matrix elements. Thus, using the NMOR signal together with relations~(\ref{eq:OpticalCoherenceFourier})-(\ref{eq:SusceptibilityInQuad}), determination of the amplitude of the coherence is possible. In Fig.~\ref{fig:NonlinearZeeman}, the measured splitting of the resonances is presented as a function of the magnetic field. \begin{figure}[htb!] \includegraphics[width=\columnwidth]{NLZsplitting.eps} \caption{(Color online) Average splitting of the AMOR resonances, $(\omega_r-\omega_l)/2$, measured vs. the Larmor frequency (squares). The observed dependence is in very good agreement with the theoretical curve plotted based on Eq.~(\ref{eq:MageneticSplittingThree}) (solid line) and the data from Ref.~\cite{Steck87Rb}. The data were measured with a single sinusoidally modulated light beam of 8-$\mu$W power acting as pump and probe simultaneously.} \label{fig:NonlinearZeeman} \end{figure} For low magnetic fields, the resonances are unresolved and no splitting is measured \cite{FootnoteMeasuringSplitting}. The splitting becomes measurable for fields corresponding to Larmor frequencies above a few tens of kHz and increases quadratically with $B$ and $\omega_L$, which is in good agreement with the predictions of Eqs.~(\ref{eq:MageneticSplittingThree}). In order to verify the model developed in Sec.~\ref{sec:Theory}, we simulate AMOR signals at the first harmonic of the modulation frequency for $\omega_m\approx 920$~kHz and a magnetic field of 640~mG (Fig.~\ref{fig:AMORsimulated}). \begin{figure}[htb!] \includegraphics[width=\columnwidth]{AMORsimulated.eps} \caption{Amplitude and phase of the AMOR signal simulated for the $F=2\rightarrow F'=1$ transition using Eqs.~(\ref{eq:OpticalCoherenceFourier}). The signal reveals all the salient features of the real signal (Fig.~\ref{fig:AMORsignal}), i.e., the resonance splitting and similar amplitude relations.} \label{fig:AMORsimulated} \end{figure} The simulations reveal all the features observed experimentally (Fig.~\ref{fig:AMORsignal}). For a magnetic field inducing significant nonlinear Zeeman splitting of the levels, three AMOR resonances associated with $\rho_{-2,0}$, $\rho_{-1,1}$, and $\rho_{0,2}$ are observed. The strongest resonance is related to the coherence between the $|-1\rangle$ and $|1\rangle$ sublevels. The remaining two resonances are equally split with respect to the central one and have equal amplitudes. Also the phase of the simulated signal follows the experimentally measured dependence, crossing zero at the center of the largest resonance. We attribute the observed deviation of the simulated dependences from the measured ones to higher-order processes, such as power broadening and saturation, which are not included in our model.
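A quick back-of-the-envelope check of Eqs.~(\ref{eq:MageneticSplittingThree}) for the conditions of the simulation can be done with a few lines of code; the sketch below uses standard $^{87}$Rb constants and is only approximate (it does not account for the exact field calibration).
\begin{verbatim}
# Approximate check of the quadratic-Zeeman splitting for the F = 2 state of
# 87Rb at B = 640 mG (textbook constants; field calibration not accounted for).
g_F      = 0.5          # Lande factor of the F = 2 ground state (approximate)
mu_B_h   = 1.3996e6     # Bohr magneton / h in Hz/G
Delta_HF = 6.8347e9     # ground-state hyperfine splitting in Hz
B        = 0.640        # magnetic field in G

nu_L  = g_F * mu_B_h * B            # Larmor frequency (Hz)
nu_c  = 2 * nu_L                    # central AMOR resonance, omega_{-1,1}/(2 pi)
shift = 4 * nu_L**2 / Delta_HF      # quadratic-Zeeman shift of side resonances (Hz)

print(f"2*nu_L = {nu_c/1e3:.0f} kHz")   # roughly 0.9 MHz, close to omega_m ~ 920 kHz
print(f"shift  = {shift:.0f} Hz")       # roughly 120 Hz side-resonance displacement
\end{verbatim}
The resulting side-resonance displacement of roughly 120~Hz is consistent with the resonance splitting quoted below for the 640-mG simulations.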
The splitting of the AMOR resonance into three resonances illustrates the possibility of selectively addressing a particular $\Delta m=2$ ground-state coherence. Figure~\ref{fig:DMsimulations} presents simulations of the density matrix at the first harmonic of the modulation frequency calculated at lower and higher magnetic fields, corresponding to single and split resonances (Fig.~\ref{fig:AMORsimulated}). The top row is calculated for 10~mG, whereas the bottom one for 640~mG, which corresponds to a resonance splitting of 120~Hz, $(\omega_r-\omega_l)/2\approx 4\gamma$. The simulations are performed for modulation frequencies equal to $2\omega_L-3\gamma$ (left column), $2\omega_L$ (middle column), and $2\omega_L+3\gamma$ (right column). As shown in the top row, exact tuning of the modulation frequency to twice the Larmor frequency results in generation of all $\Delta m=2$ coherences with the highest, and equal, efficiency, while detuning it away uniformly reduces the amplitudes of all of them. In stronger fields the situation is different (bottom row). While for $\omega_m=2\omega_L$ the $\rho_{-1,1}$ coherence is generated with the highest efficiency, the two other coherences are created significantly less efficiently. One can selectively increase the amplitude of either of these coherences by appropriate tuning of the modulation frequency. For $\omega_m=2\omega_L-3\gamma$ (left column), the $\rho_{-2,0}$ coherence is most efficiently generated, while for $\omega_m=2\omega_L+3\gamma$ (right column), the $\rho_{0,2}$ coherence has the strongest amplitude. This dependence of the amplitude of a specific coherence on the modulation frequency demonstrates the possibility of selectively addressing and controlling specific $\Delta m=2$ coherences. \begin{figure}[htb!] \includegraphics[width=\columnwidth]{DM.eps} \caption{(Color online) Calculated absolute values of the density-matrix elements ($|\rho_{m,m'}|$) of the $F=2$ ground state under interaction with linearly-polarized, AM light tuned to the $F=2\rightarrow F'=1$ transition. The top row corresponds to a magnetic field of about $10$~mG, when the energy splittings of all $\Delta m=2$ sublevel pairs are equal, and the bottom row to a much stronger field of 640~mG, causing AMOR-resonance splittings of 120~Hz, i.e., four times the ground-state relaxation rate. The central column shows the density-matrix elements for $\omega_m=2\omega_L$, while the side columns correspond to $\omega_m=2\omega_L\pm 3\gamma$, respectively.} \label{fig:DMsimulations} \end{figure} The AMOR technique is a powerful tool for the analysis of the quantum state of the system. In particular, scanning the modulation frequency, fitting the data with three Lorentzian curves, and taking into account the strengths of the specific transitions allows one to extract information about the amplitudes of the density-matrix elements. This measurement, however, requires a modulation-frequency scan over a range strongly exceeding the splitting of the resonances. To avoid scanning the modulation frequency, one may perform free-induction-decay measurements, in which information about the coherence amplitude is extracted from the time-dependent rotation signal (for more details see Ref.~\cite{Acosta2008Production}). As described above, the theoretical model developed in Sec.~\ref{sec:Theory} is valid only for low light powers; for higher light intensities such effects as the ac Stark shift start to play an important role, e.g., by broadening the AMOR resonances.
Due to this fact, not only the modulation frequency but also the pump and probe powers determine the amplitudes of the Zeeman coherences and hence the AMOR signals. In Fig.~\ref{fig:Amplitude} the amplitude of the AMOR signal is presented vs. the pump- and probe-light powers. \begin{figure}[htb!] \includegraphics[width=\columnwidth]{Amplitude.eps} \caption{(Color online) AMOR signal as a function of the pump- and probe-light power. The data were measured at 850~mG. The top inset shows a cross-section of the plot, i.e., the amplitude vs. pump power measured with a probe power of 2~$\mu$W. Similarly, the bottom inset shows the data taken with varied probe power and a fixed pump power of 4~$\mu$W.} \label{fig:Amplitude} \end{figure} For low pump-light power the AMOR amplitude is small (see the upper inset to Fig.~\ref{fig:Amplitude}), which reflects the low efficiency of the ground-state coherence generation. This efficiency, and hence the AMOR-resonance amplitude, increases with the pump power and reaches its maximum at about 30~$\mu$W. The appearance of the maximum and the further decrease of the amplitude result from higher-order effects, such as saturation, hyperfine pumping, and repumping/regeneration of the existing Zeeman coherences. For instance, hyperfine pumping leads to a decrease of the number of atoms in the $F=2$ ground state, as atoms are transferred to the $F=1$ ground state via spontaneous emission. Degradation of the AMOR signal is also associated with the applied modulation. The efficiency of the ground-state coherence generation, and hence the number of atoms existing in a particular quantum state, follows the light modulation. At low light intensities, this results in a sinusoidal variation of the number of atoms evolving with specific phases, which produces a strong anisotropy of the medium. For more intense light, i.e., when saturation processes become significant, the efficiency does not reproduce the sinusoidal modulation pattern. In particular, higher harmonics of the modulation arise in the efficiency of coherence generation, and the number of atoms polarized during successive pumping phases does not follow the sinusoidal dependence. This results in a weaker anisotropy of the medium and a decrease of the AMOR-resonance amplitude for higher pump-light powers (Fig.~\ref{fig:Amplitude}). The dependence of the AMOR signals on the probe-light intensity is different. As seen in the lower inset to Fig.~\ref{fig:Amplitude}, the amplitude of the AMOR resonance decreases with the probe-beam power over the whole accessible range of powers. This is caused by the probe light being resonant with the medium (the 80-MHz difference between the probe and pump frequencies caused by the AOM is negligible relative to the Doppler broadening of the transition). In such a case, the probe perturbs the atoms; absorption of a photon from the probe beam results in a new quantum state which, in general, is different from the state created initially with the pump-light photon. In that way, the probe-light absorption decoheres the system and acts as an additional relaxation mechanism reducing the AMOR-signal amplitude. It was shown in Sec.~\ref{sec:Theory} that the efficiency of the ground-state coherence generation strongly depends on the pump-light tuning (dependence on $\Omega$). In particular, various pump-power dependences may be observed for light coupling a given ground state to different excited states.
Such an example is depicted in Fig.~\ref{fig:AmplitudeRatio}, which shows the ratio of the amplitude of the central resonance to the averaged amplitudes of the side resonances vs. the pump-light power for the $F=2\rightarrow F'=1$ and the $F=2\rightarrow F'=2$ transitions. \begin{figure}[htb!] \includegraphics[width=\columnwidth]{AmplitudeRatio.eps} \caption{(Color online) (a) Ratio of the amplitude of the central component of the AMOR signal to the averaged amplitude of the side components vs. the light power; (b) ratio of the geometric mean of the $|-1\rangle$ and $|1\rangle$ sublevel populations to the average of the geometric means of the $|-2\rangle$ and $|0\rangle$ populations and the $|0\rangle$ and $|2\rangle$ populations vs. the normalized Rabi frequency. Signals were measured at a magnetic field of 790~mG with a probe power of 1.2~$\mu$W, while the calculations were performed for a single unmodulated light beam using the rate equations. The red points and the red curve correspond to the $F=2\rightarrow F'=1$ excitation, while the black squares and the black curve correspond to the $F=2\rightarrow F'=2$ transition.} \label{fig:AmplitudeRatio} \end{figure} At low pump powers the AMOR signals observed at these two transitions are similar in shape and amplitudes, with well-resolved triple-component structures [see the left-hand inset of Fig.~\ref{fig:AmplitudeRatio}(a)]. However, the pump-power dependences of the resonance amplitudes measured at the two transitions are very distinct. While for the $F=2\rightarrow F'=1$ transition the ratio increases with the pump-light power [see the top right-hand inset of Fig.~\ref{fig:AmplitudeRatio}(a)], the opposite dependence is observed at the other transition. In the latter case, the ratio decreases with the power [bottom right-hand inset of Fig.~\ref{fig:AmplitudeRatio}(a)], even below one, where the side resonances have larger amplitudes than the central resonance. The observed dependences reflect the different behavior of the $\rho_{-11}$ coherence and the $\rho_{-20}$ and $\rho_{02}$ coherences at the two transitions. Among other effects, the behavior originates from optical pumping and population redistribution between Zeeman sublevels within a given hyperfine state. Since for the $F=2\rightarrow F'=1$ transition the maximal Clebsch-Gordan coefficients are those from the sublevels with maximal $m$ ($m=\pm F$), these states are most efficiently depopulated. Simultaneously, the depletion of the other sublevels is smaller, which effectively leads to an aligned state with the highest population in the $m=0$ sublevel and the lowest population in the $m=\pm 2$ sublevels. The change in the population distribution is reflected in the amplitudes of the Zeeman coherences associated with these states. For the $F=2\rightarrow F'=1$ transition and intense light, the $\rho_{-20}$ and $\rho_{02}$ coherences have lower amplitudes, while the $\rho_{-11}$ coherence has a higher amplitude. The opposite is true for the $F=2\rightarrow F'=2$ transition, for which the $m=0$ sublevel is most efficiently depleted. In order to qualitatively verify the mechanism described above, the geometric means of the populations of the magnetic sublevels constituting the coherences were calculated vs. the pump intensity. Such a geometric mean of the two sublevels' populations sets an upper limit on the amplitude of the coherence between the sublevels (Schwarz inequality), $|\rho_{\alpha\beta}|\leq\sqrt{\rho_{\alpha\alpha}\rho_{\beta\beta}}$.
Figure~\ref{fig:AmplitudeRatio}(b) shows the ratio of $\sqrt{\rho_{-1-1}\rho_{11}}$ to $(\sqrt{\rho_{-2-2}\rho_{00}}+\sqrt{\rho_{00}\rho_{22}})/2$ for the $F=2$ state coupled with light to the $F'=1,2$ states. Populations of the sublevels were calculated based on the rate equations using CW light and neglecting hyperfine optical pumping, saturation, etc. As seen in Fig.~\ref{fig:AmplitudeRatio}, the simulations qualitatively reproduce the observed dependence, i.e., an increase of the ratio for the $F=2\rightarrow F'=1$ transition and a decrease for the other transition. In alkali atoms there are two hyperfine ground states supporting long-living quantum coherences. In particular, in $^{87}$Rb there are $F=1$ and $F=2$ ground states separated by about 6.8~GHz that can be selectively addressed by appropriate tuning of the pump and probe lasers. Figure \ref{fig:DifferentTransitions} presents the AMOR signals measured at the $F=2\rightarrow F'=1$ and $F=1\rightarrow F'=1$ transitions for the same set of experimental parameters. \begin{figure}[h] \includegraphics[width=\columnwidth]{DifferentTransitions.eps} \caption{(Color online) AMOR signals recorded for light tuned to the $F=2\rightarrow F'=1$ transition (left curve) and the $F=1\rightarrow F'=1$ transition (right curve). The difference in the position of the AMOR resonances arises from the nuclear contribution to the Land\'e factors. Signals were measured at a magnetic field of about $780$~mG and pump- and probe-light powers of 6~$\mu$W and 3~$\mu$W, respectively.} \label{fig:DifferentTransitions} \end{figure} As shown, the two signals are significantly different; not only are the numbers of AMOR resonances associated with the $\Delta m=2$ coherences different (there is only one $\Delta m=2$ coherence in the $F=1$ ground state), but also their amplitudes are distinct. The latter difference originates from the transition probabilities and less efficient hyperfine pumping at the $F=1\rightarrow F'=1$ transition than at the $F=2\rightarrow F'=1$ one. Moreover, the positions of the AMOR resonances in a strong magnetic field are different. This results from the difference in the Land\'e factors of the two ground states, \begin{equation} \begin{split} g_{F=2}=&-\frac{1}{4}g_J+\frac{5}{4}g_I,\\ g_{F=1}=&\frac{1}{4}g_J+\frac{3}{4}g_I, \end{split} \label{eq:Splitting} \end{equation} where $g_J$ is the electron and $g_I$ the nuclear $g$-factor. Based on Eq.~(\ref{eq:Splitting}) it can be easily shown that although the splittings due to the electron spin are opposite and hence indistinguishable in the AMOR experiment, the nuclear-spin contributions are different, which leads to a separation of the resonances $\Delta\omega_m=2\times 2 g_I\mu_N B/\hbar$, where $\mu_N$ denotes the nuclear magneton. For a magnetic field of about 780~mG we predict a splitting of about 4.35~kHz, which is consistent with the experimentally measured value of 4.32~kHz. This difference in the frequencies of the AMOR resonances for the two transitions is important from the point of view of quantum-state engineering, since it offers an additional way of controlling and modifying the quantum state by specific tuning of the frequency of an additional RF field \cite{Chalupczak2010Optical}. Dependences of the ground-state Zeeman-coherence lifetimes on the pump and probe powers are presented in Fig.~\ref{fig:Lifetime}. \begin{figure}[htb!] \includegraphics[width=\columnwidth]{Lifetime.eps} \caption{Lifetime of the Zeeman coherence between $|-1\rangle$ and $|1\rangle$ ground-state sublevels vs.
pump- (a) and probe-light (b) powers. Increasing the power of either of the light beams results in a reduction of the lifetime of the quantum state, which manifests itself as a broadening of the AMOR resonances (see insets). The signals were measured for a magnetic field of about 780~mG with a probe power of 1.2~$\mu$W (a) and a pump power of 2.2~$\mu$W (b). The laser was tuned to the center of the Doppler-broadened $F=2\rightarrow F'=1$ transition.} \label{fig:Lifetime} \end{figure} The lifetimes $\tau$ were extracted from the AMOR resonance width as $\tau=1/(\pi\delta\omega_m)$, where $\delta\omega_m$ is the AMOR-resonance half-width at half maximum measured vs. the modulation frequency. Figure~\ref{fig:Lifetime} shows that raising the light power of either of the beams leads to a broadening of the AMOR resonance (see insets) and a shortening of the ground-state coherence lifetime. In order to determine the coherence lifetime that is not affected by the light, we performed a series of measurements of the AMOR signals at different pump and probe powers and double-extrapolated the resonance width to zero light powers. The double-extrapolated lifetime of the coherences studied in this experiment is equal to 13.2(12)~ms, which is determined by three relaxation mechanisms: collisions with the uncoated surfaces, mainly in the cell stem containing a rubidium metal droplet, spin-exchange collisions between rubidium atoms \cite{Budker2005Antirelaxation}, and temperature-dependent dephasing collisions with the cell wall coating \cite{Pustelny2008Magentometry}. Using a simple mathematical model, the relaxation rate due to collisions with uncoated surfaces was estimated at the level of $\approx 2\pi\times 8.3$~s$^{-1}$. At the same time, the relaxation rate associated with spin-exchange collisions, calculated based on Ref.~\cite{Happer1972Review}, is $2\pi\times 4.1$~s$^{-1}$. The remaining relaxation channel is most likely dephasing collisions of atoms with the coating. \section{Summary and conclusions\label{sec:Conclusions}} We have analyzed the possibility of generating quantum superpositions of ground-state Zeeman sublevels differing in the magnetic quantum number $m$ by 2. Since in the $F>1$ state the sublevels split nonlinearly with a magnetic field (nonlinear Zeeman effect), selective generation of coherences between specific sublevels by appropriate tuning of the modulation frequency is possible. In particular, it was shown that for magnetic fields such that the nonlinear magnetic-sublevel splitting exceeds the ground-state relaxation rate, selective addressing of the $\rho_{-20}$, $\rho_{-11}$, and $\rho_{02}$ coherences is possible. The efficiency of the coherence generation versus different experimental parameters, such as the modulation frequency, pump- and probe-light powers, and light frequency, was analyzed. We have shown that in our experimental setup the lifetime of the coherences exceeds 10~ms. Such a long coherence lifetime opens interesting possibilities for application of the coherences in quantum-state engineering and quantum computation. In this context, particularly interesting is the ability to modify the atomic quantum state by application of external fields, e.g., magnetic and/or RF fields. \begin{acknowledgments} The authors would like to express their gratitude to Andrew Park and Simon Rochester for stimulating discussions. The work was supported by the Polish Ministry of Science and Higher Education (grants N N202 074135 and N N202 175935).
Part of the work was carried out within the Foundation for Polish Science Team Programme, co-financed by the EU. \end{acknowledgments}
\section{Introduction} The study of lattice quantum spin systems is a difficult problem. These systems are important in condensed matter physics since they can describe strongly correlated electrons. At present, the domain of applicability of the existing analytical methods able to analyze them is still very restricted. To overcome this difficulty numerous groups have tried to study these models using computational techniques. \\ Two major techniques are currently used to study quantum spin systems numerically. The first is the exact diagonalisation method and the second is quantum Monte Carlo (for a review see \cite{rev}). Usually, both methods have a complexity which grows exponentially with the size of the system under study, so that meaningful results are difficult to obtain. In particular, quantum Monte Carlo can suffer from the negative sign problem, which becomes exponentially serious on large lattices and at small temperature. \\ In this work we develop a new algorithm which is able to suppress the sign problem in a quantum Monte Carlo simulation. A general description of this algorithm and a rigorous proof of its correctness are presented. This algorithm was briefly presented in \cite{letter}. It is based on a well-controlled approximation which has a linear complexity in the lattice size.\\ We have applied it to a simple 2-dimensional model with fermions to test its efficiency. This model has a serious sign problem if simulated with conventional quantum Monte Carlo algorithms, but if simulated with our algorithm the sign problem is completely eliminated. The generality of this algorithm allows us to apply it to any model, opening new perspectives in the numerical study of quantum spin systems.\\ This paper is organized as follows: first we define the formalism and prove some lemmas (sections 2 and 3) that we need for the description of the algorithm. The algorithm is then discussed and a proof of correctness is given (section 3). Finally, in section 4 the results of the example are presented. \section{Formalism for quantum spin systems on a finite lattice} We consider a d-dimensional finite lattice $\Lambda\subset {\sf Z \!\!\! Z}^d$. With each site of the lattice we associate a particle with a finite number $M$ of internal degrees of freedom. A Hilbert space ${\cal H}_x$ is associated with each site $x\in\Lambda$ of the lattice and is isomorphic to ${\sf C \!\!\! C}^M$. We choose an orthonormal basis $\{e_\sigma^x\}_{\sigma\in I}$ with $I=\{1,...,M\}$ in the Hilbert space ${\cal H}_x$. The Hilbert space ${\cal H}_\Lambda$ over the lattice $\Lambda$ is given by the tensor product \begin{equation} {\cal H}_\Lambda=\bigotimes_{x\in\Lambda}{\cal H}_x \end{equation} The set $\Omega_\Lambda$ of all configurations ${\omega_\Lambda}$ in $\Lambda$ is defined as the assignment of an element $\sigma_x$ to each lattice site $x\in\Lambda$. The set $\{e_{\omega_\Lambda}\}_{\omega_\Lambda\in\Omega_\Lambda}$ of all vectors \begin{equation} e_{\omega_\Lambda}=\bigotimes_{x\in\Lambda}e_{\sigma_x}^x \end{equation} forms an orthonormal basis of ${\cal H}_\Lambda$.\\ A Fock space is constructed to be able to incorporate the statistics of the particles \begin{equation} {\cal F}_P({\cal H}_\Lambda)=P\bigoplus_{n\geq 0}{\cal H}_\Lambda^n\label{fock} \end{equation} where ${\cal H}_\Lambda^n$ is the $n$-fold tensor product of ${\cal H}_\Lambda$ with itself, ${\cal H}_\Lambda^0={\sf C \!\!\! C}$ and $P$ is the projection onto the symmetric or antisymmetric subspace for bosons or fermions, respectively. 
\bea &&P_{Bose}(e_{\omega_\Lambda})=\frac{1}{n!}\sum_\pi \bigotimes_{x\in\Lambda}e_{\sigma_\pi(x)}^{\pi(x)}\nonumber\\ &&P_{Fermi}(e_{\omega_\Lambda})=\frac{1}{n!}\sum_\pi sgn(\pi)\bigotimes_{x\in\Lambda}e_{\sigma_\pi(x)}^{\pi(x)} \end{eqnarray} where $\pi$ is a permutation of the order of the points $(x,\sigma)\in\Lambda\times I$ and $sgn(\pi)$ is $+1$ if the permutation $\pi$ is even and $-1$ if it is odd. We choose an arbitrary order of the points in $\Lambda$ denoted by $x\prec x'$ if $x$ precedes $x'$ and we define the order of the points in $\Lambda\times I$ by $(x,\sigma)\preceq (x',\sigma')$ if $x\prec x'$ or if $x=x'$, $\sigma\leq \sigma'$.\\ The annihilation and creation operators on ${\cal F}_P({\cal H}_\Lambda)$ are defined as \begin{equation} c_{x\sigma}=Pa_{x\sigma}P\label{ann} \end{equation} and \begin{equation} c^*_{x\sigma}=Pa^*_{x\sigma}P\label{cre} \end{equation} with \bea &&a_{x\sigma}(e_{\sigma_1}^{x_1}\otimes...\otimes e_{\sigma_n}^{x_n}):= \sqrt{n}(e_{\sigma}^{x},e_{\sigma_1}^{x_1})e_{\sigma_2}^{x_2}\otimes...\otimes e_{\sigma_n}^{x_n}\nonumber\\ &&a^*_{x\sigma}(e_{\sigma_1}^{x_1}\otimes...\otimes e_{\sigma_n}^{x_n}):= \sqrt{n+1}\,e_{\sigma}^{x}\otimes e_{\sigma_1}^{x_1}\otimes...\otimes e_{\sigma_n}^{x_n} \end{eqnarray} where $(e_{\sigma}^{x},e_{\sigma_1}^{x_1})$ denotes the scalar product of the vectors $e_{\sigma}^{x}$ and $e_{\sigma_1}^{x_1}$. Furthermore, we have $a_{x\sigma}|0>:=0$ and $a_{x\sigma}^*|0>=e^x_\sigma$ where $|0>$ denotes the zero particle state (vacuum).\\ For bosons and fermions the operators defined through (\ref{ann}) and (\ref{cre}) satisfy the canonical commutation relations. For bosons we have \bea &&[c_{x\sigma},c_{x'\sigma'}]=[c^*_{x\sigma},c^*_{x'\sigma'}]=0\nonumber\\ &&[c_{x\sigma},c^*_{x'\sigma'}]=(e_\sigma^x,e_{\sigma'}^{x'}) \end{eqnarray} and for fermions \bea &&\{c_{x\sigma},c_{x'\sigma'}\}=\{c^*_{x\sigma},c^*_{x'\sigma'}\}=0\nonumber\\ &&\{c_{x\sigma},c^*_{x'\sigma'}\}=(e_\sigma^x,e_{\sigma'}^{x'}) \end{eqnarray} An orthonormal basis of ${\cal F}_P({\cal H}_\Lambda)$ is given by the vectors \bea |n_{x_1\sigma_1}...n_{x_k\sigma_k}>=&& \left(\bigotimes_{q_1=1}^{n_{x_1\sigma_1}}e_{\sigma_1}^{x_1}\otimes... \bigotimes_{q_k=1}^{n_{x_k\sigma_k}}e_{\sigma_k}^{x_k}\right)_{\prec}\label{bv} =\nonumber\\ &&=\frac{1}{\sqrt{\prod_{i=1}^k n_{x_i\sigma_i}!}} \left((c_{x_1\sigma_1}^*)^{n_{x_1\sigma_1}}...(c_{x_k\sigma_k}^*)^{n_{x_k\sigma_k}}\right)_{\prec}|0> \end{eqnarray} where $(...)_\prec$ denotes that the subscripts $(x_i\sigma_i)$ of the bracketed factors are ordered as discussed above.\\ We define the algebra of observables ${\cal A}_\Lambda$ as the algebra of operators generated by the identity and all monomials of even degree in the creation and annihilation operators associated with the lattice sites $x\in\Lambda$. 
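For readers who prefer a concrete picture of the ordered basis (\ref{bv}) and of the fermionic sign convention, the following minimal Python sketch (it is only an illustration and not part of the formalism above; the helper names are ours) encodes a basis vector for spinless fermions ($M=1$) as a tuple of occupation numbers over the ordered sites and applies $c^*_x$ and $c_x$ with the sign produced by anticommuting past the occupied sites that precede $x$ in the chosen order.
\begin{verbatim}
# Minimal sketch: spinless fermions on an ordered list of sites.
# A basis vector of the form (bv) is a tuple of occupation numbers n_x in {0, 1}.

def create(state, x):
    """Apply c*_x; return (sign, new_state), or None if the result vanishes."""
    if state[x] == 1:
        return None                      # site already occupied: c*_x |state> = 0
    sign = (-1) ** sum(state[:x])        # anticommute past occupied sites preceding x
    new = list(state)
    new[x] = 1
    return sign, tuple(new)

def annihilate(state, x):
    """Apply c_x; return (sign, new_state), or None if the result vanishes."""
    if state[x] == 0:
        return None                      # site empty: c_x |state> = 0
    sign = (-1) ** sum(state[:x])
    new = list(state)
    new[x] = 0
    return sign, tuple(new)

# Example: on the state (0, 1, 1) the operator c*_0 gives +(1, 1, 1),
# while c_2 gives -(0, 1, 0); the minus sign comes from the single
# occupied site preceding site 2 in the chosen order.
\end{verbatim}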
Two algebras of observables ${\cal A}_{\Lambda_1}$ and ${\cal A}_{\Lambda_2}$ satisfy locality \begin{equation} \forall A_1\in{\cal A}_{\Lambda_1},A_2\in{\cal A}_{\Lambda_2}: [A_1,A_2]=0 \mbox{ if } \Lambda_1\cap \Lambda_2 = \emptyset \end{equation} and inclusion \begin{equation} {\cal A}_{\Lambda_1}\subseteq {\cal A}_{\Lambda_2}\mbox{ if } \Lambda_1 \subseteq \Lambda_2 \end{equation} The trace of an operator $A\in{\cal A}_\Lambda$ is defined by $Tr A =\sum_{v}<v|A|v>$ where $\{|v>\}$ is an orthonormal basis of ${\cal F}_P({\cal H}_\Lambda)$ of the form (\ref{bv}).\\ The dynamics of the system is described by a finite range Hamiltonian \begin{equation} H=\sum_{B\subset\Lambda} \Phi_B\label{hamiltonian} \end{equation} where $\Phi_B$ is a selfadjoint operator belonging to ${\cal A}_B$ defined on a bond $B\subset \Lambda$. In our work we consider only periodic boundary conditions. The definition of a bond $B$ is then adapted to our boundary conditions. If the basis vectors (\ref{bv}) are eigenstates of $H$ then the Hamiltonian describes a {\em classical} spin system, otherwise it describes a {\em quantum} spin system. The partition function of the statistical system is given by \begin{equation} Z=Tr e^{-\beta H}\label{Z} \end{equation} where $\beta$ is the inverse temperature. The expectation value of an observable $A\in{\cal A}_\Lambda$ is obtained by \begin{equation} <A>=\frac{1}{Z}Tr A e^{-\beta H} \end{equation} \section{Monte Carlo of quantum spin systems} \subsection{The Trotter formula} Classical spin systems can be simulated on a d-dimensional lattice using Monte Carlo algorithms. The weight $<{v}|e^{-\beta H}|{v}>$ contributing to the partition function can be directly evaluated since the basis vector (\ref{bv}) is an eigenvector of the Hamiltonian. The evaluation of the weights is essential for any Monte Carlo algorithm.\\ For a quantum spin system a Monte Carlo simulation is not straightforward because the basis vector (\ref{bv}) is not an eigenvector of the Hamiltonian any more and the evaluation of the exponential in the partition function becomes a very complex task. In fact, if one wants to perform any calculation directly using the d-dimensional lattice state $|v>$, the required storage and computer time grow like the dimension of the Fock space, which is exponential in the volume $|\Lambda|$ of the lattice. The only practical way to evaluate the weight $<v|e^{-\beta H}|{v}>$ is to apply the Trotter formula in a (d+1)-dimensional lattice, where the extra dimension is the discretisation of the inverse temperature (we call it time) \begin{equation} Z=\lim_{T\rightarrow\infty}Tr \left(e^{-\beta \epsilon H}\right)^T \label{trotter} \end{equation} where $\epsilon =1/T$. For a simulation we have to restrict the interval ${\cal I}=[0,T]\cap{\sf Z \!\!\! Z}$ to a finite set, approximating in this way the partition function. The systematic errors introduced by the approximation are of order $\epsilon^2$. 
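As a small, self-contained numerical check of this $\epsilon^2$ behaviour, the following Python sketch compares the exact trace with its Trotterized approximation. It is only an illustration under simplifying assumptions: two random non-commuting Hermitian $4\times 4$ matrices play the role of a splitting $H=H_1+H_2$ of the kind introduced just below, and no physical model is implied.
\begin{verbatim}
# Illustration of the O(epsilon^2) systematic error of the Trotter approximation
# of Z = Tr exp(-beta H) for a splitting H = H_1 + H_2 into non-commuting pieces.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); H1 = (A + A.T) / 2   # random Hermitian stand-in
B = rng.standard_normal((4, 4)); H2 = (B + B.T) / 2
beta = 1.0

Z_exact = np.trace(expm(-beta * (H1 + H2))).real
for T in (4, 8, 16, 32, 64):
    eps = 1.0 / T
    step = expm(-beta * eps * H1) @ expm(-beta * eps * H2)
    Z_T = np.trace(np.linalg.matrix_power(step, T)).real
    # the error shrinks roughly by a factor of 4 each time T is doubled
    print(T, abs(Z_T - Z_exact))
\end{verbatim}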
We consider a decomposition of the Hamiltonian (\ref{hamiltonian}) into a sum of $q$ operators \begin{equation} H=\sum_{i=1}^q H_i\label{hq} \end{equation} where each of them is a sum of commuting selfadjoint operators \begin{equation} H_i=\sum_{B_i}\Phi^i_{B_i}\label{dec} \end{equation} From the locality of the algebra of operators we can construct this sum so that the operators $\Phi^i_{B_i}$ satisfy $$[\Phi^i_{B_i},\Phi^{i'}_{B'_{i'}}]=0\mbox{ if } B_i\cap B'_{i'} =\emptyset $$ This means that for each label $i=1,...,q$ we consider the sum $ \sum_{B_i}$ over disjoint bonds.\\ We can define a classical (d+1)-dimensional spin system starting from the Trotter formula and inserting a projector \begin{equation} {\bf 1}_{(i,t)}=\sum_{v_i(t)}|v_i(t)><v_i(t)| \end{equation} for each time point $t\in{\cal I}$ and each exponential $e^{-\beta\epsilon H_i}$. Here the sum $\sum_{v_i(t)}$ goes over an orthonormal basis of ${\cal F}_P({\cal H}_\Lambda)$. The partition function is then approximated by \bea Z\simeq &&\sum_{v(0)}<v(0)|\left[\prod_{t=1}^T \left(\prod_{i=1}^q e^{-\beta \epsilon H_i} \right) \right]|v(0)>=\nonumber\\ =&& \sum_{v}<v(0)| \left[\prod_{t=1}^T \left(\prod_{i=1}^q e^{-\beta \epsilon H_i}|v_i(t)> <v_i(t)| \right) \right]|v(0)>\label{trotta} \end{eqnarray} Here the sum $\sum_{v(0)}$ goes over an orthonormal basis of ${\cal F}_P({\cal H}_{\Lambda})$. The sum $$\sum_{v}:=\sum_{v_1(0),...,v_q(0),...,v_1(T),...,v_q(T)}$$ goes over an orthonormal basis of ${\cal F}_P^{\cal T}({\cal H}_{\Lambda})$ where ${\cal T}=[0,q\times T]\cap{\sf Z \!\!\! Z}$ and \begin{equation} {\cal F}_P^{\cal T}({\cal H}_{\Lambda}):=\bigotimes_{\tau\in{\cal T}} {\cal F}_P({\cal H}_{\Lambda}) \end{equation} replicates the Fock space ${\cal F}_P({\cal H}_{\Lambda})$ at each slice $\tau\in {\cal T}$. Using the locality of the algebra of observables and the decomposition of the Hamiltonian (\ref{dec}) into commuting operators we can factorize the contributions in (\ref{trotta}) into a product of terms localized over the bonds $B_i$ \begin{equation} Z\simeq \sum_v w(v)\label{weight} \end{equation} where the weight of the vector $|v>\in {\cal F}_P^{\cal T}({\cal H}_{\Lambda})$ is given by the product of terms \begin{equation} w(v)=\prod_{t=0}^T\label{weightdef} \prod_{i=1}^q \prod_{B_i}w_{B_i}(v,(i,t)) \end{equation} with, for $t>0$, \begin{equation} w_{B_i}(v,(i,t))=<v_i(t)|e^{-\beta \epsilon \Phi^{i'}_{B_{i'}}}|v_{i'}(t')> \label{weightlocal} \end{equation} and \begin{equation} \left\{ \begin{array}{lll} t'=t,&i'=i+1 &\mbox{ if } 1\leq i<q\\ t'=t+1,&i'=1 &\mbox{ otherwise} \end{array} \right. \end{equation} and, for $t=0$, \begin{equation} w_{B_i}(v,(i,0))= \left\{ \begin{array}{lr} <v(0)|e^{-\beta \epsilon \Phi^{1}_{B_{1}}}|v_{1}(1)> & \mbox{if } i=1\\ 1&\mbox{otherwise} \end{array} \right. \end{equation} The weight $w_{B_i}(v,(i,t))$ is a real number. We can decompose the weight $w_{B_i}(v,(i,t))$ into its modulus and its sign. \begin{equation} w_{B_i}(v,(i,t))=sgn(w_{B_i}(v,(i,t)))\times |w_{B_i}(v,(i,t))| \label{weightsign} \end{equation} The evaluation of the weight $|w_{B_i}(v,(i,t))|$ requires computing the exponential $e^{-\beta\epsilon \Phi^{i'}_{B_{i'}}}$ of the operator $\Phi^{i'}_{B_{i'}}$. Because this operator is local, it is a finite matrix of dimension equal to the dimension of the Fock space localized on the bond $B_i$. The computational complexity of this operation is then very small. The sign of the weight $sgn(w_{B_i}(v,(i,t)))$ is not generally a local quantity. 
For fermionic systems its evaluation requires the calculation of the sign of the permutation $\pi$ associated to the order in (\ref{bv}) of the $<v_i(t)|$ and $|v_{i'}(t')>$ vectors. Also this operation is relatively easy. For bosonic systems the sign can also be negative but its evaluation remains local. \subsection{Example} As an example we consider a simple model of free quantum spin systems\footnote{This example was analyzed by quantum Monte Carlo in \cite{wiese}.}. We consider fermions living on the sites of a spatially 2-dimensional square lattice with periodic boundary conditions. We consider a particle with only one internal degree of freedom $I=\{1\}$. We choose $\{e_\sigma^x\}_{\sigma\in I}$ to be an orthonormal basis of the Hilbert space ${\cal H}_x$. The Fock space is constructed as in (\ref{fock}). Creation and annihilation operators $c_x^*$ and $c_x$ anticommute \begin{equation} \{c_x^*,c_y^*\} = 0, \,\,\, \{c_x,c_y\} = 0, \,\,\, \{c_x^*,c_y\} = \delta_{xy}. \label{anticommutators} \end{equation} for $x,y\in\Lambda$. We consider the Hamilton operator \begin{equation} H = \sum_{x,i} (c_x^* c_x + c_{x+\hat{i}}^* c_{x+\hat{i}} - c_x^* c_{x+\hat{i}} - c_{x+\hat{i}}^* c_x), \end{equation} where $\hat{i}$ is the unit vector in the $i$-direction. The model is trivial and can be solved in momentum space. However, when one tries to simulate it with a Monte Carlo algorithm it shows from the algorithmic point of view all the characteristic features of more complicated quantum spin systems. We can approximate the partition function $Z$ with the Trotter formula (\ref{trotta}) by decomposing the Hamiltonian into four pieces $H = H_1 + H_2 + H_3 + H_4$ (see eq. (\ref{hq})) and \begin{equation} H_1 = \sum_{x=(2m,n)} h_{x,1}, \,\,\, H_2 = \sum_{x=(m,2n)} h_{x,2}, \,\,\, H_3 = \sum_{x=(2m+1,n)} h_{x,1}, \,\,\, H_4 = \sum_{x=(m,2n+1)} h_{x,2}, \end{equation} where $h_{x,i} = c_x^* c_x + c_{x+\hat{i}}^* c_{x+\hat{i}} - c_x^* c_{x+\hat{i}} - c_{x+\hat{i}}^* c_x$ (see eq. (\ref{dec})). The individual contributions to a given $H_j$ commute with each other, but two different $H_j$ do not commute. Using (\ref{trotta}) we can write the grand canonical partition function \begin{equation} Z = Tr e^{- \beta (H-\mu N)} = \lim_{T \rightarrow \infty} Tr [ e^{- \beta \epsilon (H_1-\frac{\mu}{4}N) } e^{- \beta \epsilon (H_2-\frac{\mu}{4}N) } e^{- \beta \epsilon (H_3-\frac{\mu}{4}N) } e^{- \beta \epsilon (H_4-\frac{\mu}{4}N) } ]^T \end{equation} where $\mu$ is the chemical potential and $N=\sum_{x\in\Lambda}n_x$ with $n_x=c^*_xc_x$ is the particle number operator. Inserting a complete set of Fock states between the factors and using the locality of $h_{x,i}$ for bonds $<x,x+\hat i>$ we obtain matrix elements of the exponential (\ref{weightlocal}). \bea &&e^{- \beta \epsilon (h_{x,i}-\frac{\mu}{4}(n_x+n_{x+\hat i}))} = e^{- \beta \epsilon(1-\frac{\mu}{4})} \times \nonumber\\&& \left(\begin{array}{cccc} \exp(\beta\epsilon(1-\frac{\mu}{4})) & 0 & 0 & 0 \\ 0 & \cosh(\beta\epsilon) & \sinh(\beta\epsilon) {\sf \Sigma \!\!\! \Sigma} & 0 \\ 0 & \sinh(\beta\epsilon) {\sf \Sigma \!\!\! \Sigma} & \cosh(\beta\epsilon) & 0 \\ 0 & 0 & 0 & \exp(-\beta\epsilon(1-\frac{\mu}{4})) \end{array} \right) \label{matrix} \end{eqnarray} where the $4 \times 4$ matrix is in the Fock space basis $|00>$, $|01>$, $|10>$, $|11>$ defined on the bond $<x,x+\hat{i}>$. ${\sf \Sigma \!\!\! 
\Sigma} $ is the sign of the weight which has to be evaluated by looking at the permutations of the fermions with respect to the given order of the basis vectors (\ref{bv}). The partition function is approximated by a classical spin system over occupation numbers $n(x,\tau) = 0,1$ (here $\tau$ labels the $T\cdot q$ time slices in ${\cal T}$ with $q=4$) with periodic boundary conditions. The systematic errors due to the finite $T$ are of order $\epsilon^2$. \begin{equation} Z \simeq \prod_{x,\tau} \sum_{n(x,\tau) = 0,1} |w(n)| \, sgn(w(n)) . \end{equation} The modulus factor takes the form \begin{eqnarray} &&|w(n)| =\!\!\! \prod_{x=(2m,n),\tau=4p} w[n(x,\tau),n(x+\hat{1},\tau),n(x,\tau+1),n(x+\hat{1},\tau+1)] \nonumber \\ && \times \prod_{x=(m,2n),\tau=4p+1} w[n(x,\tau),n(x+\hat{2},\tau),n(x,\tau+1),n(x+\hat{2},\tau+1)] \nonumber \\ && \times \prod_{x=(2m+1,n),\tau=4p+2} w[n(x,\tau),n(x+\hat{1},\tau),n(x,\tau+1),n(x+\hat{1},\tau+1)] \nonumber \\ && \times \prod_{x=(m,2n+1),\tau=4p+3} w[n(x,\tau),n(x+\hat{2},\tau),n(x,\tau+1),n(x+\hat{2},\tau+1)], \nonumber \\ \end{eqnarray} with $w[0,0,0,0] = 1$, $w[1,1,1,1] = e^{-2 \beta\epsilon(1-\frac{\mu}{4})}$, $w[0,1,0,1] = w[1,0,1,0] = e^{-\beta \epsilon(1-\frac{\mu}{4})}\cosh(\beta\epsilon)$, $w[0,1,1,0] = w[1,0,0,1] = e^{-\beta \epsilon(1-\frac{\mu}{4})} \sinh(\beta\epsilon)$. All other values are zero. The occupation numbers $n(x,\tau) = 0,1$ are variables interacting with each other via the time-like plaquette couplings $w[n(x,\tau),n(x+\hat{i},\tau),n(x,\tau+1),n(x+\hat{i},\tau+1)]$. Each state is weighted by a sign factor which arises from the fermionic statistics. The sign factor $sgn[w(n)]$ is a product of terms $sgn[n(x,\tau),n(x+\hat{i},\tau),n(x,\tau+1),n(x+\hat{i},\tau+1)]$ associated with each plaquette interaction. One has $sgn[0,0,0,0]$ $= sgn[1,1,1,1]$ $= sgn[0,1,0,1]$ $= sgn[1,0,1,0] = 1$. A nontrivial sign $\pm 1$ may arise only for plaquette interactions of type $[0,1,1,0]$ and $[1,0,0,1]$. \subsection{The sign problem} The decomposition of the Hamiltonian into local terms (\ref{hq},\ref{dec}) allows us to treat the new (d+1)-dimensional spin system as a classical spin system with state space ${\cal F}_P^{\cal T}({\cal H}_\Lambda)$, partition function (\ref{weight}) and weight (\ref{weightdef}). The evaluation of the weight is of small complexity, contrary to the original d-dimensional quantum spin system (\ref{Z}). The price to pay is that the new classical system has a partition function (\ref{weight}) which does not in general have positive semidefinite weights $w(v)$. This fundamental difficulty is usually referred to as the "negative sign" problem. It is not related to any approximations in the Monte Carlo scheme but it describes the fact that the statistical error of the observables can become very large, increasing exponentially in the inverse temperature $\beta$ and lattice volume $|\Lambda\times {\cal T}|$.\\ \begin{definition}{\rm A {\em classical observable} is an operator $A\in{\cal A}_\Lambda$ acting on the Fock space ${\cal F}_P({\cal H}_\Lambda)$ defined at $t=0$ and diagonal with respect to the basis of the form (\ref{bv}). We denote by $A(v)$ the matrix element $A(v)=<v(0)|A|v(0)>$. }\label{cobs} \end{definition} For simplicity we restrict our discussion to classical observables. With some minor modifications our algorithm is applicable also to observables which are not diagonal with respect to the basis (\ref{bv}). 
We emphasize that, usually, the interesting observables are classical.\\ The expectation value of a classical observable can be written \begin{equation} <A>=\frac{\sum_v A(v)\,w(v)}{\sum_v w(v)}= \frac{\sum_v A(v)\,|w(v)|sgn(w(v))}{\sum_v |w(v)|sgn(w(v))} \label{ev} \end{equation} A Monte Carlo algorithm needs a positive semidefinite weight to be able to construct a Markov process. Redefining the observable by incorporating the sign into it, $\tilde A(v)=sgn(w(v))A(v)$, we can simulate a positive semidefinite classical spin system with statistical weight $|w(v)|$ and correct the measurement of the observable \begin{equation} <A>_w= \frac{\sum_v \tilde A(v)\,|w(v)|}{\sum_v |w(v)|}\times \frac{1}{<sgn(v)>_{|w|}} =:\frac{<A\,sgn>_{|w|}}{<sgn>_{|w|}}\label{evplus} \end{equation} where $<...>_w$ and $<...>_{|w|}$ denote the averages taken with the weights $w$ and $|w(v)|$, respectively. If the average sign is small there will be large cancellations making an accurate evaluation of $<A>$ impossible.\\ \subsection{Equivalent Monte Carlo algorithms} We consider a Monte Carlo algorithm $\phi_{|w|}$ which produces a Markov chain with state space $\{|v>\in{\cal F}_P^{\cal T}({\cal H}_{\Lambda})\}$ and equilibrium distribution $|w(v)|$. We assume that this algorithm generates a state $|v'>$ from a state $|v>$ with a transition probability $T(v'\leftarrow v)$ \begin{equation} \phi_{|w|}:|v>\rightarrow |v'> \end{equation} We assume stationarity \begin{equation} \sum_v T(v'\leftarrow v)|w(v)|=|w(v')|\label{stationarity} \end{equation} and irreducibility (ergodicity\footnote{Notice that only for Markov chains with a {\em finite} state space is "ergodic" used in the physics literature as a synonym for "irreducible" (see \cite{ergodic1}, Section 2.4). For Markov chains with a general state space the two notions have different meanings (\cite{ergodic3}, p.169).}) \begin{equation} T(v'\leftarrow v)>0,\,\,\,\,\forall |v>,|v'>\in {\cal F}_P^{\cal T}({\cal H}_{\Lambda})\mbox{ with $|w(v)|>0$ and $|w(v')|>0$}\label{irreducibility} \end{equation} to ensure the convergence to equilibrium. This algorithm can be realized in different ways. The explicit realization of it is for the moment irrelevant to the discussion. We denote the expectation value measured with the algorithm $\phi_{|w|}$ by eq. (\ref{evplus}).\\ If the average of the sign is small this algorithm will suffer from the sign problem as discussed above. It is important to notice that the value $A(v)$ of a classical observable $A$ does not depend on the complete state $|v>$ but only on the part living on the time slice at $t=0$. This is a crucial property that we will use for constructing an algorithm which is able to reduce the sign problem by substantially increasing the average sign. We first introduce some simple definitions and lemmas which allow us to construct this algorithm.\\ \begin{definition}{\rm A mapping \bea g:&&{\cal F}_P^{\cal T}({\cal H}_{\Lambda})\rightarrow {\cal F}_P^{\cal T}({\cal H}_{\Lambda})\nonumber\\ &&|v>\mapsto g|v> \end{eqnarray} is called {\em observable preserving} if for any classical observable $A$ its value is $g$-invariant, i.e. $A(v)=A(gv)$. Such a mapping is easily constructed: we can require that it does not change the state $|v>$ at $t=0$. } \label{preserving} \end{definition} \begin{lemma}{\rm We consider a set ${\cal G}$ of observable preserving and bijective mappings $g$. 
We define the new weight \begin{equation} \tilde w(v,g)=\frac{1}{2}(w(v)+w(gv))\label{tildew} \end{equation} then we obtain for the expectation value (\ref{ev}) of a classical observable \begin{equation} \frac{\sum_v A(v)w(v)}{\sum_v w(v)}= \frac{\sum_{g\in{\cal G}}\sum_v A(v)\tilde w(v,g)}{\sum_{g\in{\cal G}}\sum_v \tilde w(v,g)} \label{newev} \end{equation} } \label{lemmanewev} \end{lemma} {\small \underline{Proof}: Using the bijective property of $g$, it is easy to see that after summing over all states $|v>\in {\cal F}_P^{\cal T}({\cal H}_{\Lambda})$ we obtain $\sum_v w(v)=\sum_v w(gv)$. We denote by $|{\cal G}|$ the number of elements in ${\cal G}$ and after summation over all mappings $g\in {\cal G}$ we obtain that $\sum_v w(v)=\frac{1}{|{\cal G}|}\sum_{g\in{\cal G}}\sum_v \tilde w(v,g)$. Using the fact that $g$ is observable preserving (that means $A(v)=A(gv)$), in analogy as before we can see that $ \sum_v A(v) w(v)=\frac{1}{|{\cal G}|}\sum_{g\in{\cal G}}\sum_v A(v)\tilde w(v,g) $ so that (\ref{newev}) holds.$\Box$} \begin{definition}{\rm We call two Monte Carlo algorithms {\em equivalent} if for all classical observables $A$ their expectation values measured with both algorithms are equal. We call two Monte Carlo algorithms {\em $\delta$-quasi equivalent} if for all classical observables $A$ their expectation values measured with both algorithms are equal up to systematic errors of order $\delta$.} \end{definition} Using Lemma \ref{lemmanewev} it is clear that the expectation value of $A$ measured with a Monte Carlo algorithm $\phi_{|w|}$ is the same as the one measured with a Monte Carlo algorithm $\phi_{|\tilde w|}$, so that $\phi_{|w|}$ and $\phi_{|\tilde w|}$ are equivalent. The set of mappings ${\cal G}$ can contain an arbitrary number of elements. We suppose that we can construct a mapping $g$ so that the average of the sign measured with $\phi_{|\tilde w|}$, with $\tilde w$ defined as in (\ref{tildew}), satisfies \begin{equation} <sgn(v)>_{|\tilde w|}\,\,\,\,\geq\,\,\,\,<sgn(v)>_{|w|} \label{suppression} \end{equation} where $<...>_{|\tilde w|}$ and $<...>_{|w|}$ mean the expectation taken from the $|\tilde w|$ and $|w|$ distributions, respectively. If the average sign {\em substantially} increases, then it is evident that the sign problem is suppressed. Notice that the sign is not a classical observable because it is a global quantity depending on the state over the complete lattice $\Lambda\times {\cal T}$. Two equivalent Monte Carlo algorithms can thus have different sign expectation values. The sign problem is now shifted to the search for a mapping with the desired property. Such a mapping can be constructed by looking for clusters of spins to be flipped so that (\ref{suppression}) is satisfied with high probability during the Monte Carlo process. This is a nontrivial problem, since to ensure the bijectivity of the mapping the required work grows exponentially in the volume of the lattice $\Lambda\times {\cal T}$ because a deterministic cluster search is needed. Apart from some trivial examples, where the amount of work is not too excessive, this approach is not convenient.\\ However, the problem can be solved by constructing a mapping which is bijective with probability $1-\delta$ at each Monte Carlo iteration where $\delta$ is very small. 
We will see that the construction of such a mapping has a complexity which is linear in the volume of the lattice because a randomized cluster search can be used in connection with a hashing technique for testing the bijectivity. Since the mapping is not always bijective, during the Monte Carlo process a systematic error is produced, so that $\phi_{|\tilde w|}$ is only $\delta$-quasi equivalent to $\phi_{|w|}$. If we choose $\delta$ to be smaller than the statistical error and the systematic error $\epsilon^2$ introduced by the Trotter formula, the precision of our algorithm is sufficient. In the next subsection we will show how this mapping can be constructed explicitly. \subsection{Reduction of the sign problem} We want to construct a mapping $g$ which has the property (\ref{suppression}) and can be used to construct a Monte Carlo algorithm $\delta$-quasi equivalent to $\phi_{|w|}$. First we introduce some concepts which allow us to construct this mapping. \begin{definition}{\rm A mapping \bea g:&&{\cal F}_P^{\cal T}({\cal H}_{\Lambda})\rightarrow {\cal F}_P^{\cal T}({\cal H}_{\Lambda})\nonumber\\ &&|v>\mapsto g|v> \end{eqnarray} is called {\em compatible} in $|v>$ if $w(v)\neq 0$ implies $w(gv)\neq 0$.\label{compatible}} \end{definition} \begin{definition}{\rm A state $|v>$ of the Fock space ${\cal F}_P^{\cal T}({\cal H}_{\Lambda})$ can be rewritten as \begin{equation} |v>=:\bigotimes_{(x,\tau)\in\Lambda\times {\cal T}} |n_\sigma(x,\tau)> \end{equation} where $\sigma\in I$ is defined in section 2. A {\em flip} $\Xi_{(x,\tau)}$ with $(x,\tau)\in\Lambda\times {\cal T}$ is a mapping from ${\cal F}_P^{\cal T}({\cal H}_{\Lambda})$ to ${\cal F}_P^{\cal T}({\cal H}_{\Lambda})$ which locally transforms $|n_\sigma(x,\tau)>$ into some $|n'_{\sigma'}(x,\tau)>$ so that $\Xi_{(x,\tau)}\circ\Xi_{(x,\tau)}=1$. The composition of flips over a subset $\Omega\subset \Lambda\times {\cal T}$ is denoted by $\Xi_\Omega$. A subset ${\cal C}\subset\Lambda\times{\cal T}$ is called a {\em cluster} of $|v>$ if $\Xi_{\cal C}$ is compatible in $|v>$. A subset ${\cal C}\subset\Lambda\times{\cal T}$ is called a {\em preserving cluster} of $|v>$ if $\Xi_{\cal C}$ is compatible in $|v>$ and observable preserving. We call {\em macros} of $|v>$ the set of preserving clusters of a state $|v>$ with respect to a flip $\Xi$ and denote it by ${\cal M}(v,\Xi)$. For a given flip we just denote it by ${\cal M}(v)$ if the context allows that. We define an arbitrary {\em order of clusters} in ${\cal M}(v)$ and denote it by ${\cal C}\prec{\cal C}'$ if ${\cal C}$ precedes ${\cal C}'$. We consider the empty cluster $\emptyset\in{\cal M}(v)$ to be $\emptyset\succ {\cal C},\,\,\,\forall {\cal C}\in{\cal M}(v)$. } \label{clusterdef} \end{definition} We now proceed to the construction of the mapping $g$ with the desired properties. \begin{algo}{\rm We consider a hashing function $h$ which assigns in an arbitrary way a non-vanishing integer label to any state \bea h:&& {\cal F}_P^{\cal T}({\cal H}_{\Lambda})\rightarrow {\cal Z}\subset {\sf Z \!\!\! Z}\nonumber\\ &&|v>\mapsto h(v) \end{eqnarray} We suppose that we can store a hashing table $H_{table}$ with $|{\cal Z}|$ integer entries. Here $|{\cal Z}|$ denotes the number of different labels and $|{\cal Z}|\leq \dim ({\cal F}_P^{\cal T}({\cal H}_{\Lambda}))$. 
Given a state $|v>$ we define $g|v>$ with the following procedure: Let ${\cal M}'\subset{\cal M}(v)$ be an arbitrary subset of ${\cal M}(v)$ containing the empty cluster $\emptyset$, then{\tt \bea &&\mbox{select the first ${\cal C}\in{\cal M}'$}\nonumber\\ \mbox{repeat} &&\nonumber\\ &&\mbox{if ({\sf condition}={\bf true}) then} \nonumber\\ &&\,\,\,\,\,\mbox{$g|v>=\Xi_{\cal C} |v>$}\nonumber\\ &&\mbox{otherwise}\nonumber\\ &&\,\,\,\,\,\mbox{$g|v>=|v>$}\nonumber\\ &&\,\,\,\,\,\mbox{select next ${\cal C}\in{\cal M}'$}\nonumber\\ &&\mbox{end if}\nonumber\\ \mbox{until}&&\mbox{({\sf condition}={\bf true}) or (${\cal C}=\emptyset$)}\nonumber \end{eqnarray}} where we define the \begin{equation} \mbox{{\sf condition}}=\{(w(v)+w(\Xi_{\cal C} v)>0)\mbox{ {\bf and} } {\cal O}(v,\Xi_{\cal C} v)\}\label{ccc} \end{equation} The boolean function ${\cal O}(v,v')$ is defined by the following procedure {\tt \bea {\cal O}(v,v'):&& \mbox{if $H_{table}(h(v'))=0$ then $H_{table}(h(v')):=h(v)$ endif}\nonumber\\ &&\mbox{if $H_{table}(h(v'))\neq h(v)$ then output ${\cal O}$={\bf false}}\nonumber\\ &&\mbox{else output ${\cal O}$={\bf true} endif}\nonumber \end{eqnarray}} }\label{alg} \end{algo} The selection of the clusters in the macros follows the chosen order of the clusters. In practice there is no need to find all the clusters of a macros. One can select a point $(x,\tau)\in\Lambda\times {\cal T}$ and construct a cluster starting from it. During the construction a fixed list of random numbers can be used. It is important, however, that this list always remains the same every time one applies this procedure to a state $|v>$. Changing the list of points or the list of random numbers is equivalent to selecting a new mapping $g$ in ${\cal G}$. If the constructed cluster does not satisfy the {\sf condition} (\ref{ccc}) the next point in $\Lambda\times {\cal T}$ can be selected and a new cluster constructed, and this search is repeated until the {\sf condition} (\ref{ccc}) is satisfied. If the {\sf condition} (\ref{ccc}) is never satisfied the procedure can be stopped, for example, when all the points in $\Lambda\times {\cal T}$ have been tested once, and the original state is returned as the result. In this way the search for the cluster has a linear average complexity in the size of the lattice.\\ \begin{theorem}{\rm Let ${\cal G}$ be a set of mappings defined by the algorithm \ref{alg}. A weight $\tilde w$ is defined by \begin{equation} \tilde w(v,g)=\frac{1}{2}(w(v)+w(gv))\label{w2} \end{equation} We consider a Monte Carlo algorithm $\phi_{|\tilde w|}$ which produces a Markov chain with state space $\{\psi=(|v>,g)\in{\cal F}_P^{\cal T}({\cal H}_{\Lambda})\times {\cal G}\}$ and equilibrium distribution $|\tilde w(v,g)|$. We assume that this algorithm generates a state $\psi'=(|v'>,g')$ from a state $\psi=(|v>,g)$ with a transition probability $T(\psi'\leftarrow \psi)$. 
We assume stationarity \begin{equation} \sum_\psi T(\psi'\leftarrow \psi)|\tilde w(v,g)|=|\tilde w(v',g')|\label{stationarity2} \end{equation} and irreducibility \begin{equation} T(\psi'\leftarrow \psi)>0,\,\,\,\,\forall \psi,\psi'\in {\cal F}_P^{\cal T}({\cal H}_{\Lambda})\times {\cal G} \mbox{ with $|\tilde w(v,g)|>0$ and $|\tilde w(v',g')|>0$}\label{irreducibility2} \end{equation} to ensure the convergence to equilibrium.\\ Then this Monte Carlo is $\delta$-quasi equivalent to $\phi_{|w|}$ with $\delta=\frac{1}{|{\cal Z}|}$ and \begin{equation} <sgn(v)>_{|\tilde w|}\,\,\,\,\geq\,\,\,\,<sgn(v)>_{|w|}\label{46} \end{equation} }\label{teorema} \end{theorem} {\small \underline{Proof}: The flip $\Xi_{\cal C}$ is defined to be observable preserving (see definition \ref{clusterdef}), so that all $g$ are observable preserving as well. The boolean function ${\cal O}$ in the {\sf condition} (\ref{ccc}) guarantees us that the mapping $g$ maps a state $|v>$ bijectively to a state $|v'>$ with probability $1-O\left(\frac{1}{|{\cal Z}|}\right)$. Because of lemma \ref{lemmanewev} it is then clear that $\phi_{|\tilde w|}$ is equivalent to $\phi_{|w|}$ up to errors of order $\frac{1}{|{\cal Z}|}$ in the average of an observable, so that the two algorithms are $\delta$-quasi equivalent with $\delta=\frac{1}{|{\cal Z}|}$. The {\sf condition} (\ref{ccc}) in algorithm \ref{alg} guarantees us that $$ sgn(\tilde w(v))= \left\{\begin{array}{ll} sgn(w(v)+w(\Xi_{\cal C}(v)))=1&\mbox{if a cluster ${\cal C}$ satisfying} \\ &\mbox{{\sf condition} is found in ${\cal M}'$,}\\ sgn(w(v))&\mbox{otherwise} \end{array}\right\}\geq sgn(w(v)) $$ so that, after averaging, (\ref{46}) is satisfied. $\Box$}\\[0.5cm] It is important to notice that the systematic error of order $\delta$ introduced by this algorithm can be tuned by increasing the dimension $|{\cal Z}|$ of the hashing table. \subsection{Monte Carlo algorithm for the weight $\tilde w$.} A Monte Carlo algorithm for the spin system described by the weight $\tilde w(v,g)$ needs a method for updating the state $\psi$ to a new state $\psi'$ with a transition probability $\tilde T(\psi'\leftarrow \psi)$ which satisfies stationarity (\ref{stationarity2}) and irreducibility (\ref{irreducibility2}). The modulus of the weight $|\tilde w|$ is not local, contrary to $|w(v)|$ (see section 3.1), because after adding the term $w(\Xi_{\cal C} v)$ in $\tilde w$ we cannot factorize $\tilde w$ any more into a product like (\ref{weightdef}). To find an efficient updating method applicable directly to $\tilde w$ is a difficult task. One can, however, update the states $|v>$ using the Monte Carlo algorithm $\phi_{|w|}$ which is supposed to be known and then correct the weight distribution using an accept/reject global Metropolis test. \begin{algo}{\rm \label{alg2} Suppose we know a Monte Carlo algorithm $\phi_{|w|}$ for the distribution $|w(v)|$. For simplicity we suppose that the Monte Carlo algorithm $\phi_{|w|}$ satisfies irreducibility and the detailed balance condition \begin{equation} T(v'\leftarrow v)|w(v)|=T(v\leftarrow v')|w(v')| \label{dbal} \end{equation} where $T(v'\leftarrow v)$ is the transition probability of $\phi_{|w|}$. It is clear that stationarity follows from this condition. We consider a set of mappings ${\cal G}$ constructed as in algorithm \ref{alg}. 
We define the Monte Carlo algorithm $\phi_{|\tilde w|}$ for the equilibrium distribution $|\tilde w(v,g)|$ and the state space $\{\psi=(|v>,g)\in{\cal F}_P^{\cal T}({\cal H}_{\Lambda})\times {\cal G}\}$ by the procedure \bea &&\nonumber\\ &&\mbox{{\tt input the state $\psi=(|v>,g)$}}\nonumber\\ &&\mbox{{\tt generate $|v'>=\phi_{|w|}|v>$ using $\phi_{|w|}$}}\nonumber\\ &&\mbox{{\tt select randomly a mapping $g'\in{\cal G}$}}\nonumber\\ &&P_A(\psi',\psi)=\min\left(1,\left[\frac{|\tilde w(v',g')|\cdot|w(v)|}{|\tilde w(v,g)|\cdot|w(v')|}\right]\right)\nonumber\\ &&\mbox{{\tt accept $\psi'=(|v'>,g')$ as the new state with probability $P_A(\psi',\psi)$}}\nonumber \end{eqnarray} } \end{algo} \vspace{0.2cm} \begin{theorem}{\rm Let $\phi_{|\tilde w|}$ be defined as in algorithm \ref{alg2}. Then $\phi_{|\tilde w|}$ satisfies stationarity (\ref{stationarity2}) and irreducibility (\ref{irreducibility2}).}\label{teoremuccio} \end{theorem} {\small\underline{Proof}: Stationarity can be proven by showing that the detailed balance condition $$ \tilde T(\psi'\leftarrow \psi)|\tilde w(\psi)|=\tilde T(\psi\leftarrow \psi')|\tilde w(\psi')| $$ is satisfied by $\phi_{|\tilde w|}$ where $$ \tilde T(\psi'\leftarrow \psi)=T(v'\leftarrow v)\cdot P_A(\psi',\psi) $$ is the transition probability for $\phi_{|\tilde w|}$. We suppose that $P_A(\psi',\psi)<1$ for the states $|\psi>$ and $|\psi'>$. Using the property that $P_A(\psi,\psi')=1$ if $P_A(\psi',\psi)<1$ and eq. (\ref{dbal}) we have \bea \tilde T(\psi'\leftarrow \psi)|\tilde w(\psi)|&&=T(v'\leftarrow v)\cdot P_A(\psi',\psi) |\tilde w(\psi)|=T(v'\leftarrow v)\frac{|\tilde w(\psi')|\cdot|w(v)|}{|\tilde w(\psi)|\cdot|w(v')|}|\tilde w(\psi)|=\nonumber\\ &&=T(v'\leftarrow v)|w(v)|\frac{|\tilde w(\psi')|}{|w(v')|}= T(v\leftarrow v')|w(v')|\frac{|\tilde w(\psi')|}{|w(v')|}= \nonumber\\ &&=T(v\leftarrow v')|\tilde w(\psi')|=T(v\leftarrow v')\cdot P_A(\psi,\psi')|\tilde w(\psi')|=\nonumber\\ &&=\tilde T(\psi\leftarrow \psi')|\tilde w(\psi')| \nonumber \end{eqnarray} The case $P_A(\psi',\psi)=1$ and $P_A(\psi,\psi')<1$ is analogous. Irreducibility is clear because $\tilde w(\psi)\neq 0$ when $w(v)\neq 0$ so that $P_A(\psi',\psi)>0$ and because $\phi_{|w|}$ satisfies irreducibility. $\Box$}\\[0.8cm] If the set of mappings ${\cal G}$ contains only one element then the dimension of the hashing table has to be larger than the number of Monte Carlo iterations one desires to perform. In this way one avoids too many collisions in the hashing table. This, of course, is doable, but requires a lot of memory. If, however, the set of mappings ${\cal G}$ contains a huge number of elements then the hashing table can be small because the algorithm uses a selected mapping only for a short time and then selects a new one according to the acceptance probability $P_A$. If the set ${\cal G}$ is big enough, the probability that the algorithm selects the same mapping twice is infinitesimal so that there is no need to store the history of the hashing tables of old mappings.\\ Theorems \ref{teorema} and \ref{teoremuccio} show that a Monte Carlo simulation can be performed using $\phi_{|\tilde w|}$ and a classical observable $A$ can be measured using \begin{equation} <A>_w=\frac{<A\,sgn>_{|\tilde w|}}{<sgn>_{|\tilde w|}}+O\left(\frac{1}{|{\cal Z}|}\right) \end{equation} If the {\sf condition} (\ref{ccc}) is satisfied with high probability during the Monte Carlo process then the negative sign problem is eliminated. 
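For illustration, the accept/reject step of algorithm \ref{alg2} can be sketched in a few lines of Python. This is only a schematic rendering, not part of the algorithm's specification: the callables {\tt phi\_w\_update}, {\tt pick\_mapping}, {\tt w\_tilde\_abs} and {\tt w\_abs} are hypothetical placeholders for the model-specific update $\phi_{|w|}$, the random selection of $g'\in{\cal G}$, and the evaluation of $|\tilde w|$ and $|w|$.
\begin{verbatim}
import random

def sign_corrected_step(v, g, phi_w_update, pick_mapping, w_tilde_abs, w_abs):
    """One iteration of the sketched algorithm: update |v> with the known
    algorithm phi_|w|, draw a new mapping g' from G, and accept/reject with
    the global Metropolis probability P_A so that the chain samples |w~(v,g)|.
    All helper callables are hypothetical placeholders for model-specific code."""
    v_new = phi_w_update(v)        # |v'> generated by phi_|w|
    g_new = pick_mapping()         # randomly selected mapping g' in G
    ratio = (w_tilde_abs(v_new, g_new) * w_abs(v)) / (w_tilde_abs(v, g) * w_abs(v_new))
    if random.random() < min(1.0, ratio):
        return v_new, g_new        # accept psi' = (|v'>, g')
    return v, g                    # reject and keep psi = (|v>, g)
\end{verbatim}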
The systematic error $O\left(\frac{1}{|{\cal Z}|}\right)$ produced by the hashing technique used for checking the bijectivity of the mappings $g$ can be explicitly measured during the simulation. \section{Application to the example} We apply our algorithm \ref{alg2} to the example presented in section 3.2. We use for the updating of the states $|v>$ with respect to the distribution $|w(v)|$ a standard loop algorithm \cite{evertz,wiese} which we denote as $\phi_{|w|}$. A description of this algorithm applied to the example of section 3.2 is given in \cite{wiese}. The reduction of the sign problem is then realized using our algorithm \ref{alg2}.\\ The loop algorithm is essentially a cluster algorithm \cite{cluster} where a cluster ${\cal C}$ in the macros ${\cal M}(v)$ is selected with a certain probability so that an update of $|v>$ realized by a flip $\Xi_{\cal C}$ satisfies detailed balance. For completeness, we briefly describe the loop algorithm we have used in the example. We define the flip $\Xi_{(x,\tau)}$ in our example so that the occupation numbers $n(x,\tau)$ of points on the cluster are changed from 0 to 1 and vice versa: \bea &&\Xi_{(x,\tau)}|1>=|0>\nonumber\\ &&\Xi_{(x,\tau)}|0>=|1>\nonumber \end{eqnarray} The clusters in ${\cal M}(v)$ are constructed by searching for closed loops with the following algorithm.\\ \begin{algo} This algorithm finds a cluster ${\cal C}$ in ${\cal M}(v)$: \begin{enumerate} {\rm \item To start a loop one first selects a lattice point $(x,\tau)$. \item The occupation number $n(x,\tau)$ participates in two plaquettes, one before and one after $\tau$. For $n(x,\tau) = 1$ we consider the plaquette at the later time and for $n(x,\tau) = 0$ we consider the plaquette at the earlier time. \item The corresponding plaquette is characterized by the occupation numbers of four points in $\Lambda\times{\cal T}$. One of these points will be the next point on the loop. \begin{itemize} \item For a plaquette $[0,0,0,0]$ or $[1,1,1,1]$ the next point is with probability $p_1$ the time-like nearest neighbor of $(x,\tau)$, and with probability $1 - p_1$ the next-to-nearest (diagonal) neighbor of $(x,\tau)$ on the plaquette. \item For a plaquette $[0,1,0,1]$ or $[1,0,1,0]$ the next point on the loop is with probability $p_2$ the time-like nearest neighbor, and with probability $1 - p_2$ the space-like nearest neighbor of $(x,\tau)$. \item For a plaquette $[0,1,1,0]$ or $[1,0,0,1]$ the next point is with probability $p_3$ the diagonal neighbor, and with probability $1 - p_3$ the space-like nearest neighbor of $(x,\tau)$. \end{itemize} \item Once the next point on the loop is determined the process is repeated from 2 until the loop closes. \item The points on the closed loop determine the cluster ${\cal C}$. } \end{enumerate} \label{alg3} \end{algo} The set of all clusters determined by this algorithm for some arbitrary values of $p_1,p_2,p_3$ is only a subset of the macros ${\cal M}(v)$. 
This subset is, however, sufficient for the construction of our Monte Carlo algorithm.\\ If the start point in algorithm \ref{alg3} is chosen randomly and the probabilities $p_1,p_2$ and $p_3$ are chosen in the following way \begin{equation} \left(\begin{array}{c} p_1\\p_2\\p_3 \end{array}\right)= \left(\begin{array}{c} \frac{1}{2}(1+e^{-\beta\epsilon})\\ \frac{1}{2}(1+e^{-\beta\epsilon})/\cosh(\beta\epsilon)\\ (1-\frac{1}{2}(1+e^{-\beta\epsilon}))/\sinh(\beta\epsilon) \end{array}\right)\label{prob} \end{equation} an update of $|v>$ realized by a flip $\Xi_{\cal C}$ obeys detailed balance \cite{wiese}. The part of the weights proportional to $\exp(-\beta\epsilon(1 - \frac{\mu}{4}))$ is taken into account by a global Metropolis step. In the Metropolis step the cluster is flipped with probability $p = \mbox{min}(1,\exp(\beta (4 - \mu) W({\cal C})))$ where $W({\cal C}) = \frac{1}{4T} \sum_{(x,\tau) \in {{\cal C}}} (2n(x,\tau) - 1)$. This defines the loop algorithm $\phi_{|w|}$.\\ The search method of the clusters in algorithm \ref{alg} can be performed using algorithm \ref{alg3}, choosing the starting point $(x,\tau)$ from a list of points in $\Lambda\times {\cal T}$ and a list of random numbers. There, the choice of the probabilities $p_1,p_2$ and $p_3$ is arbitrary. However, to obtain a good acceptance $<P_A>$ it is convenient to use the probabilities defined in (\ref{prob}). The weight $\tilde w(v)$ is defined by algorithm \ref{alg} and eq. (\ref{w2}). For our runs we have used a hashing table with 10000 entries. The loop algorithm $\phi_{|w|}$ applied in algorithm \ref{alg2} defines our Monte Carlo algorithm $\phi_{|\tilde w|}$. In our simulation we have not found any collision in the hashing table so that the systematic error $\delta$ on the observables is zero.\\ The model in our example is trivial and can be solved in momentum space. This allows us to test our algorithm. By introducing $c_p^* = \frac{1}{L} \sum_x \exp(ipx) c_x^*$, $c_p = \frac{1}{L} \sum_x \exp(-ipx) c_x$, the Hamiltonian in momentum space becomes $H = \sum_p \hat{p}^2 c_p^* c_p$ with $\hat{p}_i = 2 \sin(p_i/2)$. In the grand canonical ensemble the expectation value of the occupation number is given by \begin{equation} \langle n_x \rangle = \frac{1}{Z} \mbox{Tr} [n_x \exp(- \beta (H - \mu N))] = \frac{1}{L^2} \sum_p \frac{1}{1 + \exp(\beta (\hat{p}^2 - \mu))}\label{exsol} \end{equation} In Table 1 we present the results of the Monte Carlo simulations performed with the loop algorithm and with our algorithm \ref{alg2} and we compare the obtained results with the exact solution (\ref{exsol}). We have applied the algorithms for various values of $\beta$ and $\mu$ at fixed lattice spacing $\beta\epsilon=1/16$. The results of both algorithms agree with the exact results within the error bars. It is evident that the sign problem becomes severe for the loop algorithm when the temperature is lowered or the chemical potential is increased. However, the sign always remains positive for our algorithm. \\ \section{Conclusion} We have presented a new algorithm for significantly improving quantum Monte Carlo simulations of models plagued by the negative sign problem. Its complexity is only linear in the volume of the lattice used for the simulation. The generality of this algorithm allows us to apply it to any quantum spin system. The efficiency of this algorithm was tested on a simple fermionic model. In this example the sign problem is solved. A more exhaustive analysis of the dynamics of the algorithm is under study. 
Applications of it to more physically interesting models are also planned.\\ {\Large {\bf Acknowledgments}}\\ I would especially like to thank N. Galli for discussions and help. I would also like to thank B. Jegerlehner for helpful comments and P. Weisz for reading the manuscript and for helpful comments.
\section{Introduction\label{section1}} Let $p$ be a prime, $\mathbf{Q}_{p}$ the $p$-adic number field and $\mathrm{SL}_{n}(\mathbf{Q}_{p})$ the special linear group. It is well-known that $\mathrm{SL}_{n}(\mathbf{Q}_{p})$ acts without fixed points on the affine building $X,$ which is an $(n-1)$-dimensional $\mathrm{CAT}(0)$ space. In this article, we prove that $n-1$ is the smallest dimension of $\mathrm{CAT}(0)$ spaces on which $\mathrm{SL}_{n}(\mathbf{Q}_{p})$ acts without fixed points. More generally, let $R$ be an associative ring with identity and $E_{n}^{\prime }(R)<\mathrm{GL}_{n}(R)$ the extended elementary subgroup (cf. Section \ref{elem}), which is an analog of $\mathrm{SL}_{n}(\mathbf{Q}_{p})$ for general rings. Our first result is the following. \begin{theorem} \label{1.1}Any isometric action of the extended elementary group $E_{n}^{\prime }(R)$ on a complete $\mathrm{CAT}(0)$ space $X$ of dimension $d<n-1$ has a fixed point. \end{theorem} It is a classical result of Serre \cite{Se} that any isometric action of $\mathrm{SL}_{n}(\mathbb{Z})$ $(n\geq 3)$ on a simplicial tree has a fixed point. Farb \cite{fa} considered a high-dimensional analog. For a reduced, irreducible root system $\Phi $ of rank $r\geq 2$ and a finitely generated commutative ring $R,$ let $E(\Phi ,R)$ be the elementary subgroup of the Chevalley group $G(\Phi ,R).$ Farb \cite{fa} proves that any action of $E(\Phi ,R)$ on a complete $\mathrm{CAT}(0)$ space $X$ of dimension $d<r-1$ by \emph{semisimple} isometries has a fixed point. This also gives a generalization of a result obtained by Fukunaga \cite{fu} concerning groups acting on trees. Let $\Sigma _{g}$ be the orientable closed surface of genus $g$ and $\mathrm{MCG}(\Sigma _{g})$ the mapping class group. Bridson \cite{bdm, brid} proves that any action of $\mathrm{MCG}(\Sigma _{g})$ on a complete CAT(0) space $X$ of dimension $d<g-1$ by \emph{semisimple} isometries has a fixed point. The author \cite{ye} obtains similar results for actions of the elementary subgroup $E_{n}(R)$ and the quadratic elementary subgroup $EU_{2n}(R,\Lambda )$ for general rings. However, most of these results are proved for semisimple actions. The group action on \textrm{CAT(0)} spaces of automorphism groups of free groups is studied by Bridson \cite{brid2,brid11} and Varghese \cite{var}. Barnhill \cite{bar} considers the property $\mathrm{FA}_{n}$ for Coxeter groups. In this article, we study group actions on \textrm{CAT(0)} spaces with possible parabolic isometries. Note that semisimple isometries can be very different from generic isometries. For example, any action of the special linear group $\mathrm{SL}_{n}(\mathbb{Z})$ $(n\geq 3)$ on a complete \textrm{CAT(0)} space $X$ by semisimple isometries has a fixed point, while $\mathrm{SL}_{n}(\mathbb{Z})$ acts on the symmetric space $\mathrm{SL}_{n}(\mathbb{R})/SO(n)$ properly and thus without fixed points. It is an open problem in geometric group theory to study the minimal dimensions of $\mathrm{CAT(0)}$ spaces on which matrix groups or automorphism groups of free groups act without fixed points (see Bridson \cite{bridsonprob}, Question 9.2). In order to state our results easily, it is better to define the following. \begin{definition} (relative property $\mathcal{FA}_{n}$) Let $H$ be a subgroup of a group $G.$ The pair $(G,H)$ has the relative property $\mathcal{FA}_{n}$ (resp. $s\mathcal{FA}_{n}$) if whenever $G$ acts on an $n$-dimensional complete $\mathrm{CAT(0)}$ space $X$ by isometries (resp. 
semisimple isometries), the subgroup $H$ has a fixed point. \end{definition} We say that a group $G$ has property $\mathcal{FA}_{n}$ if $(G,G)$ has the relative property $\mathcal{FA}_{n}.$ It should be pointed out that we do not require the isometries to be semisimple in the definition of $\mathcal{FA}_{n}.$ Note that the property $s\mathcal{FA}_{n}$ is the same as the strong property $\mathrm{FA}_{n}$ considered by Farb \cite{fa}. \begin{theorem} \label{1.2}Let $R$ be an associative ring and $E_{n}^{\prime }(R)$ the extended elementary subgroup. Then $(R^{n}\rtimes E_{n}^{\prime }(R),R^{n})$ has the relative property $\mathcal{FA}_{n-1}.$ \end{theorem} Since $E_{n}(\mathbb{Z}[\frac{1}{p}])\ltimes \mathbb{Z}[\frac{1}{p}]^{n}$ as a subgroup of $\mathrm{GL}_{n+1}(\mathbf{Q}_{p})$ acts on an affine building $X$ of dimension $n$ without fixed points, the dimension in Theorem \ref{1.2} is sharp. Let $\mathrm{Aut}(F_{n})$ be the automorphism group of the rank-$n$ free group $F_{n}.$ \begin{theorem} \label{1.3}$(\mathrm{Aut}(F_{n}),\mathrm{Aut}(F_{2}))$ has the relative property $\mathcal{FA}_{n-3}$ for any $n\geq 4.$ \end{theorem} \begin{corollary} \label{1.4}Any isometric action of $\mathrm{Aut}(F_{n})$ $(n\geq 4)$ on a complete $\mathrm{CAT(0)}$ space $X$ of dimension $d\leq 2[\frac{n}{3}]$ has a fixed point, i.e. $\mathrm{Aut}(F_{n})$ has property $\mathcal{FA}_{2[n/3]}.$ Here $[n/3]$ is the integer part. \end{corollary} The previous result was also observed by Bridson \cite{brid2}. The case of $d\leq 2[n/2]$ is already known by Varghese \cite{var}. The proofs of the above theorems are based on Helly's theorem and the theory of Coxeter groups. The method has further applications to group actions on 1-dimensional spaces, like dendrites and unique arcwise connected spaces. Recall that a dendrite $X$ is a connected compact metrizable space (continuum) such that any two points are the extremities of a unique arc in $X$ (cf. \cite{dm}). Let $\Gamma $ be a lattice in a semisimple algebraic group of rank at least two. Buchesne and Monod \cite{dm} prove that any action of $\Gamma $ on a dendrite $X$ fixes a point or a pair of points. An $\mathbb{R}$--tree is a geodesic metric space in which there is a unique arc connecting each pair of points. Bogopolski \cite{bo} proves that any isometric action of $\mathrm{Aut}(F_{n})$ on a simplicial tree has a fixed point. Culler and Vogtmann \cite{cv} give a short proof based on their idea of \textquotedblleft minipotent\textquotedblright\ elements. Bridson \cite{brid08} proves a similar result for group actions on $\mathbb{R}$-trees using the triangle condition. We prove the following. \begin{theorem} \label{last}Let $X$ be a unique arcwise connected space (e.g. a tree or dendrite). Any action of $\mathrm{Aut}(F_{n})$ (or $\mathrm{SL}_{n}(\mathbb{Z}),n\geq 3$) on $X$ by homeomorphisms has a fixed point. \end{theorem} This gives a simultaneous generalization of the fixed point property for group actions on simplicial trees, $\mathbb{R}$-trees and dendrites. \section{Notations and basic facts} \subsection{\textrm{CAT(0)} spaces} Let $(X,d_{X})$ be a geodesic metric space. For three points $x,y,z\in X,$ the geodesic triangle $\Delta (x,y,z)$ consists of the three vertices $x,y,z$ and the three geodesics $[x,y],[y,z]$ and $[z,x].$ Let $\mathbb{R}^{2}$ be the Euclidean plane with the standard distance $d_{\mathbb{R}^{2}}$ and $\bar{\Delta}$ a triangle in $\mathbb{R}^{2}$ with the same edge lengths as $\Delta $. 
Denote by $\varphi :\Delta \rightarrow \bar{\Delta}$ the map sending each edge of $\Delta $ to the corresponding edge of $\bar{\Delta}.$ The space $X$ is called a \textrm{CAT(0)} space if for any triangle $\Delta $ and two elements $a,b\in \Delta ,$ we have the inequality \begin{equation*} d_{X}(a,b)\leq d_{\mathbb{R}^{2}}(\varphi (a),\varphi (b)). \end{equation*} The typical examples of \textrm{CAT(0)} spaces include simplicial trees, hyperbolic spaces, products of \textrm{CAT(0)} spaces and so on. From now on, we assume that $X$ is a complete \textrm{CAT(0)} space. Denote by $\mathrm{Isom}(X)$ the isometry group of $X.$ For any $g\in \mathrm{Isom}(X),$ let \begin{equation*} \mathrm{Minset}(g)=\{x\in X:d(x,gx)\leq d(y,gy)\text{ for any }y\in X\} \end{equation*} and let $\tau (g)=\inf\nolimits_{x\in X}d(x,gx)$ be the translation length of $g.$ When the fixed-point set $\mathrm{Fix}(g)\neq \emptyset ,$ we call $g$ elliptic. When $\mathrm{Minset}(g)\neq \emptyset $ and $d_{X}(x,gx)=\tau (g)>0$ for any $x\in \mathrm{Minset}(g),$ we call $g$ hyperbolic. The group element $g$ is called semisimple if the minimal set $\mathrm{Minset}(g)$ is not empty, i.e. it is either elliptic or hyperbolic. A subset $C$ of a \textrm{CAT(0)} space is convex if any two points $x,y\in C$ can be connected by the geodesic segment $[x,y]\subset C.$ For more details on \textrm{CAT(0)} spaces, see the book of Bridson and Haefliger \cite{bh}. The following lemma is from \cite{bh} (II.2.4). \begin{lemma} \label{proj}Let $\gamma :X\rightarrow X$ be an isometry of a CAT(0) space $X$ and $C$ a $\gamma $-invariant convex, complete subspace$.$ (1) For any $x\in X,$ there exists a unique $p(x)\in C$ such that $d(x,p(x))=d(x,C):=\inf_{c\in C}d(x,c).$ This gives a projection $p:X\rightarrow C,$ which is distance-non-increasing. (2) We have $p(\gamma x)=\gamma p(x)$ for any $x\in X.$ Moreover, $\mathrm{Min}(\gamma |_{C})=\mathrm{Min}(\gamma )\cap C.$ \end{lemma} \begin{proof} The first part (1) is \cite{bh} (II.2.4). The second part is in the proof II.6.2(4) of \cite{bh}. Actually, we have that \begin{equation*} d(\gamma x,\gamma px)=d(x,px)=d(x,C)=d(\gamma x,C) \end{equation*} and thus $\gamma px=p(\gamma x)$ by the uniqueness of projection points. Therefore, $p\,\mathrm{Min}(\gamma )=\mathrm{Min}(\gamma )\cap C=\mathrm{Min}(\gamma |_{C}).$ \end{proof} \begin{lemma} \label{le1}Let $G=A\rtimes H$ be a semi-direct product of two groups $A$ and $H.$ Suppose that both $A$ and $H$ have non-empty fixed point sets $X^{A}$ and $X^{H}$. Then $G$ has a non-empty fixed point set $X^{G}.$ \end{lemma} \begin{proof} For any $x\in X^{A}$ and any $h\in H,a\in A,$ we have that \begin{equation*} ahx=h(h^{-1}ah)x=hx. \end{equation*} This shows that $hX^{A}=X^{A}.$ Moreover, the fixed point set $X^{A}$ is a convex closed complete subspace of $X$. Let $p:X\rightarrow X^{A}$ be the projection as defined in Lemma \ref{proj}. Since $p(hx)=hp(x)$ for any $x\in X,$ we have that $p(X^{H})=X^{H}\cap X^{A}=X^{G}\neq \emptyset .$ \end{proof} \subsection{Matrix groups\label{elem}} In this subsection, we briefly recall the definition of the elementary subgroup $E_{n}(R)$ of the general linear group $\mathrm{GL}_{n}(R)$. Let $R$ be an associative ring with identity and $n\geq 2$ be an integer. The general linear group $\mathrm{GL}_{n}(R)$ is the group of all $n\times n$ invertible matrices with entries in $R$. 
For an element $r\in R$ and any integers $i,j$ such that $1\leq i\neq j\leq n,$ denote by $e_{ij}(r)$ the elementary $n\times n$ matrix with $1$s in the diagonal positions, $r$ in the $(i,j)$-th position and zeros elsewhere. The group $E_{n}(R)$ is generated by all such $e_{ij}(r),$ i.e. \begin{equation*} E_{n}(R)=\langle e_{ij}(r)\mid 1\leq i\neq j\leq n,r\in R\rangle . \end{equation*} Let $D=\{\mathrm{diag}(\varepsilon _{1},\varepsilon _{2},\cdots ,\varepsilon _{n})\mid \varepsilon _{i}=\pm 1\}$ be the diagonal subgroup. The extended elementary group $E_{n}^{\prime }(R)$ is defined as $E_{n}(R)$ when $n$ is odd and as $\langle E_{n}(R),D\rangle <\mathrm{GL}_{n}(R)$ when $n$ is even. Note that \begin{equation*} \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. \end{equation*} This implies that $\langle E_{n}(R),D\rangle $ contains $E_{n}(R)$ as an index-2 subgroup. When $R=\mathbb{Z}$ and $n$ is even, we have $E_{n}^{\prime }(\mathbb{Z})=\mathrm{GL}_{n}(\mathbb{Z}).$ Denote by $I_{n}$ the identity matrix and by $[a,b]$ the commutator $aba^{-1}b^{-1}.$ The following lemma displays the commutator formulas for $E_{n}(R)$ (cf. Lemma 9.4 in \cite{mag}). \begin{lemma} \label{ecom}Let $R$ be a ring and $r,s\in R.$ Then for distinct integers $i,j,k,l$ with $1\leq i,j,k,l\leq n,$ the following hold: \begin{enumerate} \item[(1)] $e_{ij}(r+s)=e_{ij}(r)e_{ij}(s);$ \item[(2)] $[e_{ij}(r),e_{jk}(s)]=e_{ik}(rs);$ \item[(3)] $[e_{ij}(r),e_{kl}(s)]=I_{n}.$ \end{enumerate} \end{lemma} By Lemma \ref{ecom}, the group $E_{n}(R)$ $(n\geq 3)$ is finitely generated when the ring $R$ is finitely generated. \section{Basic facts on the relative property $\mathcal{FA}_{n}$} \begin{definition} Let $H$ be a subgroup of a group $G.$ The pair $(G,H)$ has the relative property $\mathcal{FA}_{n}$ (resp. $s\mathcal{FA}_{n}$) if whenever $G$ acts isometrically (resp. semi-simply) on a complete $n$-dimensional $\mathrm{CAT}(0)$ space $X,$ the subgroup $H$ has a fixed point. \end{definition} We say that $G$ has property $\mathcal{FA}_{n}$ if $(G,G)$ has the relative property $\mathcal{FA}_{n}.$ The following are some basic properties. \begin{lemma} \label{basic}(1) For groups $H<K<G,$ if $(K,H)$ has $\mathcal{FA}_{n}$ (resp. $s\mathcal{FA}_{n}$), then so does $(G,H).$ If $(G,K)$ has $\mathcal{FA}_{n}$ (resp. $s\mathcal{FA}_{n}$), then so does $(G,H).$ (2) Let $H_{2}<H_{1}$ be two subgroups of $G.$ If $H_{1}$ is boundedly generated by finitely many conjugates of $H_{2},$ then $(G,H_{1})$ has $\mathcal{FA}_{n}$ (resp. $s\mathcal{FA}_{n}$) if and only if $(G,H_{2})$ has $\mathcal{FA}_{n}$ (resp. $s\mathcal{FA}_{n}$). (3) If $H$ is of finite index in $G,$ then $(G,H)$ has $\mathcal{FA}_{n}$ (resp. $s\mathcal{FA}_{n}$) if and only if $G$ has $\mathcal{FA}_{n}$ (resp. $s\mathcal{FA}_{n}$). (4) If $(G,H)$ has the relative property $\mathcal{FA}_{n+1}$ (resp. $\mathcal{FA}_{n}$), then $(G,H)$ has the relative property $\mathcal{FA}_{n}$ (resp. $s\mathcal{FA}_{n}$). \end{lemma} \begin{proof} (1) is obvious from the definition. For (2), let $H_{1}$ be boundedly generated by $K_{1},K_{2},\cdots ,K_{k}$ (i.e. there exists $N>0$ such that each element $g\in H_{1}$ is a product of at most $N$ elements in $\cup _{i=1}^{k}K_{i}$), where each $K_{i}$ is a conjugate of $H_{2}.$ When $H_{2}$ has a fixed point $x$, each $K_{i}$ has a fixed point. 
Since each orbit $K_{i}x$ is bounded, the orbit $H_{1}x$ is bounded and thus $H_{1}$ has a fixed point. When $(G,H_{1})$ has $\mathcal{FA}_{n},$ it follows from (1) that $(G,H_{2})$ has $\mathcal{FA}_{n}$. For (3), when $H$ has a fixed point $x,$ the orbit $Gx$ is bounded and thus $G$ also has a fixed point. For (4), when $G$ acts on an $n$-dimensional complete $\mathrm{CAT}(0)$ space $X$ by isometries, the group $G$ can also act on the product $X\times \mathbb{R}$ by the trivial action on the second component. When $H$ has a fixed point in $X\times \mathbb{R},$ it also has a fixed point in $X.$ \end{proof} \begin{lemma} Let $H$ be a nilpotent subgroup of a finitely generated group $G.$ Suppose that each element in $H$ is distorted in $G.$ Then $(G,H)$ has the relative property $s\mathcal{FA}_{n}$ for any $n.$ \end{lemma} \begin{proof} When $G$ acts on a complete $\mathrm{CAT}(0)$ space $X$ by semisimple isometries, each element in $H$ is either hyperbolic or elliptic. Suppose that some element $h\in H$ is hyperbolic and thus has a translation axis $l$. Fix a finite generating set $S\subset G$ and denote by $l_{S}$ the word length function. For $x\in l,$ the translation length satisfies \begin{eqnarray*} |h| &=&d(x,hx)=\lim_{n\rightarrow \infty }\frac{d(x,h^{n}x)}{n} \\ &\leq &\lim_{n\rightarrow \infty }\frac{l_{S}(h^{n})\max \{d(x,sx):s\in S\}}{n}=0, \end{eqnarray*} which is impossible. This shows that each element in $H$ is elliptic. Since $H$ is nilpotent, there is an upper central series \begin{equation*} \{1\}=Z_{0}\trianglelefteq Z_{1}\trianglelefteq \cdots \trianglelefteq Z_{n}=H \end{equation*} such that each $Z_{i+1}/Z_{i}$ is the center of $H/Z_{i}.$ Note that $Z_{1}$ is abelian. Any finitely generated subgroup $K$ of $Z_{1}$ has a fixed point by Lemma \ref{le1}. Since the countable abelian group $Z_{1}$ is a direct limit of finitely generated subgroups, it also has a fixed point. Inductively suppose that $Z_{i}$ has a fixed point. The quotient (abelian) group $Z_{i+1}/Z_{i}$ acts on the fixed point set $\mathrm{Fix}(Z_{i})$ and thus also has a fixed point. This proves that $Z_{i+1}$ and thus $H$ has a fixed point. \end{proof} \begin{remark} The previous lemma does not hold for $\mathcal{FA}_{n}.$ For example, the subgroup $\langle e_{12}(1)\rangle \cong \mathbb{Z}$, generated by the $n\times n$ matrix with $1$ in the $(1,2)$-th position and on the diagonal and $0$s elsewhere, is distorted in $\mathrm{SL}_{n}(\mathbb{Z})$ $(n\geq 3).$ But $\mathrm{SL}_{n}(\mathbb{Z})$ acts properly on the symmetric space $\mathrm{SL}_{n}(\mathbb{R})/\mathrm{SO}(n).$ \end{remark} \section{Helly's theorem and Coxeter groups} The classical Helly theorem says that for $m$ non-empty convex subsets $\{F_{i}\}_{i=1}^{m}$ $(m\geq n+2)$ in the Euclidean space $\mathbb{R}^{n}$ satisfying that the intersection of any $n+1$ sets $\cap _{j=1}^{n+1}F_{i_{j}}\neq \emptyset ,$ the whole intersection $\cap _{i=1}^{m}F_{i}\neq \emptyset .$ The following is the Helly theorem for $\mathrm{CAT}(0)$ spaces (cf. Bridson \cite{brid}, Theorem 3.2; Varghese \cite{var}, Theorem 5.10). \begin{lemma} \label{helly}(the Helly theorem) Let $X$ be a $d$-dimensional (i.e. covering dimension) complete $\mathrm{CAT}(0)$ space and $S$ a finite family of non-empty closed convex subspaces. If the intersection of each $(d+1)$ elements of $S$ is non-empty, then $\cap S$ is non-empty. \end{lemma} \begin{corollary} (\cite{var}, 5.1.) Let $G$ be a group, $Y$ a finite generating set of $G$ and $X$ a $d$-dimensional complete $\mathrm{CAT}(0)$ space. 
Let $\Phi :G\rightarrow \mathrm{Isom}(X)$ be a homomorphism. If each $(d+1)$-element subset of $Y$ has a fixed point in $X$, then $G$ has a fixed point in $X$. \end{corollary} The following was firstly proved by Bridson \cite{brid} (Proposition 3.4): \begin{lemma} \label{3.4}Let $X^{d}$ be a complete $\mathrm{CAT}(0)$ space and $G$ a group acting on $X$ by isometries. Suppose that there are integers $k_{1},k_{2},...,k_{m}$ and subsets $S_{i},i=1,2,...,m,$ satisfying (1) every $k_{i}$-element subset of $S_{i}$ has a common fixed point; (2) $[s_{i},s_{j}]=1$ for any $s_{i}\in S_{i},s_{j}\in S_{j},i\neq j;$ (3) $d<k_{1}+k_{2}+\cdots +k_{m}.$ Then for some $i$ every finitely many elements in $S_{i}$ have a common fixed point. \end{lemma} \subsection{The action of Coxeter groups} A Coxeter group $(W,S)$ is a group $W$ with the presentation \begin{equation*} \langle s_{1},s_{2},\cdots ,s_{n}\mid (s_{i}s_{j})^{m_{ij}}=1\rangle \end{equation*} where $S=\{s_{1},s_{2},\cdots ,s_{n}\},$ $m_{ii}=1$ and $m_{ij}\geq 2$ for $i\neq j.$ It is possible that $m_{ij}=\infty ,$ which means that no relation is imposed on $s_{i}s_{j}$. There is a canonical group homomorphism $\rho :W\rightarrow \mathbb{Z}/2$ defined by mapping each $s_{i}$ to the generator $1\in \mathbb{Z}/2.$ The kernel is denoted by $SW=\ker \rho ,$ called the alternating Coxeter subgroup. For a subset $T\subseteq S$, the subgroup $W_{T}<W$ generated by elements in $T$ is called a parabolic subgroup (or a special subgroup in the literature). The Coxeter matrix is the matrix $M=(m_{ij})_{n\times n}$ and the Schl\"{a}fli matrix is $C=(-\cos (\frac{\pi }{m_{ij}}))_{n\times n}.$ The Coxeter graph of $(W,S)$ is a graph defined as follows. The vertices of the graph are labelled by the generators in $S.$ Vertices $s_{i},s_{j}$ are adjacent if and only if $m_{ij}\geq 3$. An edge is labelled with the value of $m_{ij}$ whenever the value is $\geq 4.$ The following is well-known (cf. \cite{Davis}). \begin{lemma} \label{finite}The Coxeter group is finite if and only if the eigenvalues of the Schl\"{a}fli matrix are all positive. The finite Coxeter groups consist of three one-parameter families of increasing rank $A_{n},B_{n},D_{n}$, one one-parameter family of dimension two $I_{2}(p)$, and six exceptional groups: $E_{6},E_{7},E_{8},F_{4},H_{3}$ and $H_{4}.$ \end{lemma} \bigskip \begin{center} \textit{Figure 1: Coxeter graphs of the finite Coxeter groups} \end{center} \begin{definition} Let $n>0$ be an integer. A Coxeter group $(W,S)$ is $n$-spherical if any $n+1$ elements in $S$ generate a finite group. \end{definition} \begin{lemma} \label{coxe}Let $X$ be a complete $\mathrm{CAT}(0)$ space of dimension $n\geq 1$. Let $(W,S)$ be an $n$-spherical Coxeter group. Then any action of $W$ on $X$ has a fixed point. \end{lemma} \begin{proof} By the assumption, the fixed point set $X^{s}$ is closed and convex for each generator $s\in S$. The intersection $\cap _{i=1}^{k}X^{s_{i}}=X^{W_{\{s_{1},s_{2},\cdots ,s_{k}\}}}$ is not empty for $k\leq n+1,$ since $W_{\{s_{1},s_{2},\cdots ,s_{k}\}}$ is finite. The Helly theorem (cf. Lemma \ref{helly}) implies that the fixed point set $X^{W}$ is not empty. 
\end{proof} \section{Actions of matrix groups on $\mathrm{CAT}(0)$ spaces} In this section, we always suppose that $X$ is a complete $\mathrm{CAT}(0)$ space and $\mathrm{Isom}(X)$ is the group of isometries. Let $R$ be a finitely generated ring and $E_{n}(R)$ (resp. $E_{n}^{\prime }(R)$) the (resp. extended) elementary subgroup. In this section, we will prove Theorem \ref{1.1}. \begin{lemma} \label{lem1.2}Let $1\leq i\neq j\leq n.$ The elementary subgroup $E_{n}(R)$ has property $\mathcal{FA}_{n-2}$ if and only if $(E_{n}(R),e_{ij}(x))$ has property $\mathcal{FA}_{n-2}$ for any $x\in R.$ \end{lemma} \begin{proof} For any $y\in R,$ the two matrices $e_{ij}(x)$ and $e_{ij}(y)$ commute with each other. Lemma \ref{le1} implies that they have a common fixed point. Therefore, the subgroup $e_{ij}:=\langle e_{ij}(x):x\in R\rangle $ has a fixed point. Since all $e_{ij}$ $(i\neq j)$ are conjugate, $e_{12}$ has a fixed point. Apply Lemma \ref{le1} again to get that the subgroup $\langle e_{1i}(x)\mid 2\leq i\leq n,x\in R\rangle $ has a fixed point and the upper triangular subgroup $H=\langle e_{ij}(x)\mid 1\leq i<j\leq n,x\in R\rangle $ has a fixed point. Note that $E_{n}(R)$ is generated by $S=\{e_{12},e_{23},\cdots ,e_{n-1,n},e_{n,1}\}$ and any $n-1$ elements of $S$ generate a subgroup isomorphic to $H.$ The Helly theorem implies that $E_{n}(R)$ has a global fixed point. \end{proof} \bigskip Denote by $\{a_{i}\}_{i=1}^{n}$ the standard basis of the finitely generated free abelian group $\mathbb{Z}^{n},$ and by $\{\sigma _{i}\}_{i=1}^{n}$ the standard basis of the elementary abelian group $(\mathbb{Z}/2)^{n}.$ Let $G=\mathbb{Z}^{n}\rtimes ((\mathbb{Z}/2)^{n}\rtimes S_{n})$ be the semi-direct product, where $\mathbb{Z}/2$ acts on $\mathbb{Z}$ by reflection and $S_{n}$ acts on $(\mathbb{Z}/2)^{n}$ and $\mathbb{Z}^{n}$ by permuting the standard basis. Let $\rho :(\mathbb{Z}/2)^{n}\rtimes S_{n}\rightarrow \{+1,-1\}$ be the group homomorphism mapping each $\sigma _{i}$ and each permutation $(ij)$ to $-1$. Denote by $A_{n}^{+}=\ker \rho $ and $G^{+}=\mathbb{Z}^{n}\rtimes A_{n}^{+}.$ \begin{proposition} \label{prop} Let $n\geq 2.$ Any isometric action of $G$ on a complete $\mathrm{CAT}(0)$ space $X^{d}$ $(d<n)$ has a global fixed point. In other words, the group $G$ has the property $\mathcal{FA}_{n-1}.$ \end{proposition} \begin{proof} Choose \begin{equation*} S=\{a_{1}\sigma _{1},(12),(23),...,(n-1,n),\sigma _{n}\}. \end{equation*} Note that each element of $S$ is of order two. Let $H=(W,S)$ be the Coxeter group assigned to the Coxeter matrix $(m_{ij})$ defined by $m_{ii}=1,m_{12}=4,m_{i,i+1}=3$ for $2\leq i\leq n-1$, $m_{n,n+1}=4$ and all other entries $m_{ij}=\infty .$ Recall that \begin{equation*} H=\langle s_{i}\in S\mid (s_{i}s_{j})^{m_{ij}}=1,1\leq i,j\leq n+1\rangle . \end{equation*} Let $f:H\rightarrow \langle S\rangle $ be defined as \begin{eqnarray*} s_{1} &\rightarrow &a_{1}\sigma _{1}, \\ s_{i+1} &\rightarrow &(i,i+1),1\leq i\leq n-1, \\ s_{n+1} &\rightarrow &\sigma _{n}. \end{eqnarray*} It can be directly checked that $f$ is a group homomorphism. The Coxeter graph of $(W,S)$ is a path consisting of $n+1$ vertices with the edges $(s_{1},s_{2}),(s_{n},s_{n+1})$ labelled by $4.$ The subgraph spanned by any $n$ vertices is either a path of type $B_{n}$ or a disjoint union of two paths of type $B_{k}$. By the classification of finite Coxeter groups (cf. Lemma \ref{finite}), any spherical subgroup of rank $n$ is finite in $H$. This proves that the subgroup generated by any $n$ elements is finite. 
The Helly theorem (cf. Lemma \ref{helly}) implies that $S$ has a common fixed point $x$. Note that $a_{1}\in \langle S\rangle $ and thus $\mathbb{Z}^{n}<\langle S\rangle .$ Therefore, the action of $G$ on $X$ has a bounded orbit $Gx$ and thus has a global fixed point by Lemma \ref{basic} (3). \end{proof} \bigskip Note that $\mathbb{Z}^{n}\rtimes ((\mathbb{Z}/2)^{n}\rtimes S_{n})$ acts on the Euclidean space $\mathbb{R}^{n}$ in the standard way. This shows that the bound in the previous proposition is sharp. \bigskip \begin{lemma} \label{lem1.1}Let $E_{n}(R)$ act isometrically on a complete $\mathrm{CAT}(0)$ space $X^{d}$ $(d<n-1).$ Assume $n$ is odd. For any element $x\in R,$ the subgroup $\langle e_{1i}(x)\mid i=2,3,\cdots ,n\rangle $ has a fixed point. \end{lemma} \begin{proof} When $n$ is odd, let $H$ be the subgroup generated by \begin{equation*} \{-I_{n}(12),-I_{n}(23),...,-I_{n}(n-1,n),-I_{n}\sigma _{n}\} \end{equation*} and $\langle e_{1i}(x)\mid i=2,3,\cdots ,n\rangle .$ Define a group homomorphism \begin{equation*} \alpha :\mathbb{Z}^{n}\rtimes ((\mathbb{Z}/2)^{n}\rtimes S_{n})\rightarrow H \end{equation*} by \begin{equation*} a_{i}\mapsto e_{1,i+1}(x),\quad \sigma _{i}\mapsto -I_{n}\varepsilon _{i} \end{equation*} and \begin{equation*} (i,i+1)\mapsto -I_{n}(i,i+1). \end{equation*} Proposition \ref{prop} implies that $H$ has a fixed point. \end{proof} \bigskip \bigskip \begin{proof}[Proof of Theorem \protect\ref{1.2}] Suppose that $R^{n}\rtimes E_{n}^{\prime }(R)$ acts on a complete $(n-1)$-dimensional $\mathrm{CAT}(0)$ space $X.$ When $n$ is odd, we view $(\mathbb{Z}/2)^{n}\rtimes S_{n}$ as a subgroup of $\mathrm{SL}_{n}(\mathbb{Z})=E_{n}(\mathbb{Z})$ through the group homomorphism $\alpha $ in the proof of Lemma \ref{lem1.1}. The ring homomorphism $\mathbb{Z}\rightarrow R$ preserving the identity induces a group homomorphism $\mathrm{GL}_{n}(\mathbb{Z})\rightarrow E_{n}^{\prime }(R).$ Without confusion, we still denote the image by $(\mathbb{Z}/2)^{n}\rtimes S_{n}.$ For any $x\in R,$ let \begin{eqnarray*} S_{x} &=&\{(x,\sigma _{1}),(12),(23),...,(n-1,n),\sigma _{n}\} \\ &\subseteq &R^{n}\rtimes E_{n}^{\prime }(R). \end{eqnarray*} The obvious map $S\rightarrow S_{x}$ by $\sigma _{1}a_{1}\rightarrow ((x,0,\cdots ,0),\sigma _{1})$ induces a group homomorphism \begin{equation*} \mathbb{Z}^{n}\rtimes ((\mathbb{Z}/2)^{n}\rtimes S_{n})\rightarrow R^{n}\rtimes ((\mathbb{Z}/2)^{n}\rtimes S_{n}). \end{equation*} Proposition \ref{prop} implies that the subgroup $xR^{n}$ (for any $x$) and thus $R^{n}$ has a fixed point. \end{proof} \bigskip \begin{proof}[Proof of Theorem \protect\ref{1.1}] When $n$ is odd, we have $E_{n}^{\prime }(R)=E_{n}(R).$ Theorem \ref{1.1} follows from Lemma \ref{lem1.1} and Lemma \ref{lem1.2}. When $n$ is even, the same group homomorphism $\alpha $ as in Lemma \ref{lem1.1} and Proposition \ref{prop} imply that the subgroup $\langle e_{1i}(x)\mid i=2,3,\cdots ,n\rangle $ has a fixed point. Theorem \ref{1.1} is proved by Lemma \ref{lem1.2}. \end{proof} \bigskip \section{Actions of $\mathrm{Aut}(F_{n})$ on $\mathrm{CAT}(0)$ spaces} Let $F_{n}=\langle a_{1},...,a_{n}\rangle $ be the free group of rank $n$ and $\mathrm{Aut}(F_{n})$ its group of automorphisms. 
For any $1\leq i\leq n,$ let $\varepsilon _{i}:F_{n}\rightarrow F_{n}$ be the involution defined by $a_{i}\rightarrow a_{i}^{-1}$ and $a_{j}\rightarrow a_{j}$ for any $j\neq i.$ The Nielsen transformations are defined by \begin{equation*} \rho _{ij}:a_{i}\rightarrow a_{i}a_{j},\quad a_{k}\rightarrow a_{k},k\neq i, \end{equation*} and \begin{equation*} \lambda _{ij}:a_{i}\rightarrow a_{j}a_{i},\quad a_{k}\rightarrow a_{k},k\neq i. \end{equation*} For $1\leq i\neq j\leq n,$ denote by $(ij)$ the permutation of the basis. It is well-known that $\mathrm{Aut}(F_{n})$ is generated by the elements $\rho _{ij},\lambda _{ij}$ and $\varepsilon _{i}$ (see Gersten \cite{ger}). The following lemma is a variation on the argument that Gersten \cite{ger12} used to show that $\mathrm{Aut}(F_{n})$ cannot act properly and cocompactly on a $\mathrm{CAT}(0)$ space if $n\geq 3$ (cf. \cite{bh} p. 253). \begin{lemma} \label{gers}Let $G=\langle \alpha ,\beta ,t\mid t^{p}\beta t^{-p}=\beta \alpha ^{p}$ for any integer $p\rangle .$ If $(G,\langle t\rangle )$ has the relative property $\mathcal{FA}_{n},$ then $(G,\langle \alpha \rangle )$ has $\mathcal{FA}_{n}$ for any $n.$ \end{lemma} \begin{proof} Suppose that $G$ acts on an $n$-dimensional complete $\mathrm{CAT}(0)$ space $X$ and $t$ has a fixed point $x\in X$. For any $p$ we have \begin{eqnarray*} d(\alpha ^{p}x,x) &=&d(\beta ^{-1}t^{p}\beta t^{-p}x,x)\leq d(\beta ^{-1}t^{p}\beta x,\beta ^{-1}t^{p}x)+d(\beta ^{-1}t^{p}x,x) \\ &\leq &d(\beta x,x)+d(\beta ^{-1}x,x). \end{eqnarray*} Therefore, the orbit $\{\alpha ^{p}x:p\in \mathbb{Z}\}$ is bounded and thus $\alpha $ has a fixed point. \end{proof} \begin{theorem} \label{niel}Let $X^{d}$ be a complete $\mathrm{CAT}(0)$ space and $\mathrm{Aut}(F_{n})$ $(n\geq 4)$ act on $X$ by isometries. When $d<n-2,$ each Nielsen transformation of $\mathrm{Aut}(F_{n})$ has a fixed point. \end{theorem} \begin{proof} Let $H$ be the subgroup generated by \begin{equation*} S=\{\rho _{21}^{-1}\rho _{31}(23),(34),...,(n-1,n),(2,n)\}. \end{equation*} It can be directly checked that the product of any two successive elements (including that of the first element and the last one) is of order $3.$ Let $H$ be the Coxeter group assigned to the Coxeter graph $K$, a loop of length $n-1.$ Explicitly, the group \begin{equation*} H=\langle s_{1},\cdots ,s_{n-1}\mid s_{i}^{2}=(s_{i}s_{i+1})^{3}=1,1\leq i\leq n-1\rangle , \end{equation*} where $s_{n}=s_{1}.$ Let $f:H\rightarrow \langle S\rangle $ be defined as \begin{eqnarray*} s_{1} &\rightarrow &\rho _{21}^{-1}\rho _{31}(23), \\ s_{i} &\rightarrow &(i+1,i+2),2\leq i\leq n-2, \\ s_{n-1} &\rightarrow &(2,n). \end{eqnarray*} It can be directly checked that $f$ is a group homomorphism. Since the subgraph spanned by $n-2$ vertices in the Coxeter graph $K$ is of type $A,$ any $n-2$ elements in $S$ generate a finite group (cf. Lemma \ref{finite}). By the Helly theorem, $S$ has a common fixed point. Therefore, the element $\rho _{21}^{-1}\rho _{31}\in \langle S\rangle $ has a fixed point. Thus $\rho _{ij}^{-1}\rho _{i^{\prime }j}$ has a fixed point for any $i,i^{\prime },j$ after conjugating by a permutation. 
Let $T=\rho _{32}^{-1}\rho _{42}.$ It can be directly checked that $T^{-p}\rho _{13}T^{p}=\rho _{13}\rho _{12}^{p}$ for any integer $p.$ Let $G=\langle \alpha ,\beta ,t\mid t^{p}\beta t^{-p}=\beta \alpha ^{p}$ for any integer $p\rangle .$ Define a group homomorphism \begin{equation*} f:G\rightarrow \mathrm{Aut}(F_{n}) \end{equation*} by $f(t)=T,$ $f(\alpha )=\rho _{12},f(\beta )=\rho _{13}.$ Lemma \ref{gers} implies that $\rho _{12}$ has a fixed point. Since any Nielsen transformation is conjugate to $\rho _{12},$ the statement is proved. \end{proof} \bigskip Theorem \ref{1.3} is the same as the following theorem. \begin{theorem} Let $X^{d}$ be a complete $\mathrm{CAT}(0)$ space and $\mathrm{Aut}(F_{n})$ $(n\geq 4)$ act on $X$ by isometries. When $d<n-2,$ the subgroup $\mathrm{Aut}(F_{2})$ has a fixed point. \end{theorem} \begin{proof} It is already proved in Theorem \ref{niel} that each Nielsen transformation has a fixed point. Choose \begin{equation*} S=\{\rho _{12}\varepsilon _{2},\varepsilon _{1},\varepsilon _{2},\varepsilon _{1}\varepsilon _{2}(12)\}. \end{equation*} We check that every two elements of $S$ have a common fixed point in the following: The elements $\rho _{12}\varepsilon _{2}$ and $\varepsilon _{1}$ lie in $\langle \rho _{12},\lambda _{12}\rangle \rtimes \langle \varepsilon _{2},\varepsilon _{1}\rangle \cong \mathbb{Z}^{2}\rtimes (\mathbb{Z}/2)^{2}.$ Since $\rho _{12}$ commutes with $\lambda _{12},$ they have a common fixed point. Therefore, the elements $\rho _{12}\varepsilon _{2}$ and $\varepsilon _{1}$ have a common fixed point. The elements $\rho _{12}\varepsilon _{2}$ and $\varepsilon _{2}$ generate an infinite dihedral group and thus have a common fixed point. The elements $\rho _{12}\varepsilon _{2}$ and $\varepsilon _{1}\varepsilon _{2}(12)$ generate a finite group and thus have a common fixed point. Note that $\mathrm{Aut}(F_{n})$ contains $[\frac{n}{2}]$ copies of $\mathrm{Aut}(F_{2})$ along the diagonal. Bridson's lemma \ref{3.4} implies that when $d<2[\frac{n}{2}],$ the set $S$ has a common fixed point. Since $S$ generates $\mathrm{Aut}(F_{2}),$ the proof is finished. \end{proof} \begin{lemma} Let $X^{d}$ be a complete $\mathrm{CAT}(0)$ space and $\mathrm{Aut}(F_{n})$ $(n\geq 5)$ act on $X$ by isometries. When $d<n-3,$ the subgroup $\mathrm{Aut}(F_{2})\ltimes F_{2}$ has a fixed point. \end{lemma} \begin{proof} Let $T:F_{n}\rightarrow F_{n}$ be the transformation given by \begin{equation*} x_{4}\rightarrow x_{4}w^{-1},\quad x_{5}\rightarrow x_{5}w, \end{equation*} for a word $w\in \langle x_{1},x_{3}\rangle <F_{n}.$ Then $T^{-1}\rho _{24}T=\rho _{24}\rho _{2,w},$ where $\rho _{2,w}(x_{2})=x_{2}w$ and $\rho _{2,w}(x_{i})=x_{i}$ for any $i\neq 2.$ Consider the set \begin{equation*} S=\{T(45),(56),...,(n-1,n),(2,n),(4,n)\}. \end{equation*} A similar argument using Coxeter groups as in the proof of Theorem \ref{niel} shows that any $n-4$ elements of $S$ generate a finite group. This implies that when $d<n-3,$ the whole group $\langle S\rangle $ has a fixed point by Helly's theorem. Therefore, the element $T$ has a fixed point. Note that for any integer $p,$ we have $T^{-p}\rho _{24}T^{p}=\rho _{24}\rho _{2,w}^{p}.$ Let $G=\langle \alpha ,\beta ,t\mid t^{p}\beta t^{-p}=\beta \alpha ^{p}$ for any integer $p\rangle .$ Define a group homomorphism \begin{equation*} f:G\rightarrow \mathrm{Aut}(F_{n}) \end{equation*} by $f(t)=T,$ $f(\alpha )=\rho _{2,w},f(\beta )=\rho _{24}.$ Lemma \ref{gers} implies that $\rho _{2,w}$ has a fixed point. 
Since $(12)\rho _{2,w}(12)=\rho _{1,v},v\in \langle x_{2},x_{3}\rangle ,$ we see that $F_{2}=\langle \rho _{1,v}:v\in \langle x_{2},x_{3}\rangle \rangle $ has a fixed point. Note that $\mathrm{Aut}(F_{2})$ normalizes $F_{2}.$ Theorem \ref{1.3} shows that $\mathrm{Aut}(F_{2})$ has a fixed point. The semi-direct product $\mathrm{Aut}(F_{2})\ltimes F_{2}$ has a fixed point by Lemma \ref{le1}. \end{proof} \bigskip \begin{proof}[Proof of Corollary \protect\ref{1.4}] Let \begin{equation*} S=\{\varepsilon _{2}\rho _{12},\varepsilon _{1}(23),\varepsilon _{1}\varepsilon _{2}(12),(i,i+1),i=3,...,n-1,\varepsilon _{n}\}, \end{equation*} \begin{equation*} S_{1}=\{\varepsilon _{2}\rho _{12},\varepsilon _{3},\varepsilon _{1}(23),\varepsilon _{1}\varepsilon _{2}(12)\}. \end{equation*} It is straightforward to check (see \cite{var}) that any $2$ elements in $S$ or $S_{1}$ generate a finite group and thus have a common fixed point. Note that $\mathrm{Aut}(F_{n})$ contains $[\frac{n}{3}]$ copies of $\mathrm{Aut}(F_{3})$ along the diagonal. Since $S_{1}$ lies in $\mathrm{Aut}(F_{3})$, we know that $\mathrm{Aut}(F_{n})$ contains $[\frac{n}{3}]$ copies of $S_{1},$ any two of which commute. Bridson's result (cf. Lemma \ref{3.4}) implies that when $d<2[\frac{n}{3}],$ the set $S_{1}$ has a global fixed point. We will prove that any $(d+1)$-element subset of $S$ has a common fixed point. Inductively, assume that it is already proved for any $(k-1)$-element subset $(k\geq 4)$. Let \begin{equation*} S_{k}=\{\varepsilon _{2}\rho _{12},\varepsilon _{3},\varepsilon _{1}(23),\varepsilon _{1}\varepsilon _{2}(12),(i,i+1),i=3,...,k-1\}. \end{equation*} Note that any $k$-element subset of $S$ generates a finite group, except possibly $S_{k}.$ Note also that $S_{k}$ is a subset of $\mathrm{Aut}(F_{k})$ and $\mathrm{Aut}(F_{n})$ contains $[\frac{n}{k}]$ copies of $\mathrm{Aut}(F_{k})$ and thus of $S_{k}.$ Lemma \ref{3.4} implies that when $d<(k-1)[\frac{n}{k}]$ any finitely many elements of $S_{k}$ have a common fixed point. Note that \begin{equation*} d<2[\frac{n}{3}]\leq (k-1)[\frac{n}{k}] \end{equation*} for any $k\geq 4.$ Therefore, any $k$-element subset $S_{k}$ has a fixed point. The Helly theorem (cf. Lemma \ref{helly}) implies that $\langle S\rangle =\mathrm{Aut}(F_{n})$ has a fixed point. \end{proof} \section{Generalizations} In this section, we generalize the theorems proved in previous sections to a more general setting. For this purpose, we give the following definition. \begin{definition} Let $X$ be a topological space and $G$ be a subgroup of its homeomorphism group $\mathrm{Homeo}(X).$ For an integer $n>0,$ the pair (transformation group) $(X,G)$ is $n$-Helly good if \begin{enumerate} \item[(i)] any finite subgroup $H<G$ has a non-empty fixed point set $\mathrm{Fix}(H);$ and \item[(ii)] any collection of finitely many finite subgroups $\{H_{i}\}_{i\in I}$ $(H_{i}<G)$ has a non-empty intersection $\cap \mathrm{Fix}(H_{i}),$ whenever any $n+1$ of them have a nonempty intersection $\cap _{j=1}^{n+1}\mathrm{Fix}(H_{i_{j}}).$ \end{enumerate} \end{definition} In such a case, we call $X$ $n$-Helly good with respect to the group $G.$ \begin{theorem} Let $(X,G)$ be an $m$-Helly good pair. Suppose that $n>m$ and $R$ is a ring. When the semi-direct product $R^{n}\rtimes E_{n}^{\prime }(R)$ acts on $X$ by homeomorphisms in $G,$ the abelian subgroup $\{(x_{1},x_{2},\cdots ,x_{n})\mid x_{i}\in \mathbb{Z}x\subset R\}$ has a fixed point for any $x\in R$. 
In particular, any elementary matrix $e_{ij}(x)$ has a fixed point when $E_{n+1}^{\prime }(R)$ acts on $X$ by homeomorphisms in $G,$ for any $x\in R$ and $1\leq i\neq j\leq n.$ \end{theorem} \begin{proof} The proof is similar to that of Proposition \ref{prop}. For any $x\in R,$ let \begin{eqnarray*} S_{x} &=&\{(x,\sigma _{1}),(12),(23),...,(n-1,n),\sigma _{n}\} \\ &\subseteq &R^{n}\rtimes E_{n}^{\prime }(R). \end{eqnarray*} Let $H=(W,S)$ be the Coxeter group assigned to the Coxeter matrix $(m_{ij})$ defined by $m_{ii}=1,m_{12}=4,m_{i,i+1}=3$ for $2\leq i\leq n-1$, $m_{n,n+1}=4$ and all other entries $m_{ij}=\infty .$ Recall that \begin{equation*} H=\langle s_{i}\in S\mid (s_{i}s_{j})^{m_{ij}}=1,1\leq i,j\leq n+1\rangle . \end{equation*} Let $f:H\rightarrow \langle S_{x}\rangle $ be defined as \begin{eqnarray*} s_{1} &\rightarrow &((x,0,\cdots ,0),\sigma _{1}), \\ s_{i+1} &\rightarrow &(i,i+1),1\leq i\leq n-1, \\ s_{n+1} &\rightarrow &\sigma _{n}. \end{eqnarray*} It can be directly checked that $f$ is a group homomorphism. The Coxeter graph of $(W,S)$ is a path consisting of $n+1$ vertices with the edges $(s_{1},s_{2}),(s_{n},s_{n+1})$ labelled by $4.$ The subgraph spanned by any $n$ vertices is either a path of type $B_{n}$ or a disjoint union of two paths of type $B_{k}$. By the classification of finite Coxeter groups (cf. Lemma \ref{finite}), any spherical subgroup of rank $n$ is finite in $H$. This proves that the subgroup generated by any $n$ elements is finite. The definition of $m$-Helly good spaces implies that $S_{x}$ has a common fixed point $x_{0}\in X$. Note that $\{(x_{1},x_{2},\cdots ,x_{n})\mid x_{i}\in \mathbb{Z}x\subset R\}<\langle S_{x}\rangle .$ We view $R^{n}=\langle e_{12}(x_{1}),e_{13}(x_{2}),\cdots ,e_{1,n+1}(x_{n})\mid x_{i}\in R\rangle <E_{n+1}^{\prime }(R).$ Therefore, $e_{12}(x)$ has a fixed point. Since any elementary matrix $e_{ij}(x)$ is conjugate to $e_{12}(x),$ the last claim is proved. \end{proof} \bigskip Bridson \cite{brid08} obtains `The Triangle Criterion' for group actions on $\mathbb{R}$-trees: If $\Gamma $ is generated by $A_{1}\cup A_{2}\cup A_{3}$ and $H_{ij}=\langle A_{i},A_{j}\rangle $ is finite for $i,j=1,2,3$, then $\Gamma $ has property $F\mathbb{R}$ (i.e., any isometric action of $\Gamma $ on an $\mathbb{R}$-tree has a fixed point). In particular, the automorphism group of the free group $\mathrm{Aut}(F_{n})$ and the special linear group $\mathrm{SL}_{n}(\mathbb{Z}),n\geq 3,$ satisfy the triangle conditions and thus have property $F\mathbb{R}.$ We prove a similar result for a $1$-Helly good transformation group $(X,G)$: \begin{theorem} \label{hellyfix}Let $(X,G)$ be a $1$-Helly good pair. If $\Gamma $ is generated by $A_{1}\cup A_{2}\cup A_{3}$ and $H_{ij}=\langle A_{i},A_{j}\rangle $ is finite for $i,j=1,2,3$, then any $G$-action of $\Gamma $ on $X$ has a fixed point. In other words, the $\Gamma $-action on $X$ induced by any group homomorphism $f:\Gamma \rightarrow G$ has a fixed point. In particular, any $G$-action of $\mathrm{Aut}(F_{n})$ (or $\mathrm{SL}_{n}(\mathbb{Z}),n\geq 3$) on $X$ has a fixed point. \end{theorem} \begin{proof} Since each $H_{ij}$ is finite, it has a fixed point by condition (i) in the definition of a $1$-Helly good pair. Condition (ii) implies the existence of a global fixed point. \end{proof} Now we discuss some examples, which are $n$-Helly good. 
\begin{example} A $\mathrm{CAT}(0)$ space $X$ of dimension $n$ is $n$-Helly good, with respect to the isometry group $G=\mathrm{Isom}(X).$ \end{example} \begin{proof} This follows easily from the Helly theorem (see Lemma \ref{helly}) and the Bruhat-Tits fixed point theorem (see \cite{bh}, Corollary 2.8, p.179). \end{proof} \bigskip Recall that a dendrite is a connected compact metrizable space $X$ satisfying one of the following equivalent conditions: (1) Any two distinct points of $X$ can be separated by a point. (2) $X$ is locally connected and contains no simple closed curve. (3) The intersection of any two connected subsets of $X$ remains connected. (4) $X$ is a one-dimensional absolute retract. (5) $C(X)$ is projective in the category of unital $C^{\ast }$-algebras. (6) Any two points are extremities of a unique arc (i.e. a homeomorphic image of a compact interval in the real line) in $X.$ For more details, see \cite{cc}, \cite{cd}, \cite{na}. We need the following result. \begin{lemma} \label{den}Let $X$ be a dendrite. Let $G$ be a finite group. Then any action of $G$ on $X$ has a fixed point. \end{lemma} \begin{proof} When $G$ is finite, this is well-known (for example, see \cite{dm}, Corollary 5.1). \end{proof} A dendrite $X$ is actually $1$-Helly good, with respect to the homeomorphism group $\mathrm{Homeo}(X).$ This is a special case of the following result. A uniquely arcwise connected space $X$ is a Hausdorff topological space in which every pair of distinct points $x,y$ are joined by a unique arc $[x,y]$, which is a subset homeomorphic to a closed real interval. For more information, see Bowditch \cite{bow}. \begin{lemma} \label{unique}Let $X$ be a uniquely arcwise connected space. Then $X$ is $1$-Helly good, with respect to the homeomorphism group $\mathrm{Homeo}(X).$ \end{lemma} \begin{proof} Let $H<\mathrm{Homeo}(X)$ be a finite subgroup. Fix $x\in X.$ Since the orbit $Hx$ is finite and there is a unique arc $[x,hx]$ connecting $x$ and $hx$ for each $h\in H,$ the union $\cup _{h\in H}[x,hx]$ is a compact tree and thus a dendrite (cf. \cite{bow}, Lemma 1.2). Note that the union is invariant under the action of $H.$ Lemma \ref{den} says that any action of $H$ on it has a fixed point. Therefore, the action of $H$ on $X$ has a non-empty fixed point set $\mathrm{Fix}(H)$. This proves condition (i). For condition (ii), let $H_{1},H_{2},\cdots ,H_{n}$ be a collection of finitely many finite subgroups. For any two points $x,y\in \mathrm{Fix}(H_{i}),$ the unique arc $[x,y]$ is invariant under the action of $H_{i}.$ Since a finite group cannot act freely on the real line $\mathbb{R},$ every point in $[x,y]$ is a fixed point of $H_{i}.$ This proves that the fixed point set $\mathrm{Fix}(H_{i})$ is path connected. For each $i,j=1,2,\cdots ,n,$ choose $x_{i,j}\in \mathrm{Fix}(H_{i})\cap \mathrm{Fix}(H_{j}).$ Denote by $A=\{x_{i,j}:i,j=1,2,\cdots ,n\}$ and $Y=\cup _{x,y\in A}[x,y],$ which is a compact tree. The result follows from Helly's theorem for trees, where connectedness coincides with convexity. \end{proof} \bigskip Theorem \ref{last} is a corollary of Lemma \ref{unique} and Theorem \ref{hellyfix}. \bigskip \bigskip
\section{Introduction and main result} Let $n \geq 1$. Denote by $\mathbb{T}^n=\mathbb{R}^n/\mathbb{Z}^n$ the $n$-dimensional torus. For $c \in \mathbb{R}$, consider the Hamilton-Jacobi equation \begin{equation*} H(x,Du(x))=c \quad (E_0) \end{equation*} where the Hamiltonian $H: \mathbb{T}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$ is jointly continuous and coercive in the momentum. In order to build solutions of the above equation, Lions, Papanicolaou and Varadhan \cite{LPV86} have introduced a technique called \textit{ergodic approximation}. For $\lambda \in (0,1]$, consider the discounted Hamilton-Jacobi equation \begin{equation} \label{disceq} \lambda v_\lambda(x)+H(x,Dv_\lambda(x))=0 \quad (E_\lambda) \end{equation} By a standard argument, this equation has a unique viscosity solution $v_{\lambda}:\mathbb{T}^n \rightarrow \mathbb{R}$. Moreover, $(-\lambda v_{\lambda})$ converges uniformly as $\lambda$ vanishes to a constant $c(H)$ called the \textit{critical value}. Set $u_{\lambda}:=v_{\lambda}+c(H)/\lambda$. The family $(u_\lambda)$ is equi-Lipschitz, and converges uniformly along subsequences towards a solution of $(E_0)$, for $c=c(H)$. Note that $(E_0)$ may have several solutions. Recently, under the assumption that $H$ is convex in the momentum, Davini, Fathi, Iturriaga and Zavidovique \cite{DFIZ16} have proved that $(u_\lambda)$ converges uniformly (towards a solution of ($E_0$)). In addition, they proved that the solution can be characterized using Mather measures and Peierls barriers. \\ Without the convexity assumption, the question of whether $(u_\lambda)$ converges or not remained open. This paper solves negatively this question and provides a 1-dimensional continuous and coercive Hamiltonian for which $(u_\lambda)$ does not converge\footnote{Note that for time-dependent Hamilton-Jacobi equations, several counterexamples about the asymptotic behavior of solutions have been pointed out in \cite{BS00}.}. \begin{theorem} \label{main} There exists a continuous Hamiltonian $H: \mathbb{T}^1 \times \mathbb{R} \rightarrow \mathbb{R}$ that is coercive in the momentum, such that $u_\lambda$ does not converge as $\lambda$ tends to 0. \end{theorem} The example builds on a class of discrete-time repeated games called \textit{stochastic games}. The main ingredient is to establish a connection between recent counterexamples to the existence of the limit value in stochastic games (see \cite{vigeral13, Z13}) and the Hamilton-Jacobi problem\footnote{Let us mention the work \cite{KS06,KS10,IS11,Z16} as other illustrations of the use of repeated games in PDE problems.}. \\ The remainder of the paper is structured as follows. Section 2 presents the stochastic game example. Section 3 shows that in order to prove Theorem \ref{main}, it is enough to study the asymptotic behavior of the stochastic game, when the discount factor vanishes. Section 4 determines the asymptotic behavior of the stochastic game. \section{The stochastic game example} Given a finite set $A$, the set of probability measures over $A$ is denoted by $\Delta(A)$. Given $a \in A$, the Dirac measure at $a$ is denoted by $\delta_a$. 
\subsection{Description of the game} Consider the following stochastic game $\Gamma$, described by: \begin{itemize} \item A state space $K$ with two elements $\omega_{1}$ and $\omega_{-1}$: $K=\left\{\omega_1,\omega_{-1} \right\}$, \item An action set $I=\left\{0,1\right\}$ for Player 1, \item An action set $J= \left\{2-\sqrt{2}+2^{-2n}, n \geq 1 \right\} \cup \left\{2-\sqrt{2} \right\}$ for Player 2, \item For each $(k,i,j) \in K\times I \times J$, a transition $q(. \, | k,i,j) \in \Delta(K)$ defined by: \begin{eqnarray*} q(.\,|\omega_1,i,j)&=&[i j+(1-i)(1-j)] \delta_{\omega_1}+[i(1-j)+(1-i)j] \delta_{\omega_{-1}}, \\ q(.\,|\omega_{-1},i,j)&=&[i (1-j)+(1-i)j] \delta_{\omega_1}+[i j+(1-i)(1-j)] \delta_{\omega_{-1}}. \end{eqnarray*} \item A payoff function $g: K \times I \times J \rightarrow [0,1]$, defined by \begin{equation*} g(\omega_1,i,j)=i j+2(1-i)(1-j) \quad \text{and} \quad g(\omega_{-1},i,j)=-i j-2(1-i)(1-j). \end{equation*} \end{itemize} Let $k_1 \in K$. The stochastic game $\Gamma^{k_1}$ starting at $k_1$ proceeds as follows: \\ \begin{itemize} \item The initial state is $k_1$. At first stage, Player 2 chooses $j_1 \in J$ and announces it to Player 1. Then, Player 1 chooses $i_1 \in I$, and announces it to Player 2. The payoff at stage 1 is $g(k_1,i_1 ,j_1)$ for Player 1, and $-g(k_1,i_1,j_1)$ for Player 2. A new state $k_2$ is drawn from the probability $q(. \ | k_1,i_1,j_1)$ and announced to both players. Then, the game moves on to stage 2. \item At each stage $m \geq 2$, Player 2 chooses $j_m \in J$ and announces it to Player 1. Then, Player 1 chooses $i_m \in I$, and announces it to Player 2. The payoff at stage $m$ is $g(k_m,i_m,j_m)$ for Player 1, and $-g(k_m,i_m,j_m)$ for Player 2. A new state $k_{m+1}$ is drawn from the probability $q(. \ | k_m,i_m,j_m)$ and announced to both players. Then, the game moves on to stage $m+1$. \end{itemize} \begin{remark} The action set of Player 2 can be interpreted as a set of randomized actions. Indeed, imagine that Player 2 has only two actions, $1$ and $0$. These actions are called \textit{pure actions}. At stage $m$, if Player 2 chooses $j_m \in J$, this means that he plays $1$ with probability $j_m$, and $0$ with probability $1-j_m$. Denote by $\widetilde{j_m} \in \left\{0,1\right\}$ his realized action. Player 1 knows $j_m$ before playing, but does not know $\widetilde{j_m}$. If Player 1 chooses $i_m \in I$ afterwards, then the realized payoff is $g(k_m,i_m,\widetilde{j_m})$. Thus, the payoff $g(k_m,i_m,j_m)$ represents the expectation of $g(k_m,i_m,\widetilde{j_m})$. Likewise, the transition $q(. \, | k_m,i_m,j_m)$ represents the law of $q(k_m,i_m,\widetilde{j_m})$. The transition and payoff in $\Gamma$ when players play pure actions can be represented by the following matrices: \begin{table}[H] \label{game} \centering \caption{Transition and payoff functions in state $\omega_1$ and $\omega_{-1}$} \vspace{0.4cm} \begin{tabular}{ |l|*{3}{c|}} \hline $\omega_1$ & 1 & 0 \\\hline 1 & $1$ & $\overrightarrow{0}$ \\\hline 0 & $\overrightarrow{0}$ & $2$ \\\hline \end{tabular} \hspace{1cm} \begin{tabular}{|l|*{3}{c|}} \hline $\omega_{-1}$ & 1 & 0 \\\hline 1 & $-1$ & $\overleftarrow{0}$ \\\hline 0 & $\overleftarrow{0}$ & $-2$ \\\hline \end{tabular} \end{table} The left-hand side matrix stands for state $\omega_1$, and the right-hand side matrix stands for state $\omega_{-1}$. Consider the left-hand side matrix. Player 1 chooses a row (either $1$ or $0$), and Player 2 chooses a column (either $1$ or $0$). 
The payoff is given by the numbers: for instance, $g(1,1)=1$ and $g(1,0)=0$. The arrow means that when the corresponding actions are played, the state moves on to state $\omega_{-1}$; otherwise, it stays in $\omega_1$. For instance, $q(.|\omega_1,1,1)=\delta_{\omega_1}$ and $q(.|\omega_1,1,0)=\delta_{\omega_{-1}}$. The interpretation is the same for the right-hand side matrix. In the game $\Gamma$, Player 1 can play only pure actions (1 or 0), and Player 2 can play $1$ with some probability $j \in J$. \\ This matrix representation is convenient to understand the strategic aspects of the game. \end{remark} \vspace{0.3cm} Let us now define formally \textit{strategies}. In general, the decision of a player at stage $m$ may depend on all the information he has: that is, the stage $m$, and all the states and actions before stage $m$. In this paper, it is sufficient to consider a restricted class of strategies, called \textit{stationary strategies}. Formally, a stationary strategy for Player 1 is defined as a mapping $y:K \times J \rightarrow I$. The interpretation is that at stage $m$, if the current state is $k$, and Player 2 plays $j$, then Player 1 plays $y(k,j)$. Thus, Player 1 only bases his decision on the current state and the current action of Player 2. Denote by $Y$ the set of stationary strategies for Player 1. \\ A stationary strategy for Player 2 is defined as a mapping $z:K \rightarrow J$. The interpretation is that at stage $m$, if the current state is $k$, then Player 2 plays $z(k)$. Thus, Player 2 only bases his decision on the current state. Denote by $Z$ the set of stationary strategies for Player 2. \\ The sequence $(k_1,i_1,j_1,k_2,i_2,j_2,...,k_m,i_m,j_m,...) \in H_\infty:=(K \times I \times J)^{\mathbb{N}^*}$ generated along the game is called \textit{history} of the game. Due to the fact that state transitions are random, this is a random variable. The law of this random variable depends on the initial state $k_1$ and the pair of strategies $(y,z)$, and is denoted by $\mathbb{P}^{k_1}_{y,z}$. \\ We will call $g_m$ the $m$-stage random payoff $g(k_m,i_m,j_m)$. Let $\lambda \in (0,1]$. The game $\Gamma^{k_1}_\lambda$ is the game where the strategy set of Player 1 (resp. 2) is $Y$ (resp. $Z$), and the payoff is $\gamma_{\lambda}^{k_1}$, where \begin{equation*} \gamma^{k_1}_{\lambda}(y,z)=\mathbb{E}^{k_1}_{y,z}\left(\sum_{m \geq 1} (1-\lambda)^{m-1} g_m \right). \end{equation*} The goal of Player 1 is to maximize this quantity, while the goal of Player 2 is to minimize this quantity. The game $\Gamma^{k_1}_\lambda$ has a value, that is: \begin{equation*} \min_{z \in Z} \max_{y \in Y} \gamma^{k_1}_{\lambda}(y,z)=\max_{y \in Y} \min_{z \in Z} \gamma^{k_1}_{\lambda}(y,z). \end{equation*} The value of $\Gamma^{k_1}_\lambda$ is then defined as the above quantity, and is denoted by $w_{\lambda}(k_1)$. A strategy for Player 1 is \textit{optimal} if it achieves the right-hand side maximum, and a strategy for Player 2 is \textit{optimal} if it achieves the left-hand side minimum. The interpretation is that if players are rational they should play optimal strategies, and as a result Player 1 should get $w_{\lambda}(k_1)$, and Player 2 should get $-w_{\lambda}(k_1)$. \subsection{Asymptotic behavior of the discounted value} \label{asympt} As we shall see in the next section, for each $\lambda \in (0,1]$, one can associate a discounted Hamilton-Jacobi equation with $c(H)=0$, such that its solution evaluated at $x=1$ is approximately $w_\lambda(\omega_1)$, for $\lambda$ small enough. 
Thus, the asymptotic behavior of this quantity needs to be studied. \\ \\ Define $\lambda_n:=\displaystyle 2^{-2n}\left(\frac{3}{4}-\frac{1}{\sqrt{2}} \right)^{-1}$ and $\mu_n:=\displaystyle 2^{-2n-1}\left(\frac{3}{4}-\frac{1}{\sqrt{2}} \right)^{-1}$. \begin{proposition} \label{propsto} The following hold: \begin{enumerate}[(i)] \item \label{prop2} $w_{\lambda}(\omega_{-1}) \leq w_{\lambda}(\omega_1) \leq w_{\lambda}(\omega_{-1})+2$ \item \label{key} $\lim_{n \rightarrow +\infty} w_{\lambda_n}(\omega_1)=1/\sqrt{2}$ and $\liminf_{n \rightarrow +\infty} w_{\mu_n}(\omega_1) >1/\sqrt{2}$. Consequently, $(w_{\lambda}(\omega_1))$ does not have a limit when $\lambda \rightarrow 0$. \end{enumerate} \end{proposition} The proof of the above proposition is done in Section \ref{proofprop}. As far as the proof of Theorem \ref{main} is concerned, the key point is \ref{key}. Let us give here some intuition for this result. Consider the game $\Gamma'$ that is identical to $\Gamma$, except that Player 2's action set is $[0,1]$ instead of $J$. For each $\lambda \in (0,1]$, denote by $w'_{\lambda}$ its discounted value. Because $J \subset [0,1]$, Player 2 is better off in the game $\Gamma'$ compared to the game $\Gamma$: $w'_\lambda \leq w_\lambda$. Interpret now $\Gamma$ and $\Gamma'$ as games with randomized actions, as in Table \ref{game}. As $\lambda$ vanishes, standard computations show that an (almost) optimal stationary strategy for Player 2 in $\Gamma'^{\omega_1}_{\lambda}$ is to play $1$ with probability $p^*(\lambda):=2-\sqrt{2}+\left(\frac{3}{4}-\frac{1}{\sqrt{2}}\right) \lambda$ in both states $\omega_1$ and $\omega_{-1}$, and $(w'_{\lambda}(\omega_1))$ converges to $\frac{1}{\sqrt{2}}$. \\ Moreover, for all $n \geq 1$, $p^*(\lambda_n) \in J$. Thus, this strategy is available for Player 2 in $\Gamma$, and consequently $w_{\lambda_n}(\omega_1)=w'_{\lambda_n}(\omega_1)+O(\lambda_n)$, as $n$ tends to infinity. \\ On the other hand, for all $n \geq 1$, $p^*(\mu_n) \notin J$, and the distance of $p^*(\mu_n)$ to $J$ is at least $\left(\frac{3}{4}-\frac{1}{\sqrt{2}}\right) \mu_n/2$. Consequently, the distance of the optimal strategy in $\Gamma^{\omega_1}_{\mu_n}$ to the optimal strategy in $\Gamma'^{\omega_1}_{\mu_n}$ is of order $\mu_n$. This produces a payoff difference of order $\mu_n$ at each stage, and thus of order 1 in the whole game. Thus, Player 2 is significantly disadvantaged in $\Gamma^{\omega_1}_{\mu_n}$ compared to $\Gamma'^{\omega_1}_{\mu_n}$, and the difference between $w_{\mu_n}(\omega_1)$ and $w'_{\mu_n}(\omega_1)$ is of order 1. \\ \begin{remark} As we shall see in the following section, we have $\lim_{\lambda \rightarrow 0} \lambda w_{\lambda}(\omega_1)=\lim_{\lambda \rightarrow 0} \lambda w_{\lambda}(\omega_{-1})=0$. \end{remark} The next section explains how to derive the counterexample and Theorem \ref{main} from Proposition \ref{propsto}. \section{Link with the PDE problem and proof of Theorem \ref{main}} The following proposition expresses $w_\lambda$ as the solution of a functional equation called \textit{Shapley equation}. \begin{proposition} \label{shapley} Let $\lambda \in (0,1]$ and $u_\lambda:=(1+\lambda)^{-1}w_{\lambda/(1+\lambda)}$. 
For each $r \in \left\{-1,1\right\}$, the following two equations hold: \begin{enumerate}[(i)] \item \label{shapleyv} \begin{eqnarray*} w_\lambda(\omega_r)&=&\min_{j \in J} \max_{i \in I} \left\{ g(\omega_r,i,j)+(1-\lambda) \left[q(\omega_{r} | \omega_r,i,j) w_\lambda(\omega_r) +q(\omega_{-r} | \omega_r,i,j) w_{\lambda}(\omega_{-r})\right] \right\} \end{eqnarray*} \item \label{shapleyu} \begin{equation*} \lambda u_\lambda(\omega_r)=\min_{j \in J} \max_{i \in I} \left\{ g(\omega_r,i,j)+ q(\omega_{-r}|\omega_r,i,j) \left[u_\lambda(\omega_{-r})-u_\lambda(\omega_r) \right]\right\} \end{equation*} \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item The intuition is the following. Consider the game $\Gamma^{\omega_r}_{\lambda}$. At stage 1, the state is $\omega_r$. The term $g$ represents the current payoff, and the term $(1-\lambda)[...]$ represents the future optimal payoff, that is, the payoff that Player 1 should get from stage 2 to infinity. Thus, this equation means that the value of $\Gamma^{\omega_r}_{\lambda}$ coincides with the value of the one-stage game, where the payoff is a combination of the current payoff and the future optimal payoff. For a formal derivation of this type of equation, we refer to \cite[VII.1., p. 392]{MSZ}. \item Evaluating the previous equation at $\lambda/(1+\lambda)$ yields \begin{eqnarray*} w_{\frac{\lambda}{1+\lambda}}(\omega_r)&=&\min_{j \in J} \max_{i \in I} \left\{g(\omega_r,i,j)+ \frac{1}{1+\lambda}\left[q(\omega_{r} | \omega_r,i,j) w_{\frac{\lambda}{1+\lambda}}(\omega_r) +q(\omega_{-r}|\omega_r,i,j) w_{\frac{\lambda}{1+\lambda}}(\omega_{-r})\right] \right\} \end{eqnarray*} Using the fact that $q(\omega_{r} | \omega_r,i,j)=1-q(\omega_{-r}|\omega_r,i,j)$ yields the result. \end{enumerate} \end{proof} For $p \in \mathbb{R}$, define $H_1:\mathbb{R} \rightarrow \mathbb{R}$ and $H_{-1}:\mathbb{R} \rightarrow \mathbb{R}$ by \begin{equation*} H_1(p):= \left\{ \begin{array}{ll} \displaystyle -\min_{j \in J} \max_{i \in I} \left\{ g(\omega_1,i,j)-p \cdot [i(1-j)+(1-i)j] \right\}, & \mbox{if} \ |p| \leq 2, \\ H_1\left(2\frac{p}{|p|}\right)+|p|-2 & \mbox{if} \ |p| > 2, \end{array} \right. \end{equation*} \begin{equation*} H_{-1}(p):= \left\{ \begin{array}{ll} \displaystyle -\min_{j \in J} \max_{i \in I} \left\{ g(\omega_{-1},i,j)+p \cdot [i(1-j)+(1-i)j] \right\}, & \mbox{if} \ |p| \leq 2, \\ H_{-1}\left(2\frac{p}{|p|}\right)+|p|-2 & \mbox{if} \ |p| > 2. \end{array} \right. \end{equation*} For $x \in [-1,1]$ and $p \in \mathbb{R}$, let \begin{equation} \label{hamiltonian} H(x,p):=|x| H_1(\left|p\right|)+(1-|x|)H_{-1}(\left|p\right|). \end{equation} Note that the definition of $H_1$ and $H_{-1}$ for $|p| > 2$ ensures that $\lim_{|p| \rightarrow +\infty} H_1(p)=\lim_{|p| \rightarrow +\infty} H_{-1}(p)=+\infty$, thus $\lim_{|p| \rightarrow +\infty} H(x,p)=+\infty$. Note also that $H_1$ is increasing on $[-2,2]$ and $H_{-1}$ is decreasing on $[-2,2]$. \\ Thanks to Proposition \ref{shapley} \ref{shapleyu} and Proposition \ref{propsto} \ref{prop2}, we have $\lambda u_\lambda(\omega_1)+H_1(u_\lambda(\omega_1)-u_\lambda(\omega_{-1}))=0$ and $\lambda u_\lambda(\omega_{-1})+H_{-1}(u_\lambda(\omega_1)-u_\lambda(\omega_{-1}))=0$. \\ \\ For $x \in [-1,1]$, let $u_{\lambda}(x)=|x| u_{\lambda}(\omega_1)+(1-|x|) u_{\lambda}(\omega_{-1})$. Let $x \in (-1,1) \setminus \left\{0\right\}$. 
Proposition \ref{propsto} \ref{prop2} implies that $w_{\lambda}(\omega_{-1}) \leq w_{\lambda}(\omega_1)$, thus $u_{\lambda}(\omega_{-1}) \leq u_{\lambda}(\omega_1)$ and $|Du_{\lambda}(x)|=u_{\lambda}(\omega_1)-u_{\lambda}(\omega_{-1})$. Consequently, Proposition \ref{shapley} \ref{shapleyu} yields \begin{equation} \label{HJBex} \lambda u_\lambda(x)+H(x,Du_\lambda(x))=0. \end{equation} Note that the above equation is identical to equation (\ref{disceq}). The reason why we use the notation $u_\lambda$ and not $v_\lambda$ is that, as we shall see, $c(H)=0$, thus $u_\lambda$ coincides with $v_\lambda$. Extend $u_\lambda$ and $H(.,p)$ ($p \in \mathbb{R}$) as 2-periodic functions defined on $\mathbb{R}$. The Hamiltonian $H$ is continuous and coercive in the momentum, and the above equation holds in a classical sense for all $x \in \mathbb{R} \setminus \mathbb{Z}$. \\ For $x \in \mathbb{R}$, denote by $D^+u_\lambda(x)$ (resp., $D^- u_\lambda(x)$) the super-differential (resp., the sub-differential) of $u_\lambda$ at $x$. Let us show that $u_{\lambda}$ is a viscosity solution of (\ref{HJBex}) on $\mathbb{R}$. By 2-periodicity, it is enough to show that it is a viscosity solution at $x=0$ and $x=1$. \\ Let us start with $x=0$. We have $D^+ u_{\lambda}(0)=\emptyset$ and $D^- u_{\lambda}(0)=[u_\lambda(\omega_{-1})-u_\lambda(\omega_{1}),u_\lambda(\omega_{1})-u_\lambda(\omega_{-1})]$. \\ Let $p \in D^- u_{\lambda}(0)$. Then $H_{-1}(|p|) \geq H_{-1}(u_\lambda(\omega_{1})-u_\lambda(\omega_{-1}))=-\lambda u_{\lambda}(\omega_{-1})$, thus $\lambda u_{\lambda}(0)+H(0,p) \geq 0$. Consequently, $u_\lambda$ is a viscosity solution at $x=0$. \\ \\ Consider now the case $x=1$. We have $D^+ u_{\lambda}(1)=[u_\lambda(\omega_{-1})-u_\lambda(\omega_{1}),u_\lambda(\omega_{1})-u_\lambda(\omega_{-1})]$ and $D^- u_{\lambda}(1)=\emptyset$. \\ Let $p \in D^+ u_{\lambda}(1)$. Then $H_{1}(|p|) \leq H_{1}(u_\lambda(\omega_{1})-u_\lambda(\omega_{-1}))=-\lambda u_{\lambda}(\omega_1)$, thus $\lambda u_{\lambda}(1)+H(1,p) \leq 0$. Consequently, $u_\lambda$ is a viscosity solution at $x=1$. \\ Let us now conclude the proof of Theorem \ref{main}. Because $H$ is 2-periodic, equation $(\ref{HJBex})$ can be considered as written on $\mathbb{T}^1$. As noticed before, equation (\ref{HJBex}) is identical to equation (\ref{disceq}). Therefore, as stated in the introduction, $-\lambda u_{\lambda}$ converges to $c(H)$. Proposition \ref{propsto} \ref{key} implies that $(-\lambda_n u_{\lambda_n}(1))$ converges to 0, thus $c(H)=0$. Still by Proposition \ref{propsto} \ref{key}, $(u_\lambda(1))$ does not have a limit when $\lambda$ tends to 0: Theorem \ref{main} is proved. \section{Proof of Proposition \ref{propsto}} \label{proofprop} \subsection{Proof of \ref{prop2}} Consider Proposition \ref{shapley} \ref{shapleyv} for $r=1$. Take $j=1/2 \in J$. It yields \begin{eqnarray} \nonumber w_{\lambda}(\omega_1) &\leq& \max_{i \in I} \left\{ 1+(1-\lambda) \left(\frac{1}{2}w_{\lambda}(\omega_1)+\frac{1}{2} w_{\lambda}(\omega_{-1}) \right) \right\} \\ &=& 1+\frac{1}{2}(1-\lambda)\left(w_{\lambda}(\omega_1)+w_{\lambda}(\omega_{-1}) \right) \label{prog1}. \end{eqnarray} Take $i=1/2$. This yields \begin{equation} \label{prog2} w_{\lambda}(\omega_1) \geq \frac{1}{2}+\frac{1}{2}(1-\lambda)\left(w_{\lambda}(\omega_1)+w_{\lambda}(\omega_{-1}) \right). 
\end{equation} For $r=-1$, taking $j=1/2$ and then $i=1/2$ produces the following inequalities: \begin{equation} \label{prog3} w_{\lambda}(\omega_{-1}) \leq -\frac{1}{2}+\frac{1}{2}(1-\lambda)\left(w_{\lambda}(\omega_1)+w_{\lambda}(\omega_{-1}) \right), \end{equation} and \begin{equation} \label{prog4} w_{\lambda}(\omega_{-1}) \geq -1+\frac{1}{2}(1-\lambda)\left(w_{\lambda}(\omega_1)+w_{\lambda}(\omega_{-1}) \right). \end{equation} Combining (\ref{prog2}) and (\ref{prog3}) yields $w_{\lambda}(\omega_1) \geq w_{\lambda}(\omega_{-1})+1 \geq w_{\lambda}(\omega_{-1})$. Combining (\ref{prog1}) and (\ref{prog4}) yields $w_{\lambda}(\omega_{-1}) \geq w_{\lambda}(\omega_{1})-2$, and \ref{prop2} is proved. \subsection{Proof of \ref{key}} For $(i,i') \in \left\{0,1\right\}^2$, consider the strategy $y$ of Player 1 that plays $i$ in $\omega_1$ and $i'$ in $\omega_{-1}$ (regardless of Player 2's actions), and the strategy $z$ of Player 2 that plays $a$ in state $\omega_1$, and $b$ in state $\omega_{-1}$. Denote $\gamma_{\lambda}^{i,i'}(a,b):=\gamma_{\lambda}^{\omega_1}(y,z)$ (resp., $\widetilde{\gamma}_{\lambda}^{i,i'}(a,b):=\gamma_{\lambda}^{\omega_{-1}}(y,z)$), the payoff in $\Gamma_{\lambda}^{\omega_1}$ (resp., $\Gamma_{\lambda}^{\omega_{-1}}$), when $(y,z)$ is played. \begin{proposition} The following hold: \begin{enumerate} \item \begin{equation*} \gamma^{0,0}_{\lambda}(a,b)=\frac{-2(a-b-\lambda+b \lambda)}{\lambda(a+b+\lambda-a \lambda -b \lambda)} \end{equation*} \begin{equation*} \gamma^{1,1}_{\lambda}(a,b)=-\frac{a-b+\lambda b}{\lambda(a+b+\lambda-a \lambda -b \lambda-2)} \end{equation*} \begin{equation*} \gamma^{1,0}_{\lambda}(a,b)=\frac{2a+2b+2\lambda-a b -a \lambda-2b\lambda+a b \lambda-2}{\lambda(b-a+\lambda a -b \lambda+1)} \end{equation*} \begin{equation*} \gamma^{0,1}_{\lambda}(a,b)=-\frac{2a+2b-a b -2 b \lambda+ a b \lambda-2}{\lambda(a-b-a \lambda +b \lambda +1)} \end{equation*} \item \begin{itemize} \item $\gamma^{0,0}_{\lambda}$ is decreasing with respect to $a$ and increasing with respect to $b$. \item $\gamma_{\lambda}^{1,1}$ is increasing with respect to $a$ and decreasing with respect to $b$. \item $\gamma_{\lambda}^{1,0}$ is increasing with respect to $a$ and $b$. \item $\gamma_{\lambda}^{0,1}$ is decreasing with respect to $a$ and $b$. \end{itemize} \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item The payoffs $\gamma^{0,0}_{\lambda}(a,b)$ and $\widetilde{\gamma}_{\lambda}^{0,0}(a,b)$ satisfy the following recursive equations: \begin{eqnarray*} \gamma^{0,0}_{\lambda}(a,b)&=&a(1-\lambda)\widetilde{\gamma}_{\lambda}^{0,0}(a,b)+(1-a) (2+(1-\lambda) \gamma^{0,0}_{\lambda}(a,b)) \\ \widetilde{\gamma}^{0,0}_{\lambda}(a,b)&=&b(1-\lambda) \gamma_{\lambda}^{0,0}(a,b)+(1-b) (-2+(1-\lambda) \widetilde{\gamma}^{0,0}_{\lambda}(a,b)) \end{eqnarray*} Combining these two relations gives the first equality. The three other equalities can be derived in a similar fashion. \item These monotonicity properties are simply obtained by differentiating $\gamma^{i,i'}_\lambda$ with respect to $a$ and $b$. \end{enumerate} \end{proof} For $\lambda \in (0,1]$, set $\displaystyle p^*(\lambda):=2-\sqrt{2}+\left(\frac{3}{4}-\frac{1}{\sqrt{2}}\right) \lambda$. Define a strategy $y$ of Player 1 in the following way: \begin{itemize} \item in state $\omega_1$, play $0$ if $j \leq p^*(\lambda)$, play 1 otherwise, \item in state $\omega_{-1}$, play $1$ if $j \leq p^*(\lambda)$, play 0 otherwise. \end{itemize} The rationale behind this strategy can be found in Section \ref{asympt}. 
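For concreteness, let us record the elementary computation behind this choice of threshold (a direct check from the definitions of $J$, $p^*$ and of the sequences $\lambda_n$, $\mu_n$ of Section \ref{asympt}; it is not needed for the formal proof below): \begin{equation*} p^*(\lambda_n)=2-\sqrt{2}+\left(\frac{3}{4}-\frac{1}{\sqrt{2}}\right)\lambda_n=2-\sqrt{2}+2^{-2n} \in J, \end{equation*} whereas \begin{equation*} p^*(\mu_n)=2-\sqrt{2}+2^{-2n-1} \notin J, \end{equation*} and the closest element of $J$, namely $2-\sqrt{2}+2^{-2n-2}$, lies at distance $2^{-2n-2}=\left(\frac{3}{4}-\frac{1}{\sqrt{2}}\right)\mu_n/2$ from $p^*(\mu_n)$. Thus, along $(\lambda_n)$ Player 2 can play exactly $p^*(\lambda_n)$, while along $(\mu_n)$ every action in $J$ is at distance of order $\mu_n$ from $p^*(\mu_n)$. 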
\\ For all $n \geq 1$, recall that \begin{equation*} \lambda_n:= \frac{2^{-2n}}{\displaystyle \frac{3}{4}-\frac{1}{\sqrt{2}}} \quad \text{and} \quad \mu_n:=\frac{2^{-2n-1}}{\displaystyle \frac{3}{4}-\frac{1}{\sqrt{2}}}. \end{equation*} \begin{proposition} The following hold: \begin{enumerate} \item \begin{equation*} \lim_{n \rightarrow +\infty} \min_{z \in Z} \gamma^{\omega_1}_{\lambda_n}(y,z)=\frac{1}{\sqrt{2}} \end{equation*} \item \begin{equation*} \lim_{n \rightarrow +\infty} \min_{z \in Z} \gamma^{\omega_1}_{\mu_n}(y,z)=\frac{5}{2 \sqrt{2}}-1>\frac{1}{\sqrt{2}} \end{equation*} \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item For all $(i,i') \in \left\{0,1\right\}^2$, \begin{equation*} \lim_{n \rightarrow +\infty} \gamma^{i,i'}_{\lambda_n}(p^*(\lambda_n),p^*(\lambda_n)) = \displaystyle \frac{1}{\sqrt{2}}, \end{equation*} and the result follows. \item Let $z$ be a strategy of Player 2, and $a=z(\omega_1)$ and $b=z(\omega_{-1})$. \\ Note that the interval $(p^*(\mu_n/2),p^*(2\mu_n))$ does not intersect $J$. The following cases are distinguished: \begin{case}{$a \leq p^*(\mu_n)$ and $b \leq p^*(\mu_n)$, thus $a \leq p^*(\mu_n/2)$ and $b \leq p^*(\mu_n/2)$} \end{case} We have $\gamma^{\omega_1}_{\mu_n}(y,z)=\gamma^{0,1}_{\mu_n}(a,b) \geq \gamma^{0,1}_{\mu_n}(p^*(\mu_n/2),p^*(\mu_n/2)) \underset{n \rightarrow +\infty}{\rightarrow} \displaystyle \frac{5}{4} \sqrt{2}-1$ \begin{case}{$a \leq p^*(\mu_n)$ and $b \geq p^*(\mu_n)$, thus $a \leq p^*(\mu_n/2)$ and $b \geq p^*(2\mu_n)$} \end{case} We have $\gamma^{\omega_1}_{\mu_n}(y,z)=\gamma^{0,0}_{\mu_n}(a,b) \geq \gamma^{0,0}_{\mu_n}(p^*(\mu_n/2), p^*(2\mu_n)) \underset{n \rightarrow +\infty}{\rightarrow} \displaystyle - \frac{1+2 \sqrt{2}}{8(-2+\sqrt{2})}$ \begin{case}{$a \geq p^*(\mu_n)$ and $b \leq p^*(\mu_n)$, thus $a \geq p^*(2\mu_n)$ and $b \leq p^*(\mu_n/2)$} \end{case} We have $\gamma^{\omega_1}_{\mu_n}(y,z)=\gamma^{1,1}_{\mu_n}(a,b) \geq \gamma^{1,1}_{\mu_n}(p^*(2\mu_n), p^*(\mu_n/2)) \underset{n \rightarrow +\infty}{\rightarrow} (-1/16) \displaystyle \frac{-25+14 \sqrt{2}}{\sqrt{2}-1}$ \begin{case}{$a \geq p^*(\mu_n)$ and $b \geq p^*(\mu_n)$, thus $a \geq p^*(2\mu_n)$ and $b \geq p^*(2\mu_n)$} \end{case} We have $\gamma^{\omega_1}_{\mu_n}(y,z)=\gamma^{1,0}_{\mu_n}(a,b) \geq \gamma^{1,0}_{\mu_n}(p^*(2\mu_n), p^*(2\mu_n)) \underset{n \rightarrow +\infty}{\rightarrow} \displaystyle -2+2\sqrt{2}$ \\ Among these cases, the smallest limit is $\frac{5}{4}\sqrt{2}-1$, and the result follows. \end{enumerate} \end{proof} \section*{Acknowledgments} The author is very grateful to Pierre Cardaliaguet, Andrea Davini, Abraham Neyman, Sylvain Sorin and Maxime Zavidovique for helpful discussions.
\section{Introduction} The distribution of charmonium states in $B$ meson decays is of special interest, because it provides a testing ground for color suppression in $B$ decays already attainable in the present experiments. At the $b$ mass scale, a leading order calculation generates a consistent pattern of color suppression for the production of $S$-wave charmonium states. In the context of the `color singlet mechanism' the production is related to the decay of a $b$ quark in the $B$ meson, at short distances, into a color singlet $c\bar{c}$ pair plus other quarks and gluons. The $c$ and $\bar{c}$ have almost equal momenta and reside in the appropriate angular-momentum state. This subprocess is dominated by the color suppressed internal $W$-exchange diagram, where hard gluon exchanges lead to an effective neutral current \cite{Bodwin}. Deviations from this leading order `singlet mechanism', which arise from relativistic corrections and soft gluon induced fragmentation of the $c\bar{c}$ into charmonium, affect the normalization but not the structure of the amplitude $b\rightarrow J/\psi\,X$ (in which $X$ sums over hadronic states) \cite{Ko}. Several authors made predictions for the branching ratio of the direct decay $b\rightarrow J/\psi\,X$ to leading order in $\alpha_s$ [1--5]. Recently a next-to-leading order QCD calculation was reported \cite{Berg}. The authors pointed out that the process under consideration cannot be explained presently by a standard application of perturbative QCD. The difficulties entering the analysis arise from the strong suppression of the leading order color singlet Wilson coefficient, causing considerable cancellations in three different orders of $\alpha_s$. Alternatively one may use a phenomenological factorization approach and take into account deviations from this prescription by replacing the color singlet renormalization coefficient by an effective neutral current coefficient $a_2$, which has to be determined from experimental data in a model dependent way. Both ARGUS and CLEO reported on inclusive $B$ decays to $J/\psi$, where they identified a sizable component of decays with three or more particles in the final state \cite{Argus1, Cleo3}. In a recent publication \cite{Cleo1} CLEO presented an analysis based on a data sample, which is one order of magnitude larger than those of previous studies, corresponding to an error reduction by a factor of $2.4\,$. A comparison between theory and experiment requires the branching ratio resulting from {\it direct} decays. Therefore the group corrected the measured $J/\psi$ spectrum for the `feed-down' modes $B\rightarrow \psi(2s)\,X$ and $B\rightarrow \chi_{c1}\,X$. As a result they found the direct branching ratio $(0.80\pm 0.08)\%$. A theoretical analysis carried out by the CLEO group following Ref.~\cite{Bodwin} yields $0.75\%$, in good agreement with the measurement, where the phenomenological factorization prescription has been applied using a value $|a_2|=0.23\,$. The latter value has to be modified when incorporating bound state effects for the initial $B$ meson, which have to be included in order to gain a theoretical prediction of the $J/\psi$ momentum spectrum within an inclusive approach. So far there is not available a detailed fit to experimental data of momentum spectra obtained from theory. Palmer and Stech have made a preliminary attempt using a simple wave function formalism \cite{Pal-St}. In this paper we will investigate two different approaches in order to account for the experimental results. 
As stated above a sizable component of the momentum spectrum is due to nonresonant multi-particle final states. Consequently, for a wide range of phase-space an inclusive description using quark-hadron duality, which was extensively applied in the inclusive {\it semileptonic} decays of $B$ mesons, will be appropriate. The semileptonic decays involve large energies and momentum transfers of the weak current over most of the phase-space. Several groups noticed that these kinematic regions probe the light-cone behaviour of the currents, for which the methods of deep inelastic scattering are valid. Various approaches are available which derive useful results describing the lepton spectrum in the decay $B\rightarrow X_{u(c)}e\nu$. These approaches use different formalisms, but they still follow two basic steps.\\[3mm] (i) The decay $b\rightarrow u(c)$ of quarks propagating as free particles is considered and the resulting spectrum is folded with the momentum distribution of the $b$ quark, yielding moments of the quark distribution \cite{Jin1, Jin2}. A similar approach is given by the ACCMM model \cite{Alta}, in which one accounts for the binding effects by treating the spectator quark as a particle on the mass shell with nonzero momentum and averages over this momentum. \\[3mm] (ii) After the introduction of the point-like behaviour one still has to calculate the expectation value of the bilocal transition operator in the initial hadronic state to obtain the width of the decay. In this context the new development is the application of the operator product expansion (OPE), which involves an expansion in inverse powers of the heavy quark mass $m_b$ incorporating the formalism of the Heavy Quark Effective Theory (HQET) \cite{HQET}. The calculation of the lepton energy spectra requires a modified expansion in powers of $1/[(1-y)m_b]$ with $y$ being the normalized lepton energy [18--23].\\[3mm] The analytic form of the structure function of the heavy quark within the $B$ meson controls the endpoint behaviour of the leptonic spectrum, since in this region the OPE does not converge. It has been shown that the prediction of HQET far from the endpoint gives approximately the same shape of the spectrum as the ACCMM model with a Fermi motion parameter $p_f\simeq 0.3$ GeV \cite{Bigi4}. Yet the model dependence in this region is quite small \cite{Lisa}. Therefore it is of great relevance to test the approaches with decay channels of the $B$ meson different from the semileptonic one in order to extract the momentum distribution function of the heavy quark in a direct way and then compare it with the distribution function obtained in semileptonic decays. The inclusive decay $B\rightarrow J/\psi\,X$ provides a well-suited testing ground since, as we shall show, in this process the dependence on the structure function of the $b$ quark is more direct, i.e., the $J/\psi$ momentum spectrum is proportional to the distribution function. A QCD-based analysis of the decay using HQET is only of limited validity, since due to the smaller energy release the convergence of the OPE is less reliable than for semileptonic decays. Therefore one has to resort to a formalism in which the structure function can be modeled in terms of parameters that may be obtained from experiment. We will apply two different approaches. In Section \ref{IPM-sec} we present a field-theoretical version of the inclusive parton model, which two of us together with Jin already applied to semileptonic decays \cite{Jin1, Jin2}. 
In Section \ref{AC-sec} we calculate the $J/\psi$ momentum spectrum within the framework of the ACCMM model. In each of the analyses we determine the distribution parameter of the structure function from a comparison to the recent CLEO data. The overall normalization of the theoretical spectra further provides information respecting the value of the effective color singlet coefficient $|a_2|$. Finally we use our results to extract the value of $|V_{ub}/V_{cb}|$ in the ACCMM model from the semileptonic decay channel of the $B$. This procedure requires some additional remarks which are given in Section \ref{remarks}. In Section \ref{polarization} we compare our results with experimental data for the polarization of the $J/\psi$. The summary can be found in Section \ref{summary}. \section{\boldmath $B\rightarrow J/\psi\,X$ \unboldmath in the Inclusive Parton Model (IPM) \label{IPM-sec}} \subsection{Calculation of the Differential Branching Ratio} In order to investigate the interaction which induces charmonium production in $B$ decays, we start at the quark level at a high-energy scale where the Hamiltonian is known. This is renormalized to lower energies to produce an effective four-fermion interaction \begin{equation} {\cal H}_{ef\hspace{-0.5mm}f}=\frac{G_F}{\sqrt{2}}V^{ }_{cb}V_{cs}^* \left[ \left(c_2+\frac{1}{3}c_1\right) \bar{s}\gamma_\mu^L b\,\bar{c} \gamma^\mu_L c +\frac{1}{2}c_1\bar{c}\gamma_\mu^L\lambda^i c\,\bar{s}\gamma^\mu_L \lambda^i b\right]+\{s\rightarrow d\} \, , \label{Leff} \end{equation} $\gamma_\mu^L=\gamma_\mu (1-\gamma_5)$, where operators arising from penguin and box diagrams have been neglected. The first term is a color singlet, the second a color octet operator. The renormalization functions (Wilson coefficients) $c_i(\mu)$ have been computed up to the next-to-leading order corrections in Ref.~\cite{Buras}. The effective interaction (\ref{Leff}) is evaluated using a factorization prescription by which the amplitude can be written as a product of matrix elements of current operators. Deviations from this prescription are parametrized by substituting the color suppressed singlet coefficient by a free parameter $a_2$ which is written in terms of the effective number of colors $(1/\xi)$ and has to be determined by model-dependent fits to the experimental data, \begin{equation} \left(c_2+\frac{1}{N_c}c_1\right)\hspace{5mm} \rightarrow \hspace{5mm} a_2=c_2+\xi c_1 \, . \end{equation} $a_2$ is equivalent to the coefficient introduced for type-II processes in the factorization model of BSW \cite{Wirbel} for {\it exclusive} decays. Using CLEO data from two-body decay modes with $J/\psi$ mesons in the final state, an analysis within the BSW model yields \cite{a2exp} \begin{equation} |a_2|=0.26\pm 0.01 \pm 0.01 \pm 0.02 \, , \label{a2} \end{equation} where the second systematic error is due to the $B$ meson production fractions and lifetimes. Theoretical uncertainties mainly due to hadronic form factors are not included. A corresponding analysis \cite{Browder} using the model of {\it Neubert et al.} \cite{Neubert} provides a central value $|a_2|=0.23$, whereas that of {\it Deandrea et al.} \cite{Deandrea} yields $|a_2|=0.25$ (for $|V_{cb}|=0.041$, $\tau_B=1.44$ ps and $f_{D(D^*)}=220$ MeV). In this paper we extend the factorization model for exclusive two-body decays to an inclusive picture at the quark level. 
This is reasonable because, as was stated in Ref.~\cite{Waldi}, many-body final states will most likely start as two color singlet quark antiquark pairs, including intermediate massive resonances, where strong phases from final state interactions disappear in the sum of all states. The authors of Ref.~\cite{Waldi} made use of this inclusive picture to determine the lifetime ratio $\tau(B^+)/\tau(B^0)$. As we intend to reproduce not only the branching ratio but also the momentum spectrum for the decay $B\rightarrow J/\psi\,X$, we have to incorporate the momentum distribution of the $b$ quark in the $B$ meson. In our analysis we will let the effective color singlet coefficient $|a_2|$ float within the range which is covered by the various models for exclusive modes. We consider the matrix element of the Hamiltonian (\ref{Leff}) for our process (in the explicit notation we only refer to the dominant decay involving an $s$ quark in the final state), \begin{equation} \langle J/\psi\, X_s| {\cal H}_{ef\hspace{-0.5mm}f} |B\rangle= a_2 \frac{G_F}{\sqrt{2}}V^{ }_{cb}V_{cs}^*\langle J/\psi|\bar{c} \gamma_\mu^L c|0\rangle \langle X_s| \bar{s}\gamma^\mu_L b|B\rangle\, , \label{mel} \end{equation} where we make use of the phenomenological factorization prescription. The second term of Eq.~(\ref{Leff}) does not contribute between color singlet states. For the matrix element of the $\bar{c}c(1S)$ at the origin we use the current-identity for pure vector-like states, \begin{equation} \langle 0|\bar{c}(0)\gamma_\mu^L c(0)|J/\psi\rangle = \varepsilon_{\mu} f_\psi M_{\psi} \, . \label{ci} \end{equation} This identification is valid, because the time-scale of the interaction to be considered is larger than the scale for the formation of the singlet state ($t_{int}>1/M_\psi$). $f_\psi$ is the $J/\psi$ decay constant and can, to leading order in the relative heavy quark velocity $v$, be related to the radial part of its nonrelativistic wave function. In principle one can determine $f_\psi$ from the electromagnetic decay $J/\psi\rightarrow e^+ e^-$, but in practice one is confronted with the fact that the higher order QCD corrections to the decay rate are unknown; these might be of great relevance in view of the large corrections already present at leading order. Within the studies of {\it exclusive} two-body decays $B\rightarrow J/\psi\,M$ \cite{Neubert} the central value of the decay constant was fixed at $f_\psi=0.384$ GeV, which results from the omission of any QCD corrections (as well as of relativistic corrections to the wave function of the $J/\psi$). This analysis refers to the 1990 Particle Data Group value for the branching ratio of $J/\psi\rightarrow e^+ e^-$. Therefore when extracting $|a_2|$ from {\it inclusive} $B$ decays we will use the above value in order to compare the result with Eq.~(\ref{a2}). This procedure requires some additional remarks which concern both the studies of exclusive and inclusive decays. Relating the parameter which contains the nonperturbative effects in the production of $1S$-charmonium to the $J/\psi$ decay constant implies the neglect of color octet contributions to the fragmentation process in the explicit calculation. In Ref.~\cite{Ko} it was argued in the framework of nonrelativistic QCD (NRQCD) that the fragmentation of a $(\bar{c}c)_8(^3S_1)$ state into a $J/\psi$ at long distances may give sizable contributions to the $B\rightarrow J/\psi\,X$ decay rate. 
This feature occurs, despite the fact that the color octet operator is of the order ${\cal O}(v^4)$ respecting the velocity counting rules in the NRQCD, because of the large suppression of the color singlet Wilson coefficient. As the structure of the amplitude for the effective point-like decay $b\rightarrow J/\psi\,X$ remains unchanged, considering nonleading effects in the fragmentation process only affects the normalization of the $B\rightarrow J/\psi\,X$ decay spectrum. Using the leading order value $f_{\psi}=0.384$ GeV therefore means that the effective neutral current coefficient $|a_2|$, which is determined from a comparison with experimental data, includes contributions due to the charmonium fragmentation. Consequently, since these contributions are different for various charmonium states \cite{Ko}, the universality of $|a_2|$ is limited to $B$ decays involving fixed $S$-wave resonances.\\ \\ Reducing the matrix element (\ref{mel}) of the mesonic decay $B\rightarrow J/\psi\,X_s$ to that of the point-like effective neutral current subprocess $b\rightarrow J/\psi\,s$ (see Fig.~1c) in the valence quark approximation one obtains \begin{equation} \langle J/\psi\, X_s|{\cal H}_{ef\hspace{-0.5mm}f}|B\rangle=a_2 V^{ }_{cb} V_{cs}^*M_{\psi}f_{\psi}\frac{G_F}{\sqrt{2}}\varepsilon^*_{\mu} \bar{s}\gamma^{\mu}(1-\gamma_5)b \, . \end{equation} This expression for a free quark decay has to be modified by incorporating bound state corrections due to the strong interaction between heavy and light quark in the initial meson. In the {\it intuitive} parton model one has to take the incoherent sum over all subprocesses in which a distribution function $f(x)$ is introduced as a weight, \begin{equation} \Gamma_B = \int_0^1 dx\, f(x) \Gamma_b(x) \, , \hspace{5mm} \mbox{where} \hspace{5mm}\int_0^1 f(x)\,dx=1 \, . \label{ipm} \end{equation} In Eq.~(\ref{ipm}) transverse momenta of the heavy quark relative to the B meson are neglected which leads to the Lorentz invariant prescription $p_b^\mu = x P_B^\mu$. This approach corresponds to an equal velocity approximation, i.e., valence quarks and $B$ meson have the same velocity, and is valid at the infinite momentum frame. We therefore proceed as usual computing a Lorentz invariant quantity and then use it in any other frame. The basis for our approach is given by a {\it field-theoretical} version of the parton model. If we square the second matrix element on the right-hand side of Eq.~(\ref{mel}) and sum over all final states, which guarantees incoherence, we produce the hadronic tensor corresponding to the effective transition $B\rightarrow X_f$, \begin{equation} W_{\mu\nu}=-\frac{1}{2\pi}\int d^4 y \,e^{iqy}\langle B| \left[ j_\mu(y),\, j_\nu^\dagger (0)\right]|B\rangle \, , \end{equation} where $j_\mu(x)=\,:\hspace{-1mm}\bar{q}_f\gamma_\mu^L b(x)\hspace{-1mm}:$ is the left-handed effective neutral current. The same tensor structure we encountered in the semileptonic $B$ meson decays [11--14]. Substituting for the leptonic tensor the corresponding expression of the $J/\psi$ current, \begin{equation} L_{\mu\nu}(J/\psi)=2\pi^3 |V_{cf}|^2 |a_2|^2 f_\psi^2 M_{\psi}^2 \left(-g_{\mu\nu}+\frac{(k_\psi)_\mu (k_\psi)_\nu}{M_\psi^2}\right)\, , \label{psiten} \end{equation} we can write the differential rate for the decay $B\rightarrow X_f\,J/\psi$ in the restframe of the $B$ in exact analogy to the semileptonic decay, \begin{equation} d\Gamma_{(B\rightarrow X_f\,J/\psi)}=\frac{G_F^2 |V_{cb}|^2}{(2\pi)^5M_B} L_{\mu\nu}W^{\mu\nu}\frac{d^3k_\psi}{2E_\psi}\,. 
\end{equation} The general structure of the hadronic tensor reads \begin{eqnarray} W^{\mu\nu}&=&-g^{\mu\nu}W_1+\frac{1}{M_B^2}P_B^\mu P_B^\nu W_2-i\varepsilon^{ \mu\nu\alpha\beta}\frac{1}{M_B^2}P_{B\alpha}q_\beta W_3 \label{tensor1} \\ && +\frac{1}{M_B^2}q^\mu q^\nu W_4 +\frac{1}{M_B^2}(P_B^\mu q^\nu+P_B^\nu q^\mu)W_5+\frac{1}{M_B^2} i(P_B^\mu q^\nu-P_B^\nu q^\mu)W_6 \,, \nonumber \end{eqnarray} where in our case $q=k_\psi$. Introducing light-cone dominance as in Refs.~[11--15] allows one to relate the tensor corresponding to the transition $B\rightarrow X_f$ in the decay $B\rightarrow X_f\,J/\psi$ to the distribution function $f(x)$ of the heavy quark momentum, \begin{equation} W_{\mu\nu ,f}=4(S_{\mu\rho\nu\lambda}-i\varepsilon_{\mu\rho\nu\lambda}) \int dx\,f(x)P_B^\lambda(xP_B-k_\psi)^\rho \varepsilon[(xP_B-k_\psi)_0] \delta[(xP_B-k_\psi)^2-m_f^2]\, , \label{tensor2} \end{equation} with \begin{equation} S_{\mu\rho\nu\lambda}=g_{\mu\rho}g_{\nu\lambda}-g_{\mu\nu}g_{\rho\lambda} +g_{\mu\lambda}g_{\nu\rho} \hspace{5mm} \mbox{and} \hspace{5mm} \varepsilon (x)=\left\{ \begin{array}{ll} +1\, , \hspace{4mm} & x > 0 \\ -1 \, , & x < 0 \, . \end{array} \right. \end{equation} {}From Eqs.~(\ref{tensor1}) and (\ref{tensor2}) one obtains the structure functions $W_i$ of the hadronic tensor in the restframe of the $B$ meson, which two of us together with Jin derived in Refs.~\cite{Jin1, Jin2} in exact analogy to the semileptonic decay rate, \begin{eqnarray} \begin{array}{lcl} W_1=2[f(x_+)+f(x_-)]\, ,& \hspace{4mm} & \displaystyle W_2=\frac{8}{x_+-x_-}\left[x_+ f(x_+)-x_- f(x_-)\right]\, ,\\[4mm] \displaystyle W_3=W_5=\frac{-4}{x_+-x_-}\left[f(x_+)-f(x_-)\right]\, ,& \hspace{3mm} & W_4=W_6=0\, , \label{wi} \end{array} \end{eqnarray} where we have defined \begin{equation} \hspace*{-2mm} M_B x_\pm=\frac{1}{M_B}\left(P_B k_{\psi}\pm \sqrt{(P_Bk_{\psi})^2+(m_f^2-M_{\psi}^2)M_B^2}\,\right) \stackrel{(|\vec{p}_B|=0)}{=}\left(E_{\psi}\pm\sqrt{ |\vec{k}_{\psi}|^2+m_f^2}\,\right) \, . \end{equation} For the case of a massless final state quark $x_\pm$ are identical to the usual light-cone variables. The dependence of the distribution function on the single scaling variable $x$ is a consequence of the light-cone dominance, since in this framework the structure function $f(x)$ is obtained as the Fourier transform of the reduced bilocal matrix element, which contains the long-distance contributions to the hadronic tensor \cite{Jin3}, \begin{equation} f(x)=\int d(yP_B)e^{ix(yP_B)}\frac{1}{4\pi M_B^2} \langle B|\bar{b}(0)P_B^\mu \gamma_\mu^L b(y)|B\rangle|_{y^2=0} \, . \label{disfunc} \end{equation} The terms proportional to $f(x_-)$ in Eq.~(\ref{wi}) are a result of the field-theoretical approach. The kinematical range for $x_-$ corresponds to a final state quark with negative energy. Therefore the corresponding terms can be associated, formally, with quark pair-creation in the $B$ meson (see Fig.~1b), whereas the dominant terms proportional to $f(x_+)$ reflect the direct decay of Fig.~1a. 
Including the small $f(x_-)$ term as well as the CKM-suppressed transition $c\rightarrow d$, the Lorentz invariant width of the decay $B\rightarrow J/\psi\,X$ reads \begin{eqnarray} E_B\cdot d\Gamma_{(B\rightarrow J/\psi\,X)} &=&\sum_{f=s,\,d}\frac{|C_f|^2}{2\pi^2}\int dx\, f(x) \left[ P_B(xP_B-k_\psi)+\frac{2}{M_\psi^2}(P_Bk_\psi)(xP_B k_\psi-M_\psi^2) \right] \nonumber \\ && \times \varepsilon[(xP_B-k_\psi)_0]\delta^{(1)}\left[(xP_B-k_\psi)^2-m_f^2 \right]\frac{d^3k_\psi}{2 E_\psi} \, \label{diffbr} \end{eqnarray} with \begin{equation} |C_f|=\frac{G_F}{\sqrt{2}}|V_{cb}||V_{cf}|M_{\psi}f_{\psi}|a_2|\,. \label{cf} \end{equation} Here the modification from the field-theoretical approach simply enters in the form of the sign function $\varepsilon(x)$, which exactly provides the additional terms $f(x_-)$ in the tensor coefficients (\ref{wi}). Evaluating Eq.~(\ref{diffbr}) in the restframe of the $B$ meson we arrive at the formula for the $J/\psi$ momentum spectrum, \begin{equation} \frac{d\Gamma_{(B\rightarrow J/\psi\,X)}}{d|\vec{k}_{\psi}|}= \sum_{f=s,\,d}\frac{|C_f|^2}{4\pi M_B}\left[ 3W_1+W_2\frac{|\vec{k}_\psi|^2} {M_\psi^2}\right]\frac{|\vec{k}_\psi|^2}{E_\psi}, \end{equation} within the framework of the inclusive parton model. Using Eq.~(\ref{wi}) the corresponding differential branching ratio can be written as \begin{eqnarray} \lefteqn{\frac{1}{\Gamma_B} \frac{d\Gamma_{(B\rightarrow J/\psi\,X)}}{d|\vec{k}_{\psi}|} \left(|\vec{k}_{\psi}|,\,|\vec{p}_B|=0 \right)= } \label{br}\\ &&\tau_B\sum_{f=s,\,d}\frac{|C_f|^2} {2\pi M_B}\frac{|\vec{k}_{\psi}|^2}{E_{\psi}}\left[f(x_+) \left(1+\frac{2E_{\psi}M_B}{M_{\psi}^2}\left(x_+-\frac{2 m_f^2}{M_B^2 (x_+-x_-)}\right)\right)+(x_+\leftrightarrow x_-)\right] \, , \nonumber \end{eqnarray} within the kinematical range \begin{equation} M_{\psi}\le E_{\psi}\le \frac{M_B^2+M_{\psi}^2-S_{min}^{(f)}}{2M_B} \, , \hspace{1cm} S_{min}^{(f)}=m_f^2 \, . \label{kin} \end{equation} To compare our result (\ref{br}) with data from CLEO a Lorentz boost additionally has to be performed from the restframe of the $B$ meson to a $B$ produced at the $\Upsilon (4S)$ resonance ($|\vec{p}_B|=0.34$ GeV). We give the explicit form of the boost integral in Section \ref{ac-cal} in the context of the ACCMM model (see Eq.~(\ref{brac})).\\ \\ All quantities in this decay are known except for the product $(|a_2|f_\psi)$ and the structure function, which occurs with two arguments. We note that when neglecting the small $f(x_-)$ contribution the decay spectrum is directly proportional to $f(x_+)$. Thus, measuring the momentum spectrum of the $J/\psi$, we can read off the distribution function from the data. The extraction of $f(x_+)$ is direct, in contrast to the extraction from the electron energy spectrum in the semileptonic decay of the $B$ meson which involves an integral over the structure function. In the following Section we compare the predictions of Eq.~(\ref{br}) with the existing experimental data. \subsection{Analysis and Numerical Evaluation \label{subnum1} } The probability distribution function (\ref{disfunc}) is not Lorentz invariant and is defined in the infinite momentum frame. It cannot be transformed to the restframe of the $B$ meson (or a frame which corresponds to a $B$ meson produced at the $\Upsilon (4S)$ resonance), because it involves an infinite sum of quark-antiquark pairs whose calculation requires a complete solution of the field theory. 
In the absence of direct measurements of the distribution function we use a one-parameter Ansatz and fix the distribution parameter by comparing our results with data. Referring to theoretical studies which pointed out that the distribution and fragmentation functions of heavy quarks peak at large values of $x$ \cite{Bjorken, Brodsky}, as a working hypothesis we assume that the functional form of both is similar. The latter is known from experiment and we shall use the Peterson functional form [35--37] \begin{equation} f_\varepsilon(x)=N_\varepsilon \frac{x(1-x)^2}{[(1-x)^2+\varepsilon_p x]^2}\, , \label{peterson} \end{equation} with $\varepsilon_p$ being the free parameter and $N_\varepsilon$ the corresponding normalization constant. We already applied this form in the semileptonic decays of the $B$ meson \cite{Jin1, Jin2}. In Fig.~2a we show the distribution function $f_\varepsilon(x)$ for various values of $\varepsilon_p$. The kinematical range for the two arguments $x_\pm(m_f)$ of the distribution function according to Eq.~(\ref{kin}) reads \begin{equation} \frac{M_{\psi}+m_f}{M_B}\le x_+\le 1\, , \hspace{1cm} \frac{M_{\psi}^2-m_f^2}{M_B^2}\le x_-\le \frac{M_{\psi}-m_f}{M_B} \, . \end{equation} As shown in Fig.~3, the variable $x_-$ only takes values at which $f_\varepsilon (x_-)$ is small. Therefore the corresponding contribution to the differential branching ratio (\ref{br}) is small. We use the distribution (\ref{peterson}) to fit the measured momentum spectrum of the $J/\psi$ which was given by the CLEO group \cite{Cleo1}. As argued above we fix $f_\psi=0.384$ GeV and vary $|a_2|$ according to the range covered by the various models for exclusive decays. The shape of the theoretical spectrum is determined by the distribution parameter $\varepsilon_p$, which is illustrated in Fig.~4 where we show the spectrum for $|a_2|=0.275$ and several values of $\varepsilon_p$. A general feature of the analysis is our difficulty in reproducing the data over the whole range of phase-space. Confronted with this problem, we lay greater emphasis on the appropriate description of the {\it low} momentum range ($|\vec{k}_\psi| \le 1.4$ GeV) in our simultaneous fits of $|a_2|$, which in this decay has the meaning of a pure normalization constant, and the distribution parameter $\varepsilon_p$. Within this region the $J/\psi$ spectrum obtains a sizable contribution from decay channels containing three or more particles in the final state (where higher $K^*$ resonances are assumed to be unimportant), whereas the high momentum range is purely determined by the exclusive two-body decays $B\rightarrow J/\psi\,K^{(*)}$. Therefore incoherence as a necessary ingredient of the IPM is limited to the former region. Furthermore we demand the reproduction of the measured branching ratio for the decay under consideration. Thus if we apply small values for the distribution parameter $\varepsilon_p\le 0.006$ we cannot account for the low momentum region, which is underestimated significantly within the theoretical spectrum (see Fig.~5). On the other hand, demanding the reproduction of the branching ratio, a distribution corresponding to $\varepsilon_p \ge 0.010$ would imply a value $|a_2|\ge 0.30$, which is beyond the range given by the study of exclusive two-body decays. 
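The interplay between the shape parameter $\varepsilon_p$ and the pure normalization $|a_2|$ discussed above can be illustrated numerically. The following minimal sketch (Python; the helper names, the restriction to the dominant $f=s$ channel, and the input values $M_B=5.279$ GeV, $M_\psi=3.097$ GeV, $m_s=0.125$ GeV are our illustrative assumptions) evaluates the shape of Eq.~(\ref{br}) for a $B$ meson at rest with the Peterson Ansatz (\ref{peterson}). The overall factor $\tau_B|C_f|^2/(2\pi M_B)$ and the boost to the $\Upsilon(4S)$ frame are omitted, so the sketch only shows how $\varepsilon_p$ controls the spectrum shape and is not a substitute for the fits described in the text.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

M_B, M_psi, m_f = 5.279, 3.097, 0.125   # GeV; current s-quark mass as in the text

def f_peterson(x, eps):
    # Peterson et al. form, Eq. (peterson), without the normalization N_eps
    return x*(1.0 - x)**2/((1.0 - x)**2 + eps*x)**2

def norm(eps):
    # normalization constant N_eps such that int_0^1 f(x) dx = 1
    return 1.0/quad(f_peterson, 0.0, 1.0, args=(eps,))[0]

def spectrum_shape(k_psi, eps):
    """Shape of Eq. (br) in the B rest frame, f = s only,
    up to the overall factor tau_B |C_f|^2/(2 pi M_B)."""
    E = np.sqrt(k_psi**2 + M_psi**2)
    kf = np.sqrt(k_psi**2 + m_f**2)
    xp, xm = (E + kf)/M_B, (E - kf)/M_B     # light-cone variables x_+ and x_-
    N = norm(eps)
    def term(xa, xb):                       # one term of the bracket; swap for (x_+ <-> x_-)
        if not 0.0 < xa < 1.0:
            return 0.0
        return N*f_peterson(xa, eps)*(1.0 + 2.0*E*M_B/M_psi**2
                                      *(xa - 2.0*m_f**2/(M_B**2*(xa - xb))))
    return k_psi**2/E*(term(xp, xm) + term(xm, xp))

# example: relative shape for two values of the distribution parameter
for eps in (0.006, 0.008):
    ks = np.linspace(0.2, 1.7, 6)
    print(eps, np.round([spectrum_shape(k, eps) for k in ks], 3))
\end{verbatim}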
Therefore, in spite of the difficulty to reproduce the data accurately over the whole range of phase-space, the investigation of the decay $B\rightarrow J/\psi\,X$ is more restrictive with regard to the distribution parameter than the semileptonic decay of the $B$ meson which involves an integral over the structure function. Considering the fact that the IPM does not include hadronization effects and consequently yields an averaged spectrum, a satisfactory description of the dataset is achieved for $\varepsilon_p={\cal O}(0.008)$, $|a_2|={\cal O}(0.285)$ (compare Fig.~5) when using current masses for the final state quarks. Applying constituent masses would involve an incorrect position of the maximum of the momentum spectrum and, in addition, a sizable underestimate of the high momentum range (see Fig.~6). Yet also for $\varepsilon_p=0.008$ there is apparent a moderate systematic underestimate of the latter region. Since the model presumes the existence of multiple final states it may not hold for the high momentum range ($|\vec{k}_\psi| \ge 1.4$ GeV) where two-body decays determine the spectrum. Following the CLEO analysis, as a trial we subtract the expected contribution of the exclusive decays $B\rightarrow K^{(*)}J/\psi$ from the data for the semi-inclusive momentum spectrum. Carrying out the fit for this modified momentum distribution we observe that the agreement between theory and experiment is improved for large values of the distribution parameter $\varepsilon_p\ge 0.012$ (however only when applying constituent masses for the final state quarks), corresponding to a soft momentum distribution of the heavy quark. In Fig.~7 we show the modified spectrum for $|a_2|=0.275$ and $\varepsilon_p$ as stated there. This additional fit may serve as an indication that a value $\varepsilon_p={\cal O}(0.014)$ has to be suggested within the model when restricting the analysis of decay spectra to the `incoherence region' where multi-particle final states exist. Then additional information from a model describing the dominant two-body decay channels has to be implemented to reproduce the data over the whole range of phase-space. Finally, the errors in the data, although substantially improved, are still significant and a crucial test will be possible, when the error bars are further reduced. It is of special interest to establish whether the deviations at $|\vec{k}_\psi|\simeq 0.5$ GeV from the smooth shape of the spectrum, which is predicted in our inclusive approach, survive and to extend the analysis of the composition, in terms of resonances and continuum, for $|\vec{k}_\psi| \ge 1.4$ GeV. \section{\boldmath $B\rightarrow J/\psi\,X$ \unboldmath in the ACCMM Model \label{AC-sec}} \subsection{Calculation of the Differential Branching Ratio \label{ac-cal}} A second approach which allows us to determine the momentum distribution of the $J/\psi$ in the semi-inclusive decay of the $B$ meson is given through the ACCMM model \cite{Alta}. In this model the bound state corrections to the simple quark picture are incorporated by attributing to the spectator quark a Fermi motion within the meson. The momentum spectrum of the $J/\psi$ is then obtained by folding the Fermi motion with the spectrum from the $b$ quark decay. In Ref.~\cite{Barger} the shape of the $J/\psi$ momentum distribution resulting from Fermi momentum smearing has been given in the restframe of the $B$ meson, but without a comparison with data, and without consideration of the light spectator quark mass. 
Moreover the authors applied the constituent mass for the strange quark in the final state, whereas we will show that a satisfactory reproduction of the data can only be obtained when applying the current mass. According to Eq.~(\ref{ci}) the $J/\psi$ is treated as a color singlet current using the factorization assumption of Eq.~(\ref{mel}). The spectator quark is handled as an on-shell particle with definite mass $m_{sp}$ and momentum $|\vec{p\hspace{0.2mm}}|=p$. Consequently, the $b$ quark is considered to be off-shell with a virtual mass $W$ given in the restframe of the $B$ meson by energy-momentum conservation as \begin{equation} W^2(p)=M_B^2+m_{sp}^2-2M_B\sqrt{m_{sp}^2+p^2}\,. \label{vmass} \end{equation} {\it Altarelli et al.} introduced in the model a Gaussian probability distribution $\phi(p)$ for the spectator (and thus for the heavy quark) momentum, \begin{equation} \phi(p)=\frac{4}{\sqrt{\pi}p_f^3}\exp\left(-p^2/p_f^2\right) \, , \label{ac-dist} \end{equation} normalized according to \begin{equation} \int_0^\infty dp\, p^2 \phi(p) = 1\, . \end{equation} Here a free parameter $p_f$ is adopted for the Gaussian width, which has to be determined by experiment. The main difference between the inclusive parton model and ACCMM is that the latter must consider a $b$ quark in flight. We therefore start from the momentum spectrum of the $J/\psi$ resulting from the decay $b\rightarrow q_f\,J/\psi$ $(f=s,\,d)$ of a $b$ quark of mass $W$ and momentum $p$, which is given by \begin{equation} \frac{d\Gamma_b^{(f)}}{d|\vec{k}_{\psi}|}\left(|\vec{k}_{\psi}|,\,p\right)= \gamma_b^{-1} \frac{\Gamma_0^{(f)}}{k_+^{(b,\,f)}(p)-|k_-^{(b,\,f)}(p)|}\left[\theta \left(|\vec{k}_{\psi}|-|k_-^{(b,\,f)}(p)|\right)-\theta \left(|\vec{k}_{\psi}|-k_+^{(b,\,f)}(p)\right)\right] \, . \label{gbp} \end{equation} Here we have defined \begin{equation} \theta (x)=\left\{ \begin{array}{ll} 1, \hspace{4mm} & x > 0 \\ 0, & x < 0 \, . \end{array} \right. \end{equation} $\Gamma_0^{(f)}$ is the width of the analogous decay in the restframe of the heavy quark, where we have confined ourselves to the leading order result, \begin{equation} \Gamma_0^{(f)} = \frac{|C_f|^2}{2\pi}\frac{k_0^{(f)}}{W^2}\left[m_f^2 +\frac{1}{2}\left(W^2-m_f^2 -M_{\psi}^2\right)\left(2+\frac{W^2-m_f^2}{M_{\psi}^2}\right)\right] \, , \label{g0} \end{equation} with $|C_f|$ according to Eq.~(\ref{cf}) and with the momentum $k_0^{(f)}$ of the $J/\psi$, \begin{equation} k_0^{(f)}=\frac{1}{2W}\left[\left(W^2-m_f^2+M_{\psi}^2\right)^2-4W^2 M_{\psi}^2\right]^{\frac{1}{2}} \;, \hspace{5mm} E_0^{(f)} = \sqrt{k_0^{(f)\,2}+M_{\psi}^2} \, . \end{equation} For a vanishing mass $m_f$ of the final quark Eq.~(\ref{g0}) is equivalent to Eq.~(5) in Ref.~\cite{Wise}. In Eq.~(\ref{gbp}) $k^{(b,\,f)}_\pm$ give the limits of the momentum range which results from the Lorentz boost from the restframe of the $b$ quark to a frame where the $b$ has a nonvanishing momentum $p$, \begin{equation} k^{(b,\,f)}_{\pm}(p)=\frac{1}{W}\left(E_b k_0^{(f)}\pm p E_0^{(f)} \right)\, , \end{equation} and $\gamma_b^{-1}$ is the corresponding Lorentz factor, \begin{equation} \gamma_b^{-1}=\frac{W}{E_b}\;, \hspace{1cm} E_b=\sqrt{W^2+p^2}\,. \end{equation} To calculate the momentum spectrum of the $J/\psi$ from the semi-inclusive decay of the $B$ meson one has to fold the heavy quark momentum probability distribution with the spectrum (\ref{gbp}) resulting from the quark subprocess; the ingredients of this folding are illustrated in the short numerical sketch below. 
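To make the ingredients of this folding explicit, the following short numerical sketch (Python; the helper names and the input values $M_B=5.279$ GeV, $M_\psi=3.097$ GeV, $m_{sp}=0.2$ GeV, $m_s=0.125$ GeV are our illustrative assumptions) evaluates the virtual mass $W(p)$ of Eq.~(\ref{vmass}), the Fermi distribution $\phi(p)$ of Eq.~(\ref{ac-dist}), the quark-level width $\Gamma_0^{(f)}$ of Eq.~(\ref{g0}) up to the common factor $|C_f|^2/(2\pi)$, and the Doppler-broadened momentum window $k_\pm^{(b,\,f)}(p)$, for the dominant $f=s$ channel.
\begin{verbatim}
import numpy as np

M_B, M_psi = 5.279, 3.097          # GeV
m_sp, m_f  = 0.2, 0.125            # spectator and current s-quark masses (GeV)

def W(p):
    # virtual b-quark mass, Eq. (vmass)
    return np.sqrt(M_B**2 + m_sp**2 - 2.0*M_B*np.sqrt(m_sp**2 + p**2))

def phi(p, p_f):
    # Gaussian Fermi-motion distribution, Eq. (ac-dist)
    return 4.0/(np.sqrt(np.pi)*p_f**3)*np.exp(-p**2/p_f**2)

def k0_E0(Wp):
    # J/psi momentum and energy in the rest frame of the decaying b quark
    k0 = np.sqrt((Wp**2 - m_f**2 + M_psi**2)**2 - 4.0*Wp**2*M_psi**2)/(2.0*Wp)
    return k0, np.sqrt(k0**2 + M_psi**2)

def Gamma0(Wp):
    # Eq. (g0) without the overall factor |C_f|^2/(2 pi)
    k0, _ = k0_E0(Wp)
    return k0/Wp**2*(m_f**2 + 0.5*(Wp**2 - m_f**2 - M_psi**2)
                     *(2.0 + (Wp**2 - m_f**2)/M_psi**2))

def k_window(p):
    # J/psi momentum range k_-^(b,f) ... k_+^(b,f) for a b quark in flight
    Wp = W(p)
    k0, E0 = k0_E0(Wp)
    Eb = np.sqrt(Wp**2 + p**2)
    return (Eb*k0 - p*E0)/Wp, (Eb*k0 + p*E0)/Wp

p_f = 0.55   # representative value of the Fermi parameter (GeV)
for p in (0.0, 0.3, 0.6):
    lo, hi = k_window(p)
    print(p, W(p), lo, hi, p**2*phi(p, p_f), Gamma0(W(p)))
\end{verbatim}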
Performing this we finally arrive at the expression for the differential branching ratio for a $B$ meson in flight, \begin{eqnarray} \lefteqn{\hspace*{-1cm}\frac{1}{\Gamma_B} \frac{d\Gamma_{(B\rightarrow J/\psi\,X)}}{d|\vec{k}_{\psi}|} \left(|\vec{k}_{\psi}|, \, |\vec{p}_B|\right)= }\label{brac}\\ && \tau_B\sum_{f=s,\,d}\hspace{2mm} \int\limits_{|k_-(|\vec{k}_{\psi}|)|}^{k^{(f)}_+(|\vec{k}_{\psi}|)}\; \frac{d|\vec{k}_{\psi}'|}{k_+(|\vec{k}_{\psi}'|)-|k_-(|\vec{k}_{\psi}'|)|} \int\limits_0^{p_{max}^{(f)}}dp\;p^2\phi(p) \frac{d\Gamma_b^{(f)}}{d|\vec{k}_{\psi}|}\left(|\vec{k}_{\psi}'|,\,p\right) \, . \nonumber \end{eqnarray} Here $p_{max}^{(f)}$ is the maximum kinematically allowed value of the quark momentum $p$, i.e., that which makes $W$ in Eq.~(\ref{vmass}) equal to $W=m_f+M_{\psi}$, \begin{equation} p_{max}^{(f)}=\frac{1}{2M_B}\left[(M_B^2+m_{sp}^2-(m_f+M_{\psi})^2)^2 -4m_{sp}^2M_B^2 \right]^\frac{1}{2} \, . \end{equation} The first integration in Eq.~(\ref{brac}) results from the transformation from the spectrum for a $B$ meson at rest to the spectrum for a $B$ meson in flight, where \begin{equation} k_{\pm}(|\vec{k}_{\psi}|)=\frac{1}{M_B}\left(E_B |\vec{k}_{\psi}|\pm |\vec{p}_B| E_{\psi}\right) \, , \hspace{5mm} k_+^{(f)}(|\vec{k}_{\psi}|)=\mbox{min}\{k_+(|\vec{k}_\psi|),\, k_{max}^{(f)}\}\, , \end{equation} $k_{max}^{(f)}$ being the maximum value of the $J/\psi$ momentum from the decay $B\rightarrow J/\psi\,X_f$ in the restframe of the $B$, \begin{equation} k_{max}^{(f)}=\frac{1}{2 M_B}\left[\left(M_B^2+M_\psi^2 -S_{min}^{(f)}\right)^2-4 M_B^2 M_\psi^2\right]^{\frac{1}{2}}\, ,\hspace{5mm} S_{min}^{(f)}=(m_f+m_{sp})^2\, . \label{lim} \end{equation} In the following Section we make use of Eq.~(\ref{brac}) to compare the model predictions with experimental data. \subsection{Analysis and Numerical Evaluation \label{ac-an}} Both models, the inclusive parton model as well as the ACCMM model incorporate the bound state structure of the $B$ meson by postulating a momentum distribution for the heavy quark. Introducing in the latter model another $x$-variable as the ratio $x=W/M_B$, the appropriate distribution function for the relative mass fraction $w(x)$ of the $b$ quark in the restframe of the $B$ meson is given as \begin{eqnarray} w(x) & = & \frac{2M_B^2}{\sqrt{\pi}p_f^3}\,p(x)\,x\left(1-x^2+\frac{m_{sp}^2} {M_B^2}\right)\exp\left[-p(x)^2/p_f^2\right] \, , \nonumber \\[1mm] p(x) & = & \frac{M_B}{2}\left[(1-x^2)^2-2\frac{m_{sp}^2}{M_B^2}(1+x^2)+ \frac{m_{sp}^4}{M_B^4}\right]^{\frac{1}{2}} \, , \label{wx} \end{eqnarray} where the corresponding normalization reads \begin{equation} \int_0^{1-m_{sp}/M_B}w(x)\,dx=1 \, . \end{equation} In Fig.~2b we show the distribution function $w(x)$ for $m_{sp}=0.2$ GeV and various values of $p_f$. From Eqs.~(\ref{wx}) one realizes that for given on-shell masses the Fermi parameter $p_f$ determines the average value $\langle W \rangle=M_B \langle x \rangle$ of the virtual $b$ quark mass and hence also the total decay width. Both models have the advantage of avoiding the mass of the heavy quark as an independent parameter. As a consequence, the phase-space is treated correctly by means of using mesonic degrees of freedom. Nevertheless the endpoint of the phase-space in the ACCMM model is slightly different from that in the IPM because of the on-shell mass $m_{sp}$ of the spectator quark from which the minimal invariant mass square of the hadronic system $S_{min}^{(f)}$ results as given in Eq.~(\ref{lim}). 
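As a numerical illustration of the statement that $p_f$ fixes the average virtual mass $\langle W\rangle = M_B\langle x\rangle$, the short sketch below (Python, with the same assumed masses as above) checks the normalization of Eq.~(\ref{wx}) and evaluates $\langle W\rangle$ for two representative values of $p_f$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

M_B, m_sp = 5.279, 0.2   # GeV

def w(x, p_f):
    # distribution of the relative mass fraction x = W/M_B, Eq. (wx);
    # np.maximum guards against tiny negative round-off under the square root
    p = M_B/2.0*np.sqrt(np.maximum((1.0 - x**2)**2
                                   - 2.0*m_sp**2/M_B**2*(1.0 + x**2)
                                   + m_sp**4/M_B**4, 0.0))
    return (2.0*M_B**2/(np.sqrt(np.pi)*p_f**3)*p*x
            *(1.0 - x**2 + m_sp**2/M_B**2)*np.exp(-p**2/p_f**2))

x_max = 1.0 - m_sp/M_B
for p_f in (0.3, 0.55):
    norm  = quad(w, 0.0, x_max, args=(p_f,))[0]
    x_avg = quad(lambda x: x*w(x, p_f), 0.0, x_max)[0]
    print(p_f, norm, M_B*x_avg)   # norm close to 1; <W> decreases as p_f grows
\end{verbatim}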
The shape of the momentum spectrum in the decay $B\rightarrow J/\psi X$ is mainly determined by the value of $p_f$. The extraction of this value from a comparison with experimental data is important not only for explaining the decay itself and thus testing the factorization assumption of Eq.~(\ref{mel}), but also for the determination of $|V_{ub}/V_{cb}|$ from the endpoint region of the inclusive {\it semileptonic} $B\rightarrow X_{c(u)}l\nu$ decay spectrum. As recently stressed in Ref.~\cite{Kim} the experimental extraction of $p_f$ from semileptonic decays has been ambiguous, because various parameters of the model were fitted simultaneously to the lepton energy spectrum where in addition the perturbative QCD corrections are important in the endpoint region (especially for $b\rightarrow u$). Furthermore, as stated in Ref.~\cite{Lisa}, for a large range of the phase-space the bound state corrections are of minor importance in this decay channel. Whereas usually $p_f=0.3$ GeV is used for the data analysis the authors of Ref.~\cite{Kim} calculate the Fermi parameter theoretically in the relativistic quark model and obtain $p_f=0.54$ GeV. A CLEO analysis \cite{Cleo4} of the endpoint lepton spectrum in the semileptonic decay channel employing the common value $p_f=0.3$ GeV (and $m_{sp}=0.15$ GeV) yields a discrepancy between the ACCMM and the ISGW model of {\it Isgur et al.} \cite{ISGW} in the determination of $|V_{ub}/V_{cb}|$, \begin{eqnarray} 10^2\times |V_{ub}/V_{cb}|^2 & = & 0.57\pm 0.11 \hspace{1cm}(\mbox{ACCMM}) \nonumber \\ & = & 1.02\pm 0.20 \hspace{1cm} (\mbox{\it Isgur et al.}) \, . \end{eqnarray} Using however $p_f=0.5$ GeV the authors of Ref.~\cite{Kim} arrive at the value $10^2\times |V_{ub}/V_{cb}|^2=1.03$ within the ACCMM model, obtaining a good agreement with the ISGW model. The decay $B\rightarrow J/\psi\,X$ provides an independent way of extracting the Fermi parameter $p_f$. In contrast to the lepton spectrum in the semileptonic decay the shape of the $J/\psi$ momentum spectrum is highly sensitive to this parameter over the whole phase-space. This is analogous to the determination of the parameter $\varepsilon_p$ in Section \ref{subnum1}$\,$. \\ \\ Employing the spectator distribution function (\ref{ac-dist}) we calculated the momentum spectrum for the decay of $B$ mesons produced at the $\Upsilon(4s)$ resonance. Fig.~8 shows the comparison with the CLEO data for $f_\psi=0.384$ GeV, $m_{sp}=0.2$ GeV, $m_s=0.125$ GeV, $m_d=0$ and various sets of values for $p_f$ and $|a_2|$ as stated there. Here we want to emphasize that analogous to the IPM the best fit is achieved when employing current masses for the final state quarks. This is a consequence of the fact that within the theory these are treated as free particles without consideration of fragmentation. The application of constituent masses would yield a sizable underestimate of the high momentum region (see Fig.~9). In addition to the final state quark masses $m_f$ the spectator mass also determines the range of phase-space and the position of the maximum (compare Fig.~10). The latter is in accordance with the data for $m_{sp}=0.2$ GeV. In Fig.~8 one can see that the agreement of the theoretical spectrum with the data is very good provided we choose the value $p_f={\cal O}(0.55$ GeV). To demonstrate the high sensitivity of the spectrum on this parameter over the whole range of phase-space we present in Fig.~11 spectra for different values of $p_f$ with $f_\psi$ and $|a_2|$ fixed. 
It is obvious that the shape of the experimental $J/\psi$ momentum distribution (compare Fig.~8) cannot be reproduced in the model when using values for the Fermi motion parameter significantly smaller than $p_f=0.5$ GeV. Especially the sizable contribution in the low momentum range requires a soft probability distribution of the heavy quark momentum. As a result, we conclude that the value $p_f={\cal O}(0.55$ GeV) for the Fermi motion parameter of the ACCMM model is highly favoured when applying this model to the semi-inclusive decay $B\rightarrow J/\psi\,X$. This value is in exact accordance with the one calculated recently in Ref.~\cite{Kim} from the relativistic quark model. As mentioned above the discrepancy between the ACCMM model and the ISGW model concerning the determination of $|V_{ub}/V_{cb}|$ disappears when using our favoured value for $p_f$ in the study of the electron spectrum from inclusive semileptonic $B$ decays. A final feature of our investigations is related to the determination of the effective color singlet coefficient $|a_2|$. Using $f_\psi=0.384$ GeV, $\tau_B=1.54$ ps, and $|V_{cb}|=0.043$, we obtain $|a_2|={\cal O}(0.28)$ for the best fit of the ACCMM model to the CLEO data. This is in agreement with the value of $|a_2|$ which we found in the IPM. Compared to exclusive decays, the study of an inclusive decay in this context has the advantage that the analysis is independent of mesonic form factors. \section{Remarks on the Extraction of \boldmath $|V_{ub}/V_{cb}|$ \unboldmath \label{remarks} } The application of the value for $p_f$, which we obtain from our investigation of the decay $B\rightarrow J/\psi\,X$, to the semileptonic decay channel requires some additional remarks. As pointed out in Ref.~\cite{Bigi5} the universality of the momentum distribution function describing the motion of the $b$ quark in the $B$ meson only holds for different decay processes when referring to a fixed final state quark mass. The common procedure, followed in the analysis of the semileptonic decay $B\rightarrow Xl\nu$, is to determine the distribution parameter from a fit of the lepton spectrum away from the endpoint where the spectrum is dominated by $b\rightarrow c$ transitions. This result is then used to model the endpoint region, which purely originates from $b\rightarrow u$ transitions, in order to extract the value of $|V_{ub}/V_{cb}|$ from the data. Considering, however, the fact that the decay $b\rightarrow c$ involves a quark mass $m_c$, which cannot be neglected, the corresponding distribution parameter might be unsuitable to describe simultaneously the transition $b\rightarrow u$. On the other hand, the independent determination of the heavy quark momentum distribution from the entire $J/\psi$ spectrum of the $B\rightarrow J/\psi\,X$ decay involves the effective transition $b\rightarrow s$ (and also the Cabibbo suppressed decay $b\rightarrow d$), i.e., a small (current) mass for the final state. Therefore the corresponding distribution parameter is still appropriate for the extraction of $|V_{ub}/V_{cb}|$, which requires the distribution function associated with the transition to a massless final state. The CLEO analysis \cite{Cleo4} was performed within the ACCMM model using the value $p_f=0.3$ GeV for both $b\rightarrow c$ and $b\rightarrow u$ decays. 
The modification which arises when keeping this value for the transition $b\rightarrow c$ but applying $p_f=0.5$ GeV for $b \rightarrow u$ reads \begin{equation} \left|\frac{V_{ub}}{V_{cb}}\right|^2_{p_f=0.5} =\left|\frac{V_{ub}}{V_{cb}}\right|^2_{p_f=0.3} \times \frac{\tilde{\Gamma}(p_f=0.3)}{\tilde{\Gamma}(p_f=0.5)}\, , \label{Vub} \end{equation} where $\tilde{\Gamma}(p_f)\equiv \int_{2.3}^{2.6} dE_l\frac{d\Gamma} {dE_l}(p_f)$ denotes the integration over the endpoint domain of the leptonic spectrum assuming $|V_{ub}|=1$. This is exactly the same relation which has been used in Ref.~\cite{Kim} and we quote the corresponding value for $|V_{ub}/V_{cb}|^2$ in Section \ref{ac-an}$\,$. Nevertheless the physical interpretation is somewhat different. The authors of Ref.~\cite{Kim} took the value of $|V_{cb}|$ as determined independently from other analyses. This appeared to be necessary, because they did not consider the limitations to the universality of the distribution function arising from different masses of the final state quark. In contrast to this we argue that Eq.~(\ref{Vub}) can be applied directly in the investigation of the semileptonic spectrum, since only the distribution parameter governing the endpoint region has to be changed relative to the existing CLEO analysis \cite{Cleo4}. Within this analysis the application of the ACCMM model to $b\rightarrow c$ transitions has to be regarded as a phenomenological fit, because from a theoretical point of view the neglect of Fermi motion in the final state is only appropriate when small final state quark masses are involved \cite{Bigi5}. The latter remark illustrates the importance of a study which is independent of the semileptonic data. Our investigation of the decay $B\rightarrow J/\psi\,X$ allows a suitable determination of the parameters of the ACCMM model which can be used for the analysis of the endpoint spectrum in the decay $B\rightarrow Xl\nu$. The same statement holds true for our analysis within the parton model. \section{Polarization of the \boldmath $J/\psi$ \unboldmath \label{polarization}} Applying the same method as for the calculation of the unpolarized $J/\psi$ spectrum, we proceed to determine the momentum spectrum of a longitudinally polarized state. In Refs.~\cite{Kuehn2} and \cite{Pal-St} the polarization of the $J/\psi$ was obtained from a study of the free quark decay $b\rightarrow J/\psi\, s$, i.e., for fixed momenta in the final state. The result, $\Gamma_L/\Gamma \simeq 0.54$, was identified with the average polarization in the corresponding decay of the $B$ meson. Considering the bound state structure of the $B$ within our inclusive approach yields the momentum distribution of the $J/\psi$. Consequently, it allows us to determine the polarization in various kinematic regions, where the results can be compared with measurements from CLEO \cite{CLpol} and ARGUS \cite{ARpol}. Since our model is based on local quark-hadron duality, we do not expect to reproduce the polarization within the region $|\vec{k}_\psi|\ge 1.4$ GeV, because it is governed by the two-body decay modes $B\rightarrow J/\psi K^{(*)}$. As can be concluded from the ARGUS data (see Tab.~1), after subtraction of the exclusive mode $B\rightarrow J/\psi K$ the high momentum range of the decay spectrum is dominated by a single orbital angular momentum and consequently by a single $CP$ eigenstate of the mode $B\rightarrow J/\psi K^*$ \cite{Hon}. 
This feature is not accounted for within our inclusive approach since final state interactions are not part of our consideration. Nevertheless the study of the low momentum range, which involves nonresonant multi-particle states, provides an additional test of our approach, especially in view of future measurements with reduced experimental errors. Furthermore we investigate the modification of the average polarization relative to the free quark decay due to the bound state structure of the $B$, in which the hadronic vertex acts on the polarization of the $J/\psi$ through the momentum distribution of the underlying $b$ quark.\\ \\ Replacing the polarization sum in Eq.~(\ref{psiten}) by $\varepsilon_\mu (${\small $\lambda =0$})$\varepsilon_\nu (${\small $\lambda =0$}), a calculation analogous to the determination of the unpolarized $B\rightarrow J/\psi\,X$ decay spectrum within the parton model, introduced in Section \ref{IPM-sec}, yields the momentum spectrum for the longitudinally polarized $J/\psi$. In the restframe of the $B$, \begin{eqnarray} \lefteqn{\frac{1}{\Gamma_B} \frac{d\Gamma_{(B\rightarrow (J/\psi)_{L}\,X)}}{d|\vec{k}_{\psi}|} \left(|\vec{k}_{\psi}|,\,|\vec{p}_B|=0 \right)= } \label{polsp}\\ &&\tau_B\sum_{f=s,\,d}\frac{|C_f|^2} {2\pi M_B}\frac{|\vec{k}_{\psi}|^2}{E_{\psi}}\left[f(x_+) \left(-1+\frac{2E_{\psi}M_B}{M_{\psi}^2}\left(x_+-\frac{2 m_f^2}{M_B^2 (x_+-x_-)}\right)\right)+(x_+\leftrightarrow x_-)\right] \, , \nonumber \end{eqnarray} with $|C_f|$ defined according to Eq.~(\ref{cf}). Note that the difference to the unpolarized spectrum occurs in the first term in the parenthesis of Eq.~(\ref{polsp}), whose sign changes compared to Eq.~(\ref{br}). The partial branching ratio in the CLEO frame is obtained by performing the integration due to the transformation of the spectrum to that for a $B$ meson in flight (see Eq.~(\ref{brac})), as well as the integration over the momentum range which is considered. The calculation in the ACCMM model follows the lines of Section \ref{AC-sec}, where for the polarized case Eq.~(\ref{g0}) is replaced by the corresponding expression for the decay width of $b\rightarrow (J/\psi)_L\,q_f$. In the restframe of the heavy quark one obtains \begin{equation} \Gamma_{0,\,L}^{(f)}\, =\, \frac{|C_f|^2}{2\pi}\frac{k_0^{(f)}}{W^2}\left[-m_f^2 +\frac{1}{2}\left(W^2-m_f^2 -M_{\psi}^2\right)\left(\frac{W^2-m_f^2}{M_{\psi}^2}\right)\right] \, . \label{g0pol} \end{equation} The partial branching ratio for the momentum range $k_1\le |\vec{k}_\psi|\le k_2$ then reads \begin{eqnarray} \lefteqn{\hspace*{-5mm}\Delta B_{[L]}(k_1,\,k_2)=}\label{parpol}\\ & & \tau_B\sum_{f=s,\,d}\hspace*{2mm} \int\limits_{k_1}^{k_2} d|\vec{k}_\psi| \int\limits_0^{p_{max}^{(f)}}dp \; \frac{p^2\phi(p)\gamma_b^{-1} \Gamma_{0\,[L]}^{(f)}}{k_+^{(b,\,f)}(p)-|k_-^{(b,\,f)}(p)|} \int\limits_{g_1(|\vec{k}_\psi|,\,p)}^{g_2(|\vec{k}_\psi|,\,p)}\; \frac{d|\vec{k}_{\psi}'|}{k_+(|\vec{k}_{\psi}'|)-|k_-(|\vec{k}_{\psi}'|)|} \, , \nonumber \end{eqnarray} where \begin{eqnarray} g_1(|\vec{k}_\psi|,\,p)&=&\mbox{max}\left\{|k_-(|\vec{k}_\psi|)|,\, |k_-^{(b,\,f)}(p)|\right\} h(|\vec{k}_\psi|,\,p) \, , \\ g_2(|\vec{k}_\psi|,\,p)&=&\mbox{min\hspace{1mm}} \left\{\hspace*{1.1mm}k_+(|\vec{k}_\psi|)\hspace{1.1mm},\, \hspace{1.1mm}k_+^{(b,\,f)}(p)\hspace{1.1mm}\right\} h(|\vec{k}_\psi|,\,p) \, , \end{eqnarray} and \begin{equation} \label{hlim} h(|\vec{k}_\psi|,\,p)= \theta\left[k_+^{(f)}(|\vec{k}_\psi|)-|k_-^{(b,\,f)}(p)|\right] \theta\left[k_+^{(b,\,f)}(p)-|k_-(|\vec{k}_\psi|)|\right]\, . 
\end{equation} In Eqs.~(\ref{parpol}-\ref{hlim}) the limits of integration are written in a way as to take into account simultaneously the Lorentz boost from the $b$ to the $B$ restframe and the subsequent boost to the CLEO frame. Our results for the average longitudinal polarization of the $J/\psi$, $\Gamma_L/\Gamma =\Delta B_L/\Delta B$, are presented in Tab.~1. There we give the polarization obtained in the parton and the AC\nolinebreak CMM model for different values of the quark momentum distribution parameters $\varepsilon_p$ and $p_f$, respectively. The $J/\psi$ momentum ranges that we considered are chosen according to the experimental data from ARGUS and CLEO. \begin{table}[thb] \begin{center} \begin{tabular}{|ccccc|} \hline \hline $J/\psi$ momentum & \multicolumn{2}{c}{\hspace*{2.3cm}CLEO II \cite{CLpol}} & \multicolumn{2}{c|}{ARGUS \cite{ARpol}} \\ \hline $k_{\psi}< 0.8$ GeV & \multicolumn{2}{c}{\hspace*{2.3cm}$0.55 \pm 0.35$} & \multicolumn{2}{c|}{} \\ 0.8 GeV $<k_{\psi}< 1.4$ GeV & \multicolumn{2}{c}{\hspace*{2.3cm}$0.49 \pm 0.32$} & \multicolumn{2}{c|}{} \\ 1.4 GeV $<k_{\psi}< 2.0$ GeV & \multicolumn{2}{c}{\hspace*{2.3cm}$0.78 \pm 0.17$} & \multicolumn{2}{c|}{$1.17 \pm 0.17$}\\ all $k_{\psi}< 2.0$ GeV & \multicolumn{2}{c}{\hspace*{2.3cm}$0.59 \pm 0.15$} & \multicolumn{2}{c|}{} \\ \hline \multicolumn{5}{c}{}\\[-4mm] \hline \multicolumn{5}{|c|}{\large parton model \rule[-3mm]{0mm}{9mm}}\\ \hline $J/\psi$ momentum & $\varepsilon_p=0.004$ & $\varepsilon_p=0.006$ & $\varepsilon_p=0.008$ & $\varepsilon_p=0.010$ \\ \hline $k_{\psi}< 0.8$ GeV & 0.416 & 0.415 & 0.414 & 0.413 \\ 0.8 GeV$<k_{\psi}< 1.4$ GeV & 0.520 & 0.515 & 0.512 & 0.508 \\ 1.4 GeV$<k_{\psi}< 2.0$ GeV & 0.557 & 0.552 & 0.547 & 0.543 \\ all $k_{\psi}< 2.0$ GeV & 0.537 & 0.529 & 0.522 & 0.516 \\ \hline \multicolumn{5}{c}{}\\[-4mm] \hline \multicolumn{5}{|c|}{\large ACCMM model \rule[-3mm]{0mm}{9mm}}\\ \hline $J/\psi$ momentum & $p_f=0.3$ \hspace*{-2mm}& $p_f=0.4$ \hspace*{-2mm}& $p_f=0.5$ \hspace*{-2mm}& $p_f=0.55$ \hspace*{-2mm}\\ \hline $k_{\psi}< 0.8$ GeV &0.515 &0.504 &0.495 &0.491\\ 0.8 GeV$<k_{\psi}< 1.4$ GeV &0.548 &0.538 &0.530 &0.527\\ 1.4 GeV$<k_{\psi}< 2.0$ GeV &0.556 &0.549 &0.542 &0.539\\ all $k_{\psi}< 2.0$ GeV &0.553 &0.543 &0.534 &0.529\\ \hline \hline \end{tabular} \caption{$J/\psi$ polarization $\Gamma_L/\Gamma $ in the decay$B\rightarrow J/\psi X$ in the parton and the ACCMM model compared to data. The parameters are $m_{sp}=0.2$ GeV, $m_s=0.125$ GeV, $m_d=0$, and $\varepsilon_p$, $p_f[\mbox{GeV}]$ respectively, as given above.} \end{center} \end{table} {}From the table one can read that in both models the data is reproduced correctly for the range $|\vec{k}_\psi| \le 1.4$ GeV and also for the average over the complete momentum range. The agreement is good, but the experimental errors are still large. On the other hand, the polarization predicted for the high momentum region underestimates the data. This is to be expected because, as we mentioned, the region $|\vec{k}_\psi|>1.4$ GeV is dominated by two-body modes. Significant differences between the parton and the ACCMM model show up only for the low momentum range ($|\vec{k}_\psi| \le 0.8$ GeV), which may provide an additional feature for distinguishing between the models. The bound state corrections to the average polarization of the $J/\psi$ (for all $|\vec{k}_\psi| \le 2.0$ GeV) are marginal; a property which is reflected in the weak dependence of the polarization on the distribution parameters. 
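As a cross-check of this weak parameter dependence, the free quark decay polarization can be evaluated directly from Eqs.~(\ref{g0}) and (\ref{g0pol}); the short sketch below (Python, with an assumed representative virtual mass $W\simeq m_b\simeq 4.8$ GeV and the current $s$-quark mass, both of which are our illustrative choices) reproduces the value $\Gamma_L/\Gamma\simeq 0.54$ of Refs.~\cite{Kuehn2, Pal-St}, around which the model results in the table indeed cluster.
\begin{verbatim}
import numpy as np

M_psi, m_f, W = 3.097, 0.125, 4.8   # GeV; W ~ m_b is an assumed representative value

k0 = np.sqrt((W**2 - m_f**2 + M_psi**2)**2 - 4.0*W**2*M_psi**2)/(2.0*W)

# total and longitudinal widths, Eqs. (g0) and (g0pol), without |C_f|^2/(2 pi)
G_tot = k0/W**2*( m_f**2 + 0.5*(W**2 - m_f**2 - M_psi**2)
                 *(2.0 + (W**2 - m_f**2)/M_psi**2))
G_L   = k0/W**2*(-m_f**2 + 0.5*(W**2 - m_f**2 - M_psi**2)
                 *((W**2 - m_f**2)/M_psi**2))

print(G_L/G_tot)   # roughly 0.54 for W in the range 4.7-4.9 GeV
\end{verbatim}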
As a consequence of the cancellation of binding effects in the ratio $\Delta B_L/\Delta B$, the determination of $\varepsilon_p$ and $p_f$, respectively, from the polarization turns out to be impossible. We conclude that the inclusive approach to $B\rightarrow J/\psi\,X$ decays in the parton and the ACCMM model yields the polarization of the $J/\psi$ in various kinematic regions. Our analysis shows that for the high momentum range, in which final state interactions are important, the approach is less reliable in the polarized than in the unpolarized case. On the other hand, the inclusive description is well suited in the low momentum range governed by nonresonant multi-particle states. A study of the polarization in this range may help to distinguish between the models considered, as soon as the experimental error is reduced. \section{Summary and Conclusion} \label{summary} We incorporate in this article the bound state effects of the $B$ meson in the analysis of the direct decay $B\rightarrow J/\psi\,X$. An inclusive approach has been worked out in detail within the framework of the parton and the ACCMM model where in each case a one-parameter momentum distribution function for the heavy quark is introduced. We fixed the distribution parameter by comparing the predicted $J/\psi$ momentum spectrum with the recent CLEO data, putting emphasis on the adequate reproduction of the sizable low momentum spectrum, which contains nonresonant multi-particle final states. Both models yield a $b$ quark momentum distribution softer than the one commonly used in studies of semileptonic $B$ decays. Within the parton model we obtain $\varepsilon_p={\cal O}(0.008)$, where moderate deviations from the Peterson {\it et al.} distribution, taken from production experiments, are apparent in the data. In further studies of inclusive $B$ decays within the parton model it will be of interest to consider a modified Ansatz for the $b$ quark momentum distribution. The ACCMM model can account for the data over the whole range of phase-space when we use a Fermi motion parameter $p_f={\cal O}(0.5$ GeV). Applying this result to the endpoint electron energy spectrum of semileptonic $B$ meson decays yields a value $10^2\times |V_{ub}/V_{cb}|^2=1.03$, which is in accordance with the result obtained within the exclusive ISGW model. The successful reproduction of the experimental data confirms the validity of the factorization assumption for inclusive $B$ decays, where the bound state effects in both models imply a large value for the effective color singlet coefficient, $|a_2|={\cal O}(0.28)$ (when choosing $\tau_B=1.54$ ps, $|V_{cb}|=0.043$). The inclusive approach can also account for the measured average polarization of the $J/\psi$, which is independent of the normalization constants $(|a_2|f_\psi)$ and $\tau_B$. In view of the fact that semileptonic decay spectra away from the endpoint of the phase-space involve $b\rightarrow c$ transitions, a study of the decay $B\rightarrow J/\psi\,X$ presently provides the only possibility to extract the momentum distribution of the $b$ quark corresponding to a decay in which the mass of the quark in the final state is negligible. This extraction is direct as no integration over the distribution function has to be performed. Furthermore the momentum spectrum of the $J/\psi$ is more sensitive to the bound state structure of the $B$ than the electron energy spectrum from semileptonic decays. 
The difficulties which arise in the study of a momentum spectrum containing resonance structures can be avoided as soon as precise measurements are available which allow a determination of the distribution function from the endpoint domain of the electron energy spectrum in semileptonic decays or the photon spectrum in $b\rightarrow s\,\gamma$ decays. \vspace*{3cm}\\ \centerline{ \large Acknowledgements } \\[0.5cm] PHS wants to thank the {\it Deutsche Forschungsgemeinschaft} for financial support (in connection with the Graduate College for Elementary Particle Physics in Dortmund).\\ \newpage
\section{Introduction} Inflation gives an elegant explanation for observations of the early universe and is a part of the standard model of cosmology. However, what causes inflation has not been revealed and various models of inflation have been proposed. Among these models, the axion inflation is well motivated, because its shift symmetry ensures the flatness of the inflaton potential, which is required for a sufficient duration of inflation~\cite{Freese:1990rb,Pajer:2013fsa}. To reheat the universe after inflation, the inflaton needs to be coupled with other fields. Since the \ac{CS} coupling respects the shift symmetry, gauge fields coupled to the inflaton through the CS coupling are often considered and their rich phenomenology has been intensively studied, such as baryogenesis~\cite{Anber:2015yca,Fujita:2016igl,Jimenez:2017cdr,Domcke:2019mnd}, leptogenesis~\cite{Domcke:2020quw}, magnetogenesis~\cite{Turner:1987bw, Garretson:1992vt, Anber:2006xt, Fujita:2015iga, Adshead:2016iae, Cuissa:2018oiw}, the standard model particle production via the Schwinger effect~\cite{Domcke:2018eki,Domcke:2019qmm,Gorbar:2021rlt,Gorbar:2021zlr,Fujita:2022fwc}, and the chiral gravitational wave production~\cite{Cook:2011hg,Barnaby:2011qe,Anber:2012du,Namba:2015gja,Domcke:2016bkh,Adshead:2019igv}. A $\mathrm{U}(1)$ gauge field can be generated through the CS coupling during axion inflation. The mode function of the $\mathrm{U}(1)$ gauge field is amplified due to a tachyonic instability when it exits the Hubble horizon~\cite{Turner:1987bw, Garretson:1992vt, Anber:2006xt}. Although the amplified mode quickly decays on super-horizon scales, a new mode always arises from sub-horizon scales and the gauge field amplitude is persistent. Since each Fourier mode evolves independently and its amplitude is randomly produced from quantum fluctuations, the $\mathrm{U}(1)$ gauge field amplitude should fluctuate and its orientation should continuously change in coordinate space. Nevertheless, such stochastic behavior of the gauge field has not been explored in the literature. This is not just an academic question but rather could be related to phenomenological consequences, such as the resultant baryon asymmetry. It is desirable to understand whether the stochastic nature of gauge fields could alter the conventional picture. The stochastic formalism, an effective theory for super-horizon fields often called \emph{IR modes}, is useful for investigating such stochastic behavior of a field caused by quantum fluctuations during inflation (see, e.g., Refs.~\cite{Starobinsky:1982ee,Starobinsky:1986fx,Starobinsky:1994bd} for the first papers). As sub-horizon quantum fluctuations (dubbed \emph{UV modes}) continuously exit the horizon and join the IR modes in the accelerated expansion phase of the universe, the IR \ac{EoM} includes a ``noise'' term representing these fluctuations. In particular, if the UV modes get enhanced around the horizon crossing and can be viewed as classical fields with sufficient squeezing of mode functions, the dynamics of IR modes can be understood as a non-quantum Brownian motion. In this way, one can analyze the behavior of each local horizon patch by means of classical statistical mechanics. 
Though the stochastic formalism for scalar fields (both for inflatons and spectators, see, e.g., Ref.~\cite{Pinol:2020cdp} and references therein) has been well established so far, its application to vector fields has not been developed enough, because they are not enhanced by the horizon crossing in their minimal setup. The first study of the stochastic formalism for vector fields has addressed a kinetic coupling model where the inflaton is coupled to the kinetic term of $\mathrm{U}(1)$ gauge fields~\cite{Fujita:2017lfu}.\footnote{Ref.~\cite{Talebian:2019opf} also studied the stochastic formalism in the kinetic coupling model, but the stochastic equation there does not reproduce the classical background behavior even if the noise term is dropped. This is because the interplay between the inflaton and the gauge field, which enables a classical attractor solution for the gauge field, is not properly taken into account, unlike Ref.~\cite{Fujita:2017lfu}.} As described above, the \ac{CS} coupling can also source gauge fields in the axion inflation and thus they can be a good target of the stochastic formalism. However, the previous work on the stochastic formalism of this model claimed that both the electric and magnetic fields were always aligned along ``the $\hat{\bm x}$-direction'' and no rotation of their directions was discussed, which shows a stark contrast to our intuitive argument above~\cite{Talebian:2022jkb}. In this paper, we develop the stochastic formalism for $\mathrm{U}(1)$ gauge fields and explore its implication. We derive the Langevin equation for the $\mathrm{U}(1)$ gauge field with the CS coupling to a rolling axion during inflation. The derivation is analogous to that of a scalar field, but has some distinctions. We also solve the derived equation to illustrate the stochastic behavior of the $\mathrm{U}(1)$ gauge field. In particular, our numerical simulation demonstrates that the amplitude fluctuation and the change of the direction based on the above intuitive argument are indeed realized. This paper is organized as follows. In Sec.~\ref{model}, we briefly explain our setup. In Sec.~\ref{Derivation of Langevin equation}, we construct the stochastic formalism for the $\mathrm{U}(1)$ gauge field and derive its Langevin equation. In Sec.~\ref{Analytic results}, we analytically find the solution and study its properties. In Sec.~\ref{Numerical Simulation}, some results of our numerical simulation are shown. Sec.~\ref{Conclusion} is devoted to the conclusion of this paper. \section{Tachyonic growth of gauge fields in the axion inflation} \label{model} In this section, we briefly review our model in which the inflaton $\phi$ is coupled to the $\mathrm{U}(1)$ gauge field $A_\mu$ through the CS coupling; \begin{align} \mathcal{L}=\frac{1}{2}\partial_\mu \phi\partial^\mu \phi-V(\phi) -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4f}\phi F_{\mu\nu}\tilde{F}^{\mu\nu}, \end{align} where $F_{\mu\nu}\equiv \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field strength and $\tilde{F}^{\mu\nu}\equiv \epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}/(2\sqrt{-g})$ is its dual. The determinant of the spacetime metric is denoted by $g$ and the totally anti-symmetric tensor is defined by $\epsilon^{0123}=1$. In this paper, we do not specify the inflaton potential $V(\phi)$ or the value of the axion decay constant $f$ but simply assume the homogeneous and constant slow roll of the inflaton, $\partial_t\phi=\text{const.}$, as an input parameter in the approximately \ac{dS} background. 
The sign of the spacetime metric is defined as $\mathrm{d} s^2= a^2(\tau)(\mathrm{d} \tau^2-\mathrm{d} \bm{x}^2)$. Adopting the Coulomb gauge in vacuum, $A_0=\partial_iA_i=0$, the \ac{EoM} for the comoving gauge field is given by \begin{align} \partial_\tau^2 A_i-\partial_j^2 A_i-\frac{1}{f} (\partial_\tau \phi) \epsilon_{ijl}\partial_j A_l=0, \label{Original A EoM} \end{align} where the conformal time is denoted by $\tau$ and the rank-$3$ totally anti-symmetric tensor is $\epsilon_{123} = 1$. The gauge field is decomposed by the circular polarization and quantized as \begin{align} A_i(\tau, \bm{x}) &= \sum_{\lambda=\pm} \int \frac{{\rm d}^3 k}{(2\pi)^3} e^{i \bm{k \cdot x}} e_{i}^{(\lambda)}(\hat{\bm{k}}) \hat{A}_\lambda(\tau,\bm k), \\ \hat{A}_\lambda(\tau,\bm k) &= \hat{a}_{\bm{k}}^{(\lambda)} \mathcal{A}_\lambda(\tau,k) + \hat{a}_{-\bm{k}}^{(\lambda) \dag} \mathcal{A}_\lambda^*(\tau,k), \label{quantization} \end{align} where $e^{(\pm)}_i(\hat{\bm{k}})$ are the right/left-handed polarization vectors defined by $i \bm{k} \cp \bm e^{(\pm)}(\hat{\bm{k}})=\pm k\, \bm{e}^{(\pm)}(\hat{\bm{k}})$, and $\hat{a}_{\bm{k}}^{(\pm) \dag}$/$\hat{a}_{\bm{k}}^{(\pm)}$ are the creation/annihilation operators which satisfy the commutation relation of $[\hat{a}^{(\lambda)}_{\bm{k}},\hat{a}^{(\sigma) \dag}_{-\bm{k}'}] = (2\pi)^3\delta(\bm{k}+\bm{k}')\delta^{\lambda \sigma}$. During inflation $aH=-1/\tau$, the EoM for the mode function is written as \begin{align} \left[ \partial_\tau^2 +k^2 \pm 2k \frac{\xi}{\tau} \right] \mathcal{A}_\pm(\tau,k)=0, \label{EoMforA} \end{align} with a characteristic parameter \begin{align} \xi\equiv \frac{\partial_\tau \phi}{2f aH}=\frac{\dot{\phi}}{2fH}, \end{align} where dot denotes the cosmic time derivative. If $\xi>0$, for instance, $\mathcal{A}_+$ modes undergo an exponential enhancement around the horizon crossing, while $\mathcal{A}_-$ modes do otherwise. In the rest of this paper, we take $\xi > 0$ since the solution for $\xi < 0$ is readily obtained by performing the $CP$ transformation to the solution of $\xi > 0$. With the Bunch--Davies vacuum initial condition and constant $\xi$, one can find the analytic solution for $\mathcal{A}_+$ as \begin{align} \mathcal{A}_+(\tau,k)=\frac{1}{\sqrt{2k}}e^{\pi\xi/2} W_{-i\xi,1/2}(2ik\tau), \label{A_sol} \end{align} where $W_{\alpha,\beta}(z)$ is the Whittaker $W$ function. This solution approaches a constant asymptotic value in the super-horizon limit, \begin{align} \mathcal{A}_+(\tau,k) \xrightarrow{|k\tau|\ll \xi^{-1}} \frac{1}{\sqrt{2k}}\frac{e^{\pi \xi/2}}{\Gamma(1+i\xi)}, \end{align} where $\Gamma(z)$ is the Gamma function. 
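As a sanity check of the analytic solution, the Whittaker mode function can be evaluated numerically. The following Python sketch (not part of the original analysis; it assumes the \texttt{mpmath} library and illustrative parameter values) evaluates Eq.~\eqref{A_sol} deep on super-horizon scales and compares it with the asymptotic expression quoted above.
\begin{verbatim}
# Minimal sketch: evaluate the mode function A_+ of Eq. (A_sol) with mpmath and
# compare its deep super-horizon value with e^{pi xi/2}/(sqrt(2k) Gamma(1+i xi)).
import mpmath as mp

xi, k = 5.0, 1.0                            # illustrative parameter values

def A_plus(ktau):
    """Mode function A_+ of Eq. (A_sol), written in terms of -k*tau = ktau > 0."""
    return mp.exp(mp.pi * xi / 2) / mp.sqrt(2 * k) * mp.whitw(-1j * xi, 0.5, -2j * ktau)

numeric   = A_plus(1e-5)                    # deep super-horizon, |k tau| << 1/xi
asymptote = mp.exp(mp.pi * xi / 2) / (mp.sqrt(2 * k) * mp.gamma(1 + 1j * xi))
print(abs(numeric), abs(asymptote))         # the magnitudes should nearly coincide
\end{verbatim}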
With the solution~\eqref{A_sol}, the {\it physical} electromagnetic spectra for the $+$ mode are obtained as \begin{align} \tilde{\mathcal{P}}_{BB}^+(\tau,k)&= a^{-4}\mathcal{P}_{BB}^+(\tau,k)= \frac{k^5}{2\pi^2 a^4}\left|\mathcal{A}_+(\tau,k)\right|^{2} =\frac{|k\tau|^4 H^4}{4\pi^2}e^{\pi\xi} \left|W(-k\tau)\right|^2, \label{PB tilde} \\ \tilde{\mathcal{P}}_{EE}^+(\tau,k)&= a^{-4}\mathcal{P}_{EE}^+(\tau,k) = \frac{k^3}{2\pi^2 a^4} \left|\partial_\tau \mathcal{A}_+(\tau,k)\right|^{2} =\frac{|k\tau|^4 H^4}{4\pi^2 }e^{\pi\xi} \left|W'(-k\tau)\right|^2, \label{PE tilde} \\ \tilde{\mathcal{P}}_{BE}^+(\tau,k)&= a^{-4}\mathcal{P}_{BE}^+(\tau,k) = -\frac{k^4}{2\pi^2 a^4} \mathcal{A}_+(\tau,k)\partial_\tau \mathcal{A}_+^*(\tau,k) =\frac{|k\tau|^4 H^4}{4\pi^2 }e^{\pi\xi} W(-k\tau)W'^*(-k\tau), \label{PBE tilde} \end{align} where $\mathcal{P}_{XX}^\lambda$ are the {\it comoving} spectra and $\mathcal{P}_{EB}^\lambda=(\mathcal{P}_{BE}^{\lambda})^*$. Here, for brevity, we define the Whittaker function and its derivative as \begin{align} W(z) \equiv W_{-i\xi,1/2}(-2i z), \qquad W'(z)\equiv \partial_z W_{-i\xi,1/2}(-2iz). \end{align} In the left panel of Fig.~\ref{x510}, we present these {\it physical} power spectra. One observes that the spectra reach their peaks at around $|k_\mathrm{p}\tau|\simeq \xi^{-1}$. The right panel of Fig.~\ref{x510} shows the time evolution of the complex phases of the mode function and its derivative. After the phase rotation stops at $\kappa\simeq 2\xi$, one can treat $\hat{A}_\lambda$ as a classical perturbation~\cite{Polarski:1995jg}. \begin{figure}[tbp] \includegraphics[width=80mm]{FigTF/EB_powerspectra_plot.pdf} \hspace{8mm} \includegraphics[width=80mm]{FigTF/Phase.pdf} \caption {{\it (Left panel)} The {\it physical} power spectra $H^{-4}\tilde{\mathcal{P}}_{BB}^+$ (blue), $H^{-4}\tilde{\mathcal{P}}_{EE}^+$ (orange) and $-H^{-4}(\tilde{\mathcal{P}}_{EB}^+ +\tilde{\mathcal{P}}_{BE}^+)/2$ (green dashed) given in Eqs.\eqref{PB tilde}--\eqref{PBE tilde} for $\xi=5$. $\tilde{\mathcal{P}}_{EE}^+$ is larger than $|\mathrm{Re}[\tilde{\mathcal{P}}_{EB}^+]|$ and $\tilde{\mathcal{P}}_{BB}^+$ by $\mathcal{O}(\xi)$ and $\mathcal{O}(\xi^2)$, respectively. {\it (Right panel)} The phase of $W$ (blue) and $W'$ (orange) for $\xi=5$. The complex phases stop rotating at around $|k\tau|=2\xi$ and these terminal phases are different by $\pi$. } \label{x510} \end{figure} \section{Derivation of Langevin equation} \label{Derivation of Langevin equation} In this section, we develop the stochastic formalism of the $\mathrm{U}(1)$ gauge field. To go to the stochastic picture, we divide the vector potential $A_i (\tau, \bm x)$ into the IR part and the UV part, \begin{align} \bm A(\tau,\bm{x})= \sum_{\lambda=\pm}\left[ \bm A_{\rm IR}^\lambda(\tau,\bm{x}) + \bm A_{\rm UV}^\lambda(\tau,\bm{x})\right], \label{UVIR dec} \end{align} with \begin{align} \bm A_{\rm IR}^\lambda(\tau,\bm{x}) &\equiv \int \frac{\mathrm{d}^3 k}{(2\pi)^3} e^{i\bm{k}\cdot\bm{x}}\,\mathcal{W}(\tau,k) \bm e^\lambda(\hat{\bm k})\hat{A}_\lambda(\tau,\bm k), \label{eq:A_IR} \\ \bm A_{\rm UV}^\lambda(\tau,\bm{x}) &\equiv \int \frac{\mathrm{d}^3 k}{(2\pi)^3} e^{i\bm{k}\cdot\bm{x}}\left[1-\mathcal{W}(\tau,k)\right] \bm e^\lambda(\hat{\bm k})\hat{A}_\lambda(\tau,\bm k), \end{align} where we introduce a window function, \begin{align} \mathcal{W}(\tau,k)= \Theta(\kappa a H-k). \end{align} Here $\Theta(x)$ is the Heaviside function and $\kappa$ is a constant parameter which characterizes the boundary between IR and UV parts. 
$\bm A_{\rm IR}(\tau,\bm{x})$ contains only the contributions from the mode functions for $k< \kappa a H$. We also define the IR and UV parts of its conjugate momentum $\Pi_i(\tau,\bm{x})\equiv A_i'(\tau,\bm{x})$ as \begin{align} \bm \Pi_{\rm IR}^\lambda(\tau,\bm{x}) &\equiv \int \frac{\mathrm{d}^3 k}{(2\pi)^3} e^{i\bm{k}\cdot\bm{x}}\, \mathcal{W}(\tau,k) \bm e^\lambda(\hat{\bm k})\hat{A}'_\lambda(\tau,\bm k), \\ \bm \Pi_{\rm UV}^\lambda(\tau,\bm{x}) &\equiv \int \frac{\mathrm{d}^3 k}{(2\pi)^3} e^{i\bm{k}\cdot\bm{x}}\left[1-\mathcal{W}(\tau,k)\right] \bm e^\lambda(\hat{\bm k})\hat{A}'_\lambda(\tau,\bm k). \end{align} The key point is that due to the time-dependence of the window function, the time derivative of $\bm{A}_\mathrm{IR}$ does not simply coincide with the IR part of the conjugate momentum $\bm{\Pi}_\mathrm{IR}$ but differs by the mode on the boundary as \begin{align} \partial_\tau \bm A_{\rm IR}^\lambda(\tau,\bm{x})-\bm\Pi_{\rm IR}^\lambda(\tau,\bm{x}) = \int \frac{\mathrm{d}^3 k}{(2\pi)^3} e^{i\bm{k}\cdot\bm{x}}\, \mathcal{W}'(\tau,k) \bm e^\lambda(\hat{\bm k})\hat{A}_\lambda(\tau,\bm k). \label{A'Pi} \end{align} Note that the time derivative of the window function yields Dirac's delta function \begin{align} \mathcal{W}'(\tau,k)=\kappa a^2H^2\delta(\kappa aH-k). \end{align} Therefore, the \ac{EoM} for the IR modes following the original one~\eqref{Original A EoM} is not closed only by the IR modes but corrected by the transition mode as \begin{align} \partial_\tau\bm\Pi_{\rm IR}^\lambda - \bm \nabla^2 \bm A_{\rm IR}^\lambda-\frac{1}{f}\phi' \curl \bm A_{\rm IR}^\lambda= \int \frac{\mathrm{d}^3 k}{(2\pi)^3} e^{i\bm{k}\cdot\bm{x}}\, \mathcal{W}'(\tau,k)\bm e^\lambda(\hat{\bm k})\hat{A}'_\lambda(\tau,\bm k), \end{align} where the term on the right-hand side represents the new mode joining the IR part. Here we introduce the IR part of the {\it physical} electromagnetic fields, \begin{align} \tilde{\bm E}_{\rm IR}^\lambda \equiv - a^{-2}\bm\Pi_{\rm IR}^\lambda, \qquad \tilde{\bm B}_{\rm IR}^\lambda \equiv a^{-2}\curl \bm A_{\rm IR}^\lambda. \label{eq:EB_IR} \end{align} Taking the rotation of Eq.~\eqref{A'Pi} and changing the time variable from the conformal time $\tau$ to the cosmic time $t$, one finds the stochastic equations for the {\it physical} electromagnetic fields as \begin{align} \dot{\tilde{\bm B}}_{\rm IR}^\lambda+2H\tilde{\bm B}_{\rm IR}^\lambda +a^{-1}\curl\tilde{\bm E}_{\rm IR}^\lambda =\tilde{\bm\Xi}_B^\lambda, \label{phy B EoM} \\ \dot{\tilde{\bm E}}_{\rm IR}^\lambda + 2H\tilde{\bm E}_{\rm IR}^\lambda - a^{-1} \curl \tilde{\bm B}_{\rm IR}^\lambda + 2H\xi \tilde{\bm B}_{\rm IR}^\lambda= \tilde{\bm\Xi}_E^\lambda, \label{phy E EoM} \end{align} where we define \begin{align} \tilde{\bm\Xi}_B^\lambda(t,\bm{x}) &\equiv \lambda H\frac{k_\mathrm{c} (t)}{a^2 (t)}\int \frac{\mathrm{d}^3 k}{(2\pi)^3} e^{i\bm{k}\cdot\bm{x}}\, \delta(k_\mathrm{c}(t)-k) \bm e^\lambda(\hat{\bm k})k \hat{A}_\lambda(\tau,\bm k), \label{til xi B} \\ \tilde{\bm\Xi}_E^\lambda(t,\bm{x}) & \equiv - H \frac{k_\mathrm{c} (t)}{a^2 (t)} \int \frac{\mathrm{d}^3 k}{(2\pi)^3} e^{i\bm{k}\cdot\bm{x}}\, \delta(k_\mathrm{c}(t)-k)\bm e^\lambda(\hat{\bm k})\hat{A}'_\lambda(\tau,\bm k), \label{til xi E} \end{align} with the transition scale $k_\mathrm{c}(t)\equiv \kappa a(t) H$. If one takes a sufficiently small $\kappa\ll2\xi$, these transition modes can be understood as random but classical noise as we saw in the previous section. 
Their statistics are inherited from the results of quantum computations as \bae{ \braket{\tilde{\Xi}^\lambda_{Xi}(t,\bm{x})}=0 \qc \braket{\tilde{\Xi}^\lambda_{Xi}(t,\bm{x})\,\tilde{\Xi}^\sigma_{Yj}(t^\prime,\bm{y})}=\tilde{\mathcal{P}}_{XY}^{\lambda}(\kappa)H\delta(t-t^\prime)\delta^{\lambda\sigma}\psi^{\lambda}_{ij}(k_\mathrm{c}(t)\abs{\bm{x}-\bm{y}}), } where $X$ and $Y$ denote $B$ or $E$ and we introduced a short-hand notation $\tilde{\mathcal{P}}_{BB}^\lambda(\kappa)\equiv \tilde{\mathcal{P}}_{BB}^\lambda(\tau,k_\mathrm{c}(t))$, since it depends only on $-k_\mathrm{c}(t)\tau =\kappa$ (see Eq.~\eqref{PB tilde}). $\psi^\lambda_{ij}(z)$ represents the spherical correlator, \bae{ \psi^\pm_{ij}(z)\coloneqq\frac{1}{4\pi}\int\mathrm{d}{\cos{\theta}}\mathrm{d}{\phi}e^{iz\cos\theta}e^\pm_i(\hat{\bm{k}})e^{\pm*}_j(\hat{\bm{k}})=\pmqty{\frac{z\cos z+(z^2-1)\sin z}{2z^3} & \mp\frac{z\cos z-\sin z}{2z^2} & 0 \\ \pm\frac{z\cos z-\sin z}{2z^2} & \frac{z\cos z+(z^2-1)\sin z}{2z^3} & 0 \\ 0 & 0 & \frac{-z\cos z+\sin z}{z^3}}, } with the definition of the polarization vectors $\bm{e}^\pm(\hat{\bm{k}})=(\cos\varphi\cos\theta\mp i\sin\varphi,\sin\varphi\cos\theta\pm i\cos\varphi,-\sin\theta)^T/\sqrt{2}$ for $\hat{\bm{k}}=(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)^T$. As we are interested in the coarse-grained fields, such an oscillating and decaying correlator can be approximated by zero for $z\gg 1$ and by the limit of $z=0$ for $z\ll 1$. Although the following discussions do not strongly depend on its intermediate behavior, for simplicity, one can bridge the asymptotic forms with a step function as (see Ref.~\cite{Starobinsky:1994bd}) \bae{ \psi^\pm_{ij}(z)\simeq\psi^\pm_{ij}(0)\Theta(1-z)=\frac{1}{3}\delta_{ij}\Theta(1-z). } This implies that $\tilde{\bm{\Xi}}^\lambda_X$ is understood as a patch-independent ($\propto\Theta(1-k_\mathrm{c}\abs{\bm{x}-\bm{y}})$) and white ($\propto\delta(t-t^\prime)$) Gaussian noise. Furthermore, if one takes a sufficiently small $\kappa\ll1/\xi$, the gradient terms in the stochastic EoMs can be dropped, which is confirmed in the next section. Under this condition, therefore, the stochastic equation is independent for each local patch, since $\tilde{\bm{\Xi}}^\lambda_X$ is not correlated among patches and the influence from the neighboring patches is negligible. We hereafter focus on the one-patch dynamics, suppressing the spatial index $\bm{x}$. Note that with the conformal time and the comoving electromagnetic fields, the noise term would be $\bm{\Xi}_X=a^3 \tilde{\bm \Xi}_X$ and their variances increase in time, $\left< \Xi_{Xi}^\lambda(\tau) \Xi_{Yj}^\sigma(\tau')\right> \propto a^5\delta(\tau'-\tau)$. Such noise terms are tricky to treat in numerical calculations. Thus, it is more convenient to handle the {\it physical} electromagnetic fields for which the noise terms have constant variances. In the stochastic equations, two polarization modes are decoupled. Hereafter, we focus on the exponentially amplified mode $\lambda=+$ and suppress the polarization label. Although we have two noise terms, $\tilde{\bm \Xi}_B$ and $\tilde{\bm \Xi}_E$, they are not independent of each other. We define a matrix \begin{align} \mathcal{M}\equiv \frac{4\pi^2}{\kappa^4 H^4 e^{\pi\xi}} \begin{pmatrix}\tilde{\mathcal{P}}_{BB} & \tilde{\mathcal{P}}_{BE}\\ \tilde{\mathcal{P}}_{EB} & \tilde{\mathcal{P}}_{EE} \\ \end{pmatrix} = \begin{pmatrix}|W|^2 & W W'^{*} \\ W^* W' & |W'|^2 \\ \end{pmatrix}, \label{Matrix M} \end{align} where all arguments are $\kappa$. 
The determinant of this matrix is zero, $\det(\mathcal{M})=0$. As one observes in the right panel of Fig.~\ref{x510}, for a sufficiently small $\kappa\ll2\xi$, the rotation of the phase of $W$ stops and $\mathcal{M}$ becomes a real and symmetric matrix, \begin{align} \mathcal{M}\xrightarrow{\kappa\ll 2\xi} \begin{pmatrix}|W|^2 & -|W| |W'| \\ -|W| |W'| & |W'|^2 \\ \end{pmatrix}, \end{align} where the off-diagonal part has a minus sign because the phases of $W$ and $W'$ are different by $\pi$ as seen in the right panel of Fig.~\ref{x510}. Then we can diagonalize it with a rotational matrix \begin{align} R=\frac{1}{\sqrt{|W|^2+|W'|^2}} \begin{pmatrix} |W'| & |W| \\ -|W| & |W'| \\ \end{pmatrix} \quad\Longrightarrow\quad R \mathcal{M}R^T=\begin{pmatrix}0 & 0 \\ 0 & |W'|^2+|W|^2 \\ \end{pmatrix}. \label{R def} \end{align} Multiplying the stochastic EoMs by this rotational matrix, one finds \begin{align}\label{eq: rotated EoM} R\begin{pmatrix} a^{-2}\partial_t (a^2\tilde{\bm B}_{\rm IR}) \\ a^{-2}\partial_t (a^2 \tilde{\bm E}_{\rm IR}) + 2H\xi\tilde{\bm B}_{\rm IR} \\ \end{pmatrix} = R\begin{pmatrix} \tilde{\bm\Xi}_B \\ \tilde{\bm\Xi}_E \\ \end{pmatrix}\equiv \begin{pmatrix} \tilde{\bm{\Xi}}_0\\ \tilde{\bm{\Xi}}\\ \end{pmatrix}, \end{align} where the gradient terms are dropped. $\tilde{\bm{\Xi}}_0\propto |W'| \tilde{\bm\Xi}_B + |W| \tilde{\bm\Xi}_E$ has only vanishing correlations for $\kappa\ll 2\xi$, \begin{align} \left< \tilde{\bm \Xi}_{0}\right>=0, \qquad \left< \tilde{\Xi}_{0i}(t) \tilde{\Xi}_{0j}(t')\right> =0, \qquad \left< \tilde{\Xi}_{0i}(t) \tilde{\Xi}_{j}(t')\right> =0. \end{align} Thus $\tilde{\bm \Xi}_0$ can be ignored and we have only one noise term $\tilde{\bm \Xi}$. Note that $\tilde{\mathcal{P}}_{BE}=\tilde{\mathcal{P}}_{EB}$ in this limit and thus we hereafter do not distinguish $\braket{\tilde{\bm{E}}_\mathrm{IR}\cdot\tilde{\bm{B}}_\mathrm{IR}}$ and $\braket{\tilde{\bm{B}}_\mathrm{IR}\cdot\tilde{\bm{E}}_\mathrm{IR}}$. The stochastic EoMs now read \begin{align}\label{eq: stochastic EoM in t} \begin{pmatrix} a^{-2}\partial_t (a^2\tilde{\bm B}_{\rm IR}) \\ a^{-2}\partial_t (a^2 \tilde{\bm E}_{\rm IR}) + 2H\xi\tilde{\bm B}_{\rm IR} \\ \end{pmatrix} \simeq R^T \begin{pmatrix} 0\\ \tilde{\bm{\Xi}}(t)\\ \end{pmatrix}, \end{align} where the noise term is characterized by its variance, \begin{align} \left<\tilde{\Xi}_i(t) \tilde{\Xi}_j(t')\right> = \delta_{ij}\delta(t-t')\frac{\kappa^4 H^5}{12\pi^2}e^{\pi\xi} \left(|W(\kappa)|^2+|W'(\kappa)|^2\right). \end{align} This set of equations has a simple interpretation. Without the noise terms, the IR electromagnetic fields quickly decay due to the Hubble friction. However, thanks to the noise terms with the constant variance, $\tilde{\bm B}_{\rm IR}$ is always sourced by $\tilde{\bm \Xi}_B$ and $\tilde{\bm E}_{\rm IR}$ is produced by not only $\tilde{\bm \Xi}_E$ but also $\tilde{\bm B}_{\rm IR}$. \section{Analytic results} \label{Analytic results} It is straightforward to obtain the formal solutions of Eq.~\eqref{eq: stochastic EoM in t} as \begin{align} \tilde{\bm B}_{\rm IR}(t)&\simeq a^{-2}(t)\int^t\mathrm{d} t' a^2(t')\tilde{\bm{\Xi}}_B(t'), \label{B solution} \\ \tilde{\bm E}_{\rm IR}(t)&\simeq a^{-2}(t)\left[\int^t_{t_{\rm in}} \mathrm{d} t' a^2(t') \tilde{\bm{\Xi}}_E (t') - 2H\xi\int^t\mathrm{d} t'\int^{t'}\mathrm{d} t'' a^2(t'')\tilde{\bm{\Xi}}_B(t'') \right], \end{align} where we neglected the initial values of $\tilde{\bm B}_{\rm IR}$ and $\tilde{\bm E}_{\rm IR}$, because their contributions quickly dilute. 
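Before evaluating these solutions, we note that the diagonalization above is easy to verify numerically. The following Python sketch (assuming the \texttt{mpmath} and \texttt{numpy} libraries; the values of $\xi$ and $\kappa$ are illustrative) builds the matrix $\mathcal{M}$ of Eq.~\eqref{Matrix M} at the cutoff, checks that its determinant vanishes, and confirms that the rotation $R$ of Eq.~\eqref{R def} diagonalizes its real small-$\kappa$ limit.
\begin{verbatim}
# Sketch: construct M from W(kappa), W'(kappa) and verify the diagonalization by R.
import numpy as np
import mpmath as mp

xi, kappa = 5.0, 0.02                       # illustrative values with kappa << 2 xi

W  = complex(mp.whitw(-1j * xi, 0.5, -2j * kappa))
Wp = complex(mp.diff(lambda z: mp.whitw(-1j * xi, 0.5, -2j * z), kappa))

M = np.array([[abs(W)**2,       W * np.conj(Wp)],
              [np.conj(W) * Wp, abs(Wp)**2     ]])
print(np.linalg.det(M))                     # vanishes analytically

aW, aWp = abs(W), abs(Wp)
M_real = np.array([[aW**2, -aW * aWp], [-aW * aWp, aWp**2]])
R = np.array([[aWp, aW], [-aW, aWp]]) / np.hypot(aW, aWp)
print(R @ M_real @ R.T)                     # ~ diag(0, |W|^2 + |W'|^2)
\end{verbatim}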
The variances of the IR electromagnetic fields are given by \begin{align}\label{eq: Bvar} \left< \tilde{\bm B}_{\rm IR}^2(t)\right> &\simeq a^{-4}(t)\iint^t\mathrm{d} t'\mathrm{d} t'' a^2(t') a^2(t'') \left<\tilde{\bm{\Xi}}_B(t')\tilde{\bm{\Xi}}_B(t'')\right>, \notag\\ &=H\tilde{\mathcal{P}}_{BB}(\kappa)\,a^{-4}(t)\int^t\mathrm{d} t' a^4(t'), \notag\\ &=\frac{1}{4}\tilde{\mathcal{P}}_{BB}(\kappa), \end{align} and \begin{align}\label{eq: Evar} \left< \tilde{\bm E}_{\rm IR}^2(t)\right> &\simeq \frac{1}{4}\tilde{\mathcal{P}}_{EE}(\kappa) - \frac{\xi}{8}\left(\tilde{\mathcal{P}}_{BE}(\kappa)+\tilde{\mathcal{P}}_{EB}(\kappa)\right) +\frac{\xi^2}{8}\tilde{\mathcal{P}}_{BB}(\kappa). \end{align} The cross-correlation is also computed as \begin{align} \label{eq: EBvar} \left< \tilde{\bm E}_{\rm IR}(t)\cdot \tilde{\bm B}_{\rm IR}(t)\right>=\frac{1}{4}\tilde{\mathcal{P}}_{EB}(\kappa) - \frac{\xi}{8} \tilde{\mathcal{P}}_{BB}(\kappa)\,. \end{align} Now we check the consistency of ignoring the gradient terms in the stochastic EoMs. To ignore the third term compared to the second term in Eq.~\eqref{phy B EoM}, we need \begin{align} \label{eq:consistencycheck} \frac{a^{-1} k_\mathrm{c} |\tilde{\bm E}_{\rm IR}|}{2H|\tilde{\bm B}_{\rm IR}|}\simeq \frac{\kappa}{2}\sqrt{\frac{\left< \tilde{\bm E}_{\rm IR}^2\right>}{\left< \tilde{\bm B}_{\rm IR}^2\right>}} \ll 1, \end{align} where the rotation was evaluated at the cutoff scale, $|\curl \tilde{\bm E}_{\rm IR}|\simeq k_\mathrm{c}|\tilde{\bm E}_{\rm IR}|$. We introduce $\kappa_\mathrm{max}(\xi)$ which saturates the above condition as \begin{align} \frac{\kappa_\mathrm{max}}{2}\sqrt{\frac{\left< \tilde{\bm E}_{\rm IR}^2\right>(\kappa_\mathrm{max},\xi)}{\left< \tilde{\bm B}_{\rm IR}^2\right>(\kappa_\mathrm{max},\xi)}}=1. \label{saturation condition} \end{align} As one can check, the left-hand side of Eq.~\eqref{eq:consistencycheck} is a monotonically increasing function of $\kappa$, and thus the gradient term can be neglected for $\kappa\ll \kappa_\mathrm{max}$. We present numerically computed $\kappa_\mathrm{max}$ in the left panel of Fig.~\ref{invest_plot}. One observes that $\kappa_\mathrm{max}\simeq 1/\xi$ almost irrespective of the value of $\xi$. Thus the condition to safely neglect the gradient terms is \begin{align} \epsilon\equiv \xi\kappa \ll 1. \label{epsilon condition} \end{align} Under this condition, the gradient term in Eq.~\eqref{phy E EoM} can also be ignored compared with the fourth term, as we focus on a sufficiently large amplification parameter $\xi>1$. \begin{figure}[tbp] \includegraphics[width=80mm]{FigTF/kappa_max.pdf} \hspace{8mm} \includegraphics[width=80mm]{FigTF/EB_antiparallel.pdf} \caption {{\it (Left panel)} $\kappa_\mathrm{max}$ defined in Eq.~\eqref{saturation condition} multiplied by $\xi$. For $\kappa\ll \kappa_\mathrm{max}\simeq \xi^{-1}$, the gradient term can be safely ignored. {\it (Right panel)} $\left< \hat{\bm E}_{\rm IR}\cdot \hat{\bm B}_{\rm IR}\right>$ defined in Eq.~\eqref{eq:EBcorr} against $\kappa$ for $\xi=3$ (blue), $10$ (orange) and $30$ (green). The vertical dashed lines denote $\epsilon\equiv \xi\kappa=1$ or equivalently $\kappa\simeq\kappa_\mathrm{max}(\xi)$. For $\epsilon\ll 1$, the IR electromagnetic fields are almost completely anti-parallel. } \label{invest_plot} \end{figure} The condition~\eqref{epsilon condition} is reasonable. 
Since the {\it physical} power spectra peak at $|k_\mathrm{p}\tau|\sim 1/\xi$ as seen in Fig.~\ref{x510}, we should coarse-grain the electromagnetic fields on a larger scale than the correlation length $|k_\mathrm{p}\tau|^{-1}\sim \xi$ to obtain a patch-independent dynamics, which leads to Eq.~\eqref{epsilon condition}. We also note that, for $\xi > 1$, this gradient condition \eqref{epsilon condition} is tighter than the classicalization condition $\kappa<2\xi$. Using the analytic solutions~\eqref{eq: Bvar}--\eqref{eq: EBvar}, one finds that the IR electric field and the IR magnetic field are anti-parallel,\footnote{Note that the strict stochastic average of $\hat{\bm{E}}_\mathrm{IR}\cdot\hat{\bm{B}}_\mathrm{IR}=\frac{\tilde{\bm{E}}_\mathrm{IR}\cdot\tilde{\bm{B}}_\mathrm{IR}}{\sqrt{\tilde{\bm{B}}_\mathrm{IR}^2\tilde{\bm{E}}_\mathrm{IR}^2}}$ is not equivalent to $\frac{\braket{\tilde{\bm{E}}_\mathrm{IR}\cdot\tilde{\bm{B}}_\mathrm{IR}}}{\braket{\tilde{\bm{B}}_\mathrm{IR}^2}^{1/2}\braket{\tilde{\bm{E}}_\mathrm{IR}^2}^{1/2}}$. Here we rather call the latter $\braket{\hat{\bm{E}}_\mathrm{IR}\cdot\hat{\bm{B}}_\mathrm{IR}}$, which can be calculated analytically and indeed shows the anti-parallelness of the electromagnetic fields in average. Hereafter our discussions do not rely on this definition.} \begin{align} \label{eq:EBcorr} \left< \hat{\bm E}_{\rm IR}\cdot \hat{\bm B}_{\rm IR}\right> \equiv\frac{\left< \tilde{\bm E}_{\rm IR}\cdot \tilde{\bm B}_{\rm IR}\right>} {\left< \tilde{\bm B}_{\rm IR}^2\right>^{1/2}\left< \tilde{\bm E}_{\rm IR}^2\right>^{1/2}} \xrightarrow{\epsilon\ll 1} -1. \end{align} In the right panel of Fig.~\ref{invest_plot}, we present the $\kappa$ dependence of $\left< \hat{\bm E}_{\rm IR}\cdot \hat{\bm B}_{\rm IR}\right>$. One observes that $\left< \hat{\bm E}_{\rm IR}\cdot \hat{\bm B}_{\rm IR}\right>$ converges to $-1$ for $\epsilon\ll 1$, though a few percents deviation may be found at $\kappa\sim \kappa_\mathrm{max}$. We also compute the statistical properties of the energy density $\rho_\mathrm{IR}\equiv (\tilde{\bm E}_{\rm IR}^2+\tilde{\bm B}_{\rm IR}^2)/2$ and the inner product $\left< \tilde{\bm E}_{\rm IR}\cdot \tilde{\bm B}_{\rm IR}\right>$ of the IR electromagnetic fields. They appear in the Friedmann equation and the background EoM for the inflaton, respectively, and are of particular interest. The higher statistical moments of the IR fields are given by \begin{align}\label{eq: Bkur} \left< \tilde{\bm B}_{\rm IR}^4\right> =\frac{5}{3}\left< \tilde{\bm B}_{\rm IR}^2\right>^2, \qquad \left< \tilde{\bm E}_{\rm IR}^4\right> =\frac{5}{3}\left< \tilde{\bm E}_{\rm IR}^2\right>^2, \qquad \left< \left(\tilde{\bm E}_{\rm IR}\cdot \tilde{\bm B}_{\rm IR}\right)^2\right> =\frac{4}{3}\left< \tilde{\bm E}_{\rm IR}\cdot \tilde{\bm B}_{\rm IR}\right>^2 +\frac{1}{3}\left< \tilde{\bm E}_{\rm IR}^2\right>\left< \tilde{\bm B}_{\rm IR}^2\right>. \end{align} Note that although a Gaussian scalar random variable $S$ obeys $\left<S^4\right>=3\left<S^2\right>^2$, 3-dimensional vector one $V_i$ with $\left<V_i V_j\right>\propto \delta_{ij}$ generally satisfies $\left<V_i V_i V_j V_j\right>=\left<V_i^2\right>\left<V_j^2\right>+2\left<V_i V_j\right>^2=\left<{\bm V}^2\right>^2+(2/3)\left<{\bm V}^2\right>^2=(5/3)\left<{\bm V}^2\right>^2$. 
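The vector-Gaussian moment relations quoted above are easy to confirm numerically. The following Python sketch (a Monte-Carlo check with \texttt{numpy}; the sample size and the correlation coefficient are illustrative) reproduces $\left<{\bm V}^4\right>=(5/3)\left<{\bm V}^2\right>^2$ and the mixed relation used in Eq.~\eqref{eq: Bkur} for two correlated isotropic Gaussian vectors.
\begin{verbatim}
# Sketch: Monte-Carlo check of the isotropic Gaussian 3-vector moment relations.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 1_000_000, -0.9          # sample size and per-component correlation (illustrative)

g1 = rng.standard_normal((n, 3))
g2 = rng.standard_normal((n, 3))
U = g1
V = rho * g1 + np.sqrt(1 - rho**2) * g2     # isotropic Gaussian vector, correlated with U

U2 = np.sum(U**2, axis=1)
V2 = np.sum(V**2, axis=1)
UdotV = np.sum(U * V, axis=1)

print(np.mean(U2**2) / np.mean(U2)**2)      # ~ 5/3, cf. <B^4> = (5/3)<B^2>^2
lhs = np.mean(UdotV**2)
rhs = 4/3 * np.mean(UdotV)**2 + 1/3 * np.mean(U2) * np.mean(V2)
print(lhs, rhs)                             # the two should agree within MC error
\end{verbatim}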
Using them, one finds that the variances normalized by the squared mean values of $\rho_\mathrm{IR}$ and $\left< \tilde{\bm E}_{\rm IR}\cdot \tilde{\bm B}_{\rm IR}\right>$ are \begin{align} &\frac{\left< \rho_\mathrm{IR}^2\right>}{\left< \rho_\mathrm{IR}\right>^2} =\frac{5\left<\tilde{\bm B}_{\rm IR}^2\right>^2+5\left<\tilde{\bm E}_{\rm IR}^2\right>^2+6\left<\tilde{\bm B}_{\rm IR}^2\right>\left<\tilde{\bm E}_{\rm IR}^2\right>+4\braket{\tilde{\bm{B}}_\mathrm{IR}\cdot\tilde{\bm{E}}_\mathrm{IR}}^2}{3(\left<\tilde{\bm B}_{\rm IR}^2\right>+\left<\tilde{\bm E}_{\rm IR}^2\right>)^2} \,\xrightarrow{\epsilon\ll 1}\, \frac{5}{3}, \\ \frac{\braket{(\tilde{\bm{E}}_\mathrm{IR}\cdot\tilde{\bm{B}}_\mathrm{IR})^2}}{\braket{\tilde{\bm{E}}_\mathrm{IR}\cdot\tilde{\bm{B}}_\mathrm{IR}}^2} =1+\frac{1}{3}\frac{\left< \tilde{\bm B}_{\rm IR}^2\right>\left< \tilde{\bm E}_{\rm IR}^2\right>+\left<\tilde{\bm B}_{\rm IR}\cdot\tilde{\bm E}_{\rm IR}\right>^2}{\braket{\tilde{\bm{E}}_\mathrm{IR}\cdot\tilde{\bm{B}}_\mathrm{IR}}^2} \,\xrightarrow{\epsilon\ll 1}\, \frac{5}{3}\,, \end{align} where we used Eq.~\eqref{eq:EBcorr} in the first line, and the convergence of the second line is similar to $\left< \hat{\bm E}_{\rm IR}\cdot \hat{\bm B}_{\rm IR}\right>$ shown in the right panel of Fig.~\ref{invest_plot}. Hence, their variances have the same statistics as the kurtosis of a 3-dimensional Gaussian vector variable and are smaller than that of a Gaussian scalar variable. Before closing this section, we consider the correlation time of the IR electromagnetic fields. Rewriting the solution~\eqref{B solution} into $a^2(t+\Delta t)\tilde{\bm B}_{\rm IR}(t+\Delta t)-a^2(t)\tilde{\bm B}_{\rm IR}(t)=\int^{t+\Delta t}_{t}\mathrm{d} t' a^2(t')\tilde{\bm{\Xi}}_B(t')$, and doing the same for $\tilde{\bm E}_\mathrm{IR}$, one can show \begin{align} \left< \tilde{\bm B}_{\rm IR}(t)\cdot \tilde{\bm B}_{\rm IR}(t+\Delta t) \right> &=e^{-2H\Delta t} \tilde{\bm B}_{\rm IR}^2 (t), \label{Bi Be} \\ \left< \tilde{\bm E}_{\rm IR}(t)\cdot \tilde{\bm E}_{\rm IR}(t+\Delta t) \right> &=e^{-2H\Delta t} \left[\tilde{\bm E}_{\rm IR}^2(t) - 2\xi H\Delta t\, \tilde{\bm E}_{\rm IR}(t)\cdot\tilde{\bm B}_{\rm IR}(t)\right]. \label{Ei Ee} \end{align} Although the second term in Eq.~\eqref{Ei Ee} gives a linear correction, both correlations decay exponentially. The characteristic time scale is \begin{align} t_\mathrm{c}=\frac{1}{2H}. \label{correlation time} \end{align} Therefore, the IR electromagnetic fields take new independent values every half Hubble time. \section{Numerical Simulation} \label{Numerical Simulation} In this section we numerically simulate the IR electromagnetic fields and illustrate their behaviors. To make variables dimensionless, the time variable is often normalized by the Hubble parameter, that is, we use the e-folding number $N=\int^t_0H\mathrm{d}{t^\prime}$ as a time variable. The stochastic EoM~\eqref{eq: stochastic EoM in t} can be rewritten in $N$ as \bae{\label{num EoM} \pmqty{ \partial_N\tilde{\bm{B}}_\mathrm{IR}+2\tilde{\bm{B}}_\mathrm{IR} \\ \partial_N\tilde{\bm{E}}_\mathrm{IR}+2\tilde{\bm{E}}_\mathrm{IR} + 2\xi\tilde{\bm{B}}_\mathrm{IR} }=R^T \pmqty{ 0 \\ \tilde{\bm{\Xi}}(N) }, } with \bae{ \braket{\tilde{\Xi}_i(N)\tilde{\Xi}_j(N^\prime)}=\delta_{ij}\delta(N-N^\prime)\frac{\kappa^4H^4}{12\pi^2}e^{\pi\xi}(|W|^2+|W^\prime|^2). 
} \begin{figure} \centering \begin{tabular}{c} \begin{minipage}{0.5\hsize} \centering \includegraphics[width=0.95\hsize]{Figs/BAmpEAmp_err.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering \includegraphics[width=0.95\hsize]{Figs/rhoIR.pdf} \end{minipage} \\ \begin{minipage}{0.5\hsize} \centering \includegraphics[width=0.95\hsize]{Figs/EdotB.pdf} \end{minipage} \end{tabular} \caption{One realization of the IR electromagnetic fields numerically computed based on Eq.~\eqref{num EoM} with the parameters $H=10^{-5}M_{\rm Pl}$, $\xi=5$, and $\kappa=0.02$, and vanishing electromagnetic fields at initial time $N=0$. {\it (Top-Left panel)} The amplitudes $\tilde{\bm{B}}^2_\mathrm{IR}$ (blue) and $\tilde{\bm{E}}^2_\mathrm{IR}$ (orange dashed) normalized by $H^4$ against the e-folding number $N$. From bottom to top, the horizontal black thin lines show the analytic estimations of the mean amplitudes~\eqref{eq: Bvar} and \eqref{eq: Evar} and gray bands indicate their standard deviations~\eqref{eq: std deviation}. {\it (Top-Right panel)} A similar plot for $\rho_\mathrm{IR}=(\tilde{\bm{E}}_\mathrm{IR}^2+\tilde{\bm{B}}_\mathrm{IR}^2)/2$. {\it (Bottom panel)} The normalized inner product of the IR electromagnetic fields, $\hat{\bm{B}}_\mathrm{IR}\cdot\hat{\bm{E}}_\mathrm{IR}\equiv \tilde{\bm{B}}_\mathrm{IR}\cdot\tilde{\bm{E}}_\mathrm{IR}/(|\tilde{\bm{B}}_\mathrm{IR}||\tilde{\bm{E}}_\mathrm{IR}|)$. It stochastically fluctuates around $-1$. } \label{fig: sim} \end{figure} In Fig.~\ref{fig: sim}, we present the amplitudes, the energy density, and the inner product of the IR electromagnetic fields in one realization of our numerical simulations, starting from the vanishing field value at the initial time $N=0$. The analytically estimated mean values~\eqref{eq: Bvar} and \eqref{eq: Evar} and their standard deviations \bae{\label{eq: std deviation} \sqrt{\braket{(\tilde{\bm{B}}_\mathrm{IR}^2)^2}-\braket{\tilde{\bm{B}}_\mathrm{IR}^2}^2}=\sqrt{\frac{2}{3}}\braket{\tilde{\bm{B}}_\mathrm{IR}^2} \qc \sqrt{\braket{(\tilde{\bm{E}}_\mathrm{IR}^2)^2}-\braket{\tilde{\bm{E}}_\mathrm{IR}^2}^2}=\sqrt{\frac{2}{3}}\braket{\tilde{\bm{E}}_\mathrm{IR}^2}, } derived from Eq.~\eqref{eq: Bkur} are also shown. One finds that the amplitudes of $\tilde{B}_\mathrm{IR}$ and $\tilde{E}_\mathrm{IR}$ shown in the top-left panel rapidly reach and stay around the predicted averages within the estimated errors, which indicates that the superhorizon electric/magnetic fields are dominated by the stochastic noise. It is interesting to note that these two amplitudes fluctuate in a very similar way, because they are sourced by the same noise $\tilde{\bm{\Xi}}$. A similar plot for the energy density $\rho_\mathrm{IR}=(\tilde{\bm{E}}_\mathrm{IR}^2+\tilde{\bm{B}}_\mathrm{IR}^2)/2$ is shown in the top-right panel. The bottom panel confirms that the unit vectors $\hat{\bm{B}}_\mathrm{IR}=\tilde{\bm{B}}_\mathrm{IR}/|\tilde{\bm{B}}_\mathrm{IR}|$ and $\hat{\bm{E}}_\mathrm{IR}=\tilde{\bm{E}}_\mathrm{IR}/|\tilde{\bm{E}}_\mathrm{IR}|$ are in the anti-parallel configuration $\hat{\bm{B}}_\mathrm{IR}\cdot\hat{\bm{E}}_\mathrm{IR} = -1$ for most of the time. 
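For completeness, a minimal reproduction of this one-patch simulation is sketched below in Python (assuming the \texttt{numpy} and \texttt{mpmath} libraries and working in units of $H=1$; the step size and number of steps are illustrative rather than those used for Fig.~\ref{fig: sim}). It integrates Eq.~\eqref{num EoM} with an Euler--Maruyama scheme and compares the late-time average of $\tilde{\bm B}_\mathrm{IR}^2$ with the analytic estimate $\tilde{\mathcal{P}}_{BB}(\kappa)/4$ of Eq.~\eqref{eq: Bvar}.
\begin{verbatim}
# Sketch: Euler-Maruyama integration of the one-patch Langevin equation (num EoM).
import numpy as np
import mpmath as mp

xi, kappa, H = 5.0, 0.02, 1.0               # parameters as in Fig. (fig: sim), H = 1 units
Wf = lambda z: mp.whitw(-1j * xi, 0.5, -2j * z)
W, Wp = complex(Wf(kappa)), complex(mp.diff(Wf, kappa))
aW, aWp = abs(W), abs(Wp)

# noise variance of Eq. (num EoM) and the two rows of R^T acting on (0, Xi)^T
sigma2 = kappa**4 * H**4 * np.exp(np.pi * xi) * (aW**2 + aWp**2) / (12 * np.pi**2)
cB, cE = -aW / np.hypot(aW, aWp), aWp / np.hypot(aW, aWp)

dN, n_steps = 1e-3, 200_000                 # illustrative step size and duration
rng = np.random.default_rng(1)
B, E = np.zeros(3), np.zeros(3)
B2 = np.empty(n_steps)
for i in range(n_steps):
    noise = rng.standard_normal(3) * np.sqrt(sigma2 * dN)
    B = B + dN * (-2 * B) + cB * noise
    E = E + dN * (-2 * E - 2 * xi * B) + cE * noise
    B2[i] = B @ B

PBB = kappa**4 * H**4 * np.exp(np.pi * xi) * aW**2 / (4 * np.pi**2)
print(B2[n_steps // 2:].mean(), PBB / 4)    # late-time <B_IR^2> vs analytic P_BB/4
\end{verbatim}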
\begin{figure} \centering \begin{tabular}{c} \begin{minipage}{0.5\hsize} \centering \includegraphics[width=0.8\hsize]{Figs/3D01_sph.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering \includegraphics[width=0.8\hsize]{Figs/3D05_sph.pdf} \end{minipage} \\\\ \begin{minipage}{0.5\hsize} \centering \includegraphics[width=0.8\hsize]{Figs/3D2_sph.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering \includegraphics[width=0.8\hsize]{Figs/3D10_sph.pdf} \end{minipage} \end{tabular} \caption{The 3-dimensional trajectories of the unit vectors $\hat{\bm{B}}$ (blue) and $\hat{\bm{E}}$ (orange) in the same realization as Fig.~\ref{fig: sim}. These panels show them for the duration $\Delta N$ of $0.1$ (top-left), $0.5$ (top-right), $2$ (bottom-left), and $10$ e-folds (bottom-right). $\hat{\bm{B}}$ and $\hat{\bm{E}}$ are anti-parallel and do not significantly change the direction for $\Delta N\lesssim 0.5$ as expected from Eqs.~\eqref{eq:EBcorr} and \eqref{correlation time}. For a longer time scale, however, they randomly take other directions and eventually sweep all directions, keeping $\hat{\bm{B}}\cdot\hat{\bm{E}}\simeq -1$.} \label{fig: sphere} \end{figure} In Fig.~\ref{fig: sphere}, we present the representative trajectories of the unit vectors, $\hat{\bm{B}}_\mathrm{IR}$ and $\hat{\bm{E}}_\mathrm{IR}$. Within the correlation time $t_\mathrm{c}=(2H)^{-1}$ or $N_\mathrm{c}=1/2$, they do not significantly change the direction. However, since they lose their memories of the past directions over the correlation time $t \gtrsim t_\mathrm{c}$, they are oriented in random directions and the trajectories finally sweep the entire 3-dimensional sphere. This result demonstrates that the IR electromagnetic fields continually change their directions during inflation and analysis under the approximation of static electromagnetic fields may fail to capture its interesting consequences in the present model. \section{Conclusion} \label{Conclusion} In this paper, we developed the stochastic formalism of $\mathrm{U}(1)$ gauge fields coupled to a rolling pseudo-scalar field during inflation. The derivation of the stochastic (Langevin) EoMs for $\mathrm{U}(1)$ gauge fields is analogous to that for a scalar field, while we had the following two features. First, the variances of the noise terms become constant for the {\it physical} electromagnetic fields, $\tilde{\bm E}\propto a^{-2}\bm E$ and $\tilde{\bm B}\propto a^{-2}\bm B$, in the cosmic time $t$ in the \ac{dS} limit. Second, although two different noise terms $\tilde{\bm \Xi}_E$ and $\tilde{\bm \Xi}_B$ appeared in the course of the derivation, we diagonalized them and found only one vector noise term $\tilde{\bm \Xi}$ is relevant for the IR modes. Thus, one needs a single 3-dimensional Gaussian random variable to compute the behaviors of the electromagnetic fields. This is actually in the same situation as a standard scalar field case: a noise for a scalar field $\Xi_\phi$ and one for its conjugate momentum $\Xi_\pi$ are caused by a single noise (see, e.g., Ref.~\cite{Pinol:2020cdp}). We investigated the derived stochastic EoMs in both analytic and numerical ways. We analytically found that the expected values of the electromagnetic amplitudes are constants given by their power spectra, and the electric and magnetic fields are expected to be anti-parallel. Moreover, the variance of their energy density is $5/3$ of its mean value squared, which is smaller than the kurtosis of a scalar Gaussian variable because more degrees of freedom are involved. 
Our numerical simulation demonstrated that the electromagnetic fields randomly change their directions over the correlation time scale, while keeping the anti-parallel configuration. Since this continuous change of direction of the electromagnetic fields has not been discussed in the previous works, it would be interesting to explore its implication for related phenomenology. Note that the isotropy is spontaneously broken when we pick up one particular configuration realized in a local Hubble patch. However, each Hubble patch is understood to be independent, and the expectation values are obtained by averaging over all the Hubble patches. As the direction of the gauge field is random for each Hubble patch, the isotropy is conserved in this sense. Our formalism should be carefully extended to compute the spatial distribution of $\mathrm{U}(1)$ gauge fields. If one allocates multiple IR electromagnetic fields in spatial grid positions, which independently evolve based on the stochastic EoMs, Gauss's law $\bm \nabla\cdot\tilde{\bm E}_\mathrm{IR}(t,\bm x)=0$ would be violated. Note that the constraint condition coming from the Euler–Lagrange equation for $A_0$ corresponds to Gauss's law in the present case with the temporal gauge $A_0=0$. Gauss's law is trivially satisfied at the leading order in the gradient expansion, but beyond the leading order, both the gradient terms in the EoM and the spatial correlations of noises should be consistently taken into account. That is compatible in itself with the stochastic formalism, and one can implement it in principle, though it may complicate the calculation procedure. This issue does not matter as long as the IR fields at a single spatial point are computed by neglecting their gradients. \begin{comment} \section{The ratio between $W'$ and $W$} Here we briefly evaluate the relative size between $\tilde{\mathcal{P}}_{EE}$, $\tilde{\mathcal{P}}_{BE/EB}$ and $\tilde{\mathcal{P}}_{BB}$ which is controlled by $|W'|/|W|$, \begin{align} \sqrt{\frac{\tilde{\mathcal{P}}_{EE}}{\tilde{\mathcal{P}}_{BB}}} =\frac{\tilde{\mathcal{P}}_{EB/BE}}{\tilde{\mathcal{P}}_{BB}} =\frac{|W'|}{|W|}. \end{align} The asymptotic behaviors of $W$ and $W'$ in the limit $\epsilon\to 0$ are \begin{align} \lim_{\epsilon\to0}\left|W_{-i\xi,1/2}(-2i\kappa)\right|^2 &=\frac{\sinh(\pi\xi)}{\pi\xi}, \\ \lim_{\epsilon\to0}\left|\partial_\kappa W_{-i\xi,1/2}(-2i\kappa)\right|^2 &=|i+4\xi \gamma_E+2\xi[\ln(2i\kappa)+\psi(-i\xi)]|^2 \frac{\sinh(\pi\xi)}{\pi\xi}, \end{align} where $\gamma_E$ is Euler's constant and $\psi(z)$ is the digamma function. Thus the ratio between $W'$ and $W$ is a non-trivial factor \begin{align} \frac{|W'|}{|W|}\xrightarrow{\epsilon\to 0} |i+4\xi \gamma_E+2\xi[\ln(2i\kappa)+\psi(-i\xi)]|. \end{align} Further taking the limit $\xi\to \infty$ by keeping $\epsilon$ small, one finds \begin{align} \lim_{\xi\to \infty}\Big|i+4\xi \gamma_E+2\xi[\ln(2i\kappa)+\psi(-i\xi)] \Big|^2 =4\xi^2 \big(2\gamma_E+\ln(2\epsilon)\big)^2. \end{align} Thus, the ratio between $|W'|$ and $|W|$ is roughly proportional to $\xi$ but it also has a factor depending on $\epsilon$. \begin{figure}[tbp] \begin{center} \includegraphics[width=100mm]{FigTF/WWp_ratio.pdf} \end{center} \caption {$|W'|/|\xi W|$ for $\kappa=1$ (blue), $10^{-1}$ (orange), $10^{-2}$ (green) and $10^{-3}$ (red) from bottom to top. This value corresponds to $\xi^{-1}(\tilde{\mathcal{P}}_{EE}/\tilde{\mathcal{P}}_{BB})^{1/2}$ and $\xi^{-1}\tilde{\mathcal{P}}_{EB}/\tilde{\mathcal{P}}_{BB}$. 
For $\kappa\lesssim 10^{-2}$, it becomes significantly larger than unity and that justifies ignoring the second and third terms in Eqs.~\eqref{eq: Evar} and \eqref{eq: EBvar}.} \label{O1_factor} \end{figure} In Fig.~\ref{O1_factor}, we plot the ratio between $|W'|$ and $|W|$ divided by $\xi$. One observes that this factor is not huge but significantly larger than unity for $\kappa\lesssim 10^{-2}$, which justifies the approximation used in Eqs.~\eqref{eq:EBcorr} and \eqref{eq:consistencycheck}. \end{comment} \bibliographystyle{JHEP}
\section{Introduction} \label{sec:intro} Modern text-to-speech (TTS) models can learn pronunciations from raw text input and its corresponding audio data, but in languages such as English, phonemes provide more precise pronunciation information than graphemes. As a result, many TTS systems use phonemic input during training to directly access and correct pronunciations for new vocabulary at inference time. One of the hardest problems for grapheme-to-phoneme (G2P) systems is the resolution of heteronyms, i.e., words that have a single spelling but different pronunciations. For example, \textit{``read"} in \textit{“I will read the book”} vs. \textit{“She read her project last week”}. Some heteronyms, such as \textit{``bass"}, have multiple pronunciations with the same part of speech, and they need to be disambiguated based on semantic context. In this work, we focus on the heteronym disambiguation task and propose a pipeline for labeling heteronyms in training data for both multi-stage and end-to-end (E2E) G2P models. Some multi-stage G2P systems \cite{g2pE2019, espeakng} use a set of rules for heteronym disambiguation, but high-quality rule-based systems require expert knowledge and are difficult to scale and maintain. An alternative machine learning approach for heteronym disambiguation is to treat this task as a part-of-speech tagging or a classification problem \cite{yarowsky1997homograph, gorman-etal-2018-improving}. Emerging E2E G2P systems use sentence-level training data \cite{vrezavckova2021t5g2p, ploujnikov2022soundchoice} and aim to handle out-of-vocabulary (OOV) and heteronyms in a single pass. Neural multi-stage and E2E solutions for heteronym disambiguation require labeled data where heteronyms appear in context, but unfortunately, there is a dearth of such data. Due to the domain expertise required for labeling phonemes, G2P datasets are few and far between. In datasets like TIMIT \cite{timit} and The Buckeye Speech Corpus \cite{buckeye}, phoneme transcriptions of audio are provided along with grapheme transcriptions. In TIMIT, transcriptions were human-verified, but the number of unique sentences is too small to train a G2P model. The Buckeye Speech Corpus consists of around 26 hours of conversational speech that was transcribed and phonemically labeled. Since the phoneme labels were automatically generated from the audio, the labels are noisy and sometimes contain alignment errors despite some corrections made by human research assistants, which makes the dataset more unreliable for G2P training. To our knowledge, the Wikipedia Homograph Data \cite{gorman-etal-2018-improving} (WikiHomograph) is the only open-source dataset with a sufficient number of samples to train a neural model for heteronym disambiguation. WikiHomograph is a text-only dataset where each sample is an entire sentence with a labeled heteronym. Unfortunately, this dataset does not contain a comprehensive list of English homographs. Moreover, some pronunciations in the WikiHomograph set of heteronyms are significantly underrepresented, leading to class imbalance \cite{nicolis11homograph}. For example, the corpus contains multiple sentences with the noun form of the heteronyms ``desert", ``addict" and ``subject" and no samples with the verb forms. The WikiHomograph dataset was annotated by linguists, and manual annotation remains the mainstream method of data creation. 
In addition, some preprocessing is required to train an E2E G2P model on the WikiHomograph dataset, as only the target homograph is labeled in each example sentence. \cite{ploujnikov2022soundchoice} uses CMUdict \cite{cmudict} to label known words while dropping sentences with OOV words. As a heteronym data augmentation technique, Nishiyama et al. \cite{nishiyama-etal-2018-dataset} introduced a method to match each sense of a heteronym to a synonymous word with a unique pronunciation and to substitute the heteronym for its synonym in a text corpus. This method requires a large textual database for queries, as well as expert knowledge and evaluators to confirm that the resulting sentences are correct. As the method was applied to Japanese heteronyms, there is no available data for English. Other relevant methods of heteronym resolution and verification include the morphological rewriting rules \cite{matsuoka} and the context-dependent phone-based HMMs that use acoustic features \cite{lu2008}. \cite{tatanov2022mixer} skips the phoneme representation altogether, instead passing graphemes into a language model to generate the text representation. \begin{figure*}[h] \includegraphics[width=\textwidth]{pics/pipeline.pdf} \caption{The data labeling pipeline for sentence-level G2P model training includes the following steps: 1) Input text. 2) Replace known unambiguous words with phoneme forms from the dictionary. 3) For sentences with heteronyms: generate sentences with all possible heteronym forms. 4) Score candidate pronunciations with context using the Aligner. 5) Select a sentence with the minimum score. 6) Mask remaining OOV words. } \label{fig:pipeline} \end{figure*} \begin{figure*}[t] \includegraphics[width=\textwidth]{pics/read-disamb-short.pdf} \caption{A comparison of the L2 distance matrices between the aligned text and audio embeddings when disambiguating the word \textit{``read"} from the entry: \textit{``... and therefore far pleasanter and easier to read"}. Values shown correspond to the audio frames that were aligned with each text token, and the average distance is taken across this diagonal to find the overall score for a given pronunciation; the rest of the values are disregarded. The average embedding distances for \textit{/\textipa{\*rEd}/} and \textit{/\textipa{\*rid}/} are 452.9 and 403.3, respectively. The latter one would be picked, as it is closer to the audio embedding across the aligned frames.} \label{fig:emb_distance} \end{figure*} We propose an automatic heteronym disambiguation approach that can generate examples for underrepresented or missing heteronym forms. Our proposed pipeline annotates speech data with heteronym phoneme labels automatically. The labeled sentences can then be used in conjunction with dictionary lookups for unambiguous known words and ``$<$unk$>$" tokens for OOV words to create training data for neural G2P or heteronym classification models without human labeling. To get target phonemic labels for heteronyms, we train the RAD-TTS Aligner \cite{radtts_aligner} on transcribed audio data. Then we use the Aligner to score possible heteronym pronunciation options and choose the one that matches the corresponding audio best. To evaluate the quality of generated data, we train a BERT-based classification model and an E2E ByT5 G2P model. The results show that the proposed data augmentation technique improves heteronym disambiguation accuracy for both models. 
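To make the candidate-generation step of the pipeline (Figure \ref{fig:pipeline}, detailed in the next section) concrete, the Python sketch below illustrates the dictionary lookup, OOV masking, and heteronym-form substitution; the toy dictionary entries and ARPABET strings are illustrative placeholders, not the CMUdict split or the NeMo code used in this work.
\begin{verbatim}
# Sketch: generate one candidate phoneme sequence per heteronym pronunciation.
CMU = {"i": "AY1", "will": "W IH1 L", "the": "DH AH0"}          # toy dictionary
HETERONYMS = {"read": ["R IY1 D", "R EH1 D"]}                    # toy heteronym forms

def candidates(sentence):
    """Known words from the dictionary, OOV words masked with <unk>,
    heteronyms substituted form by form (one candidate per form)."""
    tokens = sentence.lower().split()
    base = [CMU.get(t, t if t in HETERONYMS else "<unk>") for t in tokens]
    out = []
    for i, tok in enumerate(tokens):
        if tok in HETERONYMS:
            for pron in HETERONYMS[tok]:
                cand = list(base)
                cand[i] = pron
                out.append(" | ".join(cand))   # "|" marks word boundaries for readability
    return out

for c in candidates("I will read the zyzzyva"):   # "zyzzyva" is OOV -> "<unk>"
    print(c)
\end{verbatim}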
We release code\footnote{\url{https://github.com/NVIDIA/NeMo}} and all aligner-generated and hand-checked data for heteronym disambiguation model training. \section{Heteronym resolution pipeline} We propose using a RAD-TTS Aligner \cite{radtts_aligner} model to automatically select correct heteronym forms. The RAD-TTS Aligner \cite{radtts_aligner} is a speech-to-text alignment model based on the alignment mechanism introduced in RAD-TTS \cite{shih2021rad}, which allows for easy visualization and human-understandable scores when comparing candidate pronunciations. The Aligner takes a mix of graphemes and phonemes as input: phonemes for known unambiguous words and graphemes for ambiguous or OOV words. It learns to align text tokens and audio frame encodings using the $L_2$ distance between the representations, generating a soft alignment that can be converted to a hard alignment using the Viterbi algorithm. These hard alignments between text tokens and audio frames can be used in tandem with the predicted $L_2$ distance matrix in order to determine the distances between a token encoding and each of its corresponding audio frames' encodings. Thus, given a word $T$ consisting of $N$ input tokens $t_1, ..., t_N$, where token $t_i$ has been aligned with $M_i$ audio frames $a_1^{(i)}, ..., a_{M_i}^{(i)}$ out of audio $A$, the average distance, $D_{avg}$, between a word and the audio can be found as: \begin{equation} D_{avg}\big(T, A\big) = \frac{\sum\limits_{i=1}^N \sum\limits_{j=1}^{M_i} L_2(enc\_t_i, enc\_a_{j}^{(i)})}{\sum\limits_{i=1}^N M_i} \end{equation} In essence, the average distance between a word and its acoustic form is a sum of distances between its constituent tokens and their aligned audio frames, divided by the number of audio frames corresponding to the word. We can use these distances to disambiguate heteronyms with an audio sample. Figure \ref{fig:pipeline} shows the proposed automatic phoneme-labeling process for generating sentences with disambiguated heteronyms for sentence-level G2P model training. We first convert known unambiguous words to their phonetic pronunciations with dictionary lookups. This work uses the CMUdict training split defined in \cite{zhu2022byt5}. OOV words are left as graphemes. Next, we generate multiple candidates by substituting the heteronym with each possible phonemic form in the dictionary. Then, we pass each candidate along with the corresponding audio file through a trained Aligner model to automatically label heteronyms by picking the pronunciation whose phoneme encodings are closer on average to the audio encodings, i.e., smallest $D_{avg}$. Figure \ref{fig:emb_distance} shows an example of the alignments and distances for two potential pronunciations of \textit{``read"} from an entry that ends \textit{``and therefore far pleasanter and easier to read."} Using this method, we can disambiguate all known heteronyms in our speech dataset. Finally, we mask out OOV words with a special masking token, ``$<$unk$>$", and force the G2P model to produce the same masking token as a phonetic representation during training. During inference, the model generates phoneme predictions for OOV words without emitting the masking token as long as this token is not included in the grapheme input. To control the quality of the disambiguated data, we propose thresholding with a confidence score that represents how much closer the best candidate pronunciation is to the audio. 
Specifically, the score is a normalized difference between the chosen candidate's L2 distance and the least likely candidate's L2 distance. The confidence score of disambiguation is found by taking the difference between the highest and lowest L2 distances over all the candidates, then dividing it by the average of the highest and lowest L2 distances. For the example in Figure \ref{fig:emb_distance}, this would be $(452.9-403.3)/((452.9+403.3)/2) = 0.116$. The higher the score, the more likely it is for the disambiguation to be correct. We can now remove any samples with disambiguations that have confidence scores lower than the desired threshold. Once heteronym disambiguations have been performed, the sentences can then be converted to phonemes for use in sentence-level G2P training. As before, we use a dictionary lookup for known unambiguous words, and now we can replace heteronyms with the disambiguated phoneme form. Samples with OOV words can either be dropped, or OOV labels can be replaced with an ``$<$unk$>$" token for training. \section{Aligner training and dataset generation} \label{heteronym_experiments} We use the LJSpeech \cite{ljspeech17} and Hi-Fi TTS \cite{bakhturina21_interspeech} (speakers 9017 and 12787) datasets to generate G2P data with disambiguated heteronyms, and train one Aligner model per speaker. Speaker 9017's data contains 57.8 hours and its Aligner model was trained for 250 epochs, speaker 12787 contains 27.1 hours and its Aligner model was trained for 400 epochs, and the LJSpeech model was trained for 1000 epochs on 22.8 hours of data. All models were trained on a single RTX 8000 GPU using the Adam optimizer, a learning rate of 0.001, and a weight decay of 1e-6. A Cosine Annealing scheduler was used, with a minimum learning rate of 5e-5 and a warmup ratio of 0.35. For disambiguation, sentences without heteronyms were discarded. Aligner-disambiguated training sets of speakers 9017, 12787, and LJSpeech were compiled into the \textbf{Aug} set. We also created subsets of the data by filtering out samples where the Aligner confidence score was below a threshold value: \textbf{Aug-0.01\%} consists of samples with a confidence score of at least 0.01\%; similarly for thresholds of 0.02\% and 0.03\%. For each augmented subset, we created a ``balanced'' version that aims to equalize the number of occurrences of each heteronym form in the combined WikiHomograph and Aug training data to mitigate model bias (Table \ref{tab:aligner_stats}). 
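Putting the scoring and thresholding together, a minimal Python sketch of this selection step is given below. The helper names, array shapes, and distance values are illustrative and do not reproduce the RAD-TTS Aligner or NeMo implementation; the percentage thresholds quoted in the text correspond to fractions here (e.g., 0.01\% $=10^{-4}$). The first helper computes the per-word average distance $D_{avg}$ defined above from a token--frame distance matrix and a hard alignment, and the second applies the confidence score to a set of candidate pronunciations.
\begin{verbatim}
# Sketch: D_avg scoring of one word and confidence-thresholded candidate selection.
import numpy as np

def word_score(dist, frame_to_token, word_tokens):
    """Average L2 distance between a word's tokens and their aligned audio frames."""
    total, n_frames = 0.0, 0
    for tok in word_tokens:                        # token indices belonging to the word
        frames = np.where(frame_to_token == tok)[0]
        total += dist[tok, frames].sum()           # distances of frames aligned to this token
        n_frames += len(frames)
    return total / max(n_frames, 1)

def pick_pronunciation(candidate_scores, threshold=1e-4):
    """Choose the lowest-distance candidate; reject it if the confidence is too low."""
    lo, hi = min(candidate_scores.values()), max(candidate_scores.values())
    confidence = (hi - lo) / ((hi + lo) / 2)       # normalized spread of the candidates
    best = min(candidate_scores, key=candidate_scores.get)
    return (best if confidence >= threshold else None), confidence

# toy D_avg example: 6 text tokens, 40 audio frames, the word spans tokens 2-3
rng = np.random.default_rng(0)
dist = rng.uniform(300, 500, size=(6, 40))
hard_align = np.repeat(np.arange(6), [5, 7, 6, 8, 7, 7])
print(word_score(dist, hard_align, word_tokens=[2, 3]))

# the "read" example of Fig. 2 (pronunciation labels are illustrative)
print(pick_pronunciation({"/rEd/": 452.9, "/rid/": 403.3}))   # ('/rid/', ~0.116)
\end{verbatim}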
\begin{table}[] \centering \caption{Number of aligner-generated samples added depending on the confidence threshold values and balancing strategy.} \resizebox{\columnwidth}{!}{ \begin{tabular}{l|c|c|c|c} \hline \multicolumn{1}{c|}{\textbf{Threshold}} & \textbf{0.00\%} & \textbf{0.01\%} & \textbf{0.02\%} & \textbf{0.03\%} \\ \hline Num samples (bal) & 1230 & 794 & 620 & 572 \\ \hline Num samples (non bal) & 3883 & 2939 & 2286 & 1805 \\ \hline \end{tabular}} \label{tab:aligner_stats} \end{table} \section{Evaluation} \begin{table}[t] \centering \caption{True positives and false positives of each pronunciation of ``subject" as predicted by the speaker 9017 Aligner with various confidence thresholds.} \resizebox{\columnwidth}{!}{% \begin{tabular}{l|c|c|c|c|c} \hline \multirow{2}{*}{\textbf{``Subject" Eval}} & \multicolumn{2}{|c|}{\textbf{/\textipa{s@b"dZEkt}/ (v.)}} & \multicolumn{2}{c|}{\textbf{/\textipa{"s@bdZIkt}/ (adj./n.)}} & \multirow{2}{*}{\textbf{Total}} \\ \cline{2-5} & TP & FP & TP & FP & \\ \hline Threshold: 0.00\% & 1 & 30 & 48 & 0 & 79 \\ \hline Threshold: 0.01\% & 1 & 5 & 25 & 0 & 31 \\ \hline Threshold: 0.02\% & 1 & 1 & 13 & 0 & 15 \\ \hline Threshold: 0.03\% & 0 & 0 & 4 & 0 & 4 \\ \hline \end{tabular} } \label{tab:subject_disamb} \end{table} \begin{table}[t] \centering \caption{Accuracy on the heteronym disambiguation task of the BERT-based heteronym classification model on WikiHomograph and Hard evaluation sets depending on the amount and quality of the Aligner-generated augmented data.} \begin{tabular}{lccc} \hline \multicolumn{1}{l|}{\multirow{2}{*}{\textbf{Training data}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Threshold}}} & \multicolumn{2}{c}{\textbf{Accuracy, \%}} \\ \cline{3-4} \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{WikiH}} & \multicolumn{1}{c}{\textbf{Hard}} \\ \hline \multicolumn{1}{l}{WikiHomograph} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{98.70} & \multicolumn{1}{c}{86.64} \\ \hline & 0.00\% & 98.99 & 89.63 \\ + Aligner data & 0.01\% & 98.97 & 90.88 \\ (no balance) & 0.02\% & \textbf{99.07} & \textbf{91.04} \\ & 0.03\% & 98.97 & 90.09 \\ \hline & 0.00\% & 98.97 & 83.02 \\ + Aligner data & 0.01\% & 99.05 & 89.47 \\ (balance) & 0.02\% & 99.03 & 89.00 \\ & 0.03\% & 99.03 & 89.46 \\ \bottomrule \end{tabular} \label{tab:bert-cl} \end{table} \begin{table}[] \centering \caption{Evaluation of ByT5 E2E G2P model on heteronym disambiguation task (accuracy on WikiHomograph and Hard set) and on OOV (PER on CMUdict test split) depending on the Aligner-augmented data.} \begin{tabular}{lcccc} \hline \multicolumn{1}{l|}{\multirow{2}{*}{\textbf{Training data}}} & \multicolumn{1}{c|}{{\textbf{Thres-}}} & \multicolumn{2}{c|}{\textbf{Accuracy, \%}} & \multicolumn{1}{c}{\textbf{CMUdict}} \\ \cline{3-5} \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{\textbf{hold}} & \multicolumn{1}{c|}{\textbf{WikiH}} & \multicolumn{1}{c}{\textbf{Hard}} & \multicolumn{1}{|c}{\textbf{PER, \%}} \\ \hline WikiH + CMU & - & 95.42 & 79.72 & 8.62 \\ \hline & 0.00\% & 95.42 & 83.02 & 8.24 \\ + Aligner & 0.01\% & \textbf{96.10} & \textbf{85.85} & 8.97 \\ (not balanced) & 0.02\% & 95.79 & 82.08 & 8.47 \\ & 0.03\% & 95.79 & 83.02 & \textbf{8.06} \\ \bottomrule \end{tabular} \label{tab:byt5} \end{table} To assess the quality of heteronym resolution with the Aligner model, we hand-checked sentences from LJSpeech dev set, which contains 26 heteronyms. The LJSpeech Aligner model chose the grammatically correct candidate 23 times. 
However, two of the grammatically incorrect selections accurately reflected the pronunciation of the speaker. We also performed limited human evaluation of the heteronym labels derived from the Hi-Fi TTS speaker 9017 model for \textit{``read''} and \textit{``subject''}. Out of 179 occurrences of the word \textit{``read''} (87 /\textipa{\*rid}/, 92 /\textipa{\*rEd}/), the Aligner model picked the correct form 176 times (an accuracy of 98.3\%), with only three errors. However, it performs poorly on the heteronym \textit{``subject''}, which has two forms that sound similar: /\textipa{s@b"dZEkt}/ and /\textipa{"s@bdZIkt}/. This can be mitigated by confidence thresholding, as seen in Table \ref{tab:subject_disamb}. We conclude that the Aligner model is highly dependent on the enunciation and pronunciation of the speaker, and is prone to error if the audio is noisy or if the speaker mispronounces a heteronym. It also tends to have trouble with heteronyms whose forms sound similar, which confidence thresholding likewise mitigates. We also manually verified heteronyms from the dev and test sets of the selected Hi-Fi TTS speakers. We then combined these samples with some proprietary sentences to create a test set that covers most of the heteronym forms missing from the evaluation set of the WikiHomograph dataset. This dataset (hereafter \textbf{Hard-eval}) contains 195 sentences and is used to evaluate the effect of the Aug data on the G2P models' performance. To perform automatic quality estimation, we train a token classification BERT-based \cite{devlin2018bert} heteronym disambiguation model on the WikiHomograph dataset. The model takes a sentence as input, and then for every word, it selects a heteronym option out of the available dictionary forms. The model handles multiple heteronyms simultaneously. We mask irrelevant forms to disregard the model's predictions for non-ambiguous words (a minimal sketch of this masking is given below). For example, given the input ``The Poems are simple to read and easy to comprehend,'' the model scores the possible ``read present'' and ``read past'' options for the word ``read.'' We fine-tuned the model from the pre-trained ``bert-base-cased''\footnote{https://huggingface.co/bert-base-cased} checkpoint for ten epochs on a 16GB GPU with batch size 32, the AdamW optimizer, a learning rate of 5e-5, a WarmupAnnealing scheduler, and a weight decay of 0.01. Table \ref{tab:bert-cl} summarizes experiments with the BERT classification model trained on WikiHomograph data and various amounts of Aligner-generated data. The results are the averages of 3 runs. The highest accuracy on the WikiHomograph and Hard-eval sets was achieved with ``non-balanced 0.02\%'' Aligner data augmentation, 99.07\% and 91.04\%, respectively. Performance with the balanced sets is more consistent on the WikiHomograph set (99+\%) and slightly below the best result. Non-balanced data augmentation leads to better results on the Hard-eval set than balanced data augmentation, 90+\% vs. about 89\%. We hypothesize that this is because the augmented data provides more non-Wikipedia examples with a vocabulary closer to the Hard-eval set. A confidence threshold of at least 0.01\% is recommended as it provides higher-quality augmented data; note the drop on the Hard-eval set from 86.64\% to 83.02\% when no thresholding is used. The heteronym disambiguation task has a low tolerance for errors, as these errors propagate down the text-to-speech pipeline.
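The masking mentioned above can be sketched as follows; this is a minimal Python illustration in which the class inventory, the tensor shapes, and the function name are our own assumptions rather than the model's actual interface.
\begin{verbatim}
import numpy as np

def masked_form_prediction(token_logits, candidate_ids):
    # token_logits: classifier scores over the full inventory of
    # heteronym forms at the word's token position.
    # candidate_ids: indices of the forms the dictionary lists for this
    # particular heteronym; every other class is masked out.
    masked = np.full_like(token_logits, -np.inf)
    masked[candidate_ids] = token_logits[candidate_ids]
    return int(np.argmax(masked))

# Toy inventory: 0 = read (present), 1 = read (past), 2/3 = other forms.
logits = np.array([1.3, 2.1, 0.2, -0.5])
print(masked_form_prediction(logits, candidate_ids=[0, 1]))  # -> 1
\end{verbatim}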
Using higher Aligner threshold values reduces the number of augmented samples but ensures higher-quality training data. To check the validity of our sentence-level labeling pipeline on E2E G2P models, we follow \cite{vrezavckova2021t5g2p} and \cite{zhu2022byt5} and train a sentence-level ByT5 G2P model. The training data for our E2E G2P model consists of CMUdict \cite{cmudict} and WikiHomograph with various amounts of Aligner-augmented data. We used the same CMUdict split proposed in \cite{zhu2022byt5} for labeling known words and the ``$\langle$unk$\rangle$'' token for OOV words. We fine-tuned the model from the pre-trained ``google/byt5-small''\footnote{https://huggingface.co/google/byt5-small} checkpoint for five epochs on eight 16GB GPUs with batch size 8, the AdamW optimizer, a learning rate of 1e-3, a WarmupAnnealing scheduler, and a weight decay of 0.01. Experiments with the E2E ByT5 model (Table \ref{tab:byt5}) confirm the positive effect of the data augmentation while keeping the phoneme error rate (PER) on the CMUdict test split nearly the same; PER measures the generation capabilities of E2E G2P models. \section{Conclusions} In this paper, we propose a data augmentation method that can automatically disambiguate heteronyms to generate data for sentence-level G2P model training. This data labeling technique can be used to balance out existing heteronym forms in gold standard data, add new heteronyms without manual labeling, or simply create more training data, as labeled heteronym data is scarce. The proposed method is also controllable via confidence threshold filtering, depending on whether a particular application needs more data with potentially lower quality or high-confidence labels at the cost of fewer generated samples. Additionally, we introduce a masking token that opens the door to sentence-level G2P model training without human annotation. We show through human evaluation and experimentation that the resulting automatically generated data improves the performance of both BERT classification and E2E G2P systems. We hope that this method will help remedy the scarcity of labeled heteronym data, both for more robust training and for more informative evaluation. \bibliographystyle{IEEEtran}
\section*{Figures captions} Figure~\ref{fig1tem}. A typical TEM dark field image of the UFG 1570 alloy with a corresponding SAED (a), a bright field image (b) and a grain size distribution (c) Figure~\ref{fig2mech}. Engineering stress-strain curves for 1570 and 7475 alloys in UFG and coarse--grained states Figure~\ref{hp}. The Hall--Petch relation for the Al alloys: 1100~\cite{TsujiNATO2006}, Al--3\%Mg~\cite{FurukawaHoritaPMA1998} and data on the yield stresses of Al alloys: 1560~\cite{MarkushevMurashkinMSE2004}, 1570 and 7475 Figure~\ref{apt}. 3D reconstruction of an analyzed volume in the UFG 1570 alloy; (a) full data set showing a planar segregation of Mg (Al atoms are displayed in dots and Mg atoms in bubbles); (b) selected part orientated to display (311)Al atomic planes on the right of the planar segregation; (c) 2D chemical map showing the Mg concentration fluctuations within the volume; (d) concentration profile computed across the segregation (sampling volume thickness 1 nm) \newpage \begin{figure}[t] \begin{center} \caption{ } \includegraphics[angle=0, width=7.5cm]{fig1a_tem} \includegraphics[angle=0, width=7.5cm]{fig1b_tem} \includegraphics[angle=0, width=7.5cm]{fig1c_gs}% \label{fig1tem} \end{center} \end{figure} \newpage \begin{figure}[!ht] \begin{center} \caption{ } \vspace{8pt} \includegraphics[angle=0, width=12cm]{fig2mech}% \label{fig2mech} \end{center} \end{figure} \newpage \begin{figure}[!ht] \begin{center} \caption{ } \vspace{8pt} \includegraphics[angle=0, width=12cm]{fig3hp}% \label{hp} \end{center} \end{figure} \newpage \begin{figure}[!ht] \begin{center} \caption{ } \vspace{8pt} \includegraphics[angle=0, width=10cm]{fig4_v3}% \label{apt} \end{center} \end{figure} \end{document}
\section{Introduction} \label{sec:intro} Dense 3D shape acquisition of swimming humans or live fish is an important research topic for sports, biological science and other fields. Passive stereo is a common solution for capturing 3D shapes because of its simplicity; \ie, it requires only two cameras. In addition, since the shapes are recovered from a single pair of stereo images, it can capture moving or deforming objects. One severe problem with passive stereo is instability, \ie, it fails to capture objects with textureless surfaces or irregular reflection. To overcome this problem, using a pattern projector to add an artificial texture onto the objects has been proposed. In underwater environments, there are additional problems for shape reconstruction with such a system, such as refraction and disturbances caused by fluctuation and bubbles. Further, since the original textures of objects are corrupted by bubbles and projected patterns, these artifacts should be removed to recover the original texture. For the refraction issue, a depth-dependent calibration in which refraction is approximated by the lens distortion of a central projection model has been proposed~\cite{Kawasaki:WACV17}. For the disturbance issue, a convolutional neural network (CNN)-based stereo method has recently been proposed~\cite{Ichimaru:3DV2018}. However, those previous techniques still leave holes in their results when the bubbles are large, because such bubbles are irregularly shaped and partially transparent. In addition, all the experiments were conducted only with a small water tank under controlled lighting conditions. In this paper, we propose a transfer-learning-based CNN stereo method together with an efficient way to construct a bubble database for this purpose; we develop a special bubble generation device to create a bubble database containing bubbles of multiple sizes and densities. For texture recovery, we also propose a CNN-based method for projected-pattern removal and bubble cancellation. Since preparing a task-specific dataset in such an extreme environment requires great labor, we develop an unsupervised learning approach for texture recovery. Further, we develop a real system to capture a live swimming human in a pool, where lighting and other conditions are unknown and cannot be controlled. Experimental results demonstrate the effectiveness of our method by comparison with previous methods~\cite{chang2018pyramid,Ichimaru:3DV2018,mccnn}. We also conduct a demonstration showing a reconstructed sequence of a swimming human. The main contributions of the proposed technique are as follows: \begin{enumerate} \setlength{\parskip}{0cm} \setlength{\itemsep}{0cm} \item A multi-scale CNN-based stereo method with a transfer learning technique specialized for underwater environments is proposed. \item An unsupervised multi-scale CNN-based bubble and projected-pattern removal method specialized for underwater environments is proposed. \item A special bubble generation device for creating an original database containing a wide variety of bubbles for transfer learning is developed; we plan to make the database publicly available. \item The proposed technique is applied to a live swimming human to recover dynamic 3D shapes, confirming the feasibility and practicality of the method. \end{enumerate} \section{Related work} \label{sec:related} For the refraction problem, there are generally two types of solutions: a geometric approach and an approximation-based approach.
The geometric approach is based on physical models involving the refractive index, the distance to the refraction interface, and the normal of the interface~\cite{Agrawal:CVPR2012,jordt2013refractive,kawahara:ICCV2013}. Those techniques can compute the true light rays if the parameters are correctly estimated and the interface is perfectly planar; however, they are usually impractical. Further, the resulting non-central projection camera model is, in theory, not suitable for shape reconstruction. On the other hand, the approximation approach converts captured images into central projection images by adjusting the lens distortion and focal length~\cite{ferreira2005stereo}. It assumes that the focal point is moved backward so that the light paths become as close to linear as possible, and the remaining error is treated as lens distortion. This works well in most cases, but it fails in specific cases because refractive distortion depends on depth, and the effective depth range has not been thoroughly analyzed yet. Regarding light attenuation and disturbances in a water medium, light transport analysis has been conducted~\cite{Kutulakos:PAMI16,mukaigawa2010analysis}. Narasimhan \etal proposed a structured-light-based 3D scanning method for strongly scattering and absorbing media based on light transport analysis~\cite{narasimhan2005structured}. For weakly scattering media, Bleier and N\"uchter used a cross laser projector, which only achieved sparse reconstruction~\cite{bleier}. To increase density, Campos and Codina projected parallel lines with a DOE to capture underwater objects with a one-shot scan~\cite{massot2014underwater}. Kawasaki et al. proposed a grid pattern to capture denser shapes with a one-shot scan~\cite{Kawasaki:WACV17}. One drawback of those one-shot scanning techniques is that reconstruction tends to be unstable even when light attenuation and disturbances are weak, because pattern detection is highly sensitive to subtle changes of the projected pattern. Some research, such as \cite{Anwer:Access2017}, used infrared structured light or ToF sensors, but infrared attenuates rapidly in water, as shown in Fig. \ref{fig:kinect}, and is not practical. \begin{figure}[t] \begin{minipage}[b]{0.5\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{kinectimage.png}} \end{minipage} \hfill \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{kinectdepth.png}} \end{minipage} \caption{Captured RGB image and depth image by Kinect v1. Distance to the targets is 70cm.} \label{fig:kinect} \vspace{-0.0cm} \end{figure} One simple solution is to apply passive stereo, which is less affected by those effects. In the air, to increase stability, Konolige investigated how to add an active pattern to a passive stereo system~\cite{konolige}, and commercial products are also available~\cite{3DMD,realsense:sr300}. We exploit this simplicity and stability to achieve dense dynamic reconstruction. Recently, convolutional neural network (CNN)-based stereo matching has become popular. \u{Z}bontar and LeCun proposed a CNN-based method that trains a network as a cost function on image patches~\cite{mccnn}. Such techniques concentrate on recovering textureless regions rather than on noise compensation, which is the main problem for underwater stereo. Since the patch-based technique is known to be slow, Luo \etal proposed a speed-up technique that replaces the final FCN stage with an inner product~\cite{Luo:CVPR2016}. Shaked and Wolf achieved both high accuracy and fast computation by combining an FCN with the inner product~\cite{Shaked:CVPR2017}.
To fundamentally reduce the computation time, an end-to-end approach called DispNet was proposed, but its accuracy is not very high~\cite{Mayer:CVPR2016}. Another problem with patch-based CNN stereo is that it is severely affected by obstacles, image degradation, and scale variations. Recently, multi-scale CNN techniques have been proposed to address the patch-size problem. Nah \etal proposed a method for deblurring~\cite{Nah:CVPR2017}, Zhaowei \etal proposed a method for dehazing~\cite{Cai2016AUM}, Li \etal proposed a method for object recognition~\cite{li2017reside}, and Yadati \etal, Lu \etal, Chen \etal, and Ye \etal~\cite{PramodYadati:ICMVA2017,HaihuaLu2018,JiahuiChen:ICIP2016,Ye:Access2017} used multi-scale features for CNN-based stereo matching. Chang and Chen extended multi-scale CNN stereo to an end-to-end network called PSMNet~\cite{chang2018pyramid} and achieved higher accuracy, but the resolution it can handle is very limited because of its huge memory consumption. Ichimaru \etal used an FCN after multi-scale feature extraction to integrate the features effectively~\cite{Ichimaru:3DV2018}, but they used only general stereo datasets. Collecting large amounts of training data is another open problem for CNN-based stereo techniques. As a solution, Zhou \etal proposed a technique that does not use ground-truth depth data but instead uses left-right consistency as a loss function~\cite{Zhou:ICCV2017}. Tonioni \etal proposed an unsupervised method that uses an existing stereo technique as supervision~\cite{Tonioni:ICCV2017}. Tulyakov and Ivanov proposed a multi-instance learning (MIL) method using several constraints and cost functions~\cite{Tulyakov:ICCV2017}. In general, however, unsupervised learning is unstable compared to supervised learning. DispNet~\cite{Mayer:CVPR2016} and PSMNet~\cite{chang2018pyramid} are trained with images generated by computer graphics, but transfer learning with natural images is necessary since computer graphics is not realistic enough to learn noise or camera characteristics. In this research, we created an original stereo dataset and a special device for data augmentation that reproduces the underwater environment for transfer learning. CNNs are also popular in the fields of image restoration and segmentation. In underwater environments, there are several noise sources, such as bubbles or shadows of the water surface. In addition, the pattern projected onto the target object is itself a severe noise source. To remove such large-scale noise, GAN-based inpainting methods are promising~\cite{Iizuka:SIGGRAPH16,ChenyuYou2018}. However, since the resolution of generative approaches is generally low, a noise-removal approach is better suited to our purpose. For efficient noise removal, a shallow CNN-based approach using residuals has been proposed~\cite{He:CVPR2016}. Liu and Fang proposed an end-to-end architecture using the WIN5RB network~\cite{Peng:arXiv2017}, which outperforms others. We also use this technique, but the data collection and the multi-scale extension are our original contributions. Further, we propose an unsupervised learning approach to overcome the difficulty of preparing such task-specific datasets. \section{System and algorithm overview} \label{sec:overview} \subsection{System Configuration} \label{ssec:sysconf} The proposed system includes two cameras and one projector, as shown in Fig.~\ref{fig:sysconf}. We prepared two systems for the experiments. One is for evaluation purposes, where the two cameras and the projector are set outside a water tank. The other is a practical system, where the devices are installed in a waterproof housing.
For both systems, the optical axes of the cameras are set orthogonal to the housing surface so that the error of the refraction approximation is minimized. The two cameras are synchronized to capture dynamic scenes. The pattern projector that adds textures onto the objects requires no synchronization, since the pattern is static. \begin{figure}[t] \begin{minipage}[b]{0.5\linewidth} \centering \centerline{\includegraphics[width=4.3cm]{sysconf.pdf}} \end{minipage} \hfill \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\includegraphics[width=3.5cm]{IMG_0200.jpg}} \end{minipage} \caption{{\bf Left: }Minimum system configuration of the proposed algorithm. {\bf Right: }Our experimental system for evaluation, where two cameras and a projector are set outside a water tank.} \label{fig:sysconf} \vspace{-0.0cm} \end{figure} \subsection{Algorithm} \label{ssec:algo} The flow of our algorithm is shown in Fig.~\ref{fig:overview}. In the learning phase, several CNNs, namely CNN-based segmentation, CNN-based stereo matching, and CNN-based texture recovery, are trained for robustness against underwater disturbances, as shown in Fig.~\ref{fig:overview} (top). First, the CNN-based segmentation network is trained to detect the reconstruction target region. It can be trained with a large image dataset or with a small dataset created from images captured in the reconstruction phase. In our method, we manually created the mask data for learning. For CNN stereo, a stereo dataset suitable for the assumed application is prepared (\eg, fish and human images, in our case) without bubbles. We also create a special dataset that reproduces the underwater environment using a special bubble generation device for transfer learning. The CNN-based stereo matching network is efficiently trained using both datasets. The CNN-based texture recovery networks are also trained with the prepared datasets. The reconstruction process is shown in Fig.~\ref{fig:overview} (bottom). First, the camera pair is calibrated. The refractions in the captured images are modeled and canceled by a central projection approximation obtained through depth-dependent calibration~\cite{Kawasaki:WACV17}. In the measurement process, the targets are captured with the stereo cameras. Pattern illumination is projected onto the scene to add features to it. From the captured images, target regions are detected and extracted by a CNN-based segmentation technique. Then, a stereo-matching method is applied to the target regions. In our technique, CNN-based stereo is applied to increase stability under dimmed patterns, disturbances by bubbles, and flickering shadows. Then, 3D points are reconstructed from the disparity maps estimated by the stereo algorithm. Outliers are removed from the point cloud, and meshes are recovered by the Poisson reconstruction method~\cite{Kazhdan:EGSGP06}. Since textures are degraded by bubbles and projected patterns, they are recovered by the CNN-based bubble canceling and pattern removal technique. Using the recovered 3D shapes and textures, we can render the dynamic, textured 3D scene. \begin{figure}[t] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.0cm]{algorithm4_train_cropped.pdf}} \end{minipage} \hfill \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.0cm]{algorithm4_cropped.pdf}} \end{minipage} \caption{ Overview of the algorithm.
} \label{fig:overview} \vspace{-0.0cm} \end{figure} \section{Stereo reconstruction using CNNs} \label{sec:cnnstereo} To deal with bubbles and water fluctuation disturbing the images, we create a special database containing bubbles of multiple sizes and densities in a real underwater environment for transfer learning (\subsecref{dataset}). Using these data, we apply a CNN-based target region extraction technique (\subsecref{segmentation}) and multi-scale CNN stereo (\subsecref{msc}). \subsection{Underwater stereo dataset created by special bubble generation device} \label{ssec:dataset} First, we create basic training datasets for stereo with a projected pattern as follows. Since our purpose is the measurement of swimming humans and fish, a dataset that includes fish-like and human skin-like objects is necessary. Thus, we prepared models of a coelacanth, a largemouth bass, a goliath grouper, and a mannequin head and hand. We prepared two cameras and one projector, and captured the above models and some additional objects in the air with graycode techniques from two views. Then we captured the targets with the two cameras while changing the room illumination and the pattern projection condition. We captured 8 target object sets, 3 poses, 3 illumination conditions, and 4 pattern projection conditions; in total, 288 stereo image pairs were acquired. We also created ground truth disparity maps from the captured graycode images, as shown in Fig.~\ref{fig:dataset} (right). The dataset is named the ``Coel Dataset''. An example image from the dataset is shown in Fig.~\ref{fig:dataset} (left). Then, we create special training datasets for underwater stereo as follows. To effectively train the network with scenes that include bubbles, we developed a special bubble generator to reproduce the underwater environment (Fig.~\ref{fig:augmentation}). Since underwater bubbles have a wide variety, it is necessary to generate various types of bubbles. Our bubble generator can control the bubble size, density, and generating position. We used the device to augment the stereo dataset with bubbles. We placed a camera and an LCD monitor facing each other, with a water tank of $90\times45\times45$cm in between. The tank was filled with transparent water, and the bubble generator was submerged in it. Graycode patterns were presented on the monitor and captured by the camera in order to acquire 2D point correspondences between the camera image plane and the monitor pixel positions. Then, arbitrary images from publicly available datasets were displayed on the monitor and captured by the camera while bubbles were generated in the tank. We captured the dataset with 2 bubble sizes, 2 generating positions, and 2 densities, \ie, a total of 8 cases plus one no-bubble scene, as shown in Fig.~\ref{fig:datasetexample}. The images used are the Middlebury 2005 and 2006 datasets and the Coel Dataset, which together contain 918 images. Note that such underwater stereo datasets including real bubbles do not yet exist, and we will make our datasets publicly available; this is one of the important contributions of the paper. \begin{figure}[t] \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{IMG_0238.jpg}} \end{minipage} \hfill \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{IMG_0252.jpg}} \end{minipage} \caption{ {\bf Left:} Appearance of the bubble generator. {\bf Right:} Setup for bubble data augmentation.
} \label{fig:augmentation} \vspace{-0.0cm} \end{figure} \begin{figure*}[t] \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\includegraphics[width=8.45cm]{bubbleDataset1.jpg}} \end{minipage} \hfill \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\includegraphics[width=9cm]{bubbleDataset2.jpg}} \end{minipage} \caption{ Examples of the augmented dataset with various types of bubbles. } \label{fig:datasetexample} \vspace{-0.0cm} \end{figure*} \begin{figure}[t] \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{datasetimg.png}} \end{minipage} \hfill \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{datasetdisp.png}} \end{minipage} \caption{ {\bf Left:} An example of the images included in the Coel Dataset. {\bf Right:} An example of a disparity map. } \label{fig:dataset} \vspace{-0.0cm} \end{figure} \knote{ Note: we do not simply mask the input images; in the CUDA-based SGM, a constraint is added that restricts the disparity search range to the inside of the mask (ms-cnn-fcn). The end-to-end networks do not include such explicit processing, but they presumably learn that completely black regions need not produce disparities. } \subsection{CNN-based target-region extraction} \label{ssec:segmentation} For many applications, the reconstruction targets are recognizable, such as a human swimming in the water. In general, the wider the range of disparities considered in the stereo-matching process, the more matching candidates exist, leading to wrong correspondences. Thus, by extracting the target regions from the input images and excluding matches outside the target regions, the 3D reconstruction process becomes more robust. In addition, 3D points of only the required region can be obtained. For this purpose, we implemented a U-Net~\cite{ronneberger2015u}, an FCN with multi-scale feature extraction, and trained it for this task. We made the training dataset from underwater image sequences: 200 images were sampled and the target regions were masked manually. These training images were augmented by scalings, rotations, and translations. As a result, we obtained 2000 pairs of source images and target-region masks for training the U-Net. We use cross entropy as the loss function. The trained U-Net was tested on a large number of images, and we obtained qualitatively successful results in most examples, even when the camera was moving during capture (Fig.~\ref{fig:seg}). In the evaluation process, we found that the number of resolution levels of the U-Net architecture is important. Using only two or three resolution levels, we could not get sufficient results. We finally concluded that a U-Net with five resolution levels works effectively with our dataset. Using the obtained masks, the disparity search range can be limited, which improves robustness. The search range limitation is implemented in the CNN stereo matching described later, which takes mask images as input in addition to the rectified images. \begin{figure}[t] \begin{minipage}[b]{0.24\linewidth} \centering \centerline{\includegraphics[width=2.0cm]{seg_in.jpg}} \end{minipage} \hfill \begin{minipage}[b]{0.24\linewidth} \centering \centerline{\includegraphics[width=2.0cm]{seg_out.png}} \end{minipage} \hfill \begin{minipage}[b]{0.24\linewidth} \centering \centerline{\includegraphics[width=2.0cm]{seg_in2.jpg}} \end{minipage} \hfill \begin{minipage}[b]{0.24\linewidth} \centering \centerline{\includegraphics[width=2.0cm]{seg_out2.png}} \end{minipage} \caption{ An example of CNN segmentation results.
} \label{fig:seg} \vspace{-0.0cm} \end{figure} \subsection{Multi-scale CNN stereo for robustness against bubbles} \label{ssec:msc} There are several peculiar phenomena in the water, such as bubbles, shadows of the water surface, refraction caused by differences in water temperature, and so on. These phenomena degrade the captured images and adversely affect stereo matching. Since they arise from complicated physical and optical processes, they are difficult to handle analytically. In this paper, we adopt a learning-based approach, \ie, a CNN, as the solution. Since bubbles and water fluctuation have much larger structures than the pixel scale, the basic CNN-based stereo matching technique does not work~\cite{mccnn}. Several papers handle large spatial structures by a multi-scale extension~\cite{PramodYadati:ICMVA2017,HaihuaLu2018,JiahuiChen:ICIP2016} and can recognize large-scale information; however, such previous methods do not use a sufficient number of scales, or fail to fully utilize the multi-scale information because the multi-scale features are flattened. In this paper, we propose a multi-scale CNN stereo architecture that takes patches at multiple scales (more than three) and processes the retrieved features with fully convolutional networks (FCNs); this is not common for multi-scale CNN stereo. The architecture is shown in Fig.~\ref{fig:network-stereo}. It takes $44\times44$ patches (the size depends on the number of scales) from the left and right rectified images. In the matching process, the patches are first down-sampled by MaxPooling layers to prepare patches at multiple scales. Second, each patch is processed by the former FCN to retrieve multi-scale features. Then, the retrieved multi-scale features are up-sampled to the original scale, concatenated, and processed by an FCN to integrate the multi-scale information. Finally, the output tensors are vectorized, and the cosine distance between the left and right vectors is calculated to determine the similarity score. Weights are shared between the left and right branches to reduce memory consumption, because multi-scale CNN architectures tend to consume large amounts of memory. The cost volume is obtained by sweeping the input images with patches. Finally, post-processing such as Semi-Global Matching (SGM)~\cite{SGBM} and a left-right consistency check is applied to obtain the final disparity map. The post-processing is implemented on the GPU and achieves fast computation, as mentioned in the experiment section. The effectiveness of our architecture is confirmed in the experiment section (\subsecref{robust}). \begin{figure}[t] \begin{minipage}[b]{\linewidth} \centering \includegraphics[width=9.0cm]{architecture6_cropped.pdf} \end{minipage} \caption{ Network architecture of multi-scale CNN stereo. The left and right branches are symmetric. } \label{fig:network-stereo} \vspace{-0.0cm} \end{figure} \section{Unsupervised texture recovery from bubble and projected pattern} \label{sec:texture} In real situations, the captured images are often severely degraded by the underwater environment, such as bubbles and other noise, as well as by the pattern projected onto the object surface. In order to remove such undesirable effects, we propose a CNN-based solution. In our technique, we focus on two major problems: bubbles and the projected pattern. Although these two phenomena are quite different and have different optical attributes, they have in common that their appearance varies widely in scale.
Note that this variation depends on the distances between the target object, the bubbles, and the projector. Such a large variation in scale makes removal by a simple noise-removal method difficult. Since a multi-scale CNN is suitable for learning such variation, we also use a multi-scale CNN for bubble and pattern removal. The network for this obstacle removal is shown in Fig.~\ref{fig:network-texture}. As shown in the figure, the original image is converted to three different resolutions, each processed by an independent CNN. Each output is up-sampled and concatenated to the next higher resolution. This network is advantageous because it can handle the large-scale structure of the projected pattern and can be trained in a relatively short time. For the pattern-removal training phase, we also used the Coel Dataset as training data, since it contains images of the same scene both with and without the pattern. We used the images with the pattern as input and the images without the pattern as ground truth. The brightness and contrast of the ground-truth images were adjusted to fit the input images. In addition, although we need to remove the projected pattern on real fish, the Coel Dataset does not contain such images. Thus, we prepared raw fish (sea bream, filefish and chicken grunt), submerged them in the water, and captured them with and without the pattern to make a transfer dataset. The pattern removal network was trained with the transfer dataset after being trained with the Coel Dataset. For bubble-removal training, we used the bubble-augmented stereo dataset described in \subsecref{dataset}, using the no-bubble scene as ground truth. Furthermore, we consider an unsupervised learning approach to recover the correct texture even when the training dataset is insufficient. To achieve this for pattern removal, we first trained a pattern detection network to output the difference image between the images with and without the pattern. Its architecture is a duplicate of the texture recovery network shown in Fig.~\ref{fig:network-texture}. It can be trained even with an insufficient dataset because pattern detection is easier than pattern removal, or an alternative pattern detection method such as \cite{Kawasaki:WACV17} can be used if necessary. Using the pattern detection network, the loss function is defined as follows: \begin{equation} \begin{split} MSE(in, R(in)) + \lambda \times MSE(D(R(in)), \vec{0}) \nonumber \end{split} \end{equation} where $in$ denotes the input image, $R$ and $D$ denote the outputs of the pattern removal and detection networks respectively, $MSE$ denotes the mean squared error, and $\lambda$ is a balancing coefficient. The loss function drives the output toward images in which no pattern is detected, \ie, the pattern is removed, while keeping the output image close to the input image. The loss is back-propagated only to the pattern removal network. This network was also trained with the Coel Dataset and the fish transfer dataset. The effectiveness of this unsupervised learning is confirmed in the experiment section (\subsecref{texeval}). \begin{figure}[t] \begin{minipage}[b]{\linewidth} \centering \includegraphics[width=9.0cm]{architecture5_cropped.pdf} \end{minipage} \caption{ Network architecture of multi-scale CNN texture recovery. } \label{fig:network-texture} \vspace{-0.0cm} \end{figure} \section{Experiments} In order to evaluate the proposed method, we conducted several experiments. First, the robustness of the multi-scale CNN stereo against underwater disturbances is investigated in \subsecref{robust}. Then, a qualitative evaluation of the texture recovery is conducted in \subsecref{texeval}.
Finally, we captured a real swimming human sequence and reconstructed it with the proposed method to confirm its feasibility in \subsecref{demo_human}. \subsection{Evaluation of various CNN stereo techniques} \label{ssec:robust} We tested CNN-based stereo on underwater scenes with bubbles. For the evaluation, we prepared six implementations: single-scale CNN stereo of \cite{mccnn} (mc-cnn), 2-scale CNN stereo with linear combination of \cite{JiahuiChen:ICIP2016} (ms-cnn-lin), end-to-end multi-scale CNN stereo of \cite{chang2018pyramid} (PSMNet), 2-scale CNN stereo with FCN of \cite{Ichimaru:3DV2018} (ms-cnn-fcn-2), the proposed 3-scale CNN stereo with FCN (ms-cnn-fcn-3), and ms-cnn-fcn-3 transfer-learned with bubble images (ms-cnn-fcn-3 (trans)). All CNNs were implemented with TensorFlow and trained with the Middlebury 2001, 2003, 2005, 2006, and 2014 datasets and the Coel Dataset, except that ms-cnn-fcn-3 (trans) was also trained with the augmented dataset, and a pretrained model (KITTI 2015) was used for PSMNet. Post-processing such as SGM and the LR check was implemented with CUDA and processes $1024\times768$ px images (maximum disparity of 256) in 30 seconds. The target objects were placed at distances of 50, 60, and 70cm, and the depth-dependent calibration was applied. We intentionally generated bubbles to interfere with the image capturing process. We reproduced four bubble environments, \ie, a far position with few bubbles, a far position with many bubbles, a near position with few bubbles, and a near position with many bubbles. In addition, no-bubble scenes were prepared as references. We captured three pairs of images for each target in the five environments; in total, 90 images were captured. Then, we calculated disparity maps with each CNN method and reconstructed all the scenes and targets. We calculated the average RMSE from the GT shape of each target. The results are shown in Fig.~\ref{fig:rmse}. From the graph, we can confirm that the accuracy of the proposed CNN architecture is better than that of the previous methods, supporting the effectiveness of our method. Fig.~\ref{fig:CNNStereo} shows examples of the reconstructed disparity maps for each technique, confirming that shapes are recovered by our technique even if the captured images are severely degraded by bubbles. A further comparison between PSMNet~\cite{chang2018pyramid} and ours is shown in Fig.~\ref{fig:psmnet}. PSMNet had difficulty estimating the correct disparity, especially near occluding boundaries, whereas ours produced correct results. \begin{figure*}[t] \centering \centerline{\includegraphics[width=18cm]{disparity.pdf}} \caption{ Difference of disparity maps between stereo methods in a bubble scene. {\bf Left to Right: } Input image with bubbles, close-up of input, mc-cnn results~\cite{mccnn}, ms-cnn-lin results~\cite{JiahuiChen:ICIP2016}, PSMNet results~\cite{chang2018pyramid}, ms-cnn-fcn-2 results~\cite{Ichimaru:3DV2018}, ms-cnn-fcn-3 results, ms-cnn-fcn-3 (trans) results and reconstructed point cloud. } \label{fig:CNNStereo} \vspace{-0.0cm} \end{figure*} \begin{figure*}[t] \centering \centerline{\includegraphics[width=16cm]{disparity_psmnet_cropped.pdf}} \caption{ Detailed comparison of disparity maps between PSMNet and ours. {\bf Left to Right: } Input image, PSMNet results, close-up of PSMNet results, ms-cnn-fcn-3 (trans) results, and close-up of ours. } \label{fig:psmnet} \vspace{-0.4cm} \end{figure*} \begin{figure*}[t] \centerline{\includegraphics[width=18cm]{swimmingman.jpg}} \caption{Swimming human experiment.
The images are small and only the left-most one clearly shows significant bubbles, but bubbles are present in all captured images.} \label{fig:demo_human} \vspace{-0.3cm} \end{figure*} \begin{figure}[t] \centering \centerline{\includegraphics[width=9.0cm]{graph6.pdf}} \caption{Comparison of the proposed methods (blue bars) and previous methods (red bars). The proposed methods performed best in most cases. } \label{fig:rmse} \vspace{-0.0cm} \end{figure} \subsection{Experiments on texture recovery} \label{ssec:texeval} We also tested the bubble-removal and pattern-removal techniques. The results are shown in Fig.~\ref{fig:texexp}. We can confirm that projected patterns are robustly removed by the multi-scale CNN technique. The top two rows are results of supervised learning and the bottom row is a result of unsupervised learning, showing that unsupervised learning is sufficiently capable of removing the pattern. \begin{figure}[t] \centering \centerline{\includegraphics[width=9.0cm]{texture.pdf}} \caption{ Results of the texture recovery experiment. The first column is the input image, the 2nd column is a close-up of the input, the 3rd column is the output image, and the 4th column is a close-up of the output. } \label{fig:texexp} \vspace{-0.5cm} \end{figure} \subsection{Demonstration with swimming human} \label{ssec:demo_human} \begin{figure}[t] \centerline{\includegraphics[width=6cm]{newrig.jpg}} \caption{Experimental rig for swimming human capture.} \label{fig:newrig} \vspace{-0.4cm} \end{figure} Finally, we captured a swimming human in a swimming machine, where the swimmer keeps the same position against an artificially created water flow. We built a special experimental system consisting of low-cost commercial devices, such as a GoPro Hero 3+ stereo camera pair with a synchronization cable and a battery-powered laser pattern projector (Fig. \ref{fig:newrig}). We captured several swimming sequences, amounting to 3920 frames at 24 frames per second, \ie, 163 seconds in total. The reconstructed results are shown in Fig.~\ref{fig:demo_human}. In the figure, we can confirm that the 3D shape of the human is successfully reconstructed by our method, even when the body was covered in heavy bubbles. Since the optical axis of the camera is almost parallel to the human body axis in this setup, the reconstructed depth is strongly discretized; addressing this is important future work. \vspace{-0.0cm} \section{Conclusion} \label{sec:conclusion} In this paper, we propose a robust and practical underwater dense shape reconstruction method using stereo cameras with a static-pattern projector. Since underwater environments involve severe conditions, such as refraction, light attenuation and disturbances by bubbles, we propose a CNN-based solution comprising target-object segmentation and robust stereo matching with a multi-scale CNN. To acquire task-specific datasets, we created a stereo dataset and a special device for data augmentation that reproduces the underwater environment. We also propose a texture-recovery method using a CNN. Comprehensive experiments show that images with strong bubbles are robustly recovered with our method, demonstrating its effectiveness. Our future plan is to create an underwater unmanned autonomous vehicle equipped with our system. \section*{Acknowledgment} This work was supported in part by grants JSPS/KAKENHI 16H02849, 16KK0151, 18H04119, 18K19824 in Japan, and MSRA CORE14. \clearpage {\small \bibliographystyle{ieee}
\section{Introduction} \renewcommand{\theequation}{\thesection.\arabic{equation}} \indent\par A complete Riemannian manifold $(M^n,g,f)$ is called a gradient Ricci soliton if there exists a smooth function $f$ on $M^n$ such that the Ricci tensor $Ric$ of the metric $g$ satisfies the equation \begin{equation} Ric+\nabla^2f=\lambda g \end{equation} for some constant $\lambda$. For $\lambda>0$ the Ricci soliton is shrinking, for $\lambda=0$ it is steady and for $\lambda<0$ expanding. The classification of gradient Ricci solitons has been a subject of considerable interest in recent years. For four-dimensional gradient Ricci solitons, A. Naber [12] showed that a four-dimensional non-compact shrinking Ricci soliton with bounded nonnegative Riemannian curvature is a finite quotient of $\mathbb{R}^4$, $\mathbb{R}^2\times\mathbb{S}^2$ or $\mathbb{R}\times\mathbb{S}^3$. X. Chen and Y. Wang [6] classified four-dimensional anti-self-dual gradient steady and shrinking Ricci solitons. More generally, J.Y. Wu, P. Wu and W. Wylie [17] proved that a four-dimensional gradient shrinking Ricci soliton with half harmonic Weyl tensor (i.e. $divW^\pm=0$) is either Einstein or a finite quotient of $\mathbb{R}^4$, $\mathbb{R}^2\times\mathbb{S}^2$ or $\mathbb{R}\times\mathbb{S}^3$. For $n$-dimensional gradient Ricci solitons, M. Eminenti, G. La Nave and C. Mantegazza [7] proved that an $n$-dimensional compact shrinking Ricci soliton with vanishing Weyl tensor is a finite quotient of $\mathbb{S}^n$. More generally, P. Petersen and W. Wylie [14] showed that a gradient shrinking Ricci soliton with vanishing Weyl tensor is a finite quotient of $\mathbb{R}^n$, $\mathbb{S}^{n-1}\times\mathbb{R}$, or $\mathbb{S}^n$ by assuming $\int_M|Ric|^2e^{-f}<\infty$. The integral assumption was proven to be true for gradient shrinking Ricci solitons (see Theorem 1.1 of [11]). Without additional assumptions, Z. H. Zhang [18] obtained the same classification of gradient shrinking Ricci solitons with vanishing Weyl tensor. H. D. Cao and Q. Chen [1] introduced the covariant 3-tensor $D$, i.e. \begin{eqnarray*} D_{ijk}&=&\frac{1}{n-2}(R_{jk}\nabla_if-R_{ik}\nabla_jf)+\frac{1}{2(n-1)(n-2)}(g_{jk}\nabla_iR-g_{ik}\nabla_jR)\\ &&-\frac{R}{(n-1)(n-2)}(g_{jk}\nabla_if-g_{ik}\nabla_jf) \end{eqnarray*} to study the classification of locally conformally flat gradient steady solitons. The vanishing of $D$ is a crucial ingredient in their classification results. They [2] proved that a compact gradient shrinking Ricci soliton with $D=0$ is Einstein. Moreover, they showed that a four-dimensional complete non-compact Bach-flat gradient shrinking Ricci soliton is a finite quotient of $\mathbb{R}^4$ or $\mathbb{R}\times\mathbb{S}^3$. More generally, they proved that an $n$-dimensional $(n\geq5)$ complete non-compact Bach-flat gradient shrinking Ricci soliton is a finite quotient of $\mathbb{R}^n$ or $\mathbb{R}\times N^{n-1}$, where $N$ is an $(n-1)$-dimensional Einstein manifold. H. D. Cao and Q. Chen [2] proved that a Bach-flat gradient shrinking Ricci soliton has vanishing $D$, where the Bach tensor is given by $$B_{ij}=\frac{1}{n-3}\nabla_k\nabla_lW_{ikjl}+\frac{1}{n-2}R_{kl}W_{ikjl}.$$ Moreover, they showed that the $3$-tensor $D$ is closely related to the Cotton tensor, i.e. $$C_{ijk}=\nabla_iR_{jk}-\nabla_jR_{ik}-\frac{1}{2(n-1)}(g_{jk}\nabla_iR-g_{ik}\nabla_jR),$$ and the Weyl tensor, i.e.
\begin{eqnarray*} W_{ijkl}&=&R_{ijkl}-\frac{1}{n-2}(g_{ik}R_{jl}-g_{il}R_{jk}-g_{jk}R_{il}+g_{jl}R_{ik})\\ &&+\frac{R}{(n-1)(n-2)}(g_{ik}g_{jl}-g_{il}g_{jk}) \end{eqnarray*} by \[D_{ijk}=C_{ijk}+W_{ijkl}\nabla_l f.\] M. Fern\'{a}ndez-L\'{o}pez and E. Garcia-R\'{i}o [8] proved that a compact Ricci soliton is rigid if and only if it has harmonic Weyl tensor. For the complete non-compact case, O. Munteanu and N. Sesum [11] showed that a gradient shrinking Ricci soliton with harmonic Weyl tensor is rigid. In 2016, G. Catino, P. Mastrolia and D. D. Monticelli [4] proved that a gradient shrinking Ricci soliton is rigid if $div^4W=0$. In their paper, $div^4$ is defined by $div^4W=\nabla_k\nabla_j\nabla_l\nabla_iW_{ikjl}$. They showed that $div^4W=0$ if and only if $div^3C=0$, where $div^3C=\nabla_i\nabla_j\nabla_kC_{ijk}$. Then, they proved that $div^3C=0$ implies $C=0$. The rigidity result follows. S. Tachibana [15] proved that a compact orientable Riemannian manifold with $Rm>0$ and $divRm=0$ is a space of constant curvature. P. Petersen and W. Wylie [13] proved that a compact shrinking gradient Ricci soliton is Einstein if $\int_M Ric(\nabla f,\nabla f)\leq0$. They also showed that a gradient Ricci soliton is rigid if and only if it has constant scalar curvature and is radially flat. In order to state our results precisely, we introduce the following definitions for the Riemannian curvature: \[(divRm)_{ijk}:=\nabla_lR_{ijkl},\] \[(div^2Rm)_{ik}:=\nabla_j\nabla_lR_{ijkl},\] \[(div^3Rm)_i:=\nabla_k\nabla_j\nabla_lR_{ijkl},\] \[div^4Rm:=\nabla_i\nabla_k\nabla_j\nabla_lR_{ijkl}.\] For the Weyl curvature tensor, we define: \[(divW)_{ijk}:=\nabla_lW_{ijkl},\] \[(div^2W)_{ik}:=\nabla_j\nabla_lW_{ijkl},\] \[(div^3W)_i:=\nabla_k\nabla_j\nabla_lW_{ijkl},\] \[div^4W:=\nabla_i\nabla_k\nabla_j\nabla_lW_{ijkl}.\] Our main results are the following theorems for gradient shrinking Ricci solitons: \\\\\textbf{Theorem 1.1} Let $(M^n,g)$ be a gradient shrinking Ricci soliton with (1.1). If $div^4Rm=0$, then $(M^n,g)$ is rigid. \\\\\textbf{Theorem 1.2} Let $(M^n,g)$ be a gradient shrinking Ricci soliton with (1.1). If $div^3Rm(\nabla f)=0$, then $(M^n,g)$ is rigid. G. Catino, P. Mastrolia and D. D. Monticelli [4] proved that a gradient shrinking Ricci soliton with $div^4W=0$ is rigid. We will give a different proof in the Appendix (Section 8). Moreover, we have the following result: \\\\\textbf{Theorem 1.3} Let $(M^n,g)$ be a gradient shrinking Ricci soliton with (1.1). If $div^3W(\nabla f)=0$, then $(M^n,g)$ is rigid. For the $4$-dimensional case, we have the following classification theorems: \\\\\textbf{Theorem 1.4} Let $(M^4,g)$ be a $4$-dimensional gradient shrinking Ricci soliton with (1.1). If $div^4Rm=0$, then $(M^4,g)$ is either (\rmnum{1}) Einstein, or (\rmnum{2}) a finite quotient of the Gaussian shrinking soliton $\mathbb{R}^4$, $\mathbb{R}^2\times\mathbb{S}^2$ or the round cylinder $\mathbb{R}\times\mathbb{S}^3$. \\\\\textbf{Theorem 1.5} Let $(M^4,g)$ be a $4$-dimensional gradient shrinking Ricci soliton with (1.1). If $div^3Rm(\nabla f)=0$, then $(M^4,g)$ is either (\rmnum{1}) Einstein, or (\rmnum{2}) a finite quotient of the Gaussian shrinking soliton $\mathbb{R}^4$, $\mathbb{R}^2\times\mathbb{S}^2$ or the round cylinder $\mathbb{R}\times\mathbb{S}^3$. \\\\\textbf{Theorem 1.6} Let $(M^4,g)$ be a $4$-dimensional gradient shrinking Ricci soliton with (1.1).
If $div^3W(\nabla f)=0$, then $(M^4,g)$ is either (\rmnum{1}) Einstein, or (\rmnum{2}) a finite quotient of the Gaussian shrinking soliton $\mathbb{R}^4$, $\mathbb{R}^2\times\mathbb{S}^2$ or the round cylinder $\mathbb{R}\times\mathbb{S}^3$. \\\\\textbf{Remark 1.1} As will be clear from the proof, the scalar assumptions on the vanishing of $div^4Rm$, $div^3Rm(\nabla f)$, and $div^3W(\nabla f)$ in all the above theorems can be trivially relaxed to a (suitable) inequality. To be precise, Theorem 1.1 and Theorem 1.4 hold just assuming $div^4Rm\geq0$. Under the condition of $div^3Rm(\nabla f)\geq0$, Theorem 1.2 and Theorem 1.5 still hold. Moreover, Theorem 1.3 and Theorem 1.6 hold for $div^3W(\nabla f)\geq0$. The rest of this paper is organized as follows. In Section 2, we recall some background material and prove some formulas which will be needed in the proof of the main theorems. In Section 3, we will prove that a compact gradient shrinking Ricci soliton with $div^4Rm=0$ is Einstein. The proof makes use of a rigidity theorem obtained by P. Petersen and W. Wylie [13]. In Section 4, we will deal with the complete noncompact case of Theorem 1.1. In Section 5, we give a direct proof of Theorem 1.2. We first prove divergence formulas of the Weyl tensor in Section 6, then we will prove Theorem 1.3. Finally, in Section 7 we will finish the proof of Theorems 1.4 to 1.6. \section{Preliminaries} \renewcommand{\theequation}{\thesection.\arabic{equation}} \indent\par First of all, we present some basic facts about gradient shrinking Ricci solitons. \\\\\textbf{Proposition 2.1} ([7,10,11,14]) Let $(M^n,g)$ be a gradient shrinking Ricci soliton with (1.1). Then we have the following identities. \begin{equation} \nabla_lR_{ijkl}=\nabla_jR_{ik}-\nabla_iR_{jk}, \end{equation} \begin{equation} \nabla R=2divRic, \end{equation} \begin{equation} R_{ijkl}\nabla_lf=\nabla_lR_{ijkl}, \end{equation} \begin{equation} \nabla_l(R_{ijkl}e^{-f})=0, \end{equation} \begin{equation} R_{jl}\nabla_lf=\nabla_lR_{jl}, \end{equation} \begin{equation} \nabla_l(R_{jl}e^{-f})=0, \end{equation} \begin{equation} \nabla R=2Ric(\nabla f,\cdot), \end{equation} \begin{equation} \Delta_fR_{ik}=2\lambda R_{ik}-2R_{ijkl}R_{jl}, \end{equation} \begin{equation} \Delta_fR=2\lambda R-2|Ric|^2, \end{equation} where $\Delta_f:=\Delta-\nabla_{\nabla f}$, \begin{equation} \Delta_f|Ric|^2=4\lambda|Ric|^2-4Rm(Ric,Ric)+2|\nabla Ric|^2, \end{equation} where $Rm(Ric,Ric)=R_{ijkl}R_{ik}R_{jl}$, and \begin{equation} R+|\nabla f|^2-2\lambda f=Const. \end{equation} Next we prove the following formulas for gradient shrinking Ricci solitons with (1.1). \\\\\textbf{Proposition 2.2} Let $(M^n,g)$ be a gradient shrinking Ricci soliton with (1.1). Then we have the following identities. \begin{equation} (div^2Rm)_{ik}=2\lambda R_{ik}+\nabla_lR_{ik}\nabla_lf-\frac{1}{2}\nabla_i\nabla_kR-R_{ik}^2-R_{ijkl}R_{jl}, \end{equation} \begin{equation} (div^3Rm)_{i}=-R_{ijkl}\nabla_kR_{jl}, \end{equation} and \begin{equation} div^4Rm=\nabla_lR_{jk}\nabla_kR_{jl}-|\nabla Ric|^2-R_{ijkl}\nabla_i\nabla_kR_{jl}.
\end{equation} \\\\\textbf{Proof.} By direct computation, \begin{eqnarray*} (div^2Rm)_{ik}&=&\nabla_j\nabla_lR_{ijkl}\notag\\ &=&\Delta R_{ik}-\nabla_j\nabla_iR_{jk}\notag\\ &=&\Delta_fR_{ik}+\nabla_lR_{ik}\nabla_lf-\nabla_i\nabla_jR_{jk}+R_{ijkl}R_{jl}-R_{ik}^2\notag\\ &=&2\lambda R_{ik}-2R_{ijkl}R_{jl}+\nabla_lR_{ik}\nabla_lf-\frac{1}{2}\nabla_i\nabla_kR+R_{ijkl}R_{jl}-R_{ik}^2\notag\\ &=&2\lambda R_{ik}-R_{ijkl}R_{jl}+\nabla_lR_{ik}\nabla_lf-\frac{1}{2}\nabla_i\nabla_kR-R_{ik}^2, \end{eqnarray*} where we used (2.2) in the second equality. Moreover, we used (2.3) and (2.9) in the fourth equality. Using (2.13), we have \begin{eqnarray*} (div^3Rm)_{i}&=&\nabla_k\nabla_j\nabla_lR_{ijkl}\notag\\ &=&\nabla_k(2\lambda R_{ik}-R_{ijkl}R_{jl}+\nabla_lR_{ik}\nabla_lf-\frac{1}{2}\nabla_i\nabla_kR-R_{ik}^2)\notag\\ &=&\lambda\nabla_iR-\nabla_kR_{ijkl}R_{jl}-R_{ijkl}\nabla_kR_{jl}+\nabla_lR_{ik}\nabla_k\nabla_lf+\nabla_k\nabla_lR_{ik}\nabla_lf\notag\\ &&-\frac{1}{2}\nabla_k\nabla_i\nabla_kR-R_{ij}\nabla_kR_{kj}-R_{kj}\nabla_kR_{ij}\notag\\ &=&\lambda\nabla_iR+(\nabla_jR_{il}-\nabla_iR_{jl})R_{jl}-R_{ijkl}\nabla_kR_{jl}+\nabla_lR_{ik}(\lambda g_{kl}-R_{kl})\notag\\ &&+(\nabla_l\nabla_kR_{ik}+R_{lj}R_{ij}+R_{klij}R_{jk})\nabla_lf-\frac{1}{2}\nabla_i\Delta_fR-\frac{1}{2}\nabla_i(\nabla_kR\nabla_kf)\notag\\ &&-\frac{1}{2}R_{ij}\nabla_jR-\frac{1}{2}R_{ij}\nabla_jR-R_{kj}\nabla_kR_{ij}\notag\\ &=&\lambda\nabla_iR+R_{jl}\nabla_jR_{il}-\frac{1}{2}\nabla_i|Ric|^2-R_{ijkl}\nabla_kR_{jl}+\frac{\lambda}{2}\nabla_iR\notag\\ &&-R_{kl}\nabla_lR_{ik}+\frac{1}{2}\nabla_l\nabla_iR\nabla_lf+\frac{1}{2}R_{ij}\nabla_jR+R_{jk}\nabla_lR_{ijkl}\notag\\ &&-\lambda\nabla_iR+\nabla_i|Ric|^2-\frac{1}{2}\nabla_i\nabla_lR\nabla_lf-\frac{1}{2}\nabla_lR\nabla_i\nabla_lf\notag\\ &&-R_{ij}\nabla_jR-R_{kj}\nabla_kR_{ij}\notag\\ &=&\frac{1}{2}\nabla_i|Ric|^2-R_{ijkl}\nabla_kR_{jl}+\frac{\lambda}{2}\nabla_iR-\frac{1}{2}R_{ik}\nabla_kR+R_{jk}\nabla_jR_{ik}\notag\\ &&-\frac{1}{2}\nabla_i|Ric|^2-\frac{\lambda}{2}\nabla_iR+\frac{1}{2}R_{il}\nabla_lR-R_{kj}\nabla_kR_{ij}\notag\\ &=&-R_{ijkl}\nabla_kR_{jl}, \end{eqnarray*} where we used (2.3) in the third equality, used (2.2) and (1.1) in the fourth equality. Moreover, we used (2.4), (2.8) and (2.10) in the fifth equality. In the sixth equality, we used (1.1) and (2.2). It follows from (2.14) that \begin{eqnarray*} div^4Rm&=&\nabla_i\nabla_k\nabla_j\nabla_lR_{ijkl}\notag\\ &=&-\nabla_iR_{ijkl}\nabla_kR_{jl}-R_{ijkl}\nabla_i\nabla_kR_{jl}\notag\\ &=&(\nabla_lR_{jk}-\nabla_kR_{jl})\nabla_kR_{jl}-R_{ijkl}\nabla_i\nabla_kR_{jl}\notag\\ &=&\nabla_lR_{jk}\nabla_kR_{jl}-|\nabla Ric|^2-R_{ijkl}\nabla_i\nabla_kR_{jl}, \end{eqnarray*} where we used (2.2) in the third equality.$\hfill\Box$ \\\\\textbf{Remark 2.1} It follows from (2.13) that $div^2Rm$ is a symmetric 2-tensor. Therefore, we have the following identities. \[(div^2Rm)_{ik}=\nabla_j\nabla_lR_{ijkl}=\nabla_l\nabla_jR_{ijkl},\] \[(div^3Rm)_{i}=\nabla_k\nabla_j\nabla_lR_{ijkl}=\nabla_k\nabla_j\nabla_lR_{kjil}=\nabla_k\nabla_l\nabla_jR_{ijkl}=\nabla_k\nabla_l\nabla_jR_{kjil},\] and \[div^4Rm=\nabla_i\nabla_k\nabla_j\nabla_lR_{ijkl}=\nabla_i\nabla_k\nabla_l\nabla_jR_{ijkl}=\nabla_k\nabla_i\nabla_j\nabla_lR_{ijkl}=\nabla_k\nabla_i\nabla_l\nabla_jR_{ijkl}.\] Finally, we list following results that will be needed in the proof of the main theorems. \\\\\textbf{Lemma 2.1} Let $(M^n,g)$ be a complete gradient shrinking soliton with (1.1). Then it has nonnegative scalar curvature $R\geq0$. 
\\\\\textbf{Remark 2.2} Lemma 2.1 is a special case of a more general result of B. L. Chen [5] which states that $R\geq0$ for any ancient solution to the Ricci flow. \\\\\textbf{Lemma 2.2} (H. D. Cao and D. Zhou [3]) Let $(M^n,g)$ be a complete gradient shrinking soliton with (1.1). Then, (\rmnum{1}) the potential function $f$ satisfies the estimates \begin{equation} \frac{1}{4}(r(x)-c_1)^2\leq f(x)\leq \frac{1}{4}(r(x)+c_2)^2, \end{equation} where $r(x)=d(x_0,x)$ is the distance function from some fixed point $x_0\in M$, and $c_1$ and $c_2$ are positive constants depending only on $n$ and the geometry of $g$ on the unit ball $B(x_0,1)$; (\rmnum{2}) there exists some constant $C>0$ such that \begin{equation} Vol(B(x_0,s))\leq Cs^n \end{equation} for $s>0$ sufficiently large. \\\\\textbf{Lemma 2.3} (P. Petersen and W. Wylie [13]) A shrinking compact gradient soliton is rigid with trivial $f$ if \begin{equation} \int_MRic(\nabla f,\nabla f)\leq0. \end{equation} \\\\\textbf{Lemma 2.4} (P. Petersen and W. Wylie [13]) A gradient soliton is rigid if and only if it has constant scalar curvature and is radially flat, that is, $sec(E,\nabla f)=0$. \\\\\textbf{Remark 2.3} The condition of $divRm=0$ is stronger than $sec(E,\nabla f)=0$. \\\\\textbf{Lemma 2.5} (O. Munteanu and N. Sesum [11]) For any complete gradient shrinking Ricci soliton with (1.1), we have \begin{equation} \int_M|Ric|^2e^{-\alpha f}<+\infty \end{equation} for any $\alpha>0$. \\\\\textbf{Lemma 2.6} (O. Munteanu and N. Sesum [11]) Let $(M, g)$ be a gradient shrinking Ricci soliton. If for some $\beta<1$ we have $\int_M|Rm|^2e^{-\beta f}<+\infty$, then the following identity holds. \begin{equation} \int_M|div Rm|^2e^{-f}=\int_M|\nabla Ric|^2e^{-f}<+\infty. \end{equation} \section{The Compact Case of Theorem 1.1} \renewcommand{\theequation}{\thesection.\arabic{equation}} \indent\par In this section, we prove the compact case of Theorem 1.1: \\\\\textbf{Theorem 3.1} Let $(M^n,g)$ be a compact gradient shrinking Ricci soliton with (1.1). If $div^4Rm=0$, then $(M^n,g)$ is Einstein. The first step in proving Theorem 3.1 is to obtain the following integral equation. \\\\\textbf{Lemma 3.1} Let $(M^n,g)$ be a compact gradient shrinking Ricci soliton with (1.1), then \begin{equation} \int_M\nabla_lR_{jk}\nabla_kR_{jl}e^{-f}=\frac{1}{2}\int_M|\nabla Ric|^2e^{-f}. \end{equation} \\\\\textbf{Proof.} Calculating directly, we have \begin{eqnarray} &&\int_M\nabla_lR_{jk}\nabla_kR_{jl}e^{-f}\notag\\ &=&-\int_MR_{jk}\nabla_l\nabla_kR_{jl}e^{-f}+\int_MR_{jk}\nabla_kR_{jl}\nabla_lfe^{-f}\notag\\ &=&-\int_MR_{jk}(\nabla_k\nabla_lR_{jl}+R_{jp}R_{pk}+R_{lkji}R_{il})e^{-f}\notag\\ &&+\int_MR_{jk}\nabla_k(R_{jl}\nabla_lf)e^{-f}-\int_MR_{jk}R_{jl}\nabla_k\nabla_lfe^{-f}\notag\\ &=&-\int_MR_{jk}(R_{jp}R_{pk}+R_{lkji}R_{il})e^{-f}-\int_MR_{jk}R_{jl}(\lambda g_{kl}-R_{kl})e^{-f}\notag\\ &=&-\int_MtrRic^3e^{-f}+\int_MRm(Ric,Ric)e^{-f}-\lambda\int_M|Ric|^2e^{-f}+\int_MtrRic^3e^{-f}\notag\\ &=&\int_MRm(Ric,Ric)e^{-f}-\lambda\int_M|Ric|^2e^{-f}, \end{eqnarray} where we used (2.6) and (1.1) in the third equality.
Applying (2.11) to (3.22), we obtain \begin{eqnarray*} &&\int_M\nabla_lR_{jk}\nabla_kR_{jl}e^{-f}\notag\\ &=&-\frac{1}{4}\int_M\Delta_f|Ric|^2e^{-f}+\frac{1}{2}\int_M|\nabla Ric|^2e^{-f}\notag\\ &=&-\frac{1}{4}\int_M(\Delta|Ric|^2-\nabla_{\nabla f}|Ric|^2)e^{-f}+\frac{1}{2}\int_M|\nabla Ric|^2e^{-f}\notag\\ &=&-\frac{1}{4}\int_M\nabla_{\nabla f}|Ric|^2e^{-f}+\frac{1}{4}\int_M\nabla_{\nabla f}|Ric|^2e^{-f}+\frac{1}{2}\int_M|\nabla Ric|^2e^{-f}\notag\\ &=&\frac{1}{2}\int_M|\nabla Ric|^2e^{-f}. \end{eqnarray*} $\hfill\Box$ Now we are ready to prove Theorem 3.1. \\\\\textbf{Proof of Theorem 3.1:} Integrating (2.15), we obtain \begin{eqnarray} &&\int_Mdiv^4Rme^{-f}\notag\\ &=&\int_M\nabla_lR_{jk}\nabla_kR_{jl}e^{-f}-\int_M|\nabla Ric|^2e^{-f}-\int_MR_{ijkl}\nabla_i\nabla_kR_{jl}e^{-f}\notag\\ &=&\frac{1}{2}\int_M|\nabla Ric|^2e^{-f}-\int_M|\nabla Ric|^2e^{-f}\notag\\ &=&-\frac{1}{2}\int_M|\nabla Ric|^2e^{-f}, \end{eqnarray} where we used Lemma 3.1 and (2.5) in the second equality. Since $div^4Rm=0$, it follows from (3.23) that $\int_M|\nabla Ric|^2e^{-f}=0$, i.e., $|\nabla Ric|=0$ a.e. Since any gradient shrinking Ricci soliton is analytic in harmonic coordinates, we have $|\nabla Ric|=0$ on $M$. By direct computation, we have \[0\leq|\nabla Ric-\frac{\nabla R}{n}g|^2=|\nabla Ric|^2-\frac{|\nabla R|^2}{n}=-\frac{|\nabla R|^2}{n}\leq0.\] Therefore, $R$ is a constant on $M$. It follows from (2.8) that $Ric(\nabla f,\nabla f)=\frac{1}{2}\langle\nabla R,\nabla f\rangle=0$. By Lemma 2.3, $(M^n,g)$ is rigid. The compactness of $(M^n,g)$ implies that $(M^n,g)$ is Einstein.$\hfill\Box$ \section{The Complete Non-compact Case of Theorem 1.1} \renewcommand{\theequation}{\thesection.\arabic{equation}} \indent\par In this section, we prove the complete non-compact case of Theorem 1.1: \\\\\textbf{Theorem 4.1} Let $(M^n,g)$ be a complete non-compact gradient shrinking Ricci soliton with (1.1). If $div^4Rm=0$, then $(M^n,g)$ is rigid. The first step in proving Theorem 4.1 is to obtain the following integral inequality. \\\\\textbf{Lemma 4.1} Let $(M^n,g)$ be a complete non-compact gradient shrinking Ricci soliton with (1.1). For every $C^2$ function $\phi:\mathbb{R}_+\rightarrow\mathbb{R}$ with $\phi(f)$ having compact support in $M$ and some constant $c>0$, we have \begin{equation} \int_M\nabla_lR_{jk}\nabla_kR_{jl}\phi^2(f)e^{-f}\leq c\int_M|Ric|^2|\nabla f|^2(\phi')^2e^{-f}+\frac{3}{4}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}. 
\end{equation} \\\\\textbf{Proof.} By direct computation, we have \begin{eqnarray} &&\int_M\nabla_lR_{jk}\nabla_kR_{jl}\phi^2(f)e^{-f}\notag\\ &=&-\int_MR_{jk}\nabla_l\nabla_kR_{jl}\phi^2(f)e^{-f}-\int_MR_{jk}\nabla_kR_{jl}\nabla_l\phi^2(f)e^{-f}+\int_MR_{jk}\nabla_kR_{jl}\nabla_lf\phi^2(f)e^{-f}\notag\\ &=&-\int_MR_{jk}(\nabla_k\nabla_lR_{jl}+R_{jp}R_{pk}+R_{lkji}R_{il})\phi^2(f)e^{-f}-2\int_MR_{jk}\nabla_kR_{jl}\nabla_lf\phi\phi'e^{-f}\notag\\ &&+\int_MR_{jk}\nabla_k(R_{jl}\nabla_lf)\phi^2(f)e^{-f}-\int_MR_{jk}R_{jl}\nabla_k\nabla_lf\phi^2(f)e^{-f}\notag\\ &=&-\int_MR_{jk}(R_{jp}R_{pk}+R_{lkji}R_{il})\phi^2(f)e^{-f}-2\int_MR_{jk}\nabla_kR_{jl}\nabla_lf\phi\phi'e^{-f}\notag\\ &&-\int_MR_{jk}R_{jl}(\lambda g_{kl}-R_{kl})\phi^2(f)e^{-f}\notag\\ &=&-\int_MtrRic^3\phi^2(f)e^{-f}+\int_MRm(Ric,Ric)\phi^2(f)e^{-f}-2\int_MR_{jk}\nabla_kR_{jl}\nabla_lf\phi\phi'e^{-f}\notag\\ &&-\lambda\int_M|Ric|^2\phi^2(f)e^{-f}+\int_MtrRic^3\phi^2(f)e^{-f}\notag\\ &=&\int_MRm(Ric,Ric)\phi^2(f)e^{-f}-2\int_MR_{jk}\nabla_kR_{jl}\nabla_lf\phi\phi'e^{-f}-\lambda\int_M|Ric|^2\phi^2(f)e^{-f},\notag\\ \end{eqnarray} where we used (2.7) and (1.1) in the third equality. Applying (2.11) to (4.25), we obtain \begin{eqnarray*} &&\int_M\nabla_lR_{jk}\nabla_kR_{jl}\phi^2(f)e^{-f}\notag\\ &=&-2\int_MR_{jk}\nabla_kR_{jl}\nabla_lf\phi\phi'e^{-f}-\frac{1}{4}\int_M\Delta_f|Ric|^2\phi^2(f)e^{-f}+\frac{1}{2}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}\notag\\ &=&-2\int_MR_{jk}\nabla_kR_{jl}\nabla_lf\phi\phi'e^{-f}-\frac{1}{4}\int_M\Delta|Ric|^2\phi^2(f)e^{-f}\notag\\ &&+\frac{1}{4}\int_M\nabla_{\nabla f}|Ric|^2\phi^2(f)e^{-f}+\frac{1}{2}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}\notag\\ &=&-2\int_MR_{jk}\nabla_kR_{jl}\nabla_lf\phi\phi'e^{-f}+\frac{1}{4}\int_M\langle\nabla|Ric|^2,\nabla\phi^2(f)\rangle e^{-f}\notag\\ &&-\frac{1}{4}\int_M\nabla_{\nabla f}|Ric|^2\phi^2(f)e^{-f}+\frac{1}{4}\int_M\nabla_{\nabla f}|Ric|^2\phi^2(f)e^{-f}+\frac{1}{2}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}\notag\\ &=&-2\int_MR_{jk}\nabla_kR_{jl}\nabla_lf\phi\phi'e^{-f}+\int_MR_{ik}\nabla_lR_{ik}\nabla_lf\phi\phi'e^{-f}\notag\\ &&+\frac{1}{2}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}\notag\\ &\leq&c\int_M|Ric||\nabla f||\nabla Ric||\phi||\phi'|e^{-f}+\frac{1}{2}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}\notag\\ &\leq&c\int_M|Ric|^2|\nabla f|^2(\phi')^2e^{-f}+\frac{3}{4}\int_M|\nabla Ric|^2\phi^2(f)e^{-f} \end{eqnarray*} for some constant $c>0$.$\hfill\Box$ \\\\\textbf{Lemma 4.2} Let $(M^n,g)$ be a complete non-compact gradient shrinking Ricci soliton with (1.1). For every $C^2$ function $\varphi:\mathbb{R_+}\rightarrow\mathbb{R}$ with $\varphi(f)$ having compact support in $M$, we have \begin{equation} -\int_MR_{ijkl}\nabla_i\nabla_kR_{jl}\varphi(f)e^{-f}=\int_M(|\nabla Ric|^2-\nabla_lR_{kj}\nabla_kR_{jl})\varphi'e^{-f}. \end{equation} \\\\\textbf{Proof.} By direct computation, we have \begin{eqnarray*} -\int_MR_{ijkl}\nabla_i\nabla_kR_{jl}\varphi(f)e^{-f}&=&\int_MR_{ijkl}\nabla_kR_{jl}\varphi'\nabla_ife^{-f}\notag\\ &=&\int_M\nabla_iR_{ijkl}\nabla_kR_{jl}\varphi'e^{-f}\notag\\ &=&\int_M(\nabla_kR_{jl}-\nabla_lR_{kj})\nabla_kR_{jl}\varphi'e^{-f}\notag\\ &=&\int_M(|\nabla Ric|^2-\nabla_lR_{kj}\nabla_kR_{jl})\varphi'e^{-f}, \end{eqnarray*} where we used (2.5), (2.4) and (2.2) in the first, second and third equality, respectively.$\hfill\Box$ Now we are ready to prove Theorem 4.1. 
\\\\\textbf{Proof of Theorem 4.1:} Let $\phi:\mathbb{R}_+\rightarrow\mathbb{R}$ be a $C^2$ function with $\phi=1$ on $(0,s]$, $\phi=0$ on $[2s,\infty)$ and $-\frac{c}{t}\leq\phi'(t)\leq0$ on $(s,2s)$ for some constant $c>0$. Define $D(r):=\{x\in M|f(x)\leq r\}$. By Lemma 4.2, we have \begin{eqnarray} -\int_MR_{ijkl}\nabla_i\nabla_kR_{jl}\phi^2(f)e^{-f}&=&\int_M(|\nabla Ric|^2-\nabla_lR_{kj}\nabla_kR_{jl})(\phi^2)'e^{-f}\notag\\ &=&2\int_M(|\nabla Ric|^2-\nabla_lR_{kj}\nabla_kR_{jl})\phi\phi'e^{-f}\notag\\ &\leq&0. \end{eqnarray} Integrating (2.15) and using Lemma 4.1 and (4.27), we have \begin{eqnarray} &&\int_Mdiv^4Rm\phi^2(f)e^{-f}\notag\\ &=&\int_M\nabla_lR_{jk}\nabla_kR_{jl}\phi^2(f)e^{-f}-\int_M|\nabla Ric|^2\phi^2(f)e^{-f}-\int_MR_{ijkl}\nabla_i\nabla_kR_{jl}\phi^2(f)e^{-f}\notag\\ &\leq&c\int_M|Ric|^2|\nabla f|^2(\phi')^2e^{-f}+\frac{3}{4}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}-\int_M|\nabla Ric|^2\phi^2(f)e^{-f}\notag\\ &\leq&\frac{c}{s^2}\int_{D(2s)\backslash D(s)}|Ric|^2|\nabla f|^2e^{-f}-\frac{1}{4}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}. \end{eqnarray} It follows from Lemma 2.1, (2.12), (2.16) and Lemma 2.5 that \[\int_M|Ric|^2|\nabla f|^2e^{-f}\leq\int_M|Ric|^2e^{-\alpha f}<+\infty\] for some $\alpha\in(0,1]$. Therefore, $$\frac{c}{s^2}\int_{D(2s)\backslash D(s)}|Ric|^2|\nabla f|^2e^{-f}\rightarrow0$$ as $s\rightarrow+\infty$. By taking $s\rightarrow+\infty$ in (4.28), we obtain $\int_M|\nabla Ric|^2e^{-f}=0$. Since $\int_M|\nabla Ric|^2e^{-f}<+\infty$, it follows from (2.20) that \[\int_M|divRm|^2e^{-f}=\int_M|\nabla Ric|^2e^{-f}=0.\] Hence, $|divRm|=|\nabla Ric|=0$ a.e. Since any gradient shrinking Ricci soliton is analytic in harmonic coordinates, we have $|divRm|=|\nabla Ric|=0$ on $M$. It is clear that $divRm=0$ implies that $M^n$ is radially flat. By direct computation, we have \[0\leq|\nabla Ric-\frac{\nabla R}{n}g|^2=|\nabla Ric|^2-\frac{|\nabla R|^2}{n}=-\frac{|\nabla R|^2}{n}\leq0.\] Therefore, $R$ is a constant on $M$. Since $M^n$ is radially flat and has constant scalar curvature, it follows from Lemma 2.4 that $(M^n,g)$ is rigid.$\hfill\Box$ Theorem 1.1 follows by combining Theorem 3.1 and Theorem 4.1. \section{The proof of Theorem 1.2} \renewcommand{\theequation}{\thesection.\arabic{equation}} \indent\par In this section, we give a direct proof of Theorem 1.2. \\\\\textbf{Theorem 5.1} Let $(M^n,g)$ be a gradient shrinking Ricci soliton with (1.1). If $div^3Rm(\nabla f)=0$, then $(M^n,g)$ is rigid. \\\\\textbf{Proof.} From (2.14), we have \begin{eqnarray*} div^3Rm(\nabla f)&=&\nabla_k\nabla_j\nabla_lR_{ijkl}\nabla_if\notag\\ &=&-R_{ijkl}\nabla_kR_{jl}\nabla_if\notag\\ &=&\frac{1}{2}(\nabla_iR_{ijkl})(\nabla_lR_{jk}-\nabla_kR_{jl})\notag\\ &=&-\frac{1}{2}|divRm|^2, \end{eqnarray*} where we used (2.4) in the third equality and (2.2) in the last. Since $div^3Rm(\nabla f)=0$, $divRm=0$. It follows that $M$ is radially flat. Moreover, we have \[\nabla_iR=2\nabla_lR_{il}=-2g^{jk}\nabla_lR_{ijkl}=0,\] i.e., $R$ is a constant on $M$. Since $M^n$ is radially flat and has constant scalar curvature, it follows from Lemma 2.4 that $(M^n,g)$ is rigid.$\hfill\Box$ \section{Under the condition of Weyl tensor} \renewcommand{\theequation}{\thesection.\arabic{equation}} \indent\par In this section, we prove Theorem 1.3. The first step is to obtain the following formulas. \\\\\textbf{Proposition 6.1} Let $(M^n,g)$ $(n\geq3)$ be a gradient shrinking Ricci soliton with (1.1). Then we have the following identities. 
\begin{equation} (divW)_{ijk}=\frac{n-3}{n-2}(divRm)_{ijk}-\frac{n-3}{2(n-1)(n-2)}(g_{ik}\nabla_jR-g_{jk}\nabla_iR), \end{equation} \begin{equation} (div^2W)_{ik}=\frac{n-3}{n-2}(div^2Rm)_{ik}-\frac{n-3}{2(n-1)(n-2)}(g_{ik}\Delta R-\nabla_k\nabla_iR), \end{equation} \begin{equation} (div^3W)_{i}=\frac{n-3}{n-2}(div^3Rm)_{i}+\frac{n-3}{2(n-1)(n-2)}R_{ik}\nabla_kR, \end{equation} and \begin{equation} div^4W=\frac{n-3}{n-2}div^4Rm+\frac{n-3}{2(n-1)(n-2)}(\frac{1}{2}|\nabla R|^2+R_{ik}\nabla_i\nabla_kR). \end{equation} \\\\\textbf{Proof.} By direct computation, \begin{eqnarray*} (divW)_{ijk}&=&\nabla_lW_{ijkl}\notag\\ &=&\nabla_lR_{ijkl}-\frac{1}{n-2}(g_{ik}\nabla_lR_{jl}-\nabla_iR_{jk}-g_{jk}\nabla_lR_{il}+\nabla_jR_{ik})\notag\\ &&+\frac{1}{(n-1)(n-2)}(g_{ik}\nabla_jR-g_{jk}\nabla_iR)\notag\\ &=&\nabla_lR_{ijkl}-\frac{1}{n-2}\nabla_lR_{ijkl}\notag\\ &&-\frac{1}{2(n-2)}(g_{ik}\nabla_jR-g_{jk}\nabla_iR)+\frac{1}{(n-1)(n-2)}(g_{ik}\nabla_jR-g_{jk}\nabla_iR)\notag\\ &=&\frac{n-3}{n-2}\nabla_lR_{ijkl}-\frac{n-3}{2(n-1)(n-2)}(g_{ik}\nabla_jR-g_{jk}\nabla_iR), \end{eqnarray*} where we used (2.8)in the second equality. It follows from (6.29) that \begin{eqnarray*} &&(div^2W)_{ik}\notag\\ &=&\nabla_j\nabla_lW_{ijkl}\notag\\ &=&\frac{n-3}{n-2}\nabla_j\nabla_lR_{ijkl}-\frac{n-3}{2(n-1)(n-2)}(g_{ik}\Delta R-\nabla_k\nabla_iR), \end{eqnarray*} By (6.30), we have \begin{eqnarray*} (div^3W)_i&=&\nabla_k\nabla_j\nabla_lW_{ijkl}\notag\\ &=&\frac{n-3}{n-2}\nabla_k\nabla_j\nabla_lR_{ijkl}-\frac{n-3}{2(n-1)(n-2)}(\nabla_i\Delta R-\nabla_k\nabla_k\nabla_iR)\notag\\ &=&\frac{n-3}{n-2}\nabla_k\nabla_j\nabla_lR_{ijkl}+\frac{n-3}{2(n-1)(n-2)}R_{ik}\nabla_kR, \end{eqnarray*} From (6.31), we have \begin{eqnarray*} div^4W&=&\nabla_i\nabla_k\nabla_j\nabla_lW_{ijkl}\notag\\ &=&\frac{n-3}{n-2}\nabla_i\nabla_k\nabla_j\nabla_lR_{ijkl}+\frac{n-3}{2(n-1)(n-2)}\nabla_i(R_{ik}\nabla_kR)\notag\\ &=&\frac{n-3}{n-2}\nabla_i\nabla_k\nabla_j\nabla_lR_{ijkl}\notag\\ &&+\frac{n-3}{2(n-1)(n-2)}(\frac{|\nabla R|^2}{2}+R_{ik}\nabla_i\nabla_kR), \end{eqnarray*} $\hfill\Box$ As a corollary of Proposition 6.1, we have \\\\\textbf{Corollary 6.1} Let $(M^n,g)$ $(n\geq3)$ be a gradient shrinking Ricci soliton with (1.1), we have the following identities. \begin{equation} (divW)_{ijk}=\frac{n-3}{n-2}(\nabla_jR_{ik}-\nabla_iR_{jk})-\frac{n-3}{2(n-1)(n-2)}(g_{ik}\nabla_jR-g_{jk}\nabla_iR), \end{equation} \begin{eqnarray} (div^2W)_{ik}&=&\frac{n-3}{n-2}(2\lambda R_{ik}+\nabla_{\nabla f}R_{ik}-R_{ik}^2-R_{ijkl}R_{jl})-\frac{n-3}{2(n-1)}\nabla_i\nabla_kR\notag\\ &&-\frac{n-3}{2(n-1)(n-2)}(\nabla_{\nabla f}R+2\lambda R-2|Ric|^2)g_{ik}, \end{eqnarray} \begin{equation} (div^3W)_{i}=-\frac{n-3}{n-2}R_{ijkl}\nabla_kR_{jl}+\frac{n-3}{2(n-1)(n-2)}R_{ik}\nabla_kR, \end{equation} and \begin{eqnarray} div^4W&=&\frac{n-3}{n-2}(\nabla_lR_{jk}\nabla_kR_{jl}-|\nabla Ric|^2-R_{ijkl}\nabla_i\nabla_kR_{jl})\notag\\ &&+\frac{n-3}{2(n-1)(n-2)}(\frac{1}{2}|\nabla R|^2+R_{ik}\nabla_i\nabla_kR). \end{eqnarray} \\\\\textbf{Proof.} Applying (2.2) to (6.29), we obtain (6.33). Applying (2.10) and (2.13) to (6.30), we can get (6.34). Applying (2.14) to (6.31), we have (6.35). Applying (2.15) to (6.32), we have (6.36).$\hfill\Box$ \\\\Next, we prove that a gradient shrinking Ricci soliton with $div^3W(\nabla f)=0$ is rigid. \\\\\textbf{Theorem 6.1} Let $(M^n,g)$ be a gradient shrinking Ricci soliton with (1.1). If $div^3W(\nabla f)=0$, then $(M^n,g)$ is rigid. 
\\\\\textbf{Proof.} By (6.35), we have \begin{eqnarray} &&div^3W(\nabla f)\notag\\ &=&-\frac{n-3}{n-2}R_{ijkl}\nabla_kR_{jl}\nabla_if+\frac{n-3}{2(n-1)(n-2)}R_{ik}\nabla_kR\nabla_if\notag\\ &=&\frac{n-3}{2(n-2)}(\nabla_iR_{ijkl})(\nabla_lR_{jk}-\nabla_kR_{jl})+\frac{n-3}{4(n-1)(n-2)}|\nabla R|^2\notag\\ &=&-\frac{n-3}{2(n-2)}|divRm|^2+\frac{n-3}{4(n-1)(n-2)}|\nabla R|^2, \end{eqnarray} where we used (2.4) and (2.8) in the second equality and (2.2) in the last. It follows from (2.8) that $|\nabla R|^2\leq4|Ric|^2|\nabla f|^2$. By Lemma 2.1, (2.12), (2.16) and Lemma 2.5, we have \begin{equation} \int_M|\nabla R|^2e^{-f}\leq 4\int_M|Ric|^2|\nabla f|^2e^{-f}\leq\int_M|Ric|^2e^{-\alpha f}<+\infty,\notag \end{equation} for some constant $\alpha\in(0,1]$. Integrating (6.37) and using the condition $div^3W(\nabla f)=0$, we obtain \begin{equation} \int_M|divRm|^2e^{-f}=\frac{1}{2(n-1)}\int_M|\nabla R|^2e^{-f}<+\infty.\notag \end{equation} It follows from (2.20) that \begin{eqnarray} \int_M|\nabla Ric|^2e^{-f}&=&\int_M|divRm|^2e^{-f}\notag\\ &=&\frac{1}{2(n-1)}\int_M|\nabla R|^2e^{-f}\notag\\ &\leq&\frac{n}{2(n-1)}\int_M|\nabla Ric|^2e^{-f}, \end{eqnarray} where we used $|\nabla R|^2\leq n|\nabla Ric|^2$. Since $\frac{n}{2(n-1)}<1$, we conclude from (6.38) that \begin{equation} \int_M|divRm|^2e^{-f}=\int_M|\nabla R|^2e^{-f}=0,\notag \end{equation} i.e., $|divRm|=|\nabla R|=0$ a.e. Since any gradient shrinking Ricci soliton is analytic in harmonic coordinates, $|divRm|=0$ on $M$. It follows that $M$ is radially flat. Moreover, $|\nabla R|=0$ on $M$, i.e., $R$ is a constant on $M$. By Lemma 2.4, $(M^n,g)$ is rigid.$\hfill\Box$ \section{Four-dimensional Case} \renewcommand{\theequation}{\thesection.\arabic{equation}} \indent\par We prove Theorems 1.4 to 1.6 in this section. From Theorems 1.1 to 1.3, we only need to show the following classification theorem. \\\\\textbf{Theorem 7.1} Let $(M^4,g)$ be a $4$-dimensional rigid gradient shrinking Ricci soliton with (1.1). Then $(M^4,g)$ is either (\rmnum{1}) Einstein, or (\rmnum{2}) a finite quotient of the Gaussian shrinking soliton $\mathbb{R}^4$, $\mathbb{R}^2\times\mathbb{S}^2$ or the round cylinder $\mathbb{R}\times\mathbb{S}^3$. Before we prove Theorem 7.1, we present some results that are needed in its proof. \\\\\textbf{Lemma 7.1} (M. Fern\'{a}ndez-L\'{o}pez and E. Garc\'{i}a-R\'{i}o [9]) Let $(M^n,g,f)$ be an $n$-dimensional gradient shrinking Ricci soliton with constant scalar curvature. Then $R\in\{0,\lambda,\cdots,(n-1)\lambda,n\lambda\}$. \\\\\textbf{Lemma 7.2} (M. Fern\'{a}ndez-L\'{o}pez and E. Garc\'{i}a-R\'{i}o [9]) No complete gradient shrinking Ricci soliton may exist with $R=\lambda$. Now we are ready to prove Theorem 7.1. \\\\\textbf{Proof of Theorem 7.1:} Note that $(M^4,g)$ is rigid, i.e., it is a finite quotient of $\mathbb{R}^k\times N^{4-k}$, where $N$ is an Einstein manifold and $k\in\{0,1,2,3,4\}$. It follows that $M^4$ has constant scalar curvature. Moreover, Lemma 7.1 and Lemma 7.2 imply that $R\in\{0,2\lambda,3\lambda,4\lambda\}$. We denote by $\{e_i\}_{i=1}^4$ a local orthonormal frame of $M^4$ with $e_1=\frac{\nabla f}{|\nabla f|}$. Moreover, we use $\{\alpha_i\}_{i=1}^4$ to represent eigenvalues of the Ricci tensor with corresponding orthonormal eigenvectors $\{e_i\}_{i=1}^4$. In the following, we divide the arguments into four cases: $\bullet$ Case 1: $R\equiv0$. In this case, $(M^4,g,f)$ is a finite quotient of the Gaussian soliton $\mathbb{R}^4$. 
$\bullet$ Case 2: $R\equiv2\lambda$. In this case, we have \[(\alpha_1,\alpha_2,\alpha_3,\alpha_4)\in\{(\frac{\lambda}{2},\frac{\lambda}{2},\frac{\lambda}{2},\frac{\lambda}{2}),(0,\frac{2\lambda}{3},\frac{2\lambda}{3},\frac{2\lambda}{3}),(0,0,\lambda,\lambda),(0,0,0,2\lambda)\}.\] It follows from (2.10) that $|Ric|^2=\lambda R=2\lambda^2$. Therefore, $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)=(0,0,\lambda,\lambda)$. The rigidity of $(M^4,g)$ implies that it is a finite quotient of $\mathbb{R}^2\times N^2$ with positive scalar curvature. It is clear that $N^2$ has to be $\mathbb{S}^2$. Therefore, $(M^4,g)$ is a finite quotient of $\mathbb{R}^2\times\mathbb{S}^2$. $\bullet$ Case 3: $R\equiv3\lambda$. In this case, we have \[(\alpha_1,\alpha_2,\alpha_3,\alpha_4)\in\{(\frac{3\lambda}{4},\frac{3\lambda}{4},\frac{3\lambda}{4},\frac{3\lambda}{4}),(0,\lambda,\lambda,\lambda),(0,0,\frac{3\lambda}{2},\frac{3\lambda}{2}),(0,0,0,3\lambda)\}.\] It follows from (2.10) that $|Ric|^2=\lambda R=3\lambda^2$. Therefore, $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)=(0,\lambda,\lambda,\lambda)$. The rigidity of $(M^4,g)$ implies that it is a finite quotient of $\mathbb{R}\times N^3$, where $N^3$ is Einstein with positive scalar curvature. It is clear that $N^3$ has to be $\mathbb{S}^3$. Therefore, $(M^4,g)$ is a finite quotient of $\mathbb{R}\times\mathbb{S}^3$. $\bullet$ Case 4: $R\equiv4\lambda$. In this case, $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)=(\lambda,\lambda,\lambda,\lambda)$, i.e., $(M^4,g)$ is Einstein with $Ric=\lambda g$. We conclude that $(M^4,g)$ is either Einstein or a finite quotient of $\mathbb{R}^4$, $\mathbb{R}^2\times\mathbb{S}^2$ or $\mathbb{R}\times\mathbb{S}^3$.$\hfill\Box$ \section{Appendix} \renewcommand{\theequation}{\thesection.\arabic{equation}} \indent\par G. Catino, P. Mastrolia and D. D. Monticelli [4] defined the fourth order divergence of the Weyl tensor $div^4W$ to be $\nabla_k\nabla_j\nabla_l\nabla_iW_{ikjl}$. Moreover, they proved that a gradient shrinking Ricci soliton with $div^4W=0$ is rigid. It is clear from their proof that this result holds for $\nabla_k\nabla_j\nabla_l\nabla_iW_{ikjl}\leq0$. \\\\\textbf{Remark 8.1} The definition of $div^4W$ in G. Catino, P. Mastrolia and D. D. Monticelli [4] differs from ours by a minus sign. To be more precise, we have \begin{equation} \nabla_k\nabla_j\nabla_l\nabla_iW_{ikjl}=\nabla_j\nabla_k\nabla_l\nabla_iW_{ijkl}=-\nabla_j\nabla_k\nabla_l\nabla_iW_{jikl}=-\nabla_i\nabla_k\nabla_l\nabla_jW_{ijkl}. \end{equation} It follows from (6.30) that $\nabla_j\nabla_lW_{ijkl}$ is symmetric in $i$ and $k$; hence it is also symmetric in $j$ and $l$, i.e., \begin{equation} \nabla_j\nabla_lW_{ijkl}=\nabla_l\nabla_jW_{ijkl}. \end{equation} Combining (8.39) and (8.40), we have \[\nabla_k\nabla_j\nabla_l\nabla_iW_{ikjl}=-\nabla_i\nabla_k\nabla_j\nabla_lW_{ijkl}.\] It is clear from (6.32) that \[div^4W=\frac{n-3}{n-2}div^4Rm+\frac{n-3}{2(n-1)(n-2)}(\frac{1}{2}|\nabla R|^2+R_{ik}\nabla_i\nabla_kR).\] The following theorems were proved by G. Catino, P. Mastrolia and D. D. Monticelli [4]; we give a different proof here. \\\\\textbf{Theorem 8.1} Let $(M^n,g)$ be a compact gradient shrinking Ricci soliton with (1.1). If $div^4W=0$, then $(M^n,g)$ is Einstein. 
\\\\\textbf{Proof.} Integrating (6.36), we have \begin{eqnarray} &&\int_Mdiv^4We^{-f}\notag\\ &=&\frac{n-3}{n-2}\int_M(\nabla_lR_{jk}\nabla_kR_{jl}-|\nabla Ric|^2-R_{ijkl}\nabla_i\nabla_kR_{jl})e^{-f}\notag\\ &&+\frac{n-3}{2(n-1)(n-2)}\int_M(\frac{1}{2}|\nabla R|^2+R_{ik}\nabla_i\nabla_kR)e^{-f}\notag\\ &=&-\frac{n-3}{2(n-2)}\int_M|\nabla Ric|^2e^{-f}+\frac{n-3}{4(n-1)(n-2)}\int_M|\nabla R|^2e^{-f}\notag\\ &\leq&-\frac{n-3}{4n(n-1)}\int_M|\nabla R|^2e^{-f}, \end{eqnarray} where we used Lemma 3.1, (2.5) and (2.7) in the second equality. Moreover, we used $|\nabla R|^2\leq n|\nabla Ric|^2$ in the inequality. Since $div^4W=0$, it follows from (8.41) that $\nabla R=0$ a.e. Since any gradient shrinking Ricci soliton is analytic in harmonic coordinates, we have $\nabla R=0$ on $M$, i.e., $R$ is a constant on $M$. Therefore, $Ric(\nabla f,\nabla f)=\frac{1}{2}\langle\nabla R,\nabla f\rangle=0$. By Lemma 2.3, $(M^n,g)$ is Einstein.$\hfill\Box$ \\\\\textbf{Theorem 8.2} Let $(M^n,g)$ be a complete non-compact gradient shrinking Ricci soliton with (1.1). If $div^4W=0$, then $(M^n,g)$ is rigid. \\\\\textbf{Proof.} Let $\phi:\mathbb{R}_+\rightarrow\mathbb{R}$ be a $C^2$ function with $\phi=1$ on $(0,s]$, $\phi=0$ on $[2s,\infty)$ and $-\frac{c}{t}\leq\phi'(t)\leq0$ on $(s,2s)$ for some constant $c>0$. Define $D(r):=\{x\in M|f(x)\leq r\}$. Integrating (6.36), we have \begin{eqnarray} &&\int_Mdiv^4W\phi^2(f)e^{-f}\notag\\ &=&\frac{n-3}{n-2}\int_M(\nabla_lR_{jk}\nabla_kR_{jl}-|\nabla Ric|^2-R_{ijkl}\nabla_i\nabla_kR_{jl})\phi^2(f)e^{-f}\notag\\ &&+\frac{n-3}{2(n-1)(n-2)}\int_M(\frac{1}{2}|\nabla R|^2+R_{ik}\nabla_i\nabla_kR)\phi^2(f)e^{-f}\notag\\ &\leq&c\int_{D(2s)\backslash D(s)}|Ric|^2|\nabla f|^2(\phi')^2e^{-f}-\frac{n-3}{4(n-2)}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}\notag\\ &&+\frac{n-3}{4(n-1)(n-2)}\int_M|\nabla R|^2\phi^2e^{-f}-\frac{n-3}{(n-1)(n-2)}\int_MR_{ik}\nabla_kR\nabla_if\phi\phi'e^{-f}\notag\\ &\leq&\frac{c}{s^2}\int_{D(2s)\backslash D(s)}|Ric|^2|\nabla f|^2e^{-f}+\frac{n-3}{3(n-1)(n-2)}\int_M|\nabla R|^2\phi^2(f)e^{-f}\notag\\ &&-\frac{n-3}{2(n-2)}\int_M|\nabla Ric|^2\phi^2(f)e^{-f}, \end{eqnarray} where we used Lemma 4.1 and Lemma 4.2 in the first inequality. Applying $div^4W=0$ and $|\nabla Ric|^2\geq\frac{|\nabla R|^2}{n}$ to (8.42), we obtain \begin{eqnarray} 0&\leq&\frac{c}{s^2}\int_{D(2s)\backslash D(s)}|Ric|^2|\nabla f|^2e^{-f}-\frac{(n-3)^2}{6n(n-1)(n-2)}\int_M|\nabla R|^2\phi^2(f)e^{-f}. \end{eqnarray} It follows from Lemma 2.1, (2.12), (2.16) and Lemma 2.5 that \[\int_M|Ric|^2|\nabla f|^2e^{-f}\leq\int_M|Ric|^2e^{-\alpha f}<+\infty\] for some $\alpha\in(0,1]$. Therefore, $$\frac{c}{s^2}\int_{D(2s)\backslash D(s)}|Ric|^2|\nabla f|^2e^{-f}\rightarrow0$$ as $s\rightarrow+\infty$. By taking $s\rightarrow+\infty$ in (8.43), we obtain $\int_M|\nabla R|^2e^{-f}=0$. It follows that $\nabla R=0$ a.e. Since any gradient shrinking Ricci soliton is analytic in harmonic coordinates, we have $\nabla R=0$ on $M$, i.e., $R$ is a constant on $M$. By taking $s\rightarrow+\infty$ in (8.42) and using $div^4W=0$ and $|\nabla R|=0$, we obtain $\int_M|\nabla Ric|^2e^{-f}=0$. Since $\int_M|\nabla Ric|^2e^{-f}<+\infty$, it follows from (2.20) that \begin{equation} \int_M|divRm|^2e^{-f}=\int_M|\nabla Ric|^2e^{-f}=0. \end{equation} Hence, $|divRm|=0$ a.e. Since any gradient shrinking Ricci soliton is analytic in harmonic coordinates, we have $|divRm|=0$ on $M$. It is clear that $divRm=0$ implies that $M^n$ is radially flat. 
Since $M^n$ is radially flat and has constant scalar curvature, it follows from Lemma 2.4 that $(M^n,g)$ is rigid.$\hfill\Box$ From Theorem 7.1, Theorem 8.1 and Theorem 8.2, we have a classification theorem for $4$-dimensional gradient shrinking Ricci solitons with $div^4W=0$: \\\\\textbf{Theorem 8.3} Let $(M^4,g)$ be a $4$-dimensional gradient shrinking Ricci soliton with (1.1). If $div^4W=0$, then $(M^4,g)$ is either (\rmnum{1}) Einstein, or (\rmnum{2}) a finite quotient of the Gaussian shrinking soliton $\mathbb{R}^4$, $\mathbb{R}^2\times\mathbb{S}^2$ or the round cylinder $\mathbb{R}\times\mathbb{S}^3$. \\\\\textbf{Remark 8.2} It is clear from the proof that Theorems 8.1 to 8.3 hold for $div^4W\geq0$. Moreover, it follows from (8.39) that Theorems 8.1 to 8.3 still hold if the indices of $div^4W$ are permuted. \section*{Acknowledgements} \renewcommand{\theequation}{\thesection.\arabic{equation}} \indent\par We would like to thank Professor Huai-Dong Cao for his encouragement and suggestions in improving the paper. The first author also thanks Professor Huai-Dong Cao for his kind invitation and warm hospitality during his stay at Lehigh University.
\section{Introduction} Active inference \citep{friston_active_2012}, and a range of other formalisms usually referred to as intrinsic motivations \citep{storck_reinforcement_1995,klyubin_empowerment_2005,ay_predictive_2008}, all aim to answer a similar question: “Under minimal assumptions, how should an agent act?”. More practically, they relate to what would be a universal way to generate behaviour for an agent or robot that appropriately deals with its environment, i.e.\ acquires the information needed to act and acts towards an intrinsic goal. To this end, both the free energy principle and intrinsic motivations aim to bridge the gap between giving a biologically plausible explanation for how real organisms deal with the problem and providing a formalism that can be implemented in artificial agents. Additionally, they share a range of properties, such as an independence of a priori semantics and being defined purely in terms of the dynamics of the agent-environment interaction, i.e.\ the agent's perception-action loop. Despite these numerous similarities, as far as we know, there has not been any unified or comparative treatment of those approaches. We believe this is in part due to a lack of an appropriate unifying mathematical framework. To alleviate this, we present a technically complete and comprehensive treatment of active inference, including a decomposition of its perception and action selection modes. Such a decomposition allows us to relate active inference and the inherent motivational principle to other intrinsic motivation paradigms such as empowerment \citep{klyubin_empowerment_2005}, predictive information \citep{ay_predictive_2008}, and knowledge seeking \citep{storck_reinforcement_1995,orseau_universal_2013}. Furthermore, we are able to clarify the relation to universal reinforcement learning \citep{hutter_universal_2005}. Our treatment is deliberately comprehensive and complete, aiming to be a reference for readers interested in the mathematical foundations. A considerable number of articles have been published on active inference \citep[e.g.][]{friston_active_2012,friston_active_2015,friston_active_2016,friston_active_learning_2016,friston_active_curiosity_2017,friston_graphical_2017,linson_active_2018}. Active inference defines a procedure for both perception and action of an agent interacting with a partially observable environment. The definition of the method, in contrast to other existing approaches~\citep[e.g.][]{hutter_universal_2005,doshi-velez_bayesian_2015,leike_nonparametric_2016}, does not maintain a clear separation between the inference and the action selection mechanisms, and the objective function. Most approaches for perception and action selection are generally formed of three steps: The first step involves a learning or inference mechanism to update the agent's knowledge about the consequences of its actions. In a second step, these consequences are evaluated with respect to an agent-internal objective function. Finally, the action selection mechanism chooses an action depending on the preceding evaluation. In active inference, these three elements are entangled. On one hand, there is the main feature of active inference: the combination of knowledge updating and action selection into a single mechanism. This single mechanism is the minimisation of a ``variational free energy'' \citep[p.188]{friston_active_2015}. 
The ``inference'' part of the name is justified by the formal resemblance of the method to the variational free energy minimisation (also known as evidence lower bound maximisation) used in variational inference. Variational inference is a way to turn Bayesian inference into an optimisation problem which gives rise to an approximate Bayesian inference method \citep{wainwright_graphical_2007}. The ``active'' part is justified by the fact that the output of this minimisation is a probability distribution over actions from which the actions of the agent are then sampled. Behaviour in active inference is thus the result of a variational inference-like process. On the other hand, the function (i.e.\ expected free energy) that induces the objective function in active inference is said to be ``of the same form'' as the variational free energy \citep[p.2673]{friston_active_curiosity_2017} or even to ``follow'' from it \citep[p.10]{friston_active_2016}. This suggests that expected free energy is the only objective function compatible with active inference. In summary, perception and action in active inference intertwines four elements: variational approximation, inference, action selection, and an objective function. Besides these formal features, active inference is of particular interest for its claims on biological plausibility and its relationship to the thermodynamics of dissipative systems. According to \citet[Section 3]{friston_active_2012}, active inference is a ``corollary'' to the free energy principle. Therefore, it is claimed, actions must minimise variational free energy to resist the dispersion of states of self-organising systems \citep[see also][]{friston_life_2013, allen2016cognitivism}. Active inference has also been used to reproduce a range of neural phenomena in the human brain \citep{friston_active_2016}, and the overarching free energy principle has been proposed as a ``unified brain theory''~\cite{friston2010free}. Furthermore, the principle has been used in a hierarchical formulation as theoretical underpinning of the predictive processing framework~\cite[][pp. 305-306]{clark2015surfing}, successfully explaining a wide range of cognitive phenomena. Of particular interest for the present special issue, the representation of probabilities in the active inference framework is conjectured to be related to aspects of consciousness~\citep{friston_consciousness_2013,linson_active_2018}. These strong connections between active inference and biology, statistical physics, and consciousness research make the method particularly interesting for the design of artificial agents that can interact with- and learn about unknown environments. However, it is currently not clear to which extent active inference allows for modifications. We ask: how far do we have to commit to the precise combination of elements used in the literature, and what becomes interchangeable? One target for modifications is the objective function. In situations where the environment does not provide a specific reward signal and the goal of the agent is not directly specified, researchers often choose the objective function from a range of \textit{intrinsic motivations}. The concept of intrinsic motivation was introduced as a psychological concept by \citep{ryan_intrinsic_2000}, and is defined as ``the doing of an activity for its inherent satisfactions rather than for some separable consequence''. 
The concept helps us to understand one important aspect of consciousness: the assignment of affect to certain experiences, e.g.\ the experience of fun~\citep{Dennett1991-DENCE} when playing a game. Computational approaches to intrinsic motivations \citep{oudeyer2009intrinsic,schmidhuber_formal_2010,santucci_which_2013} can be categorised roughly by the psychological motivations they are imitating, e.g.\ drives to manipulate and explore, the reduction of cognitive dissonance, the achievement of optimal incongruity, and finally motivations for effectance, personal causation, competence and self-determination. Intrinsic motivations have been used to enhance behaviour aimed at extrinsic rewards \citep{sutton_reinforcement_1998}, but their defining characteristic is that they can serve as a goal-independent motivational core for autonomous behaviour generation. This characteristic makes them good candidates for the role of value functions for the design of intelligent systems \citep{pfeifer2005}. We attempt to clarify how to modify active inference to accommodate objective functions based on different intrinsic motivations. This may allow future studies to investigate whether and how altering the objective function affects the biological plausibility of active inference. Another target for modification, originating more from a theoretical standpoint, is the variational formulation of active inference. As mentioned above, variational inference formulates Bayesian inference as an optimisation problem; a family of probability distributions is optimised to approximate the direct, non-variational Bayesian solution. Active inference is formulated as an optimisation problem as well. We consequently ask: is active inference the variational formulation of a direct (non-variational) Bayesian solution? Such a direct solution would allow a formally simple formulation of active inference without recourse to optimisation or approximation methods, at the cost of sacrificing tractability in most scenarios. To explore these questions, we take a step back from the established formalism, gradually extend the active inference framework, and comprehensively reconstruct the version presented in \citet{friston_active_2015}. We disentangle the four components of approximation, inference, action selection, and objective functions that are interwoven in active inference. One of our findings, from a formal point of view, is that expected free energy can be replaced by other intrinsic motivations. Our reconstruction of active inference then yields a unified formal framework that can accommodate: \begin{itemize} \item Direct, non-variational Bayesian inference in combination with standard action selection schemes known from reinforcement learning as well as objective functions induced by intrinsic motivations. \item Universal reinforcement learning through a special choice of the environment model and a small modification of the action selection scheme. \item Variational inference in place of the direct Bayesian approach. \item Active inference in combination with objective functions induced by intrinsic motivations. \end{itemize} We believe that our framework can benefit active inference research as a means to compare the dynamics induced by alternative action selection principles. Furthermore, it equips researchers on intrinsic motivations with additional ways for designing agents that share the biological plausibility of active inference. 
Finally, this article contributes to the research topic ``Consciousness in Humanoid Robots'' in several ways. First, there have been numerous claims on how active inference relates to consciousness or related qualities, which we outlined earlier in the introduction. The most recent work by \citet{linson_active_2018}, also part of this research topic, specifically discusses this relation, particularly with regard to assigning salience. Furthermore, intrinsic motivations (including the free energy principle for this argument) have a range of properties that relate to or are useful to a number of classical approaches recently summarised as Good Old-Fashioned Artificial Consciousness \citep[GOFAC,][]{10.3389/frobt.2018.00039}. For example, embodied approaches still need some form of value-function or motivation \citep{pfeifer2005}, and benefit from the fact that intrinsic motivations are usually universal yet sensitive with regard to an agent's embodiment. The enactive AI framework \citep{froese_enactive_2009}, another candidate for GOFAC, proposes further requirements on how value underlying motivation should be grounded in constitutive autonomy and adaptivity. \cite{guckelsberger2016does} present tentative claims on how empowerment maximisation relates to these requirements in biological systems, and how it could contribute to realising them in artificial ones. Finally, the idea of using computational approaches for intrinsic motivation goes back to developmental robotics \citep{oudeyer_intrinsic_2007}, where it is suggested as a way to produce a learning and adapting robot, which could offer another road to robot consciousness. Whether these Good Old-Fashioned approaches will ultimately be successful is an open question, and \cite{10.3389/frobt.2018.00039} assess them rather critically. However, extending active inference to alternative intrinsic motivations in a unified framework allows us to combine features of these two approaches. For example, it may bring together the neurobiological plausibility of active inference and the constitutive autonomy afforded by empowerment. \section{Related Work} \label{sec:relatedWork} Our work is largely based on \citet{friston_active_2015} and we adopt the setup and models from it. This means many of our assumptions are due to the original paper. Recently, \citet{buckley2017free} have provided an overview of continuous-variable active inference with a focus on the mathematical aspects, rather than the relationship to thermodynamic free energy, biological interpretations or neural correlates. Our work here is in a similar spirit but focuses on the discrete formulation of active inference and how it can be decomposed. As we point out in the text, the case of direct Bayesian inference with separate action selection is strongly related to general reinforcement learning \citep{hutter_universal_2005,leike_nonparametric_2016,aslanides_universal_2017}. This approach also tackles unknown environments with, and in later versions also without, externally specified reward in a Bayesian way. Other work focusing on unknown environments with rewards includes e.g.\ \citet{ross_model-based_2008} and \cite{doshi-velez_bayesian_2015}. We would like to stress that we do not propose agents using Bayesian or variational inference as competitors to any of the existing methods. Instead, our goal is to provide an unbiased investigation of active inference with a particular focus on extending the inference methods, objective functions and action-selection mechanisms. 
Furthermore, these agents follow almost completely, in a straightforward (if quite involved) way, from the model in \citet{friston_active_2015}. A small difference is the extension to parameterisations of environment and sensor dynamics. These parameterisations can be found in \citet{friston_active_2016}. We note that work on planning as inference \citep{attias_planning_2003,toussaint_probabilistic_2009,botvinick_planning_2012} is generally related to active inference. In this line of work, the probability distribution over actions or action sequences that lead to a given goal, specified as a sensor value, is inferred. Since active inference also tries to obtain a probability distribution over actions, the approaches are related. The formalisation of the goal, however, differs, at least at first sight. How exactly the two approaches relate is beyond the scope of this publication. \section{Structure of this Article} Going forward, we will first outline our mathematical notation in \cref{sec:notation}. We then introduce the perception-action loop, which contains both agent and environment, in \cref{sec:paloop}. In \cref{sec:inference} we introduce the model used by \citet{friston_active_2015}. We then show how to obtain beliefs about the consequences of actions via both (direct) Bayesian inference (\cref{sec:binference}) and (approximate) variational inference (\cref{sec:approxpostandvi}). These beliefs are represented in the form of a set of complete posteriors. Such a set is a common object but usually does not play a prominent role in Bayesian inference. Here, it turns out to be a convenient structure for capturing the agent's knowledge and describing intrinsic motivations. Under certain assumptions that we discuss in \cref{sec:urlmodel}, the direct Bayesian case specialises to the belief updating of the Bayesian universal reinforcement learning agent of \citet{aslanides_universal_2017}. We then discuss in \cref{sec:aselectandim} how those beliefs (i.e.\ the set of complete posteriors) can induce action-value functions (playing the role of objective functions) via a given intrinsic motivation function. We present standard (i.e.\ non-active inference) ways to select actions based on such action-value functions. Then we look at different instances of intrinsic motivation functions. The first is the ``expected free energy'' of active inference. For this we explicitly show how our formalism produces the original expression in \citet{friston_active_2015}. Looking at the formulations of other intrinsic motivations it becomes clear that the expected free energy relies on expressions quite similar or identical to those that occur in other intrinsic motivations. This suggests that, at least in principle, there is no reason why active inference should only work with expected free energy as an intrinsic motivation. Finally, in \cref{sec:activeinference} we formulate active inference for arbitrary action-value functions which include those induced by intrinsic motivations. Modifying the generative model of \cref{sec:genmodel} and looking at the variational approximation of its posterior comes close but does not correspond to the original active inference of \citet{friston_active_2015}. We explain the additional trick that is needed. In the appendices we provide some more detailed calculations as well as notation translation tables (\cref{appendix:translationTables}) from our own to those of \citet{friston_active_2015} and \citet{friston_active_2016}. 
\section{Notation} \label{sec:notation} We will explain our notation in more detail in the text, but for readers that mostly look at equations we give a short summary. Note that, Appendix \ref{appendix:translationTables} comprises a translation between \citet{friston_active_2015,friston_active_2016} and the present notation. Mostly, we will denote random variables by upper case letters e.g.\ $X,Y,A,E,M,S,...$ their state spaces by calligraphic upper case letters $\mathcal{X},\mathcal{Y},\mathcal{A},\mathcal{E},\mathcal{M},\S...$, specific values of random variables which are elements of the state spaces by lower case letters $x,y,a,e,m,s,...$. An exception to this are random variables that act as parameters of probability distributions. For those, we use upper case Greek letters $\Xi,\Phi,\Theta,...$, for their usually continuous state spaces we use ${\Delta_\Xi},{\Delta_\Theta},{\Delta_\Phi},...$ and for specific values the lower case Greek letters $\xi,\phi,\theta,...$. In cases where a random variable plays the role of an estimate of another variable $X$, we write the estimate as $\hX$, its state space as $\shX$ and its values as $\hx$. We distinguish different types of probability distributions with letters $\p,\q,\r$ and $\d$. Here, $\p$ corresponds to probability distributions describing properties of the physical world including the agent and its environment, $\q$ identifies model probabilities used by the agent internally, $\r$ denotes approximations of such model probabilities which are also internal to the agent, and $\d$ denotes a probability distribution that can be replaced by a $\q$ or a $\r$ distribution. We write conditional probabilities in the usual way, e.g.\ $\p(y|x)$. For a model of this conditional probability parameterised by $\theta$, we write $\q(\hy|\hx,\theta)$. \section{Perception-Action Loop} \label{sec:paloop} \begin{figure}[ht] \begin{center} \begin{tikzpicture} [->,>=stealth,auto,node distance=2cm, thick] \node (e) [] {$E_1$}; \node (e') [right of=e] {$E_{2}$}; \node (s) [below of=e, node distance=1cm] {$S_1$}; \node (s') [below of=e', node distance=1cm] {$S_{2}$}; \node (a) [below of=s, node distance=1cm] {$A_1$}; \node (a') [right of=a] {$A_{2}$}; \node (m) [below of=a, node distance=1cm] {$M_1$}; \node (m') [below of=a', node distance=1cm] {$M_{2}$}; \node (el) [left of=e] {$E_0$}; \node (er) [right of=e'] {}; \node (sl) [below of=el, node distance=1cm] {$S_0$}; \node (sr) [below of=er, node distance=1cm] {}; \node (al) [below of=sl, node distance=1cm] {}; \node (ar) [right of=a'] {}; \node (ml) [below of=al, node distance=1cm] {}; \node (mr) [below of=ar, node distance=1cm] {}; \path (e) edge node {} (e') (e) edge node {} (s) (e') edge node {} (s') (m) edge node {} (a) (m) edge node {} (m') (m') edge node {} (a') (s) edge node {} (m') (a) edge node {} (m') (a) edge[bend left=45] node {} (e) (a') edge[bend left=45] node {} (e') (el) edge node {} (e) (el) edge node {} (sl) (sl) edge node {} (m) (e') edge[-,dotted] node {} (er) (s') edge[-,dotted] node {} (mr) (m') edge[-,dotted] node {} (mr) (a') edge[-,dotted] node {} (mr) ; \end{tikzpicture} \caption{First two time steps of the Bayesian network representing the perception-action loop (PA-loop). All subsequent time steps are identical to the one from time $t=1$ to $t=2$.} \label{fig:smloop} \end{center} \end{figure} In this section we introduce an agent's perception-action loop (PA-loop) as a causal Bayesian network. This formalism forms the basis for our treatment of active inference. 
The PA-loop should be seen as specifying the (true) dynamics of the underlying physical system that contains agent and environment as well as their interactions. In Friston's formulation, the environment dynamics of the PA-loop are referred to as the \textit{generative process}. In general these dynamics are inaccessible to the agent itself. Nonetheless, parts of these (true) dynamics are often assumed to be known to the agent in order to simplify computation \citep[see e.g.~][]{friston_active_2015}. We first formally introduce the PA-loop as causal Bayesian network, and then state specific assumptions for the rest of this article. \subsection{PA-loop Bayesian Network} \cref{fig:smloop} shows an agent's PA-loop, formalised as causal Bayesian network. The network describes the following causal dependencies over time: At $t=0$ an initial environment state $e_0 \in \sE$ leads to an initial sensor value $s_0 \in \sS$. This sensor value influences the memory state $m_1 \in \sM$ of the agent at time $t=1$. Depending on this memory state, action $a_1 \in \sA$ is performed which influences the transition of the environment state from $e_0$ to $e_1 \in \sE$. The new environment state leads to a new sensor value $s_1$ which, together with the performed action $a_1$ and the memory state $m_1$, influence the next memory state $m_2$. The loop then continues in this way until a final time step $T$. We assume that all variables are finite and that the PA-loop is time-homogeneous\footnote{This means that all state spaces and transition probabilities are independent of the time step, e.g.\ $\sM_{t}=\sM_{t-1}$ and $p(s_t|e_t)=p(s_{t-1}|e_{t-1})$.}. % We exclude the first transition from $t=0$ to $t=1$ from the assumption of time-homogeneity in order to avoid having to pick an arbitrary action which precedes the investigated time-frame. The first transition is thus simplified to $\p(m_1|s_0,a_0):= \p(m_1|s_0)$. Under the assumption of time-homogeneity and the causal dependencies expressed in \cref{fig:smloop}, the joint probability distribution over the entire PA-loop is defined by: \begin{align} \p(e_{0:T},s_{0:T},a_{1:T},m_{1:T})&= \left( \prod_{t=1}^T \p(a_t|m_t) \p(m_t|s_{t-1},a_{t-1}) \p(s_t|e_t) \p(e_t|a_t,e_{t-1}) \right) \p(s_0|e_0) \p(e_0) \end{align} where $e_{0:T}$ is shorthand for states $(e_0, e_1, ..., e_T)$. In order to completely determine this distribution we therefore have to specify the state spaces $\sE,\sS,\sA$, and $\sM$ as well as the following probabilities and mechanisms for all $e_0,e_t,e_{t+1} \in \sE; s_0,s_t \in \sS; a_t,a_{t+1} \in \sA; m_1,m_t,m_{t+1} \in \sM$ for $t>0$: \begin{multicols}{2} \multicollinenumbers \begin{itemize} \item initial environment distribution: $\p(e_0)$, \item environment dynamics: $\p(e_{t+1}|a_{t+1},e_t)$, \item sensor dynamics: $\p(s_t|e_t)$, \item action generation: $\p(a_t|m_t)$, \item initial memory step $\p(m_1|s_0)$, \item memory dynamics: $\p(m_{t+1}|s_t,a_t,m_t)$. \end{itemize} \end{multicols} In the following we will refer to a combination of initial environment distribution, environment dynamics, and sensor dynamics simply as an \textit{environment}. Similarly, an \textit{agent} is a particular combination of initial memory step, memory dynamics, and action generation. The indexing convention we use here is identical to the one used for the generative model (see \cref{sec:genmodel}) in \citet{friston_active_2015}. Also, note the dependence of $M_t$ on $S_{t-1}$, $M_{t-1}$, and additionally $A_{t-1}$ in \cref{fig:smloop}. 
In the literature, the dependence on $A_{t-1}$ is frequently not allowed \citep{ay_information-driven_2012,ay_umwelt_2015}. However, we assume an “efference”-like update of the memory. Note that this dependence in addition to the dependence on $m_{t-1}$ is only relevant if the actions are not deterministic functions of the memory state\footnote{In the deterministic case there is a function $f:\sM \rightarrow \sA$ such that $p(m_t|s_{t-1},a_{t-1},m_{t-1})=p(m_t|s_{t-1},f(m_{t-1}),m_{t-1})=p(m_t|s_{t-1},m_{t-1})$.}. If action selection is probabilistic, knowing the outcome $a_{t-1}$ of the action generation mechanism $\p(a_{t-1}|m_{t-1})$ will convey more information than only knowing the past memory state $m_{t-1}$. This additional information can be used in inference about the environment state and fundamentally change the intrinsic perspective of an agent. We do not discuss these changes in more detail here but the reader should be aware of the assumption. In a realistic robot scenario, the action $a_t$, if it is to be known by the agent, can only refer to the ``action signal'' or ``action value'' that is sent to the robot's physical actuators. These actuators will usually be noisy and the robot will not have access to the final effect of the signal it sends. The (noisy) conversion of an action signal to a physical configuration change of the actuator is here seen as part of the environment dynamics $\p(e_{t}|a_{t},e_{t-1})$. Similarly, the sensor value is the signal that the physical sensor of the robot produces as a result of a usually noisy measurement, so just like the actuator, the conversion of a physical sensor configuration to a sensor value is part of the sensor dynamics $\p(s_t|e_t)$ which in turn belongs to the environment. As we will see later, the actions and sensor values must have well defined state spaces $\sA$ and $\sS$ for inference on an internal model to work. This further justifies this perspective. \subsection{Assumptions} \label{sec:paloopassumptions} For the rest of this article we assume that the environment state space $\sE$, sensor state space $\sS$ as well as environment dynamics $\p(e_{t+1}|a_{t+1},e_t)$ and sensor dynamics $\p(s_t|e_t)$ are arbitrarily fixed and that some initial environmental state $e_0$ is given. Since we are interested in intrinsic motivations, our focus is not on specific environment or sensor dynamics but almost exclusively on action generation mechanisms of agents that rely minimally on the specifics of these dynamics. In order to focus on action generation, we assume that all the agents we deal with here have the same memory dynamics. For this, we choose a memory that stores all past sensor values $s_{\prec t}=(s_0,s_1,...,s_{t-1})$ and actions $a_{\prec t}=(a_1,a_2,...,a_{t-1})$ in the memory state $m_t$. This type of memory is also used in \citet{friston_active_2015,friston_active_2016} and provides the agent with all existing data about its interactions with the environment. In this respect, it could be called a perfect memory. At the same time, whatever the agent learned from $s_{\prec t}$ and $a_{\prec t}$ that remains true based on the next time step's $s_{\preceq t+1}$ and $a_{\preceq t+1}$ must be relearned from scratch by the agent. A more efficient memory use might store only a sufficient statistic of the past data and keep reusable results of computations in memory. Such improvements are not part of this article \citep[see e.g.][for discussion]{fox_minimum-information_2016}. 
Formally, the state space $\sM$ of the memory is the set of all sequences of sensor values and actions that can occur. Since there is only a sensor value and no action at $t=0$, these sequences always begin with a sensor value followed by pairs of sensor values and actions. Furthermore, the sensor value and action at $t=T$ are never recorded. Since we have assumed a time-homogeneous memory state space $\sM$ we must define it so that it contains all these possible sequences from the start. Formally, we therefore choose the union of the spaces of sequences of a fixed length (similar to a Kleene-closure): \begin{equation} \label{eq:memorystatespace} \sM=\sS \cup \left(\bigcup_{t=1}^{T-1} \sS\times (\sS \times \sA)^t\right). \end{equation} With this we can define the dynamics of the memory as: \begin{align} \p(m_1|s_0):&= \begin{cases} 1 &\text{ if } m_1 = s_0 \\ 0 &\text{ else.} \end{cases}\\ \p(m_t|s_{t-1},a_{t-1},m_{t-1}) :&= \begin{cases} 1 &\text{ if } m_t = m_{t-1} s_{t-1} a_{t-1} \\ 0 &\text{ else.} \end{cases} \end{align} This perfect memory may seem unrealistic and can cause problems if the sensor state space is large (e.g. high resolution images). However, we are not concerned with this type of problem here. Usually, the computation of actions based on past actions and sensor values becomes a challenge of efficiency long before storage limitations kick in: the necessary storage space for perfect memory only increases linearly with time, while, as we show later, the number of operations for Bayesian inference increases exponentially. For completeness we also note how the memory dynamics look if actions are a deterministic function $f:\sM \rightarrow \sA$ of the memory state. Recall that in this case we can drop the edge from $A_{t-1}$ to $M_t$ in the PA-loop in \cref{fig:smloop} and have $a_t=f(m_t)$ so that we can define: \begin{align} \p(m_1|s_0):&= \begin{cases} 1 &\text{ if } m_1 = s_0 \\ 0 &\text{ else.} \end{cases}\\ \p(m_t|s_{t-1},m_{t-1}) :&= \begin{cases} 1 &\text{ if } m_t = m_{t-1} s_{t-1} f(m_{t-1}) \\ 0 &\text{ else.} \end{cases} \end{align} Given a fixed environment and the memory dynamics, we only have to define the action generation mechanism $\p(a_t|m_t)$ to fully specify the perception-action loop. This is the subject of the next two sections. In order to stay as close to \citet{friston_active_2015} as possible, we first explain the individual building blocks that can be extracted from Friston's active inference as described in \citet{friston_active_2015}. These are the variational inference and the action selection. We then show how these two building blocks are combined in the original formulation. We eventually leverage our separation of components to show how the action selection component can be modified, and thus extend the active inference framework. \section{Inference and Complete Posteriors} \label{sec:inference} Ultimately, an agent needs to select actions. Inference based on past sensor values and actions is only needed if it is relevant to the action selection. Friston's active inference approach promises to perform action selection within the same inference step that is used to update the agent's model of the environment. In this section, we look at the inference component only and show how an agent can update a generative model in response to observed sensor values and performed actions. The natural way of updating such a model is Bayesian inference via Bayes' rule. This type of inference leads to what we call the \textit{complete posterior}. 
The complete posterior represents all knowledge that the agent can obtain about the consequences of its actions from its past sensor values and actions. In \cref{sec:aselectandim} we discuss how the agent can use the complete posterior to decide what is the best action to take. Bayesian inference as a straightforward recipe is usually not practical due to computational costs. The memory requirements of the complete posterior update increase exponentially with time, and so does the number of operations needed to select actions. To keep the computation tractable, we have to limit ourselves to using only parts of the complete posterior. Furthermore, since the direct expressions (even of parts) of complete posteriors are usually intractable, approximations are needed. Friston's active inference is committed to variational inference as an approximation technique. Therefore, we explain how variational inference can be used for this approximation. Our setup for variational inference (generative model and approximate posterior) is identical to the one in \citet{friston_active_2015}, but in this section we ignore the inference of actions included there. We will look at the extension to action inference in \cref{sec:aselectandim}. In the perception-action loop in \cref{fig:smloop}, action selection (and any inference mechanism used in the course of it) depends exclusively on the memory state $m_t$. As mentioned in \cref{sec:paloop}, we assume that this memory state contains all \textit{past} sensor values $s_{\prec t}$ and all \textit{past} actions $a_{\prec t}$. To save space, we write $sa_{\prec t}:=(s_{\prec t},a_{\prec t})$ to refer to both sensor values and actions. We then have: \begin{equation} m_t = sa_{\prec t}. \end{equation} However, since it is more intuitive to understand inference with respect to past sensor values and actions than in terms of memory, we use $sa_{\prec t}$ explicitly here in place of $m_t$. 
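To make the preceding structures concrete, the following sketch samples a single trajectory of the perception-action loop of \cref{fig:smloop} with the perfect memory $m_t=sa_{\prec t}$. It is only a minimal illustration and not part of the formalism: we assume small finite state spaces encoded as integers, and the identifiers (\texttt{sample\_pa\_loop}, \texttt{p\_e0}, \texttt{p\_e}, \texttt{p\_s}, \texttt{p\_a}) are hypothetical stand-ins for the initial environment distribution, environment dynamics, sensor dynamics, and action generation mechanism of \cref{sec:paloop}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_pa_loop(T, p_e0, p_e, p_s, p_a):
    # p_e0[e]       : initial environment distribution p(e_0)
    # p_e[a, e, e'] : environment dynamics p(e_t = e' | a_t = a, e_{t-1} = e)
    # p_s[e, s]     : sensor dynamics p(s_t = s | e_t = e)
    # p_a(m)        : action generation; maps a memory state (a tuple of past
    #                 sensor values and actions) to a distribution over actions
    e = rng.choice(len(p_e0), p=p_e0)              # e_0
    s = rng.choice(p_s.shape[1], p=p_s[e])         # s_0
    m = (s,)                                       # m_1 = s_0 (perfect memory)
    for t in range(1, T + 1):
        a = rng.choice(len(p_a(m)), p=p_a(m))      # a_t sampled from p(a_t | m_t)
        e = rng.choice(p_e.shape[2], p=p_e[a, e])  # e_t sampled from p(e_t | a_t, e_{t-1})
        s = rng.choice(p_s.shape[1], p=p_s[e])     # s_t sampled from p(s_t | e_t)
        if t < T:
            m = m + (s, a)                         # m_{t+1} = m_t s_t a_t
    return m
\end{verbatim}
The returned tuple is exactly the data $sa_{\prec T}$ on which the action generation mechanism, and thus any inference performed inside it, conditions.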
\subsection{Generative Model} \label{sec:genmodel} \begin{figure}[h!]% \begin{center} \begin{tikzpicture} [->,>=stealth,auto,node distance=2cm, thick] \tikzset{ hv/.style={to path={-| (\tikztotarget)}}, vh/.style={to path={|- (\tikztotarget)}}, } \tikzset{invi/.style={minimum width=0mm,inner sep=0mm,outer sep=0mm}} % % % % % % \node (e) [] {$\hE_1$}; \node (e') [right of=e] {$\hE_{2}$}; \node (s) [below of=e, node distance=1cm] {$\hS_1$}; \node (s') [below of=e', node distance=1cm] {$\hS_{2}$}; \node (a) [below of=s, node distance=1cm] {$\hA_1$}; \node (a') [right of=a] {$\hA_{2}$}; % \node (m') [below of=a', node distance=1cm] {}; \node (el) [left of=e] {$\hE_0$}; \node (er) [right of=e'] {}; \node (sl) [below of=el, node distance=1cm] {$\hS_0$}; \node (sr) [below of=er, node distance=1cm] {}; \node (al) [below of=sl, node distance=1cm] {}; \node (ar) [right of=a'] {}; \node (ml) [below of=al, node distance=1cm] {}; \node (mr) [below of=ar, node distance=1cm] {}; \node (th3) [left of=el] {${\Theta^3}$}; \node (th2) [above of=th3, node distance=1cm] {${\Theta^2}$}; \node (th1) [below of=th3, node distance=3cm] {${\Theta^1}$}; \node (al3) [left of=th3] {${\Xi^3}$}; \node (al2) [left of=th2] {${\Xi^2}$}; \node (al1) [left of=th1] {${\Xi^1}$}; \node (th2') [above of=e', node distance=1cm] {}; \node (th2r) [right of=th2'] {}; \node (c0) [right of=th1, node distance=1cm,invi] {}; \node (c1) [right of=c0,invi] {}; \node (c2) [right of=c1,invi] {}; \node (c3) [right of=c2,invi] {}; \node (c3a) [right of=c2,node distance=1cm,invi] {}; \path (al3) edge (th3) (al2) edge (th2) (al1) edge (th1) (th3) edge (el) (th2) edge[hv] (e) (th2) edge[hv] (e') (th2') edge[-,dotted] (th2r) (th1) edge[-] (c0) (c0) edge[vh] (sl) (c0) edge[-] (c1) (c1) edge[vh] (s) (c1) edge[-] (c2) (c2) edge[vh] (s') (c2) edge[-] (c3a) (c2) edge[-,dotted] (c3a) (c3) edge[-,dotted,vh] (sr) (e) edge node {} (e') (e) edge node {} (s) (e') edge node {} (s') % % % % % (a) edge[bend left=45,line width=6pt,draw=white] node {} (e) (a') edge[bend left=45,line width=6pt,draw=white] node {} (e') (a) edge[bend left=45] node {} (e) (a') edge[bend left=45] node {} (e') % % % % % % % % % % % % (el) edge node {} (e) (el) edge node {} (sl) % % % % % (e') edge[-,dotted] node {} (er) % (m') edge[-,dotted] node {} (mr) % ; \end{tikzpicture} \caption{Bayesian network of the generative model with parameters ${\boldsymbol{\Theta}}=({\boldsymbol{\Theta}^1},{\boldsymbol{\Theta}^2},{\boldsymbol{\Theta}^3})$ and hyperparameters $\Xi=({\Xi^1},{\Xi^2},{\Xi^3})$. Hatted variables are models / estimates of non-hatted counterparts in the perception-action loop in \cref{fig:smloop}. An edge that splits up connecting one node to $n$ nodes (e.g. ${\boldsymbol{\Theta}^2}$ to $\hE_1, \hE_2, ...$) corresponds to $n$ edges from that node to all the targets under the usual Bayesian network convention. Note that in contrast to the perception-action loop in \cref{fig:smloop}, imagined actions $\hA_t$ have no parents. They are either set to past values or, for those in the future, a probability distribution over them must be assumed.} \label{fig:genmodel} \end{center} \end{figure} The inference mechanism, internal to the action selection mechanism $\p(a|m)$, takes place on a hierarchical generative model (or density, in the continuous case). ``Hierarchical'' means that the model has parameters and hyperparameters, and ``generative'' indicates that the model relates \emph{parameters and latent variables}, i.e. 
the environment state, as ``generative'' causes, to sensor values and actions as \emph{data} in a joint distribution.
%
The generative model we investigate here is a part of the generative model used in \citet{friston_active_2015}. For now, we omit the probability distribution over future actions and the ``precision'', which are only needed for active inference and are discussed later. The generative models in \citet{friston_active_learning_2016,friston_active_2016,friston_active_curiosity_2017} are all closely related. Note that we are not inferring the causal structure of the Bayesian network or state space cardinalities, but define the generative model as a fixed Bayesian network with the graph shown in \cref{fig:genmodel}. It is possible to infer the causal structure \citep[see e.g.][]{ellis_learning_2008}, but in that case, it becomes impossible to represent the whole generative model as a single Bayesian network \citep{ortega_bayesian_2011}. The variables in the Bayesian network in \cref{fig:genmodel} that model variables occurring outside of $\p(a|m)$ in the perception-action loop (\cref{fig:smloop}) are denoted as hatted versions of their counterparts. More precisely: \begin{itemize} \item $\hs \in \shS=\sS$ are modelled sensor values, \item $\ha \in \shA=\sA$ are modelled actions, \item $\he \in \shE$ are modelled environment states. \end{itemize} To clearly distinguish the probabilities defined by the generative model from the true dynamics, we use the symbol $\q$ instead of $\p$. In accordance with \cref{fig:genmodel}, and also assuming time-homogeneity, the joint probability distribution over all variables in the model until some final modelled time ${\hat{T}}$ is given by: \begin{align} \label{eq:genmodel} \begin{split} \q(\he_{0:{\hat{T}}},&\hs_{0:{\hat{T}}},\ha_{1:{\hat{T}}},{\theta^1},{\theta^2},{\theta^3},{\xi^1},{\xi^2},{\xi^3}):=\\ &\left(\prod_{t=1}^{\hat{T}} \q(\hs_t|\he_t,{\theta^1}) \q(\he_t|\ha_t,\he_{t-1},{\theta^2}) \q(\ha_t) \right) \q(\hs_0|\he_0,{\theta^1})\q(\he_0|{\theta^3}) \left(\prod_{i=1}^3 \q(\theta^i|\xi^i) \q(\xi^i)\right) \end{split} \end{align} Here, ${\theta^1},{\theta^2},{\theta^3}$ are the parameters of the hierarchical model, and ${\xi^1},{\xi^2},{\xi^3}$ are the hyperparameters. To save space, we combine the parameters and hyperparameters by writing \begin{align} \theta&:=({\theta^1},{\theta^2},{\theta^3})\\ \xi&:=({\xi^1},{\xi^2},{\xi^3}). \end{align} To fully specify the generative model, or equivalently a probability distribution over \cref{fig:genmodel}, we have to specify the state spaces $\shE,\shS,\shA$ and: \begin{multicols}{2} \multicollinenumbers \begin{itemize} \item $\q(\hs|\he,{\theta^1})$ the sensor dynamics model, \item $\q(\he'|\ha',\he,{\theta^2})$ the environment dynamics model, \item $\q(\he_0|{\theta^3})$ the initial environment state model, \item $\q({\theta^1}|{\xi^1})$ the sensor dynamics prior, \item $\q({\theta^2}|{\xi^2})$ the environment dynamics prior, \item $\q({\theta^3}|{\xi^3})$ the initial environment state prior, \item $\q({\xi^1})$ the sensor dynamics hyperprior, \item $\q({\xi^2})$ the environment dynamics hyperprior, \item $\q({\xi^3})$ the initial environment state hyperprior, \item ${\hat{T}}$ the last modelled time step, \item $\q(\ha_t)$ for all $t \in \{1,...,{\hat{T}}\}$ the probability distribution over the actions at time $t$. \end{itemize} \end{multicols} The state spaces of the parameters and hyperparameters are determined by the choice of $\shE,\shS,\shA$.
We will see in \cref{sec:plugin} that $\shS=\sS$ and $\shA=\sA$ should be chosen in order to use this model for inference on past sensor values and actions. For $\shE$ it is not necessary to set it equal to $\sE$ for the methods described to work. We note that if we set $\shE$ equal to the memory state space of \cref{eq:memorystatespace} the model and its updates become equivalent to those used by the Bayesian universal reinforcement learning agent \citet{hutter_universal_2005} in a finite (environment and time-interval) setting (see \cref{sec:urlmodel}). The last modelled time step ${\hat{T}}$ can be chosen as ${\hat{T}}=T$, but it is also possible to always set it to ${\hat{T}}=t+n$, in which case $n$ specifies a future time horizon from current time step $t$. Such an agent would model a future that goes beyond the externally specified last time step $T$. The dependence of ${\hat{T}}$ on $t$ (which we do not denote explicitly) within $\p(a|m)$ is possible since the current time step $t$ is accessible from inspection of the memory state $m_t$ which contains a sensor sequence of length $t$. The generative model assumes that the actions are not influenced by any other variables, hence we have to specify action probabilities. This means that the agent does not model how its actions come about, i.e.\ it does not model its own decision process. Instead, the agent is interested in the (parameters of) the environment and sensor dynamics. It actively sets the probability distributions over past and future actions according to its needs. In practice, it either fixes the probability distributions to particular values (by using Dirac delta distributions) or to values that optimise some measure. We look into the optimisation options in more detail later. Note that the parameters and hyperparameters are standard random variables in the Bayesian network of the model. Also, the rules for calculating probabilities according to this model are just the rules for calculating probabilities in this Bayesian network. In what follows, we assume that the hyperparameters are fixed as ${\Xi^1}={\xi^1},{\Xi^2}={\xi^2},{\Xi^3}={\xi^3}$. The following procedures (including both Bayesian and variational inference) can be generalised to also infer hyperparameters. However, our main reference \citep{friston_active_2015} and most publications on active inference also fix the hyperparameters. 
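As a concrete, purely illustrative instantiation of such a specification, the sketch below assumes small finite state spaces, categorical sensor, environment and initial-state models, and Dirichlet priors whose fixed concentration parameters play the role of the hyperparameters ${\xi^1},{\xi^2},{\xi^3}$; all cardinalities and names are placeholder assumptions and not prescribed by the formal setup.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Placeholder cardinalities for the modelled spaces.
n_E, n_S, n_A = 3, 2, 2    # |hat E|, |hat S|, |hat A|

# Fixed hyperparameters xi^1, xi^2, xi^3: Dirichlet concentration parameters.
xi1 = np.ones((n_E, n_S))         # one Dirichlet per environment state (sensor model)
xi2 = np.ones((n_A, n_E, n_E))    # one Dirichlet per (action, previous state) pair
xi3 = np.ones(n_E)                # Dirichlet for the initial environment state

# Sample parameters theta^i ~ q(theta^i | xi^i).
theta1 = np.array([rng.dirichlet(row) for row in xi1])              # q(s | e, theta1)
theta2 = np.array([[rng.dirichlet(xi2[a, e]) for e in range(n_E)]
                   for a in range(n_A)])                            # q(e'| a', e, theta2)
theta3 = rng.dirichlet(xi3)                                         # q(e_0 | theta3)

def sample_trajectory(actions):
    """Sample (e_{0:T}, s_{0:T}) from the model for a fixed action sequence a_{1:T},
    i.e. with the q(a_t) set to Dirac distributions on the given actions."""
    e = rng.choice(n_E, p=theta3)
    s = rng.choice(n_S, p=theta1[e])
    es, ss = [e], [s]
    for a in actions:
        e = rng.choice(n_E, p=theta2[a, e])
        s = rng.choice(n_S, p=theta1[e])
        es.append(e)
        ss.append(s)
    return es, ss

print(sample_trajectory(actions=[0, 1, 1, 0]))    # a model rollout with T_hat = 4
\end{verbatim}
The next subsection describes how observed sensor values and actions are plugged into such a model to obtain posteriors over the remaining variables.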
\subsection{Bayesian Complete Posteriors} \label{sec:binference} \label{sec:plugin} \begin{figure}% \begin{center} \begin{tikzpicture} [->,>=stealth,auto,node distance=2cm, thick] \tikzset{ hv/.style={to path={-| (\tikztotarget)}}, vh/.style={to path={|- (\tikztotarget)}}, } \tikzset{invi/.style={minimum width=0mm,inner sep=0mm,outer sep=0mm}} \node (e) [] {$\hE_1$}; \node (e') [right of=e] {$\hE_{2}$}; \node (s) [below of=e, node distance=1cm] {$s_1$}; \node (s') [below of=e', node distance=1cm] {$\hS_{2}$}; \node (a) [below of=s, node distance=1cm] {$a_1$}; \node (a') [right of=a] {$\hA_{2}$}; \node (m') [below of=a', node distance=1cm] {}; \node (el) [left of=e] {$\hE_0$}; \node (er) [right of=e'] {}; \node (sl) [below of=el, node distance=1cm] {$s_0$}; \node (sr) [below of=er, node distance=1cm] {}; \node (al) [below of=sl, node distance=1cm] {}; \node (ar) [right of=a'] {}; \node (ml) [below of=al, node distance=1cm] {}; \node (mr) [below of=ar, node distance=1cm] {}; \node (th3) [left of=el] {${\Theta^3}$}; \node (th2) [above of=th3, node distance=1cm] {${\Theta^2}$}; \node (th1) [below of=th3, node distance=3cm] {${\Theta^1}$}; \node (al3) [left of=th3] {${\xi^3}$}; \node (al2) [left of=th2] {${\xi^2}$}; \node (al1) [left of=th1] {${\xi^1}$}; \node (th2') [above of=e', node distance=1cm] {}; \node (th2r) [right of=th2'] {}; \node (c0) [right of=th1, node distance=1cm,invi] {}; \node (c1) [right of=c0,invi] {}; \node (c2) [right of=c1,invi] {}; \node (c3) [right of=c2,invi] {}; \node (c3a) [right of=c2,node distance=1cm,invi] {}; \path (al3) edge (th3) (al2) edge (th2) (al1) edge (th1) (th3) edge (el) (th2) edge[hv] (e) (th2) edge[hv] (e') (th2') edge[-,dotted] (th2r) (th1) edge[-] (c0) (c0) edge[vh] (sl) (c0) edge[-] (c1) (c1) edge[vh] (s) (c1) edge[-] (c2) (c2) edge[vh] (s') (c2) edge[-] (c3a) (c2) edge[-,dotted] (c3a) (c3) edge[-,dotted,vh] (sr) (e) edge node {} (e') (e) edge node {} (s) (e') edge node {} (s') (a) edge[bend left=45,line width=6pt,draw=white] node {} (e) (a') edge[bend left=45,line width=6pt,draw=white] node {} (e') (a) edge[bend left=45] node {} (e) (a') edge[bend left=45] node {} (e') (el) edge node {} (e) (el) edge node {} (sl) (e') edge[-,dotted] node {} (er) (m') edge[-,dotted] node {} (mr) ; \end{tikzpicture} \caption{Internal generative model with plugged in data up to $t=2$ with $\hS_0=s_0,\hS_1=s_1$ and $\hA_1=a_1$ as well as from now on fixed hyperparameters $\xi=({\xi^1},{\xi^2},{\xi^3})$. Conditioning on the plugged in data leads to the posterior distribution $\q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\ha_{t:{\hat{T}}},\theta|sa_{\prec t},\xi)$. Predictions for future sensor values can be obtained by marginalising out other random variables e.g.\ to predict $\hS_2$ we would like to get $\q(\hs_2|s_0,s_1,a_1,\xi)$. Note however that this requires an assumption for the probability distribution over $\hA_2$.} \label{fig:genmodeldata} \end{center} \end{figure} During action generation (i.e.\ within $\p(a|m)$) at time $t$, the agent has retained all its previously perceived sensor states and its previously performed actions in memory. The ``experience'' or data contained in its memory is thus $m_t=sa_{\prec t}$. This data can be plugged into the generative model to obtain posterior probability distributions over all non-observed random variables. Also, the model can estimate the not yet observed sensor values $\hs_{t:{\hat{T}}}$, past and future unobservable environment states $\he_{0:{\hat{T}}}$, parameters $\theta$ and hyperparameters $\xi$. 
These estimations are done by setting: \begin{equation} \hA_\tau = a_\tau \quad \text{for } \tau < t \end{equation} and \begin{equation} \hS_\tau = s_\tau \quad \text{for } \tau < t, \end{equation} as shown in \cref{fig:genmodeldata} for $t=2$. For these assignments to be generally possible, we need to choose $\shA$ and $\shS$ equal to $\sA$ and $\sS$, respectively. The resulting posterior probability distribution over all non-observed random variables is then, according to standard rules of calculating probabilities in a Bayesian network: \begin{align} \label{eq:posterior} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\ha_{t:{\hat{T}}},\theta|sa_{\prec t},\xi):&= \frac{\q(s_{\prec t},\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},a_{\prec t},\ha_{t:{\hat{T}}},\theta,\xi)}{\int\sum_{\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\ha_{t:{\hat{T}}}}\q(s_{\prec t},\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},a_{\prec t},\ha_{t:{\hat{T}}},\theta,\xi)\mathop{\kern0pt\mathrm{d}}\!{} \theta }. \end{align} Eventually, the agent needs to evaluate the consequences of its future actions. Just as it can update the model with respect to past actions and sensor values, the agent can update its evaluations with ``contemplated'' future action sequences $\ha_{t:{\hat{T}}}$. For each such future action sequence $\ha_{t:{\hat{T}}}$, the agent obtains a distribution over the remaining random variables in the model: \begin{align} \label{eq:posteriorgfa} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi):&= \frac{\q(s_{\prec t},\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},a_{\prec t},\ha_{t:{\hat{T}}},\theta,\xi)}{\int\sum_{\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}}}\q(s_{\prec t},\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},a_{\prec t},\ha_{t:{\hat{T}}},\theta,\xi)\mathop{\kern0pt\mathrm{d}}\!{} \theta }. \end{align} We call each such distribution a \textit{Bayesian complete posterior}. We choose the term complete posterior since the ``posterior'' by itself usually refers to the posterior distribution over the parameters and latent variables $\q(\theta,\he_{t-1}|sa_{\prec t},\xi)$ (which we here call a \textit{posterior factor}, see \cref{eq:posteriorgfa2}), while the posterior predictive distributions marginalise out the parameters and latent variables to get $\q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$. The complete posteriors are probability distributions over all random variables in the generative model including parameters, latent variables, and future variables. In this sense the set of all (Bayesian) complete posteriors represents the complete knowledge state of the agent at time $t$ about consequences of future actions after updating the model with past actions and observed sensor values $sa_{\prec t}$. At each time step the sequence of past actions and sensor values is extended from $sa_{\prec t}$ to $sa_{{\prec t}+1}$ (i.e.\ $m_t$ goes to $m_{t+1}$) and a new set of complete posteriors is obtained. All intrinsic motivations discussed in this article evaluate future actions based on quantities that can be derived from the corresponding complete posterior. It is important to note that the complete posterior can be factorised into a factor that carries the influence of the past sensor values and actions (the data) on the parameters $\theta$ and the past environment states $\he_{\prec t}$, and a factor that predicts the future environment states $\he_{t:{\hat{T}}}$ and sensor values $\hs_{t:{\hat{T}}}$ from the future actions $\ha_{t:{\hat{T}}}$, the estimated environment state $\he_{t-1}$, and $\theta$.
Using the conditional independence \begin{align} SA_{\prec t} &\ci \hS_{t:{\hat{T}}},\hE_{t:{\hat{T}}} \mid\hA_{t:{\hat{T}}},\hE_{t-1},\Theta,\Xi, \end{align} which can be identified (via $d$-separation \citep{pearl_causality_2000}) from the Bayesian network in \cref{fig:genmodeldata}, we can rewrite this as: \begin{align} \label{eq:posteriorgfa2} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)&= \q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\q(\he_{\prec t}, \theta|sa_{\prec t},\xi). \end{align} This equation represents the desired factorisation. This formulation separates complete posteriors into a predictive and a posterior factor. The predictive factor is given as part of the generative model (\cref{eq:genmodel}) \begin{align} \q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)= \prod_{r=t}^{\hat{T}} \q(\hs_r|\he_r,{\theta^1}) \q(\he_r|\ha_r,\he_{r-1},{\theta^2}) \end{align} and does not need to be updated through calculations at different time steps. This factor contains the dependence of the complete posterior on future actions. This dependency reflects that, under the given generative model, the consequences of actions for each combination of $\Theta$ and $\hE_{t-1}$ remain the same irrespective of experience. What changes when a new action and sensor value pair comes in is the distribution over the values of $\Theta$ and $\hE_{t-1}$ and with them the \emph{expectations} over consequences of actions. On the other hand, the posterior factor must be updated at every time step. In \cref{sec:postfactorBI}, we sketch the computation which shows that it involves a sum over $|\sE|^t$ elements. This calculation becomes intractable as time goes on and is one of the reasons to use approximate inference methods like variational inference. Due to the above factorisation, we may only need to approximate the posterior factor $\q(\he_{\prec t}, \theta|sa_{\prec t},\xi)$ and use the exact predictive factor if probabilities involving future sensor values or environment states are needed. This is the approach taken e.g.\ in \citet{friston_active_2015}. However, it is also possible to directly approximate parts of the complete posterior involving random variables in both factors, e.g.\ by approximating $\q(\he_{0:{\hat{T}}},{\theta^1}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$. This latter approach is taken in \citet{friston_active_2016} and we see it again in \cref{eq:futurefactorizedpost}, but in this publication the focus is on the former approach. In the next section, we look at the special case of universal reinforcement learning before we go on to variational inference to approximate the posterior factor of the (Bayesian) complete posteriors. \subsection{Connection to Universal Reinforcement Learning} \label{sec:urlmodel} In this section, we relate the generative model of \cref{eq:genmodel} and its posterior predictive distribution to those used by the Bayesian universal reinforcement learning agent. This agent was originally defined by \citet{hutter_universal_2005}. More recent work includes \citet{leike_nonparametric_2016} and, particularly relevant (and sufficient) for our current purpose, \citet{aslanides_universal_2017}. Let us set $\shE=\sM$ with $\sM$ as in \cref{eq:memorystatespace} and let the agent identify each past $sa_{\prec t}$ with a state of the environment, i.e.\: \begin{align} \he_{t-1}=sa_{\prec t}.
\end{align} Under this definition the next environment state $\he_t$ is just the concatenation of the last environment state $sa_{\prec t}$ with the next sensor value $\hs_t$ and the next action selected by the agent $\ha_t$: \begin{align} \he_t=\hs\ha_{\preceq t}=sa_{\prec t} \hs\ha_t. \end{align} So given a next contemplated action $\bar{\ha}_t$, the next environment state $\he_t$ is already partially determined. What remains to be predicted is only the next sensor value $\hs_t$. Formally, this is reflected in the following derivation: \begin{align} \q(\he_t|\bar{\ha}_t,\he_{t-1},{\theta^2}) :&= \q(\hs_t,\ha_t,\hs\ha_{\prec t}|\bar{\ha}_t,sa_{\prec t},{\theta^2})\\ &=\q(\hs_t|\ha_t,\hs\ha_{\prec t},\bar{\ha}_t,sa_{\prec t},{\theta^2}) \q(\ha_t,\hs\ha_{\prec t}|\bar{\ha}_t,sa_{\prec t},{\theta^2})\\ &=\q(\hs_t|\ha_t,\hs\ha_{\prec t},\bar{\ha}_t,sa_{\prec t},{\theta^2}) \delta_{\bar{\ha}_t}(\ha_t) \delta_{sa_{\prec t}}(\hs\ha_{\prec t})\\ &=\q(\hs_t|\bar{\ha}_t,sa_{\prec t},{\theta^2}) \delta_{\bar{\ha}_t}(\ha_t) \delta_{sa_{\prec t}}(\hs\ha_{\prec t}). \end{align} This shows that in this case the model of the next environment state (the left-hand side) is determined by the model of the next sensor value $\q(\hs_t|\bar{\ha}_t,sa_{\prec t},{\theta^2})$. So instead of carrying a distribution over possible models of the next environment state such an agent only needs to carry a distribution over models of the next sensor value. Furthermore, an additional model $\q(\hs|\he,{\theta^1})$ of the dependence of the sensor values on environment states parameterised by ${\theta^1}$ is superfluous. The next sensor value is already predicted by the model $\q(\hs_t|\ha_t,sa_{\prec t},{\theta^2})$. It is therefore possible to drop the parameter ${\theta^1}$. The parameter ${\theta^3}$, for the initial environment state distribution, now parameterises a distribution over the initial sensor value since $\he_0=\hs_0$: \begin{align} \q(\he_0|{\theta^3})=\q(\hs_0|{\theta^3}). \end{align} We can then derive the posterior predictive distribution and show that it coincides with the one given in \citet{aslanides_universal_2017}. For the complete posterior of \cref{eq:posteriorgfa2} we find: \begin{align} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)&= \q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\q(\he_{\prec t}, \theta|sa_{\prec t},\xi) \tag{\ref{eq:posteriorgfa2} \text{ revisited}}\\ &= \q(\he_{t:{\hat{T}}}|\hs_{t:{\hat{T}}},\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\q(\he_{\prec t}, \theta|sa_{\prec t},\xi)\\ &= \q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\theta) \q(\theta|sa_{\prec t},\xi) \prod_{\tau=0}^t \delta_{sa_{\prec \tau}}(\he_\tau) \prod_{\tau=t+1}^{\hat{T}} \delta_{sa_{\prec t}\hs\ha_{t:\tau}}(\he_\tau). \end{align} To translate this formulation into the notation of \citet{aslanides_universal_2017} we first drop the representation of the environment state, which is determined by the sensor values and actions anyway. This means that the complete posterior only needs to predict future sensor values and parameters. Formally, this means the complete posterior can be replaced without loss of generality: \begin{align} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi) \rightarrow \q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\theta) \q(\theta|sa_{\prec t},\xi).
\end{align} To translate notations let $\theta \rightarrow \nu$; $\ha, a \rightarrow a$; $\hs,s \rightarrow e$. Also, set ${\hat{T}}\rightarrow t$ because only one step futures are considered in universal reinforcement learning (this is due to the use of policies instead of future action sequences). Then, the equation for the posterior predictive distribution \begin{align} \q(\hs_t|\ha_t, sa_{\prec t},\xi) = \int \q(\hs_t|\ha_t,sa_{\prec t},\theta) \q(\theta|sa_{\prec t},\xi) \mathop{\kern0pt\mathrm{d}}\!{} \theta, \end{align} is equivalent to \citet[Eq. (5)]{aslanides_universal_2017} (the sum replaces the integral for a countable ${\Delta_\Theta}$): \begin{align} \xi(e|ae_{\prec t},a) &= \sum_{\nu} p(e|\nu,ae_{\prec t},a) p(\nu|ae_{\prec t})\\ \Leftrightarrow \xi(e) &= \sum_{\nu} p(e|\nu) p(\nu), \end{align} where we dropped the conditioning on $ae_{\prec t},a$ from the notation in the second line as done in the original (where this is claimed to improve clarity). Also note that $\xi(e)$ would be written $\q(e|\xi)$ in our notation. In the universal reinforcement learning literature parameters like $\theta$ (or $\nu$) and $\xi$ are sometimes directly used to denote the probability distribution that they parameterise. Updating of the posterior $\q(\theta|sa_{\prec t},\xi)$ in response to new data also coincides with updating of the weights $p(\nu)$: \begin{align} \q(\theta|sa_{\preceq t},\xi) &= \frac{\q(\theta,s_t|a_t,sa_{\prec t},\xi)}{\q(s_t|a_t,sa_{\prec t},\xi)}\\ &= \frac{\q(s_t|a_t,sa_{\prec t},\theta,\xi) \q(\theta|a_t,sa_{\prec t},\xi)}{\q(s_t|a_t,sa_{\prec t},\xi)}\\ &=\frac{\q(s_t|a_t,sa_{\prec t},\theta) \q(\theta|sa_{\prec t},\xi)}{\q(s_t|a_t,sa_{\prec t},\xi)}\\ &=\frac{\q(s_t|a_t,sa_{\prec t},\theta)}{\q(s_t|a_t,sa_{\prec t},\xi)} \q(\theta|sa_{\prec t},\xi). \label{eq:urlupdate} \end{align} The first two lines are general. From the second to third we used \begin{align} S_t \ci \Xi | A_t,SA_{\prec t},\Theta \end{align} and \begin{align} \Theta \ci A_t |SA_{\prec t}, \Xi \end{align} which follow from the Bayesian network structure \cref{fig:genmodel}. In the notation of \citet{aslanides_universal_2017} \cref{eq:urlupdate} becomes \begin{align} p(\nu|e) = \frac{p(e|\nu)}{p(e)} p(\nu). \end{align} This shows that assuming the same model class ${\Delta_\Theta}$ the predictions and belief updates of an agent using the Bayesian complete posterior of \cref{sec:binference} are the same as those of the Bayesian universal reinforcement learning agent. Action selection can then be performed just as in \citet{aslanides_universal_2017} as well. This is done by selecting policies. In the present publication we instead select action sequences directly. However, in both cases the choice maximises the value predicted by the model. More on this in \cref{sec:actionselection}. 
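The weight update of \cref{eq:urlupdate} is easy to spell out for a finite model class. The following is a minimal sketch; the two toy models, their interface, and the binary sensor space are illustrative assumptions and not taken from \citet{aslanides_universal_2017}.
\begin{verbatim}
import numpy as np

# A small, hypothetical model class Delta_Theta: each model nu maps the history
# sa_{<t} and the current action to a distribution over the next sensor value.
def model_sticky(history, a):      # predicts sensor value 0 with high probability
    return np.array([0.9, 0.1])

def model_uniform(history, a):     # predicts uniformly
    return np.array([0.5, 0.5])

models = [model_sticky, model_uniform]
weights = np.array([0.5, 0.5])     # prior p(nu), here uniform

def posterior_update(weights, models, history, a, s):
    """p(nu|e) = p(e|nu) p(nu) / p(e): reweight each model by its likelihood."""
    likelihoods = np.array([m(history, a)[s] for m in models])
    evidence = likelihoods @ weights           # p(e) = sum_nu p(e|nu) p(nu)
    return likelihoods * weights / evidence

# One interaction step: action a_t = 1 was taken and sensor value s_t = 0 observed.
weights = posterior_update(weights, models, history=[], a=1, s=0)
print(weights)   # model_sticky gains weight: [0.643, 0.357] (rounded)
\end{verbatim}
Iterating this update over the whole history reproduces the posterior $\q(\theta|sa_{\prec t},\xi)$ for the finite model class, which is all such an agent needs since the environment state is determined by the history.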
\subsection{Approximate Complete Posteriors} \label{sec:approxpostandvi} \begin{figure}% \begin{center} \begin{tikzpicture} [->,>=stealth,auto,node distance=2cm, thick] \tikzset{ hv/.style={to path={-| (\tikztotarget)}}, vh/.style={to path={|- (\tikztotarget)}}, } \tikzset{invi/.style={minimum width=0mm,inner sep=0mm,outer sep=0mm}} \node (e) [] {$\hE_1$}; \node (s) [below of=e, node distance=1cm] {}; \node (s') [below of=e', node distance=1cm] {}; \node (a) [below of=s, node distance=1cm] {}; \node (a') [right of=a] {}; \node (m') [below of=a', node distance=1cm] {}; \node (el) [left of=e] {$\hE_0$}; \node (er) [right of=e'] {}; \node (sl) [below of=el, node distance=1cm] {}; \node (sr) [below of=er, node distance=1cm] {}; \node (al) [below of=sl, node distance=1cm] {}; \node (ar) [right of=a'] {}; \node (ml) [below of=al, node distance=1cm] {}; \node (mr) [below of=ar, node distance=1cm] {}; \node (th3) [left of=el] {${\Theta^3}$}; \node (th2) [above of=th3, node distance=1cm] {${\Theta^2}$}; \node (th1) [below of=th3, node distance=3cm] {${\Theta^1}$}; \node (al3) [left of=th3] {${\Phi^3}$}; \node (al2) [left of=th2] {${\Phi^2}$}; \node (al1) [left of=th1] {${\Phi^1}$}; \node (phie0) [above of=el,node distance=1cm] {$\Phi^{E_0}$}; \node (phie1) [above of=e,node distance=1cm] {$\Phi^{E_1}$}; \node (th2r) [right of=th2'] {}; \node (c0) [right of=th1, node distance=1cm,invi] {}; \node (c1) [right of=c0,invi] {}; \node (c2) [right of=c1,invi] {}; \node (c3) [right of=c2,invi] {}; \node (c3a) [right of=c2,node distance=1cm,invi] {}; \path (al3) edge (th3) (al2) edge (th2) (al1) edge (th1) (phie0) edge (el) (phie1) edge (e) ; \end{tikzpicture} \caption{Bayesian network of the approximate posterior factor at $t=2$. The variational parameters ${\Phi^1},{\Phi^2},{\Phi^3}$ and $\Phi^{E_\pt}=(\Phi^{E_0},\Phi^{E_1})$ are positioned so as to indicate what dependencies and nodes they replace in the generative model in \cref{fig:genmodel}.} \label{fig:recmodel} \end{center} \end{figure} \begin{figure}% \begin{center} \begin{tikzpicture} [->,>=stealth,auto,node distance=2cm, thick] \tikzset{ hv/.style={to path={-| (\tikztotarget)}}, vh/.style={to path={|- (\tikztotarget)}}, } \tikzset{invi/.style={minimum width=0mm,inner sep=0mm,outer sep=0mm}} \node (e) [] {$\hE_1$}; \node (e') [right of=e] {$\hE_{2}$}; \node (s) [below of=e, node distance=1cm] {}; \node (s') [below of=e', node distance=1cm] {$\hS_2$}; \node (a) [below of=s, node distance=1cm] {}; \node (a') [right of=a] {$\ha_2$}; \node (m') [below of=a', node distance=1cm] {}; \node (el) [left of=e] {$\hE_0$}; \node (er) [right of=e'] {}; \node (sl) [below of=el, node distance=1cm] {}; \node (sr) [below of=er, node distance=1cm] {}; \node (al) [below of=sl, node distance=1cm] {}; \node (ar) [right of=a'] {}; \node (ml) [below of=al, node distance=1cm] {}; \node (mr) [below of=ar, node distance=1cm] {}; \node (th3) [left of=el] {${\Theta^3}$}; \node (th2) [above of=th3, node distance=1cm] {${\Theta^2}$}; \node (th1) [below of=th3, node distance=3cm] {${\Theta^1}$}; \node (th2u) [above of=th2,node distance=.5cm,invi] {}; \node (al3) [left of=th3] {${\Phi^3}$}; \node (al2) [left of=th2] {${\Phi^2}$}; \node (al1) [left of=th1] {${\Phi^1}$}; \node (phie0) [above of=el,node distance=1cm] {$\Phi^{E_0}$}; \node (phie1) [above of=e,node distance=1cm] {$\Phi^{E_1}$}; \node (th2r) [right of=th2'] {}; \node (c0) [right of=th1, node distance=1cm,invi] {}; \node (c1) [right of=c0,invi] {}; \node (c2) [right of=c1,invi] {}; \node (c3) [right of=c2,invi] {}; \node 
(c3a) [right of=c2,node distance=1cm,invi] {}; \path (al3) edge (th3) (al2) edge (th2) (al1) edge (th1) (phie0) edge (el) (phie1) edge (e) (th1) edge[-] (c0) (c0) edge[-] (c1) (c1) edge[-] (c2) (c2) edge[vh] (s') (th2) edge[-] (th2u) (th2u) edge[hv] (e') (th2u) edge[-,dotted,hv] (er) (c2) edge[-] (c3a) (c2) edge[-,dotted] (c3) (c3) edge[-,dotted,vh] (sr) (e) edge node {} (e') (e') edge node {} (s') (a') edge[bend left=45,line width=6pt,draw=white] node {} (e') (a') edge[bend left=45] node {} (e') (e') edge[-,dotted] node {} (er) ; \end{tikzpicture} \caption{Bayesian network of the approximate complete posterior of \cref{eq:apposteriorxi} at $t=2$ for the future actions $\ha_{t:{\hat{T}}}$. Only $\hE_{t-1}, {\Theta^1},{\Theta^2}$ and the future action $\ha_{t:{\hat{T}}}$ appear in the predictive factor and influence future variables. In general there is one approximate complete posterior for each possible sequence $\ha_{t:{\hat{T}}}$ of future actions.} \label{fig:recpred} \end{center} \end{figure} As mentioned in the last section, the complete posterior can be approximated via variational inference \citep[see ][]{attias_variational_1999,winn_variational_2005,bishop_pattern_2011,blei_variational_2017}. There are alternative methods such as belief propagation, expectation propagation \citep{minka_expectation_2001,vehtari_expectation_2014}, and sampling-based methods \citep{lunn_winbugs_2000,bishop_pattern_2011}, but active inference commits to variational inference by framing inference as variational free energy minimisation \citep{friston_active_2015}. Variational free energy (\cref{eq:fesimple}) is just the negative evidence lower bound (ELBO) of standard variational inference \citep[e.g.][]{blei_variational_2017}. In the following, we show how the complete posterior can be approximated via variational inference. The idea behind variational inference is to use a simple family of probability distributions and identify the member of that family which approximates the true complete posterior best. This turns inference into an optimisation problem. According to \citet{wainwright_graphical_2007} this reformulation as an optimisation problem is the essence of variational methods. If the family of distributions is chosen such that it includes the complete posterior then the optimisation will eventually lead to the same result as Bayesian inference. However, one advantage of the formulation as an optimisation is that it can also be performed over a family of probability distributions that is simpler than the family that includes the actual complete posterior. This is what turns variational inference into an approximate inference procedure. Usually, the (simpler) families of probability distributions are chosen as products of independent distributions. Recalling \cref{eq:posteriorgfa2}, the complete posterior as a product of a predictive and a posterior factor is: \begin{align*} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)&= \q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\q(\he_{\prec t}, \theta|sa_{\prec t},\xi). \tag{\ref{eq:posteriorgfa2} revisited} \end{align*} This product is the main object of interest. We want to approximate the formula with a probability distribution that lets us (tractably) calculate the posteriors required by a given intrinsic motivation, which can consequently be used for action selection. 
%
As mentioned before, to approximate the complete posterior we here approximate only the posterior factor and use the given generative model's predictive factor, as is done in \citet{friston_active_2015}.\footnote{A close inspection of \citet[Eq. (9)]{friston_active_2015} shows that the approximate complete posterior that ends up being evaluated by the action-value function is the one we discuss in \cref{eq:apposteriorxi}. It uses the predictive factor to get the probabilities $\r(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},\he_{t-1},\phi)$ of future environment states. However, the approximate posterior in \citet[Eq. (10)]{friston_active_2015} uses a factorisation of all future environment states like the one we give in \cref{eq:futurefactorizedpost}. The probabilities of future environment states in that posterior are not used anywhere in \citet{friston_active_2015}. In principle, they could be used as is done in \citet[Eq. (2.6)]{friston_active_2016} where the complete posterior of \cref{eq:futurefactorizedpost} is used in the action-value function. Both approaches are possible.} The approximate posterior factor is then combined with the exact predictive factor to get the approximate complete posterior. Let us write $\r(\he_{\prec t},\theta|\phi)$ for the approximate posterior factor (\cref{fig:recmodel}), defined as: \begin{align} \label{eq:justappost} \r(\he_{\prec t},\theta|\phi):&=\r(\he_{\prec t}|\phi^{E_\pt})\r(\theta|\phi)\\ :&=\prod_{\tau=0}^{t-1} \r(\he_\tau|{\phi^{E_\tau}}) \prod_{i=1}^3 \r(\theta^i|\phi^i). \end{align} As we can see, it models each of the random variables that the posterior factor ranges over as independent of all others. This is called a \textit{mean field} approximation. Then, the approximate complete posterior (\cref{fig:recpred}) is: \begin{align} \label{eq:apposteriorxi} \r(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},\phi):&= \q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\r(\he_{\prec t}, \theta|\phi). \end{align} Note that the variational parameter absorbs the hyperparameter $\xi$ as well as the past sensor values and actions $sa_{\prec t}$. The parameter does not absorb future actions, which are part of the predictive factor. The dependence on future actions needs to be kept if we want to select actions using the approximate complete posterior. We have: \begin{align} \r(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},\phi) \approx \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi) \end{align} if \begin{equation} \r(\he_{\prec t}, \theta|\phi) \approx \q(\he_{\prec t}, \theta|sa_{\prec t},\xi). \end{equation} This approximation can be achieved by standard variational inference methods. For those interested in the approximation of the complete posterior as in \citet{friston_active_2016}, we also provide the family of factorised distributions used there. It must be noted that the agent in this case carries a separate approximate posterior for each possible complete action sequence $\ha_{0:T}$. For predictions of environment states, it does not use the predictive factor, but instead looks at the set of generative models compatible with the past. For each of those, the agent considers all environment states at different times as independent.
The approximate posteriors, compatible with a past sequence of actions $a_{\prec t}$, are of the form: \begin{align} \label{eq:futurefactorizedpost} \r(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},{\theta^1}|\ha_{t:{\hat{T}}},a_{\prec t},{\phi^1}) = \q(\hs_{t:{\hat{T}}}|\he_{t:{\hat{T}}},{\theta^1})\prod_{\tau=0}^{\hat{T}} \r(\he_\tau|\ha_{t:{\hat{T}}},a_{\prec t},{\phi^{E_\tau}}) \r({\theta^1}|{\phi^1}). \end{align} Note also that the relation between sensor values and environment states is still provided by the generative model's sensor dynamics $\q(\hs_{t:{\hat{T}}}|\he_{t:{\hat{T}}},{\theta^1})$. In this article, however, we focus on the approach in \citet{friston_active_2015}, which requires only one approximate posterior at time $t$ since future actions only occur in the predictive factors, which we do not approximate. We define the relative entropy (or $\KL$-divergence) between the approximate and the true posterior factor: \begin{align} \KL[\r(\hE_{\prec t},\Theta|\phi)||\q(\hE_{\prec t},\Theta|sa_{\prec t},\xi)]:= \sum_{\he_{\prec t}}\int \r(\he_{\prec t},\theta|\phi) \log \frac{\r(\he_{\prec t},\theta|\phi)}{\q(\he_{\prec t},\theta|sa_{\prec t},\xi)} \mathop{\kern0pt\mathrm{d}}\!{} \theta. \end{align} Note that we indicate the variables that are summed over by capitalising them. The $\KL$-divergence quantifies the difference between the two distributions. It is non-negative, and only zero if the approximate and the true posterior factor are equal \citep[see e.g.~][]{cover_elements_2006}. The variational free energy, also known as the (negative) evidence lower bound (ELBO) in variational inference literature, is defined as: \begin{align} \label{eq:fesimple} \F[\phi,sa_{\prec t},\xi]:&=\sum_{\he_{\prec t}} \int \r(\he_{\prec t},\theta|\phi) \log \frac{\r(\he_{\prec t},\theta|\phi)}{\q(s_{\prec t},\he_{\prec t},\theta|a_{\prec t},\xi)} \mathop{\kern0pt\mathrm{d}}\!{} \theta\\ &= - \log \q(s_{\prec t}|a_{\prec t},\xi) + \KL[\r(\hE_{\prec t},\Theta|\phi)||\q(\hE_{\prec t},\Theta|sa_{\prec t},\xi)] \label{eq:elbo2} \end{align} The first term in \cref{eq:elbo2} is the surprise or negative log evidence. For a fixed hyperparameter $\xi$ it is a constant. Minimising the variational free energy therefore directly minimises the $\KL$-divergence between the true and the approximate posterior factor given $sa_{\prec t}$ and $\xi$. In our case, variational inference amounts to solving the optimisation problem: \begin{align} \label{eq:vi} \phi^*_{sa_{\prec t},\xi}:=\argmin_\phi \F[\phi,sa_{\prec t},\xi]. \end{align} This optimisation is a standard problem. See \citet{bishop_pattern_2011,blei_variational_2017} for ways to solve it. The resulting variational parameters $\phi^*_{sa_{\prec t},\xi}=(\phi^{E_0}_{sa_{\prec t},\xi},...,\phi^{E_{t-1}}_{sa_{\prec t},\xi},{\phi^1}_{sa_{\prec t},\xi},{\phi^2}_{sa_{\prec t},\xi},{\phi^3}_{sa_{\prec t},\xi})$ define the approximate posterior factor. The variational parameters, together with the exact predictive factors, allow us to compute the approximate complete posteriors for each sequence of future actions $\ha_{t:{\hat{T}}}$: \begin{align} \r(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},\phi^*_{sa_{\prec t},\xi}) &= \q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\r(\he_{\prec t}, \theta|\phi^*_{sa_{\prec t},\xi})\\ &\approx \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi). \end{align} In the next section, we look at action selection as the second component of action generation.
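Before turning to action selection, readers who want to see the optimisation in \cref{eq:vi} spelled out may find the following minimal sketch of mean-field coordinate ascent useful. It relies on the simplifying assumption that all latent variables (the past environment states and a discretised grid of parameter values) take finitely many values, so that the unnormalised joint $\q(s_{\prec t},\he_{\prec t},\theta|a_{\prec t},\xi)$ with the data plugged in fits into a single array; the function names and toy sizes are illustrative.
\begin{verbatim}
import numpy as np

def mean_field_cavi(log_joint, n_sweeps=50, seed=0):
    """Coordinate ascent on the variational free energy for a discretised joint.

    log_joint: array whose k-th axis ranges over the values of the k-th latent
    variable (axes for e_0, ..., e_{t-1} and a discretised theta); entries are
    log q(s_{<t}, e_{<t}, theta | a_{<t}, xi) with the observed data plugged in.
    Returns one categorical factor r_k (the variational parameters) per axis.
    """
    rng = np.random.default_rng(seed)
    factors = [rng.random(dim) for dim in log_joint.shape]
    factors = [f / f.sum() for f in factors]
    for _ in range(n_sweeps):
        for j in range(log_joint.ndim):
            # Optimal mean-field update: log r_j(x_j) = E_{r_(-j)}[log joint] + const.
            expected = log_joint
            for i, f in enumerate(factors):
                if i != j:
                    shape = [1] * log_joint.ndim
                    shape[i] = f.size
                    expected = (expected * f.reshape(shape)).sum(axis=i, keepdims=True)
            log_r = expected.reshape(-1) - expected.max()   # numerical stability
            r = np.exp(log_r)
            factors[j] = r / r.sum()
    return factors

# Toy usage: two past environment states (3 values each) and a theta grid (4 values).
rng = np.random.default_rng(1)
log_joint = np.log(rng.random((3, 3, 4)))   # stands in for the plugged-in joint
phi_star = mean_field_cavi(log_joint)       # plays the role of phi^*_{sa_{<t}, xi}
print([np.round(f, 3) for f in phi_star])
\end{verbatim}
With conjugate priors the corresponding updates over continuous parameters can typically be carried out in closed form; the discretisation here only serves to keep the sketch short. The overall structure is the same: each factor is refreshed given the current estimates of all others until the free energy stops decreasing.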
There, we show how to evaluate sequences of future actions $\ha_{t:{\hat{T}}}$ by evaluating either Bayesian complete posteriors or the approximate complete posteriors. \section{Action Selection Based on Intrinsic Motivations} \label{sec:aselectandim} \subsection{Intrinsic Motivation and Action-Value Functions} The previous section resulted in sets of Bayesian or approximate complete posteriors. Independently of whether a complete posterior is the approximate or the Bayesian version, it represents the entire knowledge of the agent about the consequences of the sequence of future actions $\ha_{t:{\hat{T}}}$ that is associated with it. In order to evaluate sequences of future actions, the agent can only rely on its knowledge, which suggests that all such evaluations should depend solely on complete posteriors. One could argue that the motivation might also depend directly on the memory state containing $sa_{\prec t}$. We here take a position somewhat similar to the one proposed by \citet{schmidhuber_formal_2010} that intrinsic motivation concerns the ``learning of a better world model''. We consider the complete posterior as the current world model and assume that intrinsic motivations depend only on this model and not on the exact values of past sensor values and actions. As we will see, this assumption is also enough to capture the three intrinsic motivations that we discuss here. This level of generality is sufficient for our purpose of extending the free energy principle. Whether it is sufficient for a final and general intrinsic motivation definition is beyond the scope of this publication. Complete posteriors are essentially conditional probability distributions over $\shS^{{\hat{T}}-t+1}\times \shE^{{\hat{T}}+1}\times {\Delta_\Theta}$ given elements of $\shA^{{\hat{T}}-t+1}$. A necessary (but not sufficient) requirement for intrinsic motivations in our context (agents with generative models) is then that they are functions on the space of such conditional probability distributions. Let $\Delta_{\shS^{{\hat{T}}-t+1}\times \shE^{{\hat{T}}+1}\times {\Delta_\Theta}|\shA^{{\hat{T}}-t+1}}$ be the space of conditional probability distributions over $\shS^{{\hat{T}}-t+1}\times \shE^{{\hat{T}}+1}\times {\Delta_\Theta}$ given elements of $\shA^{{\hat{T}}-t+1}$. Then an \textit{intrinsic motivation} is a function $\mathfrak{M}: \Delta_{\shS^{{\hat{T}}-t+1}\times \shE^{{\hat{T}}+1}\times {\Delta_\Theta}|\shA^{{\hat{T}}-t+1}} \times \shA^{{\hat{T}}-t+1} \rightarrow \mathbb{R}$ taking a probability distribution $\d(.,.,.|.) \in \Delta_{\shS^{{\hat{T}}-t+1}\times \shE^{{\hat{T}}+1}\times {\Delta_\Theta}|\shA^{{\hat{T}}-t+1}}$ and a given future action sequence $\ha_{t:{\hat{T}}} \in \shA^{{\hat{T}}-t+1}$ to a real value $\mathfrak{M}(\d(.,.,.|.),\ha_{t:{\hat{T}}}) \in \mathbb{R}$. We can then see that the Bayesian complete posterior $\q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ for a fixed past $sa_{\prec t}$ written as $\q(.,.,.|.,sa_{\prec t},\xi)$ provides such a conditional probability distribution. Similarly, every member of the family of distributions used to approximate the Bayesian complete posterior via variational inference $\r(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},\phi)$ written as $\r(.,.,.|.,\phi)$ also provides such a conditional probability distribution.
It will become important when discussing active inference that the optimised value $\phi^*_{sa_{\prec t},\xi}$ of the variational parameters as well as any other value of the variational parameters $\phi$ define an element with the right structure to be evaluated together with a set of future actions by an intrinsic motivation function. Using intrinsic motivation functions we then define two kinds of induced action-value functions. These are similar to value functions in reinforcement learning. \footnote{The main difference is that the action-value functions here evaluate sequences of future actions as opposed to policies. This is the prevalent practice in active inference literature including \citet{friston_active_2015} and we therefore follow it here. } The first is the \textit{Bayesian action-value function} (or functional): \begin{equation} \label{eq:biactionvalue} {\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi):= \mathfrak{M}(\q(.,.,.|.,sa_{\prec t},\xi),\ha_{t:{\hat{T}}}). \end{equation} In words the Bayesian action-value function ${\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ infers the set of Bayesian complete posteriors of past experience $sa_{\prec t}$ and then evaluates the sequence of future actions $\ha_{t:{\hat{T}}}$ according to the intrinsic motivation function $\mathfrak{M}$. The \textit{variational action-value function} is defined as\footnote{We abuse notation here by reusing the same symbol ${\hat{Q}}$ for the variational action-value function as for the Bayesian action-value function. However, in this publication the argument ($sa_{\prec t},\xi$ or $\phi$) always indicates which one is meant.}: \begin{equation} {\hat{Q}}(\ha_{t:{\hat{T}}},\phi):= \mathfrak{M}(\r(.,.,.|.,\phi),\ha_{t:{\hat{T}}}). \end{equation} So the variational action-value function ${\hat{Q}}(\ha_{t:{\hat{T}}},\phi)$ directly takes the conditional probability distribution defined by variational parameter $\phi$ and evaluates the sequence of future actions $\ha_{t:{\hat{T}}}$ according to $\mathfrak{M}$. Unlike in the Bayesian case no inference takes place during the evaluation of ${\hat{Q}}(\ha_{t:{\hat{T}}},\phi)$. At the same time, after variational inference, if we plug in $\phi^*_{sa_{\prec t},\xi}$ for $\phi$ we have: \begin{align} \label{eq:actionvalueapprox} {\hat{Q}}(\ha_{t:\hTa},\phi^*_{sa_{\prec t},\xi})\approx {\hat{Q}}(\ha_{t:\hTa},sa_{\prec t},\xi). \end{align} Note that the reason we have placed a hat on ${\hat{Q}}$ is that, even in the Bayesian case, it is usually not the optimal action-value function but instead is an estimate based on the current knowledge state represented by the complete posteriors of the agent. Also note that some intrinsic motivations (e.g.\ empowerment) evaluate e.g.\ the next $n$ actions by using predictions reaching $n+m$ steps into the future. This means that they need all complete posteriors for $\ha_{t:t+n+m-1}$ but only evaluate the actions $\ha_{t:t+n-1}$. In other words they cannot evaluate actions up to their generative model's time-horizon ${\hat{T}}$ but only until a shorter time-horizon ${{\hat{T}}_a}={\hat{T}}-m$ for some natural number $m$. When necessary we indicate such a situation by only passing shorter future action sequences $\ha_{t:\hTa}$ to the action-value function, in turn, the intrinsic motivation function. The respective posteriors keep the original time horizon ${\hat{T}} > {{\hat{T}}_a}$. 
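In code, the separation between the intrinsic motivation function $\mathfrak{M}$ and the induced action-value function ${\hat{Q}}$ can be made explicit as follows. The sketch assumes the complete posterior is available as a callable returning a distribution over future sensor sequences, and it uses a placeholder motivation (negative entropy of the predicted sensor sequence) purely to have something concrete to maximise; it is not one of the motivations discussed in this article.
\begin{verbatim}
import itertools
import math

def neg_predicted_sensor_entropy(posterior, future_actions):
    """Placeholder intrinsic motivation M(d, a_{t:T}): negative entropy of the
    predicted sensor sequence under the complete posterior d(. | a_{t:T})."""
    p_s = posterior(future_actions)          # dict: sensor sequence -> probability
    return sum(p * math.log(p) for p in p_s.values() if p > 0)

def action_value(motivation, posterior):
    """Builds Q_hat(a_{t:T}) = M(d, a_{t:T}) from a motivation and a complete posterior."""
    return lambda future_actions: motivation(posterior, future_actions)

# Toy complete posterior over one future binary sensor value for one future action.
def toy_posterior(future_actions):
    if future_actions == (0,):
        return {(0,): 0.5, (1,): 0.5}        # unpredictable outcome
    return {(0,): 0.9, (1,): 0.1}            # more predictable outcome

q_hat = action_value(neg_predicted_sensor_entropy, toy_posterior)
best = max(itertools.product([0, 1], repeat=1), key=q_hat)
print(best)   # (1,): the action with the more predictable outcome scores higher
\end{verbatim}
Exchanging the posterior argument between a Bayesian and a variational complete posterior, or exchanging the motivation function, leaves this structure unchanged; this is exactly the modularity exploited in the remainder of the article.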
\subsection{Deterministic and Stochastic Action Selection} \label{sec:actionselection} We can then select actions simply by picking the first action in the sequence $\ha_{t:{\hat{T}}}$ that maximises the Bayesian action-value function: \begin{align} \label{eq:argmaxaction} \ha_{t:{\hat{T}}}^*(m_t):=\ha_{t:{\hat{T}}}^*(sa_{\prec t}):=\argmax_{\ha_{t:{\hat{T}}}} {\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi) \end{align} and set \begin{align} \ha^*(m_t):=\ha_t^*(m_t), \end{align} or, for the variational action-value function, \begin{align} \ha_{t:{\hat{T}}}^*(m_t):=\ha_{t:{\hat{T}}}^*(\phi^*_{sa_{\prec t},\xi}):=\argmax_{\ha_{t:{\hat{T}}}} {\hat{Q}}(\ha_{t:{\hat{T}}},\phi^*_{sa_{\prec t},\xi}) \end{align} and again set \begin{align} \ha^*(m_t):=\ha_t^*(m_t). \end{align} This then results in a deterministic action generation $\p(a|m)$: \begin{align*} \p(a_t|m_t):=\delta_{\ha^*(m_t)}(a_t). \end{align*} We note here that in the case of universal reinforcement learning the role of ${\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ is played by $V^\pi_\xi(sa_{\prec t})$. There, $\pi$ is a policy that selects actions in dependence on the entire past $sa_{\prec t}$, and $\xi$ parameterises the posterior just as in the present publication. The $\argmax$ in \cref{eq:argmaxaction} then selects a policy instead of an action sequence, and that policy is used for the action generation. A possible stochastic action selection that is important for active inference is choosing the action according to a so-called softmax policy \citep{sutton_reinforcement_1998}: \begin{align} \label{eq:softmax} \p(a_t|m_t):=\sum_{\ha_{{t+1}:{\hat{T}}}}\frac{1}{Z(\gamma,sa_{\prec t},\xi)} e^{\gamma {\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi)} \end{align} where: \begin{align} Z(\gamma,sa_{\prec t},\xi):= \sum_{\ha_{t:{\hat{T}}}} e^{\gamma {\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi)} \end{align} is a normalisation factor. Note that we are marginalising out later actions in the sequence $\ha_{t:{\hat{T}}}$ to get a distribution only over the action $\ha_t$. For the variational action-value function this becomes: \begin{align} \label{eq:softmax2} \p(a_t|m_t):=\sum_{\ha_{{t+1}:{\hat{T}}}}\frac{1}{Z(\gamma,\phi^*_{sa_{\prec t},\xi})} e^{\gamma {\hat{Q}}(\ha_{t:{\hat{T}}},\phi^*_{sa_{\prec t},\xi})} \end{align} where: \begin{align} Z(\gamma,\phi^*_{sa_{\prec t},\xi}):= \sum_{\ha_{t:{\hat{T}}}} e^{\gamma {\hat{Q}}(\ha_{t:{\hat{T}}},\phi^*_{sa_{\prec t},\xi})}. \end{align} Since it is relevant for active inference (see \cref{sec:activeinference}), note that the softmax distribution over future actions can also be defined for arbitrary $\phi$ and not only for the optimised $\phi^*_{sa_{\prec t},\xi}$. At the same time, the softmax distribution for the optimised $\phi^*_{sa_{\prec t},\xi}$ clearly also approximates the softmax distribution of the Bayesian action-value function. Softmax policies assign action sequences with higher values of ${\hat{Q}}$ higher probabilities. They are often used as a replacement for the deterministic action selection to introduce some exploration. Here, lower $\gamma$ leads to higher exploration; conversely, in the limit where $\gamma \rightarrow \infty$ the softmax turns into the deterministic action selection. From an intrinsic motivation point of view, such additional exploration should be superfluous in many cases since many intrinsic motivations try to directly drive exploration by themselves.
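Both selection rules are easy to sketch once an action-value function like the one above is available. The following illustration assumes a small discrete action space so that all sequences $\ha_{t:{\hat{T}}}$ can be enumerated; the toy action-value function is a placeholder, and $\gamma$ is the inverse temperature of \cref{eq:softmax}.
\begin{verbatim}
import itertools
import numpy as np

def select_action_deterministic(q_hat, actions, horizon):
    """argmax over all sequences a_{t:T}; only the first action is executed."""
    sequences = list(itertools.product(actions, repeat=horizon))
    return max(sequences, key=q_hat)[0]

def select_action_softmax(q_hat, actions, horizon, gamma=1.0, rng=None):
    """Softmax policy: p(a_t) proportional to the sum over completions of exp(gamma Q)."""
    rng = rng or np.random.default_rng()
    sequences = list(itertools.product(actions, repeat=horizon))
    weights = np.exp(gamma * np.array([q_hat(seq) for seq in sequences]))
    p_first = np.zeros(len(actions))
    for seq, w in zip(sequences, weights):
        p_first[actions.index(seq[0])] += w   # marginalise out the later actions
    p_first /= p_first.sum()
    return rng.choice(actions, p=p_first)

# Toy action-value function that favours sequences starting with action 1.
q_toy = lambda seq: 1.0 if seq[0] == 1 else 0.0
print(select_action_deterministic(q_toy, actions=[0, 1], horizon=3))       # 1
print(select_action_softmax(q_toy, actions=[0, 1], horizon=3, gamma=2.0))  # mostly 1
\end{verbatim}
As $\gamma$ grows the softmax concentrates on the maximising sequences and the two rules coincide, matching the limit discussed above.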
Another interpretation of such a choice is to see $\gamma$ as a trade-off factor between the processing cost of choosing an action precisely and achieving a high action-value. The lower $\gamma$, the higher the cost of precision. This leads to the agent more often taking actions that do not attain maximum action-value. We note that the softmax policy is not the only possible stochastic action selection mechanism. Another option discussed in the literature is Thompson sampling \citep{ortega_minimum_2010,ortega_generalized_2014,aslanides_universal_2017}. In our framework this corresponds to a two-step action selection procedure where we first sample an environment state and parameter pair $(\bar{\he}_{t-1},\bar{\theta})$ from a posterior factor (Bayesian or variational) \begin{align} (\bar{\he}_{t-1},\bar{\theta}) \sim \d(\hE_{t-1},\Theta|sa_{\prec t},\xi) \end{align} and then plug the corresponding predictive factor $\q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},\bar{\he}_{t-1},\bar{\theta})$ into the action-value function \begin{equation} {\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi):=\mathfrak{M}(\q(.,.|.,\bar{\he}_{t-1},\bar{\theta}),\ha_{t:{\hat{T}}}). \end{equation} This allows intrinsic motivations that only evaluate the probability distribution over future sensor values $\hS_{t:{\hat{T}}}$ and environment states $\hE_{t:{\hat{T}}}$. However, it rules out those that evaluate the posterior probability of environment parameters $\Theta$ because we sample a specific $\bar{\theta}$. \subsection{Intrinsic Motivations} Now, we look at some intrinsic motivations including the intrinsic motivation part underlying Friston's active inference. In the definitions, we use $\d(.,.,.|.) \in \Delta_{\shS^{{\hat{T}}-t+1}\times \shE^{{\hat{T}}+1}\times {\Delta_\Theta}|\shA^{{\hat{T}}-t+1}}$ as a generic conditional probability distribution. The generic symbol $\d$ is used since it represents both Bayesian complete posteriors and approximate complete posteriors. In fact, the definitions of the intrinsic motivations are agnostic with respect to the method used to obtain a complete posterior. In the present context, it is important that these definitions are general enough to induce both Bayesian and variational action-value functions. We usually state the definition of the motivation function using general expressions (e.g.\ marginalisations) derived from $\d(.,.,.|.)$. Also, we look at how they can be obtained from Bayesian complete posteriors to give the reader an intuition for the computations involved in applications. The approximate complete posterior usually makes these calculations easier and we will present an example of this. \subsubsection{Free Energy Principle} \label{sec:fep} Here, we present the non-variational Bayesian inference versions for the expressions that occur in the ``expected free energy'' in \citet{friston_active_2015,friston_active_curiosity_2017}. These papers only include approximate expressions after variational inference. Most of the expressions we give here can be found in \citet{friston_graphical_2017}. The exception is \cref{eq:infogain}, which can be obtained from an approximate term in \citet{friston_active_curiosity_2017} in the same way that the non-variational Bayesian inference terms in \citet{friston_graphical_2017} are obtained from the approximate ones in \citet{friston_active_2015}. In the following, we can set ${{\hat{T}}_a}={\hat{T}}$, since actions are only evaluated with respect to their immediate effects. According to \citet[Eq.
(A.2) supplementary material]{friston_graphical_2017}, the ``expected free energy'' is just the future conditional entropy of sensor values\footnote{The original text refers to this as the ``expected entropy of outcomes'', not the expected conditional entropy of outcomes. Nonetheless, the associated Equation (A.2) in the original is identical to ours.} given environment states. Formally, this is (with a negative sign to make minimising expected free energy equivalent to maximising the action-value function): \begin{align} \label{eq:fepentropy} \mathfrak{M}(\d(.,.,.|.),\ha_{t:{\hat{T}}}) :&= \sum_{\he_{t:{\hat{T}}}} \d(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}) \sum_{\hs_{t:{\hat{T}}}} \d(\hs_{t:{\hat{T}}}|\he_{t:{\hat{T}}}) \log \d(\hs_{t:{\hat{T}}}|\he_{t:{\hat{T}}})\\ &= - \sum_{\he_{t:{\hat{T}}}} \d(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}) \HS_{\d}(\hS_{t:{\hat{T}}}|\he_{t:{\hat{T}}})\\ &=-\HS_{\d}(\hS_{t:{\hat{T}}}|\hE_{t:{\hat{T}}},\ha_{t:{\hat{T}}}). \end{align} Note that we indicate the probability distribution $\d$ used to calculate entropies $\HS_{\d}(X)$ or mutual informations $\I_{\d}(X:Y)$ in the subscript. Furthermore, we indicate the variables that are summed over with capital letters and those that are fixed (e.g.\ $\ha_{t:{\hat{T}}}$ above) with lower case letters. In the case where $\d(.,.,.|.)$ is the Bayesian complete posterior $\q(.,.,.|.,sa_{\prec t},\xi)$, it uses the predictive distribution of environment states $\q(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ and the posterior of the conditional distribution of sensor values given environment states $\q(\hs_{t:{\hat{T}}}|\he_{t:{\hat{T}}},sa_{\prec t},\xi)$. As we see next, both distributions can be obtained from the Bayesian complete posterior. The former distribution is a familiar expression in hierarchical Bayesian models and corresponds to a posterior predictive distribution or predictive density \citep[cf. e.g.][Eq. (3.74)]{bishop_pattern_2011} that can be calculated via: \begin{align} \q(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)&= \int \sum_{\hs_{t:{\hat{T}}},\he_{\prec t}} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi) \mathop{\kern0pt\mathrm{d}}\!{} \theta \\ &= \int \sum_{\hs_{t:{\hat{T}}},\he_{\prec t}} \q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\q(\he_{\prec t}, \theta|sa_{\prec t},\xi) \mathop{\kern0pt\mathrm{d}}\!{} \theta \\ &= \int \sum_{\he_{t-1}} \q(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta) \q(\he_{t-1}, \theta|sa_{\prec t},\xi) \mathop{\kern0pt\mathrm{d}}\!{} \theta, \label{eq:postprede} \end{align} where we split the complete posterior into the predictive and posterior factor and then marginalised out environment states $\he_{{\prec t}-1}$ since the predictive factor does not depend on them. Note that, in practice, this marginalisation corresponds to a sum over $|\sE|^{t-1}$ terms and therefore has a computational cost that grows exponentially in time. However, if we use the approximate complete posterior such that $\d(.,.,.|.)=\r(.,.,.|.,\phi)$, we see from \cref{eq:apposteriorxi} that $\q(\he_{\prec t}, \theta|sa_{\prec t},\xi)$ is replaced by $\r(\he_{\prec t}, \theta|\phi)$, which is defined as (\cref{eq:justappost}): \begin{align} \r(\he_{\prec t}, \theta|\phi) :=\prod_{\tau=0}^{t-1} \r(\he_\tau|{\phi^{E_\tau}}) \prod_{i=1}^3 \r(\theta^i|\phi^i).
\end{align} This means that $\r(\he_{t-1}, \theta|\phi)$ is just $\r(\he_{t-1}|\phi^{E_{t-1}})\r(\theta|\phi)$, which we obtain directly from the variational inference without any marginalisation. Since exact Bayesian inference increases in computational cost exponentially in time, this simplification is a significant advantage. This formulation leaves an integral over $\theta$ or, more precisely, a triple integral over the three ${\theta^1},{\theta^2},{\theta^3}$. However, if the $\q(\theta^i|\xi^i)$ are chosen as conjugate priors to $\q(\hs|\he,{\theta^1}),\q(\he'|\ha',\he,{\theta^2}),\q(\he_0|{\theta^3})$ respectively, then these integrals can be calculated analytically (compare the similar calculation of $\q(\he_{\prec t}, \theta|sa_{\prec t},\xi)$ in \cref{sec:postfactorBI}). The remaining computational problem is only the sum over all $\he_{t-1}$. The latter term (the posterior conditional distribution over sensor values given environment states) can be obtained via \begin{align} \q(\hs_{t:{\hat{T}}}|\he_{t:{\hat{T}}},sa_{\prec t},\xi)&= \q(\hs_{t:{\hat{T}}}|\he_{t:{\hat{T}}},\ha_{t:{\hat{T}}},sa_{\prec t},\xi)\\ &=\frac{\q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)}{\q(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)}. \label{eq:postsge} \end{align} Here, the first equation holds since \begin{equation} \hS_{t:{\hat{T}}} \ci \hA_{t:{\hat{T}}} \mid \hE_{t:{\hat{T}}},SA_{\prec t}. \end{equation} Both numerator and denominator can be obtained from the complete posterior via marginalisation, as for the former term. This marginalisation also shows that the intrinsic motivation function, \cref{eq:fepentropy}, is a functional of the complete posteriors or $\d(.,.,.|.)$. In most publications on active inference, the conditional entropy term of \cref{eq:fepentropy} is only part of what is referred to as the expected free energy. Usually, there is a second term measuring the relative entropy to an externally specified \textit{prior over future outcomes} (also called ``predictive distribution encoding goals'' \citealt{friston_active_2015}), i.e.\ a desired probability distribution $\p^d(\hs_{t:{\hat{T}}})$. The relative entropy term is formally given by: \begin{align} \label{eq:extrinsicvalue} \KL[\d(\hS_{t:{\hat{T}}}|\ha_{t:{\hat{T}}})|| \p^d(\hS^d_{t:{\hat{T}}})]=\sum_{\hs_{t:{\hat{T}}}} \d(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}) \log \frac{\d(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}})}{\p^d(\hs_{t:{\hat{T}}})}. \end{align} Clearly, this term will lead the agent to act such that the future distribution over sensor values is similar to the desired distribution. Since this term is used to encode extrinsic value for the agent, we mostly ignore it in this publication. It could be included in any of the following intrinsic motivations. In \citet{friston_active_curiosity_2017} yet another term, called ``negative novelty'' or ``ignorance'', occurs in the expected free energy. This term concerns the posterior distribution over the parameter ${\theta^1}$. It can be slightly generalised to refer to any subset of the parameters $\theta=({\theta^1},{\theta^2},{\theta^3})$.
We can write it as a conditional mutual information between future sensor values and parameters (the ``ignorance'' is the negative of this): \begin{align} \label{eq:infogain} \I_{\d}(\hS_{t:{\hat{T}}}:\Theta|\ha_{t:{\hat{T}}})=\sum_{\hs_{t:{\hat{T}}}} \d(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}) \int \d(\theta|\hs_{t:{\hat{T}}},\ha_{t:{\hat{T}}}) \log \frac{\d(\theta|\hs_{t:{\hat{T}}},\ha_{t:{\hat{T}}})}{\d(\theta)} \mathop{\kern0pt\mathrm{d}}\!{} \theta. \end{align} This is identical to the information gain used in knowledge seeking agents. The necessary posteriors in the Bayesian case are $\q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$, $\q(\theta|\hs_{t:{\hat{T}}},\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ and $\q(\theta|sa_{\prec t},\xi)$ with \begin{align} \label{eq:postpreds} \q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi) &= \int \sum_{\he_{\prec t}} \q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta) \q(\he_{\prec t}, \theta|sa_{\prec t},\xi) \mathop{\kern0pt\mathrm{d}}\!{} \theta \end{align} a straightforward (if costly) marginalisation of the complete posterior. Just like previously for $\q(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$, the marginalisation is greatly simplified in the variational case (see \cref{sec:sgapost} for a more explicit calculation). The integrals can be computed if using conjugate priors. The other two posteriors can be obtained via \begin{align} \q(\theta|\hs_{t:{\hat{T}}},\ha_{t:{\hat{T}}},sa_{\prec t},\xi) = \frac{1}{\q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)} \sum_{\he_{0:{\hat{T}}}} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi) . \end{align} and \begin{align} \q(\theta|sa_{\prec t},\xi) &= \q(\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)\\ &= \sum_{\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}}} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi) . \end{align} In the latter equation we used \begin{align} \hA_{t:{\hat{T}}} \ci \Theta | SA_{\prec t}. \end{align} The marginalisations grow exponentially in computational cost with ${\hat{T}}$. In this case, the variational approximation only reduces the necessary marginalisation over $\he_{{\prec t}-1}$ to one over $\he_{t-1}$, but the marginalisation over future environment states $\he_{t:{\hat{T}}}$ and sensor values $\hs_{t:{\hat{T}}}$ remains the same since we use the exact predictive factor. In practice the time horizon into the future ${\hat{T}} - t$ must then be chosen sufficiently short, so that marginalising out $\he_{t:{\hat{T}}}$ and $\hS_{t:{\hat{T}}}$ is feasible. Together with the variational approximation the required marginalisations over past and future are then constant over time which makes the implementation of agents with extended lifetimes possible. The combination of the conditional entropy term and the information gain defines the (intrinsic part) of the action-value function of Friston's active inference (or free energy principle): \begin{align} \label{eq:fullfep} \mathfrak{M}^{FEP}(\d(.,.,.|.),\ha_{t:{\hat{T}}}) = -\HS_{\d}(\hS_{t:{\hat{T}}}|\hE_{t:{\hat{T}}})+\I_{\d}(\hS_{t:{\hat{T}}}:\theta|\ha_{t:{\hat{T}}}) \end{align} In the active inference literature this is usually approximated by a sum over the values at individual timesteps: \begin{align} \label{eq:timesumfep} \mathfrak{M}^{FEP}(\d(.,.,.|.),\ha_{t:{\hat{T}}}) = \sum_{\tau = t}^{\hat{T}} -\HS_{\d}(\hS_\tau|\hE_\tau)+\I_{\d}(\hS_\tau:\Theta|\ha_{t:{\hat{T}}}). 
\end{align} \subsubsection{Free Energy Principle Specialised to \citet{friston_active_2015}} Using \cref{appendix:translationTables}, we show how to get the action-value function of \citet[Eq. (9)]{friston_active_2015} in our framework. In \citet{friston_active_2015}, the information gain of \cref{eq:infogain} is not included, but the extrinsic term of \cref{eq:extrinsicvalue} is. Furthermore, the sum over timesteps in \cref{eq:timesumfep} is used. This leads to the following expression: \begin{align} \mathfrak{M}^{FEP}(\d(.,.,.|.),\ha_{t:{\hat{T}}}) = \sum_{\tau = t}^{\hat{T}} -\HS_{\d}(\hS_\tau|\hE_\tau) - \KL[\d(\hS_\tau|\ha_{t:{\hat{T}}})|| \p^d(\hS_\tau)]. \end{align} If we plug in an approximate complete posterior, we get: \begin{align} \label{eq:fristonfep} \mathfrak{M}^{FEP}(\r(.,.,.|.),\ha_{t:{\hat{T}}}) = \sum_{\tau = t}^{\hat{T}} -\HS_{\r}(\hS_\tau|\hE_\tau) - \KL[\r(\hS_\tau|\ha_{t:{\hat{T}}})|| \p^d(\hS_\tau)]. \end{align} with \begin{align} -\HS_{\r}(\hS_\tau|\hE_\tau) = \sum_{\he_\tau} \r(\he_\tau|\ha_{t:{\hat{T}}},\he_{t-1},\phi) \sum_{\hs_\tau} \r(\hs_\tau|\he_\tau,\phi) \log \r(\hs_\tau|\he_\tau,\phi), \end{align} and \begin{equation} \KL[\r(\hS_\tau|\ha_{t:{\hat{T}}})|| \p^d(\hS_\tau)] =\sum_{\hs_\tau} \r(\hs_\tau|\ha_{t:{\hat{T}}},\phi) \log \frac{\r(\hs_\tau|\ha_{t:{\hat{T}}},\phi)}{\p^d(\hs_\tau)}. \end{equation} For the particular approximate posterior of \cref{eq:apposteriorxi}, with its factorisation into exact predictive and approximate posterior factor, the individual terms can be further rewritten. \begin{align} \r(\he_\tau|\ha_{t:{\hat{T}}},\he_{t-1},\phi) &= \sum_{\hs_{t:{\hat{T}}},\he_{\tau+1:{\hat{T}}}\he_{t:\tau-1}\he_{0:t-2}}\int\r(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},\phi) \mathop{\kern0pt\mathrm{d}}\!{} \theta\\ &=\sum_{\hs_{t:{\hat{T}}},\he_{\tau+1:{\hat{T}}}\he_{t:\tau-1}\he_{0:t-2}}\int\q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\r(\he_{\prec t}, \theta|\phi) \mathop{\kern0pt\mathrm{d}}\!{}\theta\\ &=\sum_{\hs_{t:{\hat{T}}},\he_{\tau+1:{\hat{T}}}\he_{t:\tau-1}\he_{0:t-2}}\int\q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}}, \he_{t-1},\theta)\prod_{r=0}^{t-1} \r(\he_r|{\phi^{E_r}}) \prod_{i=1}^3 \r(\theta^i|\phi^i) \mathop{\kern0pt\mathrm{d}}\!{} \theta\\ &=\sum_{\he_{t:\tau-1}}\int\q(\he_{t:\tau-1}|\ha_{t:{\hat{T}}}, \he_{t-1},{\theta^2}) \r(\he_{t-1}|\phi^{E_{t-1}}) \r({\theta^2}|{\phi^2}) \mathop{\kern0pt\mathrm{d}}\!{} {\theta^2}\\ &=\left(\sum_{\he_{t:\tau-1}}\int \prod_{r=t}^\tau \q(\he_r|\ha_r, \he_{r-1},{\theta^2}) \r({\theta^2}|{\phi^2})\mathop{\kern0pt\mathrm{d}}\!{} {\theta^2} \right) \r(\he_{t-1}|\phi^{E_{t-1}}). \end{align} In \citet{friston_active_2015}, the environment dynamics $\q(\he_r|\ha_r, \he_{r-1},{\theta^2})$ are not inferred and are therefore not parameterised: \begin{align} \q(\he_r|\ha_r, \he_{r-1},{\theta^2}) &= \q(\he_r|\ha_r, \he_{r-1}) \end{align} and are set to the physical environment dynamics: \begin{equation} \q(\he_r|\ha_r, \he_{r-1}) = \p(\he_r|\ha_r, \he_{r-1}). 
\end{equation} This means the integral over ${\theta^2}$ above is trivial and we get: \begin{align} \r(\he_\tau|\ha_{t:{\hat{T}}},\he_{t-1},\phi) &= \sum_{\he_{t:\tau-1}} \prod_{r=t}^\tau \q(\he_r|\ha_r, \he_{r-1}) \r(\he_{t-1}|\phi^{E_{t-1}}) \\ \end{align} In the notation of \citet{friston_active_2015} (see \cref{sec:translationtable} for a translation table), we have \begin{align} \q(\he_r|\ha_r, \he_{r-1}) = \text{\textbf{B}}(\ha_r)_{\he_r \he_{r-1}} \end{align} where $\text{\textbf{B}}(\ha_r)$ is a matrix, and \begin{align} \r(\he_{t-1}|\phi^{E_{t-1}}) = (\wideparen{s}_{t-1})_{\he_{t-1}} \end{align} where $(\wideparen{s}_{t-1})$ is a vector, so that \begin{align} \r(\he_\tau|\ha_{t:{\hat{T}}},\he_{t-1},\phi)% &= (\text{\textbf{B}}(\ha_\tau) \cdots \text{\textbf{B}}(\ha_t) \cdot \wideparen{s}_{t-1})_{\he_\tau}\\ &=:(\wideparen{s}_\tau(\ha_{t:{\hat{T}}}))_{\he_\tau} \end{align} Similarly, since the sensor dynamics in \citet{friston_active_2015} are also not inferred, we find \begin{align} \r(\hs_\tau|\he_\tau,\phi) =\q(\hs_\tau|\he_\tau) =\p(\hs_\tau|\he_\tau). \end{align} \citeauthor{friston_active_2015} writes: \begin{align} \q(\hs_\tau|\he_\tau) =: \text{\textbf{A}}_{\hs_\tau \he_\tau} \end{align} with $\text{\textbf{A}}$ a matrix. So that, \begin{align} \r(\hs_\tau|\ha_{t:{\hat{T}}},\phi^{E_{t-1}})&= \text{\textbf{A}} \cdot \wideparen{s}_\tau(\ha_{t:{\hat{T}}})\\ &=:\wideparen{o}_\tau(\ha_{t:{\hat{T}}}). \end{align} Then \begin{align} \HS_{\r}(\hS_\tau|\hE_\tau) = - \boldsymbol{1}\cdot (\text{\textbf{A}} \times \log \text{\textbf{A}}) \cdot \wideparen{s}_\tau(\ha_{t:{\hat{T}}}) \end{align} where $\times$ is a Hadamard product and $\boldsymbol{1}$ is a vector of ones. Also, \begin{equation} \KL[\r(\hS_\tau|\ha_{t:{\hat{T}}})|| \p^d(\hS_\tau)] = \wideparen{o}_\tau(\ha_{t:{\hat{T}}}) \cdot (\log \wideparen{o}_\tau(\ha_{t:{\hat{T}}}) - \log \text{\textbf{C}}_\tau) \end{equation} where $(\text{\textbf{C}}_\tau)_{\hs_\tau} = \p^d(\hs_\tau)$. Plugging these expressions into \cref{eq:fristonfep}, substituting $\ha_{t:{\hat{T}}} \rightarrow \pi$, and comparing this to \citet[Eq. (9)]{friston_active_2015} shows that\footnote{There is a small typo in \citet[Eq. (9)]{friston_active_2015} where the time index of $\wideparen{s}_{t-1}$ in $(\wideparen{s}_\tau(\ha_{t:{\hat{T}}}))= (\text{\textbf{B}}(\ha_\tau) \cdots \text{\textbf{B}}(\ha_t) \cdot \wideparen{s}_{t-1})$ is given as $t$ instead of ${t-1}$.}: \begin{align} \mathfrak{M}^{FEP}(\r(.,.,.|.),\pi)&= \boldsymbol{1}\cdot (\text{\textbf{A}} \times \log \text{\textbf{A}}) \cdot \wideparen{s}_\tau(\ha_{t:{\hat{T}}}) - \wideparen{o}_\tau(\ha_{t:{\hat{T}}}) \cdot (\log \wideparen{o}_\tau(\ha_{t:{\hat{T}}}) - \log \text{\textbf{C}}_\tau)\\ &=\text{\textbf{Q}}(\pi). \end{align} This verifies that our formulation of the action-value function specialises to the ``expected (negative) free energy'' $\text{\textbf{Q}}(\pi)$. \subsubsection{Empowerment Maximisation} \label{sec:empowerment} Empowerment maximisation \citep{klyubin_empowerment_2005} is an intrinsic motivation that seeks to maximise the channel capacity from sequences of the agent's actions into the subsequent sensor value. The agent, equipped with complete knowledge of the environment dynamics, can directly observe the environment state. If the environment is deterministic, an empowerment maximisation policy leads the agent to a state from which it can reach the highest number of future states within a preset number of actions. 
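To make the channel-capacity view concrete, the following minimal sketch (Python with NumPy; the function name and the toy channel matrix are our own illustrative choices and are not taken from the cited works) runs the standard Blahut--Arimoto iteration to compute the capacity of a small channel from contemplated action sequences to the final sensor value, which is the quantity an empowerment-maximising agent evaluates.
\begin{verbatim}
import numpy as np

def blahut_arimoto(p_s_given_a, iters=200):
    """Channel capacity max_{p(a)} I(A;S) of a channel p(s|a) with
    strictly positive entries.  Each row of p_s_given_a is the
    conditional distribution over final sensor values for one
    contemplated action sequence."""
    n_a = p_s_given_a.shape[0]
    p_a = np.full(n_a, 1.0 / n_a)           # start from the uniform input
    for _ in range(iters):
        p_s = p_a @ p_s_given_a              # marginal over sensor values
        d = (p_s_given_a * np.log(p_s_given_a / p_s)).sum(axis=1)
        p_a = p_a * np.exp(d)                # Blahut-Arimoto update
        p_a /= p_a.sum()
    p_s = p_a @ p_s_given_a
    capacity = (p_a[:, None] * p_s_given_a
                * np.log(p_s_given_a / p_s)).sum()   # in nats
    return capacity, p_a

# Hypothetical toy channel: 4 action sequences, 3 final sensor values.
p = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90],
              [1/3, 1/3, 1/3]])
print(blahut_arimoto(p))
\end{verbatim}
In an actual agent, the rows of the channel matrix would be the agent's predictive distributions over the final sensor value for each contemplated action sequence; here they are hypothetical numbers.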
\citet{salge2014empowerment} provide a good overview of existing research on empowerment maximisation. A more recent study relates the intrinsic motivation to the essential dynamics of living systems, based on assumptions from autopoietic enactivism \cite{Guckelsberger2016b}. Several approximations have been proposed, along with experimental evaluations in complex state / action spaces. \citet{Salge2018} show how deterministic empowerment maximisation in a three-dimensional grid-world can be made more efficient by different modifications of UCT tree search. Three recent studies approximate stochastic empowerment and its maximisation via variational inference and deep neural networks, leveraging a variational bound on the mutual information proposed by \citet{barber2003algorithm}. \citet{mohamed_variational_2015} focus on a model-free approximation of open-loop empowerment, and \citet{gregor2016variational} propose two means to approximate closed-loop empowerment. While these two approaches consider both applications in discrete and continuous state / action spaces, \citet{karl2017unsupervised} develop an open-loop, model-based approximation for the continuous domain specifically. The latter study also demonstrates how empowerment can yield good performance in established reinforcement learning benchmarks such as bipedal balancing in the absence of extrinsic rewards. In recent years, research on empowerment has particularly focused on applications in multi-agent systems. Coupled empowerment maximisation as a specific multi-agent policy has been proposed as intrinsic drive for either supportive or antagonistic behaviour in open-ended scenarios with sparse reward landscapes \cite{Guckelsberger2016a}. This theoretical investigation has then been backed up with empirical evaluations on supportive and adversarial video game characters \cite{Guckelsberger2016c,Guckelsberger2018}. Beyond virtual agents, the same policy has been proposed as a good heuristic to facilitate critical aspects of human-robot interaction, such as self-preservation, protection of the human partner, and response to human actions \cite{salge2017empowerment}. For empowerment, we select ${{\hat{T}}_a}=t+n$ and ${\hat{T}}=t+n+m$, with $n\geq 0$ and $m\geq1$. This means the agent chooses $n+1$ actions which it expects to maximise the resulting $m$-step empowerment. The according action-value function is: \begin{align} \mathfrak{M}^{EM}(\d(.,.,.|.),\ha_{t:{\hat{T}}}) :&= \max_{\d(\ha_{{{\hat{T}}_a}+1:{\hat{T}}})} \; \I_{\d}(\hA_{{{\hat{T}}_a}+1:{\hat{T}}}:\hS_{\hat{T}}|\ha_{t:\hTa}) \\ &=\max_{\d(\ha_{{{\hat{T}}_a}+1:{\hat{T}}})} \; \sum_{\ha_{{{\hat{T}}_a}+1:{\hat{T}}},\hs_{\hat{T}}} \d(\ha_{{{\hat{T}}_a}+1:{\hat{T}}}) \d(\hs_{\hat{T}}|\ha_{t:{\hat{T}}}) \log \frac{\d(\hs_{\hat{T}}|\ha_{t:{\hat{T}}})}{\d(\hs_{\hat{T}}|\ha_{t:\hTa})}. \end{align} Note that in the denominator of the fraction, the action sequence only runs to ${t:\hTa}$ and not to ${t:{\hat{T}}}$ as in the numerator. In the Bayesian case, the required posteriors are $\q(\hs_{\hat{T}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ (for each $\ha_{{{\hat{T}}_a}+1:{\hat{T}}}$) and $\q(\hs_{\hat{T}}|\ha_{t:\hTa},sa_{\prec t},\xi)$. The former distribution is a further marginalisation over $\hs_{{t+1}:{\hat{T}}-1}$ of $\q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$. The variational approximation only helps getting $\q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$, not the further marginalisation. 
The latter distribution is obtained for a given $\q(\ha_{{{\hat{T}}_a}+1:{\hat{T}}})$ from the former one via \begin{align} \q(\hs_{\hat{T}}|\ha_{t:\hTa},sa_{\prec t},\xi) &= \sum_{\ha_{{{\hat{T}}_a}+1:{\hat{T}}}} \q(\hs_{\hat{T}},\ha_{{{\hat{T}}_a}+1:{\hat{T}}}|\ha_{t:\hTa},sa_{\prec t},\xi)\\ &=\sum_{\ha_{{{\hat{T}}_a}+1:{\hat{T}}}} \q(\hs_{\hat{T}}|\ha_{{{\hat{T}}_a}+1:{\hat{T}}},\ha_{t:\hTa},sa_{\prec t},\xi) \q(\ha_{{{\hat{T}}_a}+1:{\hat{T}}}) \end{align} since the empowerment calculation imposes \begin{align} \q(\ha_{{{\hat{T}}_a}+1:{\hat{T}}}|\ha_{t:\hTa},sa_{\prec t},\xi)= \q(\ha_{{{\hat{T}}_a}+1:{\hat{T}}}). \end{align} \subsubsection{Predictive Information Maximisation} \label{sec:pim} Predictive information maximisation \citep{ay_predictive_2008} is an intrinsic motivation that seeks to maximise the predictive information of the sensor process. Predictive information is the mutual information between past and future sensory signals, and has been proposed as a general measure of complexity of stochastic processes \citep{bialek1999predictive}. For applications in the literature see \citet{ay_information-driven_2012,martius_information_2013,martius_self-exploration_2014}. Also, see \citet{little_maximal_2013} for a comparison to entropy minimisation. For predictive information, we select a half time horizon $k=\lfloor ({\hat{T}}-t+1)/2 \rfloor$ where $k>0$ for predictive information to be defined (i.e.\ ${\hat{T}}-t>0$). Then, we can define the expected mutual information between the next $k$ sensor values and the subsequent $k$ sensor values as the action-value function of predictive information maximisation. This is similar to the time-local predictive information in \citet{martius_information_2013}: \begin{align} \mathfrak{M}^{PI}(\d(.,.,.|.),\ha_{t:{\hat{T}}}) :&= \I_{\d}(\hS_{t:t+k-1}:\hS_{t+k:t+2k-1}|\ha_{t:{\hat{T}}}). \end{align} We omit writing out the conditional mutual information since it is defined in the usual way. Note that it is possible that $t+2k-1<{\hat{T}}$ so that the action sequence $\ha_{t:{\hat{T}}}$ might go beyond the evaluated sensor probabilities. This mismatch leads to no problem since the sensor values do not depend on future actions. The posteriors needed are: $\q(\hs_{t:t+k-1}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$, $\q(\hs_{t+k:t+2k-1}|\hs_{t:t+k-1},\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$, and $\q(\hs_{t+k:t+2k-1}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$. The first and the last are again marginalisations of $\q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ seen in \cref{eq:postpreds}. The second posterior is a fraction of such marginalisations. \subsubsection{Knowledge Seeking} \label{sec:ksa} Knowledge seeking agents \citep{storck_reinforcement_1995,orseau_universal_2013} maximise the information gain with respect to a probability distribution over environments. The information gain we use here is the relative entropy between the belief over environments after actions and subsequent sensor values and the belief over environments (this is the KL-KSA of \citealt{orseau_universal_2013}, ``KL'' for Kullback-Leibler divergence). In our case the belief over environments can be identified with the posterior $\q(\theta|sa_{\prec t},\xi)$ since every $\theta=({\theta^1},{\theta^2},{\theta^3})$ defines an environment. In principle, this can be extended to the posterior $\q(\xi|sa_{\prec t},\xi)$ over the hyperprior $\xi$, but we focus on $\theta$ here. This definition is closer to the original one.
Then, we define the knowledge seeking action-value function using the information gain of \cref{eq:infogain}: \begin{align} \mathfrak{M}^{KSA}(\d(.,.,.|.),\ha_{t:{\hat{T}}}) :&= \I_{\d}(\hS_{t:{\hat{T}}}:\Theta|\ha_{t:{\hat{T}}}). \end{align} We have discussed the necessary posteriors following \cref{eq:infogain}. After this overview of some intrinsic motivations, we look at active inference. However, it should be clear that, in principle, the posteriors needed for the intrinsic motivation function of the original active inference \citep{friston_active_2015} and the posteriors needed for the alternative intrinsic motivations overlap. This overlap shows that the other intrinsic motivations mentioned here also profit from variational inference approximations. There is also no indication that these intrinsic motivations cannot be used together with the active inference scheme discussed next. \section{Active Inference} \label{sec:activeinference} Now, we look at active inference. Note that this section is independent of the intrinsic motivation function underlying the action-value function ${\hat{Q}}$. In the following we first look at and try to explain a slightly simplified version of the active inference in \citet{friston_active_2015}. Afterwards we also state the full version. As mentioned in the introduction, current active inference versions are formulated as an optimisation procedure that, at least at first sight, looks similar to the optimisation of a variational free energy familiar from variational inference. Recall that in variational inference the parameters of a family of distributions are optimised to approximate an exact (Bayesian) posterior of a generative model. In the case we discussed in \cref{sec:approxpostandvi}, the sought-after exact posterior is the posterior factor of the generative model of \cref{sec:genmodel}. One of our questions about active inference is whether it is a straightforward application of variational inference to a posterior of some generative model. This would imply the existence of a generative model whose standard updating with past actions and sensor values leads to an optimal posterior distribution over future actions. Note that this does not work with the generative model of \cref{sec:genmodel} since the future actions there are independent of the past sensor values and actions. Given the appropriate generative model, it would then be natural to introduce it first and then apply a variational approximation similar to our procedure in \cref{sec:inference}. We were not able to find in the literature, or to construct ourselves, a generative model such that variational inference leads directly to active inference as given in \citet{friston_active_2015}. Instead, we present a generative model containing a posterior whose variational approximation leads to an optimisation that is very similar to the optimisation procedure of active inference. It is also closely related to the two-step action generation of first inferring the posterior and then selecting the optimal actions. This background provides some intuition for the particularities of active inference. One difference of the generative model used here is that its structure depends on the current time step in a systematic way. The previous generative model of \cref{sec:genmodel} had a time-invariant structure. In \cref{sec:inference}, we showed how the generative model, together with either Bayesian or variational inference, can provide an agent with a set of complete posteriors.
Each complete posterior is a conditional probability distribution over all currently unobserved variables ($\hS_{t:{\hat{T}}},\hE_{0:{\hat{T}}}$) and parameters ($\Theta$ and more generally also $\Xi$) given past sensor values and actions $sa_{\prec t}$ and a particular sequence of future actions $\ha_{t:{\hat{T}}}$. Inference means updating the set of posteriors in response to observations $sa_{\prec t}$. Active inference should then update the distribution over future actions in response to observations. This means the corresponding posterior cannot be conditional on future action sequences like the complete posterior in \cref{eq:posteriorgfa2}. Since active inference promises belief or knowledge updating and action selection in one mechanism, the posterior should also range over unobserved relevant variables like future sensor values, environment states, and parameters. This leads to the posterior of \cref{eq:posterior}: \begin{align*} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\ha_{t:{\hat{T}}},\theta|sa_{\prec t},\xi). \tag{\ref{eq:posterior} \text{ revisited}} \end{align*} If this posterior has the right structure, then we can derive a future action distribution by marginalising: \begin{align} \q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)=\sum_{\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}}} \int \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\ha_{t:{\hat{T}}},\theta|sa_{\prec t},\xi) \mathop{\kern0pt\mathrm{d}}\!{} \theta. \end{align} Actions can then be sampled from the distribution obtained by marginalising further to the next action only: \begin{align} \label{eq:actioninstantiation} \p(a_t|m_t):=\sum_{\ha_{t+1:{\hat{T}}}} \q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi). \end{align} This scheme could justifiably be called (non-variational) active inference since the future action distribution is directly obtained by updating the generative model. However, as we mentioned above, according to the generative model of \cref{fig:genmodel}, the distribution over future actions is independent of the past sensor values and actions: \begin{align} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\ha_{t:{\hat{T}}},\theta|sa_{\prec t},\xi)= \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)\q(\ha_{t:{\hat{T}}}) \end{align} since \begin{align} \q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)= \q(\ha_{t:{\hat{T}}}). \end{align} Therefore, we can never learn anything about future actions from past sensor values and actions using this model. In other words, if we intend to select the actions based on the past, we cannot uphold this independence. The inferred actions must become dependent on the history, and the generative model has to be changed for a scheme like the one sketched above to be successful. In \cref{sec:actionselection}, we have mentioned that the softmax policy based on a given action-value function ${\hat{Q}}$ could be a desirable outcome of an active inference scheme such as the above. Thus, if we ended up with \begin{align} \label{eq:bayessoftmax} \q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)= \frac{1}{Z(\gamma,sa_{\prec t},\xi)} e^{\gamma {\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi)} \end{align} as a result of some active inference process, that would be a viable solution. We can force this by building this conditional distribution directly into a new generative model. Note that this conditional distribution determines all future actions $\ha_{t:{\hat{T}}}$ starting at time $t$ and not just the next action $\ha_t$.
In the end, however, only the next action will be taken according to \cref{eq:actioninstantiation} and at time $t+1$ the action generation mechanism starts again, now with $\ha_{t+1:{\hat{T}}}$ influenced by the new data $sa_t$ in addition to $sa_{\prec t}$. So, in this case, the model structure changes over time, with the dependency of the actions on the past $sa_{\prec t}$ shifting along with each time step. Keeping the rest of the previous Bayesian network structure intact, we define that at each time $t$ the next action $\hA_t$ depends on past sensor values and actions $sa_{\prec t}$ as well as on the hyperparameter $\xi$ (see \cref{fig:activegenmodel}): \begin{align} \q(\hs_{t:{\hat{T}}},\he_{0:{\hat{T}}},\ha_{t:{\hat{T}}},\theta|sa_{\prec t},\xi) := \q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},\he_{t-1},\theta)\q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)\q(\theta,\he_{\prec t}|sa_{\prec t},\xi). \end{align} On the right-hand side we have the predictive and posterior factors to the left and right of the distribution over future actions. We define this conditional future action distribution to be the softmax of \cref{eq:bayessoftmax}. This means that the mechanism generating future actions uses the Bayesian action-value function ${\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$. The Bayesian action-value function depends on the complete posterior $\q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ calculated using the old generative model of \cref{fig:genmodel} where actions do not depend on past sensor values and actions. This is a complex construction with what amounts to Bayesian inference essentially happening within an edge (i.e.\ $\hS\hA_{\prec t} \rightarrow \hA_{t:{\hat{T}}}$) of a Bayesian network. However, logically there is no problem since, for each $\ha_{t:{\hat{T}}}$, the posterior $\q(\hs_{t:{\hat{T}}},\he_{t:{\hat{T}}},\theta|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ only needs $sa_{\prec t}$, $\xi$, and the model structure to be well defined. Here we see the model structure as ``hard wired'' into the mechanism, since it is fixed for each time step $t$ from the beginning.
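For intuition, the following minimal sketch (Python with NumPy; all names and numbers are hypothetical, and the action values are random stand-ins rather than outputs of the model above) implements the two operations this construction relies on: forming the softmax distribution over complete future action sequences as in \cref{eq:bayessoftmax}, and marginalising it to a distribution over the next action only as in \cref{eq:actioninstantiation}.
\begin{verbatim}
import numpy as np

def softmax_over_sequences(q_values, gamma):
    """Softmax distribution over all future action sequences.
    q_values has one axis per remaining time step and one entry per
    action, i.e. one action value per future action sequence."""
    logits = gamma * (q_values - q_values.max())   # stabilised
    p = np.exp(logits)
    return p / p.sum()

def next_action_distribution(p_sequences):
    """Marginalise the sequence distribution to the next action only
    by summing over all later actions."""
    return p_sequences.sum(axis=tuple(range(1, p_sequences.ndim)))

# Toy setting: 2 actions, 3 remaining time steps, random action values.
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 2, 2))       # stand-in for Q^(a_t, a_{t+1}, a_{t+2})
p_seq = softmax_over_sequences(q, gamma=2.0)
p_next = next_action_distribution(p_seq)
a_t = rng.choice(len(p_next), p=p_next)   # action actually taken
print(p_next, a_t)
\end{verbatim}
At time $t+1$ the same computation is repeated with the updated history, mirroring the shifting model structure described above.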
\begin{figure} \begin{center} \begin{tikzpicture} [->,>=stealth,auto,node distance=2cm, thick] \tikzset{ hv/.style={to path={-| (\tikztotarget)}}, vh/.style={to path={|- (\tikztotarget)}}, } \tikzset{invi/.style={minimum width=0mm,inner sep=0mm,outer sep=0mm}} \node (ll2) [left of=al2,node distance=0.5cm,invi] {}; \node (ll3) [left of=al3,node distance=0.5cm,invi] {}; \node (ll1) [below of=al1,node distance=0.5cm,invi] {}; % \node (al2) [above of=al3, node distance=1cm] {${\Xi^2}$}; \node (al3) [left of=th3] {${\Xi^3}$}; \node (al1) [below of=al3, node distance=3cm] {${\Xi^1}$}; \node (th2) [above of=th3, node distance=1cm] {${\Theta^2}$}; \node (th3) [left of=el] {${\Theta^3}$}; \node (th1) [below of=th3, node distance=3cm] {${\Theta^1}$}; \node (el) [left of=e] {$\hE_0$}; \node (sl) [below of=el, node distance=1cm] {$\hS_0$}; \node (al) [below of=sl, node distance=1cm] {}; \node (ml) [below of=al, node distance=1cm] {}; \node (e) [] {$\hE_1$}; \node (s) [below of=e, node distance=1cm] {$\hS_1$}; \node (a) [below of=s, node distance=1cm] {$\hA_1$}; \node (th2') [above of=e', node distance=1cm] {}; \node (e') [right of=e, node distance=3cm] {$\hE_{2}$}; \node (s') [below of=e', node distance=1cm] {$\hS_{2}$}; \node (a') [below of=s', node distance=1cm] {$\hA_{2}$}; \node (m') [below of=a', node distance=1cm] {}; \node (th2r) [right of=th2'] {}; \node (er) [right of=e'] {$\hE_3$}; \node (sr) [below of=er, node distance=1cm] {$\hS_3$}; \node (ar) [below of=sr, node distance=1cm] {$\hA_3$}; \node (mr) [below of=ar, node distance=1cm] {}; \node (th2rr) [right of=th2r] {}; \node (err) [right of=er] {}; \node (srr) [below of=err, node distance=1cm] {}; \node (arr) [below of=srr, node distance=1cm] {$\phantom{\hA_4}$}; \node (afrr) [below of=srr, node distance=.5cm,invi] {}; \node (afdummy) [below of=sl, node distance=.75cm,invi] {}; \node (afl) [right of=afdummy, node distance=.5cm,invi] {}; \node (af) [right of=afl,invi] {}; \node (af') [right of=af, invi] {}; \node (afr) [right of=af',] {}; \node (afm) [left of=afr, node distance=1cm,invi] {}; \node (afto3) [left of=afr, node distance=.5cm,invi] {}; \node (dfl) [below of=afl, node distance=1cm,invi] {}; \node (df) [right of=dfl,invi] {}; \node (df') [right of=df, invi] {}; \node (dfr) [right of=df'] {}; \node (dsdummy) [below of=a', node distance=.75cm,invi] {}; \node (ds') [left of=dsdummy, node distance=.5cm,invi] {}; \node (dsr) [right of=ds', invi] {}; \node (dsrr) [right of=dsr, invi] {}; \node (dsrra) [right of=dsrr,node distance=0.5cm, invi] {}; \node (c0) [right of=th1, node distance=1cm,invi] {}; \node (c1) [right of=c0,invi] {}; \node (c2) [right of=c1,node distance=3cm,invi] {}; \node (c3) [right of=c2,invi] {}; \node (c4) [right of=c3,invi] {}; \node (c3a) [right of=c2,node distance=1cm,invi] {}; \node (c4a) [right of=c3,node distance=1cm,invi] {}; \node (c5) [right of=c4a,invi] {}; \node (xid') [below of=a',node distance=1.5cm,invi] {}; \node (xidr) [right of=xid',invi] {}; \node (xidrr) [right of=xidr,invi] {}; \path (al2) edge[-] (ll2) (ll2) edge[-,vh] (xidr) (al3) edge[-] (ll3) (ll3) edge[-,vh] (xidr) (al1) edge[-,vh] (xidr) (xidr) edge[-,dotted] (xidrr) (xid') edge (a') (xidr) edge (ar) ; \path (sl) edge[-] (afl) (afl) edge[-,vh] (ds') (s) edge[-] (af) (a) edge[-] (df) (af) edge[-] (df) ; \path (c4) edge[-,dotted,vh] (srr) ; \path (dsr) edge[-,line width=6pt,draw=white] (dsrra) (dsr) edge[-,dotted] (dsrra) (dsr) edge (ar) (ds') edge[-,line width=6pt,draw=white] (dsr) (ds') edge[-] (dsr) (ds') edge (a') (dsrr) edge[-,dotted] 
(arr) ; \path (al3) edge (th3) (al2) edge (th2) (al1) edge (th1) (th3) edge (el) (th2) edge[hv] (e) ; \path (th1) edge[-,line width=6pt,draw=white] (c4a) ; \path (th1) edge[-] (c0) (c0) edge[vh] (sl) (c0) edge[-] (c1) (c1) edge[vh,line width=6pt,draw=white] (s) (c1) edge[vh] (s) ; \path (th2) edge[hv] (e') (th2) edge[hv] (er) (th2') edge[-,dotted] (th2rr) (c1) edge[-] (c2) (c2) edge[vh,line width=6pt,draw=white] (s') (c2) edge[vh] (s') % (c2) edge[-] (c3) (c3) edge[vh,line width=6pt,draw=white] (sr) (c3) edge[vh] (sr) (c3) edge[-] (c4a) (c4a) edge[-,dotted,line width=6pt,draw=white] (c5) (c4a) edge[-,dotted] (c5) ; \path (el) edge node {} (e) (el) edge node {} (sl) (e) edge node {} (s) (a) edge[bend left=45,line width=6pt,draw=white] node {} (e) (a) edge[bend left=45] node {} (e) ; \path (e) edge node {} (e') (e') edge node {} (s') (a') edge[bend left=45,line width=6pt,draw=white] node {} (e') (a') edge[bend left=45] node {} (e') (e') edge (er) (ar) edge[bend left=45,line width=6pt,draw=white] node {} (er) (ar) edge[bend left=45] node {} (er) (er) edge[-,dotted] (err) (er) edge (sr) ; \end{tikzpicture} \caption{Generative model including $\q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)$ at $t=2$ with $\hS\hA_{\prec 2}$ influencing future actions $\hA_{2:{\hat{T}}}$. Note that, only future actions are dependent on past sensor values and actions, e.g.\ action $\hA_1$ has no incoming edges. The increased gap between time step $t=1$ and $t=2$ is to indicate that this time step is special in the model. For each time step $t$ there is an according model with the particular relation between past $\hS\hA_{\prec t}$ and $\hA_{t:{\hat{T}}}$ shifted accordingly.} \label{fig:activegenmodel} \end{center} \end{figure} We now approximate the posterior of \cref{eq:bayessoftmax} using variational inference. Like in \cref{sec:approxpostandvi} we do not approximate the predictive factor. Instead we only approximate the product of posterior factor $\q(\theta,\he_{\prec t}|sa_{\prec t},\xi)$ and future action distribution $\q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)$. By construction these are two independent factors but with an eye to active inference which treats belief or knowledge updating and action generation together we also treat them together. For the approximation we again use the approximate posterio factor of \cref{eq:justappost} and combine it with a distribution over future actions $\r(\ha_{t:{\hat{T}}}|\pi)$ parameterised by $\pi$: \begin{align} \r(\ha_{t:{\hat{T}}},\he_{\prec t},\theta|\pi,\phi)&:=\r(\ha_{t:{\hat{T}}}|\pi) \r(\he_{\prec t},\theta|\phi)\\ &:=\r(\ha_{t:{\hat{T}}}|\pi)\r(\he_{\prec t}|\phi^{E_\pt})\r(\theta|\phi). \end{align} The variational free energy is then: \begin{align} \label{eq:activefreeenergy} \F[\pi,\phi,sa_{\prec t},\xi]:&= \sum_{\ha_{t:{\hat{T}}},\he_{\prec t}} \int \r(\ha_{t:{\hat{T}}}|\pi)\r(\he_{\prec t},\theta|\phi) \log \frac{\r(\ha_{t:{\hat{T}}}|\pi)\r(\he_{\prec t},\theta|\phi)}{\q(s_{\prec t},\ha_{t:{\hat{T}}},\he_{\prec t},\theta|a_{\prec t},\xi)}\mathop{\kern0pt\mathrm{d}}\!{} \theta\\ &=\sum_{\ha_{t:{\hat{T}}},\he_{\prec t}} \int \r(\ha_{t:{\hat{T}}}|\pi)\r(\he_{\prec t},\theta|\phi) \log \frac{\r(\ha_{t:{\hat{T}}}|\pi)\r(\he_{\prec t},\theta|\phi)}{\q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)\q(\he_{\prec t},\theta|sa_{\prec t},\xi)\q(s_{\prec t}|a_{\prec t},\xi)}\mathop{\kern0pt\mathrm{d}}\!{} \theta\\ &=\F[\phi,sa_{\prec t},\xi]+\KL[\r(\hA_{t:{\hat{T}}}|\pi)||\q(\hA_{t:{\hat{T}}}|sa_{\prec t},\xi)]. 
\end{align} Here, $\F[\phi,sa_{\prec t},\xi]$ is the variational free energy of the (non-active) variational inference (see \cref{eq:fesimple}). Variational inference then minimises the above expression with respect to the parameters $\phi$ and $\pi$: \begin{align} \phi^*_{sa_{\prec t},\xi},\pi^*_{sa_{\prec t},\xi} :&=\argmin_{\phi,\pi} \F[\pi,\phi,sa_{\prec t},\xi]\\ &=\argmin_{\phi} \F[\phi,sa_{\prec t},\xi] + \argmin_\pi \KL[\r(\hA_{t:{\hat{T}}}|\pi)||\q(\hA_{t:{\hat{T}}}|sa_{\prec t},\xi)]. \label{eq:splitmin} \end{align} We see that the minimisation in this case separates into two minimisation problems. The first is just the variational inference of \cref{sec:approxpostandvi}, and the second minimises the $\KL$-divergence between the parameterised action distribution $\r(\ha_{t:{\hat{T}}}|\pi)$ and the softmax $\q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)$ of the Bayesian action-value function. It is instructive to look at this $\KL$-divergence term more closely: \begin{align} \KL[\r(\hA_{t:{\hat{T}}}|\pi)||\q(\hA_{t:{\hat{T}}}|sa_{\prec t},\xi)] &= -\HS_{\r}(\hA_{t:{\hat{T}}}|\pi) - \sum_{\ha_{t:{\hat{T}}}} \r(\ha_{t:{\hat{T}}}|\pi) \log \q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)\\ &= -\HS_{\r}(\hA_{t:{\hat{T}}}|\pi) - \gamma \sum_{\ha_{t:{\hat{T}}}} \r(\ha_{t:{\hat{T}}}|\pi) {\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi) + \log Z(\gamma,sa_{\prec t},\xi). \end{align} We see that the optimisation of $\pi$ leads towards high-entropy distributions for which the expectation value of the action-value function ${\hat{Q}}(\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ is large. Action selection could then happen according to \begin{equation} \p(a_t|m_t):=\sum_{\ha_{t+1:{\hat{T}}}} \r(\ha_{t:{\hat{T}}}|\pi^*_{sa_{\prec t},\xi}). \end{equation} So the described variational inference procedure, at least formally, leads to a useful result. However, this is not the active inference procedure of \citet{friston_active_2015}. As noted above, the minimisation actually splits into two completely independent minimisations here. The result of the minimisation with respect to $\phi$ in \cref{eq:splitmin} is not actually used for action selection and, since action selection is all that matters here, it is mere ornament. However, there is a way to make use of it. Recall that plugging $\phi^*_{sa_{\prec t},\xi}$ into the variational action-value function ${\hat{Q}}(\ha_{t:{\hat{T}}},\phi)$ means that it approximates the Bayesian action-value function (see \cref{eq:actionvalueapprox}). This means that if we define a softmax distribution $\r(\ha_{t:{\hat{T}}}|\phi)$ of the variational action-value function parameterised by $\phi$ as \begin{align} \r(\ha_{t:{\hat{T}}}|\phi)= \frac{1}{Z(\gamma,\phi)} e^{\gamma {\hat{Q}}(\ha_{t:{\hat{T}}},\phi)}, \end{align} then this approximates the softmax of the Bayesian action-value function: \begin{align} \r(\ha_{t:{\hat{T}}}|\phi^*_{sa_{\prec t},\xi}) \approx \q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi). \end{align} Consequently, once we have obtained $\phi^*_{sa_{\prec t},\xi}$ from the first minimisation problem in \cref{eq:splitmin}, we can plug it into $\r(\ha_{t:{\hat{T}}}|\phi)$ and then minimise the $\KL$-divergence of $\r(\ha_{t:{\hat{T}}}|\pi)$ to this distribution instead of the one to $\q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)$. In this way, the result of the first minimisation could be reused for the second. This remains a two-part action generation mechanism, however.
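The following small numerical check (Python with NumPy; the action values are random toy numbers, not derived from the generative model) verifies the decomposition of the $\KL$-divergence given above and makes the trade-off between high entropy of $\r(\hA_{t:{\hat{T}}}|\pi)$ and high expected action-value explicit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_seq = 8                              # number of future action sequences
q_hat = rng.normal(size=n_seq)         # stand-in for Q^(a_{t:T})
gamma = 1.5

# Softmax of the action-value function, q(a_{t:T}|sa_{<t},xi) above.
log_z = np.log(np.exp(gamma * q_hat).sum())
q_soft = np.exp(gamma * q_hat - log_z)

# An arbitrary parameterised action distribution r(a_{t:T}|pi).
r = rng.dirichlet(np.ones(n_seq))

kl_direct = np.sum(r * (np.log(r) - np.log(q_soft)))
entropy_r = -np.sum(r * np.log(r))
kl_decomposed = -entropy_r - gamma * np.sum(r * q_hat) + log_z

print(np.isclose(kl_direct, kl_decomposed))   # True
\end{verbatim}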
Active inference combines these two steps into one minimisation by replacing $\q(\ha_{t:{\hat{T}}}|sa_{\prec t},\xi)$ in the variational free energy of \cref{eq:activefreeenergy} with $\r(\ha_{t:{\hat{T}}}|\phi)$. Since $\r(\ha_{t:{\hat{T}}}|\phi)$ thereby becomes part of the denominator it is also given the same symbol (in our case $\q$) as the generative model. So we define: \begin{align} \q(\ha_{t:{\hat{T}}}|\phi) :=\r(\ha_{t:{\hat{T}}}|\phi). \end{align} In this form the softmax $\q(\ha_{t:{\hat{T}}}|\phi)$ is a cornerstone of active inference. In brief, it can be regarded as a prior over action sequences. To obtain purposeful behaviour it specifies prior assumptions about what sorts of actions an agent should take when its belief parameter takes value $\phi$. Strictly speaking the expression resulting from the replacement $\q(\hA_{t:{\hat{T}}}|sa_{\prec t},\xi) \rightarrow \q(\ha_{t:{\hat{T}}}|\phi)$ in \cref{eq:activefreeenergy} is then not a variational free energy anymore since the variational parameters $\phi$ occur in both the numerator and the denominator. Nonetheless, this is the functional that is minimised in active inference as described in \citet{friston_active_2015}. So active inference is defined as the optimisation problem \citep[cmp.][Eq.(1)]{friston_active_2015}: \begin{align} \phi^*_{sa_{\prec t},\xi},\pi^*_{sa_{\prec t},\xi} &=\argmin_{\phi,\pi} \sum_{\ha_{t:{\hat{T}}},\he_{\prec t}} \int \r(\ha_{t:{\hat{T}}}|\pi)\r(\he_{\prec t},\theta|\phi) \log \frac{\r(\ha_{t:{\hat{T}}}|\pi)\r(\he_{\prec t},\theta|\phi)}{\q(s_{\prec t},\ha_{t:{\hat{T}}},\he_{\prec t},\theta|\phi,a_{\prec t},\xi)}\mathop{\kern0pt\mathrm{d}}\!{} \theta\\ &=\argmin_{\phi,\pi} \left(\F[\phi,sa_{\prec t},\xi]+\KL[\r(\hA_{t:{\hat{T}}}|\pi)||\q(\hA_{t:{\hat{T}}}|\phi)]\right). \end{align} This minimisation does not split into the two independent parts anymore since both the future action distribution $\q(\hA_{t:{\hat{T}}}|\phi)$ of the generative model and the approximate posterior factor in the variational free energy $\F[\phi,sa_{\prec t},\xi]$ are parameterised by $\phi$. This justifies the claim that active inference obtains both belief update and action selection through a single principle or optimisation. Compared to \citet{friston_active_2015}, we have introduced a simplification of active inference. In the original text, additional distributions over $\gamma$ (with according random variable $\Gamma$) are introduced to the generative model as $\q(\gamma|{\xi^\gamma})$ (which is a fixed prior) and to the approximate posterior as $\r(\gamma|\phi^\gamma)$. For the sake of completeness, we show the full equations as well. Since $\gamma$ is now part of the model, we write $\q(\ha_{t:{\hat{T}}}|\gamma,\phi)$ instead of $\q(\ha_{t:{\hat{T}}}|\phi)$. The basic procedure above stays the same. 
The active inference optimisation becomes: \begin{align} \label{eq:activeoptimisation} \begin{split}\phi^*_{sa_{\prec t},\xi},&\phi^{\gamma *}_{sa_{\prec t},\xi},\pi^*_{sa_{\prec t},\xi}\\ &=\argmin_{\phi,\phi^\gamma,\pi} \sum_{\ha_{t:{\hat{T}}},\he_{\prec t}} \iint \r(\ha_{t:{\hat{T}}}|\pi)\r(\gamma|\phi^\gamma)\r(\he_{\prec t},\theta|\phi) \log \frac{\r(\ha_{t:{\hat{T}}}|\pi)\r(\gamma|\phi^\gamma)\r(\he_{\prec t},\theta|\phi)}{\q(s_{\prec t},\ha_{t:{\hat{T}}},\gamma,\he_{\prec t},\theta|\phi,a_{\prec t},\xi)} \mathop{\kern0pt\mathrm{d}}\!{}\theta \mathop{\kern0pt\mathrm{d}}\!{} \gamma.\end{split} \end{align} Note that here, by construction, the denominator can be written as: \begin{align} \q(s_{\prec t},\ha_{t:{\hat{T}}},\gamma,\he_{\prec t},\theta|\phi,a_{\prec t},\xi) = \q(\ha_{t:{\hat{T}}}|\gamma,\phi) \q(\gamma|\phi^\gamma) \q(\he_{\prec t},\theta|sa_{\prec t},\xi) \q(s_{\prec t}|a_{\prec t},\xi). \end{align} This allows us to write \cref{eq:activeoptimisation} with the original variational free energy again: \begin{align} \phi^*_{sa_{\prec t},\xi},\phi^{\gamma *}_{sa_{\prec t},\xi},\pi^*_{sa_{\prec t},\xi} &=\argmin_{\phi,\phi^\gamma,\pi} \left(\F[\phi,sa_{\prec t},\xi] + \KL[\r(\hA_{t:{\hat{T}}},\Gamma|\pi,\phi^\gamma)||\q(\hA_{t:{\hat{T}}},\Gamma|\phi,{\xi^\gamma})]\right). \end{align} %
\section{Applications and Limitations} An application of the active inference described here to a simple maze task can be found in \citet{friston_active_2015}. Active inference using different forms of approximate posteriors can be found in \citet{friston_active_2016}. Here, \citet{friston_active_curiosity_2017} also includes a knowledge seeking term in addition to the conditional entropy term. In the universal reinforcement learning framework, \citet{aslanides_universal_2017} also implement a knowledge seeking agent. These works can be quite directly translated into our framework. For applications of intrinsic motivations that are not so directly related to our framework see also the references in the corresponding \cref{sec:empowerment,sec:pim,sec:ksa}. A quantitative analysis of the limitations of the different approaches we discussed is beyond the scope of this publication. However, we can make a few observations that may help researchers interested in applying the discussed approaches. Concerning the computation of the complete posterior: direct Bayesian methods are not feasible beyond the simplest of systems, and even then only for very short time durations. As mentioned in the text, this computation contains a sum over $|\shE|^t$ elements. If the time horizon into the future is ${\hat{T}}-t$, then the predictive factor consists of $|\shS|^{{\hat{T}}-t} \times |\shE|^{{\hat{T}}-t} \times |\shA|^{{\hat{T}}-t}$ entries. This means predicting far into the future is also not feasible. Therefore, ${\hat{T}}-t$ will usually have to be fixed to a small number. Methods that also approximate the predictive factor \citep[e.g.][]{friston_active_2016,friston_active_curiosity_2017} may be useful here. However, to our knowledge, their scalability has not been addressed yet. Since in these approaches the predictive factor is approximated in a similar way to the posterior factor here, we would expect their scalability to be similar to that of approximating the posterior factor. Employing variational inference reduces the computational burden for obtaining a posterior factor considerably. The sum over all possible past environment histories (the $|\shE|^t$ elements) is approximated within the optimisation.
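As a rough illustration of the scaling issues that motivate these approximations, the following sketch (Python; the alphabet sizes are hypothetical) prints the number of summands in the exact marginalisation over past environment histories and the number of entries of the exact predictive factor for growing $t$ and ${\hat{T}}-t$.
\begin{verbatim}
# Hypothetical alphabet sizes |E|, |S|, |A|.
n_e, n_s, n_a = 10, 5, 4

for t in (5, 10, 20):
    # summands in the exact sum over past environment histories
    print(f"t = {t:>2}: |E|^t = {n_e**t:.2e} summands")

for horizon in (2, 5, 10):
    # entries of the exact predictive factor for futures of this length
    entries = (n_s * n_e * n_a) ** horizon
    print(f"T_hat-t = {horizon:>2}: (|S||E||A|)^(T_hat-t) = {entries:.2e} entries")
\end{verbatim}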
Clearly, by employing variational inference we inherit all shortcomings of this method. As also mentioned in \citet{friston_active_2016}, variational inference approximations are known to become overconfident, i.e.\ the approximate posterior tends to ignore values with low probabilities \citep[see e.g.][]{bishop_pattern_2011}. In practice this can of course lead to poor decision making. Furthermore, the convergence of the optimisation to obtain the approximate posterior can also become slow. As time $t$ increases, the necessary computations for each optimisation step in the widely used coordinate ascent variational inference algorithm \citep{blei_variational_2017} grow with $t^2$. Experiments suggest that the number of necessary optimisation steps also grows over time. At the moment, we do not know how fast, but this may also lead to problems. A possible solution would be to introduce some form of forgetting such that the considered past does not grow forever. Ignoring the problem of obtaining a complete posterior, we still have to evaluate and select actions. Computing the information-theoretic quantities needed for the mentioned intrinsic motivations and their induced action-value functions is also computationally expensive. In this case, fixing the future time horizon ${\hat{T}}-t$ can lead to constant computational requirements. These, however, grow exponentially with the time horizon, which makes large time horizons impossible without further approximations. Note that the action selection mechanisms discussed here also require the computation of the action-value functions for each of the future action sequences. Active inference is not a standard variational inference problem and therefore standard algorithms like coordinate ascent variational inference may fail in this case. Other optimisation procedures like gradient descent may still work. As far as we know, there have been no studies of the scalability of the active inference scheme up to now. \section{Conclusion} We have reconstructed the active inference approach of \citet{friston_active_2015} in a formally consistent way. We started by disentangling the components of inference and action selection. This disentanglement has allowed us to also remove the variational inference completely and formulate the pure Bayesian knowledge updating for the generative model of \citet{friston_active_2015}. We have shown in \cref{sec:urlmodel} that a special case of this model is equivalent to a finite version of the model used by the Bayesian universal reinforcement agent \citep{hutter_universal_2005}. We then pointed out how to approximate the pure Bayesian knowledge updating with variational inference. To formalise the notion of intrinsic motivations within this framework, we have introduced intrinsic motivation functions that take complete posteriors and future actions as inputs. These induce action-value functions similar to those used in reinforcement learning. The action-value functions can then be used for both the Bayesian and the variational agent in standard deterministic or softmax action selection schemes. Our analysis of the intrinsic motivations \emph{Expected Free Energy Maximisation}, \emph{Empowerment Maximisation}, \emph{Predictive Information Maximisation}, and \emph{Knowledge Seeking} indicates that there is significant common structure between the different approaches and that it may be possible to combine them.
At the time of writing, we have already made first steps towards using the present framework for a systematic quantitative analysis and comparison of the different intrinsic motivations. Eventually, such studies will shed more conclusive light on the computational requirements and emergent dynamics of different motivations. An investigation of the biological plausibility of different motivations might lead to different results and this is of equal interest. Beyond the comparison of different intrinsic motivations within an active inference framework, the present work can thus contribute to investigations on the role of intrinsic motivations in living organisms. If biological plausibility of active inference can be upheld, and maintained for alternative intrinsic motivations, then experimental studies might be derived to test differentiating predictions. If active inference was key to cognitive phenomena such as consciousness, it would be interesting to see how the cognitive dynamics would be affected by alternative intrinsic motivations. \section*{Conflict of Interest Statement} CG, CS, SS, and DP declare no competing interests. In accordance with Frontiers policy MB declares that he was employed by company Araya Incorporated, Tokyo, Japan. \section*{Author Contributions} MB, CG, CS, SS, and DP conceived of this study, discussed the concepts, revised the formal analysis, and wrote the article. MB contributed the initial formal analysis. \section*{Funding} CG is funded by EPSRC grant [EP/L015846/1] (IGGI). CS is funded by the EU Horizon 2020 programme under the Marie Sklodowska-Curie grant 705643. DP is funded in part by EC H2020-641321 socSMCs FET Proactive project. \section*{Acknowledgments} MB would like to thank Yen Yu for valuable discussions on active inference. \begin{appendices} \clearpage \crefalias{section}{appsec} \section{Posterior Factor} \label{sec:postfactorBI} Here we want to calculate the posterior factor $\q(\he_{\prec t}, \theta|sa_{\prec t},\xi)$ of the complete posterior in \cref{eq:posteriorgfa2} without an approximation (i.e.\ as in direct, non-variational Bayesian inference). \begin{align} \q(\he_{\prec t}, \theta|sa_{\prec t},\xi)&= \frac{1}{\q(s_{\prec t}|a_{\prec t},\xi)} \q(s_{\prec t},\he_{\prec t},\theta|a_{\prec t},\xi)\\ &=\frac{1}{\q(s_{\prec t}|a_{\prec t},\xi)} \q(s_{\prec t}|\he_{\prec t},{\theta^1})\q(\he_{\prec t}|a_{\prec t},{\theta^2},{\theta^3})\q(\theta|\xi)\\ &=\frac{1}{\q(s_{\prec t}|a_{\prec t},\xi)} \prod_{\tau=0}^t \q(s_\tau|\he_\tau,{\theta^1}) \prod_{r=1}^t \q(\he_r|a_r,\he_{r-1},{\theta^2}) \q(\he_0|{\theta^3}) \prod_{i=1}^3\q(\theta^i|\xi^i). \end{align} We see that the numerator is given by the generative model. 
The denominator can be calculated according to: \begin{align} \label{eq:evidence} \q(s_{\prec t}|a_{\prec t},\xi)& = \int_{{\Delta_\Theta}} \q(s_{\prec t}|a_{\prec t},\theta) \q(\theta|\xi) \mathop{\kern0pt\mathrm{d}}\!{} \theta\\ &= \int_{{\Delta_\Theta}} \left( \sum_{\he_{\prec t}} \q(\he_0|{\theta^3}) \prod_{\tau=0}^t \q(s_\tau|\he_\tau,{\theta^1}) \prod_{r=1}^t\q(\he_r|a_r,\he_{r-1},{\theta^2}) \right) \prod_{i=1}^3 \q(\theta^i|\xi^i) \mathop{\kern0pt\mathrm{d}}\!{} \theta\\ &=\sum_{\he_{\prec t}} \int_{{\Delta_\Theta}} \q(\he_0|{\theta^3}) \prod_{\tau=0}^t \q(s_\tau|\he_\tau,{\theta^1}) \prod_{r=1}^t\q(\he_r|a_r,\he_{r-1},{\theta^2}) \prod_{i=1}^3 \q(\theta^i|\xi^i) \mathop{\kern0pt\mathrm{d}}\!{} \theta\\ \begin{split} &=\sum_{\he_{\prec t}} \left( \int \q(\he_0|{\theta^3}) \q({\theta^3}|{\xi^3})\mathop{\kern0pt\mathrm{d}}\!{}{\theta^3} \int \prod_{\tau=0}^t \q(s_\tau|\he_\tau,{\theta^1}) \q({\theta^1}|{\xi^1})\mathop{\kern0pt\mathrm{d}}\!{} {\theta^1} \right. \\ &\phantom{=\sum_{\he_{\prec t}} \left(\right.} \left. \times \int \prod_{r=1}^t\q(\he_r|a_r,\he_{r-1},{\theta^2}) \q({\theta^2}|{\xi^2})\mathop{\kern0pt\mathrm{d}}\!{}{\theta^2} \right). \end{split} \end{align} The three integrals can be solved analytically if the $\q(\theta^i|\xi^i)$ are chosen as conjugate priors to $\q(s_\tau|\he_\tau,{\theta^1}),\q(\he_r|a_r,\he_{r-1},{\theta^2}),\q(\he_0|{\theta^3})$ respectively. However, the sum is over $|\sE|^t$ terms and therefore becomes intractable as time increases. \clearpage \section{Approximate Posterior Predictive Distribution} \label{sec:sgapost} Here, we calculate the (variational) approximation of the posterior predictive distribution $\q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},sa_{\prec t},\xi)$ from a given approximate complete posterior. This expression plays a role in multiple intrinsic motivation functions like empowerment maximisation, predictive information maximisation, and knowledge seeking.
For an arbitrary $\phi$ we have: \begin{align} \r(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}} ,\phi):&=\sum_{\he_{\prec t}} \int \q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},\he_{t-1},\theta) \r(\he_{\prec t},\theta|\phi) \mathop{\kern0pt\mathrm{d}}\!{} \theta \\ &=\sum_{\he_{t-1}} \int \q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},\he_{t-1},\theta) \r(\he_{t-1},\theta|\phi) \mathop{\kern0pt\mathrm{d}}\!{} \theta \\ &=\sum_{\he_{t-1}} \left(\int \q(\hs_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},\he_{t-1},\theta) \prod_{i=1}^3 \r(\theta^i|\phi^i) \mathop{\kern0pt\mathrm{d}}\!{} \theta \right) \r(\he_{t-1}|\phi^{E_{t-1}}) \\ \begin{split} &=\sum_{\he_{{t-1}}}\left(\sum_{\he_{t:{\hat{T}}}} \int \q(\hs_{t:{\hat{T}}}|\he_{t:{\hat{T}}},{\theta^1}) \r({\theta^1}|{\phi^1}) \mathop{\kern0pt\mathrm{d}}\!{} {\theta^1} \times \right.\\ &\phantom{=\sum_{\he_{{t-1}}}\left(\sum_{\he_{t:{\hat{T}}}}\right.}\left.\vphantom{\sum_{\he_{t:{\hat{T}}}}}\times \int \q(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},\he_{t-1}, {\theta^2}) \r({\theta^2}|{\phi^2}) \mathop{\kern0pt\mathrm{d}}\!{} {\theta^2} \right) \r(\he_{t-1}|\phi^{E_{t-1}}) % \end{split}\\ \begin{split} &=\sum_{\he_{t-1}} \left( \sum_{\he_{t:{\hat{T}}}} \int \prod_{\tau=t}^{{\hat{T}}} \q(\hs_\tau|\he_\tau,{\theta^1}) \r({\theta^1}|{\phi^1}) \mathop{\kern0pt\mathrm{d}}\!{} {\theta^1} \times\right.\\ &\phantom{=\sum_{\he_{t-1}}\left(\sum_{\he_{t:{\hat{T}}}}\right.}\left.\vphantom{\sum_{\he_{t:{\hat{T}}}}}\times \int \prod_{\tau=t}^{{\hat{T}}} \q(\he_\tau|\ha_\tau,\he_{r-1}, {\theta^2}) \r({\theta^2}|{\phi^2}) \mathop{\kern0pt\mathrm{d}}\!{} {\theta^2} \right)\r(\he_{t-1}|\phi^{E_{t-1}})% \end{split}\\ &=\sum_{\he_{t-1}} \sum_{\he_{t:{\hat{T}}}} \r(\hs_{t:{\hat{T}}}|\he_{t:{\hat{T}}},{\phi^1}) \r(\he_{t:{\hat{T}}}|\ha_{t:{\hat{T}}},\he_{t-1},{\phi^2}) \r(\he_{t-1}|\phi^{E_{t-1}}) \end{align} From first to second line we usually have to marginalize $\q(\he_{\prec t},\theta|sa_{\prec t},\xi)$ to $\q(\he_{t-1},\theta|sa_{\prec t},\xi)$ with a sum over all $|\sE|^{t-1}$ possible environment histories $\he_{{\prec t}-1}$. Using the approximate posterior, we can use $\r(\he_{t-1}|\phi^{E_{t-1}})$ directly without dealing with the intractable sum. From third to fourth line, $\r({\theta^3}|{\phi^3})$ drops out since it can be integrated out (and its integral is equal to one). Note that during the optimisation \cref{eq:vi} $\r({\theta^3}|{\phi^3})$ does play a role so it is not superfluous.% From fifth to last line, we perform the integration over the parameters ${\theta^1}$ and ${\theta^2}$. These integrals can be calculated analytically if we choose the models $\r({\theta^1}|{\phi^1})$ and $\r({\theta^2}|{\phi^2})$ as conjugate priors to $\q(s|e,{\theta^1})$ and $\q(e'|a',e,{\theta^2})$. Variational inference prediction of the next $n={\hat{T}}-t-1$ sensor values requires the sum and calculation of $|\shE|^n$ terms for $|\shS|^n$ possible futures. \clearpage \section{Notation Translation Tables} \label{appendix:translationTables} A table to translate between our notation and the one used in \citet{friston_active_2015}. The translation is also valid in many cases for \citet{friston_active_2016,friston_active_learning_2016,friston_active_curiosity_2017}. Some of the parameters shown here only show up in the latter publications. 
\label{sec:translationtable} \vspace{.2cm} \begin{tabularx}{\textwidth}{|L L X|} \hline \text{This article} & \text{\citet{friston_active_2015}} & Note \\ \hline e_t \in \sE & & Actual environment states\\ \he_t \in \shE & s_t\in S & Estimated/modelled environment states\\ s_t \in \sS & o_t \in \Omega & Actual/observed sensor or outcome values\\ \hs_t \in \shS = \sS & o_t \in \Omega & Estimated/modelled (usually future) sensor or outcome values. Note that the index $\tau$ instead of $t$ often indicates an estimated future sensor value in \citet{friston_active_2015}. \\ a_t \in \sA & a_t \in A & Actions\\ \ha_t \in \shA =\sA & u_t \in U & Contemplated (usually future) actions\\ m_t \in \sM & & Agent memory state\\ \ha_{t:{\hat{T}}} & \pi,\tilde{u} & $\pi$ and $\tilde{u}$ both uniquely specify future action sequences \\ \theta & \theta & Generative model parameters \\ \q(\hs|\he,{\theta^1})=\q(\hs|\he) & P(o|s)=\text{\textbf{A}}_{os} & Model sensor dynamics, not parameterised in \citet{friston_active_2015}, $\text{\textbf{A}}$ is a matrix representation\\ \q(\he'|\ha',\he,{\theta^2})=\q(\he'|\ha',\he) & P(s'|s,u)= \text{\textbf{B}}(u)_{s's} & Model environment dynamics, not parameterised in \citet{friston_active_2015}, $\text{\textbf{B}}(u)$ is a matrix representation for each possible action $u$\\ \q(\he_0|{\theta^3}) & P(s_0|m) = \text{\textbf{D}}_{s_0} & Modelled initial environment state, not parameterised in \citet{friston_active_2015}, $\text{\textbf{D}}$ is a vector representation. Note, the parameter $m$ is a fixed hyperparameter\\ \xi = ({\xi^1},{\xi^2},{\xi^3}) & m & Generative model hyperparam.\ or model parameter that subsumes all hyperparameters\\ {\xi^1} & & sensor dynamics hyperparam.\\ {\xi^2} & & Environment dynamics hyperparam.\\ {\xi^3} & & Initial environment state hyperparam.\\ {\xi^\gamma} & (\alpha,\beta) & Precision hyperparam.\\ (\phi,\phi^\gamma) & \mu & Variational param.\\ \phi^{E_{0:{\hat{T}}}} & \wideparen{s} & Environment states variational param., \\ {\phi^{E_\tau}} & \wideparen{s}_\tau & for each timestep $\tau$\\ {\phi^1} & & Sensor dynamics variational param.\\ {\phi^2} & & Environment dynamics variational param.\\ {\phi^3} & & Initial environment state variational param.\\ \pi & \wideparen{\pi} & Future action sequence variational param.\\ \phi^\gamma & \wideparen{\gamma} &Precision variational param.\\ {\hat{Q}}(\ha_{t:{\hat{T}}},\phi) & \text{\textbf{Q}}(\pi)=\text{\textbf{Q}}(\tilde{u}|\pi) & Variational action-value function. The dependence of $\text{\textbf{Q}}(\tilde{u}|\pi)$ on $\wideparen{s}_t$ is omitted\\ \p(s_{\preceq t},e_{\preceq t},a_{\prec t}) & R(\tilde{o},\tilde{s},\tilde{a}) & Our physical environment corresponds to the generative process\\ \q(\hs_{\preceq t},\he_{\preceq t},\ha_{t:{\hat{T}}},\gamma|a_{\prec t},\xi) & P(\tilde{o},\tilde{s},\tilde{u},\gamma|\tilde{a},m) & The generative model for active inference including $\gamma$ (which we mostly omit)\\ \r(\he_{0:{\hat{T}}},\ha_{t:{\hat{T}}},\gamma|\pi,\phi,\phi^\gamma) & Q(\tilde{s},\tilde{u},\gamma|\mu) & Approximate complete posterior for active inference\\ \p^d(\hs_\tau) & P(o_\tau|m) & Prior over future outcomes.\\ \hline \end{tabularx} Since our treatment is more general than that of \citet{friston_active_2015} and quite similar (though not identical) to the treatment in \citet{friston_active_2016,friston_active_learning_2016,friston_active_curiosity_2017} we also give the relations to variables in those publications. 
We hope this will help interested readers to understand the latter publications even if some aspects of those are different. A discussion of those differences is beyond the scope of the present article. \begin{tabularx}{\textwidth}{|L L X|} \hline \text{This article} & \text{\citet{friston_active_2016}} & Note \\ \hline e_t \in \sE & & Actual environment states\\ \he_t \in \shE & s_t\in S & Estimated/modelled environment states\\ s_t \in \sS & o_t \in \Omega & Actual/observed sensor or outcome values\\ \hs_t \in \shS = \sS & o_t \in \Omega & Estimated/modelled (usually future) sensor or outcome values. Note that the index $\tau$ instead of $t$ often indicates an estimated future sensor value in \citet{friston_active_2015}. \\ a_t \in \sA & u_t \in A & Actions\\ \ha_t \in \shA =\sA & u_t \in \varUpsilon & Contemplated (usually future) actions\\ m_t \in \sM & & Agent memory state\\ \ha_{0:{\hat{T}}} & \pi, & action sequences \\ \theta & \theta & Generative model parameters \\ {\theta^1} & \text{\textbf{A}} & Sensor dynamics param.\\ {\theta^2} & \text{\textbf{B}} & Environment dynamics param.\\ {\theta^3} & \text{\textbf{D}} & Initial environment state param.\\ \xi & \eta & Generative model hyperparam.\ or model parameter that subsumes all hyperparameters\\\\ {\xi^1} & a & sensor dynamics hyperparam.\\ {\xi^2} & b & Environment dynamics hyperparam.\\ {\xi^3} & d & Initial environment state hyperparam.\\ {\xi^\gamma} & \beta & Precision hyperparam.\\ (\phi,\phi^\gamma) & \boldsymbol{\eta} & Variational param.\\ \phi^{E_{0:{\hat{T}}}} & \text{\textbf{s}}_{{0:T}} & Environment states variational param. \\ \q(\he_\tau|\ha_{t:{\hat{T}}},a_{0:{t-1}},{\phi^{E_\tau}}) & (\text{\textbf{s}}_\tau^\pi)_{\he_\tau} & For each sequence of actions and for each timestep there is a parameter $\text{\textbf{s}}_\tau^\pi$. Since a categorical distribution is used, the parameter is a vector of probabilities whose entry $\he_\tau$ is equal to the probability of $\he_\tau$ if we set $\shE=\{1,...,|\shE|\}$\\ {\phi^1} & \text{\textbf{a}} & Sensor dynamics variational param.\\ {\phi^2} & \text{\textbf{b}} & Environment dynamics variational param.\\ {\phi^3} & \text{\textbf{d}} & Initial environment state variational param.\\ \pi & \boldsymbol{\pi} & Future action sequence variational param.\\ \phi^\gamma & \boldsymbol{\beta} &Precision variational param.\\ {\hat{Q}}(\ha_{t:{\hat{T}}},\phi) & -\text{\textbf{G}}(\pi) & Variational action-value function. The dependence of $\text{\textbf{G}}(\pi)$ on $\text{\textbf{s}}_{0:T}^\pi$ is omitted\\ \p(s_{\preceq t},e_{\preceq t},a_{\prec t}) & R(\tilde{o},\tilde{s},\tilde{a}) & Our physical environment corresponds to the generative process\\ \q(\hs_{\preceq t},\he_{0:{\hat{T}}},\ha_{0:{\hat{T}}},\gamma,\theta,\xi) & P(\tilde{o},\tilde{s},\pi,\gamma,\text{\textbf{A}},\text{\textbf{B}},\text{\textbf{D}}|a,b,d,\beta) & The generative model for active inference\\ \r(\he_{0:{\hat{T}}},\ha_{0:{\hat{T}}},\gamma,\theta|\pi,\phi^\gamma,\phi) & Q(\tilde{s},\pi,\text{\textbf{A}},\text{\textbf{B}},\text{\textbf{D}},\gamma|\text{\textbf{s}}^\pi_{0:{\hat{T}}},\boldsymbol{\pi},\text{\textbf{a}},\text{\textbf{b}},\text{\textbf{d}},\boldsymbol{\beta}) & Approximate complete posterior for active inference\\ \p^d(\hs_\tau) & P(o_\tau)=\sigma(\text{\textbf{U}}_\tau) & Prior over future outcomes.\\ \hline \end{tabularx} \end{appendices} \bibliographystyle{apalike} %
\section{Introduction} Understanding natural language requires the ability to pay attention to the most relevant information. For example, when reading, people tend to focus on the segments that are most relevant to the questions they have in mind. However, comprehension suffers if irrelevant segments distract the reader from the relevant ones. Such distraction hinders the understanding process, which calls for effective attention. This principle is also applicable to computational systems for natural language. Attention has been a vital component of the models for natural language understanding and natural language generation. Recently, \citet{transformer} proposed Transformer, a model based on the attention mechanism for Neural Machine Translation (NMT). Transformer has shown outstanding performance in natural language generation tasks. More recently, the success of BERT \citep{bert} in natural language processing shows the great usefulness of both the attention mechanism and the framework of Transformer. However, the attention in vanilla Transformer has an obvious drawback, as the model assigns credit to all components of the context. This causes a lack of focus. As illustrated in Figure~\ref{fig:attn_illu}, the attention in vanilla Transformer assigns high credit to many irrelevant words, while in Explicit Sparse Transformer, it concentrates on the most relevant $k$ words. For the word ``tim'', the most related words should be ``heart'' and the words immediately around it. Yet the attention in vanilla Transformer does not focus on them but gives credit to irrelevant words such as ``him''. Recent works have studied applying sparse attention in the Transformer model. However, they either add local attention constraints~\citep{child2019generating}, which break long-term dependencies, or hurt time efficiency~\citep{sparsemax}. Inspired by \citet{sab}, which introduces sparse credit assignment into the LSTM model, we propose a novel model called \textbf{Explicit Sparse Transformer}, which is equipped with our sparse attention mechanism. We implement an explicit selection method based on top-$k$ selection. Unlike vanilla Transformer, Explicit Sparse Transformer only pays attention to the $k$ most contributive states. Thus Explicit Sparse Transformer can perform more concentrated attention than vanilla Transformer. \begin{figure}[!tb] \centering \includegraphics[width=1.0\linewidth]{figures/attn1225.pdf} \caption{Illustration of self-attention in the models. The orange bar denotes the attention score of our proposed model, while the blue bar denotes the attention score of the vanilla Transformer. The orange line denotes the attention between the target word ``tim'' and the selected top-$k$ positions in the sequence. In the attention of vanilla Transformer, ``tim'' assigns too many non-zero attention scores to irrelevant words. In our proposal, keeping only the top-$k$ largest attention scores removes the distraction from irrelevant words, and the attention becomes concentrated.} \label{fig:attn_illu} \end{figure} We first validate our method on three tasks. For further investigation, we compare our method with previous sparse attention methods and experimentally answer how to choose $k$ in a series of qualitative analyses. Somewhat surprisingly, we find that the proposed sparse attention method can also help with training as a regularization method. Visual analysis shows that Explicit Sparse Transformer exhibits a higher potential for producing high-quality alignments.
The contributions of this paper are presented below: \begin{itemize} \item We propose a novel model called Explicit Sparse Transformer, which enhances the concentration of the Transformer's attention through explicit selection. \item We conducted extensive experiments on three natural language processing tasks, including Neural Machine Translation, Image Captioning and Language Modeling. Compared with vanilla Transformer, Explicit Sparse Transformer demonstrates better performance on all three tasks. \item Compared to previous sparse attention methods for Transformers, our method is much faster in training and testing, and achieves comparable results. \end{itemize} \section{Explicit Sparse Transformer} A review of the attention mechanism and the attention-based framework of Transformer can be found in Appendix \ref{background}. \label{sparsetransformer} \begin{figure} \centering \includegraphics[width=0.85\linewidth]{figures/framework_sparse.pdf} \caption{The comparison between the attentions of vanilla Transformer and Explicit Sparse Transformer and the illustration of the attention module of Explicit Sparse Transformer. With the mask based on top-$k$ selection and the softmax function, only the most contributive elements are assigned probabilities.} \label{fig:attn_comparison} \end{figure} Lack of concentration in the attention can lead to the failure of relevant information extraction. To this end, we propose a novel model, \textbf{Explicit Sparse Transformer}, which enables the focus on only a few elements through explicit selection. Compared with conventional attention, no credit is assigned to values that are not highly correlated with the query. We provide a comparison between the attention of vanilla Transformer and that of Explicit Sparse Transformer in Figure~\ref{fig:attn_comparison}. Explicit Sparse Transformer is still based on the Transformer framework. The difference is in the implementation of self-attention. The attention is degenerated to a sparse attention through top-$k$ selection. In this way, the most contributive components for attention are preserved and the irrelevant information is removed. This selective method is effective in preserving important information and removing noise. The attention can be much more concentrated on the most contributive elements of the value. In the following, we first introduce the sparsification in self-attention and then extend it to context attention. In unihead self-attention, the key components, the query $Q[l_{Q}, d]$, key $K[l_{K}, d]$ and value $V[l_{V}, d]$, are linear transformations of the source context, namely the input of each layer, where $Q = W_{Q}x$, $K = W_{K}x$ and $V = W_{V}x$. Explicit Sparse Transformer first generates the attention scores $P$ as demonstrated below: \begin{align} P &= \frac{QK^{\text{T}}} {\sqrt{d}} \end{align} Then the model evaluates the scores $P$ based on the hypothesis that scores with larger values indicate higher relevance. The sparse attention masking operation $\mathcal{M}(\cdot)$ is implemented upon $P$ in order to select the top-$k$ contributive elements. Specifically, we select the $k$ largest elements of each row in $P$ and record their positions in the position matrix $(i, j)$, where $k$ is a hyperparameter. To be specific, let the $k$-th largest value of row $i$ be $t_{i}$; if the value of the $j$-th component is no smaller than $t_i$, the position $(i, j)$ is recorded.
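For concreteness, the selection just described, together with the masking and normalisation steps formalised below, can be sketched in a few lines of NumPy. This is only an illustrative single-head sketch, not the implementation referred to in Appendix~\ref{implementation}.
\begin{verbatim}
# Illustrative single-head sketch of Explicit Sparse Transformer
# attention: top-k thresholding, masking, softmax, and output.
import numpy as np

def explicit_sparse_attention(Q, K, V, k):
    d = Q.shape[-1]
    P = Q @ K.T / np.sqrt(d)                  # attention scores
    # t_i = k-th largest score of row i (the threshold)
    t = np.sort(P, axis=-1)[:, -k][:, None]
    masked = np.where(P >= t, P, -np.inf)     # M(P, k)
    # row-wise softmax; masked entries get probability ~0
    A = np.exp(masked - masked.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)
    return A @ V                              # C = A V

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 16))
K = rng.normal(size=(7, 16))
V = rng.normal(size=(7, 16))
C = explicit_sparse_attention(Q, K, V, k=3)   # shape (5, 16)
\end{verbatim}
In a deep learning framework, the same effect is typically obtained with a top-$k$ operation followed by a masked softmax, so that gradients flow only through the selected positions.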
We concatenate the threshold value of each row to form a vector $t = [t_1, t_2, \cdots, t_{l_{Q}}]$. The masking function $\mathcal{M}(\cdot, \cdot)$ is defined as follows: \begin{align} \mathcal{M}(P, k)_{ij}&=\left\{ \begin{aligned} P_{ij} \ \ \ \ \text{if}\ P_{ij} \geq t_i \text{ ($k$-th largest value of row $i$)} \\ -\infty \ \ \ \ \text{if}\ P_{ij} < t_i \text{ ($k$-th largest value of row $i$)} \end{aligned} \right. \end{align} With the top-$k$ selection, the high attention scores are selected in an explicit way. This is different from dropout, which randomly abandons the scores. Such explicit selection can not only guarantee the preservation of important components, but also simplify the model, since $k$ is usually a small number such as $8$; a detailed analysis can be found in Section~\ref{select_k}. The next step after top-$k$ selection is normalization: \begin{align} A &= \mathrm{softmax}(\mathcal{M}(P, k)) \end{align} where $A$ refers to the normalized scores. As the scores that are smaller than the $k$ largest scores of each row are assigned negative infinity by the masking function $\mathcal{M}(\cdot, \cdot)$, their normalized scores, namely the probabilities, approximate 0. We show the back-propagation process of top-$k$ selection in~\ref{topk-bp}. The output representation of self-attention $C$ can be computed as below: \begin{equation} C = AV \end{equation} The output is the expectation of the value under the sparsified distribution $A$. By following the distribution over the selected components, Explicit Sparse Transformer obtains more focused attention. Such sparse attention also extends to context attention. Resembling but different from the self-attention mechanism, $Q$ is no longer a linear transformation of the source context but of the decoding states $s$. In the implementation, we replace $Q$ with $W_{Q}s$, where $W_{Q}$ is still a learnable matrix. In brief, the attention in our proposed Explicit Sparse Transformer sparsifies the attention weights. The attention can then become focused on the most contributive elements, and it is compatible with both self-attention and context attention. A simple implementation of this method is given in Appendix~\ref{implementation}. \begin{table}[tb] \centering \footnotesize \setlength{\tabcolsep}{4pt} \begin{tabular}{l c c c} \toprule \textbf{Model} & En-De & En-Vi & De-En \\ \midrule ConvS2S \citep{cnn_seq} &25.2 & - & - \\ Actor-Critic~\citep{Actor}&- &- & 28.5\\ NPMT+LM~\citep{NBMT}&- & 28.1 & 30.1\\ SACT~\citep{SACT} &- & 29.1 & -\\ Var-Attn~\citep{var_attn} &- &- & 33.7 \\ NP2MT~\cite{NP2MT} &- &30.6 & 31.7 \\ Transformer \citep{transformer} & 28.4 &- & -\\ RNMT \citep{chen2018best} & 28.5 & - & - \\ Fixup~\citep{fixup} &29.3 &- & 34.5 \\ Weighted Transformer \citep{wt} &28.9 & - & -\\ Universal Transformer \citep{ut} &28.9 & - & - \\ Layer-wise Coordination~\citep{NIPS2018_8019} & 29.1 & - & - \\ Transformer(relative position) \citep{shaw2018self} & 29.2 & - & -\\ Transformer \citep{ott2018scaling} & 29.3 & - & - \\ DynamicConv \citep{wu2019pay} & \textbf{29.7} & - & 35.2 \\ Local Joint Self-attention \citep{fonollosa2019joint} & \textbf{29.7} & - & \textbf{35.7} \\ \midrule Transformer(impl.) & 29.1 & 30.6 & 35.3 \\ Explicit Sparse Transformer & 29.4 & \textbf{31.1} & 35.6 \\ \bottomrule \end{tabular} \caption{Results on the En-De, En-Vi and De-En test sets. ``impl.'' denotes our own implementation.
} \label{table:ende} \end{table} \section{Results} We conducted a series of experiments on three natural language processing tasks, including neural machine translation, image captioning and language modeling. Detailed experimental settings are in Appendix~\ref{exp-detail}. \subsection{Neural Machine Translation} \paragraph{Dataset} To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks: English-to-German translation (En-De) with a large dataset, and English-to-Vietnamese (En-Vi) and German-to-English (De-En) translation with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used \textit{newstest 2013} for validation and \textit{newstest 2014} as our test set. We report the results on the test set. For En-Vi, we trained our model on the dataset of IWSLT 2015~\citep{envi}. The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for the source language is around 17,200 and that for the target language is around 7,800. We used \textit{tst2012} for validation and \textit{tst2013} for testing, and report the test results. For De-En, we used the dataset of IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences. Following \citet{risk}, we used the same test set with around 7K sentences. The data were preprocessed with byte-pair encoding \citep{BPE}. The vocabulary size is 14,000. \paragraph{Result} Table~\ref{table:ende} presents the results of the baselines and our Explicit Sparse Transformer on the three datasets. For En-De, Transformer-based models outperform the previous methods. Compared with the result of Transformer \citep{transformer}, Explicit Sparse Transformer reaches 29.4 in BLEU score evaluation, outperforming vanilla Transformer by 0.3 BLEU score. For En-Vi, vanilla Transformer\footnote{While we did not find reported results of Transformer on En-Vi, we reimplemented the vanilla Transformer with the same setting.} reaches 30.6, outperforming the previous best method \citep{NBMT}. Our model, Explicit Sparse Transformer, achieves a much better performance, 31.1, by a margin of 0.5 over vanilla Transformer. For De-En, we demonstrate that Transformer-based models outperform the other baselines. Compared with Transformer, our Explicit Sparse Transformer reaches a better performance of 35.6, an advantage of +0.3. To the best of our knowledge, Explicit Sparse Transformer reaches a top-line performance on the dataset. \subsection{Image Captioning} \paragraph{Dataset} We evaluated our approach on the image captioning task. Image captioning is a task that combines image understanding and language generation. We conducted experiments on the Microsoft COCO 2014 dataset \citep{coco}. It contains 123,287 images, each of which is paired with 5 descriptive sentences. We evaluate the image captioning model on the MSCOCO 2014 test set and report the results. Following previous works~\citep{updown_ic,Liu_2018}, we used the publicly-available splits provided by \citet{karpathy2015deep}. The validation set and test set both contain 5,000 images.
\begin{table}[tb] \centering \footnotesize \begin{tabular}{lccc} \toprule \multicolumn{1}{l}{\textbf{Model}} & \multicolumn{1}{c}{\textbf{BLEU-4}} & \multicolumn{1}{c}{\textbf{METEOR}} & \multicolumn{1}{c}{\textbf{CIDEr}} \\ \midrule SAT \cite{SAT} &28.2 &24.8 &92.3 \\ SCST \cite{SCST} &32.8 &26.7 &106.5 \\ NBT \cite{NBT} & 34.7 & 27.1 &107.2 \\ AdaAtt \cite{AdaAtt} &33.2 &26.6 &108.5 \\ ARNN \cite{ARNN} &33.9 &27.6 &109.8 \\ Transformer &35.3 &27.7 & 113.1 \\ UpDown \cite{Updown} & \textbf{36.2} & 27.0 & 113.5 \\ \midrule Explicit Sparse Transformer & 35.7 &\textbf{28.0} & \textbf{113.8} \\ \bottomrule \end{tabular} \caption{Results on the MSCOCO Karpathy test split. } \label{table:coco} \end{table} \paragraph{Result} Table~\ref{table:coco} shows the results of the baseline models and Explicit Sparse Transformer on the COCO Karpathy test split. Transformer outperforms the mentioned baseline models. Explicit Sparse Transformer outperforms the implemented Transformer by +0.4 in terms of BLEU-4, +0.3 in terms of METEOR, and +0.7 in terms of CIDEr, which consistently demonstrates its effectiveness in image captioning. \subsection{Language Modeling} \paragraph{Dataset} Enwiki8\footnote{http://mattmahoney.net/dc/text.html} is a large-scale dataset for character-level language modeling. It contains 100M bytes of unprocessed Wikipedia texts. The inputs include Latin alphabets, non-Latin alphabets, XML markups and special characters. The vocabulary size is 205 tokens, including one for unknown characters. We used the same preprocessing method following \citet{gated_feedback}. The training set contains 90M bytes of data, and the validation set and the test set contain 5M bytes each. \paragraph{Result} Table~\ref{table:enwiki8} shows the results of the baseline models and Explicit Sparse Transformer-XL on the test set of enwiki8. Compared with the other strong baselines, Transformer-XL reaches a better performance, and Explicit Sparse Transformer outperforms Transformer-XL by 0.01 BPC. \begin{table}[t] \footnotesize \centering \begin{tabular}{l|cc} \toprule \bf Model & \bf Params & \bf BPC \\ \midrule LN HyperNetworks \citep{ha2016hypernetworks} & 27M & 1.34 \\ LN HM-LSTM \citep{chung2016hierarchical} & 35M & 1.32 \\ RHN \citep{zilly2017recurrent} & 46M & 1.27 \\ Large FS-LSTM-4 \citep{mujika2017fast} & 47M & 1.25 \\ Large mLSTM \citep{krause2016multiplicative} & 46M & 1.24 \\ Transformer \citep{al2018character} & 44M & 1.11 \\ Transformer-XL \citep{dai2019transformer} & 41M & 1.06 \\ Adaptive-span \citep{Sukhbaatar_2019} &39M & \textbf{1.02} \\ \midrule Explicit Sparse Transformer-XL & 41M & 1.05 \\ \bottomrule \end{tabular} \caption{ Comparison with state-of-the-art results on enwiki8. Explicit Sparse Transformer-XL refers to Transformer-XL with our sparsification method. } \label{table:enwiki8} \end{table} \section{Discussion} \label{discussion} In this section, we perform several analyses for further discussion of Explicit Sparse Transformer. First, we compare the proposed method of top-$k$ selection before softmax with previous sparse attention methods, including various variants of sparsemax~\citep{sparsemax,Correia_2019,Peters_2019}. Second, we discuss the selection of the value of $k$. Third, we demonstrate that the top-$k$ sparse attention method helps training. In the end, we conduct a series of qualitative analyses to visualize the proposed sparse attention in Transformer.
\subsection{Comparison with other Sparse Attention Methods}\label{sparsemax} \begin{table}[tb] \centering \footnotesize \setlength{\tabcolsep}{2pt} \begin{tabular}{l c c c c} \toprule \multicolumn{1}{l}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{En-Vi}} & \multicolumn{1}{c}{\textbf{De-En}} & \multicolumn{1}{c}{\textbf{Training Speed (tokens/s)}} & \multicolumn{1}{c}{\textbf{Inference Speed (tokens/s)}} \\ \midrule Transformer &30.6 & 35.3 &49K & 7.0K \\ Sparsemax~\citep{sparsemax} &- & 31.2 & 39K & 3.0K \\ Entmax-1.5~\citep{Peters_2019} &30.9 & 35.6 & 40K & 4.9K \\ Entmax-alpha~\citep{Correia_2019} &- &35.5 & 13K & 0.6K \\ Proposal &31.1 &35.6 & 48K & 6.6K \\ \bottomrule \end{tabular} \caption{In the Transformer model, the proposed method, top-$k$ selection before softmax, is faster than previous sparse attention methods and is comparable in terms of BLEU scores.} \label{table:sparsemax} \end{table} We compare the performance and speed of our method with previous sparse attention methods\footnote{We borrow the implementation of Entmax-1.5 in Tensorflow from \url{https://github.com/deep-spin/entmax}, and the implementation of Sparsemax, Entmax-1.5, and Entmax-alpha in Pytorch from \url{https://gist.github.com/justheuristic/60167e77a95221586be315ae527c3cbd}. We have not found a reliable Tensorflow implementation of sparsemax and entmax-alpha in the Transformer (we tried to apply the official implementation of sparsemax in Tensorflow to tensor2tensor, but it reported a loss of NaN).} on the basis of a strong Transformer baseline implementation. Training and inference speed are reported on the PyTorch platform with the IWSLT 2014 De-En translation dataset; the batch size for inference is set to $128$ sentences, and half-precision training (FP16) is applied. As we can see from Table~\ref{table:sparsemax}, the proposed sparse attention method achieves results comparable to previous sparse attention methods, while being about 2x faster than Sparsemax and 10x faster than Entmax-alpha during inference. This is due to the fact that our method introduces little extra computation for calculating the sparse attention scores. The other group of sparse attention methods, which add local attention constraints~\citep{child2019generating,Sukhbaatar_2019}, do not report performance on neural machine translation, so we do not compare with them in Table~\ref{table:sparsemax}. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figures/topk_select.pdf} \caption{Analysis of the value of $k$ on the IWSLT En-Vi and De-En datasets. ``inf'' denotes the special case of Explicit Sparse Transformer where all positions may be attended to, which is the same as the original Transformer.} \label{fig:kvalue} \end{figure} \subsection{How to Select a Proper k?}\label{select_k} The natural question of how to choose the optimal $k$ comes with the proposed method. We compare the effect of the value of $k$ at exponential scales. We perform experiments on En-Vi and De-En with 3 different initializations for each value of $k$, and report the mean BLEU scores on the validation set. Figure~\ref{fig:kvalue} shows that, apart from the value $16$ on the En-Vi dataset, model performance generally rises first and then falls as $k$ increases. Among $k\in\{4,8,16,32\}$, setting the value of $k$ to $8$ achieves consistent improvements over the Transformer baseline.
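The selection protocol just described can be summarised as a small sweep. The sketch below is illustrative only; \texttt{train\_and\_eval} is a hypothetical callable standing in for a full training-plus-validation run and is not part of our code.
\begin{verbatim}
# Illustrative sketch of the k sweep: several random seeds per k,
# with the mean validation BLEU deciding the final value.
# `train_and_eval(k, seed)` is a hypothetical stand-in supplied
# by the caller and returns a validation BLEU score.
import statistics
from typing import Callable, Iterable, Tuple, Dict

def sweep_k(train_and_eval: Callable[[int, int], float],
            k_values: Iterable[int] = (4, 8, 16, 32),
            seeds: Iterable[int] = (0, 1, 2)) -> Tuple[int, Dict[int, float]]:
    mean_bleu = {}
    for k in k_values:
        mean_bleu[k] = statistics.mean(
            train_and_eval(k, seed) for seed in seeds)
    best_k = max(mean_bleu, key=mean_bleu.get)
    return best_k, mean_bleu
\end{verbatim}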
\subsection{Does the proposed sparse attention method help training?} We are surprised to find that adding the sparsification only in the training phase can also bring an improvement in performance. We test this idea on IWSLT En-Vi and report the results on the validation set in Table~\ref{table:TP}. The improvement of 0.3 BLEU score suggests that vanilla Transformer may be overparameterized and that the sparsification encourages the simplification of the model. \begin{table} \centering \footnotesize \setlength{\tabcolsep}{4pt} \begin{tabular}{l c c c} \toprule \multicolumn{1}{l}{\textbf{Task }} & \multicolumn{1}{c}{\textbf{Base}} & \multicolumn{1}{c}{\textbf{T}} & \multicolumn{1}{c}{\textbf{T\&P}} \\ \midrule En-Vi (BLEU) &27.4 &27.7 &27.8 \\ \bottomrule \end{tabular} \caption{Results of the ablation study of the sparsification at different phases on the En-Vi test set. ``Base'' denotes vanilla Transformer. ``T'' denotes adding the sparsification only in the training phase, and ``T\&P'' denotes adding it in both phases, as the implementation of Explicit Sparse Transformer does. } \label{table:TP} \end{table} \subsection{Does Explicit Sparse Transformer Attend Better?} To perform a thorough evaluation of our Explicit Sparse Transformer, we conducted a case study and visualized the attention distributions of our model and the baseline for further comparison. Specifically, we conducted the analysis on the test set of En-Vi and randomly selected a sample pair of attention visualizations of both models. The context attention of the decoder's bottom layer is visualized in Figure~\ref{fig:lead}. The attention distribution of the left figure is fairly dispersed. On the contrary, the right figure shows that the sparse attention can choose to focus only on several positions, so that the model can be forced to stay focused. For example, when generating the phrase ``for thinking about my heart'' (word-for-word translation from Vietnamese), the generated words cannot be aligned to the corresponding source words. As for Explicit Sparse Transformer, when generating the phrase ``with all my heart'', the attention can focus on the corresponding positions with strong confidence. \begin{figure}[t] \centering \subfigure[Attention of the bottom layer]{\label{fig:lead}\includegraphics[width=65mm]{figures/heat_1.pdf}} \qquad \subfigure[Attention of the top layer]{\label{fig:last}\includegraphics[width=60mm]{figures/heat_2.pdf}} ~ \caption{Figure~\ref{fig:lead} is the attention visualization of Transformer and Figure~\ref{fig:last} is that of Explicit Sparse Transformer. The red box shows that the attentions in vanilla Transformer at most steps are concentrated on the last token of the context.} \label{fig:heat} \end{figure} The visualization of the decoder's top layer is shown in Figure~\ref{fig:last}. From the figure, the context attention at the top layer of the vanilla Transformer decoder suffers from focusing on the last source token. This is a common behavior of the attention in vanilla Transformer. Such attention with wrong alignment cannot sufficiently extract relevant source-side information for the generation. In contrast, Explicit Sparse Transformer, with a simple modification of the vanilla version, does not suffer from this problem, but instead focuses on the relevant sections of the source context. The right figure, showing the attention distribution of Explicit Sparse Transformer, demonstrates that our proposed attention is able to perform accurate alignment.
\section{Related Work} The attention mechanism has demonstrated outstanding performance in a number of neural-network-based methods, and it has been a focus of NLP studies \citep{attn}. A number of studies have been proposed to enhance the effects of the attention mechanism \citep{stanford_attention, transformer, sab, zhao2019muse}. \citet{stanford_attention} propose local attention and \citet{local_self_attn} propose local attention for self-attention. \citet{show_attend_tell} propose hard attention that pays discrete attention in image captioning. \citet{hmn} propose combining soft attention with hard attention to construct a hierarchical memory network. \citet{SACT} propose a temperature mechanism to change the softness of the attention distribution. \citet{resan} propose an attention mechanism that can select a small proportion of elements to focus on; it is trained with reinforcement learning algorithms \citep{reinforce}. In terms of memory networks, \citet{rae2016scaling} propose sparse access memory. \citet{child2019generating} recently propose to use local attention and block attention to sparsify the Transformer. Our approach differs from theirs in that our method does not need to block sentences and still captures long-distance dependencies. Besides, we demonstrate the importance of Explicit Sparse Transformer in sequence-to-sequence learning. Although the variants of sparsemax~\citep{sparsemax,Correia_2019,Peters_2019} improve results in machine translation tasks, we empirically demonstrate in Section~\ref{sparsemax} that our method introduces less computation in the standard Transformer and is much faster than those sparse attention methods on GPUs. \section{Conclusion} In this paper, we propose a novel model called Explicit Sparse Transformer. Explicit Sparse Transformer is able to make the attention in vanilla Transformer more concentrated on the most contributive components. Extensive experiments show that Explicit Sparse Transformer outperforms vanilla Transformer in three different NLP tasks. We conducted a series of qualitative analyses to investigate the reasons why Explicit Sparse Transformer outperforms the vanilla Transformer. Furthermore, we identify an obvious problem with the attention at the top layer of the vanilla Transformer, and Explicit Sparse Transformer can alleviate this problem effectively with improved alignment effects.
\section{Introduction} The world we live in today is very diverse, with over 7,000 languages spoken across the globe\footnote{\url{https://www.ethnologue.com/guides/how-many-languages}}. These languages have varying traits and are spoken by communities of various sizes depending upon the popularity of the language. For example, Mandarin consists of over 50,000 \textit{hanzi} (characters) and is spoken by over 1.117 billion people\footnote{\url{https://www.berlitz.com/en-uy/blog/most-spoken-languages-world}}, while there are languages like Rotokas, an indigenous language spoken by about 4,320 people on the island of Bougainville, Papua New Guinea, whose alphabet consists of only 12 letters\footnote{\url{https://en.wikipedia.org/wiki/Rotokas_language}}. \begin{table}[] \tiny \centering \caption{Comparison of the proposed dataset with existing large-scale multi-lingual and multi-modal datasets.} \label{tab:comparison} \vspace{0.7em} \resizebox{7.8cm}{!}{ \begin{tabular}{cccc} \toprule \multicolumn{4}{c}{\textbf{Multi-lingual Summarization Datasets}} \\ \hline \multicolumn{1}{c}{\textbf{Dataset Name}} & \multicolumn{1}{c}{\textbf{Dataset Size}} & \multicolumn{1}{c}{\textbf{\#Languages}} & \multicolumn{1}{c}{\textbf{Domain}} \\ \hline \texttt{XL-Sum} \cite{hasan2021xl} & 1M & 44 & News \\ \texttt{MLSUM} \cite{scialom2020mlsum} & 1.5M & 5 & News \\ \texttt{WikiLingua} \cite{ladhak2020wikilingua} & 770K & 18 & Tutorials \\ \texttt{MLGSum} \cite{wang2021contrastive} & 1.1M & 12 & News \\ \hline \texttt{M3LS} (Ours) & 1.1M & 20 & News \\ \hline \multicolumn{4}{c}{\textbf{Multi-modal Summarization Datasets}} \\ \hline \multicolumn{1}{c}{\textbf{Dataset Name}} & \multicolumn{1}{c}{\textbf{Dataset Size}} & \multicolumn{1}{c}{\textbf{Modalities}} & \multicolumn{1}{c}{\textbf{Domain}} \\ \hline \texttt{MSMO} \cite{zhu2018msmo} & 314K & Text + Image & News \\ \texttt{E-DailyMail} \cite{chen2018abstractive} & 219K & Text + Image & News\\ \texttt{How2} \cite{sanabria2018how2} & 190K & Text + Video + Audio & Multiple Domains \\ \texttt{MMSS} \cite{li2018multi} & 66K & Text + Image & News \\ \texttt{VMSMO} \cite{li2020vmsmo} & 185K & Text + Video + Audio & Social Network\\ \hline \texttt{M3LS} (Ours) & 1.1M & Text + Image & News\\ \bottomrule \end{tabular} } \end{table} These languages, although crucial, restrict people to communicating their thoughts only to others who speak the same language. The gift of sight, however, is universally shared by every human being on this planet, irrespective of their culture, ethnicity, or the language that they speak. Through this work, we aim to instigate research towards improving existing automatic summarization systems by leveraging information from multiple languages and visual modalities. Various studies in the past have illustrated how unified summarization frameworks across multiple languages improve summarization quality over mono-lingual frameworks \cite{wang2021contrastive}. Similarly, there have been works in multi-modal summarization that illustrate how multi-modal input can help improve the quality of summarization over text-only summarization systems \cite{jangra2020text,jangra2020multi,chen2018abstractive,mukherjee-etal-2022-topic}. Additionally, having multiple modalities in the output summary can help improve the overall satisfaction of the user \cite{zhu2018msmo,jangra2021multi}. Multiple modalities can also compensate for the inability of individual modalities to express various aspects of the summary.
For instance, it is hard to express abstract concepts like ``freedom'' or ``gravity'' through images, while they can be expressed conveniently through text. Similarly, it is very difficult to describe a ``pangolin'' to someone who has not seen one beforehand. Hence, in this work we propose the task of Multi-modal Multi-lingual Summarization (\texttt{M3LS}), and also release the \texttt{M3LS} dataset\footnote{A sample of our dataset is available at \url{https://github.com/zenquiorra/M3LS}; the complete dataset will be released in the camera-ready version of this work.} to facilitate research in this direction. The dataset comprises 1.1M news articles spanning 20 languages: \textit{English}, \textit{Chinese}, \textit{Spanish}, \textit{Russian}, \textit{French}, \textit{Ukrainian}, \textit{Portuguese}, \textit{Japanese}, \textit{Tamil}, \textit{Hindi}, \textit{Marathi}, \textit{Gujarati}, \textit{Bengali}, \textit{Sinhala}, \textit{Urdu}, \textit{Pashto}, \textit{Indonesian}, \textit{Telugu}, \textit{Punjabi}, and \textit{Nepali}; this makes it the largest language-spanning multi-modal summarization dataset. To the best of our knowledge, the proposed dataset is the largest summarization dataset for 13 languages (\textit{Russian}, \textit{Ukrainian}, \textit{Tamil}, \textit{Hindi}, \textit{Marathi}, \textit{Gujarati}, \textit{Bengali}, \textit{Sinhala}, \textit{Urdu}, \textit{Pashto}, \textit{Telugu}, \textit{Punjabi}, and \textit{Nepali}). We hope that the proposed task and dataset will instigate and inspire multi-modal and multi-lingual research in less-explored languages for solving various tasks, including but not limited to automatic summarization \cite{Nallapati2016AbstractiveTS,DBLP:journals/corr/SeeLM17}, article headline generation \cite{jin2020hooks,gavrilov2019self,zhang2018question}, keyword extraction \cite{showrov2019keyword,lee2008news,yao2019research}, image caption generation \cite{xu2015show,bai2018survey}, multi-modal embedding generation \cite{sun2019videobert,lu2019vilbert,li2019visualbert,zhou2020unified}, and large-scale language modeling \cite{raffel2020exploring,devlin2018bert}. The major contributions of this work are as follows: \textit{1) We propose the multi-modal multi-lingual summarization (\texttt{M3LS}) task. 2) We release the largest multi-modal summarization dataset, which spans 20 languages. 3) The proposed dataset is the largest text summarization dataset for 13 languages. 4) To the best of our knowledge, we present the first multi-modal cross-lingual dataset (consisting of Japanese-to-English and English-to-Japanese). 5) We provide multi-modal summarization baseline results for our dataset and a detailed analysis of the dataset.} \section{Related Work} The field of text summarization is more than five decades old \cite{Edmundson1969NewMI}, and has evolved to a great extent in recent years. Prior to the advances in sequence-to-sequence frameworks \cite{sutskever2014sequence}, work mainly focused on extractive summarization techniques that aim to generate a summary by extracting words, phrases, or sentences \cite{mihalcea2004textrank, saini2019extractive, alguliev2010multi}. \citet{DBLP:journals/corr/SeeLM17} proposed the Pointer-Generator Network, an attentive recurrent neural network based framework \cite{bahdanau2015neural}. Recent years have seen great progress in automatic summarization research leveraging transformer-based models \cite{zhang2020pegasus,devlin2018bert} and the attention mechanism \cite{vaswani2017attention}.
In this section, we discuss related work on multi-modal datasets and multi-lingual datasets. A detailed size comparison of these datasets with \texttt{M3LS} is shown in Table \ref{tab:comparison}. \subsection{Multi-modal summarization datasets} Multi-modal summarization is the task of summarizing content comprising two or more input modalities. The output can be uni-modal or multi-modal depending on the task. In this section, we discuss existing large-scale multi-modal summarization datasets proposed in the community. We point the readers to \citet{jangra2021survey} for a comprehensive survey. \textbf{MSMO}: \citet{zhu2018msmo} proposed a multi-modal summarization dataset that consists of text and images. The dataset is obtained from the \texttt{DailyMail}\footnote{\url{https://www.dailymail.co.uk/home/index.html}} website and contains 314,581 instances in English. However, \citet{hasan2021xl} illustrated that the \texttt{DailyMail} news highlights lack novel n-grams. \citet{fabbri2021summeval} also highlighted the inconsistent quality of some reference summaries in the \texttt{CNN/DailyMail} dataset \cite{Nallapati2016AbstractiveTS}. \textbf{E-DailyMail}: \citet{chen2018abstractive} proposed the E-DailyMail dataset, which contains text and images extracted from the \texttt{DailyMail} website. The dataset consists of 219,100 instances in English, containing the input document, article title, images, and image captions. \textbf{How2}: \citet{sanabria2018how2} proposed a multi-modal summarization dataset consisting of text, video, and audio modalities; it contains over 2,000 hours of videos accompanied by the corresponding audio and speech transcriptions. \textbf{MMSS}: \citet{li2018multi} proposed a multi-modal summarization dataset consisting of text and images with the aim of proposing an image-aided sentence summarization framework. The dataset has 66K instances in English, generated by extracting sentence-headline pairs from the \texttt{Gigaword} corpus\footnote{\url{ https://github.com/harvardnlp/sent-summary}}. \textbf{VMSMO}: To the best of our knowledge, \citet{li2020vmsmo} proposed the first large-scale asynchronous text-audio-video summarization dataset. The dataset is generated from the popular microblogging platform \texttt{Sina Weibo}\footnote{http://ir.weibo.com/} and comprises 184,920 instances in Chinese. Similar trends of incorporating multiple modalities can also be noticed in several language tasks such as question answering \cite{singh2021mimoqa}, translation \cite{elliott2017imagination}, sentiment analysis \cite{soleymani2017survey}, lexico-semantic classification \cite{jha2022combining}, and keyword extraction \cite{verma2022maked}. \subsection{Multi-lingual Text Summarization Datasets} Interest in studying the benefits of summarization across different languages has grown over the past few years. There has been a lot of research in bilingual settings; however, in this work, we limit ourselves to discussing multi-lingual summarization datasets to be concise. \textbf{MLSUM}: \citet{scialom2020mlsum} proposed the \texttt{MLSUM} dataset, which consists of 1.5 million news articles obtained from the \texttt{Dailymail/CNN} websites. The dataset spans five languages: French, German, Spanish, Russian and Turkish.
\textbf{XL-Sum}: \citet{hasan2021xl} proposed the \texttt{XL-Sum} dataset, which consists of 1.35 million articles in 44 languages obtained from \texttt{BBC news}, making it the most language-diverse summarization dataset to date. However, 25 of these 44 languages do not contain even 10,000 instances, which is insufficient to train any language model. \textbf{WikiLingua}: \citet{ladhak2020wikilingua} proposed the \texttt{Wikilingua} dataset, which is the largest parallel multi-lingual summarization dataset to date. The dataset consists of 770K instances in English, and is extended to 17 other languages for varying numbers of English articles. \textbf{MLGSum}: \citet{wang2021contrastive} proposed the \texttt{MLGSum} dataset, which consists of articles from various news providers such as \texttt{BBC}, \texttt{france243} and \texttt{select faz}. The dataset has five high-resource and seven low-resource languages, with a total of 1.1 million instances, and is a rich source for text summarization in German with around 500K instances. We observe that multiple popular datasets (see Table \ref{tab:comparison}) in multi-modal summarization and multi-lingual summarization are useful for both technique evaluation and technique improvement. However, the combined field of multi-lingual multi-modal summarization has remained largely unexplored, which can be attributed to the lack of a dedicated high-quality dataset and of a formal problem statement. Hence, we formally define the \texttt{M3LS} task and then discuss the dataset addressing the problem. \section{M3LS Task} For each language $l_k \in L$, where $L$ is the set of all languages, we have data $M^{l_k} = <T^{l_k} , I^{l_k}>$, where $T^{l_{k}}=\left\{t_{1}^{l_{k}}, t_{2}^{l_{k}}, \ldots, t_{|T|}^{l_{k}}\right\}$ is a set of documents, and $I^{l_k}=\{ I^{t_{1}^{l_{k}}}, I^{t_{2}^{l_{k}}}, \ldots, I^{t_{|T|}^{l_{k}}}\}$ is a set of images, where $I^{t_{j}^{l_{k}}}=\left\{i_{1}, i_{2}, \ldots, i_{|I|}\right\}^{t_{j}^{l_{k}}}$ denotes the set of images belonging to the document $t_j^{l_k} \in T^{l_k}$ and $|.|$ denotes the cardinality of a set. The task is to obtain a function $F$ that maps documents $t_{j}^{l_{k_1}} \in T^{l_{k_1}}$ in language $l_{k_1}$, along with their corresponding images $I^{t_j^{l_{k_1}}} \in I^{l_{k_1}}$, to a set of multi-modal summaries in a target language $l_{k_2}$, comprising text summaries (denoted by $O^{l_{k_2}}$) along with images from the input (denoted by $I^{l_{k_1}}$). \begin{equation} F:<T^{l_{k_1}}, I^{l_{k_1}}> \rightarrow <O^{l_{k_2}}, I^{l_{k_1}}> \end{equation} When $k_1\neq k_2$, we have multi-modal cross-lingual summarization; otherwise, the task is multi-modal mono-lingual summarization. A graphical representation of the task is shown in Figure \ref{fig:box}. \begin{figure*}[] \centering \includegraphics[width=0.7\textwidth]{images/M3LS.pdf} \vspace{-4mm} \caption{Proposed \texttt{M3LS} task.} \label{fig:box} \vspace{-1em} \end{figure*} \section{M3LS Dataset} Through the \texttt{M3LS} task, we motivate the need for a multi-modal multi-lingual dataset by studying the developments in summarization techniques such as secondary enhancements using images with multi-modal output \cite{zhu2018msmo}, video-based multi-modal summarization \cite{li2020vmsmo} and multi-objective optimization \cite{jangra2020multi}.
On the other hand, multi-lingual transformer-based models like \citet{xue2020mt5} have publicly available checkpoints fine-tuned for multiple language modelling tasks, including multi-lingual summarization. Developing such models requires high-quality heterogeneous data, and improving models that utilize multi-modal shared attention layers requires annotated data with image-text pairs for the specific language task. To address these issues, we present \texttt{M3LS}; in this section, we discuss the various steps involved in its construction. \begin{figure}[h] \includegraphics[width=0.4\textwidth]{images/Data_Example.pdf} \caption{Snapshot of the format of a webpage used in the development of M3LS, and the various features extracted during the scraping procedure.} \label{fig:webpage} \end{figure} \subsection{Dataset Construction} We explore the news domain, as it is one of the most abundant and readily available domains and covers articles on multiple topics, describing events while largely avoiding extreme bias. We analyzed the structure of articles and surveyed multiple news providers before settling on \texttt{BBC News}, which provides full-sentence summaries in a uniform, structured format across multiple languages. The summaries are professionally created by the article's author, which ensures the quality of the data. Below, we explain the steps involved in creating the \texttt{M3LS} dataset and discuss various aspects of the data. {\bf BBC News}: \texttt{BBC News}\footnote{\url{https://www.bbc.com/news}} is a division of the British Broadcasting Corporation responsible for gathering and broadcasting current news affairs. Each BBC news article has a text summary comprising complete sentences in the present tense, avoiding opinions and sensationalism. We cover 20 different languages, with summaries written in the corresponding languages. We extract data from various parts of the webpage, as shown in Figure \ref{fig:webpage}. {\bf Obtaining Articles}: We obtain links to articles from the corresponding \texttt{Twitter\footnote{\url{https://twitter.com/bbc}}} pages for each BBC language news feed. To extend the dataset, we scrape\footnote{Data is collected in accordance with the terms and conditions mentioned on the website} valid links\footnote{A link is valid if it contains a BBC article summary for the corresponding domain.} obtained from the parsed articles of each language. The final collection of links is scraped separately using \texttt{scrapy}\footnote{\url{https://scrapy.org}} to obtain the final dataset. Since these links are showcased on the corresponding \texttt{Twitter} pages, they point to articles on topics of interest and of high popularity. We further extend the dataset by recursively extracting links from suggestions and hyperlinks within a webpage. \textbf{Structuring the Data}: We obtain various features from the webpage, as shown in Figure \ref{fig:webpage}, and compile them in a \texttt{JSON} format; we also provide a dedicated parser, instructions and a tutorial for easy access to the features of any instance. The data is freely available for use in accordance with the terms and conditions of \texttt{BBC News}; we discuss this in detail at the same link where our dataset is uploaded.
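As an illustration of this structured format, a single record might look roughly as follows. The field names below are hypothetical placeholders chosen for this sketch and do not necessarily match the exact schema shipped with the released parser.
\begin{verbatim}
# Hypothetical example of a single M3LS record in JSON form.
# Field names are illustrative only, not the official schema.
import json

record = {
    "language": "hi",
    "url": "<link to the BBC article>",
    "title": "<article headline>",
    "text": "<full article body>",
    "summary": "<author-written summary>",
    "keywords": ["<keyword 1>", "<keyword 2>"],
    "images": [
        {"src": "<image url>", "caption": "<image caption>"}
    ],
    "summary_image": "<url of the image paired with the summary>",
    "related_links": ["<url of a related article>"],
}
print(json.dumps(record, ensure_ascii=False, indent=2))
\end{verbatim}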
{\bf Text Validation}: In order to ensure high-quality text from the source, we manually read 10 instances per language\footnote{For languages unknown to the authors, we use \texttt{Google Translate} (\url{https://translate.google.com}) to translate the content into English.} from the collected links to verify that the articles are descriptive in nature and consist of text written in complete sentences. \noindent\textsc{\bf Summary Validation}: We manually checked the summary quality of 100 articles each in 4 languages\footnote{We restrict ourselves to 4 languages (\texttt{English}, \texttt{Hindi}, \texttt{Bengali} and \texttt{Marathi}) due to the languages understood by the authors of this work.} from our dataset and validated whether the given summary captures the information represented in the text. For every article, after carefully reading the text, we assign the gold summary a score between 1 and 5, with 5 denoting the best possible summary that captures most of the important information of the article; we also take into account parameters like the summary length and the length of the article. We observe that more than 70 of the 100 articles in each evaluated language obtain a score above 4 out of 5 in our analysis. Given the uniformity of articles published by BBC across multiple domains, we expect this to hold for every language in our dataset. \noindent\textsc{\bf Final Dataset}: In the final dataset, each news article contains the text document, images with corresponding captions, keywords, links to related news articles, and a multi-modal summary comprising a few sentences and an image. \noindent\textsc{\bf Cross-lingual Dataset}: Our cross-lingual dataset contains all features of our final dataset, along with multi-modal summaries consisting of text in another language. It is obtained from the links given by the author within a Japanese-language article to the corresponding English-language article. We manually checked the information provided in both articles using \texttt{Google Translate}\footnote{\url{https://google.com/translate}} for 100 instances to verify the similarity of the content and summaries provided. \noindent\textbf{Train-Test-Validation split:} The dataset has 1.2 million news articles, which we split into 80\% training, 10\% test and 10\% validation for languages having $\leq$ 50,000 articles; otherwise, we use 90\% of the data for training, 5\% for testing and 5\% for validation. \section{Dataset Analysis} \subsection{Overview} The \texttt{M3LS} dataset has 1.11M+ multi-lingual multi-modal instances across 20 languages and over 9K cross-lingual multi-modal instances for the \texttt{English}-\texttt{Japanese} language pair. The dataset can be categorized into 8 high-resource languages and 12 low-resource languages\footnote{The categorization is done based on a threshold value of 50k data instances.} (refer to Appendix \ref{sec:appendix-B} for more details). The chosen languages originate from different parts of the globe and belong to 5 different language families: \textit{Indo-European}, \textit{Austronesian}, \textit{Japanic}, \textit{Dravidian}, and \textit{Sino-Tibetan}. The \texttt{M3LS} dataset is quite diverse, with the fewest articles for \texttt{Sinhala} (10,148) and the most articles for \texttt{English} (376,367). The dataset becomes even more complex and challenging due to the different sizes of input documents across languages, with document size varying from around 330 tokens to over 2,800 tokens.
The dataset articles cover a wide time span, from 2009 to 2021 (refer to Appendix \ref{sec:appendix-A} for more details). We hope that the \texttt{M3LS} dataset will instigate and inspire research in less-explored languages, since 14 of the 20 languages covered by the dataset are among the top-20 most spoken languages in the world\footnote{\url{https://lingua.edu/the\%2D20\%2Dmost\%2Dspoken\%2Dlanguages\%2Din\%2Dthe\%2Dworld\%2Din\%2D2022/}}; this diversity helps in modelling tasks for both well-explored and less-explored languages. \subsection{Dataset Comparison} To study the size and span of our dataset, we compare \texttt{M3LS} with other summarization datasets extracted from the \texttt{BBC News} domain. We found that \texttt{XSum} contains 53\% of the tokens of our dataset, while \texttt{XL-Sum} contains 58\% of the tokens of our dataset across all languages present in \texttt{M3LS}. However, both are uni-modal in nature, and \texttt{XSum} is additionally uni-lingual. We observe that \texttt{M3LS} is considerably larger than \texttt{XSum}, and exceeds \texttt{XL-Sum} by a factor of 2--3 for almost all individual languages. Both of these datasets have been used to train and fine-tune several state-of-the-art summarization models like \texttt{Pegasus}; hence, we believe that \texttt{M3LS} will offer wider and better language modelling support in terms of size and diversity for the languages present in it, with the additional benefit of multi-modality. \section{Experiments} \subsection{Setup} Depending upon the number of instances in each language within \texttt{M3LS}, we perform a train:test:validation split with a ratio of 80:10:10 if the number of instances is below 50K and 90:5:5 otherwise. To conduct our experiments in a multi-lingual setting, we surveyed publicly available tokenizers and sentence segmenters for multiple languages and combined them within one dedicated package for our experiments. We further define, within our package\footnote{\url{https://github.com/zenquiorra/TokSeg}}, a set of rules for sentence segmentation for languages lacking such support from external packages. We compile our package using \texttt{segtok}\footnote{\url{https://pypi.org/project/segtok/1.1.0/}} for Indo-European languages, \texttt{IndicNLP}\footnote{\url{https://github.com/anoopkunchukuttan/indic_nlp_library}} for Indian languages, \texttt{fugashi} \cite{mccann-2020-fugashi} for Japanese (\texttt{ja}) and \texttt{chinese}\footnote{\url{https://pypi.org/project/chinese/}} for Chinese. For data pre-processing steps such as stopword removal, we collect stopwords from the \texttt{nltk}\footnote{\url{https://nltk.org/}} package and the publicly available stopword lists in the \texttt{spaCy}\footnote{\url{https://github.com/explosion/spaCy/tree/master/spacy/lang}} repository for all languages, in a centralized pipeline for our experiments. We evaluated the performance of various summarization techniques on our dataset, including simpler techniques such as \texttt{LEAD-3} and \texttt{RANDOM}, which have proven to be quite useful in the past \cite{ghalandari2020large,scialom2020mlsum,sharma2019bigpatent}. We have also included the statistics-based \texttt{CENTROID} \cite{radev2004centroid} and graph-based \texttt{TextRank} \cite{mihalcea2004textrank} techniques.
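As an illustrative sketch, and not the exact code used in our pipeline, the \texttt{LEAD-3} baseline for a Latin-script language could be written as follows, using \texttt{segtok} for sentence segmentation; other scripts would dispatch to their corresponding segmenters mentioned above.
\begin{verbatim}
# Illustrative sketch of the LEAD-3 baseline, assuming segtok
# for sentence segmentation of Latin-script languages.
from segtok.segmenter import split_single

def lead_3(document: str, n: int = 3) -> str:
    # Segment into sentences and keep the first n as the summary.
    sentences = [s.strip() for s in split_single(document) if s.strip()]
    return " ".join(sentences[:n])

print(lead_3("First sentence. Second sentence. Third one. Fourth one."))
\end{verbatim}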
To have a fair comparison across multiple languages using a shared dedicated model, we have evaluated the performance of an abstractive technique in a multi-lingual setting, utilizing a pre-trained summarization checkpoint\footnote{\url{https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum}} of the transformer-based \texttt{MT5} \cite{xue2020mt5} model. Finally, to explore the multi-modal aspect of our dataset, we evaluate the performance of a multi-modal encoder-decoder based technique \cite{zhu2018msmo} that utilizes images and text to generate a multi-modal text summary. However, the publicly available implementation\footnote{We use the implementation provided by the authors, which is a multi-layered package; modifying it to be compatible with a multi-lingual setting is not feasible given its software complexity.} of \texttt{MSMO} restricts us to evaluating it only for the English language. To put this score in context, we also evaluate the performance of three state-of-the-art transformer-based summarization models compatible with the English language: \texttt{Pegasus} \cite{zhang2020pegasus}, \texttt{BART} \cite{lewis2020bart}, and \texttt{T5} \cite{raffel2020exploring}. Since two of the pre-trained models we described above are fine-tuned on the \texttt{XSum} and \texttt{XL-Sum} datasets, which are extracted from the same source (\texttt{BBC News}), we avoid fine-tuning any of the models in order to keep the comparison fair, and we account for this when discussing the scores. In all techniques, we set the generated summary length threshold to the average length of the gold summaries for the corresponding language in our corpus. \subsection{Baselines} \textsc{\bf Simpler Extractive Approaches} \textsc{LEAD-3}: In this baseline, the first three sentences of the source text are extracted as the final summary. This method is a robust baseline, as shown by \cite{sharma2019bigpatent} for news summarization datasets. \textsc{RANDOM}: We repeatedly extract words at random from the source text until the threshold summary length is reached. The aim of this baseline is to provide an unbiased point of reference against which the other baselines can be compared. \noindent\textsc{\bf Statistical Approach} \textsc{CENTROID}: We use the strategy proposed by \citet{radev2004centroid}, which ranks sentences based on the centrality scores of the words they contain. We use TF-IDF scores for each word, and extract the top-ranked sentences until the threshold summary length is reached. \noindent\textsc{\bf Graph Based Approach} \textsc{TextRank}: TextRank \cite{mihalcea2004textrank} is an unsupervised graph-based ranking technique based on the relevance of sentences in the source text.\footnote{We use the implementation provided by the \texttt{gensim} package (\url{https://radimrehurek.com/gensim_3.8.3/summarization/summariser.html}) and modify the segmentation and tokenization using our dedicated package.} We consider the sentences that are most central to the document according to this ranking as the generated summary. \noindent\textsc{\bf RNN Based Approach} \textsc{MSMO}: MSMO \cite{zhu2018msmo} is an encoder-decoder model trained for multi-modal summarization. It utilizes a multi-modal attention mechanism to generate multi-modal summaries from text and images. \noindent\textsc{\bf Transformer Based Approaches} \textsc{MT5}: MT5 \cite{xue2020mt5} is a transformer-based seq2seq model pre-trained for multiple natural language tasks.
We use the publicly available checkpoint\footnote{\url{https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum}} pre-trained for text summarization on the \texttt{XL-Sum} dataset \cite{hasan2021xl} for a multi-lingual setting. \textsc{PEGASUS}: Pegasus \cite{zhang2020pegasus} is a transformer-based model, pre-trained on a task to remove meaningful sentences from an input text, making it suitable for summarization. We used a checkpoint\footnote{\url{https://huggingface.co/google/pegasus-xsum}} of \texttt{PEGASUS} model pre-trained on the XSum dataset \cite{narayan2018don} for summarization. \textsc{BART}: BART \cite{lewis2020bart} uses a standard seq2seq architecture with a bi-directional encoder and a left-to-right decoder. We use a pre-trained model trained on the DailyMail/CNN \cite{Nallapati2016AbstractiveTS} for our evaluation. \textsc{T5}: T5 \cite{raffel2020exploring} is an encoder-decoder model trained on a mixture of natural language tasks, including translation and summarization; it converts any task into a text-to-text format. We use the pre-trained \texttt{T5-large} model for the summarization task. \begin{table}[] \caption{Comparison of ``ROUGE" f\-scores for summaries generated using Multi-modal baseline \texttt{MSMO} and Uni\-modal transformer based baselines against gold summaries from the English language dataset . ``R-f1" denotes ROUGE-1 f\-score, ``R-f2" denotes ROUGE\-2 f\-score, ``R-fL" denotes the ROUGE\-L f\-score, and ``BrS" denotes BERTSCORE.} \centering \begin{tabular}{|l|c|c|c|c|} \hline \textbf{English} & \textbf{R-f1} & \textbf{R-f2} & \textbf{R-fL} & \textbf{BrS} \\ \hline \texttt{BART} & 0.195 & 0.031 & 0.131 & 0.863 \\ \texttt{Pegasus} & \textbf{0.389} & \textbf{0.181} & \textbf{0.321} & \textbf{0.910} \\ \texttt{T5} & 0.197 & 0.0328 & 0.131 & 0.858 \\ \texttt{MSMO} & 0.217 & 0.046 & 0.158 & 0.851 \\ \hline \end{tabular} \label{tab:english-results} \end{table} \section{Results and Discussion} We evaluate the generated summaries against the gold summaries using the \texttt{ROUGE} \cite{lin2004rouge} evaluation metric. We report the \texttt{ROUGE-1}, \texttt{ROUGE-2}, \texttt{ROUGE-L} f-scores across every baseline discussed above \cite{lin2004rouge} (refer to Tables \ref{tab:comparison} and \ref{tab:tab-scores}). We additionally report \texttt{BERTSCORE} for English baselines \cite{zhang2019bertscore} (refer to Table \ref{tab:comparison}). \begin{table*}[] \caption{Performance of various techniques for summarization against the \texttt{M3LS} dataset gold summaries for every language. 
``Lang" refers to the language code for a language according to the ISO 639-1 standard, ``R-f1" refers to the \texttt{ROUGE-1} f-scores, ``R-f2" refers to the \texttt{ROUGE-2} f-scores, ``R-fL" refers to the \texttt{ROUGE-L} f-scores} \resizebox{\textwidth}{!}{ \begin{tabular}{lrrrrrrrrrrrrrrr} \toprule Base & \multicolumn{3}{c}{\textbf{Random}} & \multicolumn{3}{c}{\textbf{LEAD-3}} & \multicolumn{3}{c}{\textbf{TextRank}} & \multicolumn{3}{c}{\textbf{CENTROID}} & \multicolumn{3}{c}{\textbf{MT5}} \\ Lang & R-f1 & R-f2 & R-fL & R-f1 & R-f2 & R-fL & R-f1 & R-f2 & R-fL & R-f1 & R-f2 & R-fL & R-f1 & R-f2 & R-fL \\ \midrule bn & 0.003 & 0.000 & 0.002 & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & \textbf{ 0.004} & 0.001 & 0.003 \\ mr & 0.013 & 0.000 & 0.012 & 0.041 & 0.005 & 0.040 & 0.025 & 0.002 & 0.025 & 0.006 & 0.001 & 0.006 & \textbf{ 0.044} & 0.005 & \textbf{0.044} \\ gu & 0.014 & 0.001 & 0.014 & \textbf{ 0.039} & 0.005 & 0.038 & 0.014 & 0.001 & 0.014 & 0.016 & 0.002 & 0.016 & 0.036 & 0.005 & 0.036 \\ ps & 0.002 & 0.000 & 0.001 & 0.000 & 0.000 & 0.000 & 0.002 & 0.000 & 0.001 & 0.000 & 0.000 & 0.000 & \textbf{ 0.003} & 0.000 & 0.001 \\ uk & 0.030 & 0.002 & 0.029 & 0.062 & 0.016 & 0.061 & 0.043 & 0.010 & 0.042 & 0.032 & 0.006 & 0.032 & \textbf{0.094} & 0.025 & \textbf{ 0.094} \\ pt & 0.179 & 0.009 & 0.114 & 0.204 & 0.033 & 0.124 & 0.199 & 0.030 & 0.128 & 0.089 & 0.008 & 0.075 & \textbf{0.276} & 0.085 & 0.193 \\ id & 0.118 & 0.001 & 0.083 & 0.172 & 0.037 & 0.117 & 0.144 & 0.030 & 0.104 & 0.104 & 0.014 & 0.080 & \textbf{ 0.289 }& 0.115 & 0.233 \\ ne & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ pa & 0.012 & 0.000 & 0.012 & \textbf{0.038} & 0.004 & \textbf{0.038} & 0.014 & 0.001 & 0.014 & 0.010 & 0.002 & 0.010 & 0.026 & 0.000 & 0.026 \\ si & 0.014 & 0.000 & 0.014 & 0.032 & 0.004 & 0.031 & 0.019 & 0.002 & 0.019 & 0.007 & 0.001 & 0.007 & \textbf{0.039} & 0.018 & \textbf{0.039} \\ ur & 0.006 & 0.000 & 0.006 & 0.023 & 0.001 & 0.023 & 0.006 & 0.000 & 0.005 & 0.024 & 0.001 & 0.023 & \textbf{0.044 }& 0.000 & \textbf{0.044} \\ fr & 0.168 & 0.007 & 0.107 & 0.206 & 0.043 & 0.126 & 0.177 & 0.033 & 0.115 & 0.164 & 0.024 & 0.110 & \textbf{0.209} & 0.041 & 0.141 \\ ru & 0.032 & 0.001 & 0.032 & 0.071 & 0.017 & 0.069 & 0.041 & 0.012 & 0.040 & 0.036 & 0.008 & 0.036 & \textbf{0.081 }& 0.011 & \textbf{0.081} \\ ja & 0.069 & 0.001 & 0.068 & 0.126 & 0.012 & 0.120 & 0.084 & 0.007 & 0.081 & 0.063 & 0.004 & 0.062 & \textbf{0.306} & 0.081 & 0.291 \\ te & 0.010 & 0.000 & 0.009 & 0.023 & 0.001 & 0.023 & 0.011 & 0.000 & 0.011 & 0.008 & 0.001 & 0.008 & \textbf{0.026} & 0.000 & \textbf{0.026} \\ ta & 0.014 & 0.001 & 0.014 & \textbf{0.034 }& 0.005 & \textbf{0.034} & 0.023 & 0.003 & 0.022 & 0.012 & 0.001 & 0.012 & 0.026 & 0.000 & 0.026 \\ zh & 0.022 & 0.001 & 0.022 & \textbf{0.053} & 0.008 & 0.051 & 0.042 & 0.005 & 0.041 & 0.025 & 0.003 & 0.025 & 0.125 & 0.042 & 0.118 \\ es & 0.177 & 0.008 & 0.117 & 0.180 & 0.033 & 0.117 & 0.110 & 0.018 & 0.073 & 0.081 & 0.008 & 0.067 & \textbf{0.280} & 0.084 & 0.202 \\ hi & 0.010 & 0.000 & 0.010 & \textbf{0.018} & 0.002 & 0.018 & 0.013 & 0.001 & 0.013 & 0.005 & 0.000 & 0.005 & 0.002 & 0.000 & 0.001 \\ en & 0.146 & 0.002 & 0.102 & \textbf{0.175} & 0.026 & 0.114 & 0.100 & 0.014 & 0.071 & 0.140 & 0.016 & 0.102 & \textbf{0.427} & 0.182 & 0.345 \\ \bottomrule \end{tabular} } \label{tab:tab-scores} \end{table*} \subsection{Multi-lingual baseline scores} We observe that transformer based 
techniques used in our experiments perform significantly better than the other techniques. However, in the ``MT5'' column of Table \ref{tab:tab-scores} we observe both very high scores and spikes of very low scores; this behavior may be caused by two factors: \begin{itemize} \item The relatively high scores can be attributed to the use of an ``MT5'' checkpoint that is fine-tuned for the task of summarization on a dataset (\texttt{XL-Sum}) obtained from the same source as ours. \item The very low scores for some languages can be attributed to the ``ROUGE'' evaluation metric, which relies on token overlap\footnote{We do not apply stemming during evaluation, due to the lack of multi-lingual stemming support across the software packages we use for experimentation, and in order to have an even comparison with the supported languages.}. Many of these languages, especially the ones of \texttt{Dravidian} and \texttt{Indo-European} origin, have words whose form changes significantly depending on their placement in the text and the context in which they appear; hence simple token overlap metrics yield lower scores if the root form of the word is not considered. \end{itemize} We observe that \texttt{LEAD-3} performs better for the languages in which the transformer-based baseline performs poorly; this can be attributed to two factors: \begin{itemize} \item As shown by \citet{sharma2019bigpatent}, \texttt{LEAD-3} performs very well for summarization tasks in the news domain, suggesting that the top sentences capture a lot of the information within a news article. \item \texttt{LEAD-3} extracts the top three sentences of the text; unlike abstractive summarization, it cannot introduce new tokens or new forms of existing tokens that are not present in the given article. Since it is an extractive technique, the chances of token overlap are higher and hence the ``f-scores'' are better. \end{itemize} \subsection{Multi-modal baseline scores} Due to the lack of pre-trained multi-modal frameworks for most of the languages in the dataset, we were constrained to evaluating the multi-modal technique on the \texttt{English} dataset. On comparing the ``f-scores'' of the various uni-modal techniques with the multi-modal technique, we notice that the transformer-based model \texttt{Pegasus} outperforms the other techniques. This is largely attributed to the fact that the pre-trained checkpoint we have used to generate summaries with the \texttt{Pegasus} model is fine-tuned on the \texttt{XSum} dataset, which has data collected from the same source as ours. We observe that, among the models which are not fine-tuned on a dataset extracted from the same source as ours, the multi-modal technique \texttt{MSMO} outperforms the other techniques. \subsection{Abstractiveness of the proposed dataset} We propose an abstractive summarization dataset where the target summaries are manually written by human beings. The \texttt{M3LS} dataset demands abstractive techniques, since the percentage of novel uni-grams in the dataset is quite high (refer to the ``abs.gold'' column in Appendix \ref{sec:appendix-B}; a minimal sketch of this computation is given below). This fact is also observed in the results of the baseline techniques: for instance, \texttt{MT5} performs consistently better across multiple languages, as observed in Table~\ref{tab:tab-scores}, and the abstractive baseline obtains roughly three times the ROUGE scores of the extractive baselines.
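The abstractiveness statistic referred to above is essentially the fraction of gold-summary unigrams that never occur in the corresponding source article. The following minimal sketch shows one way to compute it; the whitespace tokenizer is a deliberately naive stand-in for the language-specific tokenizers of our package.
\begin{verbatim}
# Fraction of gold-summary unigrams that do not appear in the source
# article ("novel" unigrams). The whitespace tokenizer is a naive
# placeholder for the language-specific tokenizers described above.
def novel_unigram_fraction(article: str, summary: str) -> float:
    article_tokens = set(article.lower().split())
    summary_tokens = summary.lower().split()
    if not summary_tokens:
        return 0.0
    novel = [t for t in summary_tokens if t not in article_tokens]
    return len(novel) / len(summary_tokens)

# Example: novel_unigram_fraction("the match ended in a draw",
#                                 "a drawn match") returns 1/3,
# since only "drawn" is absent from the article.
\end{verbatim}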
\section{Conclusion} In this work, we release a large-scale multi-modal multi-lingual summarization dataset comprising over 1.1M news articles spanning 20 languages, and motivate the problem statement of multi-modal multi-lingual summarization using \texttt{M3LS}. To the best of our knowledge, this is the first multi-modal summarization dataset spanning several languages. The proposed dataset is the largest summarization dataset for 13 of the 20 languages. We have evaluated the performance of various baselines to establish the quality of the proposed dataset in both multi-modal and multi-lingual settings. Through this work, we hope to instigate research on various less-explored languages in the community for a range of research problems, including but not limited to summarization, headline generation, keyword extraction, image caption generation, and multi-modal embedding generation. In future work, we plan to develop shared models which address the \texttt{M3LS} task utilizing our dataset. \section*{Limitations} There are a few considerations to keep in mind in our work. \textbf{First}, the dataset currently has a multi-modal input mapping to a textual summary. However, future work could involve annotating images to enhance the dataset with a multi-modal output. \textbf{Second}, the distribution of languages in the \texttt{M3LS} dataset is skewed, due to the imbalanced number of articles published by BBC across languages and the late establishment of virtual print media in certain languages (as shown in Appendix \ref{sec:appendix-A}). \textbf{Third}, the current dataset uses an independent, identically distributed split to create train and test sets, but more advanced techniques such as adversarial splits and likelihood splits could also be explored in future work. \textbf{Fourth}, while the current manuscript does not evaluate the dataset on both multi-modal and multi-lingual aspects simultaneously, we believe that this dataset has the potential to contribute to the development of such systems in the future. \section*{Acknowledgements} This publication is an outcome of the R\&D work undertaken in the project under the Visvesvaraya Ph.D. Scheme of Ministry of Electronics \& Information Technology, Government of India, being implemented by Digital India Corporation (Formerly Media Lab Asia).
\section{Introduction} Machine learning techniques have been extensively used in high energy physics data analysis over the past few decades \cite{Bhat2011}. Especially supervised, multivariate classification algorithms, such as neural networks and boosted decision trees, have become commonplace in state-of-the-art physics analyses and have proven to be invaluable tools in increasing the signal-to-background ratio in searches of tiny signals of new physics. Such methods work under the assumption that there exists labeled data sets of signal and background events which can be used to train the classification algorithm. These training samples are usually obtained from Monte Carlo (MC) simulation of the new physics process and the corresponding Standard Model background. Supervised machine learning algorithms are the tools of model-dependent new physics searches. In the beginning of the analysis, one decides to focus on a particular variant of a certain new physics model. This could be for example a certain SUSY parametrization. Events corresponding to this choice are then generated using a MC generator followed by training of a classification algorithm to optimize the yield of such events. In the best case scenario, applying this classifier to real observed data would produce a statistically significant excess over the Standard Model background and result in a discovery of new beyond the Standard Model physics. Unfortunately, this process could go wrong on several different levels. Firstly, there are no guarantees that nature actually obeys any of the existing theoretical new physics models. The solution to the existing problems with the Standard Model could be something that no one has even thought of yet. Even if one of the existing theories is the right way forward, such models often have a large amount of free parameters. Exploring the whole high-dimensional parameter space one combination of values at a time could be a very laborious, time-consuming and error-prone process. Lastly, MC simulation of such processes often requires simplifications and approximations and could be a major source of systematic errors in its own right. This kind of uncertainties and systematic errors are not well-tolerated by supervised classification algorithms. Figure \ref{fig:nnDemo} illustrates the problem. When a neural network was trained using the signal and background data of Figure \ref{fig:nnDemo}a, we obtained the black decision boundary to separate signal events from background. But what if due to one of the above mentioned problems, the real new physics signal looked like the one in Figure \ref{fig:nnDemo}b? The algorithm trained with wrong type of training patterns would completely miss such a signal. The signal events would all be regarded as background even though the data clearly contains a signature of an interesting anomalous process. \begin{figure}[tb] \centering \subfigure{ \includegraphics[width=6cm]{./img/acat_toy1.pdf}} \hspace{1cm} \subfigure{ \includegraphics[width=6cm]{./img/acat_toy2.pdf}} \caption{Demonstration of the limitations of model-dependent searches of new physics. When a neural network is trained with the data of the left figure, the classification decision boundary is given by the black curve. But if the training data is systematically inaccurate, a new physics signal could be completely missed in the analysis. 
For example all the observations of the right figure would be classified as background.} \label{fig:nnDemo} \end{figure} While it is possible to alleviate this type of problems up to some extent by tweaking the training data and the classification algorithm of the supervised classifier, a more principled solution is provided by performing the new physics search in a model-independent mode where, instead of trying to provide an experimental validation of a particular beyond the Standard Model physics model, one is simply looking for deviations from known Standard Model physics. The goal of model-independent new physics searches is to be sensitive to any deviations from known physics, whether or not they are described by one of the existing theoretical models. This kind of ideas have been put forward both at the Tevatron and at the LHC. For example, at the CDF experiment, a combination of algorithms called Vista and Sleuth were used to perform a global model-independent search of new physics with early Tevatron Run II data \cite{CDF2008}, while an algorithm called MUSiC is currently being used to scan the data of the CMS experiment at the LHC for such deviations \cite{CMS2008}. In this paper, we present an alternative algorithm for model-independent searches of new physics based on semi-supervised anomaly detection techniques. As opposed to existing methods, our aim is not only to detect deviations from known Standard Model physics in a model-independent manner but also to produce a model for the observations which can be used to further analyze the properties of the potential new physics signal. Our method is also inherently multivariate and binning-free, while the existing methods are based on exploiting the properties of one-dimensional histograms. \section{Semi-supervised detection of collective anomalies} \subsection{Motivation} Our data analysis problem can be formulated as follows: given a labeled sample of known physics expected to be seen in the experiment and the empirical observations, how can we find out if there is a previously unknown contribution among the measurements and, if there is, how can we study its properties? In statistics, such a problem is solved using semi-supervised anomaly detection algorithms \cite{Chandola2009}. These are algorithms designed to look for deviations from a labeled sample of normal data. The neural network based background encapsulator \cite{Ribarics1993} of the trigger of the H1 experiment at HERA can, for instance, be regarded as an example of such an algorithm. Unfortunately, existing semi-supervised anomaly detection algorithms can rarely be directly applied to solve the model-independent search problem. This is because they are designed to classify observations as anomalies should they fall in regions of the data space where there is a small density of normal observations. The problem is that in many theoretical models the excess of events corresponding to the new physics signal appears among the bulk of the background distribution instead of the tail regions considered by standard anomaly detection algorithms. We solve the problem by exploiting the fact that if there is a new physics signal, it should occur as a collection of several events which, only when considered together, constitute an anomalous deviation from the Standard Model. \subsection{Fixed-background model for anomaly detection} We start by fitting a parametric density estimate $p_\mathrm{B}(\bm{x})$ to a labeled background sample. 
This density estimate, which we call the \emph{background model}, serves as a reference of normal data. We conduct a search for collective deviations from the background model by fitting to the observations a mixture of $p_\mathrm{B}(\bm{x})$ and an additional \emph{anomaly model} $p_\mathrm{A}(\bm{x})$: \begin{equation} \label{eq:pFB} p_{\mathrm{FB}}(\bm{x}) = (1-\lambda)p_{\mathrm{B}}(\bm{x}) + \lambda p_{\mathrm{A}}(\bm{x}). \end{equation} We call this mixture model the \emph{fixed-background model} to emphasize that when modeling the observations, $p_\mathrm{B}(\bm{x})$ is kept fixed which allows $p_\mathrm{A}(\bm{x})$ to capture any unmodeled anomalous contributions in the data. Fitting of both $p_\mathrm{B}(\bm{x})$ and $p_{\mathrm{FB}}(\bm{x})$ to the data is done by maximizing the likelihood of the model parameters. Pattern recognition of the anomalies with the fixed-background model enables a variety of data analysis tasks: \begin{enumerate} \item \emph{Classification}: Observations can be classified as anomalies using the posterior probability as a discriminant function \begin{equation} \label{eq:d} p(\mathrm{anomaly}|\bm{x}) = \frac{\lambda p_{\mathrm{A}}(\bm{x})}{(1-\lambda)p_{\mathrm{B}}(\bm{x}) + \lambda p_{\mathrm{A}}(\bm{x})} =: \mathcal{D}(\bm{x}). \end{equation} An observation $\bm{x}$ is then classified as an anomaly if $\mathcal{D}(\bm{x}) \geq T$ for some threshold $T \in [0,1]$. \item \emph{Proportion of anomalies}: The mixing proportion of the anomaly model $\lambda$ directly gives us an estimate for the proportion of anomalies in the observations. This could be further used to derive a cross section estimate for the anomalous physics process. \item \emph{Statistical significance}: The statistical significance of the anomaly model can be estimated by performing a statistical test for the background-only null hypothesis $\lambda = 0$ using the likelihood ratio test statistic $\Lambda$ \cite{Knight2000}. This enables us to discriminate between statistical fluctuations of the background and real anomalous processes. Following \cite{Wang1997}, we obtain the distribution of the test statistic using nonparametric bootstrapping. That is, we sample with replacement observations from the background data, fit $p_{\mathrm{FB}}(\bm{x})$ to this new sample and compute the corresponding value of $\Lambda$. This allows us to recover the distribution of $\Lambda$ under the background-only null hypothesis and hence to compute the significance of the observed collective anomaly. \end{enumerate} As a simple illustration of the fixed-background model, Figure~\ref{fig:illustration}a shows a univariate data set of background data generated from a Gaussian distribution and a maximum likelihood Gaussian density $p_{\mathrm{B}}(x)$ estimated using the data set. Figure~\ref{fig:illustration}b shows a very simple anomalous pattern that can be modeled with a single additional univariate Gaussian. Given a sample contaminated with these anomalies, our goal is to find an optimal combination of the parameters of the anomaly model ($\mu_\mathrm{A}$, $\sigma_\mathrm{A}^2$) and the mixing proportion $\lambda$. The resulting model $p_{\mathrm{FB}}(x)$ is shown with a black line and the anomaly model $p_{\mathrm{A}}(x)$ with a gray line in Figure~\ref{fig:illustration}b. 
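As a concrete illustration of item 1 above, the discriminant of Equation \eqref{eq:d} can be evaluated directly once the two densities and $\lambda$ have been fitted. The short sketch below reproduces the univariate setting of Figure~\ref{fig:illustration} with made-up parameter values; the numbers are purely illustrative.
\begin{verbatim}
# Sketch of anomaly classification with the posterior discriminant
# D(x), in the univariate Gaussian illustration. All parameter values
# are made up for illustration only.
import numpy as np
from scipy.stats import norm

p_B = norm(loc=0.0, scale=1.0).pdf   # fitted background model p_B(x)
p_A = norm(loc=2.0, scale=0.3).pdf   # fitted anomaly model p_A(x)
lam = 0.10                           # fitted mixing proportion lambda

def discriminant(x):
    num = lam * p_A(x)
    return num / ((1.0 - lam) * p_B(x) + num)

x = np.linspace(-4.0, 4.0, 9)
is_anomaly = discriminant(x) >= 0.5  # classify as anomaly if D(x) >= T
print(np.column_stack([x, discriminant(x), is_anomaly]))
\end{verbatim}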
\begin{figure}[tb] \centering \subfigure{ \includegraphics[width=7cm]{./img/bgfig.pdf} }~ \subfigure{ \includegraphics[width=7cm]{./img/anomfig.pdf} } \caption{(a) A histogram of background data from a univariate Gaussian distribution and an estimated background model $p_{\mathrm{B}}(x)$. (b) An illustration of the fixed-background model in a univariate case. The histogram shows the unlabeled observations (the gray excess in the histogram denotes the anomalous contribution). The estimated fixed-background model $p_{\mathrm{FB}}(x)$ is shown with a black line and the anomaly model $p_{\mathrm{A}}(x)$ with a gray line.} \label{fig:illustration} \end{figure} \subsection{Algorithmic details} We model all the densities in Equation \eqref{eq:pFB} using a multivariate Gaussian mixture model \cite{McLachlan2000} \begin{equation} p(\bm{x}|\bm{\theta}) = \displaystyle\sum_{j=1}^J \pi_j \mathcal{N}(\bm{x}|\bm{\mu}_j, \bm{\Sigma}_j). \end{equation} Here $J$ is the number of Gaussian components in the model. The components have means $\bm{\mu}_j$ and covariances $\bm{\Sigma}_j$, while $\pi_j$ are the mixing proportions which satisfy $\sum_j \pi_j = 1$. Fitting a Gaussian mixture model with $J$ components to the background sample by maximizing its log-likelihood is a standard problem in computational statistics and can be carried out using the iterative expectation-maximization (EM) algorithm \cite{McLachlan2008}. The algorithm proceeds in two steps. In the \emph{expectation step} (E-step), we compute the posterior probabilities for each data point $\bm{x}_i$ to have been generated by the $j$th Gaussian component \begin{equation} \label{estep} p(z_{ij}=1|\bm{x}_i,\bm{\theta}^k) = \frac{\pi_j^k\mathcal{N}(\bm{x}_i|\bm{\mu}_j^k, \bm{\Sigma}_j^k)}{\sum_{j'=1}^J\pi_{j'}^k \mathcal{N}(\bm{x}_i|\bm{\mu}_{j'}^k \bm{\Sigma}_{j'}^k)} =: \gamma_{ij}^k. \end{equation} Here, $\bm{\theta}^k$ contains the parameter estimates at the $k$th iteration and $\bm{z}_i$ indicates which component generated the $i$th observation. In the subsequent \emph{maximization step} (M-step), the parameter values are updated according to the following equations: \begin{equation} \pi_j^{k+1} = \frac{1}{N} \sum_{i=1}^N \gamma_{ij}^k, \quad \bm{\mu}_j^{k+1} = \frac{\sum_{i=1}^N \gamma_{ij}^k \bm{x}_i}{\sum_{i=1}^N \gamma_{ij}^k}, \quad \bm{\Sigma}_j^{k+1} = \frac{\sum_{i=1}^N \gamma_{ij}^k (\bm{x}_i-\bm{\mu}_j^{k+1})(\bm{x}_i-\bm{\mu}_j^{k+1})^{ \textrm{T} }}{\sum_{i=1}^N \gamma_{ij}^k}. \end{equation} The algorithm alternates between these two steps until the log-likelihood is not improving anymore. This gives us an estimate of the background model $p_\mathrm{B}(\bm{x})$. The next step of the anomaly detection algorithm is to fit to the unlabeled observations the fixed-background model \eqref{eq:pFB} where the anomaly model is a Gaussian mixture model with $Q$ components. When we keep the background model fixed, the free parameters of the full model are the parameters of the anomalous Gaussians, their mixing proportions and the global mixing proportion $\lambda$ and we seek to find such values of these parameter that their log-likelihood is maximized. This can be done easily by noting that as a mixture of two Gaussian mixtures, the fixed-background model itself is a Gaussian mixture with $J+Q$ components. Hence, we can use the same EM update equations as above to update the free parameters of the model and simply skip the updates of the fixed parameters. 
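To make the ``skip the fixed parameters'' prescription concrete, the following is a minimal EM sketch for the simplest case of a single one-dimensional anomalous Gaussian ($Q=1$); the background density enters only through a fixed, pre-fitted callable. This is a simplified, illustrative stand-in for the full multivariate Gaussian mixture version of the algorithm.
\begin{verbatim}
# Minimal EM sketch for the fixed-background model with Q = 1 anomalous
# Gaussian in one dimension. p_B is a fixed, already-fitted background
# density; only (lambda, mu_A, sigma_A) are updated in the M-step.
import numpy as np
from scipy.stats import norm

def fit_fixed_background(x, p_B, n_iter=200):
    lam, mu_A, sigma_A = 0.1, np.mean(x), np.std(x)
    for _ in range(n_iter):
        # E-step: posterior probability that each point is anomalous.
        pa = lam * norm.pdf(x, mu_A, sigma_A)
        gamma = pa / ((1.0 - lam) * p_B(x) + pa)
        # M-step: update only the free (anomaly) parameters.
        lam = gamma.mean()
        mu_A = np.sum(gamma * x) / gamma.sum()
        sigma_A = np.sqrt(np.sum(gamma * (x - mu_A) ** 2) / gamma.sum())
    return lam, mu_A, sigma_A

# Toy usage: background ~ N(0,1) with a 10% anomalous component.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 3600), rng.normal(2, 0.3, 400)])
print(fit_fixed_background(x, norm(0, 1).pdf))
\end{verbatim}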
The complete update equations based on this idea can be found in the technical report \cite{Vatanen2011}, where we also describe a number of heuristics implemented in the algorithm to overcome the issues related to choosing the model complexities $J$ and $Q$. \section{Demonstration: Search for the Higgs boson} We demonstrate the application potential of the proposed anomaly detection framework by using a data set from the Higgs boson analysis at the CDF detector. It should be stressed that the results presented here should not be regarded as a realistic Higgs analysis. Instead, the goal is to merely demonstrate the performance and the potential benefits of the proposed algorithm. \subsection{Description of the data set} We consider a data set produced by the CDF collaboration \cite{Nagai2009} containing background events and MC simulated Higgs events where the Higgs is produced in association with the $W$ boson and decays into two bottom quarks, $q\bar{q} \rightarrow WH \rightarrow l \nu b \bar{b}$. In the data space, this signal looks slightly different for different Higgs masses $m_\textrm{H}$. The goal is to show that semi-supervised anomaly detection is able to identify such a signal without a priori knowledge of $m_\textrm{H}$. More generally, this could be any set of free parameters in the new physics theory under consideration. Each observation in the data set corresponds to a single simulated collision event in the CDF detector at the Tevatron proton-antiproton collider. We follow the event selection and choice of variables described in \cite{Nagai2009} for double SECVTX tagged collision events. We also consider an additional neural network based flavor separator from \cite{Chwalek2007}, giving us a total of 8 variables to describe each event. To facilitate density estimation and visualization of the results, the dimensionality of the logarithmically normalized data was reduced to 2 using principal component analysis (PCA) \cite{Jolliffe2002}. We used 3406 data points to train the background model which was then used to detect signals of 400 data points for masses $m_\textrm{H} = 100, 115, 135, 150$~GeV among another sample of 3406 observations of background data. Hence, the unlabeled sample contained 10.5\:\% of signal events. In reality, the expected signal is roughly 5 to 50 times weaker than this, but due to the limited number of background events available, the signal had to be amplified for this demonstration. Based on experiments with artificial toy data reported in \cite{Vatanen2011}, we expect to be able to detect signals which contribute only a few percent to the unlabeled sample when we have two orders of magnitude larger background statistics. \subsection{Modeling the Higgs data} We used the cross-validation-based information criterion (CVIC) \cite{Smyth2000} to select a suitable number of components $J$ for the background model. When a 5-fold cross-validation was performed, the evaluation log-likelihood was maximized with $J=5$. Figure~\ref{fig:higgsModel}a shows contours of the resulting five-component background model in the two-dimensional principal subspace. We then learned the fixed-background models for the signals with different masses starting with $Q = 3$ anomalous components. The algorithm converged with one anomalous component for $m_\textrm{H} = 100$~GeV and two components for the rest of the masses. The resulting anomaly model for $m_\textrm{H} = 150$~GeV is shown in Figure~\ref{fig:higgsModel}b. 
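The preprocessing and model-selection steps just described (logarithmic normalization, projection onto two principal components, and a cross-validated choice of the number of background components $J$) can be reproduced with standard tools. The sketch below uses scikit-learn on stand-in data, since the CDF samples themselves are not public; it is meant only to illustrate the procedure.
\begin{verbatim}
# Sketch of the preprocessing and background-model selection described
# above: log-normalize, project to 2 principal components, and choose J
# by cross-validated log-likelihood. X_bg is stand-in data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
X_bg = rng.lognormal(mean=1.0, sigma=0.5, size=(3406, 8))

X = PCA(n_components=2).fit_transform(np.log(X_bg))

def cv_loglik(X, J, n_folds=5):
    scores = []
    for tr, te in KFold(n_folds, shuffle=True, random_state=0).split(X):
        gmm = GaussianMixture(n_components=J, random_state=0).fit(X[tr])
        scores.append(gmm.score(X[te]))   # mean log-likelihood per point
    return np.mean(scores)

best_J = max(range(1, 11), key=lambda J: cv_loglik(X, J))
background_model = GaussianMixture(n_components=best_J,
                                   random_state=0).fit(X)
\end{verbatim}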
\begin{figure}[tb] \centering \subfigure{ \includegraphics[width=6cm]{./img/higgsbg.pdf} } \hspace{1cm} \subfigure{ \includegraphics[width=6cm]{./img/higgsanom.pdf} } \caption{(a) A projection of the WH background data into its two-dimensional principal subspace. The solid lines show contours of the estimated 5-component Gaussian mixture model for the background. (b) A projection of the $m_\textrm{H} = 150$~GeV test data set into the two-dimensional principal subspace. The solid lines show contours of the estimated 2-component Gaussian mixture model for the signal.} \label{fig:higgsModel} \end{figure} \subsection{Anomaly detection results} The statistical significances of the anomaly models were evaluated using the bootstrap technique with $50\:000$ resamplings. The significances, starting from the lowest mass, were $1.8\sigma$, $2.8\sigma$, $3.1\sigma$ and $3.3\sigma$. Hence, in our toy analysis, all the signals would have been significant enough to draw closer attention. Figure~\ref{fig:higgsROC}a shows the receiver operating characteristic (ROC) curves for anomaly detection with different Higgs masses. One can see that regardless of the mass of the Higgs, the algorithm is able to identify the signal with a relatively constant accuracy. The classification results are slightly better with the higher masses because the high-mass signal lies on a region of the data space with a lower background density. Starting from the lowest mass, the estimated anomaly proportions are $\lambda = 0.100,\:0.121,\:0.118,\:0.122$, which are all in agreement with the correct proportion of 0.105. We also trained a supervised MLP neural network for each of the mass points to act as an ``optimal'' reference classifier and compared the ROC curves to the ones obtained with anomaly detection. Figure \ref{fig:higgsROC}b shows the ROC curves for a neural network trained with the 150~GeV signal and tested with all the mass points. When the neural network was tested and trained with the same mass, the ROC curve was comparable to anomaly detection, which shows that the proposed model-independent framework is able to achieve similar performance as model-dependent supervised classification. However, when the neural network was tested with a mass different from its training mass, its performance started to decline. For example the 150~GeV neural network would be likely to miss the 100~GeV signal. On the other hand, anomaly detection is able to successfully identify the signal regardless of the mass. In other words, this experiment shows that supervised classifiers are able to efficiently identify only the signals they have been trained for with potentially severe consequences, while model-independent approaches, such as semi-supervised anomaly detection, do not suffer from such a limitation. \begin{figure}[tb] \centering \subfigure{ \includegraphics[width=7.2cm]{./img/acat_rocFEM.pdf} }\hspace{0.5cm} \subfigure{ \includegraphics[width=7.2cm]{./img/acat_rocNN.pdf} } \caption{(a)~ROC curves for the WH data with different Higgs masses $m_\textrm{H}$ with semi-supervised anomaly detection. The method is able to identify the signal without a priori knowledge of the mass. (b)~ROC curves for the WH data for a neural network classifier trained with the 150~GeV signal. 
The neural network is able to efficiently identify only the signal it has been trained for.} \label{fig:higgsROC} \end{figure} \section{Discussion} The semi-supervised anomaly detection algorithm could be used to scan the measurements for new physics signals by focusing on some particular final state which is thought to be especially sensitive to new physics. One could then use the framework to look for deviations from the expected Standard Model background in this final state. If a statistically significant anomaly is found, one could use the fixed-background model to study its properties. Since the anomalous events are likely to lie within the bulk of the background, the best way to reconstruct their properties would be a soft classification based approach \cite{Kuusela2011} where the contribution of a single event to some physical observable is weighted by the posterior class probability of the event. Reconstructing a number of physical spectra for the anomaly should allow one to produce a physics interpretation for the observed deviation. It is likely that most observed anomalies correspond detector effects and background mismodeling. If this is determined to be the case, new cuts could be introduced to isolate such regions or the background estimate could be corrected to account for the anomaly. The analysis could then be repeated iteratively until all anomalies are understood. If at some stage we encounter a significant anomaly which cannot be explained by just adjusting the background estimate, it could be a sign of new physics and should be studied further to see if there is a plausible new physics interpretation for it. The computational experiments of this paper were carried out using well-understood standard techniques from computational statistics in order to convey the basic idea of semi-supervised anomaly detection as clearly as possible. It is likely that some of the shortcomings of the algorithm could be alleviated by using more advanced computational tools. One of the obvious limitations of the algorithm is the curse of dimensionality. With a reasonable sample size, the algorithm seems to perform relatively well up to three dimensions, but beyond that, the number of observations required to estimate the parameters of the Gaussian mixture models becomes prohibitively large. We demonstrated with the Higgs example that one possible way of solving the problem is dimensionality reduction with PCA or some other dimensionality reduction algorithm. Another possibility would be to consider parsimonious Gaussian mixture models, where the number of parameters is reduced by constraining the structure of the covariance matrices \cite{Fraley2002}. The current algorithm is also only able to handle anomalies which manifest themselves as an excess over the background. That is, a deficit with respect to the background estimate is not treated properly, although it might be possible to circumvent this restriction by allowing $\lambda$ to take negative values. \section{Conclusions} We presented a novel and self-consistent framework for model-independent searches of new physics based on a semi-supervised anomaly detection algorithm. We showed using a Higgs boson data set that the method can be successfully applied to searches of new physics and demonstrated the potential benefits of the approach with respect to conventional analyses. 
To make sure that no new physics signals have been missed by the current model-dependent searches, it would be important to complement them by scanning the collision data with model-independent techniques, one example of which is the proposed anomaly detection framework. We hope that the work presented here helps to revive interest in such techniques among the HEP community by showing that model-independent new physics searches can be conducted in a feasible and practical manner. \ack The authors are grateful to the CDF collaboration for providing access to the Higgs signal and background Monte Carlo samples, to the Academy of Finland for financial support and to Matti P\"{o}ll\"{a}, Timo Honkela and Risto Orava for valuable advice. \section*{References} \bibliographystyle{iopart-num}
\section{Introduction} If a significant primordial gravitational wave signal is detected in any near-future experiment, it will imply that the inflaton traversed a distance in field space larger than $M_{\rm Pl}$. This is the famous Lyth bound~\cite{Lyth:1996im} (for further refinements, see~\cite{Easther:2006qu,Baumann:2011ws,Antusch:2014cpa,Bramante:2014rva}). On first contemplating super-Planckian field ranges, an effective field theorist will tend to feel some discomfort, being inclined to write down the most general effective Lagrangian, \begin{equation} {\cal L} = - \frac{1}{2} m^2 \phi^2 + \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{\lambda}{4!} \phi^4 + \frac{c}{M_{\rm Pl}^2} \phi^2 \partial_\mu \phi \partial^\mu \phi - \frac{\lambda_6}{M_{\rm Pl}^2} \phi^6 - \ldots. \end{equation} For order-one values of the coefficients of high-dimension operators in $V(\phi)$, the potential will oscillate wildly over super-Planckian field ranges and spoil inflation. However, on further reflection, one realizes that a shift symmetry $\phi \to \phi + a$ would forbid every term in the potential, and thus a theory with a single dominant source of shift symmetry breaking can provide a technically natural approach to super-Planckian field ranges: we would predict that for some $f$, $\lambda \sim m^2/f^2$, $\lambda_6/M_{\rm Pl}^2 \sim m^2/f^4$, and so on. We could accommodate this shift symmetry in our effective field theory by parametrizing its breaking with a spurion and building a Lagrangian out of fields that nonlinearly realize the symmetry. The good behavior of the potential is enforced by the dominance of a single spurion. In fact, many physicists readily accept the axion as a potential solution to the strong CP problem, in spite of its severe Planck-suppressed operator problem~\cite{Kamionkowski:1992mf,Holman:1992us,Kallosh:1995hi}. To explain the very tight experimental bound on the effective theta angle in our universe, a theory of a generic pseudo-Goldstone axion must forbid high-dimension Planck-suppressed Peccei-Quinn-violating operators, so that QCD instanton effects provide the leading shift-symmetry-breaking spurion by several orders of magnitude. If the axion is a compact field (or equivalently, has an exact gauged discrete shift symmetry, $a \to a + 2\pi f$), then only exponentially small instanton effects can break the symmetry and it may be relatively easy to enforce single-spurion dominance. This may be realized by string theory axions, for example~\cite{Conlon:2006tq,Svrcek:2006yi}. The similarities in the need for a good shift symmetry to solve the strong CP problem and for large-field inflation have motivated concrete models of inflation which are natural from the effective field theory viewpoint~\cite{Freese:1990rb,Banks:1995dp,ArkaniHamed:2003wu,Kim:2004rp,Dimopoulos:2005ac,Silverstein:2008sg,McAllister:2008hb,Berg:2009tg,Germani:2010hd,Kaloper:2011jz,Kaloper:2014zba,Burgess:2014tja,Bachlechner:2014hsa,Csaki:2014bua,Furuuchi:2014cwa,Harigaya:2014rga,Higaki:2014mwa,Shiu:2015xda}. (Among these, the idea of $N$-flation builds on earlier work on assisted inflation, which was first studied in the context of exponential rather than axion-like potentials and then was generalized \cite{Liddle:1998jc, Copeland:1999cs, Mazumdar:2001mm, Jokinen:2004bp}. Another variation is $M$-flation \cite{Ashoorioon:2009wa,Ashoorioon:2011ki,Ashoorioon:2014jja}.) 
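To make the single-spurion counting explicit, suppose for illustration that the only shift-symmetry-breaking term is the familiar cosine potential generated by a single nonperturbative effect,
\begin{equation}
V(\phi) = \Lambda^4\left[1-\cos\left(\frac{\phi}{f}\right)\right] = \frac{\Lambda^4}{2f^2}\,\phi^2 - \frac{\Lambda^4}{24 f^4}\,\phi^4 + \frac{\Lambda^4}{720 f^6}\,\phi^6 - \ldots,
\end{equation}
so that $m^2 = \Lambda^4/f^2$ and, comparing with the general Lagrangian above, $|\lambda| = \Lambda^4/f^4 = m^2/f^2$ and $|\lambda_6|/M_{\rm Pl}^2 = \Lambda^4/f^6 = m^2/f^4$ up to order-one signs, reproducing the scalings quoted above. This particular form of the spurion is, of course, chosen purely for illustration.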
While effective field theorists can find no dramatic problem with large-field inflation, they may retain some skepticism about the existence of UV completions. It has been suggested that quantum gravity will impose more severe constraints than EFT. Axion fields with super-Planckian decay constants appear to be rare in string theory vacua~\cite{Banks:2003sx,Rudelius:2014wla}, which may be a suggestive hint of a more general principle. It is generally believed that the set of quantum gravity theories is discrete (at least for nonsupersymmetric theories, where subtleties of continuous moduli spaces do not arise). This means that many apparently sensible effective field theories are actually in the swampland of theories that cannot be consistently coupled to gravity~\cite{Vafa:2005ui,ArkaniHamed:2006dz,Ooguri:2006in,Douglas:2005hq}. At this point we have relatively few guidelines for how to judge that a theory is in the swampland, but the Weak Gravity Conjecture (WGC) is among the sharpest and most powerful and well-motivated~\cite{ArkaniHamed:2006dz,Kats:2006xp,Banks:2006mm,Cheung:2014vva,Cheung:2014ega}. The WGC asserts that any theory containing both gravity and a massless abelian gauge field should have a charged particle in the spectrum whose mass is less than its charge in Planck units. To be precise, we require $m < \sqrt{2} q e M_{\rm Pl}$ in a four-dimensional theory in which gravitons and photons are the only massless particles. The motivation is to avoid having a plethora of exactly stable extremal black hole states, which are potentially problematic~\cite{Susskind:1995da}. This leads to:\\ \noindent {\bf The Weak Gravity Conjecture (WGC)}: For any large, semiclassical, nearly-extremal black hole, there exists a state in the theory whose mass is small enough relative to its charge that the black hole can move away from extremality by emitting this state.\\ \noindent For a single $U(1)$ gauge group, this implies that there is a state satisfying $q/m \ge z_0$ for an appropriate constant $z_0$. In the case of multiple $U(1)$s, the condition is that the convex hull of the charge-to-mass vectors $\vec z = \vec q / m$ of the kinematically available charged states must contain the ball of radius $z_0$~\cite{Cheung:2014vva}. The is purely a kinematic requirement: black holes should be able to decay. From the very beginning, the WGC was claimed to rule out the theory of extranatural inflation~\cite{ArkaniHamed:2003wu}. Recently, there has been renewed interest in how the WGC can constrain large-field inflation~\cite{Rudelius:2014wla,Bachlechner:2014gfa,delaFuente:2014aca,Rudelius:2015xta,Montero:2015ofa,Brown:2015iha,Bachlechner:2015qja,Hebecker:2015rya,Brown:2015lia,Junghans:2015hba}. The essential idea is that theories of axions with good shift symmetries often obtain four-dimensional axion fields by dimensional reduction of higher-rank $p$-form fields, which are constrained by WGC arguments. However, such arguments are not without subtlety. The WGC is a rather weak statement at low energies, since large black holes could decay to states out of the reach of low-energy effective field theory. Given an effective field theory with cutoff $\Lambda$, the lightest semiclassical black holes have mass of order $M_{\rm Pl}^2/\Lambda$, so one could imagine that the conjecture is satisfied by states with mass between $\Lambda$ and $M_{\rm Pl}^2/\Lambda$ that cannot be studied without a full theory of quantum gravity. 
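As an aside, the convex hull form of the conjecture stated above is straightforward to test numerically for any given low-energy spectrum. The sketch below (our own illustration, with the charge-to-mass vectors normalized so that $z_0=1$ and with made-up example values) checks whether the hull of the vectors $\pm\vec z_i$ contains the ball of radius $z_0$.
\begin{verbatim}
# Sketch of the convex-hull form of the WGC for multiple U(1)s: the
# hull of the charge-to-mass vectors of the available states (and of
# their antiparticles, -z_i) must contain the ball of radius z0.
# The example vectors and the normalization z0 = 1 are illustrative.
import numpy as np
from scipy.spatial import ConvexHull

def hull_contains_ball(z_vectors, z0=1.0):
    pts = np.vstack([z_vectors, -np.asarray(z_vectors)])
    hull = ConvexHull(pts)
    # Each row of hull.equations is [n, d] with n.x + d <= 0 inside and
    # |n| = 1, so the distance from the origin to that facet is -d.
    facet_distances = -hull.equations[:, -1]
    return bool(np.all(facet_distances >= z0))

# Two U(1)s, two charged states (plus their antiparticles):
print(hull_contains_ball([[1.5, 0.0], [0.0, 1.5]]))  # True
print(hull_contains_ball([[1.1, 0.3], [0.9, 0.7]]))  # False
\end{verbatim}
The second example fails even though both states individually have $|\vec z\,| > 1$, because the two vectors point in similar directions and leave part of charge space uncovered.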
Moreover, extremal black holes can also satisfy the WGC, provided that the subleading corrections to the extremality bound have the correct sign~\cite{ArkaniHamed:2006dz,Kats:2006xp}. Nonetheless, the magnetic form of the WGC does have important consequences for the low-energy effective field theory and for inflation, which we review in~\S\ref{subsec:WGCreview}. As effective field theorists, we might want to impose the stronger constraint that black hole decay can be described in the low energy effective field theory. That is, we might want to limit our attention to theories in which we can positively assert that black holes decay, reasoning that any theory which violates this assumption lies outside theoretical control in the absence of a full, quantum gravity description. This suggests a variant of the WGC:\\ \noindent {\bf The Effective Weak Gravity Conjecture (EWGC)}: The state which satisfies the weak gravity conjecture should be describable in the low-energy effective field theory.\\ \noindent Indeed, we usually imagine black holes decaying to particles, hence the EWGC is sometimes implicit in discussions of the WGC. However, we emphasize that it amounts to a further assumption, which however is much weaker than the ``strong form'' of the WGC proposed in~\cite{ArkaniHamed:2006dz}, which we discuss in~\S\ref{sec:strongforms}, but which plays no role in our arguments. By contrast, the EWGC plays an important role in some---but not all---of our arguments. (We note in passing that the EWGC is implied by the much-stronger ``lattice WGC''~\cite{Heidenreich:2015nta}---discussed briefly in section~\ref{sec:LWGC}---whenever there is \emph{any} charged particle that can be described in the low-energy effective field theory.) Unlike the WGC, the EWGC is not directly motivated by the problem of remnants, since Planck-scale states satisfying the WGC can address this issue. If correct, the EWGC may stem from a dynamical version of the (kinematic) WGC. For instance, one variation on the Third Law of Thermodynamics in the black hole context could be that nearly-extremal black holes should not spontaneously move closer to extremality. To realize the implications of this statement, we note that black holes with sufficient charge do not decay predominantly by Hawking radiation. Initially, since Hawking radiation carries very little charge away, they decay \emph{towards} extremality \cite{Hiscock:1990ex}. The temperature of a Reissner-Nordstr\"om black hole goes to zero as the black hole approaches extremality, shutting off the Hawking radiation, but charged particles satisfying the WGC can still be emitted through an effect that is similar to Schwinger pair-production near the horizon~\cite{Gibbons:1975kk,Schumacher:1985zz,Khriplovich:1999gm}, which eventually becomes the dominant decay channel. The effect may also be understood as a Breitenlohner--Freedman-type instability of charged particles in the near-horizon geometry~\cite{Denef:2009tp,Chen:2012zn,Chen:2014yfa}. All that is required is that at least one such particle exist; despite statements to the contrary in \cite{Banks:2006mm}, it need not be the lightest charged particle, as lighter particles violating the WGC inequality are not emitted. 
Conversely, if there are no charged particles in the low-energy effective field theory satisfying WGC, we expect that the black hole continues to approach extremality, as pair production cannot occur whereas the emission of long string states or fission of the black hole into smaller black holes should be a very slow process in comparison to Hawking radiation.\footnote{In principle, when the black hole is very close to extremality, the temperature of the Hawking radiation may become low enough that these strongly suppressed processes are competitive. However, we expect that for a large black hole with a weakly curved horizon, this transition occurs \emph{exponentially} close to extremality, hence it may not even be visible in the thermodynamic limit. Moreover, it's not clear that these processes are effective at discharging the black hole, since their rates are typically not under theoretical control.} Thus, the EWGC may be motivated by thermodynamic considerations. As this paper was being finalized, we learned of work on the possibility that the WGC is satisfied by states that are not captured in the low-energy effective field theory \cite{Cornell2015}. Even if the EWGC does not hold in every consistent theory, the theories in which it is violated have the unusual property that the decay of large, semiclassical black holes cannot be described semi-classically. This suggests that the naive ``low energy effective field theory'' is not a completely reliable description of the full theory at low energies, and in particular any conclusions that we draw about inflation based solely on this class of theories may not be reliable. For the same reasons of theoretical control, in this paper we focus on the case where the low-energy effective field theory is a weakly-coupled abelian gauge theory, containing electrically charged particles light enough to discharge subextremal electrically charged black holes and semiclassical (solitonic) monopoles light enough to discharge subextremal magnetically charged black holes. This assumption could be circumvented in examples where a more sophisticated field theory description of the charged particles is available, e.g.\ in cases where the abelian gauge theory arises from Higgsing a non-Abelian gauge group and the monopoles originate from ``hedgehog'' configurations in the parent theory. However, we see no reason to expect that such theories will evade our constraints, hence for simplicity we defer consideration of them to future work. \medskip Our goal in this paper is to give a critical assessment of the state of large-field inflation in light of the WGC. We focus on scenarios with compact axion fields, leaving noncompact models of axion monodromy for future consideration (though some of our remarks may extend to such models). We find that arguments against approximately isotropic models of $N$-flation \cite{Dimopoulos:2005ac} and kinetic alignment \cite{Kim:2004rp,Bachlechner:2014hsa} are robust. The most difficult scenario to rule out arises from loopholes pointed out by de la Fuente, Saraswat, and Sundrum \cite{delaFuente:2014aca,Prashant}. We present conjectured bounds on this scenario that depend on the way that the mass spectrum shifts as the axion VEVs are varied. In every case that we find a bound, the parametrics {\em precisely} compensates for any possible enhancement and determines that the field range is bounded above by $M_{\rm Pl}$ (times at most an order-one number). The organization of this paper is as follows. 
In section \ref{sec:preliminaries}, we review extranatural inflation~\cite{ArkaniHamed:2003wu}, the Weak Gravity Conjecture and its requirement of a low UV cutoff \cite{ArkaniHamed:2006dz}, and arguments against large-field inflation with a single axion from both the electric \cite{ArkaniHamed:2006dz} and magnetic \cite{delaFuente:2014aca} points of view. In section \ref{sec:multiaxion}, we present arguments for how the magnetic form of the Weak Gravity Conjecture excludes the simplest models of $N$-flation \cite{Dimopoulos:2005ac} and models based on the Kim--Nilles--Peloso alignment mechanism \cite{Kim:2004rp} when it is realized through the structure of kinetic mixing \cite{Bachlechner:2014hsa}. These arguments do not address the case of decay constant alignment where the leading instanton effects arise from highly aligned electric charges in a basis in which the magnetic charges are not aligned \cite{delaFuente:2014aca}. In section \ref{sec:kkmonodromy}, we discuss how the spectrum of modes for extranatural inflation varies while traversing the axion moduli space. We argue that the models that evade our earlier arguments involve spectra with surprising features that one must accept in order to realize large-field inflation. We claim that, in the presence of particles of large charge, the 5d effective field theory breaks down in some regions of the moduli space if the compactification radius is not significantly smaller than the UV cutoff. This motivates the Single-EFT Consistency Criterion (SECC), which demands that any Kaluza-Klein mode which is light in some region of moduli space should have mass below the UV cutoff throughout the moduli space. Equivalently, it requires that the axion shift symmetry $\theta \to \theta + 2\pi$ can be understood as a large gauge transformation within the domain of validity of the UV completion. We emphasize that the SECC can be motivated within the 5d effective field theory in the presence of a Wilson line, not just the dimensionally reduced viewpoint. (In appendix \ref{sec:lattice} we further illustrate the SECC using a gauge-invariant lattice regulator.) A second new conjecture, the Extended Weak Gravity Conjecture, requires that the spectrum of particles should satisfy the WGC bound at every stationary point in moduli space (including local maxima). In section \ref{sec:decayconstantalignment}, we show that the loopholes raised by \cite{delaFuente:2014aca} violate our stronger conjectures. This completes our discussion of bounds on inflation from the WGC. Sections \ref{sec:entropybounds} and \ref{sec:strongforms} address prior claims to have constrained large-field inflation models using either entropy bounds \cite{Kaloper:1999tt,Conlon:2012tz,Boubekeur:2013kga} or hypothetical strong forms of the WGC \cite{Brown:2015iha}. These sections exist to place our paper in context and can be freely skipped. We claim that the entropy bound arguments in the literature rest on overly strong assumptions, and that the ``strong form'' of the WGC is ambiguous when applied to multiple $U(1)$s. We emphasize that we have never used such a strong form in this paper, and that we view the issues of Kaluza-Klein mode monodromy to be the most important new requirements we have used. We conclude in section \ref{sec:conclude} with a discussion of our view of what remains to be done to place our conjectured requirements on a sounder footing. 
We believe that further progress along these lines can either thoroughly exclude parametrically large-field inflation or identify special theories that satisfy all consistency requirements of quantum gravity. \section{Preliminaries} \label{sec:preliminaries} In this section we review the concept of extranatural inflation, the Weak Gravity Conjecture and its implications for the UV cutoff of a theory, and how the WGC rules out models of inflation driven by a single axion field. Along the way we make a few small remarks not present in the existing literature, but readers thoroughly familiar with the WGC can skip to the next section for our new results. \subsection{Axions from extra dimensions} In this paper we will focus on extranatural inflation models~\cite{ArkaniHamed:2003wu} in which the axion field arises by reducing a $p$-form gauge field ($p \geq 1$) on a $p$-dimensional cycle within a compactification manifold. String theory axions \cite{Conlon:2006tq,Svrcek:2006yi} share this feature with simple phenomenological models. It seems plausible that any consistent theory of a compact axion field coupled to quantum gravity can be viewed, in some duality frame, as a member of this class, so we view the restriction to extranatural models as a mild assumption. For example, consider the case of an ordinary (1-form) gauge symmetry. If the compactification manifold contains a circle, we can define an axion-like field in four dimensions from the integral around this circle: \begin{equation} \theta(x) = \oint dx^5\, A_5(x,x^5). \end{equation} Large gauge transformations of $A_5$ lead to $\theta \to \theta + 2\pi n$ (where $n \in \mathbb{Z}$), representing a (gauged) discrete shift symmetry and signifying that the axion field is compact. The same statement holds for axions obtained from higher-rank $p$-forms, where there is always the freedom to perform a large gauge transformation shifting the field by an integral multiple of the volume form of the $p$-cycle we integrate over. Beginning from a 5d action $\int d^5 x (\frac{1}{2} M_5^3 {\cal R} - \frac{1}{4 e_5^2} F_{\mu\nu}^2)$, we obtain a 4d Planck scale set by $M_{\rm Pl}^2 \equiv M_4^2 = 2\pi R M_5^3$, gauge coupling $1/e^2 = 2\pi R/e_5^2$, and an axion decay constant set by \begin{equation} {\cal L}_{\rm axion} = \frac{1}{2} f^2 \partial_\mu \theta \partial^\mu \theta,~~{\rm where}~~f^2 = \frac{1}{2\pi R\, e_5^2} = \left(\frac{1}{2\pi R\, e}\right)^2. \end{equation} In much of the paper, we will focus on this case, where a four-dimensional axion arises from the dimensional reduction of a one-form gauge field in five dimensions. Nonetheless, most of our arguments generalize straightforwardly to axions originating from $p$-form gauge fields in $D$ dimensions. \subsection{Weak Gravity Conjecture} \label{subsec:WGCreview} A useful formulation of the Weak Gravity Conjecture is that any near-extremal charged black hole should be able to move away from extremality by emitting a charged particle. For a $D$-dimensional theory containing a $p$-form field with coupling $g_p$, also coupled to a massless canonically normalized dilaton $\varphi$ through its kinetic term $\propto e^{-\alpha \varphi \sqrt{16 \pi G}} F_p^2$, the WGC asserts that there should exist a $(p-1)$-brane with charge $q$ under the $p$-form and tension $T$ satisfying the inequality \begin{equation} 8 \pi G \left[\frac{\alpha^2}{2} + \frac{p(D-p-2)}{D-2}\right] T^2 \leq g_{p}^2 q^2.
\end{equation} We comment on the detailed $\alpha$, $p$, and $D$ dependence of this expression in a separate paper \cite{Heidenreich:2015nta}. For a $U(1)$ gauge theory in four dimensions with coupling constant $e$, this has the immediate implication that there should exist an electrically charged particle of mass $m$ and charge $q$ satisfying $m < \sqrt{2} e q M_{\rm Pl}$ and a magnetic monopole of mass $m_{\rm mon}$ and charge $q_m$ satisfying $m_{\rm mon} < \sqrt{2} q_m M_{\rm Pl}/e$. As pointed out in \cite{ArkaniHamed:2006dz}, the existence of a monopole satisfying such a bound implies that weakly-coupled gauge theories coupled to gravity must have a UV cutoff that is parametrically below the Planck scale. The self-energy of the monopole from integrating its classical magnetic field down to a distance $r_{\rm min} \sim \Lambda^{-1}$ is linearly divergent: $\delta m_{\rm mon} \sim \frac{q_m^2}{e^2} \Lambda$. The precise coefficient is not essential because we make a naturalness argument, that $m_{\rm mon} \stackrel{>}{{}_\sim} \delta m_{\rm mon}$ (in the absence of fine-tuning), which like all naturalness arguments leaves the precise order-one coefficient in the bound as a matter of taste. Combining the naturalness bound with the magnetic WGC, we learn that \begin{equation} \Lambda \stackrel{<}{{}_\sim} \frac{e}{q_m} M_{\rm Pl}. \end{equation} In other words, the UV cutoff of the theory should be below $e M_{\rm Pl}$, and if the magnetic WGC is satisfied by a monopole of large charge the UV cutoff will be even lower. We emphasize that this magnetic form of the WGC, from~\cite{ArkaniHamed:2006dz}, does not depend on the EWGC, and our arguments based on it are the most robust. Notice that although one could attempt to apply the self-energy argument also to the electric field of electrically charged particles, such an integral must be cut off at the Compton wavelength $m^{-1}$ of the field. As is familiar from QED, the self-energy of the electron is a small correction to its mass. The argument that monopoles can bound the cutoff $\Lambda$ relies on the fact that the magnetic coupling is strong, the monopole is a solitonic object with mass above the cutoff, and that electric and magnetic charges are mutually nonlocal, so a theory containing both necessarily has a fundamental cutoff. The interpretation of the quantity $\Lambda$ that we are bounding is the scale at which it is no longer appropriate to treat the theory as a weakly-coupled, local effective field theory of a $U(1)$ gauge boson. This may be because $U(1)$ embeds into a nonabelian group which has $W$ bosons at the scale $\Lambda$, or may be due to more fundamental new physics such as the string scale or the higher-dimensional Planck scale. \subsection{The list of ingredients} In this paper there are four ingredients that will play a role in our arguments: \begin{enumerate} \item {\bf The instanton effects.} We are dealing with a set of axions that have a potential of the form $\sum_i c_i \cos(\sum_{j} Q_{ij} \theta_j)$. The charge matrix $Q$ is typically determined by a set of wrapped worldvolumes of electrically charged particles in extranatural inflation theories, but could originate from other physics. \item {\bf The electric charges satisfying the WGC.} A set of electrically charged particles exist that satisfy the constraints demanded by Weak Gravity. That these may not be the {\em same} charged particles contributing the dominant instanton effects is one source of loopholes to the simplest arguments. 
\item {\bf The magnetic charges satisfying the WGC.} A set of magnetically charged particles exist that satisfy the constraints of Weak Gravity. These particles give us crucial access to information about the UV cutoff of the theory and are the central elements of many of our arguments. \item {\bf The kinetic mixing matrix.} Our axions (or the higher-dimensional gauge fields they originated as) in general mix with each other through a matrix $K$. \end{enumerate} A number of models in the literature rely on ``alignment,'' following the work of Kim, Nilles, and Peloso \cite{Kim:2004rp}. A general alignment model is one in which all four of our ingredients can freely vary. In this paper we address two {\em physically distinct} special cases of alignment, which are not simply the same idea in a different basis. We will use the term {\em kinetic alignment} to refer to a scenario in which the electric charges contributing the dominant instanton effects (our first ingredient) and the magnetic charges satisfying the WGC (our third ingredient) are {\em both} simple (small integers in the charge lattice) in a basis where $K$ is arbitrary. This addresses models like \cite{Bachlechner:2014hsa}, though we make the additional assumption of simple magnetic charges. On the other hand, we could consider a model in which the kinetic term is simple but the charge matrix of the instanton effects and the magnetic charges satisfying the WGC are not (so that, for instance, there are large numbers appearing in the instanton matrix while the magnetic charges are simple). This is the scenario of \cite{delaFuente:2014aca}, which we refer to as {\em decay constant alignment}. Given this scenario, we could always redefine our basis to make the electric charges simple while scrambling the kinetic matrix. However, we would then have unusual charge assignments for the magnetic monopoles, and we require a different physical argument than the one we applied to the scenario we referred to as kinetic alignment. Although we only obtain bounds on these two special cases, we expect that a combination of the ideas used in these bounds can exclude the general case. To fix conventions for the electric charge in the extranatural case, we write the coupling of a charged particle to the gauge potential as \begin{equation} S = Q_a \int_P A^a\,, \end{equation} where the integral is over the worldline of the particle, and $a$ indexes the different gauge fields in the case of multiple $U(1)$'s. This leads to an axion potential of the form \begin{equation} V = V_0 \sum_n c_n \cos (n Q_a \theta^a)\,, \end{equation} in the dimensionally reduced theory, where $\theta^a = \oint A^a$. Likewise, we define the magnetic charge enclosed in a spatial region $\Sigma$ as \begin{equation} \tilde{Q}^a \equiv \frac{1}{2 \pi} \int_{\partial \Sigma} F^a\,, \end{equation} where $F^a = d A^a$. The Dirac quantization condition is \begin{equation} \tilde{Q}^a Q_a \in \mathbb{Z} \,. \end{equation} Unless otherwise specified, we always work in a basis where $\tilde{Q}^a$ and $Q_a$ are integrally quantized. \subsection{WGC and single-axion inflation: electric argument} \label{subsec:elecWGCinflation} An argument that the WGC excludes extranatural inflation with super-Planckian decay constants was given already in the original paper \cite{ArkaniHamed:2006dz}. The electric WGC in 5 dimensions tells us that a charged particle exists with mass $m < \sqrt{3/2} e_5 q M_5^{3/2}$. 
Upon reducing to four dimensions, this implies that the charged particle contributes an instanton action \begin{equation} S_{\rm inst} = 2\pi R m < \sqrt{\frac{3}{2}} \frac{q M_{\rm Pl}}{f}. \label{Sbound} \end{equation} Let us assume that the same instanton generates the axion potential. If we want to focus on large instanton actions so that higher-order corrections to the potential are exponentially suppressed, this shows that we require $f/q \stackrel{<}{{}_\sim} M_{\rm Pl}$, where $f/q$ is the field range determined by this potential. As recently emphasized in \cite{delaFuente:2014aca,Prashant}, this argument is not completely convincing because there is a ``small action loophole'': the prefactor in front of higher-instanton terms can suppress them even if the instanton action $S \ll 1$. Direct calculation confirms that the potential is a sum of terms proportional to $\exp(-2\pi n m R)\cos(n a/f)$ with coefficients decreasing with powers of $n$~\cite{Hosotani:1983xw,Cheng:2002iz,ArkaniHamed:2003wu}. Even at $m = 0$ where the instanton action gives no suppression, the $n^{\rm th}$ term in the sum has a $1/n^5$ suppression which is enough to safely allow inflation. A useful way to understand this power law suppression is to write the contribution of winding number $n$ in terms of the 5d Green's function for charged particle propagation $n$ times around the circle (see Appendix A of \cite{ArkaniHamed:2007gg}), in which case the power law is just the usual cost of propagating a massless field over the long distance $2\pi R n$. One can partially constrain this small action loophole by demanding that the convex hull condition should be satisfied for a 4d theory that includes both the usual $U(1)$ and a Kaluza-Klein $U(1)$. As shown in \cite{Heidenreich:2015nta}, this convex hull condition yields the inequality: \begin{equation} m_0 R \geq \frac{1}{2 z_0 \left(z_0^2-1\right)^{1/2}}, \label{bound2} \end{equation} where \begin{equation} z_0 = \sqrt{\frac{3}{2}} \frac{e q M_{\rm Pl}}{m_0} = \sqrt{\frac{3}{2}} \frac{ q M_{\rm Pl} }{2 \pi f m_0 R}. \end{equation} Here $m_0$ is the mass of the particle in 5d, and we have turned off the dilaton in the 5d theory. The convex hull condition for the 5d theory just enforces $z_0 \geq 1$, which is equivalent to (\ref{Sbound}). For $q=1$, we see that $f \sim (m_0 R)^{-1}$, so the maximal value of the decay constant grows inversely with the instanton action $S = 2 \pi m_0 R$ in the limit $S \rightarrow 0$. However, imposing the stronger condition (\ref{bound2}) and looking at the $m_0 R \rightarrow 0$ limit, we find \begin{equation} f^2 \leq \frac{3 M_{\rm Pl}^2 }{(2 \pi)^2 m_0 R}. \end{equation} This tells us that the maximal allowed value of $f$ grows like $S^{-1/2}$, rather than the na\"ively expected $S^{-1}$. Thus, the weak gravity bound on axion decay constants in the context of extranatural inflation is stronger than expected, but it is not strong enough to close the small action loophole---one may achieve a super-Planckian decay constant by taking $S$ small without violating the electric form of the WGC. \subsection{WGC and single-axion inflation: magnetic argument} \label{subsec:magneticargument} The small action loophole led \cite{delaFuente:2014aca} to propose a different argument for how the WGC can exclude single-axion extranatural inflation. Starting with the UV cutoff $\Lambda \stackrel{<}{{}_\sim} e M_{\rm Pl}$, they demand that the size of the compactification manifold be larger than $\Lambda^{-1}$. 
Then \begin{equation} 1 \stackrel{<}{{}_\sim} 2\pi R \Lambda \stackrel{<}{{}_\sim} 2\pi R\, e M_{\rm Pl} = \frac{M_{\rm Pl}}{f}. \end{equation} In this way we obtain the constraint $f \stackrel{<}{{}_\sim} M_{\rm Pl}$, completely independent of the size of the leading instanton effect or which charged particle generates it. Notice that because the cutoff is $\Lambda < eM_{\rm Pl}/q_m$ when the monopole satisfying magnetic WGC has charge $q_m$ and a particle of electric charge $q$ leads to an instanton proportional to $\cos(q a/f)$, choosing nonminimal charge assignments anywhere in the argument only makes the bound stronger. \section{Weak Gravity Conjecture and multi-axion models} \label{sec:multiaxion} In this section we will extend the magnetic argument against single-axion inflation from section \ref{subsec:magneticargument} to more general scenarios with multiple axion fields. We will first tackle the simplest case of $N$-flation with the strong assumption of diagonal kinetic terms and minimal charge assignments. Then we relax our assumptions to consider alignment models. \subsection{Warmup: diagonal N-flation} \label{sec:diagonalwarmup} Suppose that we have $N$ $U(1)$ gauge fields $A_i$ with couplings $e_i$ and that the kinetic mixing among them is negligible. We will also assume that in this basis the charge lattice simply consists of integer electric or magnetic charges under each $U(1)$. These are strong simplifying assumptions, but provide a useful starting point. The Weak Gravity Conjecture applied to each gauge field separately implies the existence of electrically and magnetically charged particles satisfying certain bounds. But the constraint for the set of $N$ fields is actually stronger than for any individual field: if we marginally saturate the bound for each $U(1)$ by postulating a magnetic monopole with charge $q_m = 1$ and mass $\sqrt{2} M_{\rm Pl}/e_i$, for example, then a nearly-extremal black hole with large and equal charges under every $U(1)$ will not be able to decay. This is because the extremality bound for a black hole charged under multiple groups depends not on the sum of the charges but on the charges added in quadrature: for a magnetically charged black hole in four dimensions, the bound is $Q_{\rm eff} \equiv \sqrt{Q_1^2/e_1^2 + \ldots + Q_N^2/e_N^2} < M_{\rm BH}/(\sqrt{2} M_{\rm Pl})$. Consider extranatural $N$-flation that moves along the diagonal in each axion direction, attempting to obtain an effective decay constant \begin{equation} f_{\rm eff}^2 = f_1^2 + f_2^2 + \ldots + f_N^2 = \left(\frac{1}{2\pi R}\right)^2 \left(\frac{1}{e_1^2} + \ldots + \frac{1}{e_N^2}\right). \label{eq:feffsqdiagonal} \end{equation} Notice that this is the appropriate expression under the assumption of no kinetic mixing {\em and} the further assumption that the dominant instanton effects give rise to a potential of the form $\sum c_i \cos(a_i/f_i)$, as would be generated for example from wrapped worldlines of electrically charged particles of charge 1 under each gauge group. More general instantons can lead to alignment phenomena in which inflation winds around one direction in axion space multiple times. We will return to such a possibility later. The linear combination of $1/e_i^2$ factors appearing on the right-hand side of (\ref{eq:feffsqdiagonal}) is precisely what appears in the extremality bound for a magnetically charged black hole with equal charge $Q$ under all $N$ gauge groups. 
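Indeed, for equal charges $Q_i = Q$, the effective charge defined above can be written directly in terms of the would-be decay constant of (\ref{eq:feffsqdiagonal}): \begin{equation} Q_{\rm eff} = Q \sqrt{\sum_i \frac{1}{e_i^2}} = 2\pi R\, Q\, f_{\rm eff}, \end{equation} so the black hole bound $Q_{\rm eff} < M_{\rm BH}/(\sqrt{2} M_{\rm Pl})$ involves exactly the combination of couplings that controls $f_{\rm eff}$.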
Let us build some intuition by considering ways that such a diagonally magnetically charged black hole could decay: \begin{itemize} \item It could emit a monopole of diagonal charge $(q,q,\ldots,q)$, so that its charge-to-mass vector points in the same direction after the emission but is now shorter (we take $Q > q > 0$). In this case, the problem essentially reduces to the one-field case. The self-energy of the monopole imposes $m_{\rm mon} \stackrel{>}{{}_\sim} q^2 \sum_i \frac{1}{e_i^2} \Lambda$, while the condition that the black hole moves away from extremality imposes that $m_{\rm mon}^2 \stackrel{<}{{}_\sim} q^2 \sum_i \frac{1}{e_i^2} M_{\rm Pl}^2$. These conditions together with $2\pi R \Lambda \stackrel{>}{{}_\sim} 1$ require that $f_{\rm eff} \stackrel{<}{{}_\sim} M_{\rm Pl}/q$. \item It could emit a monopole charged under a single gauge group. Suppose it emits a particle with mass $m_1$ and charges $(q_1, 0, \ldots, 0)$. The self-energy constraint leads to $m_1 \stackrel{>}{{}_\sim} q_1^2 \Lambda/e_1^2$. If the diagonally-charged black hole emits this particle, its effective charge decreases only by (expanding the square root) $-\Delta Q_{\rm eff} \approx Q q_1/(e_1^2 Q_{\rm eff}) = (q_1/e_1) (f_1/f_{\rm eff})$. As a result, the condition that the monopole can be emitted is no longer $m_1 < \sqrt{2} q_1/e_1 M_{\rm Pl}$ but the stronger condition $m_1 < \sqrt{2} q_1/e_1 (f_1/f_{\rm eff}) M_{\rm Pl}$. This leads to $f_{\rm eff} \stackrel{<}{{}_\sim} M_{\rm Pl}/q_1$. \item Now consider the general case in which the emitted monopole has mass $m$ and charges $(q_1, \ldots, q_N)$. For the black hole to move away from extremality we first require that $Q_{\rm eff}$ decreases, so that $\sum_i q_i/e_i^2 > 0$. A straightforward generalization of the previous argument leads to a bound \begin{equation} f_{\rm eff} \stackrel{<}{{}_\sim} M_{\rm Pl} \frac{\sum_i q_i/e_i^2}{\sum_i q_i^2/e_i^2} \stackrel{<}{{}_\sim} M_{\rm Pl}. \end{equation} The last step follows because charge quantization demands that $q_i^2 \geq |q_i|$ for every integer charge $q_i$. \end{itemize} These arguments give a suggestive hint of how scenarios with multiple axions can be more strongly constrained by the Weak Gravity Conjecture in a manner that precisely compensates for the expected gain in field range. However, we have made a strong simplifying assumption that the electric charges leading to the dominant instanton effects are simple in the same basis in which the gauge field kinetic term is diagonal. We will now explore the constraints imposed by the WGC if we relax this assumption. \subsection{Magnetic WGC and kinetic alignment}\label{sec:magwgckineticalignment} Consider the case of a general kinetic matrix for the gauge fields: \begin{equation} -\frac{1}{4} K_{ij} F^i_{\mu \nu} F^{j\mu\nu}. \end{equation} Assume that we are working in a basis in which there are $N$ magnetic monopoles that satisfy the magnetic WGC and have unit charges $(1,0,\ldots,0), (0,1,\ldots,0), \ldots, (0,0,\ldots,1)$. We can choose a different basis to diagonalize the kinetic terms: \begin{equation} K = O D O^T, \label{KD} \end{equation} where $O$ is an orthogonal matrix and $D = {\rm diag}(1/g_1^2, \ldots, 1/g_N^2)$ is a diagonal matrix. Without loss of generality we can choose \begin{equation} g_{\rm min}^2 \equiv g_1^2 \leq g_2^2 \leq \ldots \leq g_N^2 \equiv g_{\rm max}^2. \end{equation} In this basis, the $i^{\rm th}$ monopole has charge assignments ${\vec o}_i$ that can be read off from the matrix $O$.
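As a side illustration of this setup (not part of the argument, and with a randomly drawn positive-definite kinetic matrix standing in for any concrete model), the following minimal numerical sketch extracts the couplings $g_j$ and the monopole charge vectors ${\vec o}_i$ in the diagonal basis, and checks the orthonormality of the columns of $O$ that we exploit below:
\begin{verbatim}
import numpy as np

N = 20
rng = np.random.default_rng(0)

# Illustrative random positive-definite kinetic matrix (Wishart-like).
A = rng.normal(size=(N, N))
K = A @ A.T / N

# Diagonalize: K = O D O^T with D = diag(1/g_1^2, ..., 1/g_N^2),
# ordered so that 1/g_1^2 is the largest eigenvalue (g_1 = g_min).
eigvals, O = np.linalg.eigh(K)          # eigenvalues in ascending order
eigvals, O = eigvals[::-1], O[:, ::-1]  # reorder: 1/g_1^2 first
g = 1.0 / np.sqrt(eigvals)

# In the diagonal basis the i-th unit-charge monopole has charge
# vector o_i given by the i-th row of O.
o = O  # o[i, j] = o_{ij}

# Columns of an orthogonal matrix are unit vectors: sum_i o_{ij}^2 = 1.
assert np.allclose((o**2).sum(axis=0), 1.0)
\end{verbatim}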
The self-energy of this monopole gives us an inequality relating the UV cutoff $\Lambda$ and the monopole mass $m_i$: \begin{equation} \left(\sum_j \frac{o_{ij}^2}{g_j^2} \right) \Lambda \stackrel{<}{{}_\sim} m_i. \end{equation} In particular, if we sum over all $i$ and exploit orthogonality, we learn that \begin{equation} \sum_j \frac{1}{g_j^2} \Lambda \stackrel{<}{{}_\sim} m_1 + \cdots + m_N \equiv m_{\rm tot}. \end{equation} The magnetic WGC tells us that the convex hull of the charge-to-mass vectors $\pm {\vec z}_i$ of the $N$ monopoles contains the unit ball. We have \begin{equation} ({\vec z}_i)_j = \frac{o_{ij} M_{\rm Pl}}{g_j m_i}. \end{equation} Since the ${\vec z}_i$ form a basis, the convex hull condition can be restated as the requirement that for any coefficients $\alpha_1, \ldots, \alpha_N$, \begin{equation} \left| \sum_i \alpha_i {\vec z}_i\right| \geq \sum_i \left|\alpha_i\right|. \end{equation} Consider the choice $\alpha_i = \sigma_i m_i$, where $\sigma_i = \pm 1$ is a choice of sign. In this case the convex hull condition tells us that \begin{equation} m_1 + \cdots + m_N \leq \sqrt{\sum_j \left(\sum_i \frac{\sigma_i o_{ij}}{g_j}\right)^2} M_{\rm Pl}. \end{equation} Combining the convex hull condition with the constraint on the cutoff, we learn that for any set of sign choices $\sigma_i$ \begin{equation} \sum_j \frac{1}{g_j^2} \Lambda \stackrel{<}{{}_\sim} \sqrt{\sum_j \left[\frac{1}{g_j^2} \left(\sum_i \sigma_i o_{ij}\right)^2\right]} M_{\rm Pl}. \label{sumeq} \end{equation} There are $2^N$ choices of sign $\sigma_i$, some of which potentially provide much stronger bounds than others. Consider the case where the largest eigenvalue completely dominates, so that we can drop all terms in the sum not proportional to $\frac{1}{g_1^2}$. In \cite{Bachlechner:2014hsa}, it was pointed out that the eigenvector with the largest eigenvalue $1/g_1^2$ of a randomly chosen kinetic matrix $K_{ij}$ will almost certainly point close to a diagonal direction of the fundamental cube, e.g. $\sim (1,1,\ldots,1)/\sqrt{N}$. Since the diagonal of an $N$-dimensional cube has length $\sqrt{N}$, this implies (in the extranatural context with minimal instanton charges) an effective decay constant of $f_{\rm eff} \approx \sqrt{N}/(2 \pi R g_1)$ in the direction of largest eigenvalue. However, in this case, we can choose the signs $\sigma_i$ to alternate and nearly cancel so that the sum $\sum_i \sigma_i o_{i1} \sim \frac{1}{\sqrt{N}}$. This leads to an estimated bound \begin{equation} \frac{1}{g_1^2} \Lambda \stackrel{<}{{}_\sim} \sqrt{\frac{1}{g_1^2 N}} M_{\rm Pl}~~\Rightarrow~~\Lambda \stackrel{<}{{}_\sim} \frac{g_1 M_{\rm Pl}}{\sqrt{N}}. \end{equation} This bound is stronger than the naive one-field version of the magnetic WGC by a factor of $\sqrt{N}$, so imposing that $\Lambda R \stackrel{>}{{}_\sim} 1$ precisely produces $f_{\rm eff} \stackrel{<}{{}_\sim} M_{\rm Pl}$. It is possible to make a more general version of the argument that excludes any case in which the single eigenvalue $1/g_1^2$ dominates the sums (\ref{sumeq}), without making an assumption about the eigenvector. The cutoff $\Lambda$ in this case obeys \begin{equation} \Lambda \lesssim g_1 \left| \sum_i \sigma_{i} o_{i1} \right| M_{\rm Pl}.
\label{smallcutoffeq} \end{equation} On the other hand, the 4d Lagrangian for the axions is given by \begin{equation} \frac{1}{2(2 \pi R)^2} D_{ij} \partial_\mu\theta_i \partial^\mu\theta_j - \sum_i A_i e^{-S_i} \cos\left(\sum_j o_{ij} \theta_j \right), \end{equation} where $D$ is the diagonal matrix of (\ref{KD}). In the limit in which $1/g_1^2$ is much larger than the other eigenvalues of $D$, we may approximate the axion moduli space radius by considering only the field displacement $|\Delta\theta_1|$ in the direction of the largest eigenvalue. The maximal displacement is given by the largest value of $|\Delta\theta_1|$ satisfying the conditions $$ |o_{i1} \Delta\theta_1| \leq \pi, $$ for all $i$. Thus, \begin{equation} |\Delta\theta_1|_{\rm max} = \frac{\pi}{{\rm Max}_i|o_{i1}|}. \end{equation} Combining this with our previous bound (\ref{smallcutoffeq}) and setting $\Lambda R \stackrel{>}{{}_\sim} 1$, we get a bound on the radius of axion moduli space, \begin{equation} r \stackrel{<}{{}_\sim} \frac{\Lambda}{2\pi g_1} |\Delta\theta_1|_{\rm max} \stackrel{<}{{}_\sim} \frac{1}{2} \left| \frac{ \sum_i \sigma_{i} o_{i1} }{{\rm Max}_i|o_{i1}|} \right| M_{\rm Pl}. \label{rboundsmallg1} \end{equation} Finally, it is not hard to see that we can choose the signs $\sigma_i$ so that this fraction is at most $1$. Order the $o_{i1}$'s in descending order of their magnitude. Set $\sigma_1 = +1$. Then, recursively define \begin{equation} \sigma_k = \left\{ \begin{array}{lr} {\rm sgn}(o_{k1}) & \mbox{ : }~~~\sum_{i=1}^{k-1} \sigma_i o_{i1} < 0 \\ -{\rm sgn}(o_{k1}) & \mbox{ : }~~~\sum_{i=1}^{k-1} \sigma_i o_{i1} \geq 0 \end{array} \right. \end{equation} With this choice, each new term moves the partial sum towards the origin, and its magnitude $|o_{k1}|$ never exceeds that of any earlier term. A simple induction then shows that every partial sum satisfies $|\sum_{i=1}^{k} \sigma_i o_{i1}| \leq |o_{11}|$: if the previous partial sum is nonnegative we subtract a term of magnitude at most $|o_{11}|$, and if it is negative we add one, so the partial sum can neither grow past $|o_{11}|$ nor overshoot below $-|o_{11}|$. Since the magnitude of the first partial sum is exactly $|o_{11}| = {\rm Max}_i|o_{i1}|$, the full sum is no larger in magnitude than ${\rm Max}_i|o_{i1}|$. Thus, (\ref{rboundsmallg1}) gives \begin{equation} r \stackrel{<}{{}_\sim} \frac{1}{2} M_{\rm Pl}. \end{equation} This excludes any kinetic alignment model with a single dominant large eigenvalue, again under the assumption that the instanton effects are controlled by minimal electric charges in the same basis for which the magnetic monopoles satisfying the WGC have minimal charge. A {\em parametric} violation of this assumption, such as instanton effects that are highly aligned, can evade our arguments. We will discuss such a case in section \ref{sec:decayconstantalignment}. The assumption of single-eigenvalue dominance, on the other hand, is made only for simplicity. It is straightforward to check in the two-axion case that the bound holds for completely arbitrary eigenvalues. Furthermore, simple numerical studies in which the kinetic matrix is chosen from a Wishart distribution, $K_{ij} \sim W_{N}(\sigma^2,N)$, reveal that indeed the radius of moduli space decreases with increasing $N$. \section{New conjectures on EFT over the moduli space} \label{sec:kkmonodromy} \subsection{Exploring the moduli space: masses and Kaluza--Klein reduction} In this section we develop a new tool for constraining large-field axion models arising from extra dimensions, which opens an opportunity to obtain powerful constraints on models of axion monodromy.
This approach relies, in part, on the nontrivial manner in which shift symmetries are realized in the effective theory. The potential energy in extranatural inflation (including string axion models) is a sum of cosine terms from instantons of various winding numbers, respecting an exact discrete shift symmetry. However, other terms in the effective theory preserve the shift symmetry in a less transparent way. Consider the case of a 4d axion obtained by dimensional reduction of a 5d 1-form gauge field, and suppose that in five dimensions there is a fermion $\Psi$ with charge $q$ under the gauge field. (The case of a charged scalar field is similar.) Its action is \begin{equation} \int d^5 x \sqrt{-g} \left(i {\bar \Psi} \Gamma^M D_M \Psi + m_5 {\bar \Psi} \Psi + \frac{c}{\Lambda} D_M {\bar \Psi} D^M \Psi + \ldots\right), \label{eq:5dchargedaction} \end{equation} where $D_M = \partial_M - i q A_M$, $\Lambda$ is the UV cutoff of the theory, and the dots represent various higher-dimension operators. The five Dirac matrices $\Gamma^M$ correspond to the usual 4d Dirac matrices together with $-i \gamma^5$. We emphasize that $\Lambda$ is the scale at which the local, 5d abelian gauge theory breaks down. In particular, we have no guarantee of five-dimensional locality holding at distances shorter than $\Lambda^{-1}$. We study this theory on a background of ${\mathbb R}^{3,1} \times S^1$ with the fifth dimension having a periodic identification $y \sim y + 2\pi R$ with a background gauge field $A_5 = \frac{\theta}{2\pi R}$. Although fixing $A_5$ to be constant is a gauge choice, there is a gauge-invariant Wilson loop determined by $\theta$ which is well-defined modulo $2\pi$. The compactified theory contains a term \begin{equation} \int d^5 x \sqrt{-g} \frac{q\theta}{2\pi R} {\bar \Psi} \Gamma^5 \Psi, \label{eq:effectivemassterm} \end{equation} that we may think of as an effective mass (albeit one that depends on the spontaneous breaking of 5d Lorentz symmetry) which potentially decouples $\Psi$ from the effective theory if $\theta$ is large enough. This 5d term gives rise to a (CP-odd) mass term $\propto i \theta {\bar \psi} \gamma^5 \psi$ in the 4d theory, which can have important dynamical consequences when $\theta$ is large. At first glance, such a mass term constitutes a hard breaking of the shift symmetry for $\theta$---even of the {\em gauged} $\theta \to \theta + 2\pi$ symmetry! The resolution of this puzzle is that there is a monodromy in the Kaluza--Klein spectrum. Using the Kaluza-Klein decomposition $\Psi(x,y) = \sum_{n = -\infty}^{\infty} \exp(i n y/R) \psi_n(x)/\sqrt{2\pi R}$, this action leads to a 4d effective theory \begin{equation} {\cal L}_{\rm eff} = \sum_{n = -\infty}^{\infty} \left(i {\bar \psi}_n \gamma^\mu D_\mu \psi_n + m_5 {\bar \psi}_n \psi_n + i \frac{n - \frac{q \theta}{2\pi}}{R} {\bar \psi_n} \gamma^5 \psi_n + \frac{c}{\Lambda} \left|\frac{n - \frac{q \theta}{2\pi}}{R}\right|^2 {\bar \psi}_n \psi_n+ \ldots \right). \label{eq:kkdecomposedL} \end{equation} If we were to truncate this theory to a few low-lying modes, we would find a violation of the shift symmetry $\theta \to \theta + 2\pi$. But this symmetry is a large gauge transformation in the higher-dimensional UV completion, so it cannot be violated. Writing the EFT for all Kaluza--Klein modes makes the answer manifest. There is a monodromy effect that rearranges the spectrum; when $\theta \to \theta + 2\pi$, the mode with label $n$ acquires the same mass spectrum that the mode with label $n - q$ previously had. 
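As a sanity check on this bookkeeping, the following minimal numerical sketch (all parameter values are placeholders chosen only for illustration) confirms that the Kaluza--Klein spectrum at $\theta + 2\pi$ coincides with the spectrum at $\theta$ once the labels are shifted by $q$:
\begin{verbatim}
import numpy as np

# Placeholder values of the charge, radius, and 5d mass.
q, R, m5 = 3, 1.0, 0.4
n = np.arange(-50, 51)

def kk_masses(theta):
    # m_n(theta) = sqrt(m5^2 + ((n - q*theta/(2*pi)) / R)^2)
    return np.sqrt(m5**2 + ((n - q * theta / (2 * np.pi)) / R)**2)

theta = 0.7
# Under theta -> theta + 2*pi the mode labeled n acquires the mass that
# the mode labeled n - q had before the shift.
assert np.allclose(kk_masses(theta + 2 * np.pi)[q:], kk_masses(theta)[:-q])
\end{verbatim}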
This relabeling is guaranteed because the derivative $\partial_5$ and the contribution of $A_5$ are always packaged together in a covariant derivative, so the same will be true of arbitrary higher-dimension operators as well. Recall that for a Dirac fermion with mass term $m {\bar \psi} \psi + i \mu {\bar \psi} \gamma^5 \psi$, the physical mass is $\sqrt{m^2 + \mu^2}$. In particular, all 4d fields have mass at least as large as the 5d mass $m_5$. \subsection{Consistency of a single EFT across axion moduli space}\label{sec:secc} We have seen that the 5d theory compactified on a circle with a Wilson loop $\theta = \oint A_5 dx^5$ turned on has a spectrum that depends nontrivially on the value of $\theta$. Let us ask what happens when we move a large distance in moduli space. Tracking a single KK mode adiabatically as $\theta$ varies, we find that its CP-odd mass is shifted by \begin{equation} \Delta m = \frac{q \Delta\theta}{2 \pi R}. \end{equation} In particular, if $\Delta \theta \stackrel{>}{{}_\sim} 2 \pi R \Lambda/q$, then a KK mode which is initially light acquires a large mass of order the cutoff $\Lambda$, and exits the effective theory. In fact, when we move this far in moduli space, the entire KK spectrum is shifted, so that the modes which were initially light are heavy, and modes initially above the cutoff are light. Since our description of the five-dimensional theory breaks down at $\Lambda$ (and in particular 5d locality may not hold above this scale), it is possible that in the process new physics can emerge from the cutoff and become light, ruining our effective description. Thus, if we wish to retain control of the KK spectrum, we should impose: \begin{eqnarray} q\, \Delta \theta \stackrel{<}{{}_\sim} 2 \pi R \Lambda. \label{eq:monodromyfieldrange} \end{eqnarray} We emphasize that this is {\em not} a statement about the 4d effective theory cut off at the compactification scale, which obviously does not include Kaluza--Klein modes that may be important elsewhere in the moduli space. It is a statement about the 4d theory including a tower of weakly coupled modes all the way up to the cutoff $\Lambda$, which is fully equivalent to the 5d theory on the Wilson loop background. One point that we should emphasize is that the breakdown of effective field theory that we are discussing does {\em not} correspond to a violation of perturbative unitarity in high-energy scattering in 5d. The Wilson loop is gauge-invariant only when integrated over the full circle, so short-distance 5d scattering experiments do not detect it. Local scattering experiments are not the only way to detect a failure of EFT, however, and the KK mode spectrum is a physical observable that does so. One clear instance in which a subtlety of this kind \emph{does not} arise is when the path in moduli space that we have taken winds many times around a small periodic circle (without monodromy). In this case, the exact shift symmetry of the axion ensures that nothing dramatic can occur. However, we emphasize that inflation requires a motion in moduli space which is not periodic, either due to monodromy or because the size of the circle is large. In this case, the shift symmetry does not help. This suggests an alternate perspective on the problem. The periodicity of $\theta$ arises because we can do a large gauge transformation $A_M \to A_M - \partial_M \chi$ for which $\chi$ is not single-valued on the circle but $e^{i \chi}$ is.
In particular, we can identify $\theta = 2\pi$ with $\theta = 0$ by performing the transformation \begin{eqnarray} A_5 \to A_5 - 1/R,~~\chi = y/R,~~\Psi \to e^{-iqy/R} \Psi. \end{eqnarray} While this appears at first glance to be a completely innocent operation, notice that if we are working within an effective field theory with UV cutoff $\Lambda$, this large gauge transformation can bring in modes that are outside the validity of our effective field theory. In particular, if we do not require \begin{eqnarray} \label{eqn:periodicitycondition} \frac{q}{R} \stackrel{<}{{}_\sim} \Lambda, \end{eqnarray} then the low-frequency modes of the gauge-transformed $\Psi$ field involve very high-frequency modes of the original field, and vice versa. If we do not require~(\ref{eqn:periodicitycondition}), then even the periodicity of $\theta$ becomes a subtle question in the low-energy theory! Heuristically, another way to see a problem with these large field ranges is to consider the effective mass for $\Psi$~(\ref{eq:effectivemassterm}): \begin{eqnarray} \int d^5 x \sqrt{-g} \frac{q\theta}{2\pi R} {\bar \Psi} \Gamma^5 \Psi \,. \end{eqnarray} If~(\ref{eq:monodromyfieldrange}) is violated then $\Psi$ receives an effective mass which removes it from the low-energy effective field theory. Of course, the full term involves $\partial_5 - i qA_5$, so the large mass obtained from the Wilson loop can be compensated by high-frequency oscillations in $y$, but these high-frequency modes are not part of the EFT that we started with at the origin of moduli space. We elaborate on this point in appendix \ref{sec:lattice}, using a manifestly gauge-invariant lattice regulator to explore how physical quantities can depend on the cutoff if $\Lambda R$ is not large compared to $q$. Large effective masses far out on the moduli space are particularly suspect in cases where $\Psi$ plays an important dynamical role. For instance, if $\Psi$ provides one of the dominant instanton contributions to the potential, what does it mean to compute $V(\theta)$ for a value of $\theta$ for which it is inconsistent to keep track of the particle generating the potential? If $\Psi$ is a field that is necessary to satisfy the electric WGC, decoupling it from the effective theory is inconsistent with the EWGC. We propose a new constraint on theories of extranatural inflation based on this consistency requirement. One statement of the constraint is the following:\\ \noindent {\bf Single-EFT Consistency Criterion (SECC):} in order to have a controlled description of a portion of the moduli space within a single effective field theory, we demand that any field which is part of the EFT at one point of the moduli space is not decoupled by terms like (\ref{eq:effectivemassterm}) in a different region of the moduli space. Equivalently, if a Kaluza-Klein mode is light somewhere in the moduli space, this mode should exist within the effective theory at the origin of moduli space. This constrains $R \Lambda$ to satisfy (\ref{eq:monodromyfieldrange}).\\ \noindent Loosely, in a controlled theory a mode cannot appear ``out of the blue.'' This seems to us to be a sufficiently well-motivated criterion that it is worthwhile to explore its consequences. 
An equivalent statement, if we want to describe the {\em entire} moduli space in a single EFT, is:\\ \noindent {\bf Single-EFT Axion Periodicity Criterion}: The periodic identification of 4d axions arising from an underlying higher-dimensional gauge theory should arise from large gauge transformations that are well-defined within the higher-dimensional EFT. Specifically, if the theory has a UV cutoff $\Lambda$, then fields which are smooth on scales much larger than $\Lambda^{-1}$ should not oscillate on length scales shorter than $\Lambda^{-1}$ after the gauge transformation.\\ The SECC assumes that we should be able to work with a single well-defined 5d effective field theory. One might imagine a patchwork of effective field theories, each valid over a limited range of $\theta$, which are matched onto each other in overlapping regimes. Nothing intrinsically seems to prevent us from considering the 5d theory on a Wilson line background with any particular value of $\theta$; what we have seen is that connecting the theories at different values of $\theta$ may be difficult. One might consider the case of Seiberg-Witten theory \cite{Seiberg:1994rs}, in which vacua with weakly coupled electrons and with weakly coupled monopoles cannot coexist in the same EFT from the IR point of view but are guaranteed to be smoothly joined together due to well-understood UV physics. Our claim is that because our 5d theory came with a built-in cutoff at $\Lambda$, we do not actually have such a sharp understanding of the UV physics in this case. It may exist if we embed the 5d theory in a more complete UV setting. If the large gauge transformations that guarantee an identification $\theta \sim \theta + 2\pi$ in the four-dimensional effective field theory are not actually valid operations in the UV completion that we started with, this suggests that we do not truly have a controlled theory of axions. In such a case it is unclear what a computation of the axion potential as a periodic function of the $\theta$'s even means. Nonetheless, we cannot give any fully rigorous argument in favor of the Single-EFT Consistency Criterion. In this paper, we will explore the consequences of the SECC, while welcoming debate on its merits. \subsection{Consequences of Single-EFT Consistency for monodromy}\label{sec:monodromy} The Single-EFT Consistency Criterion, in the form of the bound (\ref{eq:monodromyfieldrange}), is a significant potential obstacle to any model based on axion monodromy. To see why, consider any model in which an axion field winds around the circle $N$ times in the presence of monodromy. We have the constraint \begin{eqnarray} N \stackrel{<}{{}_\sim} R\Lambda \end{eqnarray} from the SECC. But we also have, from the magnetic form of the WGC, the additional constraint \begin{eqnarray} \Lambda \stackrel{<}{{}_\sim} e M_{\rm Pl}. \end{eqnarray} These conditions together with $f = 1/(2 \pi R\, e)$ imply \begin{eqnarray} N f \stackrel{<}{{}_\sim} M_{\rm Pl}, \end{eqnarray} so the effective {\em total} field range from winding $N$ times around the circle is {\em still} bounded above by the Planck scale. Monodromy was important in this argument. Without monodromy, if the physical state were {\em exactly} the same after each trip around the circle, we could get away with only requiring that a gauge transformation $\theta \to \theta + 2\pi$ is well-defined (and then repeat it $N$ times) rather than that a larger field range $\Delta \theta \sim 2\pi N$ is accessible within the effective theory. 
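For reference, the chain of inequalities in the monodromy argument above can be written in a single line, which makes the cancellation of $R$ and $e$ explicit: \begin{equation} N f \,\stackrel{<}{{}_\sim}\, (R \Lambda)\, f \,\stackrel{<}{{}_\sim}\, R\, e M_{\rm Pl} \cdot \frac{1}{2\pi R\, e} = \frac{M_{\rm Pl}}{2\pi}. \end{equation}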
Although we have phrased our argument in terms of 1-form gauge fields in five dimensions, a similar constraint will arise from the SECC for the more general $p$-form models. Just as our charged field $\Psi$ obtained an effective Lorentz-violating mass $\sim A_5 {\bar \Psi} \Gamma^5 \Psi$ in the presence of a background gauge field, the presence of a background $p$-form will add a Lorentz-violating tension term to the worldvolume effective theory of a $(p-1)$-brane, potentially decoupling it from the effective field theory. The SECC argument against monodromy is not airtight. In section \ref{sec:decayconstantalignment} we will consider a two-axion model of inflation in which one axion winds $N$ times around the circle, but we will see in section \ref{subsec:fundamentaldomain} that this does not necessarily violate the SECC. The reason is that there is a compensating contribution to the mass of the charged fields coming from the second axion. This gives some insight into how a monodromy model might successfully escape the SECC. However, in the model we will discuss, the existence of any charged fields with {\em different} charges than those producing the dominant instanton effects will restore the power of the SECC, whereas the one case that evades the SECC is constrained by a different requirement that we will formulate in section \ref{sec:kkimplications}. \subsection{The Weak Gravity Conjecture across moduli space}\label{sec:kkimplications} There is one other conjecture, in a similar spirit to the SECC but differing in its details and its implications, that is worth considering:\\ \noindent {\bf Extended Weak Gravity Conjecture (XWGC):} The weak gravity conjecture should be satisfied at any stationary point of the potential.\\ In fact, we will only use this condition applied at extrema of the potential, rather than generic stationary points. We could also have imposed a stronger condition that the WGC holds {\em everywhere} in the moduli space, though at least for the cases we consider the results would be equivalent. This provides a new viewpoint on the small-action loophole. A charged particle with 5d mass $m_5 = 0$ obviously satisfies the (electric) WGC in five dimensions, and we saw that such a particle can generate a potential compatible with inflation despite the lack of exponential suppression of higher harmonics. However, the XWGC demands that the electric WGC is satisfied also at stationary points of the potential away from $\theta = 0$. These are classically stable states, but those that are not local minima will eventually tunnel away from the critical point. Because tunneling can be a slow process, and charged black holes discharge quickly when the WGC is satisfied \cite{Gibbons:1975kk,Schumacher:1985zz,Khriplovich:1999gm}, it seems plausible that the WGC should hold even in these unstable states. As mentioned in the introduction, we suspect that the requirement that black holes decay is actually a dynamical requirement that they shed charge often enough relative to uncharged Hawking quanta, rather than a simple kinematic statement that they can decay at all. Further work on black hole thermodynamics may help to justify or refute the XWGC by quantifying the timescale on which we require charge to be lost. The Kaluza--Klein modes have masses spaced by $1/R$, so at the maximum of the potential $\theta = \pi/q$ the masses are maximally shifted and the lightest electrically charged particle has $m = 1/(2 R)$, or larger if we begin with $m_5 \neq 0$. 
Let us assume, as in~\S\ref{subsec:elecWGCinflation}, that the same particle which generates the leading contribution to the axion potential satisfies the XWGC. Let us first give a simple heuristic argument for why the XWGC could close the small-action loophole. For a particle of charge $q$, we obtain the bound: \begin{equation} \frac{1}{2 R} < \sqrt{2}q e M_{\rm Pl} = \frac{q M_{\rm Pl}}{\sqrt{2} \pi R f}~~\Rightarrow~~\frac{f}{q} < \frac{\sqrt{2}M_{\rm Pl}}{\pi}. \end{equation} Assuming that the same particle generates the axion potential, $f/q$ is precisely the effective axion field range, and the small action loophole appears to have been closed without invoking the magnetic form of the conjecture.\footnote{The argument based on the magnetic WGC is still somewhat stronger, because we don't need to assume that the charged particle which generates the leading contribution to the axion potential has any other special role to play.} The argument we have just given ignores an important effect. The compactification on the circle produces a second $U(1)$ gauge field, namely the KK $U(1)$ arising from graviton modes with one leg on the circle. At nonzero values of $\theta$, the two $U(1)$ gauge fields mix and the correct WGC to consider is the convex hull condition applied to our original $U(1)$ gauge theory and the Kaluza-Klein $U(1)$. We present a detailed derivation of this statement and a discussion of the mixing effect in \cite{Heidenreich:2015nta}. The weak gravity bound becomes \begin{equation} m^2 \leq \gamma e^2 q^2 M_{\rm Pl}^2 + \frac{g_{\rm KK}}{R^2} \left(n - \frac{q \theta}{2\pi}\right)^2. \end{equation} The constant $\gamma$ is $2$ as above if the radion mode is stabilized, but is $3/2$ if the radion is unstabilized. Similarly, the constant $g_{\rm KK}$ is $1$ for an unstabilized radion and $2$ for a stabilized radion. If a 5d particle obeys the WGC, then any of its KK modes in 4d will in fact obey this inequality for any value of $\theta$, undermining the heuristic argument we gave above. However, our conclusion survives once we take the convex hull condition into account. The reason we obtain a bound from the convex hull condition is that there is not a KK mode in every direction in charge space. We want to apply the convex hull condition to the charge-to-mass vectors \begin{align} {\vec z} = \left(z, z_{\rm KK}\right) &= \frac{1}{m(n,\theta)} \left(\sqrt{\gamma} e q M_{\rm Pl},~\frac{g_{\rm KK}}{R} \left(n - q \frac{\theta}{2\pi}\right)\right), \\ m(n,\theta) &= \sqrt{m_5^2 + \frac{1}{R^2}\left(n - q\frac{\theta}{2\pi}\right)^2}. \end{align} For simplicity, we specialize to the case $q = 1$ and set $f = 1/(2 \pi e R)$. We choose $\frac{\theta}{2\pi} = 1/2$ so that the KK charge of the particles is $n - \frac{\theta}{2\pi}$, an odd half-integer. We take $m_5 = 0$ to study the small-action loophole. Any other value of $m_5$ will, for fixed charges $q$ and $n$, lead to a shorter vector ${\vec z}$ and thus a tighter constraint. We have a set of charge-to-mass vectors (together with their negatives, from the conjugate states) \begin{equation} {\vec z}_n = \left(\frac{1}{\left|n-\frac{1}{2}\right|} \frac{\sqrt{\gamma} M_{\rm Pl}}{2 \pi f},~ {\rm sgn}\left(n-\frac{1}{2}\right)\, g_{\rm KK}\right). \end{equation} We also have a set of charge-to-mass vectors from KK gravitons or dilatons, which are uncharged under the $U(1)$ but carry KK charge, and for an unstabilized radion will always saturate the WGC for their direction in the charge lattice: \begin{equation} {\vec z}_{{\rm grav};n} = \left(0,~{\rm sgn}(n)\, g_{\rm KK}\right).
\end{equation} All of these vectors are outside the open unit ball, so in the direction of any ${\vec z}_n$, we satisfy WGC. But of course the striking thing about these vectors is that they all have a ``$\pm g_{\rm KK}$'' in the second entry. That is, the KK charge always satisfies the WGC bound (saturating it when the radion is unstabilized), and we're at a point on the moduli space where {\em every} charged particle has KK charge due to the axion effect. \begin{figure}[!h] \begin{center} \scalebox{1.4}{ \begin{tikzpicture}[line width=1.5 pt, axis/.style={very thick, ->, >=stealth'}] \draw[axis] (-1.5,0)--(1.5,0) node(xline)[right] {$z$}; \draw[axis] (0,-1.5)--(0,1.5) node(xline)[right] {$z_{\rm KK}$}; \draw[dashed] (0,0) circle (1.0); \filldraw[orange] (0,1.0) circle (0.04); \filldraw[orange] (0,-1.0) circle (0.04); \filldraw[blue] (1.1-0.015,1.0-0.015) rectangle (1.1+0.015,1.0+0.015); \filldraw[blue] (-1.1-0.015,1.0-0.015) rectangle (-1.1+0.015,1.0+0.015); \filldraw[blue] (1.1-0.015,-1.0-0.015) rectangle (1.1+0.015,-1.0+0.015); \filldraw[blue] (-1.1-0.015,-1.0-0.015) rectangle (-1.1+0.015,-1.0+0.015); \filldraw[blue] (0.367-0.015,1.0-0.015) rectangle (0.367+0.015,1.0+0.015); \filldraw[blue] (-0.367-0.015,1.0-0.015) rectangle (-0.367+0.015,1.0+0.015); \filldraw[blue] (0.367-0.015,-1.0-0.015) rectangle (0.367+0.015,-1.0+0.015); \filldraw[blue] (-0.367-0.015,-1.0-0.015) rectangle (-0.367+0.015,-1.0+0.015); \filldraw[blue] (0.22-0.015,1.0-0.015) rectangle (0.22+0.015,1.0+0.015); \filldraw[blue] (-0.22-0.015,1.0-0.015) rectangle (-0.22+0.015,1.0+0.015); \filldraw[blue] (0.22-0.015,-1.0-0.015) rectangle (0.22+0.015,-1.0+0.015); \filldraw[blue] (-0.22-0.015,-1.0-0.015) rectangle (-0.22+0.015,-1.0+0.015); \filldraw[blue] (0.157-0.015,1.0-0.015) rectangle (0.157+0.015,1.0+0.015); \filldraw[blue] (-0.157-0.015,1.0-0.015) rectangle (-0.157+0.015,1.0+0.015); \filldraw[blue] (0.157-0.015,-1.0-0.015) rectangle (0.157+0.015,-1.0+0.015); \filldraw[blue] (-0.157-0.015,-1.0-0.015) rectangle (-0.157+0.015,-1.0+0.015); \node at (1.5,1.2) {\tiny $(z_1, 1)$}; \end{tikzpicture} \end{center} \caption{How the XWGC closes the small-action loophole when $\theta = \pi$. We depict the case $g_{\rm KK} = 1$ for convenience. The horizontal axis is the charge-to-mass ratio $z$ for the $U(1)$ gauge group giving rise to the axion. The vertical axis is the charge-to-mass ratio for Kaluza-Klein charge. The points on the vertical axis (orange circles) correspond to graviton KK modes. The points off the axis (blue squares) correspond to charged particle KK modes, which as $n \to \infty$ accumulate near the orange points. We see that the convex hull condition demands that the horizontal coordinate $z_1$ at $n = 1$ be $\geq 1$, leading to the bound $f < \frac{\sqrt{\gamma}}{\pi} M_{\rm Pl}$.} \label{fig:xwgcsmallaction} \end{figure} The convex hull requirement for these vectors is depicted in figure \ref{fig:xwgcsmallaction}. We see that the requirement that the unit circle is contained in the convex hull imposes $z_1 \geq 1$, i.e. \begin{equation} f < \frac{\sqrt{\gamma}}{\pi} M_{\rm Pl}. \end{equation} If this requirement is not satisfied, a black hole with charge vector $(Q, 0)$ under the two $U(1)$ gauge groups will not be able to decay. Notice that we can easily construct such charge vectors from combinations of KK modes with opposite signs of the KK number. Thus, we see that the XWGC (in its convex hull formulation) closes the small-action loophole for a single axion, despite the subtleties introduced by the KK $U(1)$. 
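To spell out the convex hull step in figure \ref{fig:xwgcsmallaction}: the spectrum contains the points $(z_1, \pm g_{\rm KK})$, so the hull reaches their average \begin{equation} \frac{1}{2}\left(z_1,\, g_{\rm KK}\right) + \frac{1}{2}\left(z_1,\, -g_{\rm KK}\right) = \left(z_1,\, 0\right), \qquad z_1 = \frac{\sqrt{\gamma}\, M_{\rm Pl}}{\pi f}, \end{equation} on the horizontal axis, while no vector in the spectrum has a larger horizontal component. The purely electric direction $(1,0)$ therefore lies inside the hull precisely when $z_1 \geq 1$, which reproduces the bound $f < \sqrt{\gamma}\, M_{\rm Pl}/\pi$ quoted above.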
It is clear from figure \ref{fig:xwgcsmallaction} that, because the value of $z_{\rm KK}$ is the same for every state, we can simply project the problem down to the lower-dimensional problem of considering only $z$. This will continue to be true in a scenario where we consider multiple $U(1)$ gauge bosons: a theory of $n$ gauge bosons in 5d gives rise to an ($n+1$)-dimensional convex hull problem in 4 dimensions, but so long as we go to a point on the moduli space where every charged particle in the theory has a common value (up to a sign) for $z_{\rm KK}$, the added KK direction can be projected out of the argument. We will exploit this below in discussing a scenario with two $U(1)$ groups in 5d. Our main motivation for studying the SECC and XWGC is to apply them to a version of alignment inflation which the usual WGC is not strong enough to exclude. We will consider this scenario in the next section. \subsection{The Lattice Weak Gravity Conjecture}\label{sec:LWGC} In \cite{Heidenreich:2015nta}, we introduced another form of the WGC:\\ \noindent {\bf The Lattice Weak Gravity Conjecture (LWGC)}: At every point in the charge lattice, there exists a state with that charge that satisfies the WGC.\\ \noindent This conjecture is obeyed, for instance, by string states of the $SO(32)$ heterotic string as well as by the Kaluza-Klein reduction of pure gravity on a torus. At first, this conjecture appears to open a loophole in our arguments against $N$-flation in section \ref{sec:multiaxion}. If, for every point on the charge lattice, there is an instanton whose charge-to-action vector lies on or outside the unit ball, then the leading instantons can lie on or just outside the unit ball, implying actions which are larger by a factor of $\sqrt{N}$ relative to the case where only $N$ instantons satisfy the convex hull condition. However, the large number of subleading instantons required by the LWGC is sufficient to spoil the flatness of the potential. In particular, consider a theory with $N$ axions that just marginally satisfies the LWGC in every possible direction. This implies the existence of an infinite tower of instantons, with one for every possible integral charge $\vec{Q} = (Q_1,...,Q_N)$. Further, setting the actions of the leading instantons to be $\mathcal{O}(1)$, and setting each of the decay constants to be Planckian, $f_i \sim M_{\rm Pl}$, we have \begin{equation} f_{\rm eff} \sim \sqrt{N} M_{\rm Pl}. \end{equation} The convex hull condition is satisfied because of the infinite number of subleading instantons, which densely fill the unit sphere. A necessary condition for this inflationary model is that the subleading instanton actions must scale with the instanton charges as \begin{equation} S_{\vec{Q}}^2 \sim \sum_i Q_i^2. \label{quadrature} \end{equation} If, on the other hand, the actions were to grow linearly with the charges, $S_{\vec{Q}} \sim \sum_i |Q_i|$, then the charge-to-action vectors of the subleading instantons would not densely fill the unit sphere but would instead fill out the boundary of a cross-polytope whose vertices lie along the charge axes at the same distance from the origin as the leading instantons. This region would not contain the unit ball (its inscribed sphere is smaller by a factor of $\sqrt{N}$), so the convex hull condition would be violated. However, the particular growth of the instanton actions in (\ref{quadrature}) that allows this scenario to satisfy the LWGC is also what leads to its downfall. The instanton actions are smaller in this scenario than in a model in which the instanton actions grow linearly, which means their contributions to the potential are larger.
We will now show that this enhancement yields large corrections to the inflationary potential, making it unsuitable for inflation. For this, we need only consider the special class of instantons with charges $(\pm 1, \ldots, \pm 1)$. Of course, there are many more instantons that will contribute to the inflationary potential, but for our purposes it suffices to show that just the instantons in this special class combine to give large contributions. There are $2^N$ such instantons, each with action $S \approx \sqrt{N}\, S_{\rm leading}$, where $S_{\rm leading}$ is the action of the leading instantons. The $N$-flation potential generated by the leading instantons is of the form \begin{equation} V(\phi_i) \supset \mathcal{A} e^{-S_{\rm leading}} \sum_i \cos (\phi_i/f). \label{eq:leadingpotential} \end{equation} The subleading instantons under consideration, on the other hand, give a potential of the form \begin{equation} V(\phi_i) \supset \mathcal{A} e^{-\sqrt{N} S_{\rm leading}} \cos (\sum_i \eta_i \phi_i/f). \end{equation} Here $\eta_i = \pm 1$ depending on which instanton is being considered. We consider inflation along the diagonal direction $\phi_i = \phi$. A necessary condition for inflation is that the potential contributions from these instantons be negligible compared to those from the leading instantons. However, there are $2^N$ instantons of the form in question, and each one introduces a potential contribution of magnitude $e^{-\sqrt{N} S_{\rm leading}}$. Thus, the total potential contribution from these instantons grows roughly as $2^N e^{-\sqrt{N} S_{\rm leading}}$. To protect the $N$-flation potential of (\ref{eq:leadingpotential}) from these subleading effects, we must demand $2^N e^{-\sqrt{N} S_{\rm leading}} \lesssim e^{-S_{\rm leading}}$, which at large $N$ requires $S_{\rm leading} \gtrsim \sqrt{N}$. However, the LWGC implies $f \lesssim M_{\rm Pl}/S_{\rm leading}$, so $f_{\rm eff} \sim \sqrt{N} f \lesssim M_{\rm Pl}$. Hence, parametric enhancement of the effective decay constant via isotropic $N$-flation is inconsistent with the LWGC. The LWGC also restricts models of decay constant alignment, which we discuss in the following section. \section{Decay constant alignment} \label{sec:decayconstantalignment} The arguments we have given in section \ref{sec:multiaxion} break down when the charge assignments of the dominant instanton effects are not small integers in the same basis in which the magnetic monopole charges satisfying the magnetic WGC are. This case is especially subtle. For concreteness, we will focus on the two-axion model of \cite{delaFuente:2014aca}. We assume a basis in which two compact axion fields, $\theta_A$ and $\theta_B$, have diagonal kinetic terms with decay constants $f_A$ and $f_B$: \begin{equation} {\cal L}_{\rm kin} = \frac{1}{2} f_A^2 \partial_\mu \theta_A \partial^\mu \theta_A + \frac{1}{2} f_B^2 \partial_\mu \theta_B \partial^\mu \theta_B. \end{equation} We further assume that in this basis the magnetic monopoles satisfying the WGC have charge assignments $(1,0)$ and $(0,1)$, leading to the constraints \begin{eqnarray} f_A, f_B \stackrel{<}{{}_\sim} M_{\rm Pl}. \end{eqnarray} However, we assume that the dominant instanton effects arise from electric charges $(1,0)$ and $(N,1)$ that are highly aligned in this basis. As a result, the potential behaves as \begin{eqnarray} V(\theta_A, \theta_B) \approx V_0 \left(1-\cos(\theta_A)\right) + {\tilde V}_0 \left(1 - \cos(N \theta_A + \theta_B)\right).
\end{eqnarray} As emphasized in \cite{delaFuente:2014aca}, this provides a UV setting for the alignment mechanism of Kim, Nilles, and Peloso \cite{Kim:2004rp}, which appears to satisfy the WGC constraint. Inflation occurs on a trajectory for which $\theta_A$ winds once around the circle while $\theta_B$ winds $N$ times, leading to an effective decay constant \begin{eqnarray} f_{\rm eff} \approx N f_B. \end{eqnarray} This inflationary trajectory is illustrated in figure \ref{fig:axiondomain}. The instantons can be generated from worldlines of charged particles, but \cite{delaFuente:2014aca} also discusses a scenario in which the factor of $N$ can be the level of a Chern-Simons coupling $A \wedge G^a \wedge G^a$ in the 5d theory, potentially arising from a quantized flux in an even higher-dimensional theory.
\begin{figure}[!h]
\begin{center}
\scalebox{0.85}{
\begin{tikzpicture}[line width=1.5 pt, axis/.style={very thick, ->, >=stealth'}]
\draw[fill=blue, opacity=0.2] (0,-0.75)--(0,0.75)--(-1.5,8.25)--(-1.5,6.75)--cycle;
\draw[axis] (-3,0)--(3,0) node(xline)[right] {$\theta_A$};
\draw[axis] (0,-0.75)--(0,8.0) node(xline)[right] {$\theta_B$};
\draw[blue,very thick, ->, >=stealth'] (-0.75,3.75)--(0,0);
\node at (1.5,0) {|}; \node at (1.5,-0.5) {$2\pi$};
\node at (-1.5,0) {|}; \node at (-1.5,-0.5) {$-2\pi$};
\node at (0,1.5) {$-$}; \node at (0.5,1.5) {$2\pi$};
\node at (0,3.0) {$-$}; \node at (0.5,3.0) {$4\pi$};
\node at (0,4.5) {$-$}; \node at (0.5,4.5) {$6\pi$};
\node at (0,6.0) {$-$}; \node at (0.5,6.0) {$8\pi$};
\node at (0,7.5) {$-$}; \node at (0.5,7.5) {$10\pi$};
\node[red] at (-1.05,4.5) {$\circ$};
\node[red] at (-1.05,6.0) {$\circ$};
\node[purple] at (0.0,-0.5) {$\times$};
\node[purple] at (-1.5,7.0) {$\times$};
\begin{scope}[shift={(6.5,1.0)}]
\draw[fill=blue, opacity=0.2] (0,0)--(3,0)--(3,3)--(0,3)--cycle;
\draw[blue,very thick, ->, >=stealth'] (2.4,3)--(3,0);
\draw[blue,very thick, ->, >=stealth'] (1.8,3)--(2.4,0);
\draw[blue,very thick, ->, >=stealth'] (1.5,1.5)--(1.8,0);
\draw[axis] (-1,0)--(5,0) node(xline)[right] {$\theta_A$};
\draw[axis] (0,-1)--(0,5) node(xline)[right] {$\theta_B$};
\node at (3,0) {|}; \node at (3,-0.5) {$2\pi$};
\node at (0,3.0) {$-$}; \node at (-0.5,3.0) {$2\pi$};
\end{scope}
\end{tikzpicture}}
\end{center}
\caption{Two views of the fundamental domain (shaded) of the two axions for the case $N = 5$, together with a trajectory (thick blue arrow) beginning at a maximum of the potential and ending at the origin. For clarity, the views on the left and right are not drawn to the same scale. The right-hand view is a ``natural'' parametrization with $0 \leq \theta_{A,B} \leq 2\pi$ but requires that we discontinuously change the value of $\theta$ and execute a corresponding monodromy on the Kaluza--Klein spectrum when wrapping around the torus. The left-hand view chooses a parametrization in which the values of $\theta_{A,B}$ and the Kaluza-Klein masses change smoothly during all of inflation.
To illustrate the periodic identifications imposed on the boundaries, we show two points labeled with a red $\color{red} \circ$ that are identified and two points labeled with a purple $\color{purple} \times$ that are identified.} \label{fig:axiondomain} \end{figure} \subsection{Parametrizing the fundamental domain} \label{subsec:fundamentaldomain} If we consider the axion fields to range over $0 \leq \theta_A, \theta_B \leq 2\pi$, as depicted in the right-hand side of figure \ref{fig:axiondomain}, then the periodic identification $\theta_A \to \theta_A + 2\pi$ that wraps the left-hand side of the square onto the right-hand side shifts the mass of a state with charge $(N,1)$ by $N/R$, suggesting the possibility that a cutoff near $1/R$ is too low for consistency. On the other hand, the inflaton trajectory winds multiple times, and we can think of this large mass shift as a consequence of the monodromies induced every time we wrap around the torus and shift our coordinate $\theta$ discontinuously. A more useful parametrization of the moduli space is to ``unwind'' it so that it is aligned with the inflaton trajectory, as in the left panel of figure \ref{fig:axiondomain}. In this case, moving off the right edge of the space wraps back to the left edge at a lower point, corresponding to $\theta_B \to \theta_B - 2\pi$ with $\theta_A$ unchanged. The identification of the upper left edge with the bottom right edge corresponds to $(-2\pi,\theta_B) \to (0, \theta_B - 2 N \pi)$. This is a large change in $\theta_B$, but it leaves $N\theta_A + \theta_B$ fixed, so for a particle of charge $(N,1)$ there is no change in the mass spectrum for this transformation. Thus, if the only particles in our effective theory have charges $(1,0)$ and $(N,1)$, the SECC imposes no obstacle to taking $\Lambda \sim 1/R$. The masses of the Kaluza--Klein modes of these fields are at most of order $1/R$ throughout the moduli space. This illustrates a general fact: the physical criterion that we would like to impose is that there is a single effective theory that is valid everywhere on the moduli space. Some parametrizations of the moduli space might obscure the existence of this theory, while others make it manifest. \subsection{Case 1: dominant instantons satisfy electric WGC}\label{subsec:case1xwgc} The first case we consider is that the same charges $(1,0)$ and $(N,1)$ that control the axion potential also are responsible for satisfying the electric WGC. The argument of \cite{delaFuente:2014aca}, emphasized to us by the authors \cite{Prashant}, is that once the magnetic WGC constraint that $f_A, f_B \stackrel{<}{{}_\sim} M_{\rm Pl}$ is imposed, there is no further WGC constraint. The charged particles can be arbitrarily light, and direct calculation confirms that higher instanton contributions are numerically small. However, this changes if we impose our stronger XWGC conjecture from section \ref{sec:kkimplications}. The potential has a local maximum where the arguments of both cosines are $\pi$, i.e. $N \theta_A + \theta_B = \pi$ and $\theta_A = \pi$. At this point, the lightest KK modes for both charged particles have mass $1/(2 R)$ (assuming that $m_5 = 0$). As in the XWGC discussion above, there is also a third, Kaluza-Klein, $U(1)$ gauge group to consider, but at this point in moduli space every charged particle in the theory has $z_{\rm KK} = \pm g_{\rm KK}$, so the third dimension can be projected out of the argument. 
Thus the XWGC tells us that, assuming we started with the best-case scenario where the charged particles have negligible 5d masses, the charge-to-mass vectors at the maximum of the potential (and indeed at any generic point in moduli space) are of order \begin{eqnarray} {\vec z}_1 & = & \left(\frac{\sqrt{2} e_A M_{\rm Pl}}{m_1}, 0\right) \sim \left(e_A R M_{\rm Pl}, 0\right) \sim \left(\frac{M_{\rm Pl}}{f_A}, 0\right), \nonumber \\ {\vec z}_2 & = & \left(\frac{\sqrt{2} N e_A M_{\rm Pl}}{m_2}, \frac{\sqrt{2} e_B M_{\rm Pl}}{m_2}\right) \sim \left(\frac{N M_{\rm Pl}}{f_A}, \frac{M_{\rm Pl}}{f_B}\right). \end{eqnarray} Now we require that the convex hull of the vectors $\pm {\vec z}_1, \pm {\vec z}_2$ contains the unit sphere. The line passing through the points $(-\alpha, 0)$ and $(N\alpha, \beta)$ in the $(x,y)$ plane is $(N+1) \alpha y = \beta (\alpha + x)$, so, reading off where this crosses the $y$-axis, at $y = \beta/(N+1)$, and requiring that this intercept lie at or outside the unit circle, we obtain the constraint $\beta \geq N+1$. Substituting the vectors we're interested in, \begin{eqnarray} \frac{M_{\rm Pl}}{f_B} \stackrel{>}{{}_\sim} N+1~~\Rightarrow~~f_{\rm eff} = N f_B \stackrel{<}{{}_\sim} M_{\rm Pl}. \end{eqnarray} This shows that our conjecture that the electric WGC should be satisfied at all stationary points in the axion moduli space is strong enough to exclude decay constant alignment in the scenario where the same particles are responsible for both satisfying the WGC and supplying the dominant instanton effects. The LWGC postulates the existence of an instanton satisfying the WGC at every point on the charge lattice. If this version of the WGC is true, we must therefore consider models with additional instantons. We now turn our attention to these models.

\subsection{Case 2: additional particles satisfy electric WGC}\label{subsec:case2secc}

On the other hand, we could consider a different scenario. The reason we obtained such a strong bound on $f_B$ from the convex hull condition is that its charge-to-mass vector was nearly aligned with that of $f_A$. This was necessary to obtain a large field range in inflation, but what if we satisfy the convex hull condition (and hence the electric WGC) with {\em different} vectors than the ones that dominate the instanton contributions? This possibility was, again, suggested to us by the authors of \cite{delaFuente:2014aca}. Suppose that we have three relevant charges, $(1,0), (N,1),$ and $(0,1)$. The first two supply the dominant instanton contributions, while the first and third satisfy the WGC. We can take the 5d mass of the $(0,1)$ particle to be somewhat large compared to $1/R$, so that its instanton contribution is suppressed, but not parametrically large by a factor of $N$, so that we do not shrink its charge-to-mass vector by enough to obtain the desired bound on $f_B$. This is the point at which the SECC becomes crucial. We can consider which modes are present in our effective theory as we move around the moduli space. The particles with charge $(q_A, q_B)$ have their masses shifted by $\frac{1}{2\pi R}\left(q_A \theta_A + q_B \theta_B\right)$. As discussed in section \ref{subsec:fundamentaldomain}, in order to adiabatically track the mass of a particular mode, we should work in the ``unwound'' moduli space where the inflaton trajectory is continuous, as in the left-hand panel of figure \ref{fig:axiondomain}. In this fundamental domain neither $\theta_A$ nor $N \theta_A + \theta_B$ is parametrically large, but $\theta_B$ itself is.
Precisely because the inflaton direction winds around the $\theta_B$ circle multiple times, when we unwind the moduli space we find that $0 \leq \theta_B \leq 2\pi N$. Thus, the new particle of charge $(0,1)$ that we invoked to satisfy the electric WGC without running into difficulty with the convex hull condition is not part of a single consistent effective field theory defined over the entire axion moduli space unless we satisfy the bound \begin{eqnarray} 2\pi R \Lambda \stackrel{>}{{}_\sim} 2 \pi N. \end{eqnarray} Again, this is the precise parametric bound that we need to obtain $f_{\rm eff} \stackrel{<}{{}_\sim} M_{\rm Pl}$. In fact, in this particular case we do not even have to invoke the full SECC. We only need to require a consistent set of modes {\em along the inflaton trajectory}. This leaves open the possibility that this weaker requirement and the XWGC are sufficient assumptions, without the full SECC. Notice that this argument, unlike many of our previous arguments, relies on the EWGC: if the state of charge $(0,1)$ which satisfies the convex hull condition is never in the effective theory to begin with, then the SECC does not further constrain the model. However, so long as there is \emph{some} effective field theory description of this state, the SECC will rule out the model, whereas theories which violate the EWGC may present other control problems, as explained in the introduction.

\subsection{Status of decay constant alignment}

To summarize, we have excluded the particular example of decay constant alignment with charges $(1,0)$ and $(N,1)$ using the two conjectured constraints from section \ref{sec:kkmonodromy}. The XWGC is crucial for avoiding the small-action loophole in which we use the same light fields to generate the dominant instanton effects and to satisfy the electric WGC. The SECC is needed for the case when the particles generating the instanton effects and satisfying the electric WGC are not the same. In both cases, the parametric constraint that we extract is {\em precisely} what is necessary to obtain a maximum field range of $M_{\rm Pl}$. This is, at the least, very suggestive. It calls for further effort to understand how strongly motivated our conjectured constraints are. Again, the possible counterargument to the XWGC is that stationary points in the moduli space that are not minima have a finite lifetime before tunneling to a lower point on the potential, so arguments based on concerns about exactly stable remnants are not directly relevant. For the SECC, the concern is that despite the inconsistency of a single effective field theory, an ultraviolet completion may somehow allow a patchwork theory to be constructed. We do not have definitive rebuttals to these possible counterarguments, but our constraints appear plausible and well-motivated to us, and it is very interesting that the XWGC and the SECC precisely exclude a model that otherwise appears to sail through the WGC's tests with flying colors. Throughout this section we have referred to charged particles of charge $(N,1)$. As explained in \cite{delaFuente:2014aca}, it may be possible to generate instanton effects through other means, such as Chern-Simons couplings to nonabelian gauge theories that confine. This will not affect our arguments. Some set of charged particles must exist to satisfy the electric WGC, and even if they do not contribute significant instanton effects, their charge assignments still lead to uncontrolled large gauge transformations at the boundaries of moduli space.
We have not attempted to derive a bound in detail for arbitrary charge assignments or arbitrary numbers of axions, but by putting together the ideas of this section with those we employed in section \ref{sec:multiaxion}, we expect that any model based on compact axion fields, without some crucially new physics input, can be excluded.

\section{Do Entropy Bounds Exclude Large-Field Inflation?} \label{sec:entropybounds}

One constraint widely believed to apply in any theory of quantum gravity is an entropy bound: loosely speaking, the statement that the logarithm of the number of microstates accessible to a system bounded by a surface of area $A$ is at most of order $A M_{\rm Pl}^2$. Such conjectures originated with Bekenstein~\cite{Bekenstein:1980jp} and were given a sharp covariant form by Bousso~\cite{Bousso:1999xy}. Entanglement entropy in quantum field theory~\cite{Srednicki:1993im,Casini:2009sr} has been shown to satisfy such a bound~\cite{Casini:2008cr,Bousso:2014sda,Bousso:2014uxa}, suggesting that it will be difficult to place theories in the swampland simply by arguing that they violate an entropy bound. On the other hand, several authors have argued for precisely such statements regarding large-field inflation models~\cite{Kaloper:1999tt,Conlon:2012tz,Boubekeur:2013kga}, including recently in the context of the Weak Gravity Conjecture~\cite{Brown:2015iha}. Our goal in this section is to critically review these arguments. We find that they rely on unjustified assumptions. Our assessment is that models of large-field inflation are consistent with entropy bounds.

\subsection{The reheating objection}

An argument against theories with a large number of $e$-folds has been given based on entropy production during reheating~\cite{Kaloper:1999tt,Brown:2015iha}. Essentially, the objection is that to the extent that we can trust the semiclassical picture of the post-reheating universe as a radiation-dominated phase, it will contain a hot plasma with entropy per unit volume $s \propto T^3$. Because the entropy scales with volume and the entropy bound scales with area, it seems there is a potential conflict. But if we consider a radiation-dominated universe, \begin{eqnarray} \rho \sim H^2 M_{\rm Pl}^2 \sim T^4, \end{eqnarray} so in a volume of radius $R \sim H^{-1}$, the entropy associated with the radiation is \begin{eqnarray} S \sim T^3 R^3 \sim \frac{1}{T} H^2 M_{\rm Pl}^2 H^{-3} \sim \frac{M_{\rm Pl}^2}{H^2} \frac{H}{T} \sim S_{\rm Bek} \frac{T}{M_{\rm Pl}}, \end{eqnarray} where the last step uses $H \sim T^2/M_{\rm Pl}$. Thus, the Bekenstein bound is safely satisfied in a Hubble-size volume provided the temperature is much less than the Planck energy, which is surely true whenever effective field theory is valid. The potential to derive a contradiction between inflation with many $e$-folds and entropy bounds arises from considering volumes of radius $R \gg H^{-1}$. Ref.~\cite{Kaloper:1999tt} framed the problem in terms of the particle horizon, which grows exponentially during inflation, so the ratio of volume to area becomes much larger than for a Hubble-sized volume. The uncertainty about what region to apply entropy bounds to was a major motivating reason for Bousso's covariant entropy conjecture~\cite{Bousso:1999xy}, which had a significant impact because it gave a precise prescription for which such problems do not arise. Given that substantial evidence has accumulated that Bousso's version of the entropy bound is the correct criterion, it does not appear that reheating after inflation is problematic even in theories with many $e$-folds.
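To attach an illustrative number to the Hubble-volume estimate above (the reheating temperature chosen here is ours, purely for definiteness): even for a temperature as high as $T \sim 10^{15}$ GeV, \begin{eqnarray} \frac{S}{S_{\rm Bek}} \sim \frac{T}{M_{\rm Pl}} \sim \frac{10^{15}~{\rm GeV}}{2\times 10^{18}~{\rm GeV}} \sim 10^{-3}, \end{eqnarray} so the radiation entropy in a Hubble-sized volume falls short of the bound by orders of magnitude.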
\subsection{Bit-counting arguments} \label{sec:bitcounting}

Another argument that axions with super-Planckian decay constants can violate entropy bounds is based on bit counting. We impose an ultraviolet cutoff on the theory by defining a minimum distance scale $\ell$ and count pixels of area $\ell^2$ on the horizon. There are $\sim 1/(\ell^2 H^2)$ such pixels. This exceeds the bound if $\ell < M_{\rm Pl}^{-1}$. If we keep the UV cutoff of our theory below the Planck scale, there should be no problem. The UV cutoff in refs.~\cite{Conlon:2012tz,Boubekeur:2013kga} was taken to be $\ell \sim f_a^{-1}$. The argument was that the free-particle two-point function of our axion field, $\left<a(x) a(0)\right> \sim \frac{1}{x^2}$, can no longer be a valid estimate at $x \stackrel{<}{{}_\sim} f_a^{-1}$, because it grows without bound even though the field itself is compact and satisfies $a(x) < 2 \pi f_a$. This is a reasonable argument. We could refine it a bit by noting that the two-point function is not physical because $a(x)$ is not gauge invariant, but $e^{i a(x)/f_a}$ is, so we can build a Lagrangian out of the latter and expand to find non-renormalizable interactions suppressed by the scale $f_a$, which fixes the cutoff. Although the argument that compactness of the field space implies a UV cutoff is correct, it only imposes an {\em inequality}, $\ell > f_a^{-1}$. A contradiction with entropy bounds arises only if we take this to be an equality. But of course, in any theory of quantum gravity we expect it will not be sensible to talk about distances below the Planck length, so we should never take $\ell < M_{\rm Pl}^{-1}$. Nothing in this argument precludes the possibility of theories with a range of scales $f_a \gg M_{\rm Pl} \gg \ell^{-1}$.

\subsection{The classical entropy current argument}

A further argument considered the inflaton as a perfect fluid and constructed the entropy density from its stress energy tensor~\cite{Boubekeur:2013kga}. This leads to the equation \begin{eqnarray} {\dot S}_\phi = \frac{8\pi^2}{H^3} {\dot \phi}^2. \end{eqnarray} Ref.~\cite{Boubekeur:2013kga} then gives an argument with the following logical structure: $\Delta S_\phi$ is computed by integrating the above derivative. The expression for $\Delta S_\phi$ is broken into two pieces and bounded by considering the absolute value of these two pieces. This establishes that $\Delta S_\phi < S_{\rm max} \sim f_a^2/H^2$. It is then observed that $S_{\rm max} > S_{\rm Bek}$ for large-field inflation. But, without some estimate of {\em how close} $\Delta S_\phi$ actually comes to $S_{\rm max}$, this proves nothing. In fact, we can obtain a much better estimate of $\Delta S_\phi$. For slow-roll inflation we have $\dot \phi \approx -\frac{1}{3H} \frac{\partial V}{\partial \phi}$, so we can rewrite the integral as: \begin{eqnarray} \Delta S_\phi = \int dt \frac{8\pi^2}{H^3} {\dot \phi} \left(-\frac{1}{3H} \frac{\partial V}{\partial \phi}\right) \approx -\frac{8 \pi^2}{3 H^4} \int d\phi \frac{\partial V}{\partial \phi} \approx \frac{8 \pi^2}{3 H^4} \left|\Delta V\right|, \end{eqnarray} with the approximation that $H$ is relatively constant during the time period considered. Given that during inflation $V \approx 3 H^2 M_{\rm Pl}^2$, this is exactly consistent with the Gibbons-Hawking entropy of a Hubble patch of de Sitter space~\cite{Gibbons:1977mu}, so no bound is violated.
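To make the comparison explicit (assuming the reduced-Planck-mass conventions implicit in $V \approx 3 H^2 M_{\rm Pl}^2$, for which the Gibbons-Hawking entropy is $S_{\rm GH} = 8\pi^2 M_{\rm Pl}^2/H^2$), substituting $|\Delta V| \lesssim V$ gives \begin{eqnarray} \Delta S_\phi \approx \frac{8 \pi^2}{3 H^4} \left|\Delta V\right| \lesssim \frac{8 \pi^2}{3 H^4} \left(3 H^2 M_{\rm Pl}^2\right) = \frac{8 \pi^2 M_{\rm Pl}^2}{H^2} = S_{\rm GH}. \end{eqnarray}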
In fact, a detailed study of de Sitter thermodynamics in the context of inflation, including perturbations, was undertaken some time ago by Frolov and Kofman, who found no inconsistencies~\cite{Frolov:2002va}.

\section{Assessing strong conjectures} \label{sec:strongforms}

The original weak gravity paper \cite{ArkaniHamed:2006dz} posited a ``strong form'' of the WGC, which stipulates that in theories with a single $U(1)$, the \emph{lightest} charged particle in the spectrum, and not just any charged particle, should have a charge-to-mass ratio larger than that of an extremal black hole. As pointed out in \cite{delaFuente:2014aca,Prashant} and later discussed in \cite{Rudelius:2015xta, Brown:2015iha, Bachlechner:2015qja, Hebecker:2015rya, Brown:2015lia}, there is a loophole in the electric WGC which leaves open the possibility of axion inflation, but which would be closed by this strong form of the WGC. In particular, consider a theory with a single axion $a$ and two instantons of action $S_1 \ll S_2$ and associated decay constants $f_1$, $f_2$, respectively. Each instanton will introduce a term in the axion potential of the form, \begin{eqnarray} V(a) \supset \mathcal{A}_i e^{-S_i} \cos a/f_i, \label{Veq} \end{eqnarray} with $\mathcal{A}_i$ some coefficients. Now, as long as $f_2 S_2 < M_{\rm Pl}$, the ordinary, mild form of the WGC will be satisfied. $f_1$ is left unbounded, and since $S_1 \ll S_2$, the potential contributions from the first instanton will dominate those from the second. On the other hand, the strong form of the WGC requires that the instanton of smaller action ($S_1$ in our scenario) must also satisfy the bound $f_1 S_1 < M_{\rm Pl}$. Thus, if we assume $S_1 > 1$, we find that $f_1$ is constrained to be sub-Planckian, and the axion is unsuitable for inflation. However, there are a few problems with invoking the ``strong form'' of the WGC to close such loopholes. First off, straightforwardly generalizing the strong form to theories with multiple $U(1)$s proves problematic, as it implies constraints on the spectrum that are clearly far too strong. To see why, consider a very simple theory with two $U(1)$s and two particles of mass $m_1$, $m_2$ with charges $(q_1,0)$ and $(0,q_2)$. If one considers either of the $U(1)$'s in this basis, the na\"ive ``strong form of the WGC'' holds that the lightest particle charged under each $U(1)$ should have $q/m > 1/M_{\rm Pl}$. However, suppose we now make a very small basis rotation of our $U(1)$s, so that particles 1 and 2 now have charges $$(q_1+ \mathcal{O}(\varepsilon^2),q_1 \varepsilon + \mathcal{O}(\varepsilon^2))\,,~~(-q_2 \varepsilon+ \mathcal{O}(\varepsilon^2), q_2 + \mathcal{O}(\varepsilon^2)),$$ respectively. In this new basis, the statement that the lightest particle charged under each $U(1)$ should have sufficiently large charge-to-mass ratio is problematic: if $m_1 < m_2$, then by taking $\varepsilon$ small enough, we can ensure that $q_1 \varepsilon / m_1$ is too small to satisfy the bound. If $m_2 < m_1 $, then we can do the same with $q_2 \varepsilon / m_2$. The only way this conjecture could hold is if $m_1 = m_2$, i.e., if every particle in the spectrum has precisely the same mass. This is clearly unacceptable. This problem may be remedied. Namely, we may define the ``strong form'' of the WGC for theories with $N$ $U(1)$s to be the statement that the lightest particles whose charge-to-mass vectors span the full $\mathbb{R}^N$ should satisfy the convex hull condition.
It is easy to check that in a theory with a single $U(1)$, this definition of the strong WGC reduces to the usual one. Furthermore, the $0$-form generalization of this strong form would indeed place strong restrictions on axion moduli space diameters and close the aforementioned loophole. However, even if we use this updated $N$-species strong form of the conjecture, there are other problems with invoking the strong WGC to rule out axion inflation. To begin with, it does not rule out a closely related loophole achieved by taking $\mathcal{A}_1 \ll \mathcal{A}_2$ in (\ref{Veq}). In this case, one could take $f_2$ arbitrarily large and $S_1 \lesssim S_2$ and still satisfy the strong WGC. As long as $\mathcal{A}_1 e^{-S_1} \cos{a/f_1}$ is sufficiently smaller than $\mathcal{A}_2 e^{-S_2} \cos{a/f_2}$, the potential will be dominated by the latter term. Secondly, it does not close the small action loophole discussed previously, in which the instanton actions are taken smaller than $1$. This limit is difficult to arrange in a controlled string compactification, but as we have seen, it is not such a problem in simpler extranatural scenarios. Most important, however, is the fact that the strong form of the WGC does not derive from arguments based on either effective field theory or black hole thermodynamics. Though further developments could change the situation, we currently see no compelling reason to believe that the WGC holds in its stronger form.\footnote{See however \cite{Heidenreich:2015nta} for a discussion of the ``lattice WGC'' (LWGC), a candidate strong form which avoids some of these pitfalls and can be motivated by consistency considerations and string theory examples.}

\section{Conclusions} \label{sec:conclude}

We have argued that the original magnetic form of the Weak Gravity Conjecture and the UV cutoff that it implies, appropriately generalized to multiple $U(1)$ gauge fields, exclude a variety of $N$-flation models including models of kinetic alignment. We summarize the claimed constraints, and the assumptions on which they rely, in Table \ref{tab:wgcresults}. The theories that are excluded in this way have in common the feature that the magnetic charges satisfying the magnetic WGC and the electric charges leading to the dominant instanton effects are simple (not parametrically large and aligned) in the same basis. We believe that these arguments are robust. They can possibly be evaded by considering a theory with a cancelation or tuning in the monopole masses (so that the monopoles are much lighter than the semiclassical self-energy estimate). The only other potential way out is if the compactification radius could somehow be consistently taken to be much smaller than the smallest distance $\Lambda^{-1}$ for which we trust the monopole solution. Because we expect the description of a local $U(1)$ gauge theory to break down at $\Lambda$, this would require going beyond the abelian effective field theory to a more complete ultraviolet description.
\afterpage{%
\clearpage
\thispagestyle{empty}%
\begin{sidewaystable}
\centering
\begin{tabular}{l|l|l|l|l|l|l|l}
Assumptions $\backslash$ Reference: & ref.~\cite{ArkaniHamed:2006dz} & ref.~\cite{delaFuente:2014aca} & ref.~\cite{Brown:2015lia} & sec.~\ref{sec:diagonalwarmup} & sec.~\ref{sec:magwgckineticalignment} & sec.~\ref{subsec:case1xwgc} & sec.~\ref{subsec:case2secc} \\ \hline
Single axion & \cmark & \cmark & & & & & \\
$K$ diagonal & \cmark & \cmark & WLOG & \cmark & & \cmark & \cmark \\
$S_{\rm inst} > 1$ & \cmark & & \cmark & & & & \\
Instanton charges obey ElWGC & \cmark & & \cmark & & & \cmark & \xmark \\
MagWGC obeyed by simple charges & & & & \cmark & \cmark & \cmark & \cmark \\
Instantons simple, $\sum a_i \cos(\theta_i)$ & & & & \cmark & \cmark & & \\ \hline
Constrained by: & ElWGC & MagWGC & ElWGC & MagWGC & MagWGC & ElXWGC + & Single EFT + \\
 & & & & & & MagWGC & MagWGC \\
\end{tabular}
\caption{WGC constraints on (compact) axion inflation models with various assumptions. Each column is a scenario for which a constraint has been claimed; a \cmark~indicates that an assumption is made, and an \xmark~indicates that the {\em opposite} assumption is made. The entry ``WLOG'' for ``without loss of generality'' indicates that this assumption was made but, due to the lack of other related assumptions, it is a completely general basis choice. The single-axion assumption implies the $K$ diagonal assumption. The assumption that ``instanton charges obey the electric WGC'' misses the loophole where the states that satisfy the electric WGC make negligible contributions to the potential (for instance, they may have mass near the Planck scale, outside the effective theory). In theories with multiple axions the Convex Hull Condition is always assumed to be part of the definition of the appropriate WGC. Abbreviations: ElWGC = Electric Weak Gravity Conjecture, MagWGC = Magnetic Weak Gravity Conjecture, ElXWGC = Electric Extended Weak Gravity Conjecture.}
\label{tab:wgcresults}
\end{sidewaystable}
\clearpage
}

A very interesting model in which the electric charge vectors for the instantons are highly aligned in the basis where magnetic charges are simple has been previously claimed to evade the Weak Gravity Conjecture \cite{delaFuente:2014aca}. We agree that it cannot be straightforwardly ruled out by the original WGC. However, this model implies surprising features that arise from the nontrivial dependence of the masses of charged particles on the values of the axion fields. We have proposed additional conjectures that would exclude such surprises. The two assumptions are that the mass spectrum at all extrema in the moduli space should satisfy the Weak Gravity Conjecture and that a light mode that is present in some region of moduli space should be part of the effective theory throughout the entire moduli space (rather than moving above the cutoff). We have no definitive proof of these statements, but they appear to be plausible, and we find it very compelling that they precisely parametrically exclude the one scenario that otherwise evades our arguments. Further study of these conjectures, as well as possible application of them to other models like axion monodromy inflation \cite{Silverstein:2008sg,McAllister:2008hb,Berg:2009tg,Kaloper:2011jz,Kaloper:2014zba} (with non-periodic potentials, unlike all cases considered in this paper), seems to us to be the most likely avenue for progress.
We expect that the general argument sketched in section \ref{sec:monodromy} can exclude many such models, although they must be considered on a case-by-case basis to see if loopholes exist. Ultraviolet completions of other theories that apply large field ranges to the hierarchy problem \cite{Graham:2015cka} or to generating a light dilaton \cite{Contino2010,Bellazzini:2013fga,Coradeschi:2013gda} may be susceptible to similar constraints. More generally, noncompact fields (like the scalar moduli which are supersymmetric counterparts of string axions) could give rise to large-field inflation, and the prospects for constraining them with the WGC are not clear. Still, it is thought that noncompact fields in string theory are highly constrained, and that effective field theory always breaks down in the presence of super-Planckian field ranges~\cite{Vafa:2005ui,Ooguri:2006in,Douglas:2005hq}. Super-Planckian field excursions in {\em space}, rather than in time, tend to collapse into black holes~\cite{Nicolis:2008wh}, which may point in the direction of general arguments against the consistency of effective field theories of super-Planckian fields coupled to gravity~\cite{Zohar}. The Weak Gravity Conjecture has been established as a powerful tool to cull the space of theories of inflation. The possibility of future measurements of nonzero $r$ offers the hope that we can test our understanding of general properties of quantum gravity against real empirical knowledge of our universe. We hope that the study of the WGC can offer tentative steps in the direction of a phenomenology of quantum gravity.

\section*{Acknowledgments}

We thank Thomas Bachlechner, Cliff Cheung, Thomas Dumitrescu, Peng Gao, Cody Long, Liam McAllister, Grant Remmen, Prashant Saraswat, Matt Schwartz, Gary Shiu, Eva Silverstein, Raman Sundrum, and Cumrun Vafa for discussions or correspondence. MR thanks Thomas Bachlechner, Liam McAllister, and Alexander Westphal for useful discussions after the first version of the paper, and in particular for suggesting the addition of Table \ref{tab:wgcresults}. BH is supported by the Fundamental Laws Initiative of the Harvard Center for the Fundamental Laws of Nature. The work of MR is supported in part by the NSF Grant PHY-1415548. TR is supported in part by the National Science Foundation under Grant No. DGE-1144152. While preparing the revised version of this paper, MR was supported in part by the National Science Foundation under Grant No. PHYS-1066293 and the hospitality of the Aspen Center for Physics.
\section{Introduction: BH accretion, the low luminosity AGN, and X-ray emission}

Understanding the nuclear activity in nearby galaxies is essential for constraining the galaxy formation and evolution process. While energetically unimpressive, the nearby galactic nuclei offer the best available view of (1) the most common state of accretion in the current universe, (2) the end-point of quasar evolution, or simply (3) a scaled-down version of quasar activity. Observationally, the low redshift $z \approx 0$ accretion systems are the best testbeds for identifying the processes involved in triggering and further fueling accretion onto the central black hole (BH) because they offer unique joint investigations of both the nuclear accretion and the properties of the host, i.e., the star-formation (SF). Unlike the optically luminous quasars, which are radiating close to their Eddington limit \citep{kol06}, and only for short ($\sim 10^7$ yr) times \citep{yu02}, accretion activity in nearby galaxy centers appears extremely diverse, spanning $> 6$ orders of magnitude in the Eddington ratio, and (maybe consequently) a wide range in their duty-cycles \citep{hec04, ho08}. This variety provides empirical constraints on model predictions linking the BH growth rate and the host bulge formation. The two are indeed connected: it is the younger galaxies that host the more rapidly growing BHs \citep{hec04,cidfernandes05,con08}. The exact physical mechanism responsible for this link, and in general for the close interplay between SF and BH accretion, remains however elusive. Simulations place important constraints on different models for the way black holes are fueled, and provide a quantitative and physical distinction between local, low luminosity, weak (or quiescent) AGN activity, and violent, merger-driven bright quasars \citep{hopkins06, hopkins09}. Environmental studies of the nearby active galactic nuclei are consistent with these ideas of non-merger-driven fueling for the weak BH growth observed in the nearby universe \citep{con06, con08}. More recent analyses of the observed distribution of Eddington ratios as a function of the BH masses provide additional constraints, suggesting that even at $z \approx 0$ there might be two distinct regimes of BH growth, which are determined by the supply of cold gas in the host bulge. The BH regulates its own growth at a rate that is independent of the interstellar medium's characteristics as long as the gas is plentiful, but when the gas runs out the BH's growth will be regulated by the rate at which evolved stars lose their mass \citep{kauffmann09}. These different fueling modes at low luminosities must manifest differently at wavelengths outside the optical regime, allowing further means to constrain and discriminate among them. The most ubiquitous type of activity at $z \approx 0$ that resembles that of quasars is identified at optical wavelengths as either a narrow-lined Low Ionization Nuclear Emission Region (LINER, L) or a ``transition'' (T) object, whose properties border on the definition of a starburst galaxy and an AGN. Emission-line ratio diagnostics \citep{bpt, vei87, kew06} that have been quite successful in identifying cases where the dominant ionization mechanism is either accretion onto a black hole (i.e., Seyferts, Ss) or radiation from hot, young stars (i.e., H {\sc ii} nuclei), remain inconclusive for the majority of Ls and Ts.
The Ls that exhibit quasar-like broad lines (L1s, by analogy with Seyfert 1s) are unambiguously accretion-powered sources; however, those lacking these features (the L2s) could have lines generated by shocks, post-starbursts, or other processes unrelated to accretion. Deciphering the underlying emission source(s) of these ambiguous nuclei is an ongoing struggle. Recent analyses of the emission properties of the low luminosity AGN (LLAGN) in relation to a wide variety of characteristics of their hosts, together with considerations of their small and large scale environments, reveal a sequence {\it H II $\rightarrow$ Seyfert/Transition Object $\rightarrow$ LINER $\rightarrow$ Passive} ({\it H II $\rightarrow$ S/T $\rightarrow$ L $\rightarrow$ P}) that these objects obey, at least in a statistical sense (Constantin \& Vogeley 2006; Schawinski et al. 2007; Constantin et al. 2008). This sequence traces trends in (1) increasing host halo mass, (2) increasing environmental density, (3) increasing central BH mass and host stellar mass, (4) decreasing BH accretion rate, (5) aging of the stellar population associated with their nuclei, and (6) decreasing amount of dust obscuration, which might translate into a decrease in the amount of material available for star formation or accretion. This sequence therefore suggests a process of transformation of galaxies from SF via AGN to quiescence, which may be the first empirical evidence for an analogous duty cycle to that of the high $z$ bright systems (i.e., quasars). State of the art hydrodynamical models provide clear support for such a scenario, by showing that during mergers, the BH accretion peaks considerably {\it after} the merger has started, and {\it after} the star-formation rate has peaked (e.g., Di Matteo et al. 2005, Hopkins et al. 2006). Constraining the nature of this {\it H II $\rightarrow$ S/T $\rightarrow$ L} sequence at $z \approx 0$ will improve our understanding of the degree to which the LLAGN phenomenon fits into the galactic BH accretion paradigm. The X-ray emission is arguably the most sensitive probe of accretion, of its intensity and of its efficiency, and thus it is of great interest to test and validate this sequence against large homogeneous X-ray selected samples. The Chandra Multiwavelength Project (ChaMP; Green et al. 2004) offers a unique opportunity for this: it is the largest-area Chandra survey to date and, when cross-matched with the SDSS, it yields an unprecedented number of galaxies in the local Universe for which we can combine and contrast measurements of the X-ray and optical emission. The sample of $\sim 110$ Chandra X-ray detected nearby galaxies (excluding broad line objects) analysed in this study represents a significant improvement in sample size and in the homogeneity of both the X-ray selection and the optical spectral type coverage. Previous studies of the relation between the X-ray nuclear emission, optical emission line activity and black hole masses provide important physical constraints on the LLAGN phenomenon. Almost invariably, the conclusions are that LLAGN are probably scaled down versions of more luminous AGN \citep{ho01, pan06}, and that $M_{\rm BH}$ is not the main driver of the (soft) X-ray properties \citep{gree07}.
The LLAGN are claimed to be X-ray detected at relatively high rates, and are found to be relatively unabsorbed, obscuration appearing to play only a minor role in their detection rates and/or in classifying them as types 1 and 2 in X-rays \citep{rob00, hal01, min08, ho08}, with the exception of those known to be Compton thick. Nonetheless, the X-ray investigations of AGN activity at its lowest levels remain largely restricted to Ls and Ss. Deciphering the ambiguous nature of Ls, in particular, has been the goal of many X-ray studies focused on these sources \citep{yaq95, ish96, iyo98,iyo01,ter98,ter00, ter00b,ter02,pell00a,pell00b,pell02,georg02,ptak96, ptak99, ptak04, rob01}. A hard, power-law AGN signal is generally $spectrally$ resolved for the majority of them; however, the corresponding energy (photon) index is marginally steeper (softer) than in (broad line) Ss, and many of them require a soft thermal component that differs somewhat from the blackbody soft excess commonly seen in Ss and quasars. The Fe K$\alpha$ emission and the Compton reflection component are usually weak in these sources, indicating that X-ray reprocessing is not due to material in an optically thick accretion disk \citep{light88, geo91}. Because most of these studies are based on large-beam observations, mostly $ASCA$ or $BeppoSAX$, Ls' emission has also quite often been attributed to stellar processes. Higher spatial resolution $Chandra$ and $XMM-Newton$ observations \citep{bohr01, kim03, pell03, ter03, filho04, page04, star05, flo06, gon06, sor06} remain divided between these interpretations, as the stellar interpretation persists for quite a number of these sources. In this work we approach the LLAGN phenomenon via the {\it H ~II $\rightarrow$ S/T $\rightarrow$ L $\rightarrow$ Passive} galaxy evolutionary sequence described above. In particular, we test the validity of the sequence within X-ray selected LLAGN both via the large variety of optical emission properties that first provided evidence for the sequence and via their X-ray properties. We combine the ChaMP X-ray detections with a sample of SDSS DR4 nearby galaxies that excludes broad line objects, creating a large sample of galaxy nuclei that spans a range of spectral types, from passive to actively line emitting systems, including the star-forming and accreting types, along with those of mixed or ambiguous ionization. Through measurements of their X-ray spectra\footnote{i.e., the shape of the spectral energy distribution quantified by the photon index $\Gamma$ that best fits a power-law $N(E) \propto E^{-\Gamma}$}, fluxes and luminosities, we characterize the sequence in terms of strength and mode of accretion. We provide here the first investigation of the relation between $\Gamma$ and the Eddington ratio $L/L_{\rm edd}$ at the lowest levels of accretion. We reveal a rather surprising anti-correlation between these two measures, opposite to what luminous AGN and quasars exhibit. This finding indicates a turning point in the general $\Gamma - L/L_{\rm edd}$ relation followed by AGN, which is identical (within the errors) to that shown by the $\Gamma - L/L_{\rm edd}$ trends in black hole X-ray Binaries (XRBs). Throughout this work we assume $\Omega_m = 0.3$, $\Omega_{\Lambda} = 0.7$, and $H_0 = 70h$ km s$^{-1}$ Mpc$^{-1}$.
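Since the Eddington ratio features prominently in what follows, we note for reference how such a quantity is assembled from the measured luminosities, using the standard $L_{\rm Edd} \approx 1.26\times10^{38}\,(M_{\rm BH}/M_\odot)$ erg s$^{-1}$. The sketch below is ours and purely illustrative; in particular, the bolometric correction factor is a placeholder assumption, not the calibration adopted in this paper.

\begin{verbatim}
# Illustrative only: Eddington luminosity and an approximate Eddington
# ratio from an X-ray luminosity.  The bolometric correction k_bol is a
# placeholder assumption, not the calibration used in this work.
def eddington_luminosity(m_bh_msun):
    """Eddington luminosity in erg/s for a BH mass in solar masses."""
    return 1.26e38 * m_bh_msun

def eddington_ratio(l_x, m_bh_msun, k_bol=16.0):
    """Approximate L_bol/L_Edd from a 0.5-8 keV luminosity l_x (erg/s)."""
    return k_bol * l_x / eddington_luminosity(m_bh_msun)

# Example: L_X = 1e40 erg/s around a 1e8 M_sun black hole
print(eddington_ratio(1e40, 1e8))   # ~ 1e-5
\end{verbatim}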
\section{The ChaMP-based LLAGN Sample} \label{sample}

The sample of LLAGN employed in this study is obtained by cross-matching the SDSS DR4 spectroscopic sample of galaxies with the X-ray detected sources identified as part of the $Chandra$ Multiwavelength Project (ChaMP). ChaMP is a wide-area serendipitous X-ray survey based on archival X-ray images of the $|b|> 20$ deg sky, obtained with Chandra's AXAF CCD Imaging Spectrometer (ACIS). A summary of the survey is presented in \citet{gre04} and \citet{gre09}, while ChaMP results and data can be found at {\tt http://hea-www.harvard.edu/CHAMP}. The X-ray analysis extends to a total of 392 fields through $Chandra$ Cycle 6, which cover a total of $\sim 30$ deg$^2$ of sky area. We limit the investigation to the DR4 dataset in order to employ the measurements of absorption and emission line fluxes and equivalent widths (EW) drawn from the catalog built by the MPA/JHU collaboration\footnote{publicly available at {\tt http://www.mpa-garching.mpg.de/SDSS/} \citep{bri04}}. Here the line emission component is separated and subtracted from the total galaxy spectrum based on fits of stellar population synthesis templates \citep{tre04}. The catalog does not include broad-line objects. To relate the central BH accretion activity to the host properties, we employ stellar masses of galaxies and the H$\delta_A$ Balmer absorption-line index as a proxy for the age of the associated stellar population, as calculated and presented by \citet{kau03}. A detailed analysis of these properties, and their relation to the AGN phenomenon revealed through optical signatures, is presented in \citet{kau04}. The cross-match of all ChaMP sky regions imaged by ACIS with the SDSS DR4 spectroscopic footprint results in a parent sample of 15955 galaxies on or near a chip, and a subset of 199 sources that are X-ray detected. Among those, only 107 sources have an off-axis angle (OAA) $\theta < 0.2$ deg and avoid $ccd=8$ (which suffers from high serial readout noise); these objects comprise the main sample we employ for this study. Subsequent subsections present details of the X-ray spectral analysis, the general optical and X-ray properties of the sample, the definition of the subsamples based on their optical spectral properties, and a discussion of the associated selection effects.

\subsection{X-ray spectral analysis} \label{xanalysis}

$Chandra$ imaging with ACIS provides energy resolution sufficient to constrain the X-ray spectral properties as well. To characterize the X-ray activity of the ChaMP-SDSS galaxies included in our sample, we perform direct spectral fits to the counts distribution using the full instrument calibration, known redshift and Galactic 21cm column\footnote{Neutral Galactic column density $N^{Gal}_H$ taken from \citet{Dickey90} for the $Chandra$ aimpoint position on the sky.} $N^{Gal}_H$. Source spectra are extracted from circular regions with radii corresponding to energy encircled fractions of $\sim 90\%$, while the background region encompasses a 20\arcsec\, annulus, centered on the source, with a separation of 4\arcsec\ from the source region. Any nearby sources are excised from both the source and the background regions. The spectral fitting is done via {\tt yaxx}\footnote{http://cxc.harvard.edu/contrib/yaxx} \citep{Aldcroft06}, an automated script that employs the CIAO $Sherpa$\footnote{http://cxc.harvard.edu/sherpa} tool.
Each spectrum is fitted in the range 0.5 -- 8 keV by two different models: (1) a single power-law plus absorption fixed at the Galactic 21cm value (model ``{\tt PL}''), and (2) a fixed power-law of photon index $\Gamma = 1.9$ plus intrinsic absorption of column $N_H$ (model ``{\tt PLfix}''). These fits use the Powell optimization method, and provide a robust and reliable one-parameter characterization of the spectral shape for any number of counts. Spectra with fewer than 100 net counts were fit using the ungrouped data with Cash statistics \citep{Cash79}, while those with more than 100 counts were grouped to a minimum of 16 counts per bin and fit using the $\chi^2$ statistic with variance computed from the data. For the nine objects with more than 200 counts we employ a third model where both the slope of the power-law and the intrinsic absorption are free to vary (model ``{\tt PL\_abs}''). Many of the X-ray detected galaxies in our sample have relatively few net counts (mean 76, median 19). In such cases, instrumental hardness ratios are often used instead, in the belief that genuine spectral fitting is not warranted by the data quality. We stress however that spectral fitting provides the most consistent and robust estimates of the physical parameters of interest, the power law slope and intrinsic absorption. Because the ChaMP X-ray exposures span a variety of intervening Galactic columns, include data from both back- and front-side ACIS CCDs, and span 6 years of observations, the spectral response varies significantly from source to source. While constraints from spectral fitting may not be tight for low count sources, use of unbinned event data and the appropriate response gives an optimal and unbiased estimate of the fit parameters and their uncertainties, especially important when absorption may be present at different redshifts. Classical hardness ratio analysis, on the other hand, amounts to grouping the data into two rather arbitrary bins, introducing potential biases and statistical complexity. Interpreting the hardness ratio value for ChaMP sources in disparate fields requires incorporating the instrument response in any case, so we strongly prefer spectral fitting. Nevertheless, when there is only one free parameter, only the overall spectral shape is constrained. This simple parametrization proves generally sufficient to model the 0.5 -- 8 keV spectra of these objects. Comparisons of 0.5 -- 8 keV fluxes $f_x$ (and luminosities $L_X$) obtained from the {\tt PL} and {\tt PLfix} models show good agreement for the whole sample of galaxies, with the average (median) difference in these values being 0.07 (0.01) dex. We caution that the simple power-law fits we use here could be misleading for objects where the absorption is complex (e.g., partial covering, with one or more absorbers potentially being ionized). However, the data quality is insufficient to show that the situation is more complex than a simple power-law. We compile a set of ``best'' measurements for $\Gamma$ or $N_H$, by using the values obtained from the {\tt PL} (intrinsic $N_H$ fixed at zero) and {\tt PLfix} ($\Gamma$ fixed at 1.9) models, respectively. For objects with more than 200 counts we use the $\Gamma$ and $N_H$ values obtained from the {\tt PL\_abs} model. The mean $\Gamma$ for the whole sample of 107 galaxies is 2.03 $\pm$1.38, with a median of 2.04. The level of intrinsic absorption is generally low.
More than 85\% of the sample exhibits $N_H < 1 \times 10^{22}$ cm$^{-2}$, while for 60\% of the objects the spectral fits are consistent with zero intrinsic absorption. Note that, given the simplified model used in fitting the X-ray spectra, these values might not necessarily represent the true distribution of absorption in these objects. Individual measurements of all of these X-ray properties, together with their observational parameters, such as the total number of X-ray counts, the exposure time, and the off-axis angle, along with the corresponding X-ray source ID, are listed in Table~\ref{table-xray} for all 107 objects. A contribution from thermal emission is expected for some of the objects included in this sample of nearby galaxies, whether or not they show line emission activity. Such a component may provide a reasonable contribution to the total (X-ray) emission even in objects where the dominant ionization mechanism, as identified optically, is a compact nuclear source, i.e., an AGN. LINERs, for instance, have frequently been associated with photoionization by hot, young stars \citep{fil92, shi92, bar00}, clusters of planetary nebula nuclei \citep{tan00}, or, more recently (and perhaps more consistently with their older stellar populations), hot post-AGB stars and white dwarfs \citep{Stasinka08}. We attempted fitting the $0.5 - 8$ keV spectra with a Raymond-Smith (R-S) thermal plasma model, with the abundance fixed at 0.5 solar. The choice of abundance level is inspired by previous investigations of LLAGN, e.g., \citet{ptak99} and \citet{ter02}, even though in many cases the abundance remains poorly constrained. The R-S model fit results seem physically feasible for only about a third of the sample: reasonable values, in agreement with previous findings, i.e., $kT \la 2$ keV, are found for only 35 sources. For another third, the best fit gives $kT > 10$ keV. We discuss in more detail the results of fitting this model as a function of the optical spectroscopic properties of these sources in Section~\ref{xgen}. As perhaps expected, it is the passive galaxies that show the softest power-law slopes we measure ($\Gamma>3.5$), suggesting that, for these cases in particular, a power-law representation may be incorrect. If we instead fit a Raymond-Smith model, we derive reasonable typical temperatures near $\sim 0.7$ keV. Nonetheless, since even these objects are likely to have some power-law contribution from X-ray binaries, the R-S model is perhaps no better a characterization of the true spectrum than a power-law. The X-ray flux distribution we derive from the R-S model fits to passive galaxies is not significantly different ($\pm 25\%$), so we prefer to retain the power-law fits everywhere to facilitate more direct comparison of the different spectral classes.

\subsection{The Optical Spectral Classification} \label{oclass}

We identify and classify accretion sources and other types of active systems in both the parent galaxy sample and the X-ray detected subsample, based on their optical emission line properties. It has been argued \citep{ho97s, con06, kew06} that the best way to separate accretion sources from starbursts or other types of active systems is via a set of three diagnostic diagrams, which employ four line flux ratios: [\ion{O}{3}]$\lambda$5007/H$\beta$, [\ion{N}{2}]$\lambda$6583/H$\alpha$, [\ion{S}{2}]$\lambda\lambda$6716,6731/H$\alpha$, and [\ion{O}{1}]$\lambda$6300/H$\alpha$.
Thus, for both samples, we first select a subset of strong emission-line sources that show significant emission in all six lines used in the type classification (H$\alpha$, H$\beta$, [\ion{O}{3}], [\ion{N}{2}], [\ion{S}{2}], and [\ion{O}{1}]), and a set of passive objects that show insignificant line emission activity. An emission feature is considered to be significant if its line flux is positive and is measured with at least 2$\sigma$ confidence. Following the \citet{kew06} classification criteria, the emission-line objects are separated into Seyferts (Ss), LINERs (Ls), Transition objects (Ts), and star-forming, or H {\sc ii}, galaxy nuclei. This method of classifying low luminosity actively line-emitting galaxy nuclei has the disadvantage that it leaves unclassified a high fraction ($\sim$ 40\%) of galaxies, which show strong emission features, but not in all six lines considered here. The condition for strong emission in [\ion{O}{1}] in particular is significantly restrictive. Moreover, another quite large ($\sim$ 25\%) fraction of the emission-line objects remains unclassified, as their line ratios, although accurately measured, do not correspond to a clear spectral type in all three diagrams. In the majority of such cases, while the [\ion{N}{2}]/H$\alpha$ ratio shows relatively high, S-like values, the corresponding [\ion{S}{2}]/H$\alpha$ and/or [\ion{O}{1}]/H$\alpha$ place them in the T or H {\sc ii}-like object regime; thus, because the [\ion{S}{2}] and [\ion{O}{1}] emission lines are better AGN-diagnostics than [\ion{N}{2}], these systems are likely to be excluded from the AGN samples selected via these classifications. As a consequence, our samples based on the 6-line classification are small. To enlarge our samples of galaxy nuclei of all spectral types, we also explored an emission-line classification based on only the [\ion{O}{3}]/H$\beta$ vs. [\ion{N}{2}]/H$\alpha$ diagram, i.e., a 4-line classification method, for the X-ray detected sources. The emission line galaxy samples thus comprise all objects showing at least 2$\sigma$ confidence in the line flux measurements of these four lines only. The delimitation criteria of H {\sc ii}'s and T's remain unchanged, while Ss and Ls are defined to be all objects situated above the \citet{kew06} separation line, with [\ion{O}{3}]/H$\beta$ $>3$ and $<3$, respectively. The 4-line and 6-line classifications result in significantly different classes when applied to optically selected galaxies in general \citep{con06}. In particular, the true spectral types become heavily blended into the dominant population of LINERs (or Ts, depending on the separation lines used in the diagnostic diagrams). Interestingly, however, when applied to the X-ray sample, the 4-line classes fall well within the 6-line loci; although Ss and Ls are separated only by their [\ion{O}{3}]/H$\beta$ line flux ratio, they remain clearly separated in the [\ion{S}{2}]/H$\alpha$ and [\ion{O}{1}]/H$\alpha$ diagrams as well. Figure~\ref{bpt} shows how the 6-line (top) and the 4-line (bottom) classifications compare for the ChaMP X-ray detected galaxies. Although the sample of X-ray detected galaxies is small, this comparison indicates that adding X-ray detection makes the 4-line classification more secure, and that the need for the (usually unavailable) [\ion{O}{3}]/H$\beta$ vs. [\ion{S}{2}]/H$\alpha$ and [\ion{O}{3}]/H$\beta$ vs. [\ion{O}{1}]/H$\alpha$ diagrams is not as stringent as in the cases where only optical information is available.
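For concreteness, the 4-line scheme just described can be summarized algorithmically. The sketch below is illustrative only: it uses the commonly adopted Kauffmann et al. (2003) and Kewley et al. (2001) demarcation curves in the [\ion{N}{2}]/H$\alpha$ vs. [\ion{O}{3}]/H$\beta$ plane, as employed in the \citet{kew06} scheme, together with the [\ion{O}{3}]/H$\beta = 3$ Seyfert/LINER split quoted above; it is not the actual classification pipeline used in this work.

\begin{verbatim}
# Rough sketch of the 4-line classification described above (illustrative
# only; not the actual pipeline of this work).  Demarcation curves:
# Kauffmann et al. (2003) and Kewley et al. (2001); S/L split at
# [OIII]/Hbeta = 3 as quoted in the text.
import math

def classify_4line(n2_ha, o3_hb):
    """n2_ha = [NII]6583/Halpha, o3_hb = [OIII]5007/Hbeta (linear ratios)."""
    x, y = math.log10(n2_ha), math.log10(o3_hb)
    if x < 0.05 and y < 0.61 / (x - 0.05) + 1.30:
        return "HII"                      # star-forming nucleus
    if x < 0.47 and y < 0.61 / (x - 0.47) + 1.19:
        return "T"                        # transition object
    return "S" if o3_hb > 3 else "L"      # Seyfert or LINER

for ratios in [(0.3, 0.8), (0.6, 1.5), (1.2, 5.0), (1.2, 1.5)]:
    print(ratios, classify_4line(*ratios))   # -> HII, T, S, L
\end{verbatim}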
We will thus consider for the analysis presented in this paper only the 4-line classification.

\subsection{The X-ray detection fraction of the LLAGN} \label{statistics}

The ``cleaning'' role of the X-ray detection in finding and defining LLAGN is even more obvious when we compare the fractions of X-ray detected objects by spectral type, relative to both the parent sample of nearby optically selected objects and the subsample of X-ray detected galaxies. Table~\ref{stats} lists these percentages, where for the parent optically selected sample we consider only the SDSS galaxies on ACIS chips (excluding chip S4, $ccd \# 8$), and with off-axis angle $\theta < 0.2$\,deg, consistent with the conditions used in compiling the X-ray samples. The first two columns show the number (and fraction) that each spectral type represents among (1) the optical parent sample and (2) the X-ray-detected subsample. The third column lists the raw fraction of X-ray detections per spectral type. That the X-ray detection is very efficient in finding LLAGN, particularly Seyferts, is quite apparent. While narrow-line Ss usually make up only $<2 - 4\%$ \citep{ho97d, con06} of the optically selected nearby galaxies, the X-ray detection increases the chances of finding them tenfold. Ts and Ls, where an AGN contribution to the total ionization power is expected, are also much better represented in the X-ray detected sample of galaxies, their fractions being $\sim 3 \times$ larger than when only optical selection is employed. Some 65\% of the optically defined Ss are detected in X-rays, while the other spectral types hardly reach an X-ray detection fraction of $20\%$. Ls are the second most X-ray active sources within nearby galaxies, while Ts and the galaxies that show some/weak emission line activity account for less than one fifth of the sample. As we discuss further in Section~\ref{xgen}, only the luminous H{\sc ii}s are detected in X-rays. While the X-ray detection rate of the H {\sc ii}s is basically consistent with zero, when detected, their X-ray emission is moderately strong, $L_X = 10^{39} - 2.5 \times 10^{41}$ erg s$^{-1}$; half of the H {\sc ii} detections show X-ray luminosities higher than the level that can be reached without contribution from an AGN ($10^{40}$ erg s$^{-1}$), suggesting once more that the nuclear emission in these sources might not be completely driven by stellar processes. The only previous studies that encompass the whole spectral variety of LLAGN are \citet{rob00} and \citet{par08}, which employ ROSAT (HRI and RASS respectively). Their search for X-ray emitting nearby galaxy nuclei, optically characterized via the Palomar and SDSS surveys respectively, resulted in soft X-ray detection rates of $\sim 70\%$ of both Ss and Ls (HRI), $\sim 70\%$ of Ss and $\sim 60\%$ Ls (RASS), $\sim 40\%$ (HRI) and $< 10\%$ (RASS) of H {\sc ii}'s, and $\sim 30\%$ of passive galaxies (both cases). Comparison of these detection rates to ours is problematic because of their softer instrument bandpass and lower sensitivity, albeit wider sky coverage. Hard X-ray studies of homogeneously selected samples including all spectral types of nearby active galaxies are practically non-existent. Ls, and particularly those found in the Palomar survey, have been clearly privileged in terms of X-ray targeting \citep{ho01}. Their detection rates are found to be significantly higher than what we report here based on the serendipitous ChaMP survey. Ho et al.
(2001) reports a $\sim 70\%$ detection rate, while, when chosen for having a flat-spectrum radio core, Ls are found to be $100\%$ X-ray active \citep{ter03}. Later studies, claiming a better accounting for selection and for the LINER classification \citep{sat04, dud04, pel05, sat05, flo06, gon06}, conclude with lower fractions, $\sim 50\%$. ChaMP has the distinct advantage of presenting a large, homogeneous serendipitous sample of LLAGN. The detection fraction is not an intrinsic property of galaxies, but rather a convolution of galaxy properties with optical survey depth, and the X-ray sensitivity vs. sky area curve. As described in \citet{gre09}, the ChaMP is characterising the X-ray sensitivity at the position of every SDSS galaxy, which will enable us to compile the unbiased fraction of galaxies by optical spectral type (e.g., Ls) that fall in X-ray luminosity bins, from sky volumes complete to those limits. We will present the results of such an investigation in a subsequent paper. \subsection{Selection Effects: the ChaMP X-ray Galaxy Sample is Minimally Biased} \label{selection} X-ray and optical emission are correlated. Thus, while X-ray selection is one of the most powerful tools to exploit in detecting accretion sources, it is also expected that this selection picks up, selectively, the brightest (optical) sources. Due to its serendipitous character, ChaMP should however reduce such effects. Note that, out of the 107 X-ray detections that this ChaMP sample of galaxies provides, only 13 are targets. We explore in Figure ~\ref{host} the biases that X-ray detection potentially adds to our sample. We compare the distributions of redshift $z$, of the apparent and absolute $r$-band (SDSS) magnitudes, $r$ and $M_r$ respectively, and of the concentration index $C$\footnote{$C$ = $R_{90}/R_{50}$, where $R_{50}$ and $R_{90}$ are the radii from the center of a galaxy containing 50\% and 90\% of the Petrosian flux}, used as a proxy for the morphological type of these galaxies. These comparisons, both for the whole optically and X-ray selected samples (histograms on the left panel) and separately per spectral type (the right panel), show, pleasingly, that biases are not strong. However, H {\sc ii} galaxies are of lower $z$ and brighter $M_r$ when detected in X-rays. As shown in the next section, the H {\sc ii}s are generally weak X-ray sources. The tendency for X-ray detected galaxies to appear brighter in apparent magnitude ($r$, by $\sim 0.5$ mag) seems to be caused by the large difference in $r$ for the H{\sc ii} galaxies alone, as all the other types of sources show very similar ranges, averages or medians, when analysed separately. The concentration index $C$ appears to be somewhat larger for the X-ray objects. Several factors are likely to account for this. The H {\sc ii}s are the least concentrated optically, and have the lowest X-ray detection fraction. X-ray detection is sensitive to the AGN activity, which is more prevalent in the early type (massive bulge-dominated) galaxies \citep{ho03, kau04}. Nuclear activity is also expected to increase $C$ simply by adding light to the core. Note that there is no significant difference in this parameter in regard to the Ss' X-ray selection. However, Ss make up only a tiny fraction of $z \approx 0$ galaxies, and the morphology of their hosts spans quite a range. 
\subsection{General X-ray Properties in Relation to Optical Spectroscopic Classification} \label{xgen} X-ray information about the nearby galaxy centers constrains the contribution of accretion-related processes to their optical spectral characteristics. We present in this section the distributions of a variety of X-ray properties for the whole sample as well as for subsamples by optical spectral class. Figure ~\ref{hostx} shows the distributions of the X-ray counts, the $0.5 - 8$\,keV unabsorbed X-ray fluxes, the best-fit X-ray photon indices $\Gamma$ and the intrinsic neutral hydrogen column densities $N_H$. These measurements are shown for the whole sample of X-ray detected nearby galaxy nuclei (left column), and separately per spectral type (right column). For X-ray fluxes, we show the values derived using the primary power-law fitting models discussed in Section ~\ref{xanalysis}: (1) a single power-law with no intrinsic absorption and (2) a fixed power-law ($\Gamma=1.9$) with absorption. The $f_x$ values are generally consistent with each other for the whole range of brightness and optical spectral type. As expected, the X-ray brightest objects are among Ss, however, even for this spectral type the range of values remains pretty broad, spanning 3 orders of magnitude. Note also that the few H {\sc ii} galaxies that are X-ray detected are in general brighter than the passive systems. In terms of the X-ray spectral shape, the ChaMP nearby galaxies are quite a diverse population. The mean photon index per optical spectral type shows however a rather clear dependence on the spectral type: Ss show the hardest $0.5 - 8$\,keV spectral shape, becoming softer and softer from Ts to Ls to the Passive galaxies which are clearly the softest. The H {\sc ii} galaxies are unexpectedly hard in average $\Gamma = 1.46$, however, two particularly hard detections clearly weight the subsample in this direction. The Ts and Ls average at $\Gamma \approx 2$. The $N_H$ values are poorly constrained for this sample, and there is no obvious correlation with the optical spectral type. It is however obvious that the Ss are the sources with the highest fraction of non-zero absorption. Since all our Ss are of type 2, i.e., lack broad emission lines in their spectra, the unification paradigm predicts that many will show signature of absorption in X-rays. A typical unabsorbed power-law $\Gamma = 1.9$ requires a column density of $N_H = 8 \times 10^{21}$ cm$^{-2}$ to yield an apparent $\Gamma = 1$ similar to the mean value found for our S subsample, which is consistent with the observed mean $N_H$ for these particular systems. \section{Probing The Sequence} \label{sequence} The {\it H {\sc ii} $\rightarrow$ $S$/$T$ $\rightarrow$ L} evolutionary sequence proposes a comprehensive picture for the co-evolution of AGN and their host galaxies. This scenario is supported by and strengthens previous studies of AGN, star-formation activity and their co-evolution in nearby galaxies \citep{kau04, hec04}, and may enhance our understanding of how AGN work and evolve in relation to both their hosts and their environments \citep{con08}. X-rays, as primary signatures of supermassive BH accretion, offer a critical verification of the proposed sequence. $L_X$ and spectral fits characterize the sequence in terms of strength and mode of accretion, especially the order of $S$s and $T$s within the {\it H {\sc ii} $\rightarrow$ $S$/$T$ $\rightarrow$ L} cycle. 
Both the bulge nebular properties, and the small and large scale environments of $S$s and $T$s, are very similar and remain intermediate between those of H {\sc ii}s and $L$s. The only parameters showing a ``jump'' in the otherwise smooth trends are H$\alpha$/H$\beta$ Balmer decrements and the nearest neighbor distance $d_{\rm 1nn}$ \citep{con08}. H$\alpha$/H$\beta$ provides a measure of absorption, and perhaps also the amount of fuel available for accretion, which we can now test directly against both $L_X$ (accretion power) and X-ray spectral constraints. We present in Figure ~\ref{seqo} a comparison of the degree to which optically and X-ray selected galaxies follow the proposed sequence in terms of the black hole mass $M_{\rm BH}$, obtained via $\sigma_*$ measurements and the $M_{\rm BH} -\sigma_*$ relation \citep{tre02}, the (dust corrected) stellar mass $M^*$ \citep{kau03a}, the Balmer decrement H$\alpha$/H$\beta$ as a proxy for dust extinction, the H$\delta_A$ Balmer absorption index as a measure of the age of the associated stellar population, $L[O III]$, and the accretion rate expressed as $L/L_{\rm edd}$, where $L = L_{\rm bol} = 600 \times L[O III]$ for the bolometric correction \citep{hec04, kauffmann09}, and the $L[O III]$ is extinction-corrected using the corresponding Balmer decrements and a $\tau \propto \lambda^{-0.7}$ attenuation law \citep{charlot00}. Given the (spectral) definition of passive galaxies (i.e., lacking optical emission line activity), there are no measurements of the Balmer decrement, $L[O III]$ and $L/L_{\rm edd}$ for these sources. It is notable that in all measures, and for all types of sources, the X-ray and optically selected sources are very similar, and appear to obey the {\it S $\rightarrow$ T $\rightarrow$ L $\rightarrow$ P} sequence. If anything, the sequence appears stronger among the X-ray selected galaxies, both in median/average values and in their distributions of individual measurements, which span smaller ranges of values than for the optically selected objects. Among the weakest accreting objects, Ls and Passive galaxies, the X-ray selection tends to pick up more massive objects, with heavier BHs, and older stellar populations; there is however no obvious difference in these parameters for the other types of sources. As expected, these massive systems also appear to have smaller [\ion{O}{3}] luminosities and accretion rates, accentuating the sequential $S$ to $T$ to $L$ drop in these parameters suggested by optically selected/defined objects. Also, while the Balmer decrement distributions suggest that, in general, the X-ray selection is not strongly affected by dust, the slightly more obscured X-ray detected H {\sc ii}s and less obscured X-ray detected Ls make the sequence more apparent. Figure ~\ref{seqx} illustrates how the 0.5 - 8 keV X-ray luminosity $L_X$ and the corresponding $L/L_{\rm edd}$ behave along the {\it S $\rightarrow$ T $\rightarrow$ L $\rightarrow$ P} sequence. For the sake of comparison, we show $L_X$ values obtained via two spectral fitting models, (1) a power-law with no intrinsic absorption (only the Galactic one) and (2) a fixed power-law with $\Gamma = 1.9$ and with variable intrinsic absorption added to the Galactic level. The sequence is supported by both types of measurements, albeit stronger when the simple power-law model is used. 
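For reference, the accretion-rate estimates used in Figures ~\ref{seqo} and ~\ref{seqx} can be assembled as in the following illustrative sketch. The numerical constants (the Case B intrinsic H$\alpha$/H$\beta$ of 2.86, the \citet{tre02} $M_{\rm BH}-\sigma_*$ coefficients, and $L_{\rm edd} = 1.26\times10^{38}\,(M_{\rm BH}/M_\odot)$ erg s$^{-1}$) are standard values quoted from memory rather than taken from the text, and the implementation of the $\tau \propto \lambda^{-0.7}$ correction is one possible reading of \citet{charlot00}; the $600\times L$[O III] and $16\times L_X$ bolometric corrections follow the text.
\begin{verbatim}
import numpy as np

MSUN_EDD = 1.26e38     # Eddington luminosity per solar mass [erg/s] (standard value)
DECREMENT_INT = 2.86   # Case B intrinsic Halpha/Hbeta (assumed)

def mbh_from_sigma(sigma_kms):
    """Tremaine et al. (2002) M-sigma relation (coefficients assumed):
    log(M_BH/Msun) = 8.13 + 4.02 log(sigma / 200 km/s)."""
    return 10 ** (8.13 + 4.02 * np.log10(sigma_kms / 200.0))

def dust_corrected_oiii(l_oiii_obs, balmer_obs):
    """Correct L[O III]5007 for dust from the observed Balmer decrement,
    assuming tau(lambda) ~ lambda^-0.7 (Charlot & Fall 2000), as in the text.
    No clipping of unphysical decrements (< 2.86) is attempted here."""
    dtau = np.log(balmer_obs / DECREMENT_INT)          # tau(Hbeta) - tau(Halpha)
    lam_hb, lam_ha, lam_o3 = 4861.0, 6563.0, 5007.0
    tau0 = dtau / (lam_hb ** -0.7 - lam_ha ** -0.7)    # tau(lambda) = tau0 * lambda^-0.7
    return l_oiii_obs * np.exp(tau0 * lam_o3 ** -0.7)

def eddington_ratio_oiii(l_oiii_obs, balmer_obs, sigma_kms, bol_corr=600.0):
    """L_bol/L_edd with L_bol = 600 x L[O III] (extinction corrected), as in the text."""
    l_bol = bol_corr * dust_corrected_oiii(l_oiii_obs, balmer_obs)
    return l_bol / (MSUN_EDD * mbh_from_sigma(sigma_kms))

def eddington_ratio_x(l_x, sigma_kms, bol_corr=16.0):
    """X-ray based version, with L_bol = 16 x L_X (Ho 2008), as in the text."""
    return bol_corr * l_x / (MSUN_EDD * mbh_from_sigma(sigma_kms))

# Example: sigma_* = 150 km/s, observed L[O III] = 1e39 erg/s, Halpha/Hbeta = 4
print(eddington_ratio_oiii(1e39, 4.0, 150.0))
print(eddington_ratio_x(1e41, 150.0))
\end{verbatim}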
As discussed in Sections ~\ref{xanalysis} and ~\ref{xgen}, the free power-law with no intrinsic absorption spectral model, that seems to best characterize this sample of objects, makes the statistical sequence even more probable. For the sequence in the accretion rate, we calculate $L/L_{\rm edd}$ using $L/L_X = 16$ for the bolometric luminosity, as suggested by \citet{ho08}, and the BH masses estimated from their host stellar velocity dispersion $\sigma_*$ using \citet{tre02}. We also contrast the X-ray measurements with those where $L_{\rm bol}$ derived from $L [O III]$; note that while $L_X$ and the derived $L/L_{\rm edd}$ are available for the whole sample of 107 objects, only 69 of them exhibit strong $2-\sigma$ detectable, [\ion{O}{3}] line emission. The $S \rightarrow T \rightarrow L \rightarrow P$ galaxy sequence compares well in both optical and X-ray measurements. The values of all these parameters decrease monotonically in both median and average values, from Ss to Ts, to Ls and Passive galaxies, consistent with what optical properties of these sources put forward. The H {\sc ii}s are the only apparent exception here. The small number statistics for these galaxy nuclei preclude any strong conclusions. Given the expected high sensitivity to soft sources that these measurements provide, their spectra tend to be on the hard side. Obscuration is a notorious cause for spectral hardening, and thus, the possibility that these objects hide in their centers obscured BH accretion is still not ruled out. Note also that the power-law slope $\Gamma$ that best fits the X-ray spectra increases from Ss to Ts, to Ls, in both median and average values, showing a tendency of softening of the spectra from Ss to passive galaxies along the proposed sequence (Figure ~\ref{hostx}). This is quite an interesting finding, as in other AGN, mostly the luminous type 1 AGN, observations suggest an opposite trend: the stronger accreting (and more luminous) sources are the softer ones, most recently quantified by \citet{kelly08, she08}. It is interesting to note that a spectral softening with strengthening of the accretion process/rate is also a generally common feature of the X-ray Binaries (XRBs) with reasonably high $L/L_{\rm edd}$ \citep{kub04}. Note however that, when the Eddington ratio is less than a critical value, $L/L_{\rm edd} \la 0.01$, i.e., XRBs are observed in their low/hard state, there is a clear trend for softening with further weakening of the accretion rate \citep{kalemci05, yamaoka05, yuan07}. We investigate this finding in more detail in the following section, and discuss the analogy with the XRB phenomenon. \section{Dependence of $\Gamma$ on $L/L_{\rm edd}$} \label{gamma} Investigations of how X-ray parameters depend on the accretion rate relative to the Eddington rate are expected to offer important constraints on physical models of the AGN X-ray emitting plasma, particularly its geometry. A hot, optically thin corona that Compton up-scatters UV photons from the optically thick disk \citep{sha73, haa91} seems to fit reasonably well the X-ray emission of the highly accreting systems, particularly when it is associated with a hot, possibly patchy and "skin"-like structure "sandwich"-ing the cold disk \citep{nay00, cze03}. Other geometries remain however viable, among them an accretion disk evaporating into a hot inner flow \citep{sha76, zdz99}, or combinations of a hot inner flow and the patchy corona \citep{pou97, sob04}. 
For LLAGN, with $L/L_{\rm edd} < 10^{-9} - 10^{-5}$, the accretion flow has been hypothesized to originate from a geometrically thick and hot disk-like structure that is inefficient at converting gravitational potential energy into radiation, the radiatively inefficient accretion flow (RIAF) model \citep{nar94, bla99, nar00}. Models suggest that there is a transition/switch from a standard disk to an advection dominated accretion flow (ADAF; or, a radiatively inefficient accretion flow, RIAF) when $L/L_{\rm edd}$ declines below a critical value within a certain transition radius \citep{esin97,yua04,lu04}. In either case, radiation pressure driven outflows can also alter the physics of the corona \citep{pro05}. Because the efficiency in producing an X-ray accretion flow and in driving the outflows depends on the BH mass and its accretion rate, it is important to understand the inter-dependence of these parameters on the X-ray properties, particularly the shape of the X-ray spectrum, i.e., the X-ray photon index $\Gamma$. The relationship between $\Gamma$ and the Eddington ratio $L/L_{\rm edd}$ is relatively well studied, and yet a controversial issue. These two parameters seem to be positively correlated for objects accreting at relatively high Eddington ratios, i.e., quasars, luminous type 1 Seyferts \citep{kelly08, she08}, while for low $L/L_{\rm edd}$ values the situation remains uncertain, mainly due to the lack of quality data in that regime. The conclusion so far seems to be that the shape of the hard X-ray power-law is largely controlled by $L/L_{\rm edd}$. For the luminous, strongly/efficiently accreting sources, it is proposed that the corona acts as a "thermostat" by (Compton) cooling more efficiently when the disk emission increases, producing more soft photons, and thus steepening the hard X-ray spectrum. This scenario also accounts nicely for the generally narrow range of $L/L_{\rm edd}$ and $\Gamma$ values measured in (optically selected) quasars or luminous AGN, in general [$L/L_{\rm edd} \sim 0.3$ with a typical dispersion of a factor of $\sim 5$; \citep{mcl04, kol06, net07, shen08}. $\Gamma \sim 1.5 - 2.5$; \citep{vig05, she08}]. For LLAGN, the $\Gamma - L/L_{\rm edd}$ relation remains only vaguely constrained. \subsection{$\Gamma - L/L_{\rm edd}$ relation for nearby AGN/galaxies} \label{llagncorrel} The relation between the X-ray photon index $\Gamma$ and the Eddington ratio for the ChaMP X-ray detected galaxies is illustrated in Figure ~\ref{gamma_edd}. The Eddington luminosity $L_{\rm edd}$ is calculated as indicated in Section ~\ref{sequence}, using $M_{\rm BH}$ values estimated based on stellar velocity dispersion $\sigma_*$ via \citet{tre02}, while the bolometric luminosity is calculated based on $L_X$, using the average bolometric correction of $L_{\rm bol}/L_X = 16$ \citep{ho08}. There is a rather clear trend of spectral hardening with increasing accretion rate. $\Gamma$ and $L/L_{\rm edd}$ are found to be negatively correlated. Defining an accretion rate, i.e., calculating $L/L_{\rm edd}$, for galaxies with $L_X < 10^{42}$ erg s$^{-1}$ may be misleading if the X-ray emission in these objects is dominated by X-ray sources other than an accreting super massive BH (e.g., individual compact binaries or hot diffuse gas). 
Note however that the $x$-axis of Figure~\ref{gamma_edd} is simply the measure of $L_X/M_{\rm BH}$ (or better, $L_X/{\sigma_*}^4$), where both $L_X$ and the BH mass $M_{\rm BH}$ (or $\sigma_*$) are measured in exactly the same manner for all of the objects involved, and the trend remains even if this parameter is not interpreted as an accretion rate. The strongest likely dilution to accretion emission comes from contributions of the hot ISM in passive galaxies. Hot gas emission is soft, and relatively stronger in more massive hosts, both of which would push the passive galaxy points toward softer X-ray spectra (larger $\Gamma$) and lower $L_X/M_{\rm BH}$, which might spuriously accentuate the observed trend even in the absence of significant accretion power. Indeed, the observed trend is weakened once the passive galaxies are removed (Table~\ref{lsf}). So while we cannot rule out significant extended emission contributions in this sample, more detailed examination of such objects is warranted to determine the relative fractions of nuclear vs. extended hot gas contributions. We measure the significance of the $\Gamma - L/L_{\rm edd}$ anti-correlation using the Spearman-rank test. The Spearman-rank coefficient, the chance probability, and the number of sources for each correlation are listed in Table ~\ref{spearman}. We fit the anti-correlation points with a linear least-squares method for the whole sample and for the subsamples of galaxies corresponding to different spectral types, using the {\tt mpfit}\footnote{http://www.physics.wisc.edu/\~craigm/idl/fitting.html} routine, which is able to account for the errors in $\Gamma$. We show, for comparison, both the error-weighted (continuous line) and the unweighted (dotted line) best fits in Figure ~\ref{gamma_edd}. The scatter around the best linear fit is large, and as expected the points with the largest error bars show the largest deviation. However, within the errors, the results of the fit remain unchanged when we use only objects with small measurement errors (i.e., $\Delta \Gamma/\Gamma < 50\%$). The results for the linear regression coefficients and the corresponding $\chi^2$ and $dof$ values are listed in Table ~\ref{lsf}. For the sake of considering ``cleaner'' AGN-like activity only, we also list here the results of applying such a fitting technique to samples that exclude the Passive galaxies and the H {\sc ii}s, and for samples of luminous X-ray systems ($L_X \ga 10^{42}$ erg s$^{-1}$) only; indications of an anticorrelation, albeit weaker, remain. The linear regression fits might appear at odds with the conclusion of the Spearman test, which indicates that $\Gamma$ and $L/L_{\rm edd}$ are possibly anticorrelated for all the subsamples presented here. Note however that such a discrepancy appears for the subsamples where the Spearman test remains rather inconclusive, as the probability that an anticorrelation appears by chance is large. Moreover, the Spearman test ignores the errors, and thus tests the unweighted data, for which linear regression fits are always consistent with negative slopes. It is clear, however, that for Seyferts in particular there is no evidence for either positive or negative correlation between $\Gamma$ and $L/L_{\rm edd}$. This has been seen in other samples as well \citep{winter09}, and it is an important result. Nevertheless, investigations of the LLAGN phenomenon should not be restricted to these types of sources only. 
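For illustration, the statistical tests just described can be reproduced schematically as follows. The original fits used the IDL {\tt mpfit} routine; the sketch below performs an equivalent error-weighted $\chi^2$ minimization and a Spearman-rank test with {\tt scipy}, on purely synthetic numbers, and is not the code used for Tables ~\ref{spearman} and ~\ref{lsf}.
\begin{verbatim}
import numpy as np
from scipy import stats, optimize

def spearman_and_weighted_fit(log_edd, gamma, gamma_err):
    """Spearman-rank test plus an error-weighted linear fit
    Gamma = a + b * log(L/L_edd), analogous to the mpfit-based fit in the text."""
    rho, p_chance = stats.spearmanr(log_edd, gamma)

    def chi2(params):
        a, b = params
        return np.sum(((gamma - (a + b * log_edd)) / gamma_err) ** 2)

    res = optimize.minimize(chi2, x0=[2.0, 0.0], method="Nelder-Mead")
    a, b = res.x
    return {"rho": rho, "p": p_chance, "a": a, "b": b,
            "chi2": chi2(res.x), "dof": len(gamma) - 2}

# Toy example with a mildly negative trend (illustrative numbers only):
rng = np.random.default_rng(0)
x = rng.uniform(-5, -1, 40)                  # log (L/L_edd)
gerr = rng.uniform(0.1, 0.6, 40)             # 1-sigma errors on Gamma
g = 1.0 - 0.3 * x + rng.normal(0, gerr)      # Gamma values
print(spearman_and_weighted_fit(x, g, gerr))
\end{verbatim}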
In an attempt to provide some more physical insights into the reality and significance of this new trend, we also explore here the possible correlations between various X-ray measures that might influence (if not artificially create) it. In particular, we scrutinize the way our measured $\Gamma$ relates to the number of counts, the X-ray flux and luminosity. Some studies showed that the photon index correlates with the X-ray luminosity, becoming softer in more luminous sources, whether measured in soft, $0.2 - 2$ keV \citep{forster96, lu99, gierlinski04, williams04}, or hard, $2-10$ keV \citep{dai04, porquet04, wang04} bands only, or comprising the whole spectrum, and even as they vary \citep{chiang00, petrucci00, vaughan01}. However, because many others have not found such trends, the idea that the choice of the sample involved in these studies may contribute to producing the correlations has also been put forward. Interestingly, our ChaMP sample does not show this correlation either. Figure ~\ref{gammal} illustrates the dependence of $\Gamma$ measured in the $0.5 - 8$ keV range as a function of the total number of counts in this energy range, the X-ray flux, and the X-ray luminosity. As before, we emphasize the different optical spectral classifications; the fact that, e.g., the various types of galaxies separate rather well from each other in diagrams like this suggests that the correlation is physical, and most probably related to the accretion physics in these nuclei, rather than an artificial effect of fitting various (simple) models to their X-ray spectra. \subsection{Comparison with high $z$ QSOs} \label{qsocorrel} The anti-correlation between $\Gamma$ and $L/L_{\rm edd}$ (or simply, $L_X/M_{\rm BH}$) that we find to characterize the nuclear emission of nearby galaxies is certainly surprising. This trend is opposite to what more luminous galaxy nuclei, i.e., quasars, exhibit: their X-ray spectra soften as they become more luminous. Figure ~\ref{gamma_edd_qso} shows the $\Gamma - L/L_{\rm edd}$ anti-correlation followed by the low luminosity galaxy nuclei along with measurements of $\Gamma$ and $L/L_{\rm edd}$ for a sample of SDSS quasars with optical spectra that are ChaMP detected and X-ray analysed in the same manner we handled our sample of nearby galaxy nuclei \citep{gre09}. The quasar $L/L_{\rm edd}$ values are obtained using the average bolometric correction of $L/L_X = 83$ estimated by \citet{ho08}, with no luminosity-dependence, and the black hole masses from \citet{shen08}. To ease comparison with previous work on quasars, we use in this plot the $2 - 10$ keV $L_X$ luminosity, which is obtained by extrapolating the measured unobscured $L_{\rm 0.5 - 8 keV}$ value, using the best $\Gamma$ measurements. Note that the anti-correlation between $\Gamma$ and $L/L_{\rm edd}$ that nearby galaxy nuclei show is even more pronounced when the $2 - 10$ keV $L_X$ is plotted against $\Gamma$. This is expected, as softer objects will be less luminous in $2-10$ keV than in the $0.5 - 8$ keV range, while the hard objects will be more luminous. The magnitude of this effect, i.e., the ratio of the two $L_X$ values is a function of the photon index $\Gamma$, as given by \begin{equation} L_X(2 - 10\ keV) = L_X(0.5 - 8\ keV) \times \frac{10^{2-\Gamma} - 2^{2-\Gamma}}{8^{2-\Gamma} - 0.5^{2-\Gamma}}. 
\end{equation} The largest difference, and thus the most significant effect on the shape of the $\Gamma - L/L_{\rm edd}$ trend, is for the softest ($\Gamma \ga 3$) objects, however it does not exceed $\sim1$ dex. Note that for these particular objects the measurement errors are also among the highest, and hence contributed the least weight to the best-fit $\Gamma - L/L_{\rm edd}$ relation. The ChaMP quasars fall well within the locus of values expected based on previous work. The ChaMP quasars do not show, however, a clear $\Gamma - L/L_{\rm edd}$ positive correlation. The ChaMP quasars span a wide range in redshift and their $L/L_{\rm edd}$ measurements reflect a mix of BH mass estimates based on all H$\beta$, \ion{Mg}{2}, and \ion{C}{4}, which may add scatter to an underlying correlation \citep{shen08}. We show for comparison the results of linear regression fits of the $\Gamma - L/L_{\rm edd}$ correlation reported by \citet{she08} and by \citet{kelly08}, with the results for the $H\beta$ and \ion{C}{4} based estimates of the $M_{bh}$ shown separately. Note that we use here these data only for the purpose of global comparison of the quasar properties with those of the nearby galaxy nuclei, and do not attempt to improve upon the previous work on the characterization or calibration of the $\Gamma - L/L_{\rm edd}$ relation for quasars. Clearly, the quasar $\Gamma$'s are not negatively correlated with their Eddington ratios, even with the shift in $L_X$ produced by the energy conversion mentioned above, which might contribute to such trend (the quasar X-ray data is analysed and measured via the same techniques employed for the nearby galaxy nuclei). Thus, over the whole $L/L_{\rm edd}$ range we explore here by putting together luminous and weak AGN, there is clearly a break in the $\Gamma - L/L_{\rm edd}$ correlation, which seems to happen at $L/L_{\rm edd} \approx 10^{-2}$, and $\Gamma \ga 1.5$. It is important to note that, for the samples of quasars investigated by both \citet{kelly08} and \citet{she08}, the scatter in the claimed correlation between $\Gamma$ and $L/L_{\rm edd}$ is largest as the samples reach the weakest accretion rates (while hardly reaching the $L/L_{\rm edd} \approx 10^{-2}$ level), and the hardest values ever measured for the quasar photon index $\Gamma \ga 1.5$; while suggestive of a break, the quasar data alone cannot however be used to single it out. Other studies that include measurements of lower luminosity managed to point out these types of deviations from the expected correlation \citep{zhang08, gre09}, however, the data remained sparse at those levels of accretion, leaving the effect unquantified. We note here that the comparison of the nearby galaxies' central X-ray emission with that of the high redshift QSOs may be inadequate since X-rays from SF activity is not yet accounted for in the former. We addressed this issue by including in the modelling of the original X-ray spectrum (Section ~\ref{xanalysis}) a $\Gamma=2$ component with the expected $L_X(SF)$. A power-law with $\Gamma=2$ provides the best simple description of the mix of hot gas and high mass X-ray binaries (HMXBs) that comprise the SF activity \citep[e.g.,][]{kim92a, kim92b, nandra94, ptak99, george00, colbert04, reddy04, lehmer05}. To estimate the potential $L_X(SF)$ in each galaxy, we use star-formation rates (SFR) calculated and available for these objects in the MPA/JHU catalog (Section ~\ref{sample}; Brinchmann et al. 
2004), along with the $L_X - SFR$ correlation quantified by Gilfanov, Grimm, \& Sunyaev (2004). There are 83 objects with total $L_X$ above the line describing the $L_X - SFR$ correlation, for which we modelled the remaining, presumably AGN, component. This reanalysis produces significant deviations from the $\Gamma - L_X/L_{\rm edd}$ correlation only for 6\% of the sample (5 out of the 83 objects involved in this analysis). The scatter and the error bars for both $\Gamma$ and $L_X$ are only slightly increased. The slope of the correlation flattens but remains consistent within the errors with our previous measurements. \subsection{The ``break'' in the $\Gamma - L/L_{\rm edd}$ correlation and comparison with XRBs} \label{break} The X-ray photon index $\Gamma$ and the Eddington ratio $L/L_{\rm edd}$ show a double-sloped relation, with positive and negative correlations above and below $L/L_{\rm edd} \approx 0.01$ and $\Gamma \ga 1.5$ respectively. We see now that a given AGN spectral index $\Gamma$ may correspond to two different luminosity levels, with the luminosity difference greater for sources characterized by softer spectra (Figure ~\ref{gamma_edd_qso}). This break in the $\Gamma - L/L_{\rm edd}$ relation, when studied over a large range of accretion power, fits well into the theoretical ideas of a transition in the AGN accretion mode: a standard \citep{sha73} accretion disk/corona at high Eddington rates, i.e., the quasar phase, and an ADAF \citep{nar94} at low $L/L_{\rm edd}$. This break we find in the $\Gamma - L/L_{\rm edd}$ relation may provide the best empirical evidence to date for such a transition. The inflection point in the $\Gamma - L/L_{\rm edd}$ relation is dominated by Ts, consistent with the optical nature of these systems. While hypothesized to be the result of mixed AGN and SF ionization \citep{ho03, con09}, their AGN component could be either S- or L-like. Ss appear to be the low $L$ quasar-like, efficiently accreting systems, while the Ls are the ADAF objects, so the Ts' locus at the break region is suggestive of a switch in the accretion mode. The optically-defined Ts are then transition systems that map the X-ray inflection as well. Note that this idea of a transition in the accretion mode, from ADAF to standard disk as the accretion rate increases, was first proposed, and later developed, mainly based on results of investigations of smaller BH mass accretors, i.e., stellar mass black-hole X-ray binaries, or XRBs; see \citet{narayan08} for a recent review. Interestingly, the relation between $\Gamma$ and $L/L_{\rm edd}$ that XRBs exhibit in the different phases of their temporal variability is also multivalued: a given spectral index $\Gamma$ may correspond to two different luminosity levels, with the luminosity difference greater for sources characterized by softer spectra. That is, the XRBs show a positive $\Gamma - L/L_{\rm edd}$ correlation while in their high/soft states \citep{kub04}, and an anti-correlation while in their low/hard states \citep{yamaoka05}. Several clear examples of such a turn (or convergence point) in the $\Gamma - L/L_{\rm edd}$ relation measured in XRBs are illustrated in, e.g., \citet{yuan07} and \citet{wu08}. In terms of the physical phenomenology governing the black hole accretion process over more than 10 orders of magnitude in the Eddington ratio, the transition from ADAF to standard disk accretion seems to adequately account for both the XRB and AGN emission properties. 
The ADAF scenario qualitatively explains the anti-correlation via Comptonization of thermal synchrotron photons as the dominant cooling mechanism at low $L/L_{\rm edd}$ ratios: as the accretion rate increases, the optical depth and hence the Compton $y$-parameter increase, and the X-ray spectrum becomes harder \citep{esin97}. A further increase in the accretion rate would cause both an increase in the released energy and a decrease in the electron temperature, weakening the corona and consequently lowering the optical depth, reducing the $y$-parameter and leading to softer spectra, and thus to the positive $\Gamma - L/L_{\rm edd}$ correlation \citep{janiuk00}. The ADAF scenario has been relatively successful in explaining a variety of the LLAGN properties \citep{ho08}, while the standard disk-corona model has been widely invoked to explain quasar emission. Our finding of a non-monotonic $\Gamma - L/L_{\rm edd}$ relation seems to provide the first direct empirical link between these two different types of accretion in AGN. The AGN-XRB physical analogy has been discussed rather extensively \citep{mac05}, and is particularly supported by the two ``fundamental planes'' of the BH activity on all mass scales, the $L_{\rm radio} - L_X - M_{bh}$ \citep{merloni03, falcke04} and the $L_{\rm bol} - M_{bh} - T_{\rm break}$ \citep{mchardy06}. There was, however, not much evidence for the existence of spectral states in massive black holes similar to those of the stellar black holes. Our discovery of an anti-correlation between $\Gamma$ and $L/L_{\rm edd}$, and thus the discovery of a turning point in the $\Gamma - L/L_{\rm edd}$ relation for AGN, provides this evidence, which constitutes an important empirical constraint on the idea that these systems are really the analogs of each other, in spite of the vast difference in scales. \section{Conclusions \& Discussion} This study of serendipitous Chandra detections of nearby sources brings together for the first time a large homogeneous sample of active and inactive galaxy nuclei selected and classified based on their optical spectral properties. With a minimal selection bias, we characterize the X-ray properties of low luminosity AGN via measurements of the X-ray spectral shape, fluxes, and luminosities. These measurements add important information and provide new constraints on the proposed {\it H II $\rightarrow$ S/T $\rightarrow$ L $\rightarrow$ P} galaxy evolutionary sequence. Optical observations reveal that, at least in statistical terms, along this sequence, (1) the host halo mass increases, (2) the environmental density increases, (3) both central BH mass and stellar mass increase, while (4) the rate of accretion onto the central BH decreases, (5) the stellar population ages, and (6) the material that could be used for accretion and/or star-formation is less and less available. The X-ray data support these evolutionary trends and bring surprising new insights into the nature of the LLAGN phenomenon. There are two main results of this analysis that we want to emphasize here: \begin{itemize} \item The {\it (H II $\rightarrow$) Seyfert $\rightarrow$ Transition Object $\rightarrow$ LINER $\rightarrow$ Passive Galaxy} sequence suggested by a large variety of optical measures is supported by X-ray measurements. 
Both the spectral shape and the accretion power, as measured by $L_X$ and the Eddington ratio $L/L_{\rm edd}$, with $L=L_{\rm bol} = 16 \times L_X$, show a clear trend toward softer, less X-ray luminous and less actively accreting sources from $S$s to $T$s, to $L$s, and, at the end, the $Passive$ galaxies. The rather ambiguous (in some optical properties) succession of $S$ and $T$ phases is now significantly constrained by the X-ray activity to follow in a sense of decreasing accretion power. \item There is a rather strong anti-correlation between the shape of the X-ray spectral energy distribution, quantified via the power-law index $\Gamma$, and $L/L_{\rm edd}$. This finding translates into a break in the $\Gamma-L/L_{\rm edd}$ correlation exhibited by AGN of all powers, with spectral softening on either side of $L/L_{\rm edd} \approx 0.01$. The transition point is identical to that where stellar mass BH accretors (XRBs) exhibit their turn in analogous $\Gamma-L/L_{\rm edd}$ trends. \end{itemize} The distribution of points within the observed $\Gamma-L/L_{\rm edd}$ relation exhibited by both weak and powerful AGN might have other implications as well. The presence of some LLAGN below the $\Gamma \la 1.5$ turning point, which are mostly Seyferts, suggests that obscuration might also play a role in shaping the observed $\Gamma-L/L_{\rm edd}$ trends at low and high $z$. These particular objects' rather hard photon indices must be the indication of gas absorption of their $0.5 - 8$ keV spectra. Their Eddington ratios huddle near the $10^{-2}$ level, being generally weak when compared to the luminous quasars, and among the strongest LLAGN. Thus, the anti-correlation between $\Gamma$ and $L/L_{\rm edd}$ shown by the nearby galaxy nuclei may well be interpreted as a relation between absorption and accretion rate, the objects accreting at higher rates being more obscured. The probably naive extrapolation of this idea at higher $L/L_{\rm edd}$ suggests then that more active AGN are also more absorbed. Consequently, these increasingly absorbed systems, that would be the type 2 ones [according to the AGN "unification" scenario that separates (the observed appearance of) AGN in terms of orientation relative to the line of sight] would be decreasingly likely to be included in the (current) optically selected AGN/quasar samples. This interpretation is certainly consistent with the general results of various quests for type 2 quasars: their X-ray spectra are harder than their type 1 counterparts \citep{zakamska04, ptak06}, and they are highly obscured \citep{zakamska05}. The type 2 quasars are definitely scarce compared with the type 1, and their fraction relative to that of the type 1 ones decreases with increasing luminosity \citep{reyes08}. Thus, this may as well be the explanation for the dearth of type 2 quasars. If the type 1-2 dichotomy and consequently the "unification" are only about the observing angle, the AGN-galaxy evolutionary sequence suggested by the properties of the different types of nearby galactic nuclei should be even stronger once inclination effects are removed, as we would have a clearer view of the central engine. On the other hand, the 1-2 type separation, and thus the unification, might be the result of evolution. Some simulations suggest that for the luminous quasars, the type 1 (unobscured) phase comes after a phase of "blowing-out" circumnuclear matter, which might mean after the quasars were observable as type 2. 
By adding type 1 LLAGN to investigations of the $\Gamma-L/L_{\rm edd}$ connection we might be able to understand better the way the proposed evolutionary sequence does or does not challenge the unification scenario. Also, the exact location of the turning point in the $\Gamma-L/L_{\rm edd}$ relation remains to be better quantified in terms of both parameters. Particular caution is needed when combining the $L/L_{\rm edd}$ measurements of LLAGN with those of quasars, mainly because the methods used in estimating their BH masses are not necessarily compatible. The BH masses of LLAGN are based on the $M_{bh}-\sigma_*$ relation, while the (usually high $z$) quasar BH masses are obtained from the widths of the optical broad emission lines via scaling relations; the scaling relations are calibrated on $M_{bh}-\sigma_*$, which, recent work suggests, does not necessarily hold at high $z$. Another way of refining the exact location of the break lies of course in better estimates of $\Gamma$, i.e., higher quality (higher signal-to-noise) X-ray measurements, and/or larger samples. The latter alternative, in particular, seems to be feasible, as larger and larger well-characterized samples of optically selected AGN become available for cross-correlation with X-ray detections from, e.g., serendipitous surveys like ChaMP. Future work (Constantin \& ChaMP 2009, in prep.) will explore the multi-wavelength properties of a large sample of AGN that brings together nearby LLAGN of type 2 with nearby quasars (type 1 AGN), thus attempting to reconcile both the type 1-2 dichotomy and the problem of mismatched BH mass estimates, along with providing larger statistics, and better means of quantifying extrinsic effects such as absorption and non-thermal processes (i.e., Comptonized emission from the accretion disk's corona) that enable improved constraints on the analogy with XRBs. We will also address in this work the biases that are potentially present in the previously claimed relationships between $L_X$ and optical emission line luminosities for LLAGN, together with the impact of using optical emission lines to estimate $L_{\rm bol}$, and thus on the overall shape of the $\Gamma-L/L_{\rm edd}$ relation. \acknowledgements AC thanks Christy Tremonti for valuable discussions regarding the MPA/JHU catalog. Support for this work was provided by the National Aeronautics and Space Administration through {\em Chandra} Award Number AR7-8015A issued by the {\em Chandra} X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration under contract NAS8-03060.
\section{Introduction} Evolutionary computing relies on the evolution of candidate solutions over a finite number of generations to obtain accurate solutions for complex optimization problems. Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO), among other Bio-inspired algorithms (BAs), have been applied to successfully solve a diversity of such problems in engineering and the sciences. BAs guide the evolution of a population of individuals (candidate solutions) to improve their fitness to achieve a feasible solution to the target problem. BAs apply specialized computational rules to promote individual information exchange to the benefit of the population. However, optimization through such BA approaches demands exhaustive computations and efficient computational resources. Parallelization is one of the natural mechanisms applied to speed up and improve the accuracy of solutions obtained by BAs. This work studies Parallel Island Models (PIMs) that partition the population among their islands (processors) and simultaneously run a BA in each island. In addition to the exchange of individuals within each island, PIMs promote migration between islands. When all islands run the same BA, such models are called homogeneous PIMs (HoPIMs). This work improves heterogeneous PIMs (HePIMs) \cite{Lucas2021}, in which islands may run different BAs, by allowing algorithmic \textit{reconfiguration} on their islands, i.e., islands may dynamically update their BAs. In addition to an adequate and well-calibrated migration policy required by HePIMs, reconfigurable HePIMs exchange information between islands to decide how islands should reconfigure their BAs. Silveira {\em et al.} \cite{Lucas2021} introduced HePIMs running four different BAs in their islands, namely, Genetic Algorithm (${\mbox{\texttt{GA}}}$), double-point crossover GA (${\mbox{\texttt{GAD}}}$), Differential Evolution (${\mbox{\texttt{DE}}}$), and self-adjusting Particle Swarm Optimization (${\mbox{\texttt{PSO}}}$) (see e.g., \cite{Holland1973}, \cite{DE1970}, and \cite{eberhart1995particle}). PIM performance depends on a good calibration of the breeding-cycle parameters (related to the involved BA) and, in addition, on vital aspects of the parallel island architecture such as island communication synchronism, island migration frequency, communication topology, and migration policy. We select two successful asynchronous HePIMs from \cite{Lucas2021}, maintaining their parameters and adding the reconfiguration frequency. The new reconfigurable HePIMs are tested in solving the unsigned reversal distance problem (URD), an ${\cal NP}$-hard problem (\cite{Caprara1997sorting}). Approaches to solve URD are applied in combinatorics to explain algebraic aspects of permutations (\cite{DELIMA201859}) and, in genomics, to analyze the evolutionary distance between genomes (\cite{kececioglu1993exact}), also represented as permutations. \vspace{2mm} \noindent{\bf Main contributions.} We design new reconfigurable HePIMs that run ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$, and ${\mbox{\texttt{PSO}}}$\ in their islands using two successful asynchronous topologies from \cite{Lucas2021}. Non-reconfigurable HePIMs computed competitive solutions with respect to most HoPIMs. The new reconfigurable architectures showed promising results, computing solutions that exceed the quality of pure HePIMs and are very competitive with respect to the best-adapted HoPIMs, namely, the ones that run ${\mbox{\texttt{DE}}}$\ in all their islands. 
The heterogeneity of the new model effectively shares, through the migration policy, the good results experienced by individuals guided by different BAs in each island with the whole architecture. Furthermore, the reconfiguration ability spreads the good experiences of the BAs across the islands of the model. Adding the reconfiguration capability, the new model exceeds the flexibility of HoPIMs (all islands may update their BA to a unique BA) and of HePIMs (reconfiguration may update island BAs to the fixed configuration of any non-reconfigurable HePIM). \vspace{2mm} \noindent{\bf Organization.} Sec. \ref{sec:background} discusses PIMs, the unsigned reversal distance problem, and the four selected BAs: ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$, and ${\mbox{\texttt{PSO}}}$. Sec. \ref{sec:topologies} introduces the new reconfigurable HePIMs, explaining how different BAs are reconfigured. Then, Sec. \ref{sec:experimentsaccuracy} presents experiments and discusses accuracy results and statistical analysis. Finally, Sec. \ref{sec:relatedwork} presents related work before Sec. \ref{sec:conclusion}, which concludes and discusses future work. The source code and data used in the experiments are available at \href{http://genoma.cic.unb.br}{http://genoma.cic.unb.br}. \section{Background}\label{sec:background} \subsection{Parallel island model (PIM)}\label{ssec:pim} PIMs were proposed initially for GAs \cite{Crainic2003} and, besides improving speed-up, it is expected that such models also boost the quality of the solutions provided by a sequential ${\mbox{\texttt{GA}}}$. The population is distributed into islands, whose number is determined by the developer, and the islands run their BAs in parallel. The connection between the islands establishes the model's topology. \textit{Static} PIMs maintain the connections fixed during the execution, whereas \textit{dynamic} models admit changes during the process. Linked islands exchange individuals to evolve. Such a transfer can be uni- or bi-directional. Different topologies and strategies to implement them are available (e.g., \cite{Duarte2020,Lucas2020,Sudholt2015parallel}). \textit{Homogeneous} PIMs execute the same BA in all islands, whereas \textit{heterogeneous} models admit different BAs running in their islands. Figure \ref{fig:heterogeneousBTree} illustrates a heterogeneous static bi-directional tree topology. The edges show the connections between islands and remain unchanged, while vertices represent the islands with their BAs. A \textit{migration policy} guides the exchange of individuals between islands during the evolutionary process. PIMs have breeding-cycle and migration parameters tuned to improve the quality of solutions. In the following, the migration parameters are briefly presented. Some of them consider the classification of individuals as {\bf best}, {\bf worst} and {\bf random}, based on a rank established according to their fitness. The first half of the rank corresponds to the best and the second half to the worst individuals, whereas random individuals are selected randomly. \begin{itemize} \item Individuals Number ({\it IN}): the number of individuals emigrating from each island. \item Emigrant Individuals ({\it EMI}): rules the type of individuals selected for emigration among: 1. {\bf best}, 2. {\bf worst}, and 3. {\bf random}. \item Immigrant Individuals ({\it IMI}): determines the type of individuals in the target island replaced by immigrants among: 1. {\bf worst}, 2. {\bf random}, and 3. {\bf similar}. 
Similar individuals have the same classification as their replacement immigrants according to the fitness rank. \item Emigration Policy ({\it EP}): defines whether individuals are {\bf cloned} in, or {\bf removed} from, the local island when they emigrate to the target island. \item Migration Interval ({\it MI}): corresponds to a percentage of iterations of the evolutionary process, called generations, after which the migration process is redone. Each island separately evolves its population by $\textit{MI} \times \textit{maxIt}$ generations, where \textit{maxIt} is the total number of iterations performed by each BA. \end{itemize} PIMs are classified according to the synchrony with which islands evolve their populations. In \textit{synchronous} PIMs, islands evolve by performing each generation simultaneously, whereas, in \textit{asynchronous} PIMs, islands evolve independently even during migration. The latter mimics the behavior found in nature. Here, we introduce reconfigurable heterogeneous PIMs. First, at a fixed generation percentage, called {\it Reconfiguration Frequency (RF)}, it is checked which islands have the best and the worst solutions according to a metric based on the fitness average and variance of their populations. Then, the worst island updates its BA to the BA applied by the best island. \subsection{Case-study}\label{subsec:case} The evolutionary distance between two organisms can be computed as the number of rearrangements needed to transform a genome into another one by using some evolutionary measure. In this work, we consider the minimum number of reversals to compute the distance between unichromosomal organisms. Permutations on $\{1,\cdots,n \}$ represent genomes containing $n$ genes. Given a genome $\pi=(\pi_1, \pi_2, ..., \pi_n)$, where $1\leq i, \pi_i\leq n$, a reversal $\rho^{j,k}$, for $1\leq j \leq k \leq n$, transforms $\pi$ into $\pi'=(\cdots, \pi_{j-1},\pi_k,\cdots,\pi_j, \pi_{k+1},\cdots)$, that is, it inverts the elements between $\pi_j$ and $\pi_k$. If the orientation of the genes is known, each one receives a positive or negative sign, and the genome is a signed permutation. There are two evolutionary problems related to computing the distance by reversals. The signed reversal distance (SRD) problem asks for the minimum number of reversals needed to transform a signed permutation into another. On the other hand, the unsigned reversal distance (URD) problem consists of computing such a number between unsigned permutations, for which the orientation of the genes is unknown. It is well-known that SRD belongs to class $\cal P$ \cite{Hannenhall1999}, whereas URD is an ${\cal NP}$-hard problem \cite{Caprara1997sorting}. Our models are applied to solve URD. The fitness used by the algorithms is computed over signed permutations, generated after a random assignment of signs to each gene of a permutation. \subsection{Local Evolutionary Engines -- Bio-inspired Algorithms} Four BAs, widely used for analyzing optimization problems and having distinct adaptability characteristics, are applied. \begin{itemize} \item Simple Genetic Algorithm (${\mbox{\texttt{GA}}}$): to evolve the local population, ${\mbox{\texttt{GA}}}$\ considers a breeding cycle where the best parents are selected and produce offspring by applying one-point crossover (Fig. \ref{fig:crossover} (a)). Then, the descendants replace the worst individuals in the current population. 
The breeding cycle relies on four parameters, namely, the percentages of {\it selection} and {\it replacement}, and the probabilities of applying {\it mutation} and {\it crossover}. ${\mbox{\texttt{GA}}}$\ was developed by J. H. Holland in the 1970s \cite{Holland1973}. \item Double-point Crossover Genetic Algorithm (${\mbox{\texttt{GAD}}}$): it behaves similarly to ${\mbox{\texttt{GA}}}$, except for the technique used to promote {\it crossover}, illustrated in Fig. \ref{fig:crossover} (b), and for how the local population evolves: in contrast with ${\mbox{\texttt{GA}}}$, in ${\mbox{\texttt{GAD}}}$\ the descendants replace randomly selected individuals. \item Differential Evolution (${\mbox{\texttt{DE}}}$): it was proposed by Storn and Price \cite{DE1970}, and is a method to optimize functions over the real multidimensional space $\mathbb{R}^n$. We adapt the algorithm by restricting the domain of the function to the set of permutations. Two main parameters guide the evolutionary process: the {\it mutation factor} $F_M$, applied to individuals randomly selected from the population to generate mutants, and the {\it probability of crossover} $P_C$. The local population evolves by replacing individuals having the worst fitness with mutants. \item Self-adjusting Particle Swarm Optimization (${\mbox{\texttt{PSO}}}$): it was introduced by Eberhart and Kennedy \cite{eberhart1995particle} and is based on the behavior of social organisms in groups. Originally, ${\mbox{\texttt{PSO}}}$\ was developed to optimize continuous functions from particles' velocity and position in an $n$-dimensional space. At each iteration, the vector representing a particle's velocity is built from the best positions of such a particle and all particles, the velocity of the particle in the previous iteration, the individual and global acceleration (which influence the distance a particle can cover), and the weight of inertia (momentum). In this paper, we use the ${\mbox{\texttt{PSO}}}$\ proposed in \cite{pso2011}, which is self-adaptive since momentum and the individual and global acceleration coefficients are self-tuned during the search process. \end{itemize} To adapt ${\mbox{\texttt{PSO}}}$\ and ${\mbox{\texttt{DE}}}$\ to URD, each randomly generated $n$-dimensional real vector $v$ is associated with a signed permutation constructed from the unsigned permutation $\pi = (\pi_1, \ldots, \pi_n)$ given as input: if the $i$-th entry of $v$ belongs to the interval $[0, 0.5)$, then $\pi_i$ receives a negative orientation; if such a coordinate is in $[0.5, 1]$, then $\pi_i$ is assigned a positive orientation. However, if the continuous vector representation has an entry outside of the interval $[0,1]$, the corresponding orientation is randomly generated. For ${\mbox{\texttt{GA}}}$\ and ${\mbox{\texttt{GAD}}}$, the orientation of the genes in each individual is randomly generated as $\pm 1$. After the transformation of an unsigned into a signed permutation, the linear-time algorithm to solve the SRD problem, proposed by Bader \emph{et al.} \cite{Bader2001linear}, computes the fitness of each particle/individual. 
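To make the encoding concrete, the following illustrative sketch (Python; the actual implementation of this work is in {\tt C} with {\tt MPI}) shows the reversal operation of Section \ref{subsec:case} and the mapping from a continuous ${\mbox{\texttt{PSO}}}$/${\mbox{\texttt{DE}}}$\ vector to a signed permutation; the fitness function is left as a stub standing in for the linear-time SRD algorithm of Bader \emph{et al.} \cite{Bader2001linear}.
\begin{verbatim}
import random

def reversal(pi, j, k):
    """Apply rho^{j,k} (1-based, inclusive): invert the order of the elements
    between positions j and k, as defined in the text.  For signed permutations
    the signs of the reversed segment would also be flipped."""
    return pi[:j - 1] + list(reversed(pi[j - 1:k])) + pi[k:]

def signed_from_vector(pi, v):
    """Assign orientations to the unsigned permutation pi from a real vector v:
    entries in [0, 0.5) give a negative sign, entries in [0.5, 1] a positive
    sign, and out-of-range entries get a random orientation (as in the text)."""
    signed = []
    for gene, x in zip(pi, v):
        if 0.0 <= x < 0.5:
            signed.append(-gene)
        elif 0.5 <= x <= 1.0:
            signed.append(gene)
        else:
            signed.append(random.choice((-1, 1)) * gene)
    return signed

def srd_fitness(signed_pi):
    """Stub: in the actual model the fitness is the signed reversal distance,
    computed with the linear-time algorithm of Bader et al. (2001)."""
    raise NotImplementedError

# Example: a 5-gene unsigned permutation and a PSO/DE position vector
pi = [3, 1, 5, 2, 4]
v = [0.1, 0.7, 1.3, 0.49, 0.5]
print(signed_from_vector(pi, v))        # e.g. [-3, 1, 5, -2, 4] (third sign random)
print(reversal([1, 2, 3, 4, 5], 2, 4))  # [1, 4, 3, 2, 5]
\end{verbatim}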
\begin{figure}[!ht] \centering {\scriptsize \[\begin{array}{c} \mbox{(a) One-point crossover}\\[2mm] \begin{array}{c} \begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|} \hline 1&2&3&4&5&6&7&8 \\ \hline \end{array}\\[2mm] {\color{cyan}\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|} \hline 5&8&6&4&3&1&2&7 \\ \hline \end{array}} \end{array} {\color{blue} \huge\leadsto} \begin{array}{c} \begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|} \hline 1&2&3&{\color{cyan}4}&{\color{cyan}3}&{\color{cyan}1}&{\color{cyan}2}&{\color{cyan}7} \\ \hline \end{array}\\[1mm] \begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|} \hline {\color{cyan}5}&{\color{cyan}8}&{\color{cyan}6}&4&5&6&7&8 \\ \hline \end{array} \end{array} \end{array} \] \[\begin{array}{c} \mbox{(b) Double-point crossover}\\[2mm] \begin{array}{c} \begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|} \hline 1&2&3&4&5&6&7&8 \\ \hline \end{array}\\[2mm] {\color{cyan}\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|} \hline 5&8&6&4&3&1&2&7 \\ \hline \end{array}} \end{array} {\color{blue} \huge\leadsto} \begin{array}{c} \begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|} \hline 1&2&3&{\color{cyan}4}&{\color{cyan}3}& 6&7&8\\ \hline \end{array}\\[1mm] \begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|} \hline {\color{cyan}5}&{\color{cyan}8}&{\color{cyan}6}&4&5&{\color{cyan}1}&{\color{cyan}2}&{\color{cyan}7} \\ \hline \end{array} \end{array} \end{array} \]} \vspace{-4mm} \caption{One-point and double-point crossing operators.} \label{fig:crossover} \end{figure} \section{Communication Topologies} \label{sec:topologies} We select a static and a dynamic topology that successfully addressed URD in \cite{Lucas2020} for homogeneous PIMs and in \cite{Lucas2021} for non reconfigurable heterogeneous PIMs. 
We choose asynchronous models since the dynamic asynchronous HePIM was the one that provided the best results. The static topology is a 12-island bi-directional binary tree (tree to the left in Figure \ref{fig:heterogeneousBTree}), and the dynamic topology is the 12-island complete graph (graph to the left in Figure \ref{fig:heterogeneousCGraph}). In the complete graph topology all pairs of islands may exchange individuals. The dynamism of the island communication is obtained by exploring the diversity and quality of each island, measured by the fitness variance and fitness average metrics. The variance measures an island's diversity: a high variance represents a high diversity of individuals, improving the chances of evolution within the island. The fitness average measures the quality of the island populations. According to these metrics, the islands are ranked as {\bf good}, {\bf bad}, and {\bf medium}. Migrations exchange individuals only between good and bad islands and between medium and medium islands (for short, {\it gbmm}). Reconfiguration uses the same metrics to update the BA executed by each island, according to the best performance experienced by other islands. So, reconfigurable HePIMs perform migration and updating of BAs at certain intervals during their evolution. The models introduced in this paper are ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$. The former uses the static tree topology and the latter the dynamic complete graph topology. Both models are asynchronous and evolve through a refined migration policy that allows the exchange of individuals while maintaining the diversity of the model and, furthermore, through the new feature of dynamic reconfiguration, which allows updating the BAs executed in their islands, thus improving the performance of the island model. Reconfiguration cycles for ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ are illustrated in Figures \ref{fig:heterogeneousBTree} and \ref{fig:heterogeneousCGraph}, respectively. \begin{figure*}[!ht] \centering \includegraphics[width=1\textwidth]{HeteroReconfigStatic.eps} \caption{Example of reconfiguration in the static binary tree topology. Red dotted nodes represent islands that have undergone the current reconfiguration, while black dotted nodes label islands that have undergone reconfiguration in previous cycles.} \label{fig:heterogeneousBTree} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=1\textwidth]{HeteroReconfigDynamic.eps} \caption{Example of reconfiguration in the dynamic complete graph topology. Red dotted nodes represent islands that have undergone the current reconfiguration, while black dotted nodes label islands that have undergone reconfiguration in previous cycles.} \label{fig:heterogeneousCGraph} \end{figure*} \section{Experiments and analysis of accuracy}\label{sec:experimentsaccuracy} As in \cite{Lucas2021}, all PIMs, including the new reconfigurable models, were implemented using the {\tt MPI} library of {\tt C} in Linux and, for the sake of comparison, experiments were executed on a computational platform with two Xeon E5-2620 2.4 GHz six-core processors with hyper-threading. The basis for comparing the performance of the PIMs are sequential versions of ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$\ and ${\mbox{\texttt{PSO}}}$\ with populations of size $24 n \log n$ and the number of breeding cycles fixed as $n$.
Also, we select eight 12-island asynchronous HoPIMs, designed in \cite{Lucas2020}, each running exclusively one of the BAs: ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$\ or ${\mbox{\texttt{PSO}}}$. Furthermore, we select two asynchronous HePIMs designed in \cite{Lucas2021}. The homogeneous models are ${\cal P}^{\mbox{\texttt{GA}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\texttt{GAD}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\texttt{PSO}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\texttt{GA}}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\texttt{GAD}}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$, and ${\cal P}^{\mbox{\texttt{PSO}}}_{\mbox{\tiny gbmm12A}}$. The superscript denotes the BA used by the homogeneous model; the subscript prefix indicates whether the model uses the static tree ({\tt Tr}) or the dynamic complete graph topology ({\tt gbmm}); and the subscript suffix {\tt 12A} indicates the number of islands and that the model is asynchronous. From \cite{Lucas2021}, we select the heterogeneous PIMs ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, the latter being the HePIM that provided the best quality results. \subsection{Parameter Setup} The parameters for BAs, HoPIMs and non-reconfigurable HePIMs obtained in \cite{Lucas2021} were used. The \emph{parameter tuning} adopted the ``taxonomy T1'' in \cite{EIBEN201119}. Table \ref{tab:parameters} presents the parameter ranges. For percentages, the tested values range between $2\%$ and $100\%$. For probabilities, the values range from $0.02$ to $1.0$, and for the mutation parameter from $0.01$ to $0.02$. For ${\mbox{\texttt{DE}}}$, the $F_M$ parameter ranges from $1\%$ to $2\%$ since values above $2\%$ degrade the quality of solutions. For ${\mbox{\texttt{PSO}}}$, the parameters that guide the particles in the search space are self-adjusting. The setup tested BAs, HoPIMs and HePIMs over packages of twenty $n$-gene permutations, $n \in \{50,60,\ldots,140,150\}$. All parameter reference values were evaluated and those that provided the best solutions were selected (see Tables \ref{table:parametersettingGA_GAD} and \ref{table:parametersettingDE_PSO}). HePIMs use the same evolutionary parameters as HoPIMs, and only the migration parameters were calibrated. Reconfigurable HePIMs add a reconfiguration frequency parameter to the associated HePIMs (see Table \ref{table:parametersettingHet}).
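Purely as an illustration of this tuning setup (the code below is our own simplified sketch, not the scripts actually used, and it shows an exhaustive sweep, whereas the actual procedure following taxonomy T1 may be organized differently), the candidate values of Table \ref{tab:parameters} for ${\mbox{\texttt{GA}}}$/${\mbox{\texttt{GAD}}}$\ can be enumerated and evaluated as follows, keeping the configuration with the smallest average number of computed reversals over a package of twenty $n$-gene permutations:
{\small
\begin{verbatim}
import itertools

# Candidate values as listed in the parameter table (GA/GAD);
# DE's P_C and F_M ranges would be handled analogously.
GRID = {
    "crossover":   [round(0.02 * k, 2) for k in range(1, 51)],       # 0.02 .. 1.0
    "mutation":    [round(0.01 + 0.001 * k, 3) for k in range(11)],  # 0.01 .. 0.02
    "selection":   [2 * k for k in range(1, 51)],                    # 2% .. 100%
    "replacement": [2 * k for k in range(1, 51)],                    # 2% .. 100%
}

def tune(evaluate, grid=GRID):
    # 'evaluate' maps a configuration to the average number of
    # reversals over the package of permutations (lower is better).
    best_cfg, best_score = None, float("inf")
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = evaluate(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
\end{verbatim}}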
\begin{table}[!t] {\small \caption{Estimated Values for the Parameters} \label{tab:parameters} \vspace{-3mm} \begin{center} \begin{tabular}{|c|c|c|} \cline{2-3} \multicolumn{1}{c|}{}& \multicolumn{1}{|c|}{Parameter}& \multicolumn{1}{|c|}{Estimated values}\\ \hline \multirow{4}{*}{${\mbox{\texttt{GA}}}$\ and ${\mbox{\texttt{GAD}}}$}&\mbox{\it crossover} & $0.02, 0.04,\cdots,0.98, 1.0$ \\ \cline{2-3} &\mbox{\it mutation} & $0.01, 0.011,\cdots,0.019, 0.02$\\ \cline{2-3} &\mbox{\it selection} & $2\%, 4\%,\cdots,98\%, 100\%$\\ \cline{2-3} &\mbox{\it replacement} & $2\%, 4\%,\cdots,98\%, 100\%$ \\ \hline \multirow{2}{*}{${\mbox{\texttt{DE}}}$} &\mbox{\it $P_C$} & $0.02, 0.04,\cdots,0.98, 1.0$ \\ \cline{2-3} &\mbox{\it $F_M$} & $1\%, 1.1\%,\cdots,1.9\%, 2\%$ \\ \hline \multirow{5}{*}{Migration} & \mbox{\it IN} & 1,2,3,4,5,6,7,8,9,10,11,12,13\\ \cline{2-3} &\mbox{\it EMI} & 1=Best, 2=Worst, 3=Random \\ \cline{2-3} &\mbox{\it EP} & 1=Clone, 2=Remove\\ \cline{2-3} &\mbox{\it IMI} & 1=Worst, 2=Random, 3=Similar\\ \cline{2-3} &\mbox{\it MI} & $2\%, 4\%,\cdots,98\%, 100\%$ \\\hline \end{tabular} \end{center}} \vspace{-6mm} \end{table} \begin{table}[!t] {\small \caption{Parameter Settings for ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, and associated HoPIMs.} \label{table:parametersettingGA_GAD} \vspace{-3mm} \begin{center} \begin{tabular} {|@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}}|} \hline \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{}& \multicolumn{2}{|c|}{${\cal P}^{\mbox{\texttt{GA}}}$}& \multicolumn{1}{|c|}{}& \multicolumn{2}{|c|}{${\cal P}^{\mbox{\texttt{GAD}}}$}\\ \hline \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{Parameter} & \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\texttt{GA}}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny Tr12A}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny gbmm12A}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\texttt{GAD}}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny Tr12A}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny gbmm12A}}$}\\ \hline {\it crossover} &$.90$ &$.98$ & $.96$ & $.92$ &$.98$ & $.98$\\ \hline {\it mutation} &$.02$ &$.015$ &$.011$ & $.01$ &$.01$ &$.01$ \\ \hline {\it selection} &$60\%$ &$92\%$ &$94\%$ & $98\%$ &$98\%$ &$94\%$ \\ \hline {\it replacement} &$60\%$ &$70\%$ &$70\%$ & $90\%$ &$80\%$ &$90\%$ \\ \hline \mbox{\it IN} & &9 &5 & &12 &5 \\ \hline \mbox{\it EMI} & &1 &1 & &1 &1 \\ \hline \mbox{\it EP} & &2 &2 & &2 &1\\ \hline \mbox{\it IMI} & &1 &1 & &1 &1 \\ \hline \mbox{\it MI} & &$30\%$ &$30\%$& &$14\%$ &$12\%$ \\\hline \end{tabular} \end{center} } \vspace{-5mm} \end{table} \begin{table}[!t] {\small \caption{Parameter Settings for ${\mbox{\texttt{DE}}}$, ${\mbox{\texttt{PSO}}}$, and associated HoPIMs.} \label{table:parametersettingDE_PSO} \vspace{-3mm} \begin{center} \begin{tabular} {|@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}}|} \hline \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} & \multicolumn{2}{|c|}{${\cal P}^{\mbox{\texttt{DE}}}$} & \multicolumn{2}{|c|}{${\cal P}^{\mbox{\texttt{PSO}}}$} 
\\ \hline \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{Parameter}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\texttt{DE}}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny Tr12A}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny gbmm12A}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny Tr12A}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny gbmm12A}}$}\\ \hline $P_C$ &$.74$ &$.72$ &$.78$ &&\\ \hline $F_M$ & $1\%$ &$1.4\%$ &$1\%$ &&\\ \hline \mbox{\it IN} & &3 &5 &6 &5\\ \hline \mbox{\it EMI} & &1 &1 &3 &3\\ \hline \mbox{\it EP} & &1 &2 &2 &2\\ \hline \mbox{\it IMI} & &1 &1 &1 &2 \\ \hline \mbox{\it MI} & &$14\%$ &$12\%$ &$12\%$ &$22\%$\\ \hline \end{tabular} \end{center}} \vspace{-5mm} \end{table} \begin{table}[!t] {\small \caption{Parameter Settings for HePIMs.} \label{table:parametersettingHet} \vspace{-3mm} \begin{center} \begin{tabular} {|@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}} |@{\hspace{0.2mm}}c@{\hspace{0.2mm}}|} \hline \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{Parameter}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$}& \multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$}\\ \hline \mbox{\it IN} &3 &3 &6 &6 \\ \hline \mbox{\it EMI} &1 &1 &3 &3 \\ \hline \mbox{\it EP} &2 &2 &1 &1 \\ \hline \mbox{\it IMI} &3 &3 &3 &3\\ \hline \mbox{\it MI} &$10\%$ &$10\%$ & $14\%$ & $14\%$ \\ \hline \mbox{\it RF} & & 14\% & &24\% \\ \hline \end{tabular} \end{center}} \vspace{-8mm} \end{table} \subsection{Analysis of Accuracy}\label{sec:analysis} HePIMs use parameters taken from Tables \ref{table:parametersettingGA_GAD}, \ref{table:parametersettingDE_PSO} and \ref{table:parametersettingHet} according to the parameter setting obtained in \cite{Lucas2021}. In addition, the new reconfigurable HePIMs performed reconfigurations after each 14\% and 24\% of the generations, giving a total of seven and four reconfiguration cycles, respectively, for the static tree and dynamic complete graph topologies. For each permutation size, $n \in \{100,110,\ldots,150\}$, one package of one hundred unsigned permutations with $n$ genes was randomly generated. All PIMs were executed ten times on each one of the one hundred permutations of size $n$, and the average of these executions for each permutation is taken as the result. The average gives the computed number of reversals for each unsigned permutation. The accuracies of non-reconfigurable and reconfigurable HePIMs are compared. The radar chart in Fig. \ref{fig:sequentialPIMs}, from previous experiments, shows that ${\mbox{\texttt{DE}}}$\ is the best adapted BA for the URD problem \cite{Lucas2021}. In contrast, ${\mbox{\texttt{PSO}}}$\ provided the worst results, while ${\mbox{\texttt{GA}}}$\ and ${\mbox{\texttt{GAD}}}$, in this order, gave competitive results. The six radii of the chart represent the accuracy for inputs of sizes $100, 110, \ldots, 150$.
The ranges and scales in each radius of the radar chart are adjusted for the sake of presentation. PIMs with the tree and the complete graph topologies outperformed, as expected, their sequential versions \cite{Lucas2021}. The radar charts to the left in Figs. \ref{fig:het_tree12A} and \ref{fig:het_gbmm12A} show that the HoPIMs maintained the order of competitiveness of their best and worst adapted BAs: the best quality solutions were obtained by ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ and the worst by ${\cal P}^{\mbox{\texttt{PSO}}}_{\mbox{\tiny Tr12A}}$\ for the static tree model, while for the dynamic complete graph topology, the best solutions were computed by ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$\ and the worst by ${\cal P}^{\mbox{\texttt{PSO}}}_{\mbox{\tiny gbmm12A}}$. In contrast to the fact that ${\mbox{\texttt{GAD}}}$\ provided better accuracy than ${\mbox{\texttt{GA}}}$, the homogeneous models ${\cal P}^{\mbox{\texttt{GA}}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\texttt{GA}}}_{\mbox{\tiny gbmm12A}}$\ respectively outperformed ${\cal P}^{\mbox{\texttt{GAD}}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\texttt{GAD}}}_{\mbox{\tiny gbmm12A}}$. Table \ref{tab:reconfigEnd} exemplifies the final island configuration. We ran ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ over one hundred entries of size 100 and computed the average of the final distribution of the four BAs over the islands. Surprisingly, the proportion of islands running ${\mbox{\texttt{GA}}}$\ is dominant for ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, which can be explained by the fact that the final average results of the sets of islands with the same BA are very similar for this model (see the right chart in Fig. \ref{fig:het_tree12A}). On the other hand, the distribution of BAs over the islands for ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ is better balanced (cf. the right chart in Fig. \ref{fig:het_gbmm12A}). \begin{figure}[!ht] \centering \includegraphics[width=0.46\textwidth]{chart_sequentials.eps} \caption{Accuracy of the sequential BAs ${\mbox{\texttt{DE}}}$, ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$\ and ${\mbox{\texttt{PSO}}}$.} \label{fig:sequentialPIMs} \end{figure} Figs. \ref{fig:het_tree12A} and \ref{fig:het_gbmm12A} include the accuracy for the HePIMs ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, respectively. The charts on the left show that the accuracy of these models is very competitive with respect to the HoPIMs with the same architecture, being surpassed only by ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$, respectively. The charts on the right show (dashed lines) the best final average results obtained by each set of three islands in the HePIMs (${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, respectively) that execute the same BA. These charts make it evident that the migration policy and the application of diverse BAs in the islands of the heterogeneous architectures successfully propagate the results obtained in all islands, but are not enough to outperform the quality of the homogeneous models running the best adapted BA, ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$, respectively.
Performance of the new reconfigurable models is also included in Figs. \ref{fig:het_tree12A} and \ref{fig:het_gbmm12A}. The reconfigurable HePIM ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ computed better quality results than the pure heterogeneous model ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$, and closed the gap between the latter and the best-adapted homogeneous architecture ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$, running ${\mbox{\texttt{DE}}}$\ (see the radar chart on the left in Fig. \ref{fig:het_tree12A}). This makes it evident that adding the versatility of reconfiguration to heterogeneous PIMs may improve their performance. On the other hand, the reconfigurable HePIM ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ computed quality results that are indistinguishable from the competitive ones computed by the non-reconfigurable heterogeneous model, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$. \begin{figure*}[!ht] \centering \includegraphics[width=0.52\textwidth]{chart_hom_rec_het_tr12A.eps}\hspace{-2mm} \includegraphics[width=0.48\textwidth]{chart_rec_het_tr12A.eps} \caption{(a) Accuracy of ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ and related HoPIMs; (b) Accuracy of ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ and average results of each set of islands in ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ running the same BA.} \label{fig:het_tree12A} \vspace{-4mm} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=0.528\textwidth]{chart_hom_rec_het_gbmm12A.eps}\hspace{-2mm} \includegraphics[width=0.462\textwidth]{chart_rec_het_gbmm12A.eps} \caption{(a) Accuracy of ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ and related HoPIMs; (b) Accuracy of ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ and average results of each set of islands in ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$\ running the same BA.} \label{fig:het_gbmm12A} \vspace{-4mm} \end{figure*} Fig. \ref{fig:heterogeneousPIMs} compares the accuracy of the four non-reconfigurable and reconfigurable HePIMs: ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$. From the experiments, it is clear that the new reconfigurable heterogeneous architectures, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, add to the versatility of heterogeneous PIMs the flexibility of dynamically updating the BA executed in each island, promoting in this manner not only data diversity but also algorithmic dynamism. Reconfigurable heterogeneous PIMs open up a promising new exploration space where, unlike the von Neumann style of algorithmic exploration focused on efficient information data management, algorithmic data dynamism enters as a crucial player in the game (see, for example, \cite{Hartenstein2010}, \cite{Hartenstein2013}).
\begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth]{chart_rec_heterogeneous.eps} \caption{Accuracy of the non-reconfigurable and reconfigurable HePIMs.} \label{fig:heterogeneousPIMs} \end{figure} \begin{table}[!t] {\small \caption{Example of the final distribution of ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$\ and ${\mbox{\texttt{PSO}}}$.} \label{tab:reconfigEnd} \begin{center} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{}& \multicolumn{1}{|c|}{${\mbox{\texttt{GA}}}$}& \multicolumn{1}{|c|}{${\mbox{\texttt{GAD}}}$}& \multicolumn{1}{|c|}{${\mbox{\texttt{DE}}}$} & \multicolumn{1}{|c|}{${\mbox{\texttt{PSO}}}$} \\ \hline \multirow{1}{*}{${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$}& 49.25\%& 4.09\% & 18.08\% & 28.58\% \\[1mm] \hline \multirow{1}{*}{${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$} &29.42\% &17.5\% & 28.25\% & 24.83\% \\[1mm] \hline \end{tabular} \end{center}} \end{table} \subsection{Statistical Analysis}\label{ssec:statisticalanalysis} Statistical tests validated the experiments using a $95\%$ confidence level, represented in the tests as $\alpha = 0.05$. The samples are the sets of one hundred outputs obtained in Section \ref{sec:analysis}. Initially, the Friedman test was applied to define the control algorithm. Then, Holm's test was applied to check the null hypothesis that the performance of the control algorithm is the same as that of the remaining algorithms, according to the approach of Garc\'ia and Herrera \cite{garcia2008} (see also \cite{demvsar2006statistical}, and \cite{derrac2011}). Table \ref{table:holmStaticHe} presents the statistical results for the PIMs with binary tree topology: ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$, and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$. The Friedman test selects ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$\ as the control algorithm, and the null hypothesis is rejected for all samples. The model ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ is the second best, confirming the discussion in Section \ref{sec:analysis}. Table \ref{table:holmDynamicHe} shows the statistical results for the models with complete graph topology, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, and the best homogeneous version, ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$, which is selected as the control algorithm. Finally, Table \ref{table:holmHeterogeneous} gives the statistical tests for the four reconfigurable and non-reconfigurable HePIMs. The selected control algorithm was ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$. Holm's procedure rejects a null hypothesis when the corresponding $p$-value is below the threshold $\alpha/i$; thus, the differences with respect to ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$\ are statistically significant only for ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, while compared to ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ there is no statistically significant difference, confirming the discussion in Section \ref{sec:analysis}. \begin{table}[ht] { \scriptsize \caption{Holm test for the tree topology PIMs.
} \label{table:holmStaticHe} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\bfseries L} & \multicolumn{1}{|c|}{\bfseries Control} & \multicolumn{1}{|c|}{\bfseries i} & \multicolumn{1}{|c|}{\bfseries Algorithm} & \multicolumn{1}{|c|}{\bfseries \boldmath{ $p$}-value} & \multicolumn{1}{|c|}{\bfseries \boldmath{\!$\alpha / i$\!}}\\ \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries Algorithm} & \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries }\\[0.5mm] \hline \multirow{3}{*}{100} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 7.74959960741012E-11 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 1.8505741383968991E-9 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{110} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 1.346035421050821E-15 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 5.560820039745642E-15 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{120} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 3.722840351917189E-19 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 3.2599374788722365E-13 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{130} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 4.218936534105464E-12 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 1.449490502746956E-11 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{140} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 3.1593469401304723E-16 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 1.4870457587052685E-9 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{150} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 5.124221656690746E-19 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$& 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 9.723622409009922E-15 & 0.050 \\[0.5mm] \hline \end{tabular} \vspace{-2mm}} \end{table} \begin{table}[ht] { \scriptsize \caption{Holm test for the complete graph PIMs. 
} \label{table:holmDynamicHe} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\bfseries L} & \multicolumn{1}{|c|}{\bfseries Control} & \multicolumn{1}{|c|}{\bfseries i} & \multicolumn{1}{|c|}{\bfseries Algorithm} & \multicolumn{1}{|c|}{\bfseries \boldmath{ $p$}-value} & \multicolumn{1}{|c|}{\bfseries \boldmath{\!$\alpha / i$\!}}\\ \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries Algorithm} & \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries }\\[0.5mm] \hline \multirow{3}{*}{100} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 7.43098372370352E-7 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 1.1648657367238803E-5 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{110} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 9.672204071723814E-19 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$& 1 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2.9294885290101255E-14 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{120} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 1.792019989925749E-15 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 1.792019989925749E-15 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{130} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 3.943363947351002E-17 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2.3828362635579084E-15 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{140} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 4.82026808703977E-22 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 1.0213251630273183E-20 & 0.050 \\[0.5mm] \hline \multirow{3}{*}{150} & \multirow{3}{*}{} & 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 3.4176814448375205E-24 & 0.025 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 1.440728401105864E-23 & 0.050 \\[0.5mm] \hline \end{tabular} \vspace{-2mm}} \end{table} \begin{table}[ht] { \scriptsize \caption{Holm test for reconf. and non-reconf. HePIMs. 
} \label{table:holmHeterogeneous} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\bfseries L} & \multicolumn{1}{|c|}{\bfseries Control} & \multicolumn{1}{|c|}{\bfseries i} & \multicolumn{1}{|c|}{\bfseries Algorithm} & \multicolumn{1}{|c|}{\bfseries \boldmath{ $p$}-value} & \multicolumn{1}{|c|}{\bfseries \boldmath{\!$\alpha / i$\!}}\\ \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries Algorithm} & \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries } & \multicolumn{1}{|c|}{\bfseries }\\[0.5mm] \hline \multirow{3}{*}{100} & \multirow{3}{*}{} & 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 6.219448336201955E-7 & 0.016 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 3.5448303585580045E-5 & 0.025 \\[.26mm] \cline{3-6} & & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.3958980057181269 & 0.05\\[0.5mm] \hline \multirow{3}{*}{110} & \multirow{3}{*}{} & 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 2.1442166253671064E-13 & 0.016 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 7.221976514824151E-13 & 0.025 \\[.26mm] \cline{3-6} & & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.1709035202307971 & 0.05\\[0.5mm] \hline \multirow{3}{*}{120} & \multirow{3}{*}{} & 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 2.1442166253672077E-13 & 0.016 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 3.908557567773197E-9 & 0.025 \\[.26mm] \cline{3-6} & & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.7218258402177081 & 0.05\\[0.5mm] \hline \multirow{3}{*}{130} & \multirow{3}{*}{} & 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 1.076195601000617E-12 & 0.016 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 3.415515804642443E-11 & 0.025 \\[.26mm] \cline{3-6} & & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.4113137917762579 & 0.05\\[0.5mm] \hline \multirow{3}{*}{140} & \multirow{3}{*}{} & 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 4.342593992240847E-19 & 0.016 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 2.1442166253671531E-13 & 0.025 \\[.26mm] \cline{3-6} & & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.8694817827381613 & 0.05\\[0.5mm] \hline \multirow{3}{*}{150} & \multirow{3}{*}{} & 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 3.3892419952526653E-19 & 0.016 \\[.26mm] \cline{3-6} & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 2.4806730968270847E-15 & 0.025 \\[.26mm] \cline{3-6} & & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.912770619096563 & 0.05\\[0.5mm] \hline \end{tabular} \vspace{-2mm}} \end{table} \section{Related Work}\label{sec:relatedwork} As far as we know, no HePIM has been proposed that may dynamically update their BAs as proposed in this work. Here, we discuss a few works related to non-reconfigurable HePIMs. 
Bianchini and Brown \cite{Bianchini1993} proposed HePIMs with ring and torus topologies and applied them to the task map scheduling problem, showing that HePIMs compute better solutions than HoPIMs. In addition, they observed that adding islands is better than increasing the population. Also, Lin \textit{et al.} \cite{Shyn1994} proposed HePIMs, considering several migration strategies and topologies, addressing the graph partitioning problem. They showed that 25-island PIMs are better than the sequential GA, using a migration strategy that replaces the worst individuals on the target islands. Furthermore, they showed that exchanging individuals according to their fitness-based population similarity instead yields good results without speed degradation. Izzo \textit{et al.} \cite{sinc2009} proposed an asynchronous-migration HePIM built from variations of the ${\mbox{\texttt{DE}}}$\ algorithm. Asynchrony was shown to be more intuitive and suitable over TCP/IP, where resources might become available or unavailable at any time. The models of Izzo \textit{et al.} showed better performance than their sequential versions. Gong and Fukunaga \cite{GoFu2011} proposed a ${\mbox{\texttt{GA}}}$-based HePIM that randomly selects different parameters for each processor. Some processors are expected to be assigned parameters that perform well on a given problem. Such a model may be considered a one-cycle reconfigurable model. However, it applies only an initial adjustment of the same algorithm and does not update BAs dynamically as our reconfigurable HePIMs do. Duarte \textit{et al.} \cite{duarte2018} proposed an attractiveness-based migration policy for five-island HePIMs that is based on the quality of the islands' solutions. In \cite{Duarte2020}, the attractiveness, inspired by the natural phenomenon known as stigmergy \cite{Capriles2007}, and the mechanism to compute the islands' connections were adjusted. Silveira {\em et al.} \cite{lucas2016} proposed HoPIMs for a sequential ${\mbox{\texttt{GA}}}$\ introduced in \cite{lucas2015} to solve the unsigned translocation problem. Such PIMs outperformed the accuracy obtained by the ${\mbox{\texttt{GA}}}$\ after careful calibration of the migration and breeding cycle parameters and exploration of a variety of topologies \cite{silveira2018,silveira2019}. Further, Silveira {\em et al.} \cite{Lucas2020} analyzed synchronous HoPIMs for ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{PSO}}}$, and the Social Spider Algorithm (SSA). Experiments showed that HoPIMs applying ${\mbox{\texttt{PSO}}}$\ and ${\mbox{\texttt{GA}}}$\ are competitive, while those running SSA gave the best speed-ups but computed the worst-accuracy solutions. Finally, Silveira {\em et al.} \cite{Lucas2021} proposed a variety of HePIMs to deal with URD. In this work, we select the models ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$\ from \cite{Lucas2021} for our comparison with the new reconfigurable HePIMs. HePIMs have also been conceived to solve multiobjective optimization problems (MOPs). We believe that MOPs are exciting applications for our reconfigurable HePIMs since each island may update its BA to optimize a single objective function. For example, Zang \emph{et al.} \cite{Zang2011} proposed a multi-swarm optimizer that handles each objective function of a MOP with a different slave swarm, while a master swarm covers gaps among non-dominated optima using a multiobjective ${\mbox{\texttt{PSO}}}$.
Also, Xu \emph{et al.} \cite{Xu2018} proposed a model with EAs using two subpopulations to solve dynamic interval MOPs, which are MOPs that change the interval parameters of their objectives or constraints over time. In addition, Gong \emph{et al.} \cite{Gong2020} proposed a model that handles a cooperative co-evolutionary MOP based on dynamic interval similarity. The approach of Gong \emph{et al.} splits the decision variables according to their interval similarity and interval parameters. Then, the decision variables are optimized cooperatively. Furthermore, Hashimoto \emph{et al.} \cite{Hashimoto2018} proposed a HePIM to solve multi-task problems, where each island evaluates an objective. Migrants are selected at a high migration frequency and removed randomly from each local island, replacing the worst individuals in the target islands. Since immigrants go to islands responsible for different objectives, their fitness values are the worst, under the assumption that they have fewer chances of being suitable for the target island's objective. \section{Conclusions and future work}\label{sec:conclusion} Reconfigurable heterogeneous PIMs were introduced. Such architectures can run and dynamically update different bio-inspired algorithms on their islands. Two reconfigurable PIM architectures with two different archipelago topologies were designed: a static binary tree topology, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, and a dynamic complete graph topology, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$. The asynchronous models ran four different BAs in their islands: ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{PSO}}}$, and ${\mbox{\texttt{DE}}}$. The new reconfigurable HePIMs were tested on the unsigned reversal distance problem and computed results that outperformed the quality of the associated non-reconfigurable HePIMs. Experiments, evaluated statistically, made the potential of reconfigurable HePIMs evident. Such reconfigurable models preserve the power of heterogeneous PIMs to navigate efficiently in the space of feasible solutions through a healthy balance between individual and island diversity and the migration policy. Also, the new reconfiguration feature gives the architecture the flexibility to evolve dynamically, improving the model's algorithmic adaptability to solve the target problem. The architectures ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ reached quality results that are very competitive with respect to the associated non-reconfigurable architectures ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, closing the gap with the best-adapted homogeneous model, which uses ${\mbox{\texttt{DE}}}$. Future work will explore reconfigurable HePIMs on different problems and with a greater variety of BAs. In particular, we believe that the dynamic algorithmic heterogeneity promoted by the new model will be helpful in dealing with multiobjective optimization problems. Indeed, reconfiguration would promote the application of the best-adapted BA to each target problem over the islands. \bibliographystyle{ieeetr}
\section{Introduction}\label{s.intro} Some instances of functional calculi are as old as modern mathematics. The very concept of a functional calculus, however, even nowadays still remains partly heuristic, with no definitive and widely accepted precise mathematical definition. One reason for this situation lies in the variety of instances such a definition has to cover. In particular, those calculi are of interest where unbounded operators are not just the starting point but the target. These calculi are called {\em unbounded} in the following, in order to distinguish them from the {\em bounded} calculi, which by (our) definition are those that yield only bounded operators. (In particular, we do not intend any continuity or suppose any topology when we speak of a bounded calculus here.) To illustrate our terminology, consider the Borel calculus of a normal operator $A$ on a Hilbert space. Even if $A$ is unbounded, the calculus $f\mapsto f(A)$ yields bounded operators as long as $f$ is a bounded function. Hence, restricting the Borel calculus to bounded functions yields a bounded calculus in our terminology. In contrast to that, even if $A$ is bounded, there are Borel functions $f$ such that $f(A)$ is unbounded. Hence, the full Borel calculus is an unbounded functional calculus. Now, there is little controversy about what a bounded functional calculus should be, namely an algebra homomorphism $\Phi: \calF \to \mathcal{L}(X)$ where $\calF$ is an algebra and $X$ is a Banach space. Of course, there are obvious variations possible concerning, e.g., the properties of $\calF$ (unital? commutative? a function algebra?) or the space $X$ (just locally convex?) or topological requirements for $\calF$ and $\Phi$. All these, however, are somehow unproblematic because the whole situation still lies within the terminological realm of classical representation theory. In contrast, unbounded functional calculi lie beyond that realm, as the set of unbounded operators on a Banach space is not an algebra any more. There is simply no classical terminological framework to cover unbounded calculi. Surprisingly, despite the ever-growing importance of the functional calculus for sectorial operators since its inception by McIntosh \cite{McI86} in 1986 and subsequently of related calculi on strips, parabolas and other regions, there have been only a few authors (most notably deLaubenfels \cite{deLaubenfels1995}) showing a strong will to operate with a reasonably abstract definition of a functional calculus. The first attempt, to the best of our knowledge, to not just give an axiomatic definition of an unbounded functional calculus but also to develop the associated abstract theory is due to the author of the present article. It was published in \cite{Haa05b} and then incorporated in and made widely known through the book \cite{HaaseFC}. However, although not without merit and already quite abstract, it has some shortcomings, which we address in the following. A {\em first shortcoming} is that the definition of a functional calculus given there is intimately tied to a construction, algebraic extension by regularization. At the time when the book was written, this was natural: algebraic extension is a central tool, an elegant and easily manageable way of reducing an unbounded calculus to its bounded part. Nevertheless, from an advanced point of view it should be obvious that a definition based on a specific construction cannot be regarded as the definitive answer to the axiomatization problem.
A {\em second shortcoming} of the framework from \cite{HaaseFC} is that only commutative algebras $\calF$ were allowed. Admittedly, the field of applications up to now almost exclusively involves commutative algebras (algebras of scalar functions). But in the future, genuinely non-commutative situations like functional calculi arising from Lie group representations will become more and more important. Hence, there is a desire to have a setting that does not rely on commutativity. Finally, a {\em third shortcoming} of the approach from \cite{HaaseFC} is that topological ways of extending a functional calculus are disregarded. (This is of course not a failure of the axiomatic framing itself, but of its theoretical elaboration.) Actually, the need for such extensions had been formulated already in \cite{Haa05b} and there was also a somewhat halfhearted attempt to provide them, but that did not find much resonance. With the present article, we are making a new attempt to find an adequate axiomatization of the notion of an (unbounded) functional calculus and to develop its theory while avoiding the named shortcomings. \bigskip The paper is divided into two parts. The first part is devoted to the elaboration of the theory. As such, it is quite abstract and sometimes technical (due to the lack of commutativity assumptions). The second part illustrates the theory in some familiar situations, but with a stress on formerly unknown aspects, mainly regarding topological extensions. A reader chiefly interested in the second part may safely skip the technical sections of the first part for the time being and only return to them when necessary. In the following we give a short synopsis of the two parts. \medskip The first part starts with the axioms of a calculus and their immediate consequences (Section \ref{s.afc}) and then proceeds with the introduction of basic auxiliary notions like {\em determination}, {\em algebraic cores} (Section \ref{s.det}), and {\em anchor sets} (Section \ref{s.anc}). The main theoretical problem here consists in providing criteria ensuring that an anchor set is actually determining. Whereas this is almost trivially true in a commutative situation (Theorem \ref{anc.t.com}), some work is necessary to find such criteria without commutativity (Theorem \ref{anc.t.main}). This dichotomy also permeates the subsequent Section \ref{s.urc}, where the problems of uniqueness and compatibility are addressed. In Section \ref{s.ext} we discuss the {\em algebraic extension}. Surprisingly, no commutativity hypothesis whatsoever is needed to make algebraic extension work (Theorems \ref{ext.t.ext} and \ref{ext.t.ancgen}). However, compatibility of successive extensions cannot be guaranteed without additional assumptions (Theorem \ref{ext.t.succ-comp}). In Section \ref{s.api} we briefly touch upon {\em approximate identities}. This had been missed out in \cite{HaaseFC}; a first abstract result was given by Clark \cite{Clark2009}. In Section \ref{s.dua} we introduce the concept of a {\em dual calculus}. In Section \ref{s.top} we discuss {\em topological extensions}. This concludes the first part. \medskip In the second part, we illustrate the theory with some familiar examples: sectorial operators (Section \ref{s.sec}), semigroup generators (Section \ref{s.sgr}) and normal operators (Section \ref{s.spt}). We provide new topological extension theorems for the sectorial calculus (Section \ref{s.sectop}) and the Hille--Phillips calculus (Section \ref{sgr.s.topext}).
In the sectorial case, we show how these topological extensions cover calculi defined in the literature, such as the Stieltjes calculus and the Hirsch calculus (Section \ref{s.hir}). For generators of bounded semigroups we show that the Hille--Phillips calculus is included in a certain topological extension of the sectorial calculus (Section \ref{sgr.s.sec-sgr}). In Section \ref{s.spt} on normal operators on Hilbert space we report on a consistent functional calculus approach to the spectral theorem (elaborated in \cite{Haase2020bpre}). \subsection*{Notation and Terminology} We use the letters $X,Y, \dots$ generically to denote Banach spaces. By default and unless otherwise stated, the scalar field is $\mathbb{K} =\mathbb{C}$. The space of bounded linear operators from $X$ to $Y$ is denoted by $\mathcal{L}(X;Y)$, and $\mathcal{L}(X)$ if $X= Y$. A subset $\calD \subseteq \mathcal{L}(X)$ is called {\bf point-separating} if \[ \bigcap_{D\in \calD} \ker(D) = \{0\}. \] A (closed) linear relation in $X$ is any (closed) linear subspace $A\subseteq X \oplus X$. Linear relations are called {\em multi-valued operators} in \cite[Appendix A]{HaaseFC}, and we freely use the definitions and results from that reference. In particular, we say that a bounded operator $T$ {\bf commutes} with a linear relation $A$ if $TA \subseteq AT$, which is equivalent to \[ (x,y) \in A \Rightarrow (Tx, Ty) \in A. \] If $\mathcal{E}$ is a (multiplicative) semigroup, we denote its {\bf center} by \begin{equation}\label{intro.e.center} \mathrm{Z}(\mathcal{E}) = \{ d\in \mathcal{E} \,\,|\,\, \forall \, e\in \mathcal{E} : de = ed\}. \end{equation} If $\calF$ is a semigroup and $\mathcal{E}\subseteq \calF$ is any subset, we shall frequently use the notation \[ [f]_\mathcal{E} := \{ e\in \mathcal{E} \,\,|\,\, ef \in \mathcal{E}\} \qquad (f\in \calF). \] \part{Abstract Theory} In the first part of this article, we treat the theory of functional calculus in an abstract, axiomatic fashion. We aim at generality; in particular, we do not make any standing commutativity assumption. However, we emphasize that commutativity of the algebras greatly simplifies theory and proofs. \section{Axioms for Functional Calculi}\label{s.afc} Let $\calF$ be an algebra with a unit element $\mathbf{1}$ and let $X$ be a Banach space. A mapping \[ \Phi: \calF \to \mathcal{C}(X) \] from $\calF$ to the set of closed operators on $X$ is called a {\bf proto-calculus} (or: $\calF$-proto-calculus) on $X$ if the following axioms are satisfied ($f,\: g\in \calF$, $\lambda \in \mathbb{C}$): \begin{aufzi} \item[\quad(FC1)] $\Phi(\mathbf{1}) = \mathrm{I}$. \item[\quad (FC2)] $\lambda \Phi(f) \subseteq \Phi(\lambda f)$ \quad and \quad $\Phi(f) + \Phi(g) \subseteq \Phi(f +g)$. \item[\quad (FC3)] $\Phi(f)\Phi(g) \subseteq \Phi(fg)$\quad and\quad \[ \dom(\Phi(f)\Phi(g)) = \dom(\Phi(g)) \cap \dom(\Phi(fg)). \] \end{aufzi} A proto-calculus $\Phi: \calF \to \mathcal{C}(X)$ is called a {\bf calculus} if the following fourth axiom is satisfied: \begin{aufzi} \item[\quad(FC4)] The set $\bdd(\calF, \Phi)$ of $\Phi$-bounded elements is determining $\Phi$ on $\calF$. \end{aufzi} Here, an element $f\in \calF$ is called {\bf $\Phi$-bounded} if $\Phi(f)\in \mathcal{L}(X)$, and the set of $\Phi$-bounded elements is \[ \bdd(\Phi) := \bdd(\calF,\Phi) := \{ f\in \calF \,\,|\,\, \Phi(f) \in \mathcal{L}(X)\} = \Phi^{-1}(\mathcal{L}(X)). \] \medskip The terminology and meaning of Axiom (FC4) shall be explained in Section \ref{s.det} below.
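As a simple illustration of the axioms, note that the inclusions in (FC2) are, in general, proper. Indeed, by (FC2) applied with $\lambda = 0$ and $f = \mathbf{1}$, the everywhere-defined zero operator satisfies $0\cdot \Phi(\mathbf{1}) \subseteq \Phi(0)$, and since $\Phi(0)$ is single-valued, $\Phi(0) = 0 \in \mathcal{L}(X)$. If now $f\in \calF$ is such that $\dom(\Phi(f)) \neq X$, then
\[ \Phi(f) + \Phi(-f) \subseteq \Phi(0) = 0 \]
by (FC2), but the operator on the left-hand side is defined only on $\dom(\Phi(f)) \cap \dom(\Phi(-f))$, so the inclusion is strict.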
For the time being we only suppose that $\Phi: \calF \to \mathcal{C}(X)$ is a proto-calculus. The following theorem summarizes its basic properties. \begin{thm}\label{afc.t.pro-cal} Let $\Phi: \calF \to \mathcal{C}(X)$ be a proto-calculus on a Banach space $X$. Then the following assertions hold ($f,\:g \in \calF$, $\lambda \in \mathbb{C}$): \begin{aufzi} \item If $\lambda \neq 0$ or $\Phi(f) \in \mathcal{L}(X)$ then $\Phi(\lambda f) = \lambda \Phi(f)$. \item If $\Phi(g)\in \mathcal{L}(X)$ then \[ \Phi(f) + \Phi(g) = \Phi(f+g)\quad \text{and}\quad \Phi(f)\Phi(g) = \Phi(fg). \] \item If $fg= \mathbf{1}$ then $\Phi(g)$ is injective and $\Phi(g)^{-1} \subseteq \Phi(f)$. If, in addition, $fg=gf$, then $\Phi(g)^{-1} =\Phi(f)$. \item The set $\bdd(\calF,\Phi)$ of $\Phi$-bounded elements is a unital subalgebra of $\calF$ and \[ \Phi: \bdd(\calF,\Phi) \to \mathcal{L}(X) \] is an algebra homomorphism. \end{aufzi} \end{thm} \begin{proof} a)\ One has \[ \Phi(f) = \Phi(\lambda^{-1} \lambda f) \supseteq \lambda^{-1} \Phi(\lambda f)\supseteq \lambda^{-1} \lambda \Phi(f) =\Phi(f). \] Hence, all inclusions are equalities, and the assertion follows. \smallskip\noindent b)\ By Axiom (FC2) and a) \begin{align*} \Phi(f) &= \Phi(f +g - g) \supseteq \Phi(f+g) + \Phi(-g) = \Phi(f+g) - \Phi(g) \\ & \supseteq \Phi(f) + \Phi(g) - \Phi(g) = \Phi(f). \end{align*} Hence, all inclusions are equalities and the first assertion in b) follows. For the second, note that by Axiom (FC3) $\Phi(f)\Phi(g) \subseteq \Phi(fg)$ with \[ \dom(\Phi(f)\Phi(g)) = \dom(\Phi(g)) \cap \dom(\Phi(fg)) = \dom(\Phi(fg)), \] hence we are done. \smallskip\noindent c)\ By (FC3), if $fg= \mathbf{1}$ then $\Phi(f)\Phi(g) \subseteq \Phi(fg) = \Phi(\mathbf{1}) = \mathrm{I}$. Hence, $\Phi(g)$ is injective and $\Phi(f)\supseteq \Phi(g)^{-1}$. If $fg= gf$, by symmetry $\Phi(f)$ is injective too, and $\Phi(g) \supseteq \Phi(f)^{-1}$. This yields $\Phi(f) = \Phi(g)^{-1}$ as desired. \smallskip\noindent d) follows directly from b). \end{proof} \section{Determination}\label{s.det} We shall write \begin{equation}\label{det.e.[f]} [f]_\mathcal{E} := \{ e\in \mathcal{E} \,\,|\,\, ef\in \mathcal{E}\} \end{equation} whenever $\calF$ is any multiplicative semigroup, $\mathcal{E} \subseteq \calF$ and $f\in \calF$. In our context, $\calF$ shall always be an algebra. \medskip Given a proto-calculus $\Phi: \calF \to \mathcal{C}(X)$ and $f\in \calF$ the set of its {\bf $\Phi$-regularizers} is \[ \reg(f, \Phi) := [f]_{\bdd(\calF, \Phi)} = \{ e\in \calF \,\,|\,\, e, \, ef \in \bdd(\calF,\Phi)\}. \] By Theorem \ref{afc.t.pro-cal}, $\reg(f, \Phi)$ is a left ideal in $\bdd(\calF,\Phi)$. The elements of the set \[ \reg(\Phi):= \reg(\calF, \Phi) := \bigcap_{f\in \calF} \reg(f,\Phi) \] are called {\bf universal regularizers}. Of course, it may happen that $e= 0$ is the only universal regularizer. \begin{rem}\label{det.r.reg-old} The definition of a regularizer here differs from and is more general than the one given in \cite{HaaseFC}. It has been argued in \cite{Haase2008pre} (eventually published as \cite{Haase2017}) that such a relaxation of terminology is useful. \end{rem} Let $f\in \calF$. A subset $\mathcal{M} \subseteq \bdd(\calF,\Phi)$ is said to {\bf determine $\Phi(f)$} if \begin{equation}\label{afc.e.determining} \Phi(f)x = y \quad \,\,\Longleftrightarrow\,\,\quad \forall e \in \mathcal{M} \cap \reg(f, \Phi):\,\, \Phi(ef)x = \Phi(e)y \end{equation} for all $x,\: y\in X$. 
And $\mathcal{M}$ is said to {\bf strongly determine} $\Phi(f)$ if the set $[f]_{\mathcal{M}}$ determines $\Phi(f)$, i.e. if \begin{equation}\label{afc.e.strong-determining} \Phi(f)x = y \quad \,\,\Longleftrightarrow\,\,\quad \forall e \in [f]_{\mathcal{M}} :\,\, \Phi(ef)x = \Phi(e)y \end{equation} for all $x,\: y\in X$. \begin{rems}\label{det.r.det-super} \begin{aufziii} \item Although very useful, the terminology ``$\mathcal{M}$ determines $\Phi(f)$'' is to be used with caution: there might be $g\in \calF$ with $f\neq g$ and $\Phi(g) = \Phi(f)$ and such that $\Phi(g)$ is not determined by $\mathcal{M}$ in the above sense. (In other words, the expression ``$\Phi(f)$'' has to be interpreted symbolically here, and not as a mathematical object.) With this caveat in mind, there should be little danger of confusion. However, if we want to be completely accurate, we shall use the alternative formulation ``{\bf $\mathcal{M}$ determines $\Phi$ at $f$}''. \item We observe that in both equivalences \eqref{afc.e.determining} and \eqref{afc.e.strong-determining} only the implication ``$\Leftarrow$'' is relevant, as the implication ``$\Rightarrow$'' simply means $\Phi(e)\Phi(f) \subseteq \Phi(ef)$ for $e\in \mathcal{M}\cap \reg(f,\Phi)$ or $e\in [f]_{\mathcal{M}}$, respectively; and this follows from Axiom (FC3). An immediate consequence of this observation is that a strongly determining set is determining. \end{aufziii} \end{rems} A set $\mathcal{E} \subseteq \bdd(\calF,\Phi)$ is said to be {\bf determining $\Phi$ on} or is {\bf $\Phi$-determining for} $\calF$ if it determines $\Phi(f)$ for each $f\in \calF$. (If $\calF$ is understood, reference to it is often dropped, and one simply speaks of $\Phi$-determining sets.) Note that by Remark \ref{det.r.det-super}, {\em any superset of a determining set is again determining}. A subset $\mathcal{E}$ of $\bdd(\calF, \Phi)$ is called an {\bf algebraic core} for $\Phi$ on $\calF$ if $\mathcal{E}$ strongly determines $\Phi(f)$ for each $f\in \calF$. In this terminology, Axiom (FC4) simply requires $\bdd(\calF, \Phi)$ to be an algebraic core for $\Phi$ on $\calF$. Again, by Remark \ref{det.r.det-super} {\em any superset of an algebraic core is also an algebraic core}. \begin{exa}[Bounded (Proto-)Calculi] A proto-calculus $\Phi: \calF \to \mathcal{C}(X)$ with $\calF = \bdd(\calF, \Phi)$ is nothing else than a unital algebra representation $\Phi: \calF \to \mathcal{L}(X)$. By unitality, Axiom (FC4) is automatically satisfied in this situation. So each unital representation by bounded operators is a calculus. \end{exa} \section{Anchor Sets}\label{s.anc} Let $\mathcal{E}$ be a set and let $\Phi: \mathcal{E} \to \mathcal{L}(X)$ be any mapping. An element $e\in \mathcal{E}$ is called an {\bf anchor element} if $\Phi(e)$ is injective. More generally, a subset $\mathcal{M} \subseteq \mathcal{E}$ is called an {\bf anchor set} if $\mathcal{M} \neq \emptyset$ and the set $\{ \Phi(e) \,\,|\,\, e\in \mathcal{M}\}$ is point-separating, i.e., if \[ \bigcap_{e\in \mathcal{M}} \ker(\Phi(e)) = \{0\}. \] If $\calF$ is a semigroup, then we say that $f\in \calF$ is {\bf anchored} in $\mathcal{E}$ (with respect to $\Phi$) if the set $[f]_\mathcal{E}$ is an anchor set. And we call $\calF$ {\bf anchored in $\mathcal{E}$} if each $f\in \calF$ is anchored in $\mathcal{E}$. If we want to stress the dependence on $\Phi$, we shall speak of {\bf $\Phi$-anchor elements/sets} and of elements/sets being {\bf $\Phi$-anchored} in $\mathcal{E}$. 
\medskip Suppose that $\Phi: \calF \to \mathcal{C}(X)$ is a proto-calculus and $f\in \calF$. Any set $\mathcal{M} \subseteq \bdd(\Phi, \calF)$ which determines $\Phi(f)$ must be an anchor set, just because $\Phi(f)$ is an operator and not just a linear relation. The converse question, i.e., whether a particular anchor set does actually determine $\Phi(f)$ is the subject of the present section. We start with a simple case. \begin{thm}\label{anc.t.com} Let $\Phi: \calF \to \mathcal{C}(X)$ be a calculus and let $f\in\calF$. If $\calF$ is commutative and $\mathcal{E} \subseteq \reg(f, \Phi)$ is an anchor set, then $\mathcal{E}$ determines $\Phi(f)$. \end{thm} \begin{proof} Let $x,y\in X$ such that $\Phi(ef)x = \Phi(e)y$ for all $e\in \mathcal{E}$. Then for all $e\in \mathcal{E}$ and $g\in \reg(\Phi, \calF)$ we have \begin{align*} \Phi(e)\Phi(gf)x & = \Phi(egf)x = \Phi(gef)x = \Phi(g) \Phi(ef)x = \Phi(g) \Phi(e)y = \Phi(ge)y \\ & = \Phi(eg)y = \Phi(e)\Phi(g)y. \end{align*} Since $\mathcal{E}$ is an anchor set, it follows that $\Phi(gf)x = \Phi(g)y$ for all $g\in \reg(\Phi, \calF)$. Since $\Phi$ is a calculus, this implies $\Phi(f)x = y$. \end{proof} Without commutativity, things become much more complicated. It actually may come as a surprise that the following result is true in general. \begin{thm}\label{anc.t.main} Let $\Phi: \calF \to \mathcal{C}(X)$ be a calculus, and let $\mathcal{E} \subseteq \bdd(\calF,\Phi)$ be such that $\bdd(\calF, \Phi)$ is anchored in $\mathcal{E}$. Then $\mathcal{E}$ determines $\Phi$ on $\calF$. If, in addition, $\mathcal{E}$ is multiplicative, then $\mathcal{E}$ is an algebraic core for $\Phi$. \end{thm} For the proof of Theorem \ref{anc.t.main} we need some auxiliary results about determining sets and anchor sets. This is the subject of the following lemma. \begin{lem}\label{anc.l.pro-cal-deter} Let $\Phi: \calF \to \mathcal{C}(X)$ be a proto-calculus, let $f\in \calF$, let $\mathcal{M} \subseteq \bdd(\calF,\Phi)$ and $\calB \subseteq \calF$. Then the following assertions hold: \begin{aufzi} \item If $\mathcal{M}$ determines $\Phi(f)$, then it is an anchor set. The converse holds if $f\in \bdd(\calF, \Phi)$. \item If $\mathcal{M}$ determines $\Phi(f)$ (is an anchor set) and $\mathcal{M} \subseteq \calN \subseteq \bdd(\calF, \Phi)$ then $\calN$ determines $\Phi(f)$ (is an anchor set). \item If $\mathcal{M}$ determines $\Phi(f)$ (is an anchor set) and $\calN_g \subseteq \bdd(\calF, \Phi)$ is an anchor set for each $g\in \mathcal{M}$, then also $\calN := \bigcup_{g\in \mathcal{M}} \calN_g g$ is determines $\Phi(f)$ (is an anchor set). \item Suppose that $\calB \mathcal{M} = \{ bg \,\,|\,\, b \in \calB,\, g\in \mathcal{M}\} \subseteq \bdd(\Phi, \calF)$. If $\calB \mathcal{M}$ is an anchor set, then so is $\mathcal{M}$. If $\mathcal{M} \subseteq \reg(f, \Phi)$ and $\calB\mathcal{M}$ determines $\Phi(f)$, then so does $\mathcal{M}$. \item If $T\in \mathcal{L}(X)$ commutes with all operators $\Phi(e)$ and $\Phi(ef)$ for $e\in \mathcal{M}$, and $\mathcal{M}$ determines $\Phi(f)$, then $T$ commutes with $\Phi(f)$. \end{aufzi} \end{lem} \begin{proof} a)\ The first assertion follows from \eqref{afc.e.determining} and the fact that $\Phi(f)$ is an operator and not just a linear relation. For the second suppose that $\Phi(f) \in \mathcal{L}(X)$ and $\mathcal{M}$ is an anchor set. Let $x, y\in X$ such that $\Phi(ef)x= \Phi(e)y$ for all $e\in \mathcal{M}$. Then, since $\Phi(f)$ is bounded, \[ \Phi(e)y = \Phi(ef)x = \Phi(e)\Phi(f)x. 
\] Since $\mathcal{M}$ is an anchor set, it follows that $\Phi(f)x = y$. \smallskip\noindent b)\ is trivial (and has already been mentioned above). \smallskip\noindent c)\ Suppose first that $\mathcal{M}$ determines $\Phi(f)$ and that $\Phi(egf)x = \Phi(eg)y$ for all $g\in \mathcal{M}$ and all $e\in \calN_g$ such that $eg\in \reg(f, \Phi)$. Fix $g\in \mathcal{M}\cap \reg(f,\Phi)$. Then for each $e\in \calN_g$ we have $eg\in \reg(f,\Phi)$ and hence \[ \Phi(e) \Phi(gf)x = \Phi(egf)x = \Phi(eg)y = \Phi(e)\Phi(g)y. \] Since $\calN_g$ is an anchor set, it follows that $\Phi(gf)x= \Phi(g)y$. Since $\mathcal{M}$ determines $\Phi(f)$, it follows that $\Phi(f)x = y$ as claimed. Similarly (and even more easily) one proves that $\calN$ is an anchor set if $\mathcal{M}$ is one. \smallskip\noindent d)\ Suppose that $\calB \mathcal{M}$ is an anchor set. For each $b\in \calB$ and $g\in \mathcal{M}$ we have $\Phi(b)\Phi(g) \subseteq \Phi(bg)$ and hence $\ker(\Phi(g)) \subseteq \ker(\Phi(bg))$. It follows readily that $\mathcal{M}$ is an anchor set. Now suppose that $\mathcal{M} \subseteq \reg(f, \Phi)$ and that $\calB \mathcal{M}$ determines $\Phi(f)$. Let $x,y\in X$ such that $\Phi(gf)x = \Phi(g)y$ for all $g\in \mathcal{M}$. Then $\Phi(bgf)x = \Phi(b) \Phi(gf)x= \Phi(b)\Phi(g)y= \Phi(bg)y$ for all $b\in \calB$ and $g\in \mathcal{M}$. Hence, by hypothesis, $\Phi(f)x = y$. \vanish{ \smallskip\noindent e)\ Since $\calB$ determines $\Phi(f)$ and $\mathcal{M}$ is an anchor set, $\mathcal{M} \calB$ determines $\Phi(f)$. (This can be seen as an application of c).) By hypothesis and b) above, $\calC \mathcal{M}$ determines $\Phi(f)$. Then, d) implies that $\mathcal{M}$ determines $\Phi(f)$. } \smallskip\noindent e)\ Suppose that $\Phi(f)x = y$. Then, for each $e\in \mathcal{M}$, \[ \Phi(ef)Tx = T\Phi(ef)x = T\Phi(e)y = \Phi(e)Ty. \] Since $\mathcal{M}$ determines $\Phi(f)$, $\Phi(f)Tx = Ty$ as claimed. \end{proof} Let us mention that assertion c) is by far the most important part of Lemma \ref{anc.l.pro-cal-deter}. As a first consequence, we note the following result. \begin{prop}\label{anc.p.EM} Let $\Phi: \calF \to \mathcal{C}(X)$ be a proto-calculus, let $f\in \calF$, and let $\mathcal{M}, \mathcal{E}\subseteq \bdd(\Phi, \calF)$ such that $\mathcal{M}$ determines $\Phi(f)$. Suppose that one of the following conditions holds: \begin{aufziii} \item $\mathcal{M}$ is anchored in $\mathcal{E}$. \item For each $g\in \mathcal{M}$ there is an anchor set $\calN_g$ such that $\calN_g g \subseteq \calB \mathcal{E}$, where $\calB := \{ h \in \calF \,\,|\,\, h \mathcal{E} \subseteq \bdd(\Phi, \calF)\}$. \end{aufziii} Then $\mathcal{E}$ determines $\Phi(f)$. \end{prop} \begin{proof} 1)\ For each $g\in \mathcal{M}' := \mathcal{M} \cap \reg(f, \Phi)$ let $\calN_g := [g]_\mathcal{E}$. By hypothesis, this is an anchor set. Hence, by Lemma \ref{anc.l.pro-cal-deter}.c), $\calN := \bigcup_{g\in \mathcal{M}'} \calN_g g$ determines $\Phi(f)$. As $\calN \subseteq \mathcal{E} \cap \reg(f, \Phi)$, also $\mathcal{E}$ determines $\Phi(f)$. \smallskip\noindent 2)\ By hypothesis, $\mathcal{E}$ is an anchor set and $\mathcal{M}$ determines $\Phi(f)$. Hence, by Lemma \ref{anc.l.pro-cal-deter}.c), $\mathcal{E} \mathcal{M} = \bigcup_{g \in \mathcal{M}} \mathcal{E} g$ determines $\Phi(f)$. Since $\mathcal{E} \mathcal{M}\subseteq \calB \mathcal{E}$, also the latter set determines $\Phi(f)$. Lemma \ref{anc.l.pro-cal-deter}.d) then implies that $\mathcal{E}$ determines $\Phi(f)$.
\end{proof} \begin{rem} Theorem \ref{anc.t.com} is a corollary of Proposition \ref{anc.p.EM}. (Apply part 2) with $\calN_g = \mathcal{E}$ for each $g\in \mathcal{M} := \reg(\Phi, \calF)$.) Actually, we realize that one may replace \ the overall commutativity assumption on $\calF$ by a weaker condition, e.g.: {\em $[f]_\mathcal{E} \cap \{ e\in \mathcal{E} \,\,|\,\, e \mathcal{M} \subseteq \mathcal{M} e\}$ is an anchor set.} \end{rem} Finally, we are going to prove Theorem \ref{anc.t.main}. \begin{proof}[Proof of Theorem \ref{anc.t.main}] Write $\mathcal{M} := \bdd(\Phi, \calF)$ and fix $f\in \calF$. Then $\mathcal{M}$ determines $\Phi(f)$, since $\Phi$ is a calculus, and $\mathcal{M}$ is anchored in $\mathcal{E}$, by assumption. Hence, by Proposition \ref{anc.p.EM}.1), $\mathcal{E}$ determines $\Phi(f)$. \smallskip\noindent For the second assertion, suppose in addition that $\mathcal{E}$ is multiplicative. Fix $g\in \reg(f, \Phi)$. Then, as above, $[g]_\mathcal{E}$ is an anchor set. Also, for each $e\in [g]_\mathcal{E}$ one has $egf\in \mathcal{M}$ and hence $[egf]_\mathcal{E}$ is also an anchor set. It follows that $\calN_g := \bigcup_{e\in [g]_\mathcal{E}} [egf]_\mathcal{E} e$ is an anchor set. By Lemma \ref{anc.l.pro-cal-deter}.c), the set \[ \calN := \bigcup_{g\in \reg(f, \Phi)} \calN_g g \] determines $\Phi(f)$. But $\mathcal{E}$ is multiplicative, and therefore \[ \calN = \bigcup_{g\in \reg(f, \Phi)} \calN_g g = \bigcup_{g\in \reg(f, \Phi)} \bigcup_{e\in [g]_\mathcal{E}} [egf]_\mathcal{E} eg \subseteq [f]_\mathcal{E}. \] Hence, also $[f]_\mathcal{E}$ determines $\Phi(f)$. \end{proof} \vanish{ Also, part e) shows that commutativity helps to identify determining sets. This is the reason why many results become much nicer under commutativity assumptions. \begin{thm}\label{anc.t.EM} Let $\Phi: \calF \to \mathcal{C}(X)$ be a proto-calculus, and let $\mathcal{E}, \mathcal{M} \subseteq \bdd(\calF, \Phi)$ such that $\mathcal{M}$ is anchored in $\mathcal{E}$. Then for $f\in \calF$ the following assertions hold: \begin{aufzi} \item If $\mathcal{M}$ determines $\Phi(f)$ then also $\mathcal{E}$ does. \item If $\mathcal{M}$ strongly determines $\Phi(f)$ and if, in addition, $\mathcal{E} \mathcal{M}$ is anchored in $\mathcal{E}$ and $\mathcal{E}$ is multiplicative, then also $\mathcal{E}$ strongly determines $\Phi(f)$. \end{aufzi} \end{thm} \begin{proof} a) For each $g\in \mathcal{M}' := \mathcal{M} \cap \reg(f, \Phi)$ let $\calN_g := [g]_\mathcal{E}$. By hypothesis, this is an anchor set. Hence, by Lemma \ref{anc.l.pro-cal-deter}.c), $\calN := \bigcup_{g\in \mathcal{M}'} \calN_g g$ determines $\Phi(f)$. As $\calN \subseteq \mathcal{E} \cap \reg(f, \Phi)$, also $\mathcal{E}$ determines $\Phi(f)$. \smallskip\noindent b) Fix $g\in [f]_{\mathcal{M}}$. Then, by hypothesis, $[g]_\mathcal{E}$ is an anchor set. Also, for each $e\in [g]_\mathcal{E}$ one has $egf\in \mathcal{E}\mathcal{M}$ and hence $[egf]_\mathcal{E}$ is also an anchor set. It follows that $\calN_g := \bigcup_{e\in [g]_\mathcal{E}} [egf]_\mathcal{E} e$ is an anchor set. By Lemma \ref{anc.l.pro-cal-deter}.c) \[ \bigcup_{g\in [f]_{\mathcal{M}}} \calN_g g = \bigcup_{g\in [f]_{\mathcal{M}}} \bigcup_{e\in [g]_\mathcal{E}} [egf]_\mathcal{E} eg \subseteq [f]_\mathcal{E} \] determines $\Phi(f)$ as claimed. \end{proof} \begin{rem}\label{anc.r.center} The somehow awkward assumption ``$\mathcal{E} \mathcal{M}$ is anchored in $\mathcal{E}$'' in part b) of the theorem is automatically satisfied when $\mathcal{E}$ is commutative. 
Actually, the even weaker assumption ``for each $g\in \mathcal{M}$ the set $[g]_\mathcal{E} \cap \mathrm{Z}(\mathcal{E})$ is an anchor set'' suffices. (Recall from \eqref{intro.e.center} that $\mathrm{Z}(\mathcal{E})$ is the center of $\mathcal{E}$.) \end{rem} \begin{cor}\label{anc.c.EM} Let $\Phi: \calF \to \mathcal{C}(X)$ be a calculus, and let $\mathcal{E} \subseteq \bdd(\calF,\Phi)$ be such that $\bdd(\calF, \Phi)$ is anchored in $\mathcal{E}$. Then $\mathcal{E}$ determines $\Phi$ on $\calF$. If, in addition, $\mathcal{E}$ is multiplicative, then $\mathcal{E}$ is an algebraic core for $\Phi$. \end{cor} \begin{proof} Apply Theorem \ref{anc.t.EM} a.1) and b.1) with $\mathcal{M} = \bdd(\calF, \Phi)$. \end{proof} \section{Uniqueness, Restriction, Compatibility}\label{s.urc} Let $\Phi_1, \Phi_2: \calF \to \mathcal{C}(X)$ be calculi, let $f\in \calF$ and suppose that $\mathcal{M}\subseteq \calF$ determines both calculi $\Phi_1$ and $\Phi_2$ at $f$ (cf. Remark \ref{det.r.det-super}). Suppose further that $\Phi_1$ and $\Phi_2$ agree on $\mathcal{M}$. Can one conclude that $\Phi_1(f) = \Phi_2(f)$? A moment's reflection reveals that the answer might be ``no'' in general. The reason is that we do not know whether $\Phi_1$ and $\Phi_2$ coincide on products $ef$ for $e\in \mathcal{M}$. This is the original motivation for introducing the concept of strong determination. \begin{lem}\label{urc.l.uni} Suppose that $\Phi_1, \Phi_2 : \calF \to \mathcal{C}(X)$ are two proto-calculi and $\mathcal{E} \subseteq \calF$ such that $\Phi_1\res{\mathcal{E}} = \Phi_2\res{\mathcal{E}}$. If $\mathcal{E}$ is an algebraic core for $\Phi_2$ then \[ \Phi_1(f) \subseteq \Phi_2(f)\qquad \text{for each $f\in \calF$}. \] In particular, if $\mathcal{E}$ is an algebraic core for both calculi, then $\Phi_1 = \Phi_2$. \end{lem} \begin{proof} Let $f\in \calF$. And suppose that $x,y\in X$ are such that $\Phi_1(f)x= y$. Then for every $e\in [f]_\mathcal{E}$ we have \[ \Phi_1(ef)x = \Phi_1(e)y. \] This is the same as $\Phi_2(ef)x= \Phi_2(e)y$, as $\Phi_1$ and $\Phi_2$ agree on $\mathcal{E}$. By hypothesis, $[f]_\mathcal{E}$ determines $\Phi_2(f)$, so it follows that $\Phi_2(f)x= y$. This shows $\Phi_1(f)\subseteq \Phi_2(f)$. \end{proof} Combining Lemma \ref{urc.l.uni} with Theorem \ref{anc.t.main} we arrive at the following uniqueness statement. \begin{thm}[Uniqueness]\label{urc.t.uni} Let $\Phi_1, \Phi_2: \calF \to \mathcal{C}(X)$ be calculi. Suppose that there is $\mathcal{E} \subseteq \calF$ with the following properties: \begin{aufziii} \item $\Phi_1(e) = \Phi_2(e) \in \mathcal{L}(X)$ for all $e\in \mathcal{E}$. \item $\calF$ is anchored in $\mathcal{E}$ (with respect to one/both calculi). \end{aufziii} Then $\Phi_1 = \Phi_2$. \end{thm} \begin{proof} By passing to \[ \mathcal{E}':= \bigcup_{n\ge 1} \mathcal{E}^n = \{ e_1\cdots e_n \,\,|\,\, n \in \mathbb{N}, \, e_1, \dots, e_n \in \mathcal{E}\}, \] the multiplicative semigroup generated by $\mathcal{E}$ in $\calF$, we may suppose that $\mathcal{E}$ is multiplicative. Then Theorem \ref{anc.t.main} yields that $\mathcal{E}$ is an algebraic core for both calculi. Hence, Lemma \ref{urc.l.uni} yields $\Phi_1 = \Phi_2$. \end{proof} \medskip \subsection{Pull-Back and Restriction of a Calculus} Suppose that $\calF$ is a unital algebra and $\Phi: \calF \to \mathcal{C}(X)$ is a proto-calculus. Further, let $\calG$ be a unital algebra and $\eta: \calG \to \calF$ a unital algebra homomorphism. 
Then the mapping \[ \eta^*\Phi: \calG \to \mathcal{C}(X),\qquad (\eta^*\Phi)(g) := \Phi(\eta(g)) \] is called the {\bf pull-back} of $\Phi$ {\bf along} $\eta$. It is easy to see that, in general, $\eta^*\Phi$ is a proto-calculus as well. A special case occurs if $\calG$ is a subalgebra of $\calF$ and $\eta$ is the inclusion mapping. Then $\eta^*\Phi = \Phi\res{\calG}$ is just the {\bf restriction} of $\Phi$ to $\calG$. \begin{lem}\label{urc.l.pull-back} Let $\calF$ be a unital algebra and $\Phi: \calF \to \mathcal{C}(X)$ a proto-calculus. Furthermore, let $\calG$ be a unital algebra, $\eta: \calG \to \calF$ a unital homomorphism, $\mathcal{E} \subseteq \bdd(\eta^*\Phi, \calG)$, and $g\in \calG$. Then the following assertions hold. \begin{aufzi} \item $\bdd(\eta^*\Phi, \calG) = \eta^{-1}( \bdd(\Phi, \calF))$,\\ \quad $\eta (\bdd(\eta^*\Phi, \calG)) = \bdd(\Phi, \calF) \cap \eta(\calG) = \bdd(\Phi\res{\eta(\calG)}, \eta(\calG))$. \item $\mathcal{E}$ is an $\eta^*\Phi$-anchor if and only if $\eta(\mathcal{E})$ is a $\Phi$-anchor. \item $\eta( [g]_\mathcal{E}) \subseteq [\eta(g)]_{\eta(\mathcal{E})}$. \item $\mathcal{E}$ determines $\eta^*\Phi$ at $g$ if and only if $\eta(\mathcal{E})$ determines $\Phi$ at $\eta(g)$. \item If $\mathcal{E}$ is an algebraic core for $\eta^*\Phi$ then $\eta(\mathcal{E})$ is an algebraic core for $\Phi\res{\eta(\calG)}$. \end{aufzi} \end{lem} \begin{proof} a), b), and c) follow directly from the definition of $\eta^*\Phi$. \smallskip\noindent d) This follows since for $x,y\in X$ and $e \in \mathcal{E}$ \[ \Phi(\eta(e) \eta(g))x= \Phi(\eta(e))y \quad\Leftrightarrow \quad (\eta^*\Phi)(eg)x = (\eta^*\Phi)(e)y. \] e) This follows from c) and d). \end{proof} We have already remarked that $\eta^*\Phi$ is a proto-calculus, whenever $\Phi$ is one. The following example shows that, even if $\Phi$ is a calculus, $\eta^*\Phi$ need not be one. \begin{exa} Let $A$ be an unbounded operator with non-empty resolvent set $\varrho(A)$. Let $\calF$ be the algebra of all rational functions with poles in $\varrho(A)$ and let $\Phi$ be the natural calculus (as described, e.g., in \cite[Appendix A.6]{HaaseFC}). Let $\calG$ be the algebra of polynomial functions and $\eta: \calG \to \calF$ the inclusion mapping. Then $\eta^*\Phi = \Phi\res{\calG}$ is simply the restriction of $\Phi$ to $\calG$. And this is not a calculus, as the only functions in $\calG$ that yield bounded operators are the constant ones. \end{exa} We say that $\eta$ is {\bf $\Phi$-regular} if $\eta^*\Phi$ is a calculus. And a subalgebra $\calG$ of $\calF$ is called {\bf $\Phi$-regular}, if the restriction of $\Phi$ to $\calG$ is a calculus, i.e., if the inclusion mapping is $\Phi$-regular. \begin{cor}\label{urc.c.Phi-reg} In the situation of Lemma \ref{urc.l.pull-back}, the following statements are equivalent: \begin{aufzii} \item $\eta$ is a $\Phi$-regular mapping, i.e., $\eta^*\Phi$ is a calculus. \item $\eta(\calG)$ is a $\Phi$-regular subalgebra of $\calF$, i.e., $\Phi\res{\eta(\calG)}$ is a calculus. \end{aufzii} \end{cor} \begin{proof} This follows from a) and d) of Lemma \ref{urc.l.pull-back} with $\mathcal{E} = \bdd(\eta^*\Phi)$. \end{proof} \vanish{ \begin{lem}\label{urc.l.Phi-reg} Let $\calF, \calG$ be unital algebras, let $\Phi: \calF \to \mathcal{C}(X)$ be a calculus, and let $\eta: \calG \to \calF$ be a unital algebra homomorphism. Define $\mathcal{E} := \eta(\calG) \cap \bdd(\calF, \Phi)$. 
Then $\mathcal{E} = \eta(\bdd(\eta^*\Phi, \calG))$ and the following assertions are equivalent: \begin{aufzii} \item The mapping $\eta$ is a$\Phi$-regular \item The subalgebra $\eta(\calG)$ is $\Phi$-regular. \item The set \[ \mathcal{M} := \{ g\in \bdd(\calF, \Phi) \,\,|\,\, [g]_\mathcal{E} \,\,\text{is an anchor set}\} \] determines $\Phi$ on $\eta(\calG)$. \end{aufzii} This is the case, e.g., if $\bdd(\calF, \Phi) \subseteq \eta(\calG)$. \end{lem} \begin{proof} (i)$\Leftrightarrow$(ii): Note that $\mathcal{E} = \bdd(\eta(\calG), \Phi)$ and \[ \mathcal{E}':= \bdd(\calG, \eta^*\Phi) = \eta^{-1} \bdd(\calF, \Phi) = \eta^{-1}\mathcal{E}. \] A short argument reveals: $\mathcal{E}$ determines $\Phi$ on $\eta(\calG)$ if and only if $\mathcal{E}'$ determines $\eta^*\Phi$ on $\calG$. The first is equivalent to $\calG$ being $\Phi$-regular; the latter is equivalent to $\eta$ being $\Phi$-regular. \smallskip\noindent (ii)$\Rightarrow$(iii): For each $e\in \mathcal{E}$ one has $[e]_\mathcal{E} = \mathcal{E}$ (since $\mathcal{E}$ is a subalgebra of $\calF$) and this is an anchor set (since $\mathbf{1} \in \mathcal{E}$). It follows that $\mathcal{E} \subseteq \mathcal{M}$ and hence the claim. \smallskip\noindent (iii)$\Rightarrow$(ii): By definition, $\mathcal{M}$ is anchored in $\mathcal{E}$. So by Theorem \ref{anc.t.EM}.a) it follows that $\mathcal{E}$ determines $\Phi$ on $\eta(\calG)$. \end{proof} } It seems that, in general, one cannot say much more. However, here is an interesting special case, when one can simplify assumptions. \begin{thm}\label{urc.t.pull-back-com} Let $\calF$ be a commutative unital algebra and $\Phi: \calF \to \mathcal{C}(X)$ a calculus. Let $\calG$ be a unital subalgebra of $\calF$ such that $\reg(g, \Phi)\cap \calG$ is an anchor set for each $g\in \calG$. Then $\calG$ is $\Phi$-regular, i.e., $\Phi\res{\calG}$ is a calculus. \end{thm} \begin{proof} This follows immediately from Theorem \ref{anc.t.com}. \end{proof} In view of Lemma \ref{urc.l.pull-back} we obtain the following consequence. \begin{cor}\label{urc.c.com} Let $\calF$ be a commutative unital algebra and $\Phi: \calF \to \mathcal{C}(X)$ a calculus. Furthermore, let $\calG$ be a unital algebra and $\eta: \calG \to \calF$ a unital homomorphism. Then $\eta$ is $\Phi$-regular if and only if for each $g\in \calG$ the set $\reg(\eta(g),\Phi) \cap \eta(\calG)$ is an anchor set. \end{cor} \vanish{ \begin{proof} In view of Lemma \ref{urc.l.pull-back} we may suppose without loss of generality that $\calG \subseteq \calF$ and $\eta$ is the inclusion mapping. Fix $f\in \calG$ and suppose that $x,y\in X$ are such that $\Phi(ef)x= \Phi(e)y$ for all $e\in \calG \cap \reg(f,\Phi)$. For each $g\in \reg(f, \Phi)$ one has \begin{align*} \Phi(e)\Phi(gf)x &= \Phi(egf)x = \Phi(gef)x = \Phi(g)\Phi(ef)x = \Phi(g)\Phi(e)y \\ & = \Phi(eg)y = \Phi(eg)y= \Phi(e)\Phi(g)y. \end{align*} By assumption, $\reg(f, \Phi)\cap \calG$ is an anchor set, hence it follows that $\Phi(gf)x= \Phi(g)y$. As $\reg(f, \Phi)$ determines $\Phi(f)$, we obtain $\Phi(f)x = y$ as claimed. \end{proof} } \medskip \subsection{Compatibility and Composition Rules} Suppose one has set up a functional calculus $\Phi= ( f\mapsto f(A))$ for an operator $A$ and a second functional calculus $\Psi= (g \mapsto g(B))$ for an operator $B$ which is of the form $B = f(A)$. Then one would expect a ``composition rule'' of the form $g(B) = (g\circ f)(A)$. 
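(A typical instance to keep in mind, mentioned here only for illustration: $B = A^2$, where the expected rule reads $g(A^2) = h(A)$ with $h(z) := g(z^2)$.)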
This amounts to the identity $\Psi = \eta^*\Phi$, where $\eta = (g \mapsto g\circ f)$ is an algebra homomorphism that links the domain algebras of the two calculi. The following theorem, which is basically just a combination of results obtained so far, yields criteria for this composition rule to hold true. \begin{thm}\label{urc.t.com} Let $\calF$ and $\calG$ be unital algebras and $\eta: \calG \to \calF$ a unital algebra homomorphism. Furthermore, let $\Phi: \calF \to \mathcal{C}(X)$ and $\Psi: \calG\to \mathcal{C}(X)$ be proto-calculi, and let $\mathcal{E}$ be an algebraic core for $\Psi$ such that \[ \Phi(\eta(e)) = \Psi(e) \qquad (e\in \mathcal{E}). \] Then the following statements are equivalent: \begin{aufzii} \item $\Phi \circ \eta = \Psi$. \item $\eta(\calG)$ is a $\Phi$-regular subalgebra of $\calF$. \item $\eta(\mathcal{E})$ is an algebraic core for the restriction of $\Phi$ to $\eta(\calG)$. \end{aufzii} Moreover, {\rm (i)--(iii)} hold true if, e.g., $\Phi$ is a calculus and $\calF$ is commutative. \end{thm} \begin{proof} (i)$\Rightarrow$(iii): Since $\mathcal{E}$ is an algebraic core for $\Psi$ and, by (i), $\Psi = \eta^*\Phi$, the set $\eta(\mathcal{E})$ must be an algebraic core for $\Phi$ on $\eta(\calG)$, by e) of Lemma \ref{urc.l.pull-back}. \smallskip\noindent (iii)$\Rightarrow$(ii): If (iii) holds then (ii) follows a fortiori. \smallskip\noindent (ii)$\Rightarrow$(i): If (ii) holds, then $\eta^*\Phi$ is a calculus. Also, by hypothesis, $\Psi$ is a calculus. These calculi agree on $\mathcal{E}$, and $\calG$ is anchored in $\mathcal{E}$ (since $\mathcal{E}$ is an algebraic core for $\Psi$). Hence, by the Uniqueness Theorem \ref{urc.t.uni}, $\eta^*\Phi = \Psi$, i.e., (i). \smallskip\noindent Finally, suppose that $\Phi$ is a calculus and $\calF$ is commutative. Let $g \in \calG$. Then $[g]_\mathcal{E}$ is a $\Psi$-anchor set. Hence, $[\eta(g)]_{\eta(\calG)}$ is a $\Phi$-anchor set. By Corollary \ref{urc.c.com}, $\eta$ is $\Phi$-regular, i.e., (ii). \end{proof} See also Theorem \ref{ext.t.succ-comp} below for more refined compatibility criteria. \section{Algebraic Extension}\label{s.ext} From now on, we suppose that $\mathcal{E}$, $\calF$ and $\Phi$ are such that \begin{aufziii} \item $\calF$ is a unital algebra, \item $\mathcal{E}$ is a (not necessarily unital) subalgebra of $\calF$, \item $\Phi: \mathcal{E} \to \mathcal{L}(X)$ is an algebra representation. \end{aufziii} Our goal is to give conditions on $\calF$ such that a given representation $\Phi: \mathcal{E} \to \mathcal{L}(X)$ can be extended to an $\calF$-calculus in a unique way. A glance at Theorem \ref{anc.t.main} leads us to hope that it might be helpful to require in addition to 1)--3) also: \begin{aufziii}\setcounter{aufziii}{3} \item Each $f\in \calF$ is anchored in $\mathcal{E}$. \end{aufziii} The next result shows that under these assumptions there is indeed a unique calculus on $\calF$ extending $\Phi$. \begin{thm}[Extension Theorem]\label{ext.t.ext} Let $\calF$ be a unital algebra and $\mathcal{E} \subseteq \calF$ a subalgebra. Furthermore, let $X$ be a Banach space and let $\Phi: \mathcal{E} \to \mathcal{L}(X)$ be an algebra homomorphism such that $\calF$ is anchored in $\mathcal{E}$. Then there is a unique calculus \[ \fourier{\Phi}: \calF \to \mathcal{C}(X) \] such that $\fourier{\Phi}\res{\mathcal{E}} = \Phi$. \end{thm} \begin{proof} Uniqueness follows directly from Theorem \ref{urc.t.uni}. Moreover, since $\mathcal{E}$ is multiplicative, for each $f\in \calF$ the set $[f]_\mathcal{E}$ must determine $\fourier{\Phi}(f)$ (by Theorem \ref{anc.t.main}).
Hence, for the existence proof we have no other choice than to {\em define} \begin{equation}\label{afc.eq.ext-def} \fourier{\Phi}(f)x = y \quad \stackrel{\text{\rm def}}{\Longleftrightarrow}\quad \forall\, e\in [f]_\mathcal{E} : \Phi(ef)x= \Phi(e)y \end{equation} for any $x,y\in X$ and $f\in \calF$. Note that since $[f]_\mathcal{E}$ is an anchor set, $\fourier{\Phi}(f) \in \mathcal{C}(X)$. It remains to show that $\fourier{\Phi}$ extends $\Phi$ and satisfies (FC1)--(FC4). \smallskip\noindent (FC1):\ By hypothesis, $[\mathbf{1}]_\mathcal{E} = \mathcal{E}$ is an anchor set. Hence, $x=y$ is equivalent to \[ \Phi(e)x= \Phi(e)y \qquad \text{for all $e\in \mathcal{E}$}, \] which, by definition \eqref{afc.eq.ext-def}, is equivalent to $\fourier{\Phi}(\mathbf{1})x= y$. \smallskip\noindent Next, let us show that $\fourier{\Phi}$ extends $\Phi$. To that end, let $f\in \mathcal{E}$. Then $[f]_\mathcal{E} = \mathcal{E}$, and $\fourier{\Phi}(f)x = y$ is equivalent to \[ \Phi(e)y = \Phi(ef)x = \Phi(e)\Phi(f)x \quad \text{for all $e\in \mathcal{E}$}, \] which is equivalent to $y= \Phi(f)x$ (since $\mathcal{E}$ is an anchor set). \smallskip\noindent (FC2): Let $\lambda \in \mathbb{C}$ and $f\in \calF$ and take $x,y\in X$ with $\lambda\fourier{\Phi}(f)x = y$. We need to show that $\fourier{\Phi}(\lambda f)x = y$. This is clear if $\lambda = 0$. If $\lambda \neq 0$ we find $\fourier{\Phi}(f)x = \lambda^{-1}y$ and hence $\Phi(ef)x= \Phi(e)(\lambda^{-1}y)$, or, equivalently, \[ \Phi(e (\lambda f))x = \Phi(e)y \] for every $e\in [f]_\mathcal{E} = [\lambda f]_\mathcal{E}$. And the latter statement just says that $\fourier{\Phi}(\lambda f)x = y$, as desired. Now pick $f, g\in \calF$ and suppose that $\fourier{\Phi}(f)x= y$ and $\fourier{\Phi}(g)x = z$. Take $h\in [f + g]_\mathcal{E}$ and $e\in [hf]_\mathcal{E}$. Then $eh\in [f]_\mathcal{E} \cap [g]_\mathcal{E}$ and hence \[ \Phi(ehf)x = \Phi(eh)y \quad \text{and}\quad \Phi(ehg)x = \Phi(eh)z. \] This yields \[ \Phi(e) \Phi(h(f+g))x = \Phi(ehf)x + \Phi(ehg)x= \Phi(eh)(y+z) = \Phi(e) \Phi(h)(y+z). \] Since $[hf]_\mathcal{E}$ is an anchor set, it follows that \[ \Phi(h(f+g))x = \Phi(h)(y+z) \] and since $h\in [f+g]_\mathcal{E}$ was arbitrary, we arrive at $\fourier{\Phi}(f+g)x = y + z$. \smallskip\noindent (FC3): Let $\fourier{\Phi}(g)x = y$ and $\fourier{\Phi}(f)y = z$, and let $h\in [fg]_\mathcal{E}$. Then for each $e\in [hfg]_\mathcal{E}$ and $e'\in[ehf]_\mathcal{E}$ one has $e'e h\in [f]_\mathcal{E}$ and $e'ehf \in [g]_\mathcal{E}$ and hence \[ \Phi(e') \Phi(e) \Phi(hfg)x = \Phi(e'e h fg)x = \Phi(e'ehf)y = \Phi( e'eh)z = \Phi(e')\Phi(e)\Phi(h)z. \] Since $[ehf]_\mathcal{E}$ is an anchor set, $\Phi(e) \Phi(hfg)x = \Phi(e)\Phi(h)z$, and since $[hfg]_\mathcal{E}$ is an anchor set, $\Phi(hfg)x = \Phi(h)z$. All in all we conclude that $\fourier{\Phi}(fg)x = z$. This proves the inclusion \[ \fourier{\Phi}(f) \fourier{\Phi}(g) \subseteq \fourier{\Phi}(fg). \] A corollary to that is the domain inclusion \[ \dom(\fourier{\Phi}(f) \fourier{\Phi}(g)) \subseteq \dom(\fourier{\Phi}(g)) \cap \dom(\fourier{\Phi}(fg)). \] For the converse, suppose that $x \in \dom(\fourier{\Phi}(g)) \cap \dom(\fourier{\Phi}(fg))$ and define $y,z \in X$ by \[ \fourier{\Phi}(g)x = y\quad \text{and} \quad \fourier{\Phi}(fg)x = z. \] Let $e\in [f]_\mathcal{E}$ and $e'\in [efg]_\mathcal{E}$. Then $e'e f\in [g]_\mathcal{E}$ and hence $\Phi(e'efg)x = \Phi(e'ef)y$. Also, $e'e\in [fg]_\mathcal{E}$ and hence $\Phi(e'efg)x = \Phi(e'e)z$.
It follows that \[ \Phi(e')\Phi(ef)y = \Phi(e'ef)y = \Phi(e'efg)x = \Phi(e'e)z= \Phi(e') \Phi(e)z. \] Since $[efg]_\mathcal{E}$ is an anchor set, $\Phi(ef)y = \Phi(e)z$. Since $e\in [f]_\mathcal{E}$ was arbitrary, $\fourier{\Phi}(f)y = z$, and hence $x\in \dom(\fourier{\Phi}(f) \fourier{\Phi}(g))$. \smallskip\noindent (FC4) is satisfied by construction. This concludes the proof. \end{proof} \medskip \subsection{The Maximal Anchored Subalgebra} In practice, $\calF$ may be too large and may fail to satisfy the anchor-condition 4). In this case one might look for the maximal subalgebra of $\calF$ which is anchored in $\mathcal{E}$. However, it is not obvious that such an object exists. To see that it does, let us define \begin{equation}\label{ext.eq.ancgen-def} \ancgen{\mathcal{E},\calF,\Phi} := \{ f\in \calF \,\,|\,\, \forall\, e\in \mathcal{E} : [ef]_\mathcal{E} \,\,\text{is a $\Phi$-anchor set}\}. \end{equation} \begin{lem}\label{ext.l.non-deg} Let $\calF$ be a unital algebra, $\mathcal{E}\subseteq \calF$ a subalgebra and $\Phi: \mathcal{E} \to \mathcal{L}(X)$ an algebra representation. Then the following statements are equivalent: \begin{aufzii} \item $\mathcal{E}$ is an anchor set. \item $\mathbf{1}$ is anchored in $\mathcal{E}$. \item $\mathcal{E} \neq \emptyset$ and $\ancgen{\mathcal{E},\calF,\Phi} \neq \emptyset$. \end{aufzii} \end{lem} \begin{proof} Straightforward. \end{proof} The algebra representation $\Phi: \mathcal{E} \to \mathcal{L}(X)$ is called {\bf non-degenerate} if (i)--(iii) from Lemma \ref{ext.l.non-deg} are satisfied, otherwise {\bf degenerate}. \begin{rem}\label{ext.r.degenerate} If $\Phi$ is degenerate then there are two possibilities: First case: $\mathbf{1} \notin \mathcal{E}$. Then $\mathcal{E}':= \mathcal{E} \oplus \mathbb{C} \mathbf{1}$ is a unital subalgebra of $\calF$, and \[ \fourier{\Phi}(f) := \Phi(e) + \lambda \mathrm{I},\qquad f = e + \lambda\mathbf{1},\, e\in \mathcal{E},\,\lambda \in \mathbb{C}, \] defines a unital representation $\fourier{\Phi}: \mathcal{E} \oplus \mathbb{C}\mathbf{1} \to \mathcal{L}(X)$. This new representation is clearly non-degenerate if $X \neq \{0\}$. Second case: $\mathbf{1} \in \mathcal{E}$. Then $P := \Phi(\mathbf{1})$ is a projection and one can restrict the representation to $\mathcal{L}(Y)$, where $Y:= \ran(P)$. All in all we see that degenerate representations can be neglected. \end{rem} \begin{thm}\label{ext.t.ancgen} Let $\calF$ be a unital algebra, $\mathcal{E} \subseteq \calF$ a subalgebra and $\Phi: \mathcal{E} \to \mathcal{L}(X)$ a non-degenerate algebra representation. Then $\ancgen{\mathcal{E}, \calF, \Phi}$ is a unital subalgebra of $\calF$ containing $\mathcal{E}$ and anchored in $\mathcal{E}$. Moreover, $\ancgen{\mathcal{E}, \calF, \Phi}$ contains each unital subalgebra of $\calF$ with these properties. \end{thm} \begin{proof} For the proof we abbreviate $\calF' := \ancgen{\mathcal{E},\calF,\Phi}$. \smallskip\noindent Suppose that $\calF_0$ is a unital subalgebra of $\calF$ that contains $\mathcal{E}$ and is anchored in $\mathcal{E}$. If $f\in \calF_0$ and $e\in \mathcal{E}$ then $ef\in \calF_0$ again and hence $[ef]_\mathcal{E}$ is an anchor set. This shows that $\calF_0 \subseteq \calF'$. \smallskip\noindent As $\Phi$ is non-degenerate, $\mathcal{E} \subseteq \calF'$ and $\mathbf{1} \in \calF'$. Let $f\in \calF'$. Then \[ \bigcup_{e\in \mathcal{E}} [ef]_\mathcal{E} e \subseteq [f]_\mathcal{E}.
\] For each $e\in \mathcal{E}$, $[ef]_\mathcal{E}$ is an anchor set (since $f\in \calF'$) and $\mathcal{E}$ is an anchor set (since $\Phi$ is non-degenerate). It follows that $[f]_\mathcal{E}$ is an anchor set as well. Since $f\in \calF'$ was arbitrary, $\calF'$ is anchored in $\mathcal{E}$. \smallskip\noindent It remains to show that $\calF'$ is a subalgebra of $\calF$. To this end, fix $f,g\in \calF'$. Then \[ \bigcup_{e\in [f]_\mathcal{E}} [eg]_\mathcal{E} e \subseteq [f]_\mathcal{E} \cap [g]_\mathcal{E} \subseteq [f+g]_\mathcal{E}. \] It follows that $[f+g]_{\mathcal{E}}$ is an anchor set. Since by definition $\mathcal{E} \cdot \calF' \subseteq \calF'$, it follows that $[d(f+g)]_\mathcal{E} = [df + dg]_\mathcal{E}$ is an anchor set for each $d\in \mathcal{E}$. Hence, $f+g \in \calF'$. Likewise, the inclusion \[ \bigcup_{e\in [f]_\mathcal{E}} [efg]_\mathcal{E} e \subseteq [fg]_\mathcal{E} \] implies that $[fg]_\mathcal{E}$ is an anchor set. Since, as above, one can replace $f$ here by $df$ for each $d\in \mathcal{E}$, it follows that $fg\in \calF'$. \end{proof} \begin{rem}\label{ext.r.ancgen} Let, as before, $\calF$ be a unital algebra, $\mathcal{E} \subseteq \calF$ a subalgebra and $\Phi: \mathcal{E} \to \mathcal{L}(X)$ a non-degenerate representation. Then: \begin{aufzi} \item {\em $\ancgen{\mathcal{E}, \calF, \Phi}$ contains each $f\in \calF$ such that $\mathrm{Z}(\mathcal{E}) \cap [f]_\mathcal{E}$ is an anchor set.} \item {\em If $\calF$ is commutative, then \quad $ \ancgen{\mathcal{E},\calF,\Phi} = \{ f\in \calF \,\,|\,\, [f]_\mathcal{E} \,\,\text{is an anchor set}\}$.} \end{aufzi} Indeed, a) follows from the inclusion \[ \mathrm{Z}(\mathcal{E}) \cap [f]_\mathcal{E} \subseteq [ef]_\mathcal{E} \quad \text{for all $e\in \mathcal{E}$}, \] which is easy to establish. And b) follows from a). This shows that our present approach generalizes the one in \cite[Chapter 7]{HaaseLFC}. \end{rem} \vanish{ This remark is relevant, e.g. for the situation when $\calF$ consists of operator-valued functions defined on some subset of $\mathbb{C}$ and $\mathcal{E}$ is characterized by a pure growth condition. Then $\mathrm{Z}(\mathcal{E})$ contains scalar-valued functions which may suffice to regularize an operator-valued function $f$ multiplicatively.} Let us summarize the results of this section by combining Theorems \ref{ext.t.ext} and \ref{ext.t.ancgen}. \begin{cor}\label{ext.c.ancgen} Let $\calF$ be a unital algebra, $\mathcal{E} \subseteq \calF$ a subalgebra and $\Phi: \mathcal{E} \to \mathcal{L}(X)$ a non-degenerate representation. Then there is a unique extension $\fourier{\Phi}$ of $\Phi$ to a calculus on $\ancgen{\mathcal{E}, \calF, \Phi}$. Moreover, $\mathcal{E}$ is an algebraic core for $\fourier{\Phi}$. \end{cor} Corollary \ref{ext.c.ancgen} allows us to extend any non-degenerate representation $\Phi$ of a subalgebra $\mathcal{E}$ of a unital algebra $\calF$ to the subalgebra $\ancgen{\mathcal{E},\calF,\Phi}$ of $\mathcal{E}$-anchored elements. We shall call this the {\bf canonical extension} of $\Phi$ within $\calF$, and denote it again by $\Phi$ (instead of $\fourier{\Phi}$ as in the corollary). \medskip \subsection{Successive Extensions}\label{ext.s.succ} Very often, one performs an algebraic extension in a situation where a calculus is already present. The following situation is most common: Let $\calF$ be a unital subalgebra of a unital algebra $\calG$, and let $\mathcal{E} \subseteq \calF$ be a subalgebra which is an algebraic core for a calculus $\Phi: \calF \to \mathcal{C}(X)$.
Furthermore, let $\mathcal{E}'$ be a subalgebra of $\calG$ and $\Psi: \mathcal{E}' \to \mathcal{L}(X)$ a representation with \[ \mathcal{E} \subseteq \mathcal{E}', \qquad \Psi\res{\mathcal{E}} = \Phi\res{\mathcal{E}}. \] Then $\Psi$ is non-degenerate, and one can perform an algebraic extension within $\calG$, yielding \[ \calF':= \ancgen{\mathcal{E}',\calG, \Psi}. \] We denote the extension again by $\Psi$. The following picture illustrates the situation\footnote{Observe that since $\mathcal{E}$ is an algebraic core for $\Phi$, the calculus on $\calF$ can be considered an algebraic extension of $\Phi\res{\mathcal{E}}$. Hence the title ``Successive Extensions''.}: \[ \xymatrix{ & \calG \\ \calF \ar@{-}[ur]& \quad \calF':= \ancgen{\mathcal{E}',\calG, \Psi} \ar@{-}[u]\\ & \mathcal{E}' \ar@{-}[u]\\ \mathcal{E} \ar@{-}[ur]\ar@{-}[uu]& } \] For a function $f\in \calF$ one may ask under which conditions one has $f\in \calF'$ and $\Psi(f) = \Phi(f)$. The following result gives an answer. \begin{thm}\label{ext.t.succ-comp} In the situation described above, let $f\in \calF$. Then \[ f\in \calF' \quad \text{and}\quad\Phi(f) = \Psi(f) \] if any one of the following conditions is satisfied: \begin{aufziii} \item $f\in \calF'$ and $\Psi(f)\in \mathcal{L}(X)$. \item $f\in \calF'$ and $\Psi(f)$ is densely defined and $\Phi(f) \in \mathcal{L}(X)$. \item For each $e'\in \mathcal{E}'$ there is a $\Psi$-anchor set $\mathcal{M}_{e'}\subseteq \mathcal{E}'$ such that $\mathcal{M}_{e'} e'\subseteq \calD'\cdot [f]_\mathcal{E}$, where \[ \calD' := \{ d'\in \calF' \,\,|\,\, d'\cdot\mathcal{E} \subseteq \mathcal{E}'\}. \] \item $\mathcal{E}'= \mathcal{E}$. \item $\mathrm{Z}(\mathcal{E}') \cap [f]_\mathcal{E}$ is an anchor set. \item $\mathcal{E} \subseteq \mathrm{Z}(\mathcal{E}')$. \item $\mathcal{E}'$ is commutative. \end{aufziii} \end{thm} \begin{proof} 1) and 2): If $f \in \calF \cap \calF'$ then $\Psi(f) \subseteq \Phi(f)$ by Lemma \ref{urc.l.uni}. Then 1) is sufficient since $\Phi(f)$ is an operator, and 2) is sufficient since $\Psi(f)$ is closed. \smallskip\noindent 3)\ We prove first that $f\in \calF'$. Let $e'\in \mathcal{E}'$. Take $\mathcal{M}_{e'}$ as in the hypotheses. Then \[ \mathcal{M}_{e'}e'f \subseteq \calD' [f]_\mathcal{E} f \subseteq \calD'\mathcal{E} \subseteq \mathcal{E}'. \] It follows that $\mathcal{M}_{e'} \subseteq [e'f]_{\mathcal{E}'}$. Since $e'\in \mathcal{E}'$ was arbitrary, $f\in \calF'$. (Recall that $\calF' =\ancgen{\mathcal{E}',\calG,\Psi}$ and cf. \eqref{ext.eq.ancgen-def}). \smallskip\noindent For the identity $\Phi(f) = \Psi(f)$ it suffices to show that $[f]_\mathcal{E}$ determines $\Psi(f)$. But this follows directly from Proposition \ref{anc.p.EM}, part 2), with $(\Phi, \calF)$ replaced by $(\Psi, \calF')$ and $\mathcal{M} := \mathcal{E}'$. \smallskip\noindent Let us now examine the cases 4)--7). In case 4), one has $\mathcal{E} = \mathcal{E}'$ and one can take $\mathcal{M}_{e'} = [e'f]_\mathcal{E}$ for $e'\in \mathcal{E}$ in 3). In case 5) one can take $\mathcal{M}_{e'} = \mathrm{Z}(\mathcal{E}') \cap [f]_\mathcal{E}$ independently of $e'\in \mathcal{E}'$. Case 6) is an instance of case 5), since $[f]_\mathcal{E}$ is an anchor set by the assumption that $\mathcal{E}$ is an algebraic core for $\Phi$ on $\calF$. Finally, case 7) obviously implies case 6). \vanish{ \smallskip\noindent Next, note that $\Psi(f) \subseteq \Phi(f)$ by Lemma \ref{urc.l.uni} (applied to $\calF \cap \calF'$).
To prove the converse inclusion, take $x,y\in X$ with $\Phi(f)x = y$ and $e'\in [f]_{\mathcal{E}'}$. We need to show that $\Psi(e'f)x= \Psi(e')y$. To this end, fix $c' \in \mathcal{M}_{e'}$ as in the hypothesis. By assumption, there is $d'\in\calD'$ and $e\in [f]_\mathcal{E}$ such that $c'e'= d'e$. Since $\Phi(f)x= y$ we obtain $\Phi(ef)x= \Phi(e)y$ and hence \begin{align*} \Psi(c')\Phi(e'f)x & = \Psi( c'e'f)x = \Psi(d'ef)x = \Psi(d') \Psi(ef)x = \Psi(d') \Phi(ef)x \\ & = \Psi(d') \Phi(e)y = \Psi(d') \Psi(e)y = \Psi(d'e)y = \Psi(c'e ')y \\ & = \Psi(c')\Psi(e')y. \end{align*} Since, by assumption, $\mathcal{M}_{e'}$ is an anchor set for $\Psi$, it follows that $\Psi(e'f)x = \Psi(e')y$ as desired. } \end{proof} \vanish{ \medskip \subsection*{Admissible Subalgebras} Is the canonical extension necessarily consistent with an already given calculus? In general, the answer might be ``no'' (although we do not know of an explicit counterexample). The following lemma is the best we can achieve at present. \begin{lem}\label{ext.l.admis} Let $\calF$ be a unital algebra, $\Phi: \calF \to \mathcal{C}(X)$ a proto-calculus and $\mathcal{E}\subseteq \bdd(\calF;\Phi)$ a subalgebra on which $\Phi$ is non-degenerate. Then one can restrict $\Phi$ to $\mathcal{E}$ and consider its canonical extension $\fourier{\Phi\res{\mathcal{E}}}$ to $\ancgen{\mathcal{E}, \calF,\Phi} \subseteq \calF$. The following assertions are equivalent: \begin{aufzii} \item $\fourier{\Phi\res{\mathcal{E}}} = \Phi\res{\ancgen{\mathcal{E},\calF,\Phi}}$. \item $\ancgen{\mathcal{E}, \calF, \Phi}$ is $\Phi$-regular, i.e., the restriction of $\Phi$ to $\ancgen{\mathcal{E}, \calF,\Phi}$ is a calculus. \item The set $\reg(f,\Phi) \cap \ancgen{\mathcal{E},\calF,\Phi}$ is $\Phi$-determining for $f$, for each $f\in \ancgen{\mathcal{E}, \calF,\Phi}$. \end{aufzii} \end{lem} \begin{proof} The implication (i)$\Rightarrow$(ii) holds since by Theorem \ref{ext.t.ext} the canonical extension is a calculus. The converse follows from the uniqueness part of that theorem. And the equivalence (ii)$\Leftrightarrow$(iii) follows immediately from the definition of a calculus, since $\reg(f;\Phi) \cap \ancgen{\mathcal{E},\calF,\Phi} = \reg(f;\Phi\res{\ancgen{\mathcal{E}, \calF,\Phi}})$. \end{proof} In the situation of Lemma \ref{ext.l.admis}, if $\mathcal{E}$ is such that (i)--(iii) hold, then we call $\mathcal{E}$ a {\bf $\Phi$-admissible} subalgebra of $\calF$. In general, it may be difficult to identify admissible subalgebras. However, if $\calF$ is commutative and $\Phi$ is a calculus (and not just a proto-calculus), then the situation is simple: \begin{cor}\label{ext.c.admis-comm} Let $\calF$ be a commutative unital algebra, $\Phi: \calF \to \mathcal{C}(X)$ an $\calF$-calculus on a Banach space $X$ and $\mathcal{E} \subseteq \bdd(\calF,\Phi)$ a subalgebra on which $\Phi$ is non-degenerate. Then $\mathcal{E}$ is admissible. \end{cor} \begin{proof} By construction, the algebra $\calF':= \ancgen{\mathcal{E}, \calF, \Phi}$ is anchored in $\mathcal{E}$ and $\mathcal{E} \subseteq \bdd(\calF,\Phi) \cap \calF'$. Hence, Theorem \ref{urc.t.pull-back-com} yields that $\calF'$ is $\Phi$-regular. \end{proof} \begin{rem} At present it is unkown whether Corollary \ref{ext.c.admis-comm} holds without the assumption of commutativity. 
Related to this is the question of what happens if one performs an extension, say from $\mathcal{E}$ to $\calF':= \ancgen{\mathcal{E}, \calF, \Phi}$, then takes a subalgebra $\mathcal{E}'$ of $\Phi$-bounded elements of $\calF'$ and performs another extension, now starting with $\mathcal{E}'$. If $\calF$ is commutative, then nothing strange can happen, and no ``new'' functions are included in the domain of the functional calculus. (This is an easy exercise.) \end{rem} }%
\section{Approximate Identities}\label{s.api} Let $\calF$ be a {\em commutative} unital algebra, $\Phi: \calF \to \mathcal{C}(X)$ a proto-calculus, and $\mathcal{E} \subseteq \bdd(\calF,\Phi)$ a subset of $\Phi$-bounded elements. A sequence $(e_n)_n$ in $\mathcal{E}$ is called a {\bf (weak) approximate identity} in $\mathcal{E}$ (with respect to $\Phi$), if $\Phi(e_n) \to \mathrm{I}$ strongly (weakly) as $n \to \infty$. \medskip Let $f\in \calF$. A (weak) approximate identity $(e_n)_n$ is said to be a (weak) approximate identity {\bf for $f$}, if \[ \Phi(e_n)\Phi(f) \subseteq \Phi(f)\Phi(e_n) = \Phi(fe_n) \in \mathcal{L}(X) \quad \text{for all $n \in \mathbb{N}$}. \] More generally, $(e_n)_n$ is said to be a {\bf common} (weak) approximate identity {\em for} all the elements of a subset $\mathcal{M} \subseteq \calF$, if $(e_n)_n$ is a (weak) approximate identity for each $f\in \mathcal{M}$. Finally, we say that $f\in \calF$ {\bf admits} a (weak) approximate identity in $\mathcal{E}$ if there is a (weak) approximate identity for $f$ in $\mathcal{E}$. More generally, we say that the elements of a subset $\mathcal{M} \subseteq \calF$ {\bf admit} a common (weak) approximate identity in $\mathcal{E}$, if there is a common (weak) approximate identity in $\mathcal{E}$ for them. \medskip Note that by the uniform boundedness principle, a weak approximate identity $(e_n)_n$ is uniformly $\Phi$-bounded, i.e., satisfies $\sup_{n\in \mathbb{N}} \norm{\Phi(e_n)} < \infty$. \begin{lem}\label{api.l.api} Let $(e_n)_n$ be a weak approximate identity for $f\in \calF$ with respect to $\Phi$ and let \[ D := \mathrm{span}\bigcup_{n\in \mathbb{N}} \ran(\Phi(e_n)). \] Then the following assertions hold: \begin{aufzi} \item $\{e_n \,\,|\,\, n \in \mathbb{N}\}$ is an anchor set. \item $D$ is dense in $X$ and $D \subseteq \dom(\Phi(f))$. In particular, $\Phi(f)$ is densely defined. \item $\dom(\Phi(f)) \cap D$ is a core for $\Phi(f)$. If $(e_n)_n$ is an approximate identity for $f$ then $\Phi(e_n)x \to x$ within the Banach space $\dom(\Phi(f))$ for each $x\in \dom(\Phi(f))$. \item For all $n\in \mathbb{N}$ \[ \cls{\Phi(e_n)\Phi(f)} = \Phi(e_nf) = \Phi(fe_n) = \Phi(f)\Phi(e_n). \] \end{aufzi} \end{lem} \begin{proof} a) is trivial. For b), note first that $\Phi(f)\Phi(e_n) = \Phi(fe_n) \in \mathcal{L}(X)$ is everywhere defined, so $\ran(\Phi(e_n)) \subseteq \dom(\Phi(f))$ and hence $D \subseteq \dom(\Phi(f))$; the density of $D$ follows from Mazur's theorem, as $D$ is clearly weakly dense in $X$. \smallskip\noindent c)\ Let $x,y\in X$ with $\Phi(f)x = y$. Then by hypothesis, for each $n \in \mathbb{N}$ we have $(\Phi(e_n)x, \Phi(e_n)y) \in \Phi(f)$, so $\Phi(e_n)x\in D \cap \dom(\Phi(f))$. Since $(\Phi(e_n)x, \Phi(e_n)y) \to (x,y)$ weakly, the space $\Phi(f)\res{D}$---considered as a subspace of $X\oplus X$---is weakly dense in $\Phi(f)$. By Mazur's theorem again, this space is strongly dense, hence $D$ is a core for $\Phi(f)$. If $(e_n)_n$ is even a strong approximate identity, then $\Phi(e_n)x \to x$ and $\Phi(f)\Phi(e_n)x = \Phi(e_n)y \to y$ strongly. \smallskip\noindent d) Suppose that $\Phi(fe_n) \in \mathcal{L}(X)$ for all $n\in \mathbb{N}$.
Then $\Phi(f) \Phi(e_n) = \Phi(e_nf) \in \mathcal{L}(X)$, and hence $\ran(\Phi(e_n)) \subseteq \dom(\Phi(f))$. It follows that $D\subseteq \dom(\Phi(f))$, and $\Phi(f)$ is densely defined, by b). By hypothesis, \[ \Phi(e_n)\Phi(f) \subseteq \Phi(f) \Phi(e_n) = \Phi(fe_n). \] Since the left-most operator is densely defined, we obtain \[ \cls{\Phi(e_n)\Phi(f)} = \Phi(fe_n). \] On the other hand, by (FC3), \[ \Phi(e_n)\Phi(f) \subseteq \Phi(e_nf) \] and the latter is a closed operator. It follows that $\Phi(e_nf) = \Phi(fe_n)$ as claimed. \end{proof} By virtue of the preceding lemma, we obtain the following result. \begin{thm}\label{api.t.api} Let $\Phi: \calF \to \mathcal{C}(X)$ be a proto-calculus. \begin{aufzi} \item If $(e_n)_n$ is a (weak) approximate identity for $f,g\in \calF$, then it is a (weak) approximate identity for $f+g$ and $\lambda f$ ($\lambda \in \mathbb{C}$) and one has \[ \cls{\Phi(f) + \Phi(g)} = \Phi(f + g). \] \item If $(e_n)_n$ is a strong approximate identity for $f,g\in \calF$, then $(e_n^2)_n$ is a strong approximate identity for $fg$, and one has \[ \cls{\Phi(f) \Phi(g)} = \Phi(fg). \] \end{aufzi} \end{thm} \begin{proof} a)\ Since $\Phi(e_nf) = \Phi(fe_n)$ and $\Phi(e_ng) = \Phi(ge_n)$ are bounded, so is \[ \Phi(e_n(f+g)) = \Phi(e_n f + e_n g) = \Phi(e_nf) + \Phi(e_n g) = \dots = \Phi((f+g)e_n). \] It follows that $(e_n)_n$ is a (weak) approximate identity for $f+g$ and, hence, that $D$ is a core for $\Phi(f+g)$. But $D \subseteq \dom(\Phi(f)) \cap \dom(\Phi(g))$, and so we are done. \smallskip\noindent b) Since $(e_n)_n$ is an approximate identity, it is uniformly $\Phi$-bounded, and hence also $(e_n^2)_n$ is an approximate identity. Note that \begin{align*} \Phi(fge_n^2) & = \Phi(f(ge_n)e_n) = \Phi(f) \Phi(ge_n) \Phi(e_n) = \Phi(f) \Phi(e_ng) \Phi(e_n) \\ & = \Phi(fe_nge_n) = \Phi(fe_n) \Phi(ge_n) \in \mathcal{L}(X), \end{align*} and continuing the computation yields $\Phi(fge_n^2) = \Phi(e_n^2fg)$. This proves the first claim. The second follows easily. \end{proof} \vanish{ \begin{exa} If $\Phi: \mathcal{M}(K,\Sigma) \to \mathcal{L}(H)$ is any measurable functional calculus then each $f\in \mathcal{M}(K,\Sigma)$ admits an approximate identity in $\mathcal{E} = \calM_\mathrm{b}(K,\Sigma)$, for instance $e_n := \frac{n}{n + \abs{f}}$ or $e_n := \mathbf{1}_{\set{\abs{f}\le n}}$, $n\in \mathbb{N}$. \end{exa} \begin{exa} Suppose that $-A$ generates a bounded $C_0$-semigroup on a Banach space $X$ and $f\in \Mer(\mathbb{C}_+)$ is such that $f(A)$ is defined in the extended Hille--Phillips calculus for $A$. If \[ D_\infty := \bigcap_{n\ge 0} \dom(A^n) \] is contained in $\dom(f(A))$ then $f$ admits an approximate identity. In particular, $D_\infty$ is a core for $f(A)$ (Exercise \ref{mafc.ex.D-infty}). \end{exa} \vanish{ \section{The Generator of a Functional Calculus} Recall from Chapter \ref{c.afc} that an operator $A$ is called the generator of an $\calF$-calculus $\Phi$ if $\calF$ is a space of functions on a set $D\subseteq \mathbb{C}$, the function $\mathbf{z} \in \calF$ and $\Phi(\mathbf{z}) = A$. Later on, we extended this terminology towards the situation when $\Phi(\mathbf{z})$ is not, but $\Phi((\lambda - \mathbf{z})^{-1})$ is well defined for some $\lambda \in \mathbb{C}$. By virtue of the canonical extension, one can unify such auxiliary definitions of a generator, in the following way.
Suppose that $\calF$ is an algebra of functions on a set $D\subseteq \mathbb{C}$ and $\Phi: \mathcal{E} \to \mathcal{L}(X)$ is a non-degenerate representation, where $\mathcal{E}$ is a subalgebra of $\calF$. Then we call the operator $A$ the {\bf generator} of the calculus $\Phi$ if $\mathbf{z}$ is anchored in $\mathcal{E}$ with respect to $\Phi$ and $\fourier{\Phi}(\mathbf{z}) = A$. That is, $A$ is the generator of the canonical extension of $\Phi$. In applications, the following situation is typical: there is an element $g\in \mathcal{E}$ and some $\lambda \in \mathbb{C}\setminus D$ such that $e := (\lambda - \mathbf{z})^{-1}g \in \mathcal{E}$ is an anchor element. Since \[ e\cdot \mathbf{z} = -g + \lambda e \in \mathcal{E}, \] $e$ is an anchor element for $\mathbf{z}$. Hence, in such a situation, $\fourier{\Phi}(\mathbf{z}) = \Phi(e)^{-1} \Phi(e \cdot \mathbf{z})$ is the generator of $\Phi$. In particular, the above happens when $(\lambda - \mathbf{z})^{-1} \in \mathcal{E}$ and $\Phi( (\lambda - \mathbf{z})^{-1})$ is injective, since one can then take $e= (\lambda - \mathbf{z})^{-1}$. \begin{cor}\label{mafc.c.generator} Let $\calF$ be an algebra of functions on $D\subseteq \mathbb{C}$, let $\mathcal{E} \subseteq \calF$ be a subalgebra and let $\Phi: \mathcal{E} \to \mathcal{L}(X)$ be a representation. Suppose that there is an operator $A$ on $X$, a number $\lambda \in \varrho(A) \setminus D$ and $g\in \mathcal{E}$ such that $\Phi(g)$ is injective, $g\cdot (\lambda - \mathbf{z})^{-1} \in \mathcal{E}$ and \[ \Phi(g (\lambda - \mathbf{z})^{-1}) = \Phi(g) R(\lambda,A). \] Then $A$ is the generator of $\Phi$. \end{cor} \begin{proof} By what we have seen above, with $e := g \cdot (\lambda - \mathbf{z})^{-1}$ we have $ e \cdot \mathbf{z} = - g + \lambda e \in \mathcal{E}$ and $e$ is an anchor element for $\mathbf{z}$. It follows that \[ \Phi(e \mathbf{z}) = \Phi(- g + \lambda e) = -\Phi(g) + \lambda \Phi(g)R(\lambda,A)= \Phi(g) [ -\mathrm{I} + \lambda R(\lambda,A)]. \] Hence, \begin{align*} \fourier{\Phi}(\mathbf{z}) & = \Phi(e)^{-1} \Phi(e \mathbf{z}) = (\lambda - A) \Phi(g)^{-1} \Phi(g)[-\mathrm{I} + \lambda R(\lambda,A)] \\ & = (\lambda - A) [-\mathrm{I} + \lambda R(\lambda,A)] = A \end{align*} as claimed.\qed \end{proof} \bigskip \noindent The Extension Theorem will unfold its true power only in coming chapters. However, we can already review the calculi known so far in the light of Theorem \ref{mafc.t.ext}. } } \section{The Dual Calculus}\label{s.dua} For a calculus $(\calF, \Phi)$ on a Banach space $X$ one is tempted to define a ``dual calculus'' on $X'$ by letting $\Phi'(f) := \Phi(f)'$. This is premature in at least two respects. First, if $\Phi(f)$ is not densely defined, $\Phi(f)'$ is just a linear relation and not an operator. Secondly, even if the first problem is ruled out by appropriate minimal assumptions, it is not clear how to establish the formal properties of a calculus for $\Phi'$. \medskip To tackle these problems, we shall take a different route and define the dual calculus by virtue of the extension procedure described in Section \ref{s.ext}. To wit, let $\Phi:\calF \to \mathcal{C}(X)$ be any proto-calculus and let $\calB := \bdd(\calF, \Phi)$ be the set of $\Phi$-bounded elements. For $b \in \calB$ we define \[ \Phi'(b) := \Phi(b)' \in \mathcal{L}(X'). \] As $\calF$ may not be commutative, the mapping $\Phi'$ may not be a homomorphism for the original algebra structure.
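Indeed, since taking adjoints reverses the order of products of bounded operators, one only obtains \[ \Phi'(bc) = \Phi(bc)' = \bigl(\Phi(b)\Phi(c)\bigr)' = \Phi(c)'\Phi(b)' = \Phi'(c)\Phi'(b) \qquad (b, c\in \calB), \] i.e., $\Phi'$ is merely an antihomomorphism with respect to the original multiplication.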
To remedy this defect, we pass to the {\bf opposite algebra} $\calF^\mathrm{op}$, defined on the same set $\calF$ with the same linear structure but with the ``opposite'' multiplication \[ f \cdot_\mathrm{op} g = gf\qquad (g,f\in \calF). \] For any subset $\mathcal{M} \subseteq \calF$ we write $\mathcal{M}^\mathrm{op}$ when we want to consider $\mathcal{M}$ as endowed with this new multiplication. This applies in particular to $\calB$, whence we obtain \[ [f]_{\calB^\mathrm{op}} = \{ e\in \calB \,\,|\,\, e\cdot_\mathrm{op} f \in \calB\} = \{ e\in \calB \,\,|\,\, fe \in \calB\} \] for $f\in \calF^\mathrm{op}$. The mapping \[ \Phi': \calB^\mathrm{op} \to \mathcal{L}(X') \] is a unital algebra homomorphism. We can then pass to its canonical extension to the algebra \[ \calF' := \ancgen{\calB^\mathrm{op}, \calF^\mathrm{op}, \Phi'}; \] as usual, we shall denote that extension by $\Phi'$ again. The mapping \[ \Phi': \calF' \to \mathcal{L}(X') \] is called the {\bf dual calculus} associated with $\Phi$. By construction, it is a calculus (Theorem \ref{ext.t.ext}). \begin{thm}\label{dua.t.main} Let $(\calF, \Phi)$ be a proto-calculus with dual calculus $(\calF', \Phi')$, and let $f\in \calF'$. Define \[ D_f := \mathrm{span}\{ \Phi(e)x \,\,|\,\, x\in X, \,\,e\in [f]_{\calB^\mathrm{op}}\}. \] Then the following assertions hold: \begin{aufzi} \item $D_f$ is dense in $X$ and $D_f \subseteq \dom(\Phi(f))$. In particular, $\Phi(f)$ is densely defined. \item $\Phi(f)'\subseteq \Phi'(f)$, with equality if and only if $D_f$ is a core for $\Phi(f)$. \item $\Phi(f)$ is bounded if and only if $\Phi'(f)$ is bounded; in this case $\Phi'(f)= \Phi(f)'$. \end{aufzi} \end{thm} \begin{proof} a) Let $e \in [f]_{\calB^\mathrm{op}}$. Then $e, fe\in \calB$. Hence, $\ran(\Phi(e)) \subseteq \dom(\Phi(f))$. This yields the inclusion $D_f \subseteq \dom(\Phi(f))$. Since, by construction, $[f]_{\calB^\mathrm{op}}$ is a $\Phi'$-anchor set, one has \begin{equation} \bigcap_{e\in [f]_{\calB^\mathrm{op}}} \ker(\Phi(e)') = \{0\}. \end{equation} By a standard application of the Hahn--Banach theorem, $D_f$ is dense in $X$. \vanish{ \smallskip\noindent b) Let $e \in [f]_{\calB^\mathrm{op}}$ as before. Then, \[ \Phi'(e\cdot_\mathrm{op} f) = \Phi(fe)'= (\Phi(f) \Phi(e))' \supseteq \Phi(e)'\Phi(f)'= \Phi'(e)\Phi(f)'. \] It follows that if $x',y'\in X'$ are such that $\Phi(f)'x' = y'$, then \[ \Phi'(e\cdot_\mathrm{op} f)x'= \Phi'(e)y'\] for all $e\in [f]_{\calB^\mathrm{op}}$, and hence $\Phi'(f)x' = y'$ by construction of the dual calculus. } \smallskip\noindent b)\ Fix $x',y'\in X'$. Note the following equivalences: \begin{align*} \Phi'(f)x'= y' & \Leftrightarrow\,\, \forall e\in [f]_{\calB^\mathrm{op}} : \Phi'(e\cdot_\mathrm{op} f)x'= \Phi'(e) y' \\ & \Leftrightarrow\,\, \forall e\in [f]_{\calB^\mathrm{op}} : \Phi(fe)'x'= \Phi(e)' y' \\ & \Leftrightarrow\,\, \forall e\in [f]_{\calB^\mathrm{op}},\, z\in X : \dprod{\Phi(f)\Phi(e)z}{x'} = \dprod{\Phi(e)z}{y'} \\ & \Leftrightarrow\,\, (x',-y') \perp (\Phi(f) \cap (D_f \oplus X)), \end{align*} where we identify $\Phi(f)$ with its graph as a subset of $X \oplus X$. On the other hand, \[ \Phi(f)'x'= y' \,\,\Leftrightarrow\,\, (x',-y') \perp \Phi(f). \] From this it is evident that $\Phi(f)'\subseteq \Phi'(f)$. Furthermore, since both operators $\Phi(f)'$ and $\Phi'(f)$ are weak$^*$ closed, by the Hahn--Banach theorem one has equality if and only if $\Phi(f) \cap (D_f \oplus X)$ is dense in $\Phi(f)$. The latter just means that $D_f$ is a core for $\Phi(f)$.
\smallskip\noindent c)\ If $\Phi(f)$ is bounded, then so is $\Phi(f)'$, and hence by b) $\Phi'(f) = \Phi(f)'$. Suppose, conversely, that $\Phi'(f) \in \mathcal{L}(X')$. Since $\Phi'(f)$ has a closed graph for the weak$^*$ topology, it follows from Theorem \ref{sal.t.cgt} that there is $T\in \mathcal{L}(X)$ such that $\Phi'(f) = T'$. By b), $\Phi(f)'\subseteq T'$, which in turn implies that \[ T = T''\cap (X\oplus X) \subseteq \Phi(f)'' \cap (X\oplus X) = \Phi(f), \] since $\Phi(f)$ is closed, see \cite[Prop.A.4.2.d]{HaaseFC}. This implies that $\Phi(f) = T$, so $\Phi(f)$ is indeed bounded. \end{proof} A calculus $(\calF, \Phi)$ on a Banach space $X$ is called {\bf dualizable} if $\calF'= \calF^\mathrm{op}$, i.e., the dual calculus is defined on $\calF^\mathrm{op}$. Equivalently, $(\calF, \Phi)$ is dualizable if for each $f\in \calF$ and each $b \in \bdd(\calF, \Phi)$ the space \[ D_{fb} = \mathrm{span}\{ \Phi(e)x \,\,|\,\, x\in X,\, e, fbe \in \bdd(\calF, \Phi)\} \] is dense in $X$. For a dualizable calculus one has \[ \bdd(\calF, \Phi) = \bdd(\calF', \Phi') \] by c) of Theorem \ref{dua.t.main}. \medskip If $\Phi$ on $\calF$ is a non-dualizable calculus then ${\calF'}^\mathrm{op}$ (i.e., $\calF'$ with the original algebra structure) is a $\Phi$-regular subalgebra of $\calF$ (since it contains $\bdd(\calF, \Phi)$) and we may restrict $\Phi$ to this algebra. In a sense, ${\calF'}^\mathrm{op}$ is the largest subalgebra such that the restriction of $\Phi$ to it is a dualizable calculus. \section{Topological Extensions}\label{s.top} The algebraic extension procedure discussed in Section \ref{s.ext} is based on a ``primary'' or ``elementary'' calculus $\Phi: \mathcal{E} \to \mathcal{L}(X)$ that can be extended. In this section we discuss the form of possible other----topological---ways of extending a primary calculus. Whereas the algebraic extension is canonical when a superalgebra is given, a topological extension depends also on the presence of a given topological structure on the superalgebra. In the following we want to formalize the idea of a topological extension in such generality that the extant examples are covered. However, we admit that experience with topological extensions as such is scarce, so that the exposition given here is likely to be replaced by a better one some time in the future. \medskip Let $\calF$ be an algebra and $\Lambda$ a set. An {\bf (algebraic) convergence structure} on $\calF$ over $\Lambda$ is a relation \[ \tau \subseteq \calF^\Lambda\times \calF \] with the following properties: \begin{aufziii} \item $\tau$ is a subalgebra of $\calF^\Lambda \times \calF$. \item For each $f\in \calF$ the pair $((f)_{\lambda \in \Lambda}, f)$ is in $\tau$. (Here, $(f)_{\lambda \in \Lambda}$ is the constant family.) \end{aufziii} The convergence structure is called {\bf Hausdorff}, if $\tau$ is actually an operator and not just a relation. If $\Lambda = \mathbb{N}$, we speak of a {\bf sequential} convergence structure. Given a convergence structure $\tau$, one writes $f_\lambda \stackrel{\tau}{\to} f$ in place of $((f_\lambda)_\lambda, f) \in \tau$ and says that $(f_\lambda)_\lambda$ {\bf $\tau$-converges to} $f$. From 1) and 2) it follows that $\dom(\tau) \subseteq \calF^\Lambda$ is an algebra containing all constant families. The structure $\tau$ is Hausdorff if and only if one has \[ f_\lambda \stackrel{\tau}{\to} f,\, f_\lambda \stackrel{\tau}{\to} g \quad \Rightarrow\quad f=g. 
\] \medskip From now on, we consider the following situation: $\mathcal{E}'$ is a unital algebra, $\mathcal{E} \subseteq \mathcal{E}'$ is a subalgebra, and $\Phi: \mathcal{E} \to \mathcal{L}(X)$ is a representation; $\calA \subseteq \mathcal{L}(X)$ is a subalgebra such that $\Phi(\mathcal{E})\subseteq \calA$; and $\tau = (\tau_1, \tau_2)$ is a pair of convergence structures $\tau_1$ on $\mathcal{E}'$ and $\tau_2$ on $\calA$ over the same index set $\Lambda$. (The latter will be called a {\bf joint convergence structure} on $(\mathcal{E}',\calA)$ in the following.) In this situation, the set \[ \mathcal{E}^{\tau} := \{ f\in \mathcal{E}' \,\,|\,\, \exists\, (e_\lambda)_\lambda \, \text{in $\mathcal{E}$},\, T\in \calA : e_\lambda \stackrel{\tau_1}{\to} f,\,\, \Phi(e_\lambda) \stackrel{\tau_2}{\to} T\} \] is a subalgebra of $\mathcal{E}'$ containing $\mathcal{E}$. Suppose in addition that $\Phi$ is {\bf closable with respect to $\tau$}, which means that \begin{equation}\label{sec.eq.tau-closable} (e_\lambda)_\lambda \in \mathcal{E}^\Lambda,\,T\in \calA,\, e_\lambda \stackrel{\tau_1}{\to} 0,\, \Phi(e_\lambda) \stackrel{\tau_2}{\to} T\, \quad \Rightarrow\quad T=0. \end{equation} Then one can define the {\bf $\tau$-extension} $\Phi^\tau : \mathcal{E}^\tau \to \mathcal{L}(X)$ of $\Phi$ by \[ \Phi^\tau(f) := T \] whenever $(e_\lambda)_\lambda \in \mathcal{E}^\Lambda$, $e_\lambda \stackrel{\tau_1}{\to}f$, and $\Phi(e_\lambda) \stackrel{\tau_2}{\to} T$. (Indeed, \eqref{sec.eq.tau-closable} just guarantees that $\Phi^\tau$ is well-defined, i.e., $\Phi^\tau(f)$ does not depend on the chosen $\tau$-approximating family $(e_\lambda)_\lambda$.) \begin{thm}\label{top.t.top-ext} The mapping $\Phi^\tau: \mathcal{E}^\tau \to \mathcal{L}(X)$ so defined is an algebra homomorphism which extends $\Phi$. \end{thm} \begin{proof} Straightforward. \end{proof} In practice, one wants to combine a topological with an algebraic extension, and that raises a compatibility issue. To explain this, let us be more specific. \medskip \noindent Let $\mathcal{E}$ be an algebraic core for a calculus $\Phi: \calF \to \mathcal{C}(X)$, let $\calG$ be a superalgebra of $\calF$ and let $\mathcal{E}'$ be a subalgebra of $\calG$ containing $\mathcal{E}$: \[ \calF \subseteq \calG\quad \text{and}\quad \mathcal{E} \subseteq \mathcal{E}' \subseteq \calG. \] Suppose further that $\tau= (\tau_1, \tau_2)$ is a joint convergence structure on $(\mathcal{E}', \calA)$, where $\calA$ is a subalgebra of $\mathcal{L}(X)$ containing $\Phi(\mathcal{E})$, and that $\Phi\res{\mathcal{E}}$ is closable with respect to $\tau$. As above, let $\Phi^\tau$ denote the $\tau$-extension of $\Phi\res{\mathcal{E}}$ to the algebra $\mathcal{E}^\tau \subseteq \mathcal{E}'$. Starting from $\mathcal{E}^\tau$ we can extend $\Phi^\tau$ algebraically to \[ \calG^\tau := \ancgen{\mathcal{E}^\tau, \calG, \Phi^\tau}, \] and we denote this extension again by $\Phi^\tau$. The question arises whether $\Phi^\tau$ is an extension of $\Phi$. This problem has already been discussed in Section \ref{ext.s.succ} in a more general context, so that Theorem \ref{ext.t.succ-comp} and the subsequent remarks apply. In particular, we obtain the following: \begin{cor}\label{top.c.top-ext-comp} In the situation described above, the following assertions hold: \begin{aufzi} \item If $f\in \calF \cap \calG^\tau$ then $\Phi^\tau(f) \subseteq \Phi(f)$, so that $\Phi^\tau(f) = \Phi(f)$ if $\Phi^\tau(f)$ is bounded. In particular, $\Phi^\tau= \Phi$ on $\mathcal{E}^\tau \cap \calF$.
\item If $f\in \calF$ is such that $\mathrm{Z}(\mathcal{E}^\tau) \cap [f]_\mathcal{E}$ is an anchor set, then $f\in \calG^\tau$ and $\Phi^\tau(f) = \Phi(f)$. In particular, $\Phi^\tau$ extends $\Phi$ if $\mathcal{E} \subseteq \mathrm{Z}(\mathcal{E}^\tau)$. \end{aufzi} \end{cor} If $\mathcal{E}$ is commutative and $\tau_1$ is Hausdorff, then $\mathcal{E}^\tau$ is also commutative. Hence, in this case, 2) is applicable and it follows that $\Phi^\tau$ extends $\Phi$. \vanish{ The following lemma deals with compatibility of a topological extension with a given calculus. \begin{lem}\label{top.l.comp} Let $\Phi: \calF \to \mathcal{C}(X)$ be a proto-calculus and $\mathcal{E} \subseteq \bdd(\calF, \Phi)$ a subalgebra. Suppose that $\tau$ is a sequential convergence structure on $\calF$ with respect to which the restriction $\Phi\res{\mathcal{E}}$ is closable. Then \begin{equation}\label{top.eq.comp1} \Phi(f) \subseteq \Phi^\tau(f) \qquad \text{for each $f\in \mathcal{E}^\tau$ which is $\Phi$-anchored in $\mathcal{E}$.} \end{equation} One has $\Phi(f) = \Phi^\tau(f)$ if at least one of the following assertions holds. \begin{aufziii} \item $[f]_\mathcal{E}$ determines $\Phi(f)$. \item $[f]_\mathcal{E}$ is a $\Phi$-anchor set and $\Phi(f)$ is densely defined. \end{aufziii} \end{lem} \begin{proof} Let $f\in \mathcal{E}^\tau$ be $\Phi$-anchored in $\mathcal{E}$. For each $e\in [f]_\mathcal{E}$ we have $e\in \mathcal{E}$ and $ef\in \mathcal{E}$, and hence \begin{equation}\label{top.eq.comp2} \Phi(ef) = \Phi^\tau(ef)= \Phi^\tau(e) \Phi^\tau(f) = \Phi(e) \Phi^\tau(f). \end{equation} Consequently, if $x,y \in X$ are such that $\Phi(f)x = y$, then \[ \Phi(e)y = \Phi(ef)x = \Phi(e) \Phi^\tau(f)x. \] Since, by hypothesis, $[f]_\mathcal{E}$ is a $\Phi$-anchor set, we obtain $\Phi^\tau(f)x = y$. This yields \eqref{top.eq.comp1}. \smallskip\noindent 1) If $[f]_\mathcal{E}$ determines $\Phi(f)$, then from \eqref{top.eq.comp2} it follows that $\Phi^\tau(f) \subseteq \Phi(f)$. The converse inclusion has just been proved, as a determining set is also an anchor set. \smallskip\noindent 2) If $[f]_\mathcal{E}$ is an anchor set then, as proved above, $\Phi(f) \subseteq \Phi^\tau(f)$. But the latter operator is bounded, and the former is closed and densely defined. Hence they must agree. \end{proof} Lemma \ref{top.l.comp} shows in particular that if a calculus $\Phi: \calF \to \mathcal{C}(X)$ is obtained by an algebraic extension of a given ``primary'' calculus $\Phi\res{\mathcal{E}}: \mathcal{E} \to \mathcal{L}(X)$, then extending that primary calculus topologically within $\calF$ does not lead to something new. However, the topological extension might involve also objects outside of $\calF$ (within an even larger algebra $\calG$, say) and then one obtains a strict (but compatible) extension of the original calculus. Of course, one can extend $\Phi^\tau$ algebraically, and Theorem \ref{urc.t.com} then yields that this new algebraic extension is still compatible with the original calculus on $\calF$. \begin{rems} The idea of a topological extension in the abstract theory of functional calculus was introduced in \cite{Haa05b}, with a (however easy-to-spot) mistake in the formulation of closability \cite[(5.2)]{Haa05b}. There, with an immediate application in mind, the discussion was still informal. In our attempt to formalize it, we here introduce the notion of a ``convergence structure'' which, admittedly, is {\em ad hoc}. We have not browsed through the literature to find a suitable and already established notion. 
Maybe one would prefer a richer axiomatic tableau for such a notion and be tempted to add axioms, e.g., require that $\Lambda$ is directed and that families that coincide eventually display the same convergence behaviour. On the other hand, only axioms 1) and 2) are needed to prove Theorem \ref{top.t.top-ext}. In practice, one may often choose $\tau_2$ to be operator norm or strong convergence, but other choices are possible. (See Sections \ref{s.sectop} and \ref{sgr.s.topext} below for an interesting example. It was the latter example that led us to acknowledge that one needs flexibility of the convergence notion also on the operator side.) \end{rems} \part{Examples} In this second part of the article, we want to illustrate the abstract theory with some well-known examples. However, we focus on the supposedly less well-known aspects. \section{Sectorial Operators}\label{s.sec} A closed operator $A$ on a Banach space $X$ is called {\bf sectorial} if there is $\omega \in [0, \pi)$ such that $\sigma(A)$ is contained in the sector $\cls{\sector{\omega}}$ and the function $\lambda \mapsto \lambda R(\lambda,A)$ is uniformly bounded outside every larger sector. The minimal $\omega$ with this property is called the {\bf sectoriality angle} and is denoted by $\omega_\mathrm{se}(A)$. For $\omega > 0$ we let $\mathcal{E}(\sector{\omega})$ be the set of functions $f\in \mathrm{H}^\infty(\sector{\omega})$ such that \[ \int_{\partial{\sector{\delta}}} \abs{f(z)}\frac{\abs{\mathrm{d}{z}}}{\abs{z}} <\infty \qquad \text{for all $0 \le \delta < \omega$}. \] If $f \in \mathcal{E}(\sector{\omega})$ and $A$ is a sectorial operator on $X$ of angle $\omega_\mathrm{se}(A) < \omega$, then we define \begin{equation}\label{sec.eq.def} \Phi_A^\omega(f) := \frac{1}{2\pi \mathrm{i}} \int_{\partial{\sector{\delta}}} f(z) R(z,A) \mathrm{d}{z}, \end{equation} where $\omega_\mathrm{se}(A) < \delta < \omega$; by Cauchy's theorem the value of the integral does not depend on the particular choice of $\delta$. Note that the norm condition on the resolvent of $A$ and the integrability condition on $f$ just match in order to render this integral absolutely convergent. It is a classical fact that \[ \Phi_A^\omega: \mathcal{E}(\sector{\omega}) \to \mathcal{L}(X) \] is an algebra homomorphism and \[ \Phi_A^\omega\bigl( \frac{\mathbf{z}}{(1 + \mathbf{z})^2}\bigr) = A(1+A)^{-1}. \] Standard complex analysis arguments yield that for each $f\in \mathcal{E}(\sector{\omega})$ one has \[ \lim_{z\to 0} f(z) = \lim_{z\to \infty} f(z) = 0 \qquad \text{whenever $\abs{\arg z}\le \delta$} \] for each $0 < \delta < \omega$. As a consequence, \[ \ker(A) \subseteq \ker \Phi_A^\omega(f). \] It follows that $\Phi_A^\omega$ is degenerate if $A$ is not injective. Since we do not want to assume the injectivity of $A$, we could follow Remark \ref{ext.r.degenerate} and extend $\Phi^\omega_A$ to the unital algebra \[ \mathcal{E}_1(\sector{\omega}) := \mathcal{E}(\sector{\omega}) \oplus \mathbb{C} \mathbf{1} \] and make this the basis of the algebraic extension procedure from Section \ref{s.ext}. However, it is easily seen that the function $(1+\mathbf{z})^{-1}$ is not anchored in $\mathcal{E}_1(\sector{\omega})$. So, the resulting calculus would be ``too small'' in the sense that it would not cover some natural functions of $A$. In order to deal with this problem, one extends $\Phi_A^\omega$ further to the algebra \[ \mathcal{E}_{e}(\sector{\omega}) := \mathcal{E}(\sector{\omega}) \oplus \mathbb{C}\mathbf{1} \oplus \mathbb{C} \frac{1}{1 + \mathbf{z}} \] by defining \[ \Phi_A^\omega( (1 + \mathbf{z})^{-1}) := (1+A)^{-1}.
\] It follows from properties of $\Phi_A^\omega$ on $\mathcal{E}(\sector{\omega})$ that this extension is indeed an algebra homomorphism. At this point one may perform an algebraic extension as in Theorem \ref{ext.t.ext} within a ``surrounding'' algebra $\calF$. A natural choice for $\calF$ is the field $\Mer(\sector{\omega})$ of all meromorphic functions on the sector $\sector{\omega}$. The domain of the resulting calculus, which is again denoted by $\Phi_A^\omega$, is the algebra \[ \dom(\Phi_A^\omega) := \ancgen{\mathcal{E}_{e}(\sector{\omega}), \Mer( \sector{\omega}), \Phi_A^\omega}. \] Whereas the algebras $\mathcal{E}$ and $\mathcal{E}_e$ are described independently of $A$, the algebra $\dom(\Phi_A^\omega)$ is heavily dependent on $A$, and is, as a whole, quite arcane in general. \medskip Note that there is still a dependence of our calculus on the choice of $\omega > \omega_\mathrm{se}(A)$. This can be eliminated as follows: for $\omega_1 > \omega_2> \omega_{\mathrm{se}}(A)$ one has a natural embedding \[ \eta: \Mer(\sector{\omega_1}) \to \Mer(\sector{\omega_2}),\qquad \eta(f) := f\res{\sector{\omega_2}}. \] By the identity theorem of complex analysis, $\eta$ is injective. Of course we expect compatibility, i.e., \[ \Phi_A^{\omega_2}(f\res{\sector{\omega_2}}) = \Phi_A^{\omega_1}(f)\qquad \text{for each $f\in \dom(\Phi_A^{\omega_1})$}. \] Since all involved algebras are commutative, by Theorem \ref{urc.t.com} this has to be verified only for functions $f\in \mathcal{E}_e(\sector{\omega_1})$, and hence effectively only for functions $f\in \mathcal{E}(\sector{\omega_1})$. For such functions it is a consequence of a path-deformation argument. \medskip By letting $\omega$ approach $\omega_\mathrm{se}(A)$ from above, we obtain a ``tower'' of larger and larger algebras and their ``union'' \[ \mathcal{E}[\sector{\omega_\mathrm{se}(A)}] := \bigcup_{\omega > \omega_\mathrm{se}(A)} \mathcal{E}(\sector{\omega}), \] and likewise for $\mathcal{E}_e[\sector{\omega_\mathrm{se}(A)}]$ and $\Mer[\sector{\omega_\mathrm{se}(A)}]$. A precise definition of this union would require the notion of a meromorphic function germ on $\cls{\sector{\omega_\mathrm{se}(A)}}\setminus\{0\}$. The resulting calculus is called the {\bf sectorial calculus} for $A$ and is denoted by $\Phi_A$ here. It can be seen as an ``inductive limit'' of the calculi $\Phi_A^\omega$. \section{Topological Extensions of the Sectorial Calculus}\label{s.sectop} Of course the question arises whether the sectorial calculus $\Phi_A$ covers all ``natural'' choices for functions $f$ of $A$. The answer is ``no'', at least when the operator $A$ is not injective. To understand this, we look at functions of the form \begin{aufziii} \item $f(z) = \int_{\mathbb{R}_+} \frac{z}{t+ z} \, \mu(\mathrm{d}{t})$ \quad and \item $g(z) = \int_{\mathbb{R}_+} (tz)^n \mathrm{e}^{-tz}\, \mu(\mathrm{d}{t})$, \end{aufziii} where $\mu \in \mathrm{M}(\mathbb{R}_+)$ is a complex Borel measure on $\mathbb{R}_+ = [0,\infty)$. It is easy to see that 1) defines a holomorphic function $f$ on $\sector{\pi}$, bounded on each smaller sector, and that 2) defines a holomorphic function $g$ on $\sector{\pi/2}$, bounded on each smaller sector.
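For orientation, consider the elementary case $\mu = \delta_s$ for some $s > 0$: then 1) and 2) reduce to \[ f(z) = \frac{z}{s+z} \qquad\text{and}\qquad g(z) = (sz)^n\,\mathrm{e}^{-sz}, \] so general $f$ and $g$ are $\mu$-averages of such elementary functions.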
Of course, in any reasonable functional calculus one would expect \begin{equation}\label{sectop.eq.Hirsch} f(A) = \int_{\mathbb{R}_+} A(t+A)^{-1} \, \mu(\mathrm{d}{t}) \end{equation} for each sectorial operator $A$ and \begin{equation}\label{sectop.eq.sgrp} g(A) = \int_{\mathbb{R}_+} (tA)^n\mathrm{e}^{-tA} \, \mu(\mathrm{d}{t}) \end{equation} for each sectorial operator $A$ with $\omega_{\mathrm{se}}(A) < \frac{\pi}{2}$. If $A$ is injective, this is true for the sectorial calculus. However, if $A$ is not injective, then $\mu$ can be chosen so that $[f]_{\mathcal{E}_e}$ is not an anchor set and, consequently, $f$ is not contained in the domain of $\Phi_A$. A prominent example for this situation is the function \[ f(z) = \frac{1}{\lambda - \log z} = \int_0^\infty \frac{-1}{(\lambda - \log t)^2 + \pi^2} (t+ z)^{-1} \, \mathrm{d}{t}, \] which plays a central role in Nollau's result on operator logarithms \cite[Chapter 4]{HaaseFC}. Similar remarks apply in case 2). The mentioned ``defect'' of the sectorial calculus $\Phi_A$ can be mended by passing to a suitable topological extension as described in Section \ref{s.top}. Actually, this has already been observed in \cite{Haa05b}, where uniform convergence was used as the underlying convergence structure. \medskip \noindent Here, we intend to generalize the result from \cite{Haa05b} by employing a weaker convergence structure on a larger algebra. Define \[ \mathrm{H}^\infty(\sector{\omega}\cup\{0\}) := \{ f\in \mathrm{H}^\infty(\sector{\omega}) \,\,|\,\, f(0) := \lim_{z\searrow 0} f(z) \,\,\text{exists}\}. \] We say that a sequence $(f_n)_n$ in $\mathrm{H}^\infty(\sector{\omega}\cup\{0\})$ converges {\bf pointwise and boundedly} (in short: bp-converges) on $\sector{\omega}\cup\{0\}$ to $f\in \mathrm{H}^\infty(\sector{\omega}\cup\{0\})$ if $f_n(z) \to f(z)$ for each $z\in \sector{\omega}\cup\{0\}$ and $\sup_n \norm{f_n}_{\infty, \sector{\omega}} < \infty$. It is obvious that bp-convergence is a Hausdorff sequential convergence structure as introduced in Section \ref{s.top}. We take bp-convergence as the first component of the joint convergence structure $\tau$ we need for a topological extension. The second component is described as follows. For a set $\calB \subseteq \mathcal{L}(X)$ let \[ \calB':= \{ S\in \mathcal{L}(X) \,\,|\,\, \forall\, B \in \calB: SB = BS \} \] be its {\bf commutant} within $\mathcal{L}(X)$. Let \[ \calA_A := \{ (1+A)^{-1}\}' = \{ R(\lambda, A) \,\,|\,\, \lambda \in \varrho(A)\}'. \] For $((T_n)_n, T) \in \calA_A^\mathbb{N} \times \calA_A$ we write \[ T_n \stackrel{\tau_A^s}{\to} T \] if there is a point-separating set $\calD \subseteq \calA_A'$ such that \[ DT_n \to DT \quad \text{strongly, for each $D\in \calD$}. \] (Recall that $\calD$ is point-separating if $\bigcap_{D\in \calD} \ker(D) = \{0\}$.) Note that $\calA_A'$ is a commutative unital subalgebra of $\mathcal{L}(X)$ closed with respect to strong convergence and containing all resolvents of $A$. \begin{lem} The relation $\tau_A^s$ on $\calA_A^\mathbb{N} \times \calA_A$ is an algebraic Hausdorff convergence structure. \end{lem} \begin{proof} This follows easily from the fact that $\calA_A'$ is commutative. \end{proof} \begin{rem} We note that, in particular, one has \[ (1 +A)^{-m} T_n \to (1+A)^{-m} T \,\, \text{strongly} \quad \Rightarrow\quad T_n \stackrel{\tau_A^s}{\to} T \] for any $m \in \mathbb{N}_0$, since the singleton $\calD = \{(1+A)^{-m}\}$ is point-separating (the operator $(1+A)^{-m}$ is injective). This means that $\tau_A^s$-convergence is weaker than strong convergence in any extrapolation norm associated with $A$.
\end{rem} We shall show that $\Phi_A$ on $\mathcal{E}_e(\sector{\omega})$ is closable with respect to the joint convergence structure \begin{equation} \tau = \text{( bp-convergence on $\mathrm{H}^\infty(\sector{\omega}\cup\{0\})$ , $\tau_A^s$ on $\calA_A$ )}. \end{equation} We need the following auxiliary information. \begin{lem}\label{sectop.l.aux} Let $A$ be sectorial, let $\omega_\mathrm{se}(A) < \omega < \pi$ and let $e\in \mathcal{E}(\sector{\omega})$. Then \[ \ran(\Phi_A(e)) \subseteq \cls{\ran}(A). \] \end{lem} \begin{proof} Let $\varphi_n := \frac{n\mathbf{z}}{1 + n\mathbf{z}}$. Then $\varphi_n \to \mathbf{1}$ pointwise and boundedly on $\sector{\omega}$. By Lebesgue's theorem, $\Phi_A(\varphi_n e)\to \Phi_A(e)$ in norm. But \[ \Phi_A(\varphi_n e) = nA (1 + nA)^{-1} \Phi_A(e). \] The claim follows. \end{proof} Now we can head for the main result. \begin{thm}\label{sectop.t.main} Let $A$ be a sectorial operator on a Banach space $X$, let $\omega\in (\omega_\mathrm{se}(A),\pi)$ and let $(f_n)_n$ be a sequence in $\mathrm{H}^\infty(\sector{\omega}\cup\{0\})$ such that $f_n \to 0$ pointwise and boundedly on $\sector{\omega} \cup \{0\}$. Suppose that $\Phi_A(f_n)$ is defined and bounded for each $n\in\mathbb{N}$, and that $\Phi_A(f_n) \stackrel{\tau_A^s}{\to} T \in \calA_A$. Then $T= 0$. \end{thm} \begin{proof} For simplicity we write $\Phi$ in place of $\Phi_A$, and $\mathcal{E}$ and $\mathcal{E}_e$ in place of $\mathcal{E}(\sector{\omega})$ and $\mathcal{E}_e(\sector{\omega})$, respectively. By passing to $f_n - f_n(0)\mathbf{1}$ we may suppose that $f_n(0)= 0$ for each $n\in \mathbb{N}$. By hypothesis, there is a point-separating set $\calD \subseteq \calA_A'$ such that \[ D\Phi(f_n) \to DT \quad \text{strongly, for all $D\in \calD$}. \] We fix $D\in \calD$ for the time being. Now, take $e := \mathbf{z} (1+\mathbf{z})^{-2}$ and observe that $ef_n \in \mathcal{E}$ and $\Phi(ef_n) \to 0$ in operator norm by Lebesgue's theorem and the very definition of $\Phi$ in \eqref{sec.eq.def}. On the other hand, \[ D\Phi(ef_n) = D\Phi(e) \Phi(f_n) = \Phi(e)D \Phi(f_n) \to \Phi(e)DT \] strongly. This yields $A (1 + A)^{-2}DT = \Phi(e)DT = 0$, and hence \begin{equation}\label{sectop.eq.aux1} \ran( (1+A)^{-2}DT) \subseteq \ker(A). \end{equation} If $\ker(A) = \{0\}$ then $DT=0$ and hence $T=0$ since $D$ was arbitrary from $\calD$. So suppose that $A$ is not injective and define $e_0 := (1 + \mathbf{z})^{-2}$. We claim that $e_0f_n \in \mathcal{E}$. To prove this, note that, by hypothesis, $f_n$ is anchored in $\mathcal{E}_e$. Since $A$ is not injective, $[f_n]_{\mathcal{E}_e}$ must contain at least one function $e_1$ with $e_1(0)\neq 0$. Write \[ e_1= e_2 + c\mathbf{1} + \frac{d}{1 + \mathbf{z}} = e_2+ \frac{c+d}{1+ \mathbf{z}} + c\frac{\mathbf{z}}{1 + \mathbf{z}} \quad \text{for certain $c,\:d \in \mathbb{C}$ and $e_2\in \mathcal{E}$}. \] Now multiply by $f_n$ and $(1 + \mathbf{z})^{-1}$ to obtain \[ (c+d)e_0f_n = \frac{e_1f_n}{1+ \mathbf{z}} - e_2\frac{f_n}{1+ \mathbf{z}} - c \frac{\mathbf{z}}{(1+\mathbf{z})^2}f_n \in \mathcal{E}. \] (Note that $e_1 f_n \in \mathcal{E}_e$.) But $c+d = e_1(0) \neq 0$, and hence $e_0f_n \in \mathcal{E}$ as claimed. Finally, apply Lemma \ref{sectop.l.aux} above with $e = e_0 f_n$ to see that \[ \ran( \Phi(e_0f_n)) \subseteq \cls{\ran}(A).
\] As \begin{align*} \Phi(e_0f_n)D & = D \Phi(e_0 f_n) = D \Phi(e_0)\Phi(f_n)= \Phi(e_0)D\Phi(f_n) \\ & = (1 +A)^{-2}D\Phi(f_n) \to (1 + A)^{-2} DT \end{align*} strongly, it follows that \begin{equation}\label{sectop.eq.aux2} \ran( (1 + A)^{-2} DT) \subseteq \cls{\ran}(A). \end{equation} Taking \eqref{sectop.eq.aux1} and \eqref{sectop.eq.aux2} together yields \[ \ran( (1 + A)^{-2} DT) \subseteq \ker(A) \cap \cls{\ran}(A) = \{0\}, \] since $A$ is sectorial \cite[Prop.{ }2.1]{HaaseFC}. This means that $(1+A)^{-2}DT=0$, hence $DT=0$, and since $D\in \calD$ was arbitrary, it follows that $T=0$ as desired. \end{proof} As an immediate consequence we obtain the result already announced above. \begin{cor}\label{sectop.c.main} The sectorial calculus $\Phi_A$ on $\mathrm{H}^\infty(\sector{\omega}\cup\{0\}) \cap \bdd(\Phi_A,\dom(\Phi_A))$ is closable with respect to the joint convergence structure \[ \text{\rm ( bp-convergence on $\sector{\omega} \cup \{0\}$ , $\tau_A^s$-convergence within $\calA_A$ )}. \] \end{cor} Based on Corollary \ref{sectop.c.main} we apply Theorem \ref{top.t.top-ext} and obtain the {\bf bp-extension} $\Phi_A^\mathrm{bp}$ of the sectorial calculus $\Phi_A$ for $A$. Since the relevant function algebras are commutative, there is no compatibility issue, cf. Corollary \ref{top.c.top-ext-comp}. As bp-convergence is weaker than uniform convergence, Corollary \ref{sectop.c.main} implies in particular the result from \cite[Section 5]{Haa05c} that the sectorial calculus is closed with respect to \[ \text{( uniform convergence on $\sector{\omega}\cup\{0\}$ , operator norm convergence ),} \] which obviously is also a joint algebraic sequential convergence structure. The respective topological extension (and also its canonical algebraic one) shall be called the {\bf uniform extension} of the sectorial calculus and denoted by $\Phi_A^\mathrm{uni}$. Obviously, we have $\dom(\Phi_A^\mathrm{uni}) \subseteq \dom(\Phi^\mathrm{bp}_A)$ and $\Phi^\mathrm{uni}_A = \Phi^\mathrm{bp}_A$ on $\dom(\Phi_A^\mathrm{uni})$. \begin{rem} The bp-extension is ``large'' in a sense, since bp-convergence and $\tau_A^s$-convergence are relatively weak requirements. (Actually, they are the weakest we can think of at the moment.) On the other hand, the uniform extension is quite ``small''. Whereas the bp-extension is interesting in order to understand what a ``maximal'' calculus could be for a given operator, the uniform extension is interesting in order to understand the ``minimal'' extension necessary to cover a given function. \end{rem} \section{Stieltjes Calculus and Hirsch Calculus}\label{s.hir} The defect of the sectorial calculus mentioned in the previous section has, as a matter of fact, been observed by several other people working in the field. Of course, this has not prevented people from working with operators of the form \eqref{sectop.eq.Hirsch} or \eqref{sectop.eq.sgrp}. The former has actually already been used by Hirsch in \cite{Hirsch1972}. It was extended algebraically by Martinez and Sanz in \cite{MartinezSanz1998,MartinezSanzTFPO} under the name of ``Hirsch functional calculus''. Dungey in \cite{Dungey2009b} considers the operator $\psi(A)$ for \[ \psi(z) = \int_0^1 z^\alpha \, \mathrm{d}{\alpha} = \frac{z-1}{\log z} \] and remarks that $\psi(A)$ is defined within the Hirsch calculus but not within the sectorial calculus.
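For the reader's convenience we note that the closed form for $\psi$ is an elementary computation: for $z \in \sector{\pi}$ with $z \neq 1$, \[ \int_0^1 z^\alpha\, \mathrm{d}{\alpha} = \int_0^1 \mathrm{e}^{\alpha \log z}\, \mathrm{d}{\alpha} = \frac{\mathrm{e}^{\log z}-1}{\log z} = \frac{z-1}{\log z}, \] with the value $1$ at $z=1$ by continuity.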
Batty, Gomilko and Tomilov in \cite{BattyGomilkoTomilov2015} embed the Hirsch calculus (which is not a calculus in our sense because the domain set is not an algebra) into a larger calculus (in our sense) which is an algebraic extension of a calculus for the so-called bounded Stieltjes algebra. \medskip \subsection{The Stieltjes Calculus}\label{hir.s.sti} In \cite{BattyGomilkoTomilov2015}, Batty, Gomilko and Tomilov define what they call the {\em extended Stieltjes calculus} for a sectorial operator. In this section, we introduce this calculus and show that it is contained in the uniform extension of the sectorial calculus. According to \cite[Section 4]{BattyGomilkoTomilov2015}, the {\bf bounded Stieltjes algebra} $\widetilde{\calS}_b$ consists of all functions $f$ that have a representation \begin{equation}\label{sectop.eq.stieltjes} f(z) = \int_{\mathbb{R}_+} \frac{\mu(\mathrm{d}{s})}{(1 + sz)^m} \end{equation} for some $m\in \mathbb{N}_0$ and some $\mu \in \mathrm{M}(\mathbb{R}_+)$. Each such $f$ is obviously holomorphic on $\sector{\pi}$ and bounded on each smaller sector. It is less obvious, however, that $\widetilde{S}_b$ is a unital algebra, a fact which is proved in \cite[Section 4.1]{BattyGomilkoTomilov2015}. \begin{prop}\label{sectop.p.stieltjes} Suppose $f$ is a bounded Stieltjes function with representation \eqref{sectop.eq.stieltjes} and $A$ is a sectorial operator on a Banach space $X$. Then \begin{equation} \label{sectop.aux.stieltjes2} \Phi^\mathrm{uni}_A(f) = \int_{\mathbb{R}_+} (1 + sA)^{-m} \mu(\mathrm{d}{s}), \end{equation} where $\Phi^\mathrm{uni}$ is the uniform extension of the sectorial calculus for $A$. \end{prop} \begin{proof} By subtracting $\mu\{0\}$ we may suppose that $\mu$ is supported on $(0, \infty)$. Also, we may suppose that $m \ge 1$. Define \[ f_n(z) := \int_{[\frac{1}{n}, n]} \frac{\mu(\mathrm{d}{s})}{(1+ sz)^m} \quad \text{and} \quad a_n := \int_{[\frac{1}{n}, n]} \mathbf{1} \mathrm{d}{\mu}. \] Then \[ g_n(z) := f_n(z) - \frac{a_n}{1+z} = \int_{[\frac{1}{n}, n]} \frac{1}{(1+ sz)^m} - \frac{1}{1 + z} \, \mu(\mathrm{d}{s}). \] The function under the integral is contained in $\mathcal{E}$. A moment's reflection reveals that $g_n \in \mathcal{E}$ also, and that one can apply Fubini's theorem to compute $\Phi_A(g_n)$. This yields \begin{align*} \Phi_A(g_n) & = \int_{[\frac{1}{n}, n]} \Phi_A\Bigl(\frac{1}{(1+ sz)^m} - \frac{1}{1 + z}\Bigr) \, \mu(\mathrm{d}{s}) \\ & = \int_{[\frac{1}{n}, n]} (1+ sA)^{-m} - (1 + A)^{-1} \, \mu(\mathrm{d}{s}) \\ & = \int_{[\frac{1}{n}, n]} (1+ sA)^{-m} \, \mu(\mathrm{d}{s}) - a_n (1 + A)^{-1}, \end{align*} and hence $f_n \in \mathcal{E}_e$ with \[ \Phi_A(f_n)= \int_{[\frac{1}{n}, n]} (1+ sA)^{-m} \, \mu(\mathrm{d}{s}) \to \int_{\mathbb{R}_+} (1+ sA)^{-m} \, \mu(\mathrm{d}{s}) \] in operator norm. Since $f_n \to f$ uniformly on $\sector{\omega}\cup\{0\}$ for each $\omega < \pi$, the proof is complete. \end{proof} Instead of operating with a topological extension, the authors of \cite{BattyGomilkoTomilov2015} use \eqref{sectop.aux.stieltjes2} as a definition. They then have to show independence of the representation \cite[Prop.{ }4.3]{BattyGomilkoTomilov2015}, compatibility with the holomorphic calculus \cite[Lemma 4.4]{BattyGomilkoTomilov2015}, and the algebra homomorphism property \cite[Propositions 4.5 and 4.6]{BattyGomilkoTomilov2015}. 
In our approach, all these facts follow from Proposition \ref{sectop.p.stieltjes} and general theory.\footnote{The reader might object that the ``general theory'' presented in this article is quite involved. We agree, but stress the fact that only a commutative version of this theory, which is relatively simple, is needed here.} \medskip \subsection{The Hirsch Calculus}\label{hir.s.hir} Developing further the approach of Hirsch \cite{Hirsch1972}, Martinez and Sanz in \cite{MartinezSanz1998} and \cite{MartinezSanzTFPO} define the class $\calT$ of all functions $f$ that have a representation \[ f(z) = a + \int_{\mathbb{R}_+} \frac{z}{1 + zt} \, \nu(\mathrm{d}{t}), \] where $a\in \mathbb{C}$ and $\nu$ is a Radon measure on $\mathbb{R}_+$ satisfying \[ \int_{\mathbb{R}_+} \frac{ \abs{\nu}(\mathrm{d}{t})}{1+t} < \infty. \] One can easily see that $\calT$ is contained in the algebraically extended Stieltjes algebra. Just write \[ f(z) = a + z \int_{[0,1]} \frac{1}{1 + tz} \, \nu(\mathrm{d}{t}) + \int_{(1, \infty)} \frac{z}{1 + zt} \, \nu(\mathrm{d}{t}) := a + z g(z) + h(z) \] and note that $g$ and $h$ are bounded Stieltjes functions. This is obvious for $g$; for $h$ it follows from the identity \begin{equation}\label{sectop.eq.Hirsch-aux} h(z) = \int_{(1, \infty)} \frac{z}{1 + zt} \, \nu(\mathrm{d}{t}) = \int_{(1, \infty)} \frac{\nu(\mathrm{d}{t})}{t} - \int_{(1, \infty)} \frac{1}{1 + tz}\, \frac{\nu(\mathrm{d}{t})}{t}. \end{equation} We obtain \begin{align*} \Phi_A^\mathrm{uni}(f) & = a + A \Phi_A^\mathrm{uni}(g) + \Phi_A^\mathrm{uni}(h) \\ & = a + A \int_{[0,1]} (1 + tA)^{-1} \, \nu(\mathrm{d}{t}) + \int_{(1, \infty)} A (1+tA)^{-1} \, \nu(\mathrm{d}{t}) \end{align*} by a short computation using \eqref{sectop.eq.Hirsch-aux}. This coincides with how $f(A)$ is defined in \cite[Def.{ }4.2.1]{MartinezSanzTFPO}. Hence, the Hirsch calculus (which is not a functional calculus in our terms since $\calT$ is not an algebra) is contained in the uniform extension of the sectorial calculus. \medskip \subsection{Integrals involving Holomorphic Semigroups}\label{hir.s.hol} If $A$ is sectorial of angle $\omega_{\mathrm{se}}(A) < \frac{\pi}{2}$, then $-A$ generates a holomorphic semigroup \[ T_A(\lambda ) := \mathrm{e}^{-\lambda A} := (\mathrm{e}^{-\lambda \mathbf{z}})(A) \qquad (\lambda \in \sector{\frac{\pi}{2} - \omega_\mathrm{se}(A)}), \] see \cite[Section 3.4]{HaaseFC}. If one has $\alpha = 0$ or $\re \alpha > 0$, and one restricts $\lambda$ to a smaller sector, the function $\lambda \mapsto (\lambda A)^\alpha T_A(\lambda)$ becomes uniformly bounded. Hence, one can integrate it with respect to a bounded measure. The following result shows that also these operators are covered by the uniform extension of the sectorial calculus. \begin{prop}\label{hir.p.hol} Let $0 \le \varphi < \frac{\pi}{2}$, let $\mu$ be a complex Borel measure on $\cls{\sector{\varphi}}$ and let $\alpha = 0$ or $\re \alpha > 0$. Then the function \[ f(z) := \int_{\cls{\sector{\varphi}}} (\lambda z)^\alpha \mathrm{e}^{-\lambda z}\, \mu(\mathrm{d}{\lambda}) \] is holomorphic on $\sector{\frac{\pi}{2} - \varphi}$ and uniformly bounded on each smaller sector. If $A$ is any sectorial operator on a Banach space with $\omega_{\mathrm{se}}(A) + \varphi < \frac{\pi}{2}$, then \[ \Phi_A^\mathrm{uni}(f) = \int_{\cls{\sector{\varphi}}} (\lambda A)^\alpha \mathrm{e}^{-\lambda A}\, \mu(\mathrm{d}{\lambda}), \] where $\Phi^\mathrm{uni}$ is the uniform extension of the sectorial calculus for $A$.
\end{prop} We point out that Proposition \ref{hir.p.hol} applies in particular to the case that $\varphi = 0$ and $\cls{\sector{\varphi}} = \mathbb{R}_+$ is just the positive real axis. \begin{proof} The proof follows the line of the proof of Proposition \ref{sectop.p.stieltjes} and we only sketch it. First one subtracts a constant to reduce to the case that $\mu$ has no mass at $\{0\}$. Then one uses the approximation \[ \int_{\lambda \in \cls{\sector{\varphi}}, \frac{1}{n}\le \abs{\lambda} \le n} \dots\, \mu(\mathrm{d}{\lambda}) \quad\to\quad \int_{\cls{\sector{\varphi}}} \dots\, \mu(\mathrm{d}{\lambda}) \qquad (n \to \infty) \] first for scalars and then for operators. This reduces the claim to establishing the identity \[ \Phi_A\Bigl(\int_{\lambda \in \cls{\sector{\varphi}}, \frac{1}{n}\le \abs{\lambda} \le n} (\lambda \mathbf{z})^\alpha \mathrm{e}^{-\lambda \mathbf{z}}\mu(\mathrm{d}{\lambda})\Bigr) = \int_{\lambda \in \cls{\sector{\varphi}}, \frac{1}{n}\le \abs{\lambda} \le n} (\lambda A)^\alpha \mathrm{e}^{-\lambda A}\, \mu(\mathrm{d}{\lambda}). \] If $\re \alpha > 0$ then this is a simple application of Fubini's theorem. If $\alpha = 0$ then one has to write \[ \mathrm{e}^{-\lambda \mathbf{z}} = \frac{1}{1 + \mathbf{z}} + \Bigl( \mathrm{e}^{-\lambda \mathbf{z}} - \frac{1}{1+\mathbf{z}}\Bigr) \] and use Fubini for the second summand. \end{proof} \section{Semigroup and Group Generators}\label{s.sgr} We define a {\bf bounded semigroup} to be a uniformly bounded mapping $T: \mathbb{R}_+ \to \mathcal{L}(X)$ which is strongly continuous on $(0, \infty)$ and satisfies the semigroup laws \[ T(0)= \mathrm{I}, \qquad T(s+t) = T(s) T(t) \qquad (t,s > 0). \] (This has been called a {\em degenerate semigroup} in \cite{HaaseFC}.) For $\mu \in \mathrm{M}(\mathbb{R}_+)$ one can define \[ \Psi_T(\mu) := \int_{\mathbb{R}_+} T(s) \, \mu(\mathrm{d}{s}) \in \mathcal{L}(X) \] as a strong integral. The mapping \[ \Psi_T : \mathrm{M}(\mathbb{R}_+) \to \mathcal{L}(X) \] is an algebra homomorphism with respect to the convolution product. There is a unique linear relation $B$ on $X$, called the {\bf generator} of $T$, such that \[ (\lambda - B)^{-1} = \int_0^\infty \mathrm{e}^{-\lambda t}T(t)\, \mathrm{d}{t} \] for one/all $\lambda \in \mathbb{C}$ with $\re \lambda > 0$ \cite[Appendix A.8]{HaaseFC}. Because of $\Psi_T(\delta_0) = \mathrm{I}$, the representation $\Psi_T$ is not degenerate. However, its restriction to $\mathrm{M}(0, \infty)$ might be. In fact, this is the case if and only if the common kernel $\bigcap_{t> 0} \ker(T(t))$ is not trivial, if and only if $B$ is not an operator. From now on, we confine ourselves to the non-degenerate case, i.e., we suppose that $B$ is an operator. Instead of $B$, we shall be looking at \[ A := -B \] in the following. \medskip Note that the Laplace transform \[ \mathcal{L}: \mathrm{M}(\mathbb{R}_+) \to \mathrm{C}_{\mathrm{b}}(\cls{\mathbb{C}_+}), \qquad \mathcal{L} \mu(z) := \int_{\mathbb{R}_+} \mathrm{e}^{-zs}\,\mu(\mathrm{d}{s}) \qquad (\re z \ge 0), \] is injective. Here, $\mathbb{C}_+ := \{ z\in \mathbb{C} \,\,|\,\, \re z > 0\} = \sector{\frac{\pi}{2}}$. Its image is the {\bf Hille--Phillips algebra} \[ \calL\mathcal{M}(\mathbb{C}_+) := \{ \mathcal{L} \mu \,\,|\,\, \mu \in \mathrm{M}(\mathbb{R}_+)\}, \] a unital algebra under pointwise multiplication. The mapping \[ \Phi_T : \calL\mathcal{M}(\mathbb{C}_+) \to \mathcal{L}(X), \qquad \Phi_T(f) = \Psi_T(\mathcal{L}^{-1} f) \] is called the {\bf Hille--Phillips calculus} (HP-calculus, for short) for $A$.
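As a simple illustration (an elementary consequence of the definitions above): since $\mathcal{L}\delta_s = \mathrm{e}^{-s\mathbf{z}}$ for every $s \ge 0$, the exponentials belong to the Hille--Phillips algebra and \[ \Phi_T(\mathrm{e}^{-s\mathbf{z}}) = \Psi_T(\delta_s) = T(s) \qquad (s \ge 0), \] so the semigroup operators themselves are values of the HP-calculus.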
One has \[ \Phi_T((\lambda + \mathbf{z})^{-1}) = (\lambda +A)^{-1} \] for all $\lambda \in \mathbb{C}_+$. Finally, we extend $\Phi_T$ algebraically within the field $\Mer(\mathbb{C}_+)$ of meromorphic functions on $\mathbb{C}_+$ and call this the {\bf extended Hille--Phillips calculus}. \medskip \subsection{The Complex Inversion Formula}\label{sgr.s.coi} The semigroup can be reconstructed from its generator by the so-called complex inversion formula. This is a standard fact from semigroup theory in the case that $T(t)$ is strongly continuous at $t=0$, i.e., if $A$ is densely defined. However, we do not want to make this assumption here, so we need to digress a little on that topic. \begin{prop}[Complex inversion formula]\label{sgr.p.coi} Let $-A$ be the generator of a bounded semigroup $T= (T(t))_{t> 0}$. Then the mapping \[ \mathbb{R}_+ \to \mathcal{L}(X)\qquad t \mapsto T(t)(1 +A)^{-1} \] is Lipschitz-continuous in operator norm. Moreover, for each $\omega < 0$, \[ T(t)(1+A)^{-2} = \frac{1}{2\pi \mathrm{i}} \int_{\omega + \mathrm{i} \mathbb{R}} \frac{\mathrm{e}^{-tz}}{(1+z)^2} \, R(z, A)\, \mathrm{d}{z} \qquad (t\ge 0) \] where the integration contour is directed top down from $\omega +\mathrm{i}\infty$ to $\omega - \mathrm{i} \infty$. \end{prop} \begin{proof} The HP-calculus turns the scalar identity \[ \mathrm{e}^{-t\mathbf{z}} - \mathrm{e}^{-s\mathbf{z}} = -\mathbf{z} \int_s^t \mathrm{e}^{-r\mathbf{z}}\, \mathrm{d}{r} \qquad (s,t\in \mathbb{R}_+) \] into the operator identity \[ T(t) - T(s) = -A \int_s^t T(r)\, \mathrm{d}{r}. \] Multiplying with $(1+A)^{-1}$ from the right and estimating yields \[ \norm{T(t)(1+A)^{-1} - T(s)(1+A)^{-1}} \le M \norm{A(1 +A)^{-1}} \abs{s-t} \qquad (s,t\in \mathbb{R}_+). \] For the second claim we note first the estimate \begin{equation}\label{sgr.eq.sthp} \norm{(\lambda + A)^{-1}} \le \frac{M}{\re \lambda}\qquad (\re \lambda > 0), \end{equation} where $M := \sup_{t > 0} \norm{T(t)}$. Consequently, $A$ is an operator of strong right half-plane type $0$, and hence admits a functional calculus $\Psi$, say, on half planes as in \cite{BatHaaMub2013}. Writing $\mathrm{e}^{-tA} := \Psi(\mathrm{e}^{-t\mathbf{z}})$ one obtains \[ S(t) := \frac{1}{2\pi \mathrm{i}} \int_{\omega + \mathrm{i} \mathbb{R}} \frac{\mathrm{e}^{-tz}}{(1+z)^2} \, R(z, A)\, \mathrm{d}{z} = \Psi( \frac{\mathrm{e}^{-t\mathbf{z}}}{(1+\mathbf{z})^2} ) = \mathrm{e}^{-tA}(1+A)^{-2} \] for $t\ge 0$ by definition of $\Psi$ and usual functional calculus rules. Taking Laplace transforms, by \cite[Lemma 2.4]{BatHaaMub2013} we obtain \begin{equation}\label{sgr.eq.coi-aux} \int_0^\infty \mathrm{e}^{-\lambda t} S(t)\, \mathrm{d}{t} = \int_0^\infty \mathrm{e}^{-\lambda t} \mathrm{e}^{-tA}(1+A)^{-2}\, \mathrm{d}{t} = (\lambda + A)^{-1}(1+A)^{-2} \end{equation} whenever $\re \lambda > -\omega$. Since $\omega$ can be chosen arbitrarily close to $0$, the identity \eqref{sgr.eq.coi-aux} actually holds for all $\re \lambda > 0$. Since the Laplace transform is injective, it follows that \[ S(t) = T(t)(1+A)^{-2} \qquad (t\ge 0) \] as claimed. \end{proof} As a consequence we obtain that the commutant of the semigroup and the commutant of its generator coincide. \begin{cor}\label{sgr.c.commutator} For a bounded operator $S\in \mathcal{L}(X)$ the following assertions are equivalent: \begin{aufzii} \item $S$ commutes with $(1+A)^{-1}$; \item $S$ commutes with each $T(t)$, $t> 0$. \end{aufzii} \end{cor} \begin{proof} The implication (ii)$\Rightarrow$(i) is trivial. Suppose that (i) holds.
Then $S$ commutes with $R(\lambda,A)$ for each $\lambda \in \varrho(A)$ \cite[Prop.{ }A.2.6]{HaaseFC}. By the complex inversion formula, $S$ commutes with $T(t) (1+A)^{-2} = (1+A)^{-2}T(t)$. It follows that \[ (1+A)^{-2} ST(t) = S (1+A)^{-2}T(t) = (1+A)^{-2} T(t) S. \] Since $(1+A)^{-2}$ is injective, $T(t)S = ST(t)$. \end{proof} \medskip \subsection{A Topological Extension of the HP-Calculus}\label{sgr.s.topext} We let, as before, $-A$ be the generator of a bounded semigroup as above. As in Section \ref{s.sectop} we consider the algebra \[ \calA_A = \{ (1+A)^{-1}\}' \] which by Corollary \ref{sgr.c.commutator} coincides with the commutant of the semigroup. Similarly to Section \ref{s.sectop}, for $((T_n)_n, T) \in \calA_A^\mathbb{N}\times \calA_A$ we write \[ T_n \stackrel{\tau_A^n}{\to} T \] if there is a point-separating subset $\calD \subseteq \calA_A'$ such that \[ DT_n \to DT \quad \text{in operator norm, for each $D\in \calD$} \] Note the difference to the structure $\tau_A^s$ considered in Section \ref{s.sectop}, where we allowed strong convergence. It is easily checked that $\tau_A^n$ is an algebraic Hausdorff convergence structure. \begin{thm}\label{sgr.t.topext} Let $-A$ be the generator of a bounded semigroup $T$ on a Banach space $X$. Then the Hille--Phillips calculus $\Phi_T$ is closable with respect to the joint convergence structure \[ \text{\rm (pointwise convergence on $\cls{\mathbb{C}_+}$ , $\tau_A^n$-convergence ) } \quad \text{on}\quad (\mathrm{H}^\infty(\cls{\mathbb{C}_+}) , \calA_A). \] \end{thm} \begin{proof} Suppose that $f_n = \mathcal{L}\mu_n \in \calL\mathcal{M}(\mathbb{C}_+)$ is pointwise convergent on $\cls{\mathbb{C}_+}$ to $0$ and that $\calD \subseteq \calA_A'$ is a point-separating subset of $\calA_A$ such that \[ D\Phi_T(f_n) \to DT \] in operator norm. It suffices to show that $(1+A)^{-1}DT=0$. To this aim, note that $\calA_A'$ is a commutative unital Banach algebra. Since $\Phi_T(f_n) \in \calA_A'$, we have also $DT \in \calA_A'$. Hence, by Gelfand theory, it suffices to show that \[ \chi( (1+A)^{-1}DT) = 0 \] for each multiplicative linear functional $\chi: \calA_A'\to \mathbb{C}$. Fix such a functional $\chi$. Since \[ \chi( (1+A)^{-1}DT) = \chi( (1+A)^{-1}) \chi(DT) \] we may suppose without loss of generality that $\alpha := \chi((1+A)^{-1}) \neq 0$. Consider the function $c: \mathbb{R}_+ \to \mathbb{C}$, $c(t) := \chi( T(t))$. Then \[ c(t+s) = c(t) c(s)\qquad (t,s \ge 0) \] and $c$ is bounded. Moreover, \[ t \mapsto \alpha c(t) = \chi( T(t)(1+A)^{-1}) \] is continuous (by Proposition \ref{sgr.p.coi}). Since $\alpha \neq 0$, $c$ is continuous. It follows from a classical theorem of Cauchy that there is $\lambda \in \cls{\mathbb{C}_+}$ such that \[ c(t) = \mathrm{e}^{-\lambda t} \qquad (t\ge 0). \] Next, we find \[ \Phi(f_n)(1+A)^{-1} = \int_{\mathbb{R}_+} T(t)(1+A)^{-1}\, \mu_n(\mathrm{d}{t}). \] By Proposition \ref{sgr.p.coi}, the integrand is a norm-continuous function of $t$. Hence, \begin{align*} \chi( D\Phi_T(f_n)(1+A)^{-1}) & = \chi(D) \int_{\mathbb{R}_+} \chi( T(t)(1+A)^{-1}) \, \mu_n(\mathrm{d}{t}) \\ & = \alpha \chi(D) \int_{\mathbb{R}_+} \mathrm{e}^{-\lambda t} \, \mu_n(t) = \alpha \chi(D) f_n(\lambda). \end{align*} Letting $n \to \infty$ yields \[ \chi((1+A)^{-1}DT) = 0 \] as desired. (It is in this last step that we need the operator norm convergence $D\Phi_T(f_n)\to DT$.) \end{proof} \begin{rem} It is not difficult to see that the spectra of $(1+A)^{-1}$ in $\calA_A'$ and in $\mathcal{L}(X)$ coincide. 
It follows from Gelfand theory that \[ \sigma((1+A)^{-1}) =\{ \chi((1+A)^{-1}) \,\,|\,\, 0 \neq \chi \,\, \text{is a multiplicative functional on $\calA_A'$}\}. \] This implies, eventually, that if $\alpha := \chi( (1+A)^{-1}) \neq 0$ then $\chi(T(t)) = \mathrm{e}^{-\lambda t}$, where $(1 + \lambda)^{-1} = \alpha \in \sigma( (1+A)^{-1})$, and hence $\lambda \in \sigma(A)$ by the spectral mapping theorem for the resolvent. All in all we obtain that we can replace pointwise convergence on $\cls{\mathbb{C}_+}$ by pointwise convergence on $\sigma(A)$ in Theorem \ref{sgr.t.topext}. (These arguments actually show that the failing of the spectral mapping theorem for the semigroup is precisely due to the existence of multiplicative functionals $\chi$ on $\calA_A'$ that vanish on $(1+A)^{-1}$ but do not vanish on some $T(t)$. However, these functionals are irrelevant in our context.) \end{rem} According to Theorem \ref{sgr.t.topext}, the Hille--Phillips calculus $\Phi_T$ has a topological extension based on the joint convergence structure \[ \text{\rm (pointwise convergence on $\cls{\mathbb{C}_+}$ , $\tau_A^n$-convergence ) } \quad \text{on}\quad (\mathrm{H}^\infty(\cls{\mathbb{C}_+}) , \calA_A). \] Let us call this the {\bf semi-uniform extension} of the HP-calculus. We do not know whether one can replace $\tau_A^n$ by $\tau_A^s$ here in general. However, there are special cases, when it is possible. \medskip \subsection{Compatibility of the HP-Calculus and the Sectorial Calculus} \label{sgr.s.sec-sgr} By \eqref{sgr.eq.sthp}, the negative generator $A$ of the bounded semigroup $T$ is sectorial of angle $\omega_{\mathrm{se}}(A) \le \frac{\pi}{2}$. Hence, there are now two competing functional calculi for it, the Hille--Phillips calculus $\Phi_T$ and the sectorial calculus $\Phi_A$, each coming with its associated algebraic and topological extensions. Of course, we expect compatibility, so let us have a closer look. \medskip Suppose first that $\omega_{\mathrm{se}}(A) < \frac{\pi}{2}$. Then by Proposition \ref{hir.p.hol}, the Hille--Phillips calculus is a restriction of the uniform extension of the elementary sectorial calculus for $A$. By commutativity, compatibility is still valid for the respective algebraic extensions, that is: the extended Hille--Phillips calculus is a subcalculus of $\Phi_A^\mathrm{uni}$. Now, suppose that $\omega_{\mathrm{se}}(A) = \frac{\pi}{2}$. It has been shown in \cite[Lemma 3.3.1]{HaaseFC} that each $e \in \mathcal{E}[\sector{\frac{\pi}{2}}]$ is contained in $\calL\mathcal{M}(\mathbb{C}_+)$ with \[ \Phi_T(e) = \Phi_A(e) \] (The actual formulation of \cite[Lemma 3.3.1]{HaaseFC} yields a little less, but its proof works in the more general situation considered here.) It follows that \[ \mathcal{E}_e[\sector{\frac{\pi}{2}}] \subseteq \calL\mathcal{M}(\mathbb{C}_+)\quad \text{and}\quad \Phi_T = \Phi_A\,\, \text{on}\,\, \mathcal{E}_e[\sector{\frac{\pi}{2}}]. \] By commutativity of the algebras, the algebraic extensions of these calculi also are compatible (Theorem \ref{ext.t.succ-comp}). That is, the (algebraically) extended Hille--Phillips calculus is an extension of the sectorial calculus for $A$. Furthermore, the uniform extension of the sectorial calculus is clearly contained in the semi-uniform extension of the Hille--Phillips calculus as described above. \begin{rems} \begin{aufziii} \item The bounded Stieltjes algebra is actually included in the Hille--Phillips algebra. This can be seen by a direct computation. 
More generally, each function $f\in \Mer[\sector{\frac{\pi}{2}}]$ such that $\Phi_A(f)$ is bounded for {\em each} negative generator of a bounded semigroup, is contained in $\calL\mathcal{M}(\mathbb{C}_+)$. (Choose $T$ to be the right semigroup on $\Ell{1}(\mathbb{R}_+)$.) \item At present, we do not know how the bp-extension of the sectorial calculus and the semi-uniform extension of the HP-calculus relate. \end{aufziii} \end{rems} \medskip On the other hand, we can ``reach'' the HP-calculus from the sectorial calculus by employing a modification of the uniform extension. Namely, consider the joint convergence structure \begin{equation}\label{sgr.eq.joint-uni} \text{\rm ( uniform convergence on $\mathbb{C}_+$ , operator norm convergence ) } \end{equation} on $\calL\mathcal{M}(\mathbb{C}_+)$ and $\calA_A$, respectively. By compatibility and Theorem \ref{sgr.t.topext}, the sectorial calculus on $\mathcal{E}_e[\sector{\frac{\pi}{2}}]$ is closable with respect to that structure. The next result shows that the functions \[ \frac{\mathrm{e}^{-t\mathbf{z}}}{(1+ \mathbf{z})^2} \qquad (t > 0) \] are in the domain of the corresponding topological extension. \begin{lem}\label{sgr.l.coi-approx} Let $-A$ be the generator of a bounded semigroup $T$, and let $t > 0$ and $\omega < 0$. For any $n\in \mathbb{N}$ the function \[ f_n(z) := \frac{1}{2\pi \mathrm{i}} \int_{\omega + \mathrm{i}[-n,n]} \frac{\mathrm{e}^{-wt}}{(1+w)^2} \frac{\mathrm{d}{w}}{w-z} \] is contained in $\mathcal{E}_e[\sector{\frac{\pi}{2}}]$. Moreover, \[ f_n \to \frac{\mathrm{e}^{-t\mathbf{z}}}{(1+ \mathbf{z})^2}\quad (n \to \infty) \] uniformly on $\cls{\mathbb{C}_+}$ and \[ \Phi_A(f_n) \to T(t)(1+A)^{-2} \] in operator norm. \end{lem} \begin{proof} Note that $f_n$ is holomorphic on $\mathbb{C} \setminus (\omega {+}\mathrm{i}[-n,n])$ and hence on a sector $\sector{\varphi}$ for $\varphi > \pi/2$. On each smaller sector we have $f_n(z) = O(\abs{z}^{-1})$ as $\abs{z} \to \infty$ and $f_n(z) - f_n(0) = O(\abs{z})$ as $\abs{z} \to 0$. It follows that $f_n \in \mathcal{E}_e(\sector{\varphi})$. The remaining statements follow from the complex inversion formula. One needs the identity \[ \Phi_A(f_n) = \frac{1}{2\pi \mathrm{i}} \int_{\omega + \mathrm{i}[-n,n]} \frac{\mathrm{e}^{-wt}}{(1+w)^2}\, R(w,A)\mathrm{d}{w}, \] which is proved by standard arguments. \end{proof} Since $T(t)$ can be reconstructed algebraically from $T(t) (1+A)^{-2}$, we see that the semigroup operators are contained in the algebraic extension of the topological extension given by \eqref{sgr.eq.joint-uni} of the elementary sectorial calculus. \section{Normal Operators}\label{s.spt} Normal operators on Hilbert spaces are known, by the spectral theorem, to have the best functional calculus one can hope for. The (Borel) functional calculus for a normal operator is heavily used in many areas of mathematics and mathematical physics. Despite this importance of the functional calculus, the spectral theorem is most frequently formulated in terms of projection-valued measures or multiplication operators, and the functional calculus itself appears merely as a derived concept. This expositional dependence (of the functional calculus on the spectral measure) is manifest in the classical extension of the calculus from bounded to unbounded functions as described, e.g., in Rudin's book \cite{RudinFA}. 
Since the description of $f(A)$ for unbounded $f$ in terms of spectral measures is far from simple, working with the unbounded part of the calculus on the basis of this exposition is rather cumbersome. However, the situation now is different from when Rudin's classic text was written, in at least two respects. Firstly, we now have an axiomatic notion of a functional calculus (beyond bounded operators in its range). This enables us to develop the properties of the calculus from axioms rather than from a particular construction, which makes things far more perspicuous and, eventually, far easier to handle. \vanish{ In the classical approach, assertions like (FC2) appear as consequences of the construction (cf.{ }\cite[13.24]{RudinFA}). Consequences of (FC2) are itself not acknowledged as such, and hence remain dependent on the construction as well. In contrast, we can now develop the theory of Borel functional calculus from a minimalistic axiom system and integrate the construction steps of such a functional calculus into the theory as separate theorems. } Secondly, we now have an elegant tool to go from bounded to unbounded functions: the algebraic extension procedure. As a result, the unbounded part of the construction of the functional calculus for a normal operator on a Hilbert space just becomes a corollary of Theorem \ref{ext.t.ext}. Actually, all algebras in this context are commutative and there is always an anchor element, so one does not even need the full force of Theorem \ref{ext.t.ext}, but only the relatively elementary methods of \cite{HaaseFC}. In order to render these remarks less cryptic, we need of course be more specific. We shall sketch the main features below. A more detailed treatment can be found in the separate paper \cite{Haase2020bpre}. \medskip Let $(X,\Sigma)$ be a {\em measurable space}, i.e., $X$ is a set and $\Sigma$ is a $\sigma$-algebra of subsets of $X$. We let \begin{align*} \mathcal{M}(X,\Sigma) & := \{ f: X \to \mathbb{C} \,\,|\,\, \text{$f$ measurable}\}. \end{align*} A {\bf measurable (functional) calculus} on $(X,\Sigma)$ is a pair $(\Phi, H)$ where $H$ is a Hilbert space and \[ \Phi: \mathcal{M}(X,\Sigma)\to \mathcal{C}(H) \] is a mapping with the following properties ($f,\: g \in \mathcal{M}(X,\Sigma),\: \lambda \in \mathbb{C}$): \begin{aufziii} \item[\quad(MFC1)] $\Phi(\mathbf{1}) = \mathrm{I}$; \item[\quad(MFC2)] $\Phi(f) + \Phi(g) \subseteq \Phi(f+g)$ and $\lambda \Phi(f) \subseteq \Phi(\lambda f)$; \item[\quad (MFC3)] $\Phi(f)\Phi(g) \subseteq \Phi(fg)$\quad and \[ \dom(\Phi(f)\Phi(g)) = \dom(\Phi(g))\cap \dom(\Phi(fg)); \] \item[\quad(MFC4)] $\Phi(f) \in \mathcal{L}(H)$ and $\Phi(f)^* = \Phi(\konj{f})$ if $f$ is bounded; \item[\quad(MFC5)] If $f_n \to f$ pointwise and boundedly, then $\Phi(f_n) \to \Phi(f)$ weakly. \end{aufziii} Property (MFC5) is called the {\bf weak bp-continuity} of the mapping $\Phi$. \medskip Evidently, (MFC1)--(MFC3) are just the axioms (FC1)--(FC3) of a proto-calculus. For $f\in \mathcal{M}(X, \Sigma)$ let \[ e := \frac{1}{1 + \abs{f}} \] Then $e$ is a bounded function and $ef$ is also bounded. Hence, by (MFC4), $e$ is a regularizer of $f$. Moreover, $\Phi(e^{-1})$ is defined, and hence $\Phi(e^{-1}) = \Phi(e)^{-1}$ (Theorem \ref{afc.t.pro-cal}). It follows that \[ \Phi(f) = \Phi(e^{-1} e f) = \Phi(e)^{-1} \Phi(ef), \] which just means that the set $\{ e\}$ is determining for $\Phi(f)$. This show that \[ \mathcal{E} := \{ e\in \mathcal{M}(X, \Sigma) \,\,|\,\, \text{$e$ is bounded}\} \] is an algebraic core for $\Phi$. 
(In particular, $\Phi$ satisfies (FC4) and hence is a calculus.) As a result, each measurable calculus coincides with the algebraic extension of its restriction to the bounded functions. To construct a measurable calculus, it therefore suffices to construct a calculus on the bounded measurable functions and then apply the algebraic extension procedure. And this is a far simpler method than employing spectral measures. \medskip It is remarkable (and very practical) that only (MFC1)--(MC5) are needed to establish all the well-known properties of the Borel calculus for normal operators. For example, one can prove that the identity \[ \Phi(\konj{f}) = \Phi(f)^* \] holds for each $f\in \mathcal{M}(X, \Sigma)$, and not just for bounded functions as guaranteed by (MFC4). Next, observe that for given $f,g$ the sequence of functions \[ e_n := \frac{n}{n+\abs{f} + \abs{g}} \] form a common approximate identity for $f$ and $g$. (This is actually a strong approximate identity, since strong convergence in (MFC5) holds automatically.) By Theorem \ref{api.t.api} we obtain \[ \cls{\Phi(f) + \Phi(g)} = \Phi(f+g),\qquad \cls{\Phi(f)\Phi(g)} = \Phi(fg). \] One of the most important results in this abstract development of measurable calculi concerns uniqueness. We only cite a corollary of a more general theorem: \begin{thm} Let $X\subseteq \mathbb{C}^d$, endowed with the trace $\sigma$-algebra of the Borel algebra. Let $(\Phi, H)$ and $(\Psi,H)$ be two measurable calculi on $X$ such that \[ \Phi(\mathbf{z}_j) = \Psi(\mathbf{z}_j) \quad (j=1, \dots, d). \] Then $\Phi = \Psi$. \end{thm} This theorem implies, e.g., the composition rule \[ (f\circ g)(A) = f( g(A)) \] for a normal operator $A$ on a Hilbert space $H$, since both mappings \[ \Phi(f) := (f\circ g)(A) \quad \text{and}\quad \Psi(f) := f(g(A)) \] are Borel calculi on $\mathbb{C}$ that agree for $f = \mathbf{z}$. \medskip For more about the functional calculus approach to the spectral theorem we refer to \cite{Haase2020bpre}. \medskip \vanish{ \section{Construction of Functional Calculi by Gelfand Theory} Let $X$ be a Banach space and let $A$ a closed operator on $X$ with nonempty resolvent set $\varrho(A) \neq \emptyset$. We let \[ \calA_A := \cls{\mathrm{alg}}\{ R(\lambda,A) \,\,|\,\, \lambda \in \varrho(A)\} \] the smallest norm-closed unital subalgebra of $\mathcal{L}(X)$ that contains all resolvents of $A$. The {\bf commutant algebra} of $A$ is \[ \calC_A := \{ T \in \mathcal{L}(X) \,\,|\,\, TA \subseteq AT\}. \] It is a unital subalgebra of $\mathcal{L}(X)$ that contains all resolvent operators $R(\lambda,A)$, $\lambda \in \varrho(A)$. It is closed in the strong operator topology. By \cite{...}, $\calC_A = \calA_A'$, the commutant algebra of $\calA_A$. The {\bf double commutant algebra} of $A$ is the commutant algebra of $\calC_A$, i.e., \[ \calB_A := \calC_A' = \{ S\in \mathcal{L}(X) \,\,|\,\, ST = TS \,\,\text{for all $T\in \calC_A$}\} = \calA_A''. \] This is a strongly closed, commutative unital subalgebra of $\mathcal{L}(X)$, containing all resolvents of $A$. Moreover, $\calB_A = \calB_A'$. As $\calA_A$ is commutative, Gelfand theory applies. We let \[ \Gamma_A := \Gamma(\calA_A) := \{ \gamma : \calA_A \to \mathbb{C} \,\,|\,\, 0 \neq \gamma \,\,\text{is a character}\} \] be the {\bf Gelfand space} of non-zero characters (= multiplicative linear functionals) on $\calA_A$. It is a subset of the dual unit ball of $\calA_A$, compact in the weak$^*$ topology. 
The associated {\bf Gelfand map} is \[ \Psi : \calA_A \to \mathrm{C}(\Gamma_A), \quad T \mapsto (\gamma \mapsto \gamma(T)). \] It is continuous and norm-decreasing, and one has \[ \{ \gamma(T) \,\,|\,\, \gamma \in \Gamma_A \} = \sigma_{\calA_A}(T), \] the spectrum of $T$ with respect to the algebra $\calA_A$.
\section{Introduction} The Gaia mission \citep{2016A&A...595A...1G} and its second data release (GDR2) \citep{2018A&A...616A...1G} have provided positions, parallaxes, proper motions, and three photometric bands for 1.3\,billion sources across the sky. It also provided effective temperatures, luminosities, extinctions, and radial velocities for various subsets of these sources. While this has led to an unprecedentedly rich view of our Milky Way system \citep[i.a.][]{2018Natur.563...85H,2018A&A...618A..93C,2018MNRAS.478..611B}, it is at the same time hard to understand the limits of this data set. To help with this, the community has produced mock stellar catalogs that have similar selections to, and provide the same observables as, Gaia, in which the underlying truth is known. \citet{2018MNRAS.481.1726G,2020ApJS..246....6S} used N-body cosmological simulations of Milky Way-like galaxies. These have been used to interpret patterns in the stellar phase-space structure seen in GDR2 in terms of our Galaxy's merger history \citep{2019arXiv190904679B, 2020arXiv200106009G}, and they have been used to estimate the mass of our Galaxy \citep{2019MNRAS.487L..72G}. A slightly different approach was taken by some of the present authors in \citet{2018PASP..130g4101R}, where we used an underlying Milky Way model \citep{2003A&A...409..523R} to produce a mock stellar catalog with \texttt{galaxia} \citep{2011ApJ...730....3S}, a tool to sample stars from density distributions or N-body data. We published this in the same way as GDR2, namely via \texttt{ADQL} and mimicking the GDR2 data model. This proved useful for testing the Gaia selection function \citep{2018A&A...616A..37B,2019ApJ...887..237C} and also to estimate false positive rates in common proper motion pairs \citep{2018MNRAS.480.4884E,2020ApJS..246....4T}. It also served as a Galaxy prior \citep{2018AJ....156...58B} and provided an easy way to query a Milky Way model \citep{2019ApJ...881..164Y,2020MNRAS.tmp...78A} or estimate starcounts for future surveys \citep{2019ApJ...883..107C}. In this paper we present Gaia early DR3 mock (GeDR3mock), a simulated Gaia catalog with entries for 1,573,457,319 individual stars brighter than G\,=\,20.7\,mag. It is intended as a community service for the preparation of the upcoming Gaia Early Data Release 3 (Gaia EDR3). Compared to our GDR2mock catalog \citep{2018PASP..130g4101R}, for GeDR3mock we have updated the Milky Way model \citep{2014A&A...564A.102C} and have added the Magellanic Clouds and over 1,000 open clusters \citep{2018A&A...618A..93C,2013A&A...558A..53K}. We simulate observational uncertainties empirically using GDR2 uncertainties scaled to the longer baseline of 34\,months for Gaia EDR3 (compared to 22\,months in GDR2). We again mimic the GDR2 data model, and additionally provide all underlying stellar parameters, e.g. teff, logg, feh, age, extinctions in all bands, initial and current mass, and which Galactic component the star belongs to. All values provided in the catalog are noise free, and we provide all values for all stars, i.e.\ including those that will be absent from Gaia EDR3 or even Gaia DR3. For example, we provide radial velocities for all stars, which means that the user has to apply an appropriate selection.
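As an illustration of such a selection (a sketch only: it uses the GDR2 data-model column names \texttt{phot\_g\_mean\_mag} and \texttt{radial\_velocity} that the mock mimics, and the magnitude cut is merely a rough stand-in for the actual GDR2 radial velocity selection):
\begin{lstlisting}
SELECT source_id, radial_velocity
FROM gedr3mock.main
WHERE phot_g_mean_mag < 12.5 -- rough bright-end cut mimicking a GDR2-like RV subsample
\end{lstlisting}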
We assist with the selection function by providing both maps of limiting magnitudes for four different GDR2 selections\footnote{The RVS sample is missing and will be addressed in a forthcoming publication.} (all sources, all with parallax, all with BP-RP, all with parallax and BP-RP), and the tools we used to create the maps \citep{2019ascl.soft01005R}. We provide example \texttt{ADQL} queries to illustrate how to access the data. The paper is structured as follows: In Section\,\ref{sec:generation} we sketch the generation of GeDR3mock. Section\,\ref{sec:selection} discusses selection effects of the Gaia instrument, followed by a comparison to GDR2 in Section\,\ref{sec:comparison}. In Section\,\ref{sec:catalog} we discuss the catalog content and limitations. We provide example queries in Section\,\ref{sec:example_queries}. \section{Catalog generation} \label{sec:generation} Our catalog has been generated using \texttt{galaxia} \citep{2011ApJ...730....3S}, a tool to turn an underlying chemo-dynamical Milky Way model via stellar isochrones into a synthetic or `mock' stellar catalog. It also has the functionality to turn N-body data into mock stellar particles, which we use to generate the Magellanic Clouds and open clusters. We use version 0.8.1 of \texttt{galaxia} \citep{2019MNRAS.tmp.2471S} and have made some modifications to the code that we explain below. We have linked the final version of our \texttt{galaxia} code in the \texttt{galaxia\_wrap}\footnote{\url{https://github.com/jan-rybizki/Galaxia_wrap}} Python package. Both can be used in interplay to redo or customise our catalog. \subsection{The Milky Way model} The underlying Galaxy model of \texttt{galaxia} is based on the Besan\c con \citep{2003A&A...409..523R} model. Since 2003 the Besan\c con model has seen many updates for various Galactic components. We have implemented a selection of these changes and list them in the following subsections. For each Galactic component we indicate the population ID (\texttt{popid}), which can be used to select only stars of a specific component. Basic information on age and local mass normalisation of the thin- and thick-disk components can be inspected in Table\,\ref{tab:local_mass}. \label{sec:galaxia} \subsubsection{Thin disk - popid = 0-6} We use a thin disk scale length of 2.2\,kpc for popid 1 to 6 \citep{2009A&A...495..819R}, but use the fiducial 5\,kpc for the youngest disk population (popid = 0) as in \citet{2014A&A...564A.102C}. The star formation rate (SFR) is modelled as exp(-0.12$\tau$), where $\tau$ is the time from 10\,Gyr ago, in accordance with \citet{2014A&A...564A.102C}\footnote{Though the SFR is still piecewise flat for each thin disk population, cf. \citet[][tab. 2]{2011ApJ...730....3S}.}. We use the KH-v6 initial mass function (IMF) from \citet[tab.\,1]{2014A&A...564A.102C} for the thin disk. For the metallicity we implemented the values from \citet[tab.\,5]{2012A&A...543A.100R}. \subsubsection{Thick disk - popid = 7} For the thick disk, we implemented an age spread of 1\,Gyr \citep{2019MNRAS.tmp.2471S} and left the mean at 11\,Gyr. The thick disk metallicity is set to $-0.48\pm 0.3$\,dex \citep{2014A&A...564A.102C}. \subsubsection{Halo - popid = 8} We set the age of the halo to 13\,Gyr instead of the default 14\,Gyr because of isochrone limitations. The metallicity is $-1.5\pm0.5$\,dex \citep[tab.\,5]{2012A&A...543A.100R}. The velocity dispersion is taken from \citet[tab.\,7]{2012A&A...543A.100R}.
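As for every component, halo stars can be selected via their \texttt{popid} (here popid = 8); more generally, the relative contribution of each component to the catalog can be checked with a simple aggregation. A minimal sketch in standard ADQL (a full-table scan, so it may take a while on the TAP service):
\begin{lstlisting}
SELECT popid, COUNT(*) AS n_stars
FROM gedr3mock.main
GROUP BY popid
ORDER BY popid
-- returns the star counts per Galactic component
\end{lstlisting}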
\subsubsection{Bulge - popid = 9}
The metallicity is $0.0\pm 0.2$\,dex \citep[tab.\,5]{2012A&A...543A.100R}. The velocity dispersion is also taken from \citet[tab.\,7]{2012A&A...543A.100R}.

\subsubsection{Magellanic Clouds - popid = 10}
The most prominent extragalactic features in the sky density maps of GDR2 data are the Magellanic Clouds (MCs) (see the middle panel of Figure\,\ref{fig:catalog_comparison}). We include a simple model of the MCs in order to study first-order selection effects that occur in such dense regions at large distances. If the user does not want to include the MCs when querying GeDR3mock, they can be excluded by adding the following to a query:
\begin{lstlisting}
WHERE popid != 10 -- this is part of an ADQL query
\end{lstlisting}
To generate N-body particles (from which \texttt{galaxia} then produces mock stellar particles) that represent the MCs, we use the parameters of the MC model tabulated in \citet[tab.\,10]{2012A&A...543A.100R} and assume a constant star formation rate for both MCs. The sky position is taken from \citet{2003A&A...412...45P}. We arbitrarily set the velocity dispersions to 20 and 10\,km/s for the LMC and SMC, respectively. The stellar masses are set to $3.8\times10^9$\,M$_\odot$ for the LMC and to $6\times10^8$\,M$_\odot$ for the SMC, values that approximately reproduce the starcounts in those regions of the sky. To build the LMC we drew $10^5$ particles from a Gaussian distribution with a standard deviation of 1.075\,kpc. We assume spherical symmetry for the SMC as well, for which we drew $10^4$ particles with a standard deviation of 0.525\,kpc. The velocity distribution is randomly applied to each particle (relative to the 3D velocity of the centre-of-mass of the LMC given in \citet[tab.\,10]{2012A&A...543A.100R}) using a normal distribution and neglecting the position of the particle within the MCs (which we accept is dynamically inconsistent). This means that the GeDR3mock MC kinematics should not be compared to the detailed Gaia observations, which, among other things, allow the rotation field of these galaxies to be inferred. For \texttt{galaxia} to generate mock stellar particles from these $10^5$ and $10^4$ N-body particles we calculate a 6D smoothing length using EnBiD\footnote{\url{https://sourceforge.net/projects/enbid/}} \citep{2006MNRAS.373.1293S}.

\subsubsection{Open clusters - popid = 11}
Our underlying Milky Way model is smooth, but we know that the real Galaxy has many localised overdensities in phase space, such as moving groups and open and globular clusters. Unraveling and cataloging such structures with the help of \textit{Gaia} data is an active topic of research \citep[e.g.][]{2019ApJS..245...32L,2020arXiv200107122C}. We add mock open star clusters to our catalog, so that the astronomical community can train their algorithms to detect them and to extract their underlying astrophysical parameters. As an input catalog we use 1118 real clusters from \citet{2018A&A...618A..93C} and \citet{2013A&A...558A..53K}. We mock up the unknown astrophysical parameters of these in order to create an underlying truth, from which we can sample stars. The exact procedure can be inspected in notebook 7a of \texttt{galaxia\_wrap} (where a FITS file with the exact values can also be found), but in brief, the procedure for assigning parameters to individual objects is as follows.
\begin{itemize}
\item The metallicity, [Fe/H], is set to 0.1\,dex in the inner disk and $-0.35$\,dex in the outer disk, with a linear transition between 8\,kpc and 12\,kpc galactocentric radius.
We add Gaussian noise of $\pm$0.1\,dex to these [Fe/H] values.
\item Cluster masses are drawn from a truncated normal distribution between 300 and 2050\,M$_\odot$ (most clusters have low masses) and are sorted and assigned according to the number of member stars in GDR2. We chose this mass distribution in order to roughly reproduce the overall number of cluster members, which is of the order of 400\,k stars.
\item We assume solid-body rotation for the stellar clusters with a random spin axis. The rotational velocity is also correlated with the number of member stars (more stars mean a higher rotational velocity). Velocities range from 0.1 to 0.7\,km/s and are given at the cluster radius.
\item Cluster centre-of-mass positions and velocities were taken directly from the input catalog of 1118 clusters.
\end{itemize}
The mock data was generated using notebook 7. The particles contained in each cluster were distributed in a Plummer sphere using \texttt{amuse} \citep{2009NewA...14..369P}. To the resulting self-consistent velocity distribution we added the internal rotation, depending on the position of each stellar particle with respect to the spin axis. The open cluster population can be easily queried\footnote{We report ADQL queries for didactic purposes and to help with reproducibility. They can be run on TAP services such as \texttt{topcat} \citep{2005ASPC..347...29T}, which also visualises the results. Alternatives include \texttt{PyVO} \citep{2014ascl.soft02004G,2019ASPC..521..483B} and web interfaces (e.g. \url{http://dc.zah.uni-heidelberg.de/__system__/adql/query/form}).} via:
\begin{lstlisting}
SELECT
GAVO_NORMAL_RANDOM(pmra,pmra_error) AS pmra_obs,
  -- noise added values
GAVO_NORMAL_RANDOM(pmdec,pmdec_error) AS pmdec_obs
FROM gedr3mock.main
WHERE popid = 11 -- selects only open cluster stars
-- takes about 10 minutes
\end{lstlisting}
This query was used to generate the data for Fig.~\ref{fig:oc}, where observational noise has already been added via the \texttt{GAVO\_NORMAL\_RANDOM} function\footnote{It is not possible to reproduce the results of this function via e.g. a seed, due to the unspecified sequence in which the query results are returned.}. We show the proper motions for mock and GDR2 cluster members \citep{2020A&A...633A..99C} in orange and blue, respectively. Although the real clusters and their mock counterparts differ on a star-by-star basis, their statistical properties are (by design) in overall agreement. Finding and characterising the mock clusters might be a good exercise to test the capabilities of detection methods to be used on the \textit{Gaia}~EDR3 data. If the user is not interested in those mock clusters, they can be excluded from a query via the statement:
\begin{lstlisting}
WHERE popid != 11 -- de-selects the open clusters
\end{lstlisting}
\begin{figure}
\includegraphics[width=\linewidth]{gfx/oc_comparison.png}
\caption{Proper motions (pmra and pmdec) for open cluster member stars in GeDR3mock in orange and GDR2 in blue. Observational noise is added to GeDR3mock from within ADQL.}
\label{fig:oc}
\end{figure}
If users would like to mock up their own N-body data, e.g.\ clusters including tidal tails, streams or whole galaxies, they can adjust the procedure used in \texttt{galaxia\_wrap} notebooks\,6\,\&\,7.

\subsubsection{Galactic warp and flare}
We update the parametrisation of the warp, based on \citet{1999ApJ...521..190G}, following \citet{2009A&A...495..819R}.
Their comparison to 2MASS starcounts reveals that the displacement of the mid-plane, characterised by the term $\gamma_\mathrm{warp}$ in the expression
\begin{equation}
z_\mathrm{warp}(R) = \gamma_\mathrm{warp}\times (R-R_\mathrm{warp})\times \sin(\phi - \phi_\mathrm{warp}),
\end{equation}
needs to be lowered from 0.18 to 0.09. $z_\mathrm{warp}(R)$ denotes the height of the warp above the plane. The starting galactocentric radius of the warp, $R_\mathrm{warp}$, is left at 8.4\,kpc. For the warp angle, $\phi_\mathrm{warp}$, we change the value from $0^\circ$ to $15^\circ$ in line with \citet{2004mim..proc..165Y}, a change which had no major effect on the fitting in \citet{2009A&A...495..819R}.

\subsubsection{Thin- and thick-disk normalisation}
The various changes to the default \texttt{galaxia} MW model outlined above, especially to the SFR and IMF, result in a substantial change in the starcount distribution over the whole sky in our updated model. To calibrate the new model against GDR2 data we produced models with different thin- and thick-disk normalisations, i.e.\ we rescaled the density distribution of the underlying model by a linear factor for the thin and thick disk separately. We compared to local densities, which are based upon \citet{1997ESASP.402..675J} data (cf.\ Table\,\ref{tab:local_mass}), and to global starcounts (cf.\ Figure\,\ref{fig:catalog_comparison}). For the latter we applied HEALpix-dependent\footnote{\url{http://healpix.sf.net}} G magnitude limits \citep{2018ascl.soft11018R} (as explained in Section\,\ref{sec:maglim}) to the mock and the real data and cut out the MCs. We also inspected how well the mock data would fit the real data, using a Poisson likelihood based on binned CMDs per HEALpix, where the HEALpix level is variable in order to have a similar number of stars in each HEALpix (and therefore CMD). The exact procedure and algorithms can be looked up in \texttt{galaxia\_wrap} notebook 5\footnote{The computational time of a single all-sky evaluation is about 10\,minutes on a modern CPU, which makes it feasible to run inferences on multiple galactic parameters similar to \citet{2019MNRAS.tmp.2471S} or \citet{2019arXiv190404350P}. A somewhat superior approach in terms of computational cost can be found in \citet{2018A&A...620A..79M}.}. A compromise between the overall starcounts, the local mass normalisation, and the CMD likelihood was then chosen by eye\footnote{Figures of the different metrics can be inspected in notebook 5a.}, resulting in a thin-disk normalisation of 0.9 and a thick-disk normalisation of 0.8. The new thin-disk normalisation of 0.9 applies to all thin-disk populations\footnote{In \texttt{galaxia} the thin disk population 6 has been lowered by 20\,\%. We no longer apply this reduction.}, i.e. \texttt{popid}\,$\in[0,1,2,3,4,5,6]$.

\subsection{PARSEC-COLIBRI isochrones}\label{sec:isochrones}
The set of isochrones is the main astrophysical input that turns the underlying density distribution into mock stellar observations. Therefore we included the latest updates on these, as well as white dwarf tracks\footnote{White dwarfs were not included in GDR2mock.}. The basic isochrones come from \citet{2017ApJ...835...77M}, and are built by joining the PARSEC evolutionary tracks from \citet{2012MNRAS.427..127B} with the thermally pulsing asymptotic giant branch (TP-AGB) tracks from \citet{Pastorelli_2019}\footnote{\url{https://stev.oapd.inaf.it/cmd}}. To these tracks, we add WD tracks from \citet{Bertolami_2016}, using cooling sequences of initial metallicity $Z=0.01$ from \citet{Renedo_2010}.
These grids of white dwarfs have been extrapolated up to a final WD mass of $1.1\,M_\odot$ by using fitting relations. The derived isochrones are converted into the Gaia DR2 magnitudes by means of synthetic photometry performed with the YBC software \citep{2019A&A...632A.105C}\footnote{\url{https://stev.oapd.inaf.it/YBC}}, in this case using the \citet{2018A&A...617A.138W} filter transmission curves for Gaia, which provide two BP bands, i.e.\ one for bright and one for faint magnitudes, with a boundary at $\mathrm{G}=10.87$\,mag.

For the generation of Gaia photometry \texttt{galaxia} uses the complete isochrone set. In order to calculate the extinction in all bands (Gaia, SDSS, 2MASS, UBV) and the photometry in other systems (SDSS, 2MASS, UBV) we use a gridded version of the isochrones, reducing the total number of model stars from 8,102,858 to 243,238. We create a grid with the following number of bins (step size in parentheses) [boundaries in brackets]: [Fe/H] 36 (0.05\,dex) [-1.5, 0.34]; $\log_{10}(T_{\mathrm{eff}})$ 162 (0.02) [2.45, 5.68]; $\log_{10}(\mathrm{L/L_\odot})$ 217 (0.05) [-4.60, 6.24]. A combination of those three dimensions determines the \texttt{index\_parsec} (LLLTTTFFF, L = lum, T = teff, F = feh). For all stars that fall into a specific grid point we take the median and also inspect the standard deviation\footnote{We exclude some TP-AGB stars which had extremely high extinction values resulting in strong outliers within our isochrone grid.}. We report here the 50th and 99th percentiles of the standard deviation over all bins of this grid for the other stellar parameters: log(age) [0.03,1.22]; initial mass [0.10,2.96]; current mass [0.03,4.14]; log(g) [0.04,0.51]; G [0.04,0.78]; G$_\mathrm{BP}$ [0.04,1.12]; G$_\mathrm{RP}$ [0.04,0.68]; G$_\mathrm{RVS}$ [0.04,0.63]. These photometric bands and extinctions can be queried via a separate table: \texttt{gedr3mock.parsec\_props}. An example is given in Section\,\ref{sec:parsec_props}. Due to the non-linear scaling of extinction with dust column density (reddening of an already reddened spectrum is weaker), extinction values are given for 6 different A$_0$ values: 1, 2, 3, 5, 10, 20\,mag. As extinction law we used \citet{1989ApJ...345..245C} plus \citet{1994ApJ...422..158O}, with $R_V=3.1$, and higher-order bolometric corrections have been taken into account. Links to the raw and reduced isochrone data are given in \texttt{galaxia\_wrap}. Notebooks on the generation of the grid can be found here\footnote{\url{https://github.com/jan-rybizki/Galaxia_wrap/tree/master/notebook/isochrone_generation}}.

\subsection{New 3D extinction map}
An integral part of mock stellar catalog generation is the application of interstellar reddening due to dust, because many stars that would be brighter than G\,=\,20.7\,mag in the absence of dust fall below this limit once extinction is added. We build upon our experience with the \citet{2016ApJ...818..130B} combined dust map, which was put together using different 3D extinction maps (in order to get full-sky coverage). We replace the Bayestar 2015 map \citep{2015ApJ...810...25G} by Bayestar 2019 \citep{2019arXiv190502734G} for $|b|<20^\circ$ and by Bayestar 2017 \citep{2018MNRAS.478..651G} above (Bayestar 2017 has less clustering in low-dust regions). Towards the Galactic centre the combined map uses \citet{2006A&A...453..635M}, which goes deeper since it is based on infrared data, whereas Bayestar also requires photometric measurements in the optical.
Parts that are not covered due to the Pan-STARRS \citep{2016arXiv161205560C} footprint are filled with \citet{2003A&A...409..205D}. See Figure 1 of \citet{2016ApJ...818..130B} for the footprint of each map. The resolution was increased to HEALpix level 9 (nside = 512, area of 47\,arcmin$^2$) from HEALpix level 7 (nside = 128, 755\,arcmin$^2$) in GDR2mock, and the distance sampling is refined to 120 bins logarithmically spaced from 60\,pc to 60\,kpc\footnote{These values refer to Bayestar 2019. The other maps have different grids which are interpolated to that grid.}. The data cube is linked in \texttt{galaxia\_wrap} and methods for its application are provided in the \texttt{library/add\_extinction.py} file of \texttt{galaxia\_wrap}. $A_0$ values (monochromatic extinction at $\lambda = 547.7$\,nm, in mag) are interpolated linearly in distance, while adjacent HEALpix values are not interpolated (the HEALpix footprint is visible, as are the borders between the different extinction maps). As reported before, the extinction in specific bands (G, BP\_bright, BP\_faint, RP and RVS; the two BP bands will be merged in a later step, described in Section\,\ref{sec:mag_cut}) has been precalculated for 6 different values of A$_0$: 1, 2, 3, 5, 10, 20\,mag. In order to calculate the extinction in a specific band for a specific value of A$_0$ (which comes from the 3D extinction map), we fit a cubic to those 6 values, evaluate it on a finer grid between 0 and 20\,mag, and then interpolate linearly to the exact A$_0$ (this two-step interpolation is a compromise between accuracy and speed when operating on large extinction arrays).\footnote{To query A$_0$ values from the 3D positions of 10\,M sources and apply the band-corrected extinction to each of the 5 bands takes about 1\,minute on a modern laptop.} For values of A$_0$ that are larger than 20\,mag, we linearly scale the value for A$_0=20$\,mag. The procedure outlined above corresponds to the steps taken in \texttt{library/util.py:apply\_extinction\_curves()} and gives the extinction in the respective photometric band, which we add to the apparent magnitudes of the unreddened stars as generated by \texttt{galaxia}.

\subsection{Apparent magnitude cut}
\label{sec:mag_cut}
For GeDR3mock we compute all stars with G brighter than 20.7\,mag using \texttt{galaxia}. Afterwards we apply our 3D extinction map and add absorption to each band. Thereafter we remove stars with G\,$>20.7$\,mag, which reduces the number of stars by a factor of 4. Up to this point we have used the bright and faint BP bands separately, but now we use one of them as BP depending on whether the source is brighter or fainter than G\,$=10.87$\,mag \citep{2018A&A...617A.138W}. Note that some of the sources in our catalog can have BP or RP magnitudes much fainter than 20.7\,mag in their respective bands. Thus, in order to retrieve sources from our catalog that would have BP and RP measurements in GDR2 (or Gaia EDR3), the user may want to apply cuts on these magnitudes. We do not model BP or RP excess flux spilling over from nearby sources, which brightens those bands for faint stars in dense areas in the real Gaia data.

\subsection{Uncertainty model}\label{sec:errormodel}
In GDR2mock we used the pre-launch nominal sky-averaged error model. This underestimates uncertainties, especially in the bright regime. To simulate the Gaia measurement metrics (e.g.
no.\ of observations, parallax or photometric uncertainties) more accurately for GeDR3mock, we use GDR2 data to fit a predictive model of a metric as a function of parameters that we can simulate from \texttt{galaxia} (e.g.\ magnitude, colour, position). Specifically, we select 0.5\,\% of GDR2 data at random and use this to train ExtraTrees models \citep{Geurts2006,scikit-learn}. The first model uses Galactic longitude and latitude as inputs to predict the number of \texttt{visibility\_periods\_used} (VPU) and \texttt{phot\_g\_n\_obs} (NOBS). These are multiplied by 34/22 (the longer baseline of Gaia EDR3 compared to GDR2) and rounded to the nearest integer. We then train separate models with G, BP-RP, VPU and NOBS (all values still coming from GDR2) as inputs to predict the \texttt{parallax\_error} and \texttt{phot\_g\_mean\_mag\_error} (using approximate values computed from the symmetrised flux uncertainties). These are then scaled by $\sqrt{22/34}$ to account for the longer baseline. This scaling assumes that the dominant noise contribution is source noise rather than systematics, which is not actually the case at the bright end. Strictly, the factor of $\sqrt{22/34}$ applies to flux uncertainties, not magnitude uncertainties. Similarly, the \texttt{radial\_velocity\_error} is predicted from G, BP-RP, and T$_\mathrm{eff}$, and the same rescaling is applied. The procedure outlined above can be inspected in notebook~8. From NOBS we derive NOBS for BP and RP by fitting a linear relation via least squares. We similarly derive the photometric uncertainties in the other bands from linear relations on \texttt{phot\_g\_mean\_mag\_error}, and the uncertainties in positions and proper motions from linear relations\footnote{The derived scaling relations compare well with the estimated end-of-mission values from \url{https://www.cosmos.esa.int/web/gaia/table-6} given the expected relative improvement of proper motion uncertainties with time.} on \texttt{parallax\_error}. The fitted relations are listed in Table\,\ref{tab:error_scaling} and the procedure to obtain those values can be inspected in notebook~8a. We account for the $\left(22/34\right)^{1.5}$ uncertainty scaling for the proper motions. We produce the mock errors this way in order to save storage in the ADQL database, because simple scaling relations with other columns do not require additional space.
\begin{table}[]
\caption{Empirical scaling relations that evaluate the quantity in the first column as a function of the quantity (from GDR2) in the second column.}
\centering
\begin{tabular}{c|c}
derived quantity & scaling relation \\
\hline
\texttt{phot\_bp\_n\_obs} & 0.092 \texttt{phot\_g\_n\_obs}\\
\texttt{phot\_rp\_n\_obs} & 0.096 \texttt{phot\_g\_n\_obs}\\
\texttt{phot\_bp\_mean\_mag\_error} & 19.85 \texttt{phot\_g\_mean\_mag\_error}\\
\texttt{phot\_rp\_mean\_mag\_error} & 9.12 \texttt{phot\_g\_mean\_mag\_error}\\
\texttt{pmra\_error} & 1.71 \texttt{parallax\_error}\\
\texttt{pmdec\_error} & 1.52 \texttt{parallax\_error}\\
\texttt{ra\_error} & 0.81 \texttt{parallax\_error}\\
\texttt{dec\_error} & 0.75 \texttt{parallax\_error}\\
\end{tabular}
\label{tab:error_scaling}
\end{table}

\subsection{Catalog entries are reported noise-free}
\label{sec:catalogue_entries}
All quantities that we report in GeDR3mock are noise-free. Noise can be added based on the uncertainty estimates derived in Section\,\ref{sec:errormodel} from within \texttt{ADQL}: see the example in Section\,\ref{sec:example_queries}.
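For users who download the catalog and work outside \texttt{ADQL}, the same can be done offline. The following is a minimal Python sketch, not part of the released tooling, of how the scaling relations of Table\,\ref{tab:error_scaling} relate the uncertainty columns and how Gaussian noise can be added to the noise-free values (column names follow the GeDR3mock data model):
\begin{lstlisting}
import numpy as np

rng = np.random.default_rng(42)

# Uncertainties derived from parallax_error via the linear relations
# of Table "error_scaling" (coefficients copied from that table).
def derived_astrometric_errors(parallax_error):
    return {
        "pmra_error":  1.71 * parallax_error,
        "pmdec_error": 1.52 * parallax_error,
        "ra_error":    0.81 * parallax_error,
        "dec_error":   0.75 * parallax_error,
    }

# Add Gaussian noise to noise-free columns, mirroring what
# GAVO_NORMAL_RANDOM does server-side.  `table` is any dict-like
# column store (e.g. an astropy Table).
def add_observational_noise(table):
    for col in ("parallax", "pmra", "pmdec", "radial_velocity"):
        table[col + "_obs"] = rng.normal(table[col], table[col + "_error"])
    return table
\end{lstlisting}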
As in the GDR2 data model, GeDR3mock contains the phase-space distribution in the following observables: {\tt ra, dec, l, b, parallax, pmra, pmdec, radial\_velocity}, and similarly for the photometry, though we add an extra G$_\mathrm{RVS}$ column. A few stellar parameters have been estimated in GDR2 \citep{2018A&A...616A...8A} and these are reported together with all the other known quantities in GeDR3mock: {\tt teff\_val, ag\_val, a\_g\_val, e\_bp\_min\_rp\_val, radius\_val, lum\_val, feh, a0, initial\_mass, current\_mass, age, logg, popid, a\_bp\_val, a\_rp\_val, a\_rvs\_val}. The column descriptions can be inspected here\footnote{\url{http://dc.g-vo.org/tableinfo/gedr3mock.main}}.

\section{Selection function}\label{sec:selection}
Here we explain and investigate two effects that prevent stars from entering the real Gaia catalog, even though they are brighter than G\,=\,20.7\,mag and therefore included in the GeDR3mock catalog. For a proper comparison between mock and data, these selection effects should be taken into account.

\subsection{Contrast sensitivity}
\label{sec:contras_sensitivity}
When two sources in Gaia are close to each other, the fainter one might not be allocated an observation window by Gaia, depending on their separation and magnitude difference \citep{2015A&A...576A..74D}. This effect, dubbed ``contrast sensitivity'', has been quantified to some degree for GDR2 by \citet{2019A&A...621A..86B}. We used their Table\,1 to calculate, for each source in GeDR3mock, its probability of being observed, which we call ``visibility''. We compute and add to GeDR3mock a quantity \texttt{d11y} that gives an integer from 0 to 100, where 0 means no visibility. The \texttt{ADQL} query that pre-selects close pairs and calculates \texttt{d11y} is linked in the \texttt{galaxia\_wrap} repository. As can be seen from Table\,\ref{tab:contrast_sensitivity}, 69\,million sources have issues with neighbours that are too bright and too close. When accounting for the magnitude limits from GDR2 this number drops to 33\,million. GeDR3mock does not include binaries or globular clusters, which would otherwise increase those numbers.
\begin{table}[]
\caption{Number of sources in GeDR3mock with certain visibility values, for all sources (second column), and for sources brighter than the magnitude limits given in Table\,\ref{tab:mag_lim} (third column).}
\centering
\begin{tabular}{c|c|c}
visibility & GeDR3mock & GeDR3mock with G maglim \\
\hline
\% & \multicolumn{2}{c}{million}\\
\hline\hline
0 & 34 & 16 \\
1--50 & 20 & 10 \\
51--99 & 15 & 7 \\
100 & 1\,505 & 1\,304 \\
\end{tabular}
\label{tab:contrast_sensitivity}
\end{table}

\subsection{Magnitude limit of GDR2}
\label{sec:maglim}
The effective magnitude limit along a line of sight can be shifted towards brighter magnitudes by a combination of crowding \citep{2016A&A...595A...1G} and a limited number of scans. The latter can, for faint sources, drop below the number of observations required for specific Gaia data products to be included in a release, e.g.\ parallax \citep{2018A&A...616A...2L}, G, BP, RP, or RVS. This magnitude limit can be approximated by the mode of the magnitude distribution within a specific HEALpix. When, in the following, we speak of the magnitude limit, we refer to the mode of the magnitude distribution binned in 0.1\,mag bins. To illustrate how this manifests itself in the real data we show in Figure\,\ref{fig:maglim_example} such maps for GDR2 for G\,$<20.7$\,mag. The top panel shows the magnitude limits when only requiring G measurements.
We see that in the bulge and the Magellanic Clouds the magnitude limits are brighter than everywhere else. Away from the disk the limit becomes rather noisy. In the middle panel we require that a parallax measurement be available. This makes the bulge limits brighter, and satellite scanning patterns become visible. In the bottom panel we show the same map again (requiring a parallax measurement), but this time we only set the limit from the G-magnitude distribution of a HEALpix if it has more than $10^5$ sources per deg$^2$. In all other HEALpix the limits are set to 20.7\,mag.
\begin{figure}
\includegraphics[width=\linewidth]{gfx/maglim_examples.png}
\caption{G magnitude limits over the sky for HEALpix level 6 in Galactic coordinates (longitude increases to the left) computed from GDR2. The colour indicates the magnitude limits. From top to bottom the maps are for sources that have: a G magnitude (all of GDR2); a parallax measurement; a parallax measurement and more than $10^5$ stars per deg$^2$ -- the limit is set to 20.7\,mag in HEALpix with lower density. While for the top panel the brightest limit is 18.9\,mag, the middle and bottom panels both have one HEALpix outside the range, with 17.6\,mag. The name above each skymap is the column name storing the plotted magnitude; this can be queried from the auxiliary table \texttt{gedr3mock.maglim\_6}.}
\label{fig:maglim_example}
\end{figure}
As can be seen in the upper panel of Figure~\ref{fig:maglim_example}, the mode estimator has two main failure modes: (a) the starcount in a specific HEALpix is low, such that the magnitude distribution gets noisy due to Poisson sampling, and (b) a peak in the magnitude distribution is produced by some localised stellar population in a distant overdensity, e.g.\ red clump stars in the Magellanic Clouds, that is not characteristic of the crowding limit. An easy fix for (a) is to only apply the magnitude limits in dense areas and to set the magnitude limit to 20.7\,mag in all HEALpix that have a low stellar density. This is what we do in the bottom panel of Figure\,\ref{fig:maglim_example} for a density threshold of $10^5$ sources per deg$^2$\footnote{The density threshold might be useful to apply to the \texttt{maglim\_g} panel of Figure\,\ref{fig:maglim_example}. But for the \texttt{maglim\_g\_parallax} map it is obvious that there is not only low-density noise but also real structure related to the scanning law. A comparison to GDR2 would therefore be biased if we used the \texttt{maglim\_g\_parallax\_density\_threshold} map. On the other hand, we would assume that those structures related to the scanning law are suppressed in the Gaia EDR3 \texttt{maglim\_g\_parallax} map.}. To illustrate those failure modes further we plot in Figure\,\ref{fig:mag_distribution} the G magnitude distributions for three different HEALpix at level 6, namely towards Baade's window, the LMC, and a low-density field at $l=20^\circ$ and $b=30^\circ$. The following query exemplifies the data acquisition for Figure\,\ref{fig:mag_distribution}:
\begin{lstlisting}
SELECT COUNT(*) AS ct, ROUND(phot_g_mean_mag,1) AS mag
FROM gaia.dr2light
WHERE source_id BETWEEN 4657847914607935488
  AND 4657988652096290815
-- healpix level 6 pointing on Baade's window
GROUP BY mag
\end{lstlisting}
We can see how the red clump peak (blue points) in the LMC at around G\,$=19$\,mag can yield an incorrect magnitude limit estimate.
The low-density field (red points) is not yet too noisy, so the mode of the distribution is still a good estimator for the magnitude limit, but one can see how Poisson noise in the magnitude distribution could produce modes at brighter magnitudes if the stellar density gets even lower or the HEALpix level increases. In Baade's window (green points) a brighter magnitude limit of about G\,=\,19\,mag is reached, and source counts drop off quickly beyond that. The red clump peak in the luminosity function of Baade's window at G\,$=16$\,mag is clearly visible, but does not bias our magnitude limit estimate in this particular case, since the mode is still at fainter magnitudes.
\begin{figure}
\includegraphics[width=\linewidth]{gfx/mag_distribution_level6.png}
\caption{GDR2 G magnitude distributions in different directions of the sky. The pointings are towards Baade's window, the LMC, and a low-density field at $l=20^\circ$, $b=30^\circ$. Each curve corresponds to one HEALpix at level 6. From these magnitude distributions we approximate the limiting magnitude by taking the mode.}
\label{fig:mag_distribution}
\end{figure}
We provide both variants in the tables, i.e.\ with and without a density threshold applied; the latter has no suffix and the former has \texttt{\_density\_threshold} appended. We also provide magnitude limits for BP (also under the condition that G\,$<20.7$\,mag). For each band the magnitude limit comes in four flavours (and each flavour exists with and without the density threshold applied):
\begin{itemize}
\item (1) G$<20.7$\,mag (applies to all variants)
\item (2) a parallax measurement is available
\item (3) BP and RP measurements are available
\item (4) parallax, BP and RP are available
\end{itemize}
In Table\,\ref{tab:mag_lim} we list the number of sources that are included in the G magnitude limits for both GeDR3mock and GDR2. In total, GeDR3mock has 1,573\,M sources compared to 1,451\,M in GDR2\footnote{There are 241\,M stars in GDR2 that have G\,$>20.7$\,mag, but in this work we usually only use sources with G\,$<20.7$ unless stated otherwise.}. When considering only sources that are brighter than the HEALpix-dependent magnitude limit the numbers are more similar. The reason why the mock catalog has more sources than GDR2 is the density limit of the Gaia instrument of about 1.05\,M sources deg$^{-2}$ \citep{2016A&A...595A...1G}. The highest density area in GeDR3mock has 5.6\,M sources deg$^{-2}$. An illustration of this can be seen in Figure\,\ref{fig:baades_window}, the CMD of Baade's window, where the magnitude limit that additionally requires the existence of colour and parallax measurements is also depicted.
\begin{table}[]
\caption{Number of sources in GeDR3mock and GDR2 for G\,$<20.7$\,mag. Starcounts are shown for stars brighter than the limiting G magnitude given in the online table \texttt{gedr3mock.maglim\_6}. In parentheses the numbers are given for stars that are outside of the magnitude limit but are still brighter than G\,$=20.7$\,mag and fulfill the selection criteria, e.g.
a parallax measurement is required for \texttt{maglim\_g\_parallax}.}
\centering
\begin{tabular}{r|c|c}
magnitude limit & GeDR3mock & GDR2 \\
\hline
column name & \multicolumn{2}{c}{starcounts in million}\\
\hline
no magnitude limit & 1,573 & 1,451\phantom{ (131)} \\
\texttt{maglim\_g} & 1,332 & 1,321 (131)\\
\texttt{maglim\_g\_parallax} & 1,146 & 1,100 (168) \\
\texttt{maglim\_g\_color} & 1,231 & 1,123 (131)\\
\texttt{maglim\_g\_parallax\_color}& 1,111 & 1,012 (158)\\
\hline
\texttt{maglim\_g\_density\_threshold} & 1,361 & 1,358 (94)\\
\end{tabular}
\label{tab:mag_lim}
\end{table}
We generated HEALpix maps of those magnitude limits for HEALpix levels 5, 6, and 7 (nside = 32, 64, and 128, with areas of 3.36, 0.84, and 0.21\,deg$^2$, respectively) using the \texttt{gdr2\_completeness} package\footnote{\url{https://github.com/jan-rybizki/gdr2_completeness}} \citep{2018ascl.soft11018R}. They can be accessed via \texttt{gedr3mock.maglim\_X}, where X is the HEALpix level. Notebooks 3 and 4 of \texttt{gdr2\_completeness} illustrate how to generate those maps. We encourage the user to produce maps for their specific use cases, e.g.\ accounting for quality cuts or requiring the existence of specific measurements such as radial velocity. We did not provide RP magnitude limits because those are mainly governed by the condition that G is brighter than 20.7\,mag. Since RP is usually brighter than G, sources are usually lost because they get too faint in G, not because they get too faint in RP. We also advise caution when using the BP magnitude limit, because in dense areas faint sources can acquire very bright BP (and RP) magnitudes due to flux contamination from neighbouring sources, something that is not modelled in GeDR3mock. BP maps might still be useful when comparing to other data or when modelling the BP and RP flux excess. More details on all bands and a comparison to GeDR3mock magnitude limits can be found in Appendix\,\ref{sec:app_maglim}. Once the real data, Gaia EDR3, comes out we will provide updated magnitude limit maps in the TAP service. An example of how to query all stars in GDR2 that are brighter than the \texttt{maglim\_g} magnitude limit for HEALpix level 6 is given below.
\begin{lstlisting}
SELECT COUNT(*)
FROM gaia.dr2light AS g
JOIN gedr3mock.maglim_6 AS lim
  ON (g.source_id/140737488355328=lim.hpx)
  -- matches catalogs on HEALpix number (level 6)
WHERE phot_g_mean_mag<lim.maglim_g
-- takes about 1 to 2 hours
\end{lstlisting}
A Python package with a more rigorous method providing completeness as a function of magnitude per HEALpix (Boubert \& Everall, submitted) can be found here\footnote{\url{https://github.com/DouglasBoubert/selectionfunctions}}. One drawback of this is that the magnitude limits seem to depend on the authors' all-sky partition into equal-density areas.

\section{Comparison to GDR2}
\label{sec:comparison}
As a first quality assessment and to get an overview of the catalog parameters, we compare GeDR3mock with GDR2. This also serves to illustrate how the catalog can be queried via TAP services using ADQL queries.
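The magnitude-limit cuts used in this comparison are the mode-based estimates of Section\,\ref{sec:maglim}. For reference, a minimal Python sketch of that estimator is given below; it is our own illustration, and the exact binning range and edge-case handling in \texttt{gdr2\_completeness} may differ.
\begin{lstlisting}
import numpy as np

def limiting_magnitude(g_mags, faint=20.7, width=0.1, bright=5.0):
    # Mode of the G-magnitude distribution of one HEALpix,
    # binned in `width`-mag bins (the bright-end bound is arbitrary).
    bins = np.arange(bright, faint + width, width)
    counts, edges = np.histogram(g_mags, bins=bins)
    i = np.argmax(counts)
    return 0.5 * (edges[i] + edges[i + 1])   # bin centre of the mode

def limiting_magnitude_density_threshold(g_mags, area_deg2,
                                         threshold=1e5):
    # Variant with the density threshold: fall back to 20.7 mag in
    # HEALpix with fewer than `threshold` sources per square degree.
    if len(g_mags) / area_deg2 < threshold:
        return 20.7
    return limiting_magnitude(g_mags)
\end{lstlisting}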
\subsection{Sky distribution}
In order to compare the source density over the sky between GeDR3mock and GDR2, we apply the contrast sensitivity and the magnitude limits from the previous section to GeDR3mock:
\begin{lstlisting}
SELECT COUNT(*) AS ct, hpx
FROM gedr3mock.main AS g
JOIN gedr3mock.maglim_6 AS lim
  ON (g.source_id/140737488355328=lim.hpx)
WHERE phot_g_mean_mag<lim.maglim_g_density_threshold
  AND d11y-RANDOM()*100 > 0
  -- samples the visibility probability
GROUP BY hpx
ORDER BY hpx
-- starcounts per hpx are returned
-- takes about 1 to 2 hours (varies with load)
\end{lstlisting}
This returns 1,336\,M stars. The corresponding selection on GDR2 returns 1,358\,M stars. In Figure\,\ref{fig:catalog_comparison} we show the stellar densities in HEALpix level 6 for GeDR3mock and GDR2, and a comparison map at the bottom. GDR2 has more sources towards the poles, but overall the agreement is reasonably good. Globular clusters and the Sagittarius stream are visible, and the Magellanic Clouds show more structure in GDR2. When looking at the comparison map, we see the footprint of the \citet{2006A&A...453..635M} extinction map transitioning into \citet{2003A&A...409..205D} (cf.\ Figure\,1 of \citet{2016ApJ...818..130B}), where there is a discrete jump in the model starcounts (colour getting redder) towards the right in the Galactic plane. The warp is more prominent in the model, and the bulge structure is not well reproduced. The fit to the Magellanic Clouds is poor, owing to the simplistic Gaussian distribution in GeDR3mock.
\begin{figure}
\includegraphics[width=\linewidth]{gfx/comparison_all.png}
\caption{Stellar density in Galactic coordinates in Mollweide projection for GeDR3mock in the top, GDR2 in the middle, and the fractional differences in the bottom panel.}
\label{fig:catalog_comparison}
\end{figure}

\subsection{Color--magnitude diagram (CMD)}
Another insightful test is the CMD comparison. Here we do not apply HEALpix-dependent magnitude limits to either catalog, as those do not change the basic structure of the distributions. The query is:
\begin{lstlisting}
SELECT COUNT(*) AS N,
AVG(phot_bp_rp_excess_factor) AS excess,
ROUND(phot_bp_mean_mag - phot_rp_mean_mag,2) AS color,
ROUND(phot_g_mean_mag,1) AS mag
FROM gaia.dr2light TABLESAMPLE(50)
-- this only uses 50 per cent of the data
WHERE phot_g_mean_mag < 20.7
GROUP BY color, mag
-- this query takes between 1 and 2 hours
\end{lstlisting}
An analogous query is used for GeDR3mock\footnote{For GeDR3mock we add measurement noise to the photometry and also halve the query volume by requiring the \texttt{random\_index} to be lower than 786728660 (as TABLESAMPLE() does not work on views).}. These queries count the stars and average the \texttt{phot\_bp\_rp\_excess\_factor}\footnote{Excess of flux in the BP and RP integrated photometry with respect to the G band. In the absence of nearby sources this value should be close to 1. Large values indicate contamination of BP and RP photometry.} in magnitude bins (the excess factor has not been modelled in GeDR3mock). The data are shown in Figure\,\ref{fig:cmd_comparison}, where the density distribution is given for GeDR3mock and GDR2 in the left and middle panels, respectively. We see that GDR2 lacks sources\footnote{The term 'sources' is used for GDR2 data because not all entries are stars. For GeDR3mock the term 'star' is equivalent to 'source', since all entries are stars.} below the grey dashed line. The line indicates where the number of stars drops sharply when cutting on G$_{\rm BP}<22$\,mag.
This appears to be a limit below which the bulk of the sources (with G$<20.7$\,mag) is lost in GDR2. In the right panel of Figure\,\ref{fig:cmd_comparison} we see that sources which fall below that line have issues with contaminated BP and RP measurements. Similarly, the very blue stars in the GDR2 data have no counterpart in GeDR3mock. Again these stars have high \texttt{phot\_bp\_rp\_excess\_factor}, which is not modelled in GeDR3mock. The other structures in the CMD are fairly well reproduced. With respect to the catalog selection function, there are only 1.6\,M sources in GDR2 (5\,M if including sources with G\,$>20.7$\,mag) with G$_{\rm BP}>22$\,mag, while GeDR3mock has 36\,M.
\begin{figure*}
\includegraphics[width=\linewidth]{gfx/CMD_all.png}
\caption{Colour-magnitude diagram for GeDR3mock in the left and GDR2 in the middle panel, colour-coded by number of sources per CMD bin. The right panel shows again the GDR2 CMD, but this time the average \texttt{phot\_bp\_rp\_excess\_factor} per bin is depicted. The dashed grey line (its functional form is $\mathrm{G}=-0.9 (\mathrm{BP-RP})+23$) indicates a sharp limit in GeDR3mock and GDR2, below which no stars are left if we cut on G$_{\rm BP}<22$\,mag.}
\label{fig:cmd_comparison}
\end{figure*}

\section{Catalog Content \& Limitations}
\label{sec:catalog}
The catalog contains 1,573,457,319 stars. It is hosted at GAVO\footnote{\url{http://dc.g-vo.org/tap}} and can be queried via \texttt{gedr3mock.main}. For example queries see Section\,\ref{sec:example_queries}. A bulk download is also available.\footnote{\url{https://dc.zah.uni-heidelberg.de/gedr3mock/q/download/}}

\subsection{Data model and Catalog Content}
Our catalog, by design, mimics the GDR2 data model, which will be similar to that of Gaia EDR3. Some fields are filled with NULLs rather than being omitted, so that GDR2 ADQL queries do not throw errors. Values like \texttt{phot\_bp\_rp\_excess\_factor} or \texttt{ruwe} are not easy to model because they depend on the actual measurement, but one could train models on the real data to predict those values for the mock catalog, using the method presented in Section~\ref{sec:errormodel} (notebook 8). Entries in GeDR3mock that have no counterpart in the GDR2 data model are explained below:
\begin{itemize}
\item \texttt{phot\_g\_mean\_mag\_error}\,\, For convenience we provide magnitude errors for all photometric bands. These are only good approximations of the flux errors for small values.
\item \texttt{phot\_rvs\_mean\_mag}\,\, Since we have the isochrone models with an approximate RVS band\footnote{G$_\mathrm{RVS}$ transmission curve: \url{https://github.com/jan-rybizki/Galaxia_wrap/blob/master/notebook/isochrone_generation/passband/rvs_gedr3mock.dat}} we also provide the RVS magnitude (simply computed assuming a Vegamag zeropoint), because it is useful for selecting magnitude-complete RVS samples.
\item \texttt{popid}\,\, The population ID from the Besan\c con model (cf.\ Table~\ref{tab:local_mass}; halo = 8 and bulge = 9), with the additions Magellanic Clouds = 10 and open clusters = 11.
\item \texttt{d11y}\,\, The visibility, given in per cent. It can be lower than 100 due to bright sources in the near vicinity (see Section\,\ref{sec:contras_sensitivity}).
\item \texttt{index\_parsec}\,\, An index for joining the main mock catalog to other photometric bands/extinctions in the \texttt{gedr3mock.parsec\_props} table.
\item \texttt{a\_bp\_val}, \texttt{a\_rp\_val}, \texttt{a\_rvs\_val}\,\, These are extinctions in the specified bands, in analogy to \texttt{a\_g\_val} in the G band.
\item \texttt{source\_id}\,\, The most significant bits encode the HEALpix number, as for the Gaia \texttt{source\_id}; the rest of the \texttt{source\_id} is a running number. The \texttt{source\_id} can easily be turned into the HEALpix number for any HEALpix level $n$ smaller than or equal to 12 (level 12 corresponding to nside = 4096) via division:
\begin{equation}
\mathrm{HEALpix}(\mathrm{level}=n) = \text{FLOOR}\left(\frac{\mathtt{source\_id}}{2^{35}\times4^{(12-n)}}\right)
\end{equation}
\end{itemize}
A few additional stellar parameters are not listed above but can be found in Section\,\ref{sec:catalogue_entries}. General information on the catalog and its columns can be inspected here\footnote{\url{http://dc.g-vo.org/tableinfo/gedr3mock.main}}.

\subsection{Limitations}
The underlying Galaxy model is a simple approximation of reality with known shortcomings; see the lower panel of Figure\,\ref{fig:catalog_comparison} and the discussion thereof in Section\,\ref{sec:comparison}. There have been improvements in the thick disk, halo \citep[e.g.][]{2014A&A...569A..13R} and bulge \citep{2012A&A...538A.106R} components of the Milky Way model, but these updates did not build on each other, so we decided to stay with the basic model update from \citet{2014A&A...564A.102C}. The LMC and SMC are modelled only as Gaussian distributions with an inconsistent velocity prescription. We only simulate single stars. The star formation in GeDR3mock is smooth (not clumpy) and independent of the 3D extinction model, so the two do not show the correlations one observes in the real MW. The all-sky 3D extinction map is up-to-date but not perfect, especially where different maps have been joined together.

\subsection{Updates when Gaia EDR3 is released}
We plan to update our mock catalog after the release of Gaia EDR3, foreseen for late 2020. This will contain magnitude limit maps as well as error, nobs, ruwe and contrast sensitivity columns based on Gaia EDR3 data. As some of those already exist, we will add abbreviations indicating that these were derived using Gaia EDR3 data. Updates to GeDR3mock will be announced here\footnote{\url{http://dc.g-vo.org/browse/gedr3mock/q}}.

\subsection{Extension to GDR3 Content}
The ``full'' Gaia DR3, currently planned for late 2021, will include many more data products. To assist the use and analysis of that catalog, we plan to augment GeDR3mock in a follow-up study with:
\begin{itemize}
\item binaries
\item galaxies and quasars
\item models of BP/RP and RVS spectra (if publicly available)
\item chemical abundances using chemical evolution models.
\end{itemize}

\section{Example use cases with ADQL queries}
\label{sec:example_queries}
\subsection{Distance prior}
The user might be interested in producing a distance prior for the GDR2 RVS sample to be used in a Bayesian parameter estimation, similar to the distance estimation in \citet{2018AJ....156...58B} (see also \citet{2018RNAAS...2...51M}).
The following query mimics the GDR2 RVS sample selection and returns the mean distance per HEALpix:
\begin{lstlisting}
SELECT AVG(1000/parallax) AS mean_distance,
ivo_healpix_index(5, ra, dec) AS healpix
FROM gedr3mock.main
WHERE phot_rvs_mean_mag < 12
AND teff_val < 6900 AND teff_val > 3550
-- selection mimicking RVS sample
GROUP BY healpix
-- takes about half an hour
\end{lstlisting}
The function ivo\_healpix\_index(5, ra, dec) shown here computes HEALpix indices based on RA and Dec; for Gaia and related data products this is in general not necessary, because by construction of the \verb|source_id| column one can obtain the HEALpix (in this case, of order 5) somewhat faster by computing ROUND(\texttt{source\_id}/$\left(2^{35}\times 4^{(12-5)}\right)$), but the function might be useful for tables without \verb|source_id|. We cannot use the statement ``WHERE radial\_velocity IS NOT NULL'' because in GeDR3mock all radial velocities are known. Therefore the selection function needs to be approximated. Figure\,\ref{fig:rvs_sample} shows the mean distance per HEALpix, which could be used directly as a prior parametrisation. GeDR3mock returns 7.1\,M sources, which is more than the 5.3\,M that GDR2 has below G$_\mathrm{RVS}=12$ (G$_\mathrm{RVS}$ needs to be approximated using Equations 2 and 3 from \citet{2018A&A...616A...1G}). The reason, of course, is that the effective magnitude limit is brighter in the dense parts of the sky. Cutting on G$_\mathrm{RVS}<12$ is only a first-order approximation. For refinement we recommend producing a custom magnitude limit map for the RVS sample using the \texttt{gdr2\_completeness} package.
\begin{figure}
\includegraphics[width=\linewidth]{gfx/RVS_distances.png}
\caption{Mean distances over the sky in Galactic coordinates in GeDR3mock with G$_\mathrm{RVS}<12$ and $3550<\texttt{teff\_val}<6900$. The colour encodes the mean distance logarithmically. In total this selection returns 7.1\,M sources.}
\label{fig:rvs_sample}
\end{figure}

\subsection{Parallax uncertainty}
Because measured parallaxes can have very large uncertainties, the distribution of measured parallaxes can be quite different from that of the mock (true) parallaxes. We show this for HEALpix 7876 (at level 5), which is a low-density out-of-plane field at $l=20^\circ$, $b=30^\circ$. GDR2 contains 46\,k sources (G\,$<20.7$) and GeDR3mock has 39\,k sources in that HEALpix. Figure\,\ref{fig:parallax_error} shows, from left to right, the inverted parallax vs.\ G magnitude for: GeDR3mock; the same with parallax noise added; and GDR2. In the absence of measurement uncertainty (left panel) we see a bimodal distribution in parallax at G\,=\,20.7: the peak at 1\,kpc consists mainly of lower main-sequence stars, while the one at about 8\,kpc consists mainly of upper main-sequence and turn-off stars. These two sequences merge when the parallax uncertainty is added (the G magnitude error is negligible in this diagram). Similarly, the diagonal line in the top right of the three panels, which corresponds to the red clump, becomes blurred once noise is added. Noise can be added from within ADQL using:
\begin{lstlisting}
SELECT parallax, phot_g_mean_mag,
GAVO_NORMAL_RANDOM(parallax,parallax_error) AS parallax_obs
FROM gedr3mock.main
WHERE source_id BETWEEN 4433793833146253312
  AND 4434356783099674623
-- only a low-density HEALpix of level 5
-- takes a few seconds
\end{lstlisting}
The numbers in the ``WHERE'' statement are $2^{35}\times 4^{(12-5)}\times7876$ and $2^{35}\times 4^{(12-5)}\times7877-1$.
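These bounds can also be computed with a few lines of Python, which may be convenient when building such queries for other HEALpix; the sketch below simply follows the \texttt{source\_id} construction described above:
\begin{lstlisting}
def source_id_range(hpx, level=5):
    # source_id interval covered by HEALpix `hpx` at the given level,
    # based on the construction of the Gaia source_id (the most
    # significant bits encode the level-12 HEALpix).
    step = 2**35 * 4**(12 - level)
    return hpx * step, (hpx + 1) * step - 1

lo, hi = source_id_range(7876, level=5)
# -> (4433793833146253312, 4434356783099674623), the bounds used above
\end{lstlisting}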
The above ADQL query, together with the analogous query for GDR2, produces the data for the plot.
\begin{figure*}
\includegraphics[width=\linewidth]{gfx/Parallax_error.png}
\caption{Inverted parallaxes (in mas) vs. G magnitude for GeDR3mock, GeDR3mock with noise added, and GDR2, from left to right.}
\label{fig:parallax_error}
\end{figure*}

\subsection{CMD in Baade's window - magnitude limit}
Here we look at a CMD in a high-density area, namely Baade's window. This time we add noise to the mock photometry and compare to GDR2 data. The G magnitude limit, when parallax, BP, and RP measurements are required, is 18.9\,mag. We only query in a circle of radius 0.1\,degree to keep the runtime short (the query runs in synchronous mode). GDR2 contains 13\,k sources, whereas GeDR3mock contains 134\,k sources. When applying the magnitude cut, these numbers change to 12\,k and 18\,k, respectively. The GeDR3mock data for Figure\,\ref{fig:baades_window} comes from the following query:
\begin{lstlisting}
SELECT phot_g_mean_mag, phot_bp_mean_mag, phot_rp_mean_mag,
GAVO_NORMAL_RANDOM(phot_g_mean_mag,
  phot_g_mean_mag_error) AS g_obs,
GAVO_NORMAL_RANDOM(phot_bp_mean_mag,
  phot_bp_mean_mag_error) AS bp_obs,
GAVO_NORMAL_RANDOM(phot_rp_mean_mag,
  phot_rp_mean_mag_error) AS rp_obs
FROM gedr3mock.main
WHERE DISTANCE(270.879, -30.022, ra, dec)< 0.1
-- takes a few seconds
\end{lstlisting}
The left (blue) plume contains upper main-sequence and turn-off stars, while the right (red) plume contains giant stars. The overdensity at G\,=\,16\,mag is the red clump (cf.\ Figure\,\ref{fig:mag_distribution}). Both plumes seem to have merged at fainter magnitudes in GDR2, whereas even with noise applied they remain distinct in the mock. Only at fainter magnitudes does the noise become visible, as seen in the spreading in colour.
\begin{figure*}
\includegraphics[width=\linewidth]{gfx/BW_CMD.png}
\caption{CMD of Baade's window (a circle of radius 0.1\,degree) for GeDR3mock, GeDR3mock with noise added, and GDR2 (panels from left to right). The empirically determined G magnitude limit is indicated as a grey line. The density distribution is renormalised above the magnitude limit.}
\label{fig:baades_window}
\end{figure*}

\subsection{Local 50\,pc sample}
The local normalisation, i.e.\ the local stellar mass density, is a common benchmark for Galaxy models. We query the 50\,pc sample using:
\begin{lstlisting}
SELECT initial_mass, current_mass, age, popid,
feh, bp_rp, phot_g_mean_mag, parallax,
GAVO_NORMAL_RANDOM(parallax,parallax_error) AS parallax_obs
FROM gedr3mock.main
WHERE 1/parallax < 0.05
-- takes about 10 minutes
\end{lstlisting}
This returns 49,934 sources. In Table\,\ref{tab:local_mass} we compare our local mass density to Model B of \citet{2014A&A...564A.102C} (their table 7). The mass values agree reasonably well; only the thick disk contributes just about 5\% of the local mass density, compared to their 9\%.
\begin{table}[]
\caption{Contribution of all Galactic components to the local stellar mass density.
We compare to Model B from \citet{2014A&A...564A.102C}.}
\centering
\begin{tabular}{c|c|c|c}
popid & age & GeDR3mock & Model B \\
\hline
\phantom{1} & [Gyr] & \multicolumn{2}{c}{10$^{-3}\times$M$_\odot$\,pc$^{-3}$}\\
\hline
Thin disk 0& 0 - 0.15 & 1.7 & 1.9 \\
\phantom{Thin disk }1 & 0.15 - 1 & 4.9 & 5.0\\
\phantom{Thin disk }2 & 1 - 2 & 3.6 & 4.1\\
\phantom{Thin disk }3 & 2 - 3 & 3.1 & 2.8\\
\phantom{Thin disk }4 & 3 - 5 & 5.4 & 4.9\\
\phantom{Thin disk }5 & 5 - 7 & 5.7 & 5.0\\
\phantom{Thin disk }6 & 7 - 10 & 11.1& 9.3\\
Total thin disk & 0 - 10 & 35.5 & 33.0 \\
White dwarfs & 0 - 12 & 5.0 & 7.1\\
Thick disk 7 & 10 - 12& 1.7 & 2.9\\
\end{tabular}
\label{tab:local_mass}
\end{table}
Figure\,\ref{fig:local_age} shows the age distribution of the local 50\,pc sample. The piecewise-flat but exponentially decreasing SFR (for thin-disk \texttt{popid} 0 to 6) is visible, as is a local overdensity of very young (dynamically cold) stars.
\begin{figure}
\includegraphics[width=\linewidth]{gfx/local_age_distribution.png}
\caption{The age distribution of the 50\,pc sample.}
\label{fig:local_age}
\end{figure}
All stars of the 50\,pc sample are depicted in Figure\,\ref{fig:local_cmd} together with their respective metallicities (colour-coded). The sample contains 4,162 white dwarfs (WDs), for which the current mass is much lower than the initial mass. Extrapolating to a 100\,pc sample, 10,856 of these WDs would be within the completeness range of \citet{2018MNRAS.480.4505J}. They find 8,555 stars, which is only a 20\,\% difference. The stellar distribution in the CMD looks reasonable, but the pre-main sequence might be a bit too pronounced, as it was in GDR2mock \citep{2018PASP..130g4101R}.
\begin{figure}
\includegraphics[width=\linewidth]{gfx/local_cmd.png}
\caption{The colour--absolute magnitude diagram of the 50\,pc sample, with metallicity colour-coded. In total we have 50\,k sources, of which 4\,k are white dwarfs.}
\label{fig:local_cmd}
\end{figure}
We find that 214 (0.4\%) mainly faint sources would scatter out of our 50\,pc sample if we cut on observed parallax. Conversely, 236 sources that are truly outside of 50\,pc would scatter in when cutting on observed parallax. The 10\,\% increase for the in-scattering stars is due to the asymmetric volume at the border of the 50\,pc sphere, given that the stellar density is almost isotropic.

\subsection{Other photometric bands}
\label{sec:parsec_props}
It is possible to query the absolute magnitudes and extinctions in other bands (UBV, 2MASS, SDSS) for specific values of $A_0$ via the \texttt{gedr3mock.parsec\_props} table. An example query for apparent magnitudes in 2MASS bands could be:
\begin{lstlisting}
SELECT phot_g_mean_mag, bp_rp AS color,
tmass_j-5*LOG10(parallax/100)+a0*A0_1_tmass_j AS tmass_j,
tmass_ks-5*LOG10(parallax/100)+a0*A0_1_tmass_ks AS tmass_ks
FROM gedr3mock.main
JOIN gedr3mock.parsec_props USING (index_parsec)
-- crossmatching with the PARSEC isochrone table
WHERE source_id BETWEEN 4433793833146253312
  AND 4434356783099674623
-- takes a few seconds
\end{lstlisting}
Beware that all values in the \texttt{parsec\_props} table are binned according to the procedure outlined in Section\,\ref{sec:isochrones} (they are mapped onto the catalog stars via \texttt{index\_parsec}), which means that the CMD distribution will be somewhat discretised.
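The same computation can be done offline once the joined tables have been downloaded; the following minimal Python sketch mirrors the ADQL expression above (it is an illustration only, with the parallax in mas and the extinction column giving the band extinction for $A_0=1$\,mag):
\begin{lstlisting}
import numpy as np

def apparent_mag(abs_mag, parallax_mas, a0, ext_per_a0):
    # Distance modulus from the (true) parallax in mas,
    # plus the band extinction scaled by A0, as in the query above.
    dist_modulus = -5.0 * np.log10(parallax_mas / 100.0)
    return abs_mag + dist_modulus + a0 * ext_per_a0
\end{lstlisting}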
Also, by using \texttt{A0\_1\_tmass\_j} in the above query we approximate the extinction with the value for a low $A_0$, which means that for large $A_0$ values the extinction will be overestimated, since the absorption does not scale linearly with $A_0$ (the source spectrum gets redder with dust column density).

\section{Summary}
\label{sec:summary}
We have presented the generation and content of a Gaia early DR3 mock stellar catalog (GeDR3mock). With respect to the previous version, GDR2mock \citep{2018PASP..130g4101R}, we have updated the thin-disk model \citep{2014A&A...564A.102C} as well as the 3D extinction map \citep{2019arXiv190502734G} and the isochrones \citep{2017ApJ...835...77M}, which now also include white dwarfs \citep{Bertolami_2016}. We also added a simple model of the Magellanic Clouds and open clusters, the latter including internal rotation. We refined the uncertainty model by training it empirically on GDR2 data and scaling it to the longer time baseline of Gaia EDR3. A main focus of our investigation is modelling the selection function of the Gaia instrument and DPAC filtering. We provide all-sky magnitude limit maps \citep{2018ascl.soft11018R} approximated empirically by the mode of the magnitude distribution along a specific line of sight. A better comparison between model and data is achieved when applying those cuts to the relevant subsets. Similarly, we investigate how many sources in GeDR3mock would suffer from decreased visibility due to contrast sensitivity \citep{2019A&A...621A..86B} and flag those stars in GeDR3mock. In order for the user to be able to create their own synthetic stellar catalog from N-body data or a galaxy model, we provide the routines we used for generating our catalog in the Python package \texttt{galaxia\_wrap} \citep{2019ascl.soft01005R}, as well as the isochrones and the modified \texttt{galaxia} software, and the Jupyter notebooks that illustrate their use\footnote{\url{https://github.com/jan-rybizki/Galaxia_wrap}}. We provided some example \texttt{ADQL} queries to show the many possible catalog interactions and to compare GDR2 to our mock stellar catalog. We plan to add columns/tables that update the magnitude limits and uncertainty estimates once Gaia EDR3 is released. These additions will be announced on the GAVO site of the catalog\footnote{\url{http://dc.g-vo.org/browse/gedr3mock/q}}. In preparation for the ``full'' Gaia DR3, we plan to augment GeDR3mock with data products that will be new in the full Gaia DR3, including binaries, extragalactic objects, and chemical abundances.

\section{Acknowledgements}
This work made use of the following software packages: \texttt{topcat} \citep{2005ASPC..347...29T}, \texttt{HEALpix} \citep{2005ApJ...622..759G}, \texttt{astropy} \citep{2018AJ....156..123A}, \texttt{mwdust} \citep{2016ApJ...818..130B}, \texttt{dustmaps} \citep{2018JOSS....3..695M}, \texttt{amuse} \citep{2009NewA...14..369P}. We estimate the CO$_2$ footprint of this publication as follows: 6 person-months of work (MPIA yearly average per employee: 9 tons) = 4.5 tons. Data access: 3\,years * 1\,kW (conservative estimate of server electricity consumption) * 5\% (GeDR3mock consumed data volume) = 1.3\,MWh, corresponding to 0.6\,tons CO$_2$ with the average German energy mix. I will not travel anywhere by plane for the purpose of promoting this paper. We thank the anonymous referee for their thorough inspection and helpful comments.
We thank the German Astrophysical Virtual Observatory\footnote{\url{http://www.g-vo.org/}} for the publishing platform and for fruitful discussions on the technical aspects of this endeavor. YC acknowledges support from the ERC Consolidator Grant funding scheme (project STARKEY, G.A. n. 615604). This research has made use of the VizieR catalog access tool, CDS, Strasbourg, France (DOI: 10.26093/cds/vizier). The original description of the VizieR service was published in A\&AS 143, 23. This work has made use of data from the European Space Agency (ESA) mission Gaia, processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This work was supported by the MINECO (Spanish Ministry of Economy) through grants ESP2016-80079-C2-1-R and RTI2018-095076-B-C21 (MINECO/FEDER, UE), and MDM-2014-0369 of ICCUB (Unidad de Excelencia `María de Maeztu'). TCG acknowledges support from the Juan de la Cierva - Formaci\'on 2015 grant, MINECO (FEDER/UE). This work was funded by the DLR (German space agency) via grant 50\,QG\,1403. \bibliographystyle{aasjournal}
\section{Introduction} The subject of chirality is ubiquitous in many branches of physics. It plays a prominent role in the understanding of many physical systems, in particular magnetic systems. Recently, the influence of spin chirality on magnon excitations of ordered quantum magnetic materials has attracted considerable attention. Chirality-induced topological magnetic systems exhibit novel properties \cite{alex0,alex1, alex1a, zhh, alex2, alex4, alex4h, mak5,shin,shin1} similar to topological insulators in electronic systems \cite{yu6,yu7, fdm}. In these systems, the propagation of collective excitations (magnons) is similar to spin-orbit-coupling-induced propagation in non-interacting electronic systems. However, non-interacting magnons can propagate without dissipation as they are uncharged quasiparticles. This property is crucial for future dissipationless magnon transport \cite{kru,kru1,kru2, alex1}. In magnetic structures with corner-sharing triangles, chirality is induced by the Dzyaloshinsky-Moriya interaction (DMI) \cite{dm}, and the spin scalar chirality is defined as $\chi_{ijk}={\bf S}_i\cdot ({\bf S}_j\times {\bf S}_k)$, where $(i,j,k)$ label sites on the triangular plaquettes of the lattice and ${\bf S}_i$ is the spin at site $i$. Recently, chirality-induced transport has been observed in the two-dimensional (2D) single-layer kagome magnet Cu(1,3-bdc) \cite{alex6a, alex6}, as well as in a number of 3D chiral ferromagnetic pyrochlores such as Lu$_2$V$_2$O$_7$, Ho$_2$V$_2$O$_7$, and In$_2$Mn$_2$O$_7$ \cite{alex1,alex1a}. For the ferromagnetic pyrochlores, the thermal Hall conductivity changes sign upon reversing the direction of the magnetic field, whereas for the kagome magnet it changes sign as a function of magnetic field or temperature. There has been great interest in these chirality-induced topologically protected magnetic systems \cite{xc,sol,sol1,kkim}. Recently, a chirality-induced magnetic bilayer-skyrmion lattice has been shown to exhibit novel properties completely different from single-layer skyrmions \cite{mak5}. However, the transport properties of chirality-induced bilayer magnons have not been addressed in many chiral magnetic systems. In this Letter, we study chirality-induced magnetic transport in an AA-stacked bilayer honeycomb chiral magnet coupled either ferromagnetically or antiferromagnetically. For ferromagnetically coupled layers, we compute the thermal Hall conductivity $\kappa_{xy}$, which shows a similar trend to that of the single-layer honeycomb ferromagnetic insulator \cite{sol,sol1} with no sign change. In contrast, for antiferromagnetically coupled layers, we observe a sign change in the thermal Hall conductivity and spin Nernst conductivity \cite{alex7} as the magnetic field is reversed. We show that a fully antiferromagnetically coupled bilayer with an ordered N\'eel state exhibits similar properties to those of antiferromagnetically coupled bilayer ferromagnets. These results suggest an experimental search for chirality-induced bilayer honeycomb chiral magnets. In fact, many experimental realizations of bilayer honeycomb-lattice systems have been reported. They include magnetic compounds such as CrBr$_3$ \cite{dav0,dav}, which is a spin-$1/2$ bilayer honeycomb ferromagnetic material. Na$_3$Cu$_2$SbO$_6$ \cite{aat1} and $\beta$-Cu$_2$V$_2$O$_7$ \cite{aat} are spin-$1/2$ Heisenberg antiferromagnetic materials, in each of which the $S = 1/2$ Cu$^{2+}$ ions are situated on the sites of weakly coupled honeycomb-lattice layers.
Besides, the iridates A$_2$IrO$_3$ (A= Na, Li) \cite{aat0,aat00} have magnetically ordered Mott phases in which the Ir$^{4+}$ ions form effective $S=1/2$ moments arrayed on weakly-coupled honeycomb-lattice layers. In these honeycomb-lattice materials, spin chirality or DMI can be induced using many experimental growth techniques and our results can be confirmed directly. \section{Bilayer ferromagnetic insulator} The Hamiltonian for the AA-stacked bilayer honeycomb chiral magnet shown in Fig.~\ref{lattice1} is given by \begin{align} H=H_{FM}^\tau +H_{DM}^\tau +H_{ext.}^\tau+ H_{inter.} \end{align} where \begin{align} H_{FM}^\tau &=-J\sum_{\langle i, j\rangle}{\bf S}_{i}^\tau\cdot{\bf S}_{j}^\tau-J^\prime\sum_{\langle\la i, j\rangle\ra}{\bf S}_{i}^\tau\cdot{\bf S}_{j}^\tau \label{h2}\\ H_{DM}^\tau&=\sum_{\langle \langle i,j\rangle\ra} {\bf D}_{ij}\cdot{\bf S}_{i}^\tau\times{\bf S}_{j}^\tau;~H_{ext.}^\tau =-h\sum_{i}S_{i,z}^\tau,\\H_{inter.}&=-J_{\perp}\sum_{ i}{\bf S}_{i}^T\cdot{\bf S}_{i}^B,\label{h} \end{align} where $\tau$ denotes the top ($T$) and bottom ($B$) layers. ${\bf S}_{i}$ is the spin moment at site $i$, $J>0$ is a nearest-neighbour (NN) ferromagnetic interaction on each layer, $J^\prime>0$ is a next-nearest-neighbour (NNN) ferromagnetic interaction on each layer, and $\bold{D}_{ij}$ is the DMI vector between sites $i$ and $j$, allowed by the NNN triangular plaquettes on the honeycomb lattice, where ${\bf D}_{ij}=\nu_{ij}{\bf D}$, and $\nu_{ij}=\pm 1$. The Zeeman magnetic field is $h$ in units of $g\mu_B$. The interaction $J_{\perp}>0$ represents the ferromagnetic or antiferromagnetic ($J_{\perp}<0$) interlayer coupling. \begin{figure*}[!] \centering \subfigure[\label{lattice1}]{\includegraphics[width=.35\linewidth]{unit_copy_R}} \quad\quad \subfigure[\label{lattice2}]{\includegraphics[width=.35\linewidth]{unit_AFM}} \caption{Color online. $(a)$ Ferromagnetically coupled AA-stacked bilayer honeycomb magnets. For the antiferromagnetically coupled system, the spins on the upper or lower layer point downwards. $(b)$ AA-stacked honeycomb-lattice bilayer N\'eel antiferromagnet.} \end{figure*} \section{Magnon bands} \subsection{Ferromagnetic interlayer coupling} For ferromagnetic interlayer coupling $J_\perp >0$, the Fourier space Hamiltonian of the Holstein-Primakoff \cite{HP} boson operators is given by $ H=\sum_{\bold k}\psi^\dagger_{\bold k}\cdot \mathcal{H}(\bold k)\cdot\psi_{\bold k}$, where $\psi^\dagger_{\bold k}= (a_{\bold{k}A_1}^{\dagger},\thinspace a_{\bold{k} B_1}^{\dagger},a_{\bold{k} A_2}^{\dagger},\thinspace a_{\bold{k} B_2}^{\dagger})$, and \begin{align} \mathcal{H}_{FM}(\bold k)&=\epsilon_{a}\tau_0\otimes\sigma_0+ \tau_0\otimes\sigma_z m_{\bold{k}\phi}-v_s\tau_0\otimes(f_\bold{k}\sigma_+ + f_\bold{k}^*\sigma_{-})\nonumber\\&-v_\perp\tau_x\otimes\sigma_0, \label{honn} \end{align} where $\boldsymbol{\sigma}$ and $\boldsymbol{\tau}$ are pseudo-spin Pauli matrices for the sublattice and layer degrees of freedom, respectively, $\tau_0$ and $\sigma_0$ are identity matrices in each space, and $\sigma_\pm=(\sigma_x\pm i\sigma_y)/2$; $v_0=h+zv_s+z^\prime v_s^\prime$, $v_s(v_s^\prime)(v_D)(v_\perp)= JS(J^\prime S)(DS)(J_\perp S)$, $v_t=\sqrt{v_s^{\prime 2} +v_D^2}$, $z(z^\prime)=3(6)$, and $\epsilon_{a}=v_0+v_\perp-2v_t\sum_{\mu} \cos(\bold{k}\cdot\bold{a}_\mu) \cos\phi$.
Here, $f_\bold{k}= e^{ik_ya/2}\left( 2\cos\sqrt{3}k_xa/2+e^{-3ik_ya/2}\right),$ and $m_{\bold{k}\phi}= 2v_t\sum_{\mu} \sin(\bold{k}\cdot\bold{a}_\mu) \sin\phi$, where $\bold a_1=\sqrt{3}\hat x;~ \bold a_2=(-\sqrt{3}\hat x, {3}\hat y)/2;~ \bold a_3=-(\sqrt{3}\hat x, {3}\hat y)/2$. We have assumed a DMI along the $z$-axis. The phase factor $\phi=\arctan(D/J^\prime)$ is a magnetic flux generated by the DMI on the NNN triangular plaquettes. The eigenvalues are given by \begin{align} \epsilon_{\alpha \pm}({\bf k})=\epsilon_a +(-1)^\alpha v_\perp \pm\sqrt{m_{\bold{k}\phi}^2+|v_sf_\bold{k}|^2}, \label{fm} \end{align} where $\alpha=1,2$ is the layer index. For $v_\perp=0$, the Hamiltonian decouples into two single layers \cite{sol}. The magnon band is shown in Fig.~\ref{bands}(a). At the Dirac points ${\bf K}_\pm= (\pm 4\pi/3\sqrt{3}a, 0)$, the eigenvalues reduce to $\epsilon_{\alpha\pm}({\bf K}_\pm)=\epsilon_{a}+(-1)^\alpha v_\perp\pm |m_\phi|,$ where $m_\phi=3\sqrt{3}v_t\sin\phi$. This is similar to AA-stacked spin-orbit-coupled bilayer graphene \cite{mak55, mak66}. \begin{figure}[ht] \centering \includegraphics[width=3in]{Band_L} \caption{Color online. Magnon band structures of the spin-$1/2$ bilayer honeycomb chiral magnet for $h=v_s=0.5$, $v_D=v_s^\prime=0.05,~v_\perp=1,~\phi=\pi/4$. $(a)$~Ferromagnetic coupling. $(b)$~Antiferromagnetic coupling.} \label{bands} \end{figure} \subsection{Antiferromagnetic interlayer coupling} We now consider antiferromagnetically coupled layers, with the spins on the upper or lower layer pointing downwards, and the interlayer coupling $J_\perp<0$. The top and bottom layers are still ferromagnetic insulators described by $H_{FM}^\tau$ and $H_{DM}^\tau$. To study the magnetic excitations, we perform a $\pi$-rotation about the $S_x$-axis on the top layer, \begin{eqnarray} S_{i}^x&\to S_{i}^x,~ S_{i}^y\to -S_{i}^y,~S_{i}^z\to -S_{i}^z\label{ro3}.\end{eqnarray} This rotation keeps the upper ferromagnetic layer invariant but points the spins in the new $z$-direction, and changes the sign of $H_{ext}^\tau$ and $H_{DM}^\tau$ on the top layer. The Fourier space Hamiltonian is $ H=\frac{1}{2}\sum_{\bold k}\Psi^\dagger_{\bold k} \mathcal{H}_{AFM}(\bold k)\Psi_{\bold k}+\text{const.}\label{ham1}$, where $\Psi_{\bold k}=(\psi_\bold{k}^\dagger,~\psi_{-\bold{k}})$, and \begin{align} \mathcal{H}_{AFM}(\bold k)=\left( \begin{array}{cc} \mathcal A(\bold{k})& \mathcal B\\ \mathcal B& \mathcal A^*(-\bold{k}) \end{array} \right). \label{atn} \end{align} The matrices $\mathcal A(\bold{k})$ and $\mathcal B$ are given by \begin{align} &\mathcal A(\bold{k})=\epsilon_{0}\tau_0\otimes\sigma_0+ \tau_z\otimes\sigma_z m_{\bold{k}\phi}-v_s\tau_0\otimes(f_\bold{k}\sigma_+ + f_\bold{k}^*\sigma_-)\nonumber\\&+h\tau_z\otimes\sigma_0;~\mathcal B= |v_\perp|\tau_x\otimes\sigma_0, \label{mata} \end{align} where $\epsilon_{0}=zv_s+z^\prime v_s^\prime+ |v_\perp|-2v_t\sum_{\mu} \cos(\bold{k}\cdot\bold{a}_\mu) \cos\phi$. Note that $\mathcal A(\bold k)\neq \mathcal A^*(-\bold k)$ due to the DMI. The Hamiltonian is diagonalized by a generalized Bogoliubov transformation (see Supplemental material: \href{http://stacks.iop.org/JPhysCM/28/47LT02/mmedia}{stacks.iop.org/JPhysCM/28/47LT02/mmedia}). The positive eigenvalues are given by \begin{align} \epsilon_{\alpha,\pm }({\bf k})= (-1)^\alpha h+\sqrt{\bigg[\epsilon_0\pm \sqrt{m_{\bold{k}\phi}^2+|v_sf_\bold{k}|^2}\bigg]^2-v_\perp^2}.
\label{afm} \end{align} For $h=0$, the energy bands are doubly degenerate --- one of the major differences between ferromagnetically and antiferromagnetically coupled layers. Also notice that antiferromagnetically coupled layers have a linear dispersion near ${\bf \Gamma}$ (see Fig.~\ref{bands}(b)) as opposed to a quadratic dispersion in the ferromagnetic case. For $v_\perp=0$, Eq.~\ref{afm} decouples and reduces to Eq.~\ref{fm} with opposite magnetic field on each layer. \subsection{Bilayer antiferromagnetic insulator} In the fully antiferromagnetic case, each layer is modeled by the Heisenberg antiferromagnet, with $J,J^\prime, J_\perp<0$. Due to the $J^\prime$ term, the Heisenberg antiferromagnet is frustrated as opposed to the ferromagnetic counterpart. With zero DMI, $H_{DM}^\tau =0$, the system is considered to describe the bilayer honeycomb antiferromagnetic material Bi$_3$Mn$_4$O$_{12}$(NO$_3$) \cite{mak0, mak1, mak2, mak3, mak4}. The ground state phase diagram of this model has been studied extensively \cite{mak0, mak1, mak2, mak3, mak4}. It consists of an ordered N\'eel state for $J^\prime/J<1/6$ and a nonmagnetic state for $J^\prime/J>1/6$ \cite{mak0}. For large values of $J_\perp$, the ground state is an interlayer valence-bond crystal in which the spins from both layers form dimers \cite{mak3}. We are interested in the topological effects of the ordered N\'eel state for $J^\prime/J<1/6$ shown in Fig.~\ref{lattice2}. Such N\'eel state order exists in the bilayer honeycomb iridates A$_2$IrO$_3$ (A= Na, Li) \cite{aat0,aat00}. In this phase, the band structure in the absence of the chiral DMI exhibits Dirac points at ${\bf K}_\pm= (\pm 4\pi/3\sqrt{3}a, 0)$ \cite{mak1, mak2,mak3, mak4}. A nearest-neighbour DMI does not introduce chirality, and an external magnetic field introduces canting up to the saturation field, at which fully polarized ferromagnetic states are recovered. These terms do not open a gap at ${\bf K}_\pm$. As in the ferromagnetic case, chirality is introduced by a next-nearest-neighbour DMI. As we now show, this is very similar to the antiferromagnetically coupled bilayer ferromagnets studied above at zero magnetic field. The only difference is that the NNN coupling is restricted to $J^\prime/J<1/6$. We begin by performing the $\pi$-rotation described above on sublattices $A_1$ and $B_2$ such that the spins point along the new rotated $z$-axis. The SU(2)-invariant NN and NNN interactions on each layer are invariant under this rotation, but the U(1)-invariant out-of-plane DMI changes sign as in the previous case. In the bosonic representation, the Hamiltonian has the same form as Eq.~\ref{atn} with \begin{figure}[ht] \centering \includegraphics[width=3in]{BandA_R} \caption{Color online. Magnon bulk bands of the spin-$1/2$ AA-stacked bilayer N\'eel antiferromagnet along $k_y=0$ at $h=0$. The parameters are $v_s=0.5$ $(a)$. $v_D=v_\perp=0$; $v_s^\prime=0.05$. $(b)$. $v_\perp=v_s^\prime=0.0,~v_D=0.05,~\phi=\pi/2$. $(c)$. $v_\perp=v_s^\prime=0.05,~v_D=0.0,~\phi=0$; $(d)$. $v_D=v_s^\prime=0.05,~v_\perp=0.5,~\phi=\pi/4$.} \label{bandsA} \end{figure} \begin{figure*}[ht] \centering \subfigure[\label{e0}]{\includegraphics[width=.47\linewidth]{Edge_R}} \subfigure[\label{ee0}]{\includegraphics[width=.47\linewidth]{EdgeA_R}} \caption{Color online. Chiral edge states of AA-stacked bilayer honeycomb chiral magnets with $v_s=0.5$. $(a)$ Ferromagnetically coupled layers $(i)~v_D=0.1,~v_s^\prime=0.05, ~h=0.1, ~v_\perp=0$. $(ii)~v_D=0.1,~v_s^\prime=0.0, ~h=0.1, ~v_\perp=0.05$.
$(iii)~v_D=0.1,~v_s^\prime=0.05, ~h=0.1, ~v_\perp=0.375$. $(iv)~v_D=0.0,~v_s^\prime=0.05, ~h=0.1, ~v_\perp=0.5$. $(b)$ Antiferromagnetically coupled layers $(i)~v_D=0.1,~v_s^\prime=0.05,~h=v_\perp=0$. $(ii)~v_D=0.1,~v_s^\prime=0.0, ~h=0.1, ~v_\perp=0.5$. $(iii)~v_D=0.1,~v_s^\prime=0.05, ~h=0.5, ~v_\perp=0.5$. $(iv)~v_D=0.0,~v_s^\prime=0.05, ~h=0.1, ~v_\perp=0.5$.} \end{figure*} \begin{figure}[ht] \centering \subfigure[\label{e1}]{\includegraphics[width=.45\linewidth]{Edgef}} \subfigure[\label{ee1}]{\includegraphics[width=.45\linewidth]{Edgee}} \caption{Color online. Schematics of chiral edge states (green arrows). $(a)$~Ferromagnetically coupled layers. $(b)$ Antiferromagnetically coupled layers. } \end{figure} \begin{align} \mathcal A(\bold k)&=\epsilon_{0}\tau_0\otimes\sigma_0+ m_{\bold{k}\phi}\tau_z\otimes\sigma_0, \\\mathcal B(\bold k)&=v_s\tau_0\otimes(f_\bold{k}\sigma_+ + f_\bold{k}^*\sigma_-)+|v_\perp|\tau_x\otimes\sigma_0, \label{matb} \end{align} where $\epsilon_{0}=zv_s-z^\prime |v_s^\prime|+|v_\perp|+2v_t\sum_{\mu} \cos(\bold{k}\cdot\bold{a}_\mu) \cos\phi$. As before, $\mathcal A(\bold k)\neq \mathcal A^*(-\bold k)$, but $\mathcal B(\bold k)= \mathcal B^*(-\bold k)$. The Hamiltonian is diagonalized as usual. The positive eigenvalues are given by \begin{align} \epsilon_{\alpha\pm}({\bf k})= \sqrt{m_{\bold{k}\phi}^2+\epsilon_0^2-v_\perp^2-|v_s f_\bold{k}|^2\pm 2g_\bold{k}}, \end{align} where $g_\bold{k}=\sqrt{m_{\bold{k}\phi}^2(\epsilon_0^2- |v_s f_\bold{k}|^2) +|v_\perp v_s f_\bold{k}|^2}$. The band structures depicted in Fig.~\ref{bandsA} are very similar to those of the bilayer ferromagnet with antiferromagnetic coupling for $h=0$. As mentioned above, a finite magnetic field introduces spin canting. In this case, both the out-of-plane and in-plane DMIs contribute to the magnon excitations. This scenario is analyzed in the Supplemental material: \href{http://stacks.iop.org/JPhysCM/28/47LT02/mmedia}{stacks.iop.org/JPhysCM/28/47LT02/mmedia}. \begin{figure*}[ht] \centering \subfigure[\label{e5}]{\includegraphics[width=.4\linewidth]{Thermal_FM}} \subfigure[\label{ee5}]{\includegraphics[width=.4\linewidth]{Thermal_FM_T}} \caption{Color online. ~ Thermal Hall conductivity of the ferromagnetically coupled AA-stacked bilayer honeycomb chiral magnet as a function of $(a)$ magnetic field and $(b)$~temperature. The parameters are the same as Fig.~\ref{bands} (a). } \label{thfm} \end{figure*} \begin{figure*}[ht] \centering \subfigure[\label{e6}]{\includegraphics[width=.4\linewidth]{Thermal_AFM}} \subfigure[\label{ee6}]{\includegraphics[width=.4\linewidth]{Thermal_AFM_T}} \caption{Color online. ~ Thermal Hall conductivity of the antiferromagnetically coupled AA-stacked bilayer honeycomb chiral magnet as a function of $(a)$ magnetic field and $(b)$~temperature. The parameters are the same as Fig.~\ref{bands} (b). } \end{figure*} \section{Magnon Transports} \subsection{Magnon edge states} Magnetic transport in topological magnon insulator materials is encoded in the protected chiral edge states of the system induced by the DMI. Figure~\ref{e0} shows the evolution of the protected chiral edge states of the ferromagnetically coupled layers in different parameter regimes. The chiral edge states propagate in the same direction, as depicted schematically in Fig.~\ref{e1}. For the antiferromagnetically coupled case, the same situation is observed with different parameters, as depicted in Fig.~\ref{ee0}.
However, the chiral edge states propagate in opposite directions for the top and bottom layers because of opposite DMI as shown schematically in Fig.~\ref{ee1}. \subsection{Magnon Hall effect} The most interesting property of chiral magnetic systems is the observation of magnon Hall effect \cite{alex1,alex1a, alex6, alex6a}. In magnon Hall effect \cite{alex0}, as well as magnon spin Nernst effect \cite{alex7}, the non-vanishing Berry curvatures induce an effective magnetic field in the system, upon the application of a temperature gradient. The propagation of magnons in the bilayer system is deflected by the chiral DMI. Magnon Hall effect is characterized by a transverse thermal Hall conductivity, given by \cite{alex2} $\kappa_{xy}=-{2k_B^2 T}V^{-1}\sum_{\bold{k}\mu}c_2\left( n_\mu\right)\Omega_\mu(\bold k),$ where $V$ is the volume of the system, $k_B$ is the Boltzmann constant, $T$ is the temperature, $n_\mu\equiv n_B[\epsilon_\mu(\bold k)]=[e^{{\epsilon_\mu(\bold k)}/k_BT}-1]^{-1}$ is the Bose function, $c_2(x)=(1+x)\left( \ln \frac{1+x}{x}\right)^2-(\ln x)^2-2\text{Li}_2(-x),$ and $\text{Li}_2(x)$ is a dilogarithm. Magnon spin Nernst conductivity has a similar definition \cite{alex7} $\alpha_{xy}^s={k_B}V^{-1}\sum_{\bold{k}\mu}c_1\left( n_\mu\right)\Omega_\mu(\bold k),$ where $c_1(x)=(1+x)\ln(1+x)-x\ln x$. The chirality-induced Berry curvature is defined as \begin{align} {\Omega}_\mu(\bold k)=-2\sum_{\mu\neq \mu^\prime}\frac{\text{Im}[ \braket{{\psi}_{\bold{k}\mu}|v_x|{\psi}_{\bold{k}\mu^\prime}}\braket{{\psi}_{\bold{k}\mu^\prime}|v_y|{\psi}_{\bold{k}\mu}}]}{[\epsilon_{\bold{k}\mu}-\epsilon_{\bold{k}\mu^\prime}]^2}, \label{chern2} \end{align} where $\ket{\psi_{\bold{k}\mu}}$ are the eigenstates of the Hamiltonian and $\mu$ labels the bands; $v_{x,y}=\partial \mathcal{H}(\bold k)/\partial k_{x,y}$ defines the velocity operators. Figures~\ref{e5} and \ref{ee5} show the dependence of thermal Hall conductivity on the magnetic field and the temperature for the ferromagnetically coupled layers. As the temperature approaches zero, $\kappa_{xy}$ vanishes due to lack of thermal excitations, but it never changes sign as the temperature increases or the magnetic field changes sign. This is what is observed theoretically in the single layer honeycomb chiral ferromagnet \cite{sol1}. However, for antiferromagnetically coupled layers shown in Figs.~\ref{e6} and \ref{ee6}, we see that $\kappa_{xy}$ changes sign as the magnetic field is reversed and vanishes at zero field. The sign change in $\kappa_{xy}$ is encoded in the magnon bulk bands, the Berry curvatures, and the propagation of the chiral edge states. The sign change in $\kappa_{xy}$ is very similar to what was observed on the pyrochlore chiral magnets upon reversing the direction of the applied magnetic field \cite{alex1, alex2}. Due to the Berry curvature, $\alpha_{xy}^s$ shows similar trends (not shown). We also observe that for the chirality-proximity effect, where only one layer contains a chiral DMI, topological effects are induced in the bilayer system and thermal conductivity $\kappa_{xy}$ is suppressed (see the Supplemental material: \href{http://stacks.iop.org/JPhysCM/28/47LT02/mmedia}{stacks.iop.org/JPhysCM/28/47LT02/mmedia}). \section{Conclusion} We have studied chirality-induced magnon transport in AA-stacked bilayer honeycomb chiral magnets. We observe remarkable distinctive features for ferromagnetic and antiferromagnetic couplings. 
In particular, the band structure and the chiral edge states have different topological properties. As a result, the thermal Hall and spin Nernst conductivities show a sign change for antiferromagnetic coupling in contrast to ferromagnetic coupling. As far as we know, chirality-induced transport and the thermal Hall effect still await experimental observation on the honeycomb lattice. As mentioned above, there are many accessible AA-stacked bilayer honeycomb quantum magnets in which chirality can be induced and these theoretical results can be confirmed. Experiments can also probe the observed magnon edge states, by noticing that spin-$1/2$ quantum magnets map to hardcore bosons. Thus, the magnon edge states correspond to bosonic edge states, which can be studied experimentally in ultracold atoms on optical lattices, similar to the realization of the Haldane model \cite{jot}. Our results can also be applied to magnon spintronics in chiral bilayer quantum magnetic systems. \section*{Acknowledgments} Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation.
\section{Introduction} \subsection{Background} Autonomous vehicles have been gaining popularity in recent times and have also been successfully deployed in the field. Waymo, for instance, has launched its self-driving ride-sharing service, which is the first to provide a fully self-driving experience to the public. While companies like Tesla and comma.ai provide some level of autonomy, they haven't been able to ship a fully autonomous vehicle to the market as of now. There are several components that make up an autonomous driving system. Primarily, these can be broken down into four different modules: a perception module, a decision-making module, a control module, and an actuator-mechanism module~\cite{wang2019lane}. Out of these modules, this work deals with the decision-making module. Decision-making under a wide variety of conditions is a hard task to generalize. Hence, it is necessary that the decision maker is robust to various conditions and can generalize or learn from its experience. As such, Reinforcement Learning (RL) comes in handy, as several breakthroughs have been made in recent years in the field of RL. Most prominently, Deep RL models have been developed that have achieved super-human performance in games like Go and Poker~\cite{silver2017mastering, rebel}. In \cite{rebel}, the authors used self-play RL with search to achieve superhuman performance in heads-up no-limit Texas hold'em poker using less domain knowledge than any existing poker AI. Advances like these have opened a wide range of potential domains where RL can be used to achieve superhuman performance. Autonomous driving also has several challenges, which currently do not have an exact solution. For instance, Waymo experienced ten instances (out of a total of 16 simulations) of ``same direction sideswipe'', which involved collisions during lane changing or lane merging maneuvers~\cite{waymo}. \subsection{Related Work} The methods to solve autonomous driving problems can be divided into two categories, i.e., rule-based and learning-based algorithms. Although rule-based techniques have had some success in the past, learning-based approaches have also shown their effectiveness in recent years.\par Many conventional solutions are based on explicitly written rules and rely on state machines to shift between specified decision behaviors. For example, the ``Boss'' vehicle developed by CMU~\cite{pomerleau1989alvinn} follows a rule-based approach, and a team of researchers at Stanford~\cite{montemerlo2008junior} used reward designs to determine trajectories. However, reliable decision-making carries a high amount of uncertainty in a rule-based approach. Learning-based techniques, as a key AI technology, can deliver more advanced and safe decision-making algorithms for autonomous driving. Recently, NVIDIA~\cite{bojarski2016end} researchers trained a deep convolutional neural network (CNN) to map images from the camera directly to steering control. The trained model was able to handle the task of lane change and driving on a gravel road. In addition to supervised learning, reinforcement learning results have significantly improved over the years. Wolf et al.~\cite{wolf2017learning} have presented a method using DQN to drive a car in a simulated environment. Only 5 actions were defined, each corresponding to a different steering angle, along with training images of size 48$\times$27.
The reward function was calculated by taking into account the distance from the lane center as well as certain auxiliary information (such as the angle error between the vehicle and the center line). Hoel et al.~\cite{hoel2018automated} used DQN to solve lane changes along with vehicle speed control. They used a one-dimensional vector to encode many components, including speed, surrounding vehicles, and adjacent lanes, instead of using front-facing images. Wang et al.~\cite{wang2019lane} used the safety rate, calculated based on collision frequency, to measure the quality of their model, which provided a clear picture of the simulation. The results show that the models which consider the speed along with other factors usually outperformed the models that did not consider the average speed of the vehicle. \subsection{Overview} This report is an overview of the implementation of Deep Q-Learning for the lane change decision-making problem with rule-based constraints. This report is structured as follows. Sec.~II discusses the methodology associated with the RL algorithm along with the rule-based constraint formulation. The implementation and the simulation setup are discussed in Sec.~III, and the results are discussed in Sec.~IV. Finally, the discussion and conclusion are given in Sec.~V. \section{Methodology} \subsection{Markov Decision Process} A Markov decision process (MDP) is a mathematical framework for describing decision-making in circumstances where the outcome is partly random and partly within the control of a decision maker. An MDP's policy satisfies the Markov property, which indicates that the conditional probability distribution of a random process's future state depends solely on the current state when the current state and all previous states are supplied~\cite{ref1}.\par An MDP is a 4-tuple $M = \langle S, A, P_{sa}, R\rangle$~\cite{ref1}, where \begin{enumerate} \item $S$ represents the set of states, with $s_t \in S$ the state at time step $t$. \item $A$ represents the set of actions, with $a_t \in A$ the action at time step $t$. \item $P_{sa}$ is the probability that performing action $a \in A$ in the current state $s \in S$ will lead to a given next state. \item $R$ is the reward function. \end{enumerate} At each time step $t = 0, 1, 2, \dots,$ the agent interacts with the environment: it observes the current state $s_t \in S$ of the environment, then chooses and executes a feasible action $a_t \in A$ depending on the state. Following the completion of the action in the environment, the agent receives a numerical reward $r_t \in R$ and the next state $s_{t+1}$.\par The purpose of reinforcement learning is to learn an optimal policy that maps from environmental states to agent behaviors by maximizing the cumulative reward~\cite{ref1}. The policy at time step $t$, denoted by $\pi^t$, relates environmental states to probabilities of choosing each potential action, where $\pi^t(a|s)$ is the probability that $a_t = a$ if $s_t = s$. Reinforcement learning normally employs a way of maximizing the total anticipated return $G$, which is a cumulative sum of immediate rewards $r$ received over the long run, to arrive at the best policy. $G_t$ is defined at time step $t$ as \begin{equation} G_t=\sum_{k=0}^{\infty}\gamma^k r_{t+k} \end{equation} where $\gamma \in (0,1] $ is the discount factor. \subsection{State Space} The simulator provides information on the location and speed of the ego car as well as other vehicles.
It comprises the $x, y$ position of the cars in map coordinates, the $s, d$ position in Frenet coordinates, the self-driving car's yaw angle in the map and speed in MPH, and the $x, y$ speed in m/s of the other cars. In this project, the $s,d$ positions of vehicles in Frenet coordinates and the speed of the vehicle are used to represent the states of the vehicles. A $45 \times 3$ matrix is used to represent the state space, which corresponds to the entire traffic situation within the range of 60 meters in front of and 30 meters behind the ego car, as shown in Fig.~\ref{fig_2}. Each car is approximately 5.5 meters in length and each row in the matrix spans 2 meters, so one car can occupy 4 cells in extreme cases. Hence, we fill the 4 cells corresponding to an individual car with the car's normalized speed. The speed is normalized with respect to the maximum and minimum speeds of the vehicles in the frame. Here, the normalized speed of the ego car is positive while those of the other cars are negative. Cells corresponding to locations without any cars are filled with 1s. \begin{figure}[!h] \centering \includegraphics[width=3.4in]{staterepresentation.png} \caption{State Representation} \label{fig_2} \end{figure} \subsection{Action Space} The RL agent (ego car) has three actions to choose from -- go to the left lane, stay in the current lane, or go to the right lane. All the other controls are handled by the low-level controller. Table~\ref{tab:actions} shows the individual actions available to the agent. Actions are enumerated from 0 to 2 for each of the cases. \begin{table}[!h] \caption{Action Space for the RL Agent\label{tab:actions}} \centering \begin{tabular}{|c|c|} \hline Action & Description\\ \hline $0$ & stay in the current lane\\ \hline $1$ & go to the right lane\\ \hline $2$ & go to the left lane\\ \hline \end{tabular} \end{table} \subsection{Reward} The self-driving car in this project is only allowed to drive on the right 3 lanes of the highway. In the case of an illegal lane change, i.e., when action $2$ (or $1$) is selected while the car is on the left-most (or right-most) lane, the car is forced to stay in the same lane with a negative reward of $r_{ch1}=-5$. A negative reward of $r_{co}=-10$ is assigned when a collision happens, in order to discourage collisions. A penalty of $r_{ch2}=-3$ is given to the agent when it decides to change lanes even when there is no car in front of it (invalid lane-change case). On top of changing lanes only when required, the ego car should also drive as fast as possible (within the speed limit). To enforce this, the following reward is assigned: \begin{equation} r_v=\lambda(v-v_{ref}) \end{equation} where $v$ represents the ego car's average speed, $v_{ref}$ the reference speed (25 MPH), and $\lambda$ a normalizing coefficient equal to $0.04$. Our reward structure within one decision period is \begin{equation} r=\left \{ \begin{matrix} r_{co} && \text{collision happens}\\ r_{ch1} && \text{illegal lane change} \\ r_{ch2} && \text{invalid lane change} \\ r_{v} + r_{ch3} && \text{legal lane change}\\ r_{v} && \text{normal drive}\\ \end{matrix} \right. \end{equation} where a lane change is counted as legal when it happens without a collision. \subsection{Rule-Based Constraints} We apply rule-based constraints based on the planned trajectory of the ego car and the expected trajectories of the other cars to ensure the absolute safety of the lane change behavior.
The low-level controller can anticipate the trajectories of the ego car and the adjacent cars in the desired lane depending on the action determined by the high-level decision maker. The trajectories of adjacent cars are calculated assuming they retain their present speed and stay in the current lane. The choice made by the high-level decision maker is potentially harmful if the distance between the anticipated trajectory of the ego car and that of surrounding cars is smaller than the specified threshold value. Such an action gets rejected by the low-level controller, and the car continues on its current trajectory. \subsection{Deep Q Network} \label{dqn} Deep Q-Learning is used to determine the optimal action for a given state. Q-Learning is an off-policy TD reinforcement learning algorithm. It evaluates a policy $\pi$ using the state-action value function $Q^{\pi}(s, a)$, which is defined as: \begin{equation} Q^{\pi} (s, a) = \mathbb{E}_{\pi} \left[ \sum_{k=0}^{\infty} \gamma^{k}\;r_{t+k} \bigg| s_t = s, a_t = a \right] \end{equation} The Q-learning algorithm tries to find the optimal state-action value function: \begin{equation} Q^*(s, a) = \max\limits_{\pi} Q^\pi (s, a) \end{equation} $Q^*$ corresponds to the optimal policy $\pi^*$. This value function $Q^*$ follows the Bellman optimality equation: \begin{equation} Q^* (s, a) = \mathbb{E}\left[r + \gamma \max\limits_{a' \in A} Q^* (s', a') \bigg|s, a \right] \end{equation} Using the optimal value function $Q^*$, the optimal policy can be determined by finding actions that maximize the value function at each state~\cite{wang2019lane}. Finding a Q-function is simple enough for a discrete and finite problem. However, for problems like ours with a high-dimensional and continuous state space, this method becomes computationally expensive and intractable. A Deep Q-Network, as shown in Figure~\ref{fig_a}, is used to approximate the value function. The following loss function is minimized using the Adam optimizer~\cite{wang2019lane}: \begin{equation} L_i(\theta_i) = \mathbb{E} \left[(r+\gamma\max\limits_{a'} Q(s', a'; \theta_i^-) - Q(s, a; \theta_i))^2 \right], \end{equation} where $\theta_i$ are the online network parameters and $\theta_i^-$ are the parameters of a periodically updated target network. The input to the DQN is the state matrix along with a 3D vector, which is concatenated to the network after the convolution layers. The first element of the vector corresponds to the normalized speed of the RL agent, while the second and third elements are either 1 (True) or 0 (False) depending on whether there is a lane to the left or right, respectively. \begin{figure}[!h] \centering \includegraphics[width=2.4in]{ann_architecture.png} \caption{Deep Q-Network Architecture} \label{fig_a} \end{figure} \section{Implementation and Simulation} \label{simulation} \subsection{System Requirements} For our project, we used Ubuntu with 16 GB RAM, an Intel i5 processor, and a 4 GB GPU. The following software installations are required for the project: \begin{enumerate} \item{CMake 3.22.0} \item{Python 3.6} \item{VirtualBox 6} \item{Ubuntu.iso} \item{CUDA 11.1.1} \item{TensorFlow-GPU} \item{Keras} \end{enumerate} Also, a dedicated GPU was used for better training performance.
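To make the architecture of Fig.~\ref{fig_a} (Sec.~\ref{dqn}) more concrete, the following is a minimal sketch, not taken from the original repository, of how such a Q-network could be written with the Keras/TensorFlow stack listed above; the filter counts, dense-layer size, and learning rate are illustrative assumptions rather than the values used in the actual implementation.
\begin{lstlisting}
# Illustrative Q-network: 45x3 traffic grid + 3-element auxiliary vector -> 3 Q-values
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_q_network(n_actions=3):
    grid_in = layers.Input(shape=(45, 3, 1), name="state_matrix")  # traffic grid
    aux_in = layers.Input(shape=(3,), name="aux_vector")           # [ego speed, left lane?, right lane?]

    x = layers.Conv2D(16, (3, 3), padding="same", activation="relu")(grid_in)
    x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Concatenate()([x, aux_in])       # auxiliary vector joins after the conv layers
    x = layers.Dense(64, activation="relu")(x)
    q_values = layers.Dense(n_actions, activation="linear")(x)     # one Q-value per action

    model = models.Model(inputs=[grid_in, aux_in], outputs=q_values)
    # the squared Bellman error above reduces to an MSE on the bootstrapped targets
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4), loss="mse")
    return model
\end{lstlisting}
In a full DQN training loop, such a network would be paired with an experience replay buffer and a separate target network holding the parameters $\theta_i^-$, which are copied from the online network at regular intervals.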
\subsection{Implementation} \begin{enumerate} \item{If not on Ubuntu, install VirtualBox version 6 or above on your system} \item{Download Ubuntu desktop version 16.04 or above} \item{Now designate sufficient virtual memory and processing power to the virtual machine for Ubuntu to run smoothly} \item{Import the disk image of Ubuntu into VirtualBox} \item{git clone \url{https://github.com/DRL-CASIA/Autonomous-Driving.git}} \item{Go to the folder ``term3-sim-linux''} \item{Run sudo chmod u+x term3-sim (for the 64-bit binary)} \item{Ensure that CMake is greater than 3.5, make is greater than or equal to 4.1, and gcc/g++ is greater than or equal to 5.4} \item{Enter the CarND-test folder and run install-ubuntu.sh to install dependencies (bash install-ubuntu.sh)} \item{Stay in the CarND-test folder and compile: mkdir build, cd build, cmake ..} \item{Modify the last line in environment.yaml to your conda installation location, and run conda env create -f environment.yaml to create a virtual environment} \item{Install the PyCharm IDE} \end{enumerate} \subsection{Simulator} The simulator used in this project was developed by Udacity~\cite{udacity}. The simulation setup consists of a 3-lane highway as shown in Fig.~\ref{fig_1}. The vehicle with the green trajectory marker is the RL agent. The simulator provides the localization and speed of all the cars in the frame. The speed limit is 50 MPH, and the goal of the RL agent is to maintain the speed limit. The simulator also provides feedback related to the performance of the car in terms of acceleration (normal, tangential, and resultant) and jerk. The requirements for the car are such that it should not experience acceleration of over $10 m/s^2$ or jerk greater than $10 m/s^3$. The total distance of the track is 6946 meters. \begin{figure}[!ht] \centering \includegraphics[width=2.5in]{simulator.png} \caption{Udacity Simulator} \label{fig_1} \end{figure} \subsection{Training Details} The Deep Q-Network model as discussed in Sec.~\ref{dqn} is trained for 100 episodes. The input to the network consists of the state matrix that encodes the traffic condition. The outputs of the network are the three state-action values (or Q-values), one for each action. The action corresponding to the maximum state-action value is picked. During training, the agent picks an action either randomly or following the greedy policy. This phenomenon, known as the exploration-exploitation trade-off, is handled by implementing an $\epsilon$-greedy policy. With the $\epsilon$-greedy policy, the agent selects a random action with probability $\epsilon$ and the greedy action with probability $1-\epsilon$. A decaying $\epsilon$ is used with $\epsilon_{0} = 1$, $\lambda_{decay} = 0.99985$, and $\epsilon_{min} = 0.03$. Training steps: \begin{enumerate} \item{Download the updated code from GitHub from \href{https://github.com/ghimiremukesh/Autonomous-Driving/tree/master/decision-making-CarND}{\textcolor{blue}{here}}}\footnote{\href{https://github.com/ghimiremukesh/Autonomous-Driving}{https://github.com/ghimiremukesh/Autonomous-Driving}}. \item{Check the port and host in the code and make sure the port is free} \item{Run the train.py file, following which the simulator opens up} \item{Select the simulation window and graphics quality and hit start} \end{enumerate} \subsection{Simulation Environment} There are three aspects to our simulation experiment's implementation. The simulator is the initial component, which generates environmental data and receives a predetermined path.
The second component is the controller and planner, which is in charge of speed control and course planning. The DQN algorithm, which is in charge of high-level lane change decision-making, is the third component.\par The goal of this study is to use deep reinforcement learning to make lateral lane change decisions. The low-level controller includes a rule-based speed controller and a path planner, but not the data processing. A rule-based technique is used for longitudinal speed control, while spline interpolation is used for path planning based on the specified waypoints and prospective target spots matching the lane change decision outcome. Simultaneously, the controller serves as a lower-level modifier, allowing higher-level decisions to be revised.\par When training is launched, the simulator loads the environment and waits for a control action from the RL agent while following the low-level controller. At the beginning of training, the car always starts in the middle lane. The low-level rule-based controller maintains the motion of the car in a straight line while avoiding forward collisions. The action picked by the RL agent (either based on the policy or randomly) triggers a lane change, which is communicated to the simulator. The simulator then executes the lane change based on the action it receives. An instance of the training is shown in Fig.~\ref{train_1}. The last action executed by the agent is to go to the right lane. This action is communicated to the simulator and the car goes to the right lane as seen in Fig.~\ref{train_2}. A few time steps of training are made available \href{https://drive.google.com/drive/u/3/folders/1e9CaX4bE08x__5hfRL_mxWcqufk7Tjcj}{\textcolor{blue}{here}}. \begin{figure}[!h] \centering \includegraphics[width=2.3in]{train_1.png} \caption{Simulation During Training and the RL Agent's Action} \label{train_1} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=2.3in]{train_2.png} \caption{Agent changes lane after an action is executed} \label{train_2} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=2.3in]{menu_1.png} \caption{Initial Menu Selection in the Simulator} \label{fig:menu_1} \end{figure} One of the initial challenges with the simulator was that the simulator and the terminal had to restart at the end of every episode. It required navigating through the simulator menus as shown in Figures~\ref{fig:menu_1} and~\ref{fig:menu_2}. It was not feasible to physically be present to select the options when training for an extended number of episodes. We use the PyAutoGUI package to automate this process. We found that closing any extra window other than the IDE was smooth in terms of the package's performance. Furthermore, having a dual monitor setup would sometimes lead to the simulator opening at random coordinates on the screen. Hence, we use a single monitor for easier execution. The package can automate mouse clicks and keyboard strokes. For mouse clicks, we recorded the coordinates of the radio buttons that were to be clicked and passed them to PyAutoGUI for execution. \begin{figure}[!h] \centering \includegraphics[width=2.3in]{menu_2.png} \caption{Final Menu Selection in the Simulator} \label{fig:menu_2} \end{figure} \section{Results} The simulator operates in real time, and it takes around 6 minutes for the ego vehicle to complete a lap in the simulation environment, which is viewed as an episode in our training process. The training program is made up of 100 episodes.
When one episode is over, we restart the simulator for the next episode. Figure~\ref{result} shows the frequency of lane changes during training and testing. The figure only shows the results for the first 10 episodes of training and testing. Lane changing fully depends on the traffic condition at the given time-step. From Fig.~\ref{result}, we see that during training, the agent changes lanes quite often, on average about 64.1 times. After training for 100 episodes and testing the agent for 10 episodes, we see that the frequency of the lane changes drops significantly, with an average of 10 lane changes per episode. Furthermore, as the training progresses, we see a gradual decrease in the lane change trend. However, due to exploration, we do see erratic behavior. The results after training show that the agent has learned to only change lanes whenever necessary. Comparing with the results obtained in the reference paper (cf.\ Fig.~\ref{result_paper}), we do not see such a decrease in the lane changing frequency during their training. They do not compare the results from training and testing. One possible theory could be that their exploration rate was low and their policy was to always pick the greedy action. We compare the average speed and average lane change times of the trained rule-based DQN agent to those of the DQN-based policy. We run the agent through the simulation environment and then compute its average speed, average number of lane changes, and safety rate. The safety rate is defined as the ratio of test episodes without crashes to total test episodes. The results are given in Table~\ref{results}. Here, the rule-based DQN method has a higher safety rate and a lower number of lane changes ($0.8$ and $8.80$, respectively) than the traditional DQN method ($0.2$ and $36.20$, respectively). This implies that our technique results in a more efficient and safer strategy than the others. \begin{table}[!htb] \centering \caption{Results} \begin{tabular}{|c|c|c|c|} \hline Method & Average speed (MPH) & Avg.\ lane changes & Safety rate\\ \hline DQN-based policy & 46 & 36.20 & 0.2\\ \hline rule-based DQN policy& 47 & 8.80 & 0.8 \\ \hline \end{tabular} \label{results} \end{table} \begin{figure}[!htb] \centering \includegraphics[width=3.15in]{Lane_plot.png} \caption{Frequency of Lane Changes} \label{result} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=3.15in]{result.png} \caption{Results from the reference paper (Lane changes during training)} \label{result_paper} \end{figure} \section{Discussion and Conclusion} \label{Discussion} The aim of this project was to replicate the results obtained in the referenced paper while understanding the concepts of Deep Q-Learning and its application to Reinforcement Learning in Autonomous Driving. Initially, as with most programming tasks, we faced several difficulties during compilation of the source code. Since the project was open-sourced, we were able to easily get hands-on with the source code; however, setting up the right environment was hard, as there was little to no documentation on the requirements. Most of our time was invested in setting up the training/testing environment and reviewing the existing code. Furthermore, gathering resources for training was an equally hard task. As we know, training a deep neural network is a computationally expensive task, and especially in the context of reinforcement learning, it requires several iterations of running this expensive training task.
As our application required running a simulator, training on a ``command-line only'' cluster was not possible. Hence, our only option was using one GeForce GTX 1080 GPU. We compared the results obtained from our training with the results shown in the referenced paper. Due to resource constraints, we did not compare the results with other Reinforcement Learning frameworks, which is the main idea of the referenced paper for this project. The results that we obtained resembled the results mentioned in the paper. However, on several iterations, we noticed that though the ego vehicle caused a ``minor'' collision, it resulted in a series of collisions behind the ego vehicle. In a real-life scenario, this is a catastrophic accident which must be highly penalized. Modifying the reward function could enable further improvements in this method. One possible future work could be conducting an ablation study on how formulating different reward functions changes the performance of the agent.
\section{GRAPHICAL CAUSAL MODELS}\label{sec:graphical_models} In graphical causal models, the causal relations among random variables are described via a directed acyclic graph (DAG) $G$, where the expression $X_i \rightarrow X_j$ means that $X_i$ influences $X_j$ 'directly' in the sense that intervening on $X_i$ changes the distribution of $X_j$ if all other nodes are adjusted to fixed values. If no hidden variable $U\notin{\bf X}$ exists that causes more than one variable in ${\bf X}$, then the set ${\bf X}$ is said to be {\it causally sufficient} \citep{spirtes2010introduction}. The crucial postulate that links statistical observations with causal semantics is the {\it causal Markov condition} \citep{Spirtes1993,Pearl2000}, stating that each node $X_n$ is conditionally independent (CI) of its non-descendants given its parents $PA(X_n)$ w.r.t.\ the graph $G$. Then, the probability mass function of the joint probability distribution factorises into \begin{align} p(x_1,\dots x_N) = \prod_{n=1}^N p(x_n \mid pa(x_n)) \;, \end{align} where $p(x_n \mid pa(x_n))$ are often called {\it Markov kernels} \citep{Lauritzen}. This entails further CIs described by the graphical criterion of {\it d-separation} \citep{Pearl2000}. The Markov condition is a necessary condition for a DAG being causal. To test the corresponding CIs is a first sanity check for a causal hypothesis. More assumptions are required to infer causal structure from observational data. One common assumption is {\it faithfulness}: a distribution is faithful to a DAG $G$ if a CI in the data implies d-separation in the graph. Inferring the entire causal DAG from passive observations, or {\it causal graph discovery}, is, nevertheless, an ambitious task \citep{Spirtes1993, peters2017elements}. We, therefore, focus on the weaker task of inferring the presence or absence of certain causal links. \section{DATASETS COMING FROM DIFFERENT JOINT DISTRIBUTIONS}\label{app:different_contexts} Our approach implicitly assumes that all datasets are taken from the same joint distribution. This assumption deserves justification. Suppose, for instance, we are interested in statistical relations between variables $X_1,\dots, X_N$ describing different health conditions of human subjects. Assume we are given $L$ datasets containing different subsets of variables (e.g. bivariate statistics), but the datasets are from different countries. Accordingly, we should not assume a common joint distribution $X_1,\dots, X_N$. Instead, we may introduce an additional variable $C$, and a dataset from country $C=c$ containing variables $X_i,X_j$ then provides only information about $E[f(X_i,X_j)|C=c]$. We would then infer a joint distribution of $C,X_1,\dots,X_N$ via MAXENT, given the conditional expectations. \section{MAXIMUM ENTROPY}\label{sec:maxent} The maximum entropy (MAXENT) principle \citep{jaynes1957information} is a framework to find a \enquote{good guess} for the distribution of a system if only a set of expectations for some feature functions $f$ is given. The MAXENT distribution is the solution to the optimisation problem \begin{align}\label{eq:maxent_op_eq} &\max_{p} H_p({\bf X}) \notag\\ \qquad \text{s.t.: }\; &\Exp{p}{f} = \tilde{f} \; ,\quad \sum_{\bf x} p({\bf x}) = 1 \; , \end{align} for the Shannon entropy $H_p({\bf X})=-\sum_{\bf x} p({\bf x})\log p({\bf x})$. Often the statistical moments $f_k(x)=x^k$ for $k\in\mathbb{N}$ are used. 
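As a small, self-contained illustration of \cref{eq:maxent_op_eq} (the variable support and the target mean below are made-up illustrative choices, not values from any of our examples), the following sketch computes the MAXENT distribution of a single die-valued variable with a prescribed first moment by numerically minimising the dual objective $\log Z(\lambda)-\lambda\tilde{f}$:
\begin{lstlisting}
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

xs = np.arange(1, 7)      # support of a single discrete variable (a die)
f_target = 4.5            # prescribed expectation E[f] with f(x) = x (made-up value)

def neg_dual(lam):
    # negative MAXENT dual with an exact moment constraint: log Z(lambda) - lambda * f_target
    return logsumexp(lam[0] * xs) - lam[0] * f_target

lam = minimize(neg_dual, x0=[0.0]).x[0]
p_hat = np.exp(lam * xs - logsumexp(lam * xs))   # MAXENT distribution p(x) ~ exp(lambda * x)
print(p_hat, (p_hat * xs).sum())                 # the reproduced mean is ~4.5
\end{lstlisting}
The same pattern extends to several feature functions by letting $\lambda$ be a vector, and to approximate MAXENT by adding the $\ell_1$ penalty on $\lambda$ discussed below.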
Note that many quantities of interest are simple expressions of expectations of appropriate functions, e.g.\ the covariance of two random variables is $\mathbb{E}\left[X_iX_j\right] - \mathbb{E}\left[X_i\right]\mathbb{E}\left[X_j\right]$. {\bf Approximate MAXENT} The empirical means of the functions $f$ gained from a finite sample will never be {\it exactly} identical to the true expectations. This implies that not even the true distribution necessarily satisfies the constraints imposed on the MAXENT distribution. This can lead to large (or even diverging) values of the parameters which overfit statistical fluctuations. To account for this, the expectations only need to be {\it close} to the given empirical means. This leads to the formulation of {\it approximate} MAXENT \citep{dudik2004performance,altun2006unifying}, where the constraints in the optimisation problem in \cref{eq:maxent_op_eq} are replaced by approximate constraints, resulting in \begin{align}\label{eq:maxent_op} &\min_{p} -H_p({\bf X}) \notag\\ \qquad \text{ s.t.:}\; &\|\mathbb{E}_{p}\left[f\right] - \tilde{f}\|_{\cal B} \leq \varepsilon \; ,\quad \sum_{\bf x} p({\bf x}) = 1 \; , \end{align} with $\varepsilon\geq 0$. This type of convex optimisation problem has been studied for infinite-dimensional Banach spaces ${\cal X}$ and ${\cal B}$ by \citet{altun2006unifying}, where it was shown that \cref{eq:maxent_op} is equivalent to \begin{align}\label{eq:maxent_op_dual} \max_\phi \left<\phi,\tilde{f}\right> - \log\sum_{\bf x}\exp\left[\left<\phi,f\right>\right] - \varepsilon\|\phi\|_{{\cal B}^*} \; , \end{align} with ${\cal B}^*$ being the dual to ${\cal B}$. In contrast to standard MAXENT, whose well-known dual is maximum likelihood estimation, in approximate MAXENT the parameters $\phi$ are regularised depending on the choice of the norm in \cref{eq:maxent_op}. For instance, ${\cal B}=\ell_\infty$ results in a Laplace regularisation $\varepsilon\|\phi\|_1$. Appropriate choices for $\varepsilon$ scale as $\mathcal{O}(1/\sqrt{M})$, where $M$ is the sample size, although in practice $\varepsilon$ is usually chosen using cross-validation techniques \citep{dudik2004performance,altun2006unifying}. Throughout this paper, we will use approximate MAXENT and assume that ${\cal X}$ and ${\cal B}$ are finite-dimensional. We consider the $\ell_\infty$ norm, which results in the element-wise constraints \begin{align}\label{eq:maxent_constraints} |\mathbb{E}_{p}\left[f_k\right] - \tilde{f}_k| \leq \varepsilon_k \quad \forall k \end{align} for $\varepsilon_k\geq 0$. In this case, the MAXENT optimisation problem can be solved analytically using the Lagrangian formalism of constrained optimisation and the solution reads \begin{equation}\label{eq:maxent_sol} \hat{p}({\bf x}) = \exp\left[\sum_{k} \lambda_{k} f_{k}({\bf x}_{S_k}) -\alpha\right] \;, \end{equation} where $\lambda=\left\{\lambda_{k}\right\}$ are the Lagrange multipliers and $\alpha=\log\sum_{\bf x}\exp\left[\sum_k \lambda_{k} f_{k}({\bf x}_{S_k})\right]$ is the partition function ensuring that $\hat{p}$ is correctly normalised. The optimal Lagrange multipliers can be found via \begin{align}\label{eq:maxent_lambdas} \min_\lambda &-\sum_k\lambda_k\tilde{f}_k + \log\sum_{\bf x}\exp\left[\sum_k\lambda_kf_k({\bf x}_{S_k})\right] \notag\\ &+ \sum_k\varepsilon_k|\lambda_k| \; .
\end{align} {\bf Conditional MAXENT} In cases where the marginal distribution of a subset of the variables is already known, the MAXENT approach can be natively extended to a {\it conditional} MAXENT. For instance, consider the variable $X_j\in{\bf X}$, and assume we are given the distribution $P(\bar{\bX})$ for $\bar{\bX}={\bf X}\backslash\left\{X_j\right\}$. Additionally, we are given some expectations involving the variables $\bar{\bX}$ and $X_j$. In this case, we obtain the MAXENT solution for the joint distribution of ${\bf X}$ by maximising the conditional entropy \begin{align} H_p(X_j \mid \bar{\bX})&= -\sum_{{\bf x}} p(x_j\mid\bar{\bx})p(\bar{\bx})\log p(x_j\mid\bar{\bx}) \; , \end{align} subject to the constraints in \cref{eq:maxent_constraints} imposed by the expectations of the functions $f$ as before, but now the set of variables ${\bf X}_{S_k}$ the function $f_k$ acts upon always contains the variable $X_j$, so ${\bf X}_{S_k}=\bar{\bX}_{S_k}\cup\left\{X_j\right\}$ for $\bar{\bX}_{S_k}\subseteq\bar{\bX}$. In this case, the solution in the Lagrangian formalism reads \begin{align} \hat{p}(x_j\mid\bar{\bx}) &=\exp\left[\sum_{k} \lambda_{k}f_{k}({\bf x}_{S_k}) -\beta(\bar{\bx})\right] \; , \label{eq:cmaxent_sol_pyx} \end{align} where $\lambda$ are the respective Lagrange multipliers for which optimal values can be found analogously to \cref{eq:maxent_lambdas}, and $\beta(\bar{\bx})=\log\sum_{x_j} \exp\left[\sum_{k} \lambda_{k}f_{k}({\bf x}_{S_k}) \right]$ ensures that the marginal constraint $\hat{p}(\bar{\bx})=p(\bar{\bx})$ is satisfied. The joint MAXENT distribution is then given by $\hat{p}({\bf x})= \hat{p}(x_j\mid\bar{\bx}){p}(\bar{\bx})$. {\bf Using conditional means}  When we consider a scenario as described in the introduction, in which we want to merge the information from different studies or research papers, we might only be provided with {\it conditional means}, like the average depression rate given that the age is in a specific range. In this case, the given constraints would be \begin{align} \label{eq:cmaxent_constraints_conditional} |\Exp{p}{f_k \mid \bar{\bx}_{S_k}=\bar{\bx}_{S_k}^\nu} - \tilde{f}_k^{\nu}|\leq\hat{\varepsilon}_k^\nu \quad \forall k,\nu \; , \end{align} for $\nu=1,\dots,\mathcal{V}_k$ and $\bar{\bx}_{S_k}^1,\dots,\bar{\bx}_{S_k}^{\mathcal{V}_k}$ being the possible sets of values the set of discrete random variables $\bar{\bX}_{S_k}$ can attain. Then \cref{eq:cmaxent_constraints_conditional} replaces the constraints in \cref{eq:maxent_constraints} and the conditional MAXENT solution reads \begin{align}\label{eq:cmaxent_sol_conditional} \hat{p}(x_j\mid\bar{\bx}) = \exp\left[\sum_{k,\nu}\hat{\lambda}_k^{\nu} f_k({\bf x}_{S_k})\delta_{\bar{\bx}_{S_k},\bar{\bx}_{S_k}^\nu}-\hat{\beta}(\bar{\bx}) \right] \end{align} with the Lagrange multipliers $\hat{\lambda}=\left\{\hat{\lambda}_k^\nu\right\}$ and $\hat{\beta}(\bar{\bx})=\log\sum_{x_j} \exp\left[\sum_{k,\nu}\hat{\lambda}_k^{\nu}f_k({\bf x}_{S_k})\delta_{\bar{\bx}_{S_k},\bar{\bx}_{S_k}^\nu} \right]$ and \begin{align} \label{eq:delta} \delta_{\bar{\bx}_{S_k},\bar{\bx}_{S_k}^\nu} = \begin{cases} 1 \quad &\text{if } \bar{\bx}_{S_k}=\bar{\bx}_{S_k}^\nu \\ 0 \quad &\text{otherwise} \; . \end{cases} \end{align} \section{OBTAINING INFORMATION ABOUT THE STRENGTH OF CAUSAL EFFECTS BY MERGING DATASETS}\label{sec:causal_influence} Another similarly essential and challenging task is to quantify the causal influence of a treatment on a target in the presence of confounders. 
In this section, we consider a scenario where we want to investigate the causal effect of a treatment variable $X_i$ (e.g.\ the place of residence) on a target variable $X_j$ (e.g.\ the depression rate) in the presence of confounders ${\bf Z}$ (e.g.\ age, which can influence both the depression rate and the place of residence), as displayed in \cref{fig:xyz}. Only pairwise observations for treatment -- target and treatment -- confounders are available in this scenario. To investigate the causal effect of $X_i$ on $X_j$, we can first use the results from \cref{sec:merging_datasets} and the MAXENT distribution to identify whether there is a direct causal link from $X_i$ to $X_j$. If this is the case, this section provides further insights into the causal relationship between $X_i$ and $X_j$. Even without observing all variables jointly, we can derive bounds on the interventional distribution $P(X_j\mid do(X_i))$ and the ACE of $X_i$ on $X_j$. \begin{figure}[t] \centering \adjustbox{max width=\textwidth}{\input{confounder_figure.tex}} \caption{DAG for a treatment variable $X_i$ influencing a target $X_j$ in the presence of confounders ${\bf Z}$. \label{fig:xyz}} \end{figure} \paragraph{Background} One of the core tasks in causality is computing interventional distributions. By answering the question ``what would happen if variable $X_i$ was set to value $x_i$?'' they provide valuable information without actually having to perform an experiment in which $X_i$ is set to the value $x_i$. Pearl's {\it do-calculus} \citep{Pearl2000} provides the tools to compute the distribution $P(X_j\mid do(X_i))$ of $X_j$ when intervening on $X_i$. In the infinite sample limit, the interventional distribution can be computed non-parametrically using backdoor adjustment \begin{align}\label{eq:bd} p(x_j\mid do(x_i)) = \sum_{{\bf z}} p(x_j\mid x_i,{\bf z}) p({\bf z}) \; , \end{align} if ${\bf Z}\subseteq {\bf X}\backslash\left\{X_i,X_j\right\}$ is a set of nodes that contains no descendant of $X_i$ and blocks all paths from $X_i$ to $X_j$ that contain an arrow into $X_i$ \citep{Pearl2000}. In the case of binary variables, the interventional distribution can also be used to compute the average causal effect (ACE) of $X_i$ on $X_j$: \begin{align}\label{eq:ACE} ACE_{X_i\to X_j} = &p(x_j=1\mid do(x_i=1)) -p(x_j=1\mid do(x_i=0)) \; . \end{align} \paragraph{Deriving Bounds on the Interventional Distribution and the ACE} Using the backdoor adjustment in \cref{eq:bd} we can bound the interventional distribution based only on the observed marginal distributions. \begin{Theorem}\label{th:bounds} Let $X_i$, $X_j$, and ${\bf Z}$ be discrete random variables in the causal DAG shown in \cref{fig:xyz} with known marginal distributions $P(X_i,X_j)$ and $P(X_i,{\bf Z})$. Then the interventional distribution $P(X_j\mid do(X_i))$ is bounded as follows: \begin{align}\label{eq:bounds} \frac{p(x_j,x_i={x}'_i)}{\max_{{\bf z}} p(x_i={x}'_i\mid {\bf z})} &\leq p(x_j\mid do(x_i=x'_i)) \leq \frac{p(x_j, x_i={x}'_i)}{\min_{{\bf z}} p(x_i={x}'_i\mid {\bf z})} \; . 
\end{align} \end{Theorem} \begin{proof} Using Pearl's backdoor adjustment in \cref{eq:bd} and Bayes' rule, we find \begin{align*} p(x_j\mid do(x_i={x}'_i)) &= \sum_{{\bf z}} p(x_j\mid x_i={x}'_i, {\bf z})p({\bf z}) = \sum_{{\bf z}} p(x_j\mid x_i={x}'_i, {\bf z})p({\bf z})\cdot \frac{p(x_i={x}'_i\mid {\bf z})}{p(x_i={x}'_i\mid {\bf z})} \\ &\leq \frac{\sum_{{\bf z}} p(x_j\mid x_i={x}'_i, {\bf z})p({\bf z})p(x_i={x}'_i\mid {\bf z})}{\min_{{\bf z}} p(x_i={x}'_i\mid {\bf z})} = \frac{p(x_j, x_i={x}'_i)}{\min_{{\bf z}} p(x_i={x}'_i\mid {\bf z})} \; . \end{align*} The lower bound can be derived analogously. \end{proof} In the case where $X_i$ and $X_j$ are binary, we can use \cref{eq:bounds} also to bound the ACE of $X_i$ on $X_j$. \begin{Lemma}\label{lm:ace} In the setting described in \cref{th:bounds} the ACE of $X_i$ on $X_j$ is bounded as follows: \begin{align} \frac{p(x_j\text{=}1, x_i\text{=}1)}{\max_{\bf z} p(x_i\text{=}1\mid {\bf z})} - \frac{p(x_j\text{=}1, x_i\text{=}0)}{\min_{{\bf z}}p(x_i\text{=}0\mid {\bf z})} \leq ACE_{X_i\to X_j} \leq \frac{p(x_j\text{=}1, x_i\text{=}1)}{\min_{{\bf z}}p(x_i\text{=}1\mid {\bf z})} - \frac{p(x_j\text{=}1, x_i\text{=}0)}{\max_{{\bf z}}p(x_i\text{=}0\mid {\bf z})} \; . \end{align} \end{Lemma} \begin{proof} This directly follows from \cref{th:bounds} and \cref{eq:ACE}. \end{proof} \Cref{lm:ace} provides us with at least approximate insight into the strength of the causal effect of $X_i$ on $X_j$. In addition, the bounds provide the correct scale for the estimates that can be obtained using the MAXENT solution. However, since the MAXENT solution is only an approximation to the true distribution, the ACE derived from it is also an approximation, and since it is a point estimate, we do not know how close to or far from the true ACE it is. Therefore we report bounds to show what can be said about the ACE from marginal distributions even without MAXENT. \paragraph{Related Work on Confounder Correction} The classical task of confounder correction is to estimate the effect of a treatment variable on a target in the presence of unobserved confounders. In this paper, however, we consider the scenario shown in \cref{fig:xyz} and assume that we have observations for the confounders ${\bf Z}$, but not for $X_i,X_j$, and ${\bf Z}$ jointly. If $X_i,X_j$, and ${\bf Z}$ were observed jointly, the causal effect of $X_i$ on $X_j$ would be identifiable and could be computed using Pearl's backdoor adjustment (see \cref{sec:graphical_models}). In cases where ${\bf Z}$ is unobserved, the causal effect of $X_i$ on $X_j$ is not directly identifiable. One exception is if a set of observed variables satisfies the front-door criterion \citep{Pearl2000}. In \citet{Galles1995,Pearl2000} and \citet{kuroki2014measurement}, more general conditions were presented under which do-calculus and proxy variables of unobserved confounders, respectively, make the causal effect identifiable. Another approach to confounder correction is to phrase the problem in the potential outcome framework, e.g., using instrumental variables \citep{angrist1996identification,GroJanSieSch2016} or principal stratification \citep{Rubin2004}. 
Other, more recent approaches include, for instance: double/debiased machine learning \citep{chernozhukov2018double,jung2021estimating}; combinations of unsupervised learning and predictive model checking to perform causal inference in multiple-cause settings \citep{wang2019blessings}; methods that use limited experimental data to correct for hidden confounders in causal effect models \citep{kallus2018removing}; and the split-door criterion, which considers time series data where the target variable can be split into two parts, of which one is only influenced by the confounders and the other is influenced by the confounders and the treatment, reducing the identification problem to that of testing for independence among observed variables \citep{sharma2018split}. Although confounder correction is a common and well-studied problem, we are unaware of approaches based on pairwise observations for treatment -- target and treatment -- confounders only. \section{CONCLUSION}\label{sec:conclusion} We have shown how the MAXENT principle can identify links in causal graphs and thus obtain information about the causal structure by merging the statistical information in different datasets. There are several directions in which this work can be extended. On the practical side, we believe that developing efficient ways to compute the expectations of the inferred distribution is vital. In our experiments, we merged between two and five datasets and used up to 22 constraints. In order to scale the problem to more variables and constraints, the main bottleneck is the estimation of the partition function ($\alpha$ and $\beta(\bar{{\bf x}})$ in \cref{sec:maxent}). Some efficient ways to compute this are developed in \cite{wainwright2008graphical}; however, their properties with respect to causality remain unknown. Another direction for future work would be to study the statistical properties of the estimated parameters, in other words, to develop a statistical test of the null hypothesis that a multiplier is zero (in the case of unconditional moments) or equal to the others (in the case of conditional moments). In addition to the causal insights we get from this work, we would like to highlight two ways in which this work can positively impact society. First, by using only information from expectations, a characteristic that makes MAXENT a flexible approach, we take a step towards preventing the identification of individuals through adversarial attacks. Second, by using information from different sources, we can avoid being unable to answer causal questions, or worse, giving wrong causal answers, because of a lack of jointly observed data. \section{IMPLEMENTATION DETAILS}\label{sec:implementation} We implemented MAXENT in Python using JAX \citep{jax2018github} optimisation procedures. We minimise the sum of squared differences between the moments given as constraints and the moments estimated from the MAXENT distribution entailed by the Lagrange multipliers (a minimal sketch of this procedure is shown below). If the absolute differences between the data expectations and the MAXENT expectations were smaller than 0.001, the procedure was considered to have converged. In our current implementation, we compute the normalising constant directly, although approximation methods could be used to make the computation faster if required \citep{wainwright2008graphical}. \section{EXPERIMENTAL SETUP}\label{sec:experimental_setup} In all experiments, we observe only expectations associated with the $X_{i}$ variables. 
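These expectations enter the moment-matching optimisation described in \cref{sec:implementation}. The following minimal sketch illustrates that procedure for a small set of binary variables; the helper names, the plain gradient-descent loop, and the toy constraints are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import itertools
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def maxent_expectations(lam, F):
    # F[s, k] = f_k evaluated on the s-th joint state; p(x) ~ exp(sum_k lam_k f_k(x))
    logits = F @ lam
    p = jnp.exp(logits - logsumexp(logits))
    return p @ F                      # E_p[f_k] for every constraint k

def loss(lam, F, target):
    # squared mismatch between model moments and given moments
    return jnp.sum((maxent_expectations(lam, F) - target) ** 2)

def fit_multipliers(F, target, lr=0.1, steps=5000, tol=1e-3):
    lam, grad_fn = jnp.zeros(F.shape[1]), jax.grad(loss)
    for _ in range(steps):            # plain (unjitted) gradient descent for clarity
        lam = lam - lr * grad_fn(lam, F, target)
        if jnp.max(jnp.abs(maxent_expectations(lam, F) - target)) < tol:
            break                     # convergence criterion from the text
    return lam

# toy example: three binary variables, constraints on E[x1], E[x2], E[x3], E[x1*x2]
states = jnp.array(list(itertools.product([0, 1], repeat=3)), dtype=jnp.float32)
F = jnp.column_stack([states[:, 0], states[:, 1], states[:, 2],
                      states[:, 0] * states[:, 1]])
lam_hat = fit_multipliers(F, jnp.array([0.4, 0.6, 0.5, 0.3]))
\end{verbatim}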
To build the ROC curves for each of the samples obtained, we first generated a vector $p$ of probabilities from a $\mathcal{U}(0.1, 0.9)$ distribution. In all the following examples, we generated 1000 observations for 100 repetitions of the SCM and estimated the empirical expectations from that sample. If the procedure did not converge, we did not take it into account for the ROC. We also randomised the logical relation between the causes $X_{i}$ and the effect $X_0$. We denote this logical relation below as $\odot \in \{\wedge, \vee, \oplus\}$. The generative processes for the shown experiments with synthetically generated data are the following: First, we select the used parameters as follows: \begin{alignat*}{2} u_l &\sim {\cal N}(0, 1) \quad &&\text{for}\quad l\in\left\{1, \dots, 5\right\}\\ p_{k} &\sim \mathcal{U}(0.1, 0.9) \quad &&\text{for} \quad k=0,\dots,5 \\ a_{i} &\sim \mathcal{N}(0, 1) \quad &&\text{for} \quad i=1,\dots,5 \\ b_{i,j} &\sim \mathcal{N}(0, 1) \quad &&\text{for} \quad j=1,\dots,5 \end{alignat*} Then we use these parameters to generate the data for the variables $X_0$ to $X_5$. For the experiment in \cref{fig:graph_exp_a} the data is generated according to: \begin{align*} x_{1} &\sim |\text{Ber}(p_{1}) - (u_1 > 0)| \\ x_{2} &\sim |\text{Ber}(p_{2}) - (u_1 < 0.25)| \\ x_{3} &\sim |\text{Ber}(p_{3}) - (u_2 > 0)| \\ x_{4} &\sim |\text{Ber}(p_{4}) - (u_2 > 0.25)| \\ x_{5} &\sim \text{Ber}(p_{5}) \\ x_0 &\sim \mathbf{1}_{>0} \bigg[\bigg(\sum_{i}a_{i}X_{i}+\sum_{i,j}b_{i,j}X_{i}X_{j}\bigg)\bigg] \odot \text{Ber}(p_{0}) \end{align*} And finally, for the experiment in \cref{fig:graph_exp_b} the data is generated according to: \begin{align*} x_{1} &\sim |\text{Ber}(p_{1}) - (u_1 > 0 \vee u_2 > 0.25 \vee u_3 > 0.5)| \\ x_{2} &\sim |\text{Ber}(p_{2}) - (u_2 < 0.5 \vee u_3 < 0.25 \vee u_4 < 0)| \\ x_{3} &\sim |\text{Ber}(p_{3}) - (u_3 > 0 \vee u_4 < 0.25 \vee u_5 > 0.5)| \\ x_{4} &\sim |\text{Ber}(p_{4}) - (u_4 < 0.5 \vee u_5 > 0.25 \vee u_1 < 0)| \\ x_{5} &\sim |\text{Ber}(p_{5}) - (u_5 > 0 \vee u_1 < 0.25 \vee u_2 > 0.5)| \\ x_0 &\sim \mathbf{1}_{>0} \bigg[\bigg(\sum_{i}a_{i}X_{i}+\sum_{i,j}b_{i,j}X_{i}X_{j}\bigg)\bigg] \odot \text{Ber}(p_{0}) \end{align*} For the experiment in \cref{fig:graph_exp_c} we used the following generative process: \begin{align*} x_{1} &\sim |\text{Ber}(p_{1}) - (u_1 > 0)| \\ x_{5} &\sim |\text{Ber}(p_{5}) - (-0.25 < u_1 < 0.25)| \\ x_{2} &\sim |\text{Ber}(p_{2}) - (u_1 < 0)| \vee x_{1}\\ x_{4} &\sim |\text{Ber}(p_{4}) - (u_1 < -0.25)| \vee x_{5}\\ x_{3} &\sim |\text{Ber}(p_{3}) - (u_1 > 0.25)| \vee (x_{1} \oplus x_{5})\\ x_0 &\sim \mathbf{1}_{>0} \bigg[\bigg(\sum_{i}a_{i}X_{i}+\sum_{i,j}b_{i,j}X_{i}X_{j}\bigg)\bigg] \odot \text{Ber}(p_{0}) \end{align*} \section{EXPERIMENTS}\label{sec:experiments} \begin{figure*}[!ht] \sbox0{\begin{subfigure}[t]{.3\textwidth} \centering \adjustbox{max width=.9\textwidth}{\input{graph_ex_a.tex}} \subcaption{Structure graph (a)\label{fig:graph_exp_a}} \end{subfigure}} \sbox1{\begin{subfigure}[t]{.3\textwidth} \centering \adjustbox{max width=.9\textwidth}{\input{graph_ex_b.tex}} \subcaption{Structure graph (b)\label{fig:graph_exp_b}} \end{subfigure}} \sbox2{\begin{subfigure}[t]{.3\textwidth} \centering \adjustbox{max width=.9\textwidth}{\input{graph_ex_c.tex}} \subcaption{Structure graph (c)\label{fig:graph_exp_c}} \end{subfigure}} \sbox3{\begin{subfigure}[t]{.25\textwidth} \centering \adjustbox{max width=\textwidth}{\includegraphics[width=\textwidth]{roc_graph_a.png}} \caption{ROC curve for graph (a) 
\label{fig:roc_overlay_a}} \end{subfigure}} \sbox4{\begin{subfigure}[t]{.25\textwidth} \centering \includegraphics[width=\textwidth]{roc_graph_b.png} \caption{ROC curve for graph (b) \label{fig:roc_overlay_b}} \end{subfigure}} \sbox5{\begin{subfigure}[t]{.25\textwidth} \centering \includegraphics[width=\textwidth]{roc_graph_c.png} \caption{ROC curve for graph (c) \label{fig:roc_overlay_c}} \end{subfigure}} \sbox6{\begin{subfigure}[t]{.32\textwidth} \centering \adjustbox{max width=\textwidth}{\includegraphics[width=\textwidth]{rate_vs_ace_graph_a.png}} \caption{True positives over ACE, graph (a) \label{fig:rate_vs_strength_a}} \end{subfigure}} \sbox7{\begin{subfigure}[t]{.32\textwidth} \centering \includegraphics[width=\textwidth]{rate_vs_ace_graph_b.png} \caption{True positives over ACE, graph (b) \label{fig:rate_vs_strength_b}} \end{subfigure}} \sbox8{\begin{subfigure}[t]{.32\textwidth} \centering \includegraphics[width=\textwidth]{rate_vs_ace_graph_c.png} \caption{True positives over ACE, graph (c) \label{fig:rate_vs_strength_c}} \end{subfigure}} \centering \begin{tabular}{ccc} \usebox0 & \usebox1 & \usebox2 \\[2em] \usebox3 & \usebox4 & \usebox5 \\[2em] \usebox6 & \usebox7 & \usebox8 \\[2em] \end{tabular} \caption{ We show the structure of the graphs we consider in the synthetic experiments in (\subref{fig:graph_exp_a}), (\subref{fig:graph_exp_b}), and (\subref{fig:graph_exp_c}). In (\subref{fig:roc_overlay_a}), (\subref{fig:roc_overlay_b}), and (\subref{fig:roc_overlay_c}) we show the ROC curves for the identification of missing edges. We generated 100 datasets for each graph where we varied the used SCMs and the absence and presence of the edges shown as dashed lines, whereas links represented by solid lines are always present. In (\subref{fig:rate_vs_strength_a}), (\subref{fig:rate_vs_strength_b}), and (\subref{fig:rate_vs_strength_c}) we show how the ability to detect an edge depends on the strength of the causal effect. Here we generated another 500 datasets for each graph in which the link between $X_1$ and $X_0$ was always present, but the ACE of $X_1$ on $X_0$ varied. Although our MAXENT-based approach only uses conditional means as input, it achieves similar performance as the KCI-test that uses the full generated dataset. \label{fig:exp_results}} \end{figure*} In this section, we apply the theoretical results from \cref{sec:merging_datasets} on different synthetically generated and real-world datasets. For this, we implemented the MAXENT estimation in Python (see \cref{sec:implementation}). {\bf Synthetic data} We consider five binary variables $X_1,\dots,X_5$, which are potential causes of a sixth binary variable $X_0$, and we want to infer which variables $X_i$ have a direct causal link to $X_0$. The ground truth DAGs for our experiments are shown in \cref{fig:graph_exp_a,fig:graph_exp_b,fig:graph_exp_c}, and the SCMs we used for the data generation can be found in \cref{sec:experimental_setup}. For the first set of experiments, we kept the structure of the confounders $U_j$ with the potential causes fixed (solid lines) and randomised the existence of mechanisms between the potential parents and the effect variable $X_0$ (dashed lines). We generated 100 datasets for each graph structure by randomly picking the existing mechanisms and the parameters used in the SCM. We sample 1000 data points according to the respective SCM for each dataset. 
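As a concrete illustration of this sampling step, a condensed sketch for graph (a), following the generative process in \cref{sec:experimental_setup}, could look as follows; treating the confounder terms $u_1,u_2$ as per-sample noise and fixing the connective $\odot$ to XOR are simplifying assumptions of the sketch, not part of the original specification.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_graph_a(n=1000):
    """Minimal sketch of the SCM for graph (a)."""
    p = rng.uniform(0.1, 0.9, size=6)             # p_0, ..., p_5
    a = rng.normal(size=5)                         # a_1, ..., a_5
    b = rng.normal(size=(5, 5))                    # b_{i,j}
    u1, u2 = rng.normal(size=(2, n))               # confounder noise per sample
    ber = lambda q: rng.binomial(1, q, size=n)
    x = np.empty((n, 5), dtype=int)
    x[:, 0] = np.abs(ber(p[1]) - (u1 > 0))         # X_1
    x[:, 1] = np.abs(ber(p[2]) - (u1 < 0.25))      # X_2
    x[:, 2] = np.abs(ber(p[3]) - (u2 > 0))         # X_3
    x[:, 3] = np.abs(ber(p[4]) - (u2 > 0.25))      # X_4
    x[:, 4] = ber(p[5])                            # X_5
    score = x @ a + np.einsum('ni,nj,ij->n', x, x, b)
    # indicator(score > 0) combined with Ber(p_0) via XOR (one choice of the connective)
    x0 = (score > 0).astype(int) ^ ber(p[0])
    return x0, x

x0, x = sample_graph_a()
\end{verbatim}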
Then we artificially split these observations into five datasets that we want to merge and that always only contain bivariate information about $X_0$ and one of the potential causes $X_i$. We do this by empirically estimating the conditional means $\Exp{p}{x_0\mid x_i=0}$ and $\Exp{p}{x_0\mid x_i=1}$ from the samples for all $i=1,\dots,5$. We use these conditional means as constraints for the MAXENT optimisation problem as shown in \cref{eq:cmaxent_constraints_conditional}. We assume that $X_0$ cannot have a causal influence on any $X_i$. Therefore, we can use the results in \cref{lm:no_edge} to identify whether $X_i$ is directly causally linked to $X_0$ or not. To decide whether the Lagrange multipliers associated with a potential cause $X_i$ are constant -- and hence $X_i$ is not directly linked to $X_0$ -- we use a relative difference estimator \begin{align} \theta_i = \frac{\left|\lambda_i^{1}-\lambda_i^{2}\right|}{\max\{|\lambda_i^{1}|, |\lambda_i^{2}|, \left|\lambda_i^{1}-\lambda_i^{2}\right|, 1\}} \;\in\left[0,1\right] \; , \end{align} where $\lambda_i^1, \lambda_i^2$ are the two Lagrange multipliers for the constraints associated with $X_i$. We consider the Lagrange multipliers constant if $\theta_i$ is smaller than a threshold $t \in [0, 1]$. We vary the threshold $t$ linearly between zero and one. We count the number of correctly and falsely identified edges in the 100 datasets for each threshold value. The results are summarised in the receiver operating characteristic (ROC) curves in \cref{fig:roc_overlay_a,fig:roc_overlay_b,fig:roc_overlay_c}. We consider two scenarios: one in which we assume that we know the marginal distribution $P(X_1,\dots,X_5)$ for the potential causes (called \enquote{known $p(x)$}, orange line), and the second where we first infer this distribution also using MAXENT (called \enquote{estimated $p(x)$}, blue line). Further, we compare our results with a kernel-based conditional independence test (KCI-test) \citep{UAI_Kun_kernel,strobl2019approximate} (green line). For the KCI-test, we directly use the 1000 data points generated from the joint distribution. To generate the ROC curve, we vary the $\alpha$-level of the test for the null hypothesis that $X_0$ is CI of $X_i$ given all other potential causes and count the number of correct/false rejections/acceptances. In the second set of experiments, we investigate how much our approach's ability to identify edges depends on the strength of the causal effect. We generated 500 additional datasets for each graph as described before, but this time always included a causal link from $X_1$ to $X_0$ and only varied the strength of this connection. We fixed the threshold for the identification of an edge to a randomly picked value ($t=\alpha=0.15$). \Cref{fig:rate_vs_strength_a,fig:rate_vs_strength_b,fig:rate_vs_strength_c} show how in this case the true positive rate for the identification of the link depends on the ACE of $X_1$ on $X_0$. The results in \cref{fig:roc_overlay_a,fig:roc_overlay_b,fig:roc_overlay_c,fig:rate_vs_strength_a,fig:rate_vs_strength_b,fig:rate_vs_strength_c} show that for all graph structures our method achieves similar performance as the KCI-test. This is impressive, as our method only uses the conditional means of $X_0$ on only one of the potential causes. In contrast, the KCI-test uses all samples generated from the joint data distribution. 
Thus, although our method uses much less information than the KCI-test and even has to merge these small pieces of information from different datasets, it still achieves performance similar to that of the KCI-test. In addition, we want to show that the MAXENT solution can provide information not only about the causal structure but also about the strength of a causal effect. For this, we derive bounds for the ACE based only on the marginal distributions in \cref{sec:causal_influence}. In \cref{fig:ace} we see that the ACE estimated from the MAXENT distribution is always very close to the true ACE, and even in the cases where they do not precisely coincide, both are clearly within the bounds derived from the marginal distributions. \begin{figure}[t] \centering \includegraphics[width=.9\columnwidth]{ace_and_bounds} \caption{ACE of $X_3$ on $X_0$ for ten randomly picked examples of a variation of the graph in \cref{fig:graph_exp_c}. The ACE estimated from the MAXENT solution with a known marginal distribution of the causes (orange triangle) is always close to the true ACE (black square). But even when the distribution of the causes is also inferred using MAXENT (blue dot), the estimated ACE is close to the true one and always within the bounds (grey lines) estimated from the marginal distributions. \label{fig:ace}} \end{figure} {\bf Real data} We performed an experiment using real-world data from \citet{Gapminder}, a website that compiles country-level data of social, economic, and environmental nature. We chose three variables for our experiment: CO2 emissions in tonnes per capita \citep{osti_1389331}; inflation-adjusted Gross Domestic Product (GDP) per capita \citep{WB:GDPPC}; and Human Development Index (HDI) \citep{UNHDR:HDI}. We use data from 2017 for all variables and standardise it before estimation. We consider CO2 emissions to be the target variable, for which the other variables are potential causes. We use the unconditional mean and variance of CO2 emissions and the pairwise covariance between CO2 emissions and each of the two other variables as constraints. According to the Lagrange multipliers shown in \cref{tab:pvalues} and \cref{lm:no_edge}, we conclude that CO2 emission is directly linked to HDI but not to GDP. We run the same KCI-test as in the synthetic experiments above to investigate this conclusion. \Cref{tab:pvalues} also shows the obtained p-values for the null hypothesis that CO2 emission is CI of each variable conditioned on the other variable. The results of the KCI-test agree with our conclusion. Nonetheless, the result of the KCI-test does not necessarily reflect the ground truth. However, that GDP has only an indirect causal influence on CO2 emissions through HDI matches our intuition. We would expect that a change in GDP does not directly affect the CO2 emissions but influences the HDI -- and potentially multiple other factors that we do not consider in this experiment -- which then affects the CO2 emissions. In \cref{sec:add_results} we discuss more such experiments in which we also include fertility and life expectancy as potential causes. In all of the considered cases where we used two potential causes, the conclusion drawn from the Lagrange multipliers agreed with the conclusion drawn from the KCI-test. Only when we include more variables does the KCI-test indicate CI of HDI and CO2 emissions given the other variables, while our method still finds a direct link between them. 
However, this finding of the KCI-test is also inconsistent with the other CI statements of the KCI-test for smaller conditioning sets. Moreover, consider what is available to both methods: the KCI-test requires a sample of the joint distribution, while our method relies solely on bivariate covariances, which might not even be enough to describe the joint distribution fully. \begin{table}[t] \centering \caption{Lagrange multipliers $\lambda_i$ found for the MAXENT solution and p-values of the KCI-test. We indicate where the multipliers indicate the presence of a direct edge connecting $X_i$ and CO2 emissions, and where the p-values indicate that the two are not CI given the other variable. \label{tab:pvalues}} \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{lrc|rc} \toprule variable $X_i$ & $\lambda_i$ & edge & p-value & no CI \\ \midrule GDP & -0.29 & \ding{55} & 0.19 & \ding{55}\\ HDI & 3.26 & \ding{51} & 0.02 & \ding{51}\\ \bottomrule \end{tabular} \end{adjustbox} \end{table} Finally, we consider the example from the introduction, in which we want to investigate the depression rate conditioned on age, sex, and place of residence. We are given the conditional means for the depression rate given age, sex, and the federal state of Germany, in addition to the joint distribution of age, sex, and state \citep{gesundheit2021depression,gesundheit2021sexAge}. Using this information, we can find the MAXENT solution for the joint distribution of all four variables (depression rate ($D$), age ($A$), sex ($S$), and place of residence ($P$)). The Lagrange multipliers found are shown in \cref{sec:add_results}. For none of the three potential causes are the multipliers constant. Hence, we conclude that all three factors (age, sex, and place of residence) have a direct causal link to the depression rate.\footnote{Note that it is still possible that these factors only have an {\it indirect} influence on the depression rate via other factors that we do not consider here. Investigating {\it all} potential causes of the depression rate would be a research project in its own right and is out of the scope of this work.} Nevertheless, we can use the result from \cref{th:predictive_power}, stating that the joint MAXENT solution is a better predictor than any of the given marginal distributions, to investigate questions like \enquote{What is the probability of a 30-year-old woman living in a certain federal state becoming depressed?}. When we, for instance, consider the federal states Baden-Wuerttemberg (BW) and Berlin (BE), the result of the MAXENT solution is \begin{alignat*}{2} &p(D\mid S=\text{female}, A=30, P=\text{BW}) &&= \phantom{1}9.5\%, \\ &p(D\mid S=\text{female}, A=30, P=\text{BE}) &&= 11.2\%, \end{alignat*} while from the marginal distributions, we get \begin{alignat*}{5} &p(D\mid S=\text{female}) &&= 9.7\% , &\; &p(D\mid P=\text{BW}) &&= 7.7\% , \\ &p(D\mid A=\text{30--44}) &&= 7.5\% , &\; &p(D\mid P=\text{BE}) &&= 9.3\% . \end{alignat*} It may seem surprising that the probability increases when conditioning on all three factors. However, since the depression rate for \enquote{female} is higher than for \enquote{male} (which is only 8.6\%), it makes sense that the depression probability slightly increases when additionally conditioning on the sex being female. Further note that none of the above necessarily reflects the true probability. The MAXENT solution only provides a \enquote{better guess} for the depression rate given all three factors than each of the marginal distributions. 
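To make explicit how such conditional probabilities are read off from a fitted conditional MAXENT model, the following sketch normalises the exponential form in \cref{eq:cmaxent_sol_pyx} over the target variable; the feature encodings and the multiplier values are purely hypothetical placeholders, not the ones fitted to the data above.
\begin{verbatim}
import numpy as np

def conditional_maxent_query(lambdas, feats, x_bar, xj_values):
    # p_hat(x_j | x_bar) is proportional to exp(sum_k lambda_k f_k(x_j, x_bar))
    logits = np.array([sum(lam * f(xj, x_bar) for lam, f in zip(lambdas, feats))
                       for xj in xj_values])
    probs = np.exp(logits - logits.max())     # subtract max for numerical stability
    return probs / probs.sum()

# hypothetical encoding of (sex, age group, state) and placeholder multipliers
feats = [lambda xj, xb: xj * (xb[0] == "female"),
         lambda xj, xb: xj * (xb[1] == "30-44"),
         lambda xj, xb: xj * (xb[2] == "BW")]
p_depressed = conditional_maxent_query([0.3, -0.1, -0.2], feats,
                                       ("female", "30-44", "BW"), [0, 1])[1]
\end{verbatim}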
\section{INTRODUCTION} The scientific community is rich in observational and experimental studies that consider a tremendous amount of problems from an even more significant number of perspectives. All these studies have collected data containing valuable information to investigate the research question at hand. At the same time, it is often impossible to use the collected data to answer slightly different or more general questions, as the required information cannot be extracted from the already existing datasets. Consider, for instance, a case in which we want to investigate the influence of the place of residence on the probability to become depressed, and we are given four different studies: (1) showing the depression rates for different regions; (2) capturing depression rate with respect to (w.r.t.) age; (3) providing information about the depression rate w.r.t.\ sex; and (4) showing the distribution of age and sex across different regions. We want to know whether there is a direct causal link between the place of residence and the depression rate or only an indirect link through age and/or sex. In this paper, we address the question of how we can obtain this causal information without performing a new study in which we observe all factors (age, sex, place of residence, and depression rate) at the same time, but only by merging the already collected datasets. Since the problem of inferring the joint distribution from a set of marginals is heavily underdetermined \citep{Kellerer1964}, we use the maximum entropy (MAXENT) principle to infer the joint distribution that maximises the joint entropy subject to the observed marginals. This has the advantage that the MAXENT distribution contains some information about the existence of causal arrows that also hold for the true joint distribution regardless of how much the MAXENT distribution deviates from the latter. As usual, our causal conclusions require debatable assumptions that link statistical properties of distributions from passive observations to causality. Therefore we use assumptions common in causal discovery \citep{Spirtes1993,Pearl2000}. Additionally, we define and intuitively justify the notion of {\it faithful $f$-expectations}, which is analogous to faithfulness in the sense of postulating genericity of parameters. This allows us to draw the following conclusions, which are the main contributions of this paper: \begin{itemize} \item The presence or absence of direct causal links can be identified only from the Lagrange multipliers of the MAXENT solution if the causal order is known (see \cref{sec:merging_datasets}, \cref{lm:no_edge}). \item For a causal graph $G$ with $N$ nodes for which the given constraints define all bivariate distributions uniquely, the graph constructed from the MAXENT distribution by connecting two nodes if and only if there is a non-zero Lagrange multiplier corresponding to some bivariate function of the two variables, is a supergraph of the moral graph of $G$ (see \cref{sec:merging_datasets}, \cref{thm:moral}). \item Merging datasets with MAXENT improves the predictive power compared to using the observed marginal distributions (see \cref{sec:merging_datasets}, \cref{th:predictive_power}). \end{itemize} The remainder of this paper is structured as follows: We start by presenting the notation and assumptions used throughout this paper. Then, in \cref{sec:maxent} we introduce the MAXENT principle. In \cref{sec:merging_datasets} we discuss how we can obtain causal information by merging datasets. 
In \cref{sec:related_work} we put our work into the context of the related literature. Finally, in \cref{sec:experiments} we evaluate the identification of causal edges from MAXENT on simulated and real-world datasets. {\bf Notation}\label{sec:notation} Let ${\bf X}=\left\{X_1,\dots,X_N\right\}$ be a set of discrete random variables. Although the results in this article also hold for continuous variables with strictly positive densities ($p({\bf x})>0$) and finite differential entropy, for notational convenience we consider discrete random variables with values ${\bf x}\in{\cal X}$. Further let $X_i,X_j\in{\bf X}$ be two variables whose causal relationship we want to investigate. We denote with ${\bf Z}={\bf X}\backslash\left\{X_i,X_j\right\}$ the complement of $\left\{X_i,X_j\right\}$ in ${\bf X}$, where (by slightly overloading notation) bold variables represent sets and vectors of variables at the same time. We consider the set of functions $f=\left\{f_k\right\}$ with $f_k:{\cal X}_{S_k}\to{\mathbb R}$ for $k\in{\mathbb N}$ and ${\bf X}_{S_k}\subseteq{\bf X}$. The empirical means of $f$ for a finite sample from the joint distribution $P({\bf X})$ are collected in the set $\tilde{f}=\{\tilde{f}_k\}$, and we denote the set of true expectations by $\Exp{p}{f}=\left\{\sum_{\bf x} p({\bf x})f_k({\bf x}_{S_k})\right\}$. Further, we denote with $P$ the \enquote{true} joint distribution of the variables under consideration and with $\hat{P}$ the approximate MAXENT solution satisfying the constraints imposed by the expectations of $f$, as described in \cref{sec:maxent}. {\bf Assumptions} Unless stated otherwise, we make the following assumptions throughout this paper: The set of variables ${\bf X}$ is causally sufficient, that is, there is no hidden common cause $U \notin {\bf X}$ that causes more than one variable in ${\bf X}$ (where the causal paths go only through nodes that are not in ${\bf X}$) \citep{peters2017elements}. Furthermore, their joint distribution $P({\bf X})$ satisfies the causal Markov condition and faithfulness w.r.t.\ a directed acyclic graph (DAG) $G$ (see \cref{sec:graphical_models}). We have $L$ datasets, where each contains observations for only a subset of the variables, and at least one dataset contains observations for the set $\left\{X_i,X_j\right\}$. The observations are drawn from the same joint distribution $P({\bf X})$.\footnote{In \cref{app:different_contexts} we sketch the case where each dataset is from a different joint distribution by introducing an additional variable for the background conditions.} Further, the set of functions $f$ is linearly independent. \section*{Acknowledgements} We thank Steffen Lauritzen for helpful remarks on undirected graphical models. \setlength{\itemindent}{-\leftmargin} \section{OBTAINING CAUSAL INFORMATION BY MERGING DATASETS WITH MAXENT} \label{sec:merging_datasets} In this section, we consider the analysis of the causal relationship between variables when not all of them have been observed jointly. All proofs of the following propositions can be found in \cref{app:proofs}. First, we show how to detect the presence or absence of direct causal links in a DAG $G$ from the Lagrange multipliers of the MAXENT distribution. We start by showing that if $X_i$ and $X_j$ are CI given all other variables w.r.t.\ the true distribution, then this is also the case w.r.t.\ the MAXENT distribution and is reflected in the respective Lagrange multipliers being zero. 
\begin{restatable}[CI results in Lagrange multipliers being zero]{Lemma}{cioneway}\label{th:ci_one_way} Let $P$ be a distribution and let $\hat{P}$ be the MAXENT distribution satisfying the constraints imposed by the expectations of the functions $f$ which are sufficient to uniquely describe the marginal distributions $P(X_i,{\bf Z}), P(X_j,{\bf Z})$, and $P(X_i,X_j)$. Then it holds: \begin{align}\label{eq:ci_one_way} &X_i\mathrel{\perp\mspace{-10mu}\perp} X_j \mid {\bf Z} \;\; [P] \notag\\ \quad\Rightarrow\quad &X_i\mathrel{\perp\mspace{-10mu}\perp} X_j\mid {\bf Z} \;\; [\hat{P}] \notag\\ \quad \Rightarrow \quad &\lambda_k = 0 \quad \forall k\;\text{ with }\; {\bf X}_{S_k}=\left\{X_i,X_j\right\} \; . \end{align} \end{restatable} Under the stated assumptions, it directly follows from \cref{th:ci_one_way} that if two variables are CI given all other variables, and hence not directly linked in the causal DAG $G$, then the respective Lagrange multipliers are zero. This, however, is not enough to draw conclusions from the Lagrange multipliers about the absence or presence of causal links. For this, we also need to show that the presence of a direct link results in a non-zero Lagrange multiplier. To do so, we first postulate a property that we call {\it faithful $f$-expectations}. This property is analogous to faithfulness in postulating the genericity of parameters. For the following definition, we denote with $\lambda^P_f$ and $\lambda_f^Q$ the sets of Lagrange multipliers of the MAXENT distribution satisfying the expectation constraints in \cref{eq:maxent_constraints} entailed by the functions $f$ w.r.t.\ the distributions $P$ and $Q$, respectively. \begin{Definition}[Faithful $f$-Expectations] A distribution $P$ is said to have faithful $f$-expectations relative to a DAG $G$ if $\lambda_{f_k}^P\neq 0$ for all $f_k\in f$ for which there exists a distribution $Q$ that is Markov relative to $G$ with $\lambda_{f_k}^Q\neq 0$. \end{Definition} We rephrase this definition in the language of information geometry to show that this is just a genericity assumption like usual faithfulness, and \cref{fig:faithfulness} illustrates the intuition behind it. By elementary results of information geometry \citep{Amari}, the MAXENT distribution $\hat{P}$ can also be considered a projection of the distribution $P$ onto the exponential manifold $E_f$, which is defined by the span of all functions $f$, containing distributions of the form $\exp\left[\sum_k\lambda_k f_k({\bf x}_{S_k}) - \alpha\right]$ (visualised by the blue plane in \cref{fig:faithfulness}). If a Lagrange multiplier $\lambda_k$ is zero, then $\hat{P}$ lies within the submanifold $E_{f\setminus \{f_k\}} \subset E_f$, which is defined through the span of all functions in $f$ except $f_k$ (illustrated by the red dashed line in \cref{fig:faithfulness}). Then faithful $f$-expectations state that the projection of $P$ onto $E_f$ will generically not lie in $E_{f\setminus \{f_k\}}$ unless the DAG $G$ only allows for distributions whose projections onto $E_f$ also lie in $E_{f\setminus \{f_k\}}$. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{faithful.png} \caption{Intuitive explanation of the idea behind faithful $f$-expectations: the MAXENT distribution is a projection of the distribution $P$ onto the exponential manifold $E_f$, defined by the span of the functions $f$. It is very unlikely that this projection falls into the submanifold $E_{f\setminus \left\{f_k\right\}}$ where $\lambda_k=0$ just by chance. 
\label{fig:faithfulness}} \end{figure} Further justification of faithful $f$-expectations via some probabilistic arguments would be a research project in its own right. After all, even the discussion on usual faithfulness is ongoing: The \enquote{measure zero argument} by \citet{Meek} is criticised in \citet{LemeireJ2012}, and it is argued that natural conditional distributions tend to be more structured. In \citet{uhler2013} it is shown that distributions are not unlikely to be {\it close to} being unfaithful. Despite these concerns, faithfulness still proved to be helpful. Postulating faithful $f$-expectations allows us to link the causal structure to the Lagrange multipliers \begin{restatable}[Causally linked variables have non-zero Lagrange multipliers]{Lemma}{fexpectations}\label{lm:f-expectations} Let $P$ have faithful $f$-expectations relative to a causal DAG $G$. Then it is $\lambda_k^P\neq 0$ for any bivariate function $f_k$ whose variables are connected in $G$. \end{restatable} Now we have all we need to connect the structure of the causal DAG and the Lagrange multipliers. \begin{restatable}[Causal structure from Lagrange multipliers]{Theorem}{cibothways}\label{th:ci_both_ways} Let $P$ be a distribution with faithful $f$-expectations w.r.t.\ a causal DAG $G$, and let $\hat{P}$ be the MAXENT solution satisfying the constraints imposed by the expectations of the functions $f$ which are sufficient to uniquely describe the marginal distributions $P(X_i,{\bf Z}),P(X_j,{\bf Z})$, and $P(X_i,X_j)$. Then the following two statements hold: \begin{enumerate} \item If ${\bf Z}$ is d-separating $X_i$ and $X_j$ in $G$, then all Lagrange multipliers $\lambda_k$ are zero for all $k$ with ${\bf X}_{S_k}=\left\{X_i,X_j\right\}$. \item If $\lambda_k=0$ for all $k$ with ${\bf X}_{S_k}=\left\{X_i,X_j\right\}$, then there is no direct link between $X_i$ and $X_j$ in the DAG $G$. \end{enumerate} \end{restatable} For the special case where we have some prior knowledge about the causal order, e.g.\ if we know that $X_j$ can be causally influenced by $X_i$ or ${\bf Z}$, but not the other way around, we can directly identify the absence or presence of a direct causal link between $X_i$ and $X_j$: \begin{restatable}[Identification of causal links when causal order is known]{Corollary}{noedge}\label{lm:no_edge} Let $P$ be a distribution with faithful $f$-expectations w.r.t.\ a causal DAG $G$, and let $\hat{P}$ be the MAXENT solution satisfying the constraints imposed by the expectations of the functions $f$ which are sufficient to uniquely describe the marginal distributions $P(X_i,{\bf Z}),P(X_j,{\bf Z})$, and $P(X_i,X_j)$. If it is excluded that $X_j$ can causally influence $X_i$ and ${\bf Z}$, i.e.\ the DAG $G$ cannot contain edges $X_j\to X_i$ or $X_j\to{\bf Z}$, then it holds \begin{align} &X_i \text{ is not directly linked to } X_j \notag\\ \quad \Leftrightarrow \quad &\lambda_k = 0 \quad \forall k\; \text{ with }\; {\bf X}_{S_k}=\left\{X_i,X_j\right\}\; . \end{align} This also holds for conditional MAXENT, and if conditional means are used (see \cref{eq:cmaxent_constraints_conditional}) it holds \begin{align} &X_i \text{ is not directly linked to } X_j \notag\\ \Leftrightarrow \quad &\hat{\lambda}_k^\nu=\hat{\lambda}_k^{\nu'} \quad\forall\nu,\nu', k\text{ with } {\bf X}_{S_k}=\left\{X_i,X_j\right\}\; . \end{align} \end{restatable} Note that when conditional means are used, the Lagrange multipliers need not be zero to indicate missing links, but need to be constant for all conditions. 
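As a concrete illustration of this criterion, the following sketch decides on the presence of a direct link from fitted multipliers of a pair; the tolerance value and the exact form of the relative-difference measure (cf.\ the estimator used in \cref{sec:experiments}) are illustrative choices, not prescribed by the corollary.
\begin{verbatim}
import numpy as np

def multipliers_constant(lams, tol=0.15):
    """Conditional-mean constraints (one multiplier per condition value):
    no direct edge between the pair iff the multipliers are approximately
    constant, measured by a relative difference."""
    lams = np.asarray(lams, dtype=float)
    spread = lams.max() - lams.min()
    return spread / max(np.abs(lams).max(), spread, 1.0) <= tol

def multipliers_zero(lams, tol=0.15):
    """Unconditional constraints: no direct edge iff all multipliers
    associated with the pair are (approximately) zero."""
    return np.max(np.abs(np.asarray(lams, dtype=float))) <= tol

multipliers_zero([0.02])              # True  -> no edge
multipliers_constant([1.40, 1.38])    # True  -> no edge
multipliers_constant([0.90, -0.60])   # False -> edge
\end{verbatim}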
We use this result in our experiments in \cref{sec:experiments}, where we estimate conditional MAXENT in the causal order, which is called \textit{causal MAXENT}, as proposed and justified by \cite{janzingPIR}. The reader may wonder about more general statements like the question \enquote{What information can be obtained about a DAG with $N$ nodes if only bivariate distributions are available?}. For this scenario, we have at least a necessary condition for causal links. For this recall that for any DAG $G$, the corresponding {\it moral graph} $G^m$ is defined as the undirected graph having edges if and only if the nodes are directly connected in $G$ or have a common child \citep{Lauritzen}. \begin{restatable}[Graph constructed from MAXENT with only bivariate constraints is a supergraph of the moral graph]{Theorem}{moral}\label{thm:moral} Let $f$ be a basis for the space of univariate and bivariate functions, i.e.\ the set of $f$-expectations determine all bivariate distributions uniquely. Let $P$ be a joint distribution that has faithful $f$-expectations w.r.t.\ the DAG $G$. Let $G^b$ be the undirected graph constructed from the MAXENT distribution by connecting $X_i$ and $X_j$ if and only if there is a non-zero Lagrange multiplier corresponding to some bivariate function of $X_i$ and $X_j$. Then $G^b$ is a supergraph of $G^m$, the moral graph of $G$. \end{restatable} \Cref{thm:moral} provides at least a candidate list for potential edges from bivariate information alone, which tells us where additional observations are needed to identify edges. The edges are candidates for being in the Markov blanket, which thus limits the number of variables that need to be considered for a prediction model. Note that inferring causal relations via bivariate information is not uncommon: after all, many implementations of the PC algorithm \citep{Spirtes2000,kalisch2007estimating,kalisch2008robustification,harris2013pc,cui2016copula,tsagris2019bayesian} use partial correlations instead of CIs. For real-valued variables, one can interpret this in the spirit of this paper since it infers CIs to hold whenever they are true for the multivariate Gaussian matching the observed first and second moments (i.e.\ the unique MAXENT distribution satisfying these constraints). These heuristics avoid the complex problem \citep{Shah2020} of non-parametric CI testing. In addition to the results above, our approach also generalises the partial correlation heuristics to more general functions $f_k$, including multivariate and higher-order statistics. Note also that we do not propose a {\it general purpose} conditional independence test because we do not fully understand what sort of conditional dependence it detects. We have concluded that it \enquote{generically} (i.e.\ subject to faithful $f$-expectations) has power against conditional dependencies generated by a DAG. Without a DAG (whose distributions can be easily parameterised), we do not see a clear notion of genericity on which a similar statement could be based. For most applications, estimating the joint distribution of many variables is not an end in itself. Instead, one will often be interested in particular properties of the joint distributions for specific reasons. In these cases, MAXENT is already helpful if it resembles the statistical properties of interest. So far, we have shown this for some conditional independence. We will now sketch how entropy maximisation can be used for pooling predictions made from different datasets. 
\begin{restatable}[Predictive power of MAXENT]{Theorem}{predictive}\label{th:predictive_power} Let $X_j, X_i, {\bf Z}$ be binary variables, with ${\bf Z}$ possibly high dimensional. Furthermore, let $\hat{P}(X_j \mid X_i, {\bf Z})$ be the MAXENT solution that maximises the conditional entropy of $X_j$ given $X_i$ and ${\bf Z}$, subject to the moment constraints given by the observed pairwise distributions $P(X_j, X_i)$, $P(X_j, {\bf Z})$, and $P(X_i, {\bf Z})$. Then $\hat{P}$ is a better predictor of $X_j$ than any of the individual bivariate probabilities, as measured by the likelihood of any point where all variables are observed, i.e.\ a point from $P(X_j, X_i, {\bf Z})$. \end{restatable} This shows that merging datasets with MAXENT also improves the predictive power compared to using the observed marginal distributions. \section{PROOFS}\label{app:proofs} Here we repeat the theorems, corollaries and lemmas from the main text and provide the complete proofs for all of them. \cioneway* \begin{proof} We first show that CI w.r.t.\ $P$ results in CI w.r.t.\ the MAXENT distribution. Let $Q$ be a distribution satisfying the following two conditions: \begin{itemize} \item[(a)] $Q(X_i,X_j)=P(X_i,X_j)$, $Q(X_i,{\bf Z})=P(X_i,{\bf Z})$, and $Q(X_j,{\bf Z})=P(X_j,{\bf Z})$, and \item[(b)] $X_i\mathrel{\perp\mspace{-10mu}\perp} X_j\mid {\bf Z} \;\; [Q]\;$. \end{itemize} We know that such a distribution satisfying (a) and (b) exists, as this is the case at least for $P$ itself. Now assume that the MAXENT distribution $\hat{P}$ satisfies condition (a) but not condition (b). Then the entropy of $\hat{P}$ is \begin{align*} H_{\hat{p}}({\bf X}) &= H_{\hat{p}}(X_i\mid X_j,{\bf Z}) + H_{\hat{p}}(X_j,{\bf Z}) \stackrel{\centernot{\text{(b)}}}{<} H_{\hat{p}}(X_i\mid {\bf Z}) + H_{\hat{p}}(X_j,{\bf Z}) \\ &\stackrel{\text{(a)}}{=}H_{q}(X_i\mid {\bf Z}) + H_{q}(X_j,{\bf Z}) \stackrel{\text{(b)}}{=} H_{q}(X_i\mid X_j,{\bf Z}) + H_{q}(X_j,{\bf Z}) = H_{q}({\bf X}) \; . \end{align*} This violates the assumption that $\hat{P}$ maximises the entropy. Hence, the distribution satisfying the marginal constraints in (a) that maximises the entropy must satisfy the CI in (b). Next, we show that CI w.r.t.\ the MAXENT distribution results in the respective Lagrange multipliers being zero. By applying Bayes' rule to the MAXENT distribution in \cref{eq:maxent_sol} it can be seen that \begin{align*} \hat{p}(x_i\mid x_j,{\bf z}) &= \frac{\hat{p}({\bf x})}{\sum_{x_i}\hat{p}({\bf x})} = \frac{\exp\left[\sum_{k} \lambda_k f_k({\bf x}_{S_k}) - \alpha\right]}{\sum_{x_i}\exp\left[\sum_{k} \lambda_k f_k({\bf x}_{S_k}) - \alpha\right]} \\ &= \frac{\exp\left[\sum\limits_{{k \text{ with }}\atop{{\bf X}_{S_k}=\{X_i\}\cup{\bf Z}}} \lambda_k f_k({\bf x}_{S_k}) + \sum\limits_{{k \text{ with }}\atop{{\bf X}_{S_k}=\{X_i,X_j\}}} \lambda_k f_k({\bf x}_{S_k})\right]} {\sum\limits_{x_i}\exp\left[\sum\limits_{{k\text{ with }}\atop{{\bf X}_{S_k}=\{X_i\}\cup{\bf Z}}} \lambda_k f_k({\bf x}_{S_k}) + \sum\limits_{{k \text{ with }}\atop{{\bf X}_{S_k}=\{X_i,X_j\}}} \lambda_k f_k({\bf x}_{S_k})\right]} \; , \end{align*} where, in the last step, all terms that do not involve $x_i$ have been cancelled between numerator and denominator. Using the linear independence of the functions $f$, it directly follows that \begin{align*} &\hat{p}(x_i\mid x_j,{\bf z}) = \hat{p}(x_i\mid {\bf z}) \quad \\ \Rightarrow \quad &\lambda_k = 0 \quad\forall k \;\text{ with }\; {\bf X}_{S_k}=\left\{X_i,X_j\right\} \end{align*} and from this the assertion follows directly. 
\end{proof} An alternative way to prove \cref{th:ci_one_way} is by considering an undirected graphical model and using insights from information geometry. To do this, we consider an undirected graph $G_U$ with a vertex set corresponding to the random variables ${\bf X}$. Furthermore, let the joint distribution $P({\bf X})$ satisfy the global Markov condition on $G_U$ and have strictly positive density $p({\bf x})>0$. Then the Hammersley-Clifford theorem \citep{Lauritzen} tells us that the joint density $p({\bf x})$ can be factorised into \begin{align} p({\bf x}) = \frac{1}{\tilde{\alpha}} \prod_{C\in{\cal C}} \psi_C({\bf x}_C) \end{align} with \begin{align} \tilde{\alpha} = \sum_{{\bf x}} \prod_{C\in{\cal C}} \psi_C({\bf x}_C) \end{align} for some clique potentials $\psi_C: {\cal X}_C\to[0,\infty)$, where ${\cal C}$ is the set of maximal cliques of the graph $G_U$ and ${\bf X}_C$ are the variables corresponding to the nodes in clique $C$. In a log-linear model, we can formulate the clique potentials as \begin{align} \psi_C({\bf x}_C) = \exp\left[\sum_{k=1}^K \theta_{C,k} h_k({\bf x}_C)\right] \end{align} for some measurable functions $h_k: {\cal X}_C\to{\mathbb R}$. Hence, the joint density can be written in the form \begin{align}\label{eq:p_factor_HC} p({\bf x}) = \exp\left[\sum_{C,k} \theta_{C,k} h_k({\bf x}_C) - \log\tilde{\alpha}\right] \; . \end{align} This strongly resembles the MAXENT distribution (see \cref{eq:maxent_sol}). And indeed, if the subsets of variables ${\bf X}_{S_k}$ observed in the different datasets were equal to the maximal cliques of the undirected graph, there would be a one-to-one correspondence between the MAXENT solution and the factorised true distribution. As a result, we would directly get equivalence in \cref{eq:ci_one_way} in \cref{th:ci_one_way}. In general, however, this is not the case. Nevertheless, the clique potential formalism provides an additional way to prove \cref{th:ci_one_way}. \begin{proof}[Alternative proof for \cref{th:ci_one_way}] Let us, without loss of generality, assume that $Z={\bf Z}$ is one (vector-valued) variable. Then $X_i\mathrel{\perp\mspace{-10mu}\perp} X_j\mid Z$ w.r.t.\ $P$ implies that $P$ can be represented by the undirected graphical model $X_i - Z - X_j$ \citep{Lauritzen} or a subgraph of it (in case $X_i$ or $X_j$ are also independent of $Z$). Accordingly, $P$ factorises according to the clique potentials of this graph and \cref{eq:p_factor_HC}. Thus $P$ lies in the exponential manifold \citep{Amari} $\tilde{E}$ of distributions given by $\exp\left[h_1(x_i,z) + h_2(x_j,z)-\log\tilde{\alpha}\right]$ with arbitrary functions $h_1,h_2$. Let $E\supset \tilde{E}$ be the exponential manifold of distributions $\exp\left[h_1(x_i,z) + h_2(x_j,z)+ h_3(x_i,x_j)-\log\tilde{\alpha}\right]$ with arbitrary functions $h_1,h_2,h_3$. By elementary results of information geometry \citep{Amari}, $\hat{P}$ can also be defined as the projection of $P$ onto $E$. Since $P$ lies in $\tilde{E}$, and thus also in $E$, it follows that $P=\hat{P}$ in this case. This also implies that $h_3(x_i,x_j)=0$ in the MAXENT distribution and thus $\sum_k\lambda_kf_k(x_i,x_j)=0$. Due to the linear independence of the functions $f$ this implies that $\lambda_k=0$ for all $k$ with ${\bf X}_{S_k}=\left\{X_i,X_j\right\}$. \end{proof} \fexpectations* \begin{proof} If $X_i$ and $X_j$ are connected in $G$, the distribution $q({\bf x})\sim \exp\left[ f_k(x_i,x_j)\right]$ is Markov relative to $G$. 
Obviously, it is $\lambda_k^Q=1\neq 0$, and due to $P$ having faithful $f$-expectations it is also $\lambda^P_k\neq 0$. \end{proof} \cibothways* \begin{proof} The first statement follows from \cref{th:ci_one_way}, and the second statement directly follows from \cref{lm:f-expectations}. \end{proof} \noedge* \begin{proof} This directly follows from \cref{th:ci_both_ways}. \end{proof} \moral* \begin{proof} The undirected graph $G^b$ contains all edges of $G$ due to \cref{lm:f-expectations}. It only remains to show that $G^b$ also connects pairs with a common child. To show that $G^b$ also connects pairs $X_i,X_j$ with a common child $X_c$, we first consider the 3-node DAG $X_i \rightarrow X_c \leftarrow X_j$, and construct an example distribution, that is Markovian for this DAG, which uses only pair-interactions, including an interaction term $X_i,X_j$. By embedding this distribution into a general joint distribution, we conclude that common children can result in interaction terms after projection on pair interactions. We define a Markovian distribution $P$ via $P(X_i) P(X_j) P(X_c\mid X_i,X_j)$, with \begin{align*} P(X_c\mid X_i,X_j) := \exp &\left[ \phi_i (X_c,X_i) + \phi_j(X_c,X_j) - \log z(X_i,X_j)\right] \;, \end{align*} where the partition function $z$ reads \begin{align*} z(X_i,X_j) := \sum_{x_c} \exp \left[ \phi_i (x_c,X_i) + \phi_j(x_c,X_j)\right]\;. \end{align*} By construction, $P$ lies in the exponential manifold spanned by univariate and bivariate functions. It therefore coincides with the MAXENT distribution subject to all bivariate marginals. Thus, $G^b$ contains the edge $X_i - X_j$ whenever $z(X_i,X_j)$ depends on both $X_i$ and $X_j$. This dependence can be easily checked, for instance, for $\phi_i(x_c,x_i):=\delta_{x_c} \delta_{x_i}$ and $\phi_j(x_c,x_j):=\delta_{x_c} \delta_{x_j} $, where $\delta_{x_i},\delta_{x_j},\delta_{x_c}$ are indicator functions for arbitrary values, as defined in \cref{eq:delta}. For any DAG $G$ with $N$ variables containing the collider above as subgraph, $P(X_1,\dots,X_N) \sim P(X_i,X_j,X_c)$ is also Markov relative to $G$ and, at the same time, coincides with the MAXENT solution subject to the bivariate constraints. Hence the moral graph $G^m$ still has an edge $X_i - X_j$ because there exists a distribution, Markovian to $G$, that has a bivariate term depending on $X_i$ and $X_j$ in the MAXENT distribution subject to all bivariate distributions. \end{proof} \predictive* \begin{proof} The proof follows directly from the duality of MAXENT and maximum likelihood \citep{wainwright2008graphical}. However, we prove it here using the Lagrange multipliers found by the optimisation procedure. By the definition of maximum likelihood and MAXENT, we can write: \begin{align}\label{eq:max_likelihood} \Exp{P(X_j, X_i, {\bf Z})}{\log \hat{P}(x_j \mid x_i, {\bf z})} &= \mathbb{E}_{P(X_j, X_i, {\bf Z})}\left[\log \max_{\lambda} \exp\left(\sum_{k} \lambda_{k}f_{k}(x_j, {\bf z}) + \sum_{l} \lambda_{l}g_{l}(x_j, x_i) -\beta(x_i, {\bf z})\right)\right] \end{align} On the other hand, if we do not use the MAXENT solution, the maximum likelihood estimate we can attain consistent with $P(X_j, X_i)$ is $P(X_j \mid X_i)$. From \cref{eq:max_likelihood}, we can attain that solution by setting all $\lambda_{k}$ to zero. This means that if $P(X_j, {\bf Z})$ is not valuable in predicting the multipliers, then we attain the same solution as not using the information from $P(X_j, {\bf Z})$. 
However, if there is information to be exploited from the moments given by $P(X_j, {\bf Z})$, then the multipliers are not set to zero, thus attaining a higher likelihood. \end{proof} \section{RELATED WORK}\label{sec:related_work} In the context of missing data \citep{Rubin1976,Bareinboim2011,Mohan2021}, many methods have been developed to investigate CI, how the joint distribution of the variables factorises, and how to predict the result of interventions \citep{Pearl2000,Spirtes2000,Chickering2002b,tsamardinos2006max}. However, most of these approaches assume they are given one dataset in which some values are missing at random. More recently, the even more challenging task of inferring causal relationships from multiple datasets has been addressed \citep{tillman2009structure,ramsey2010six,triantafillou2010learning,eberhardt2010combining,claassen2010causal,tillman2011learning,hyttinen2013discovering,tillman2014learning}. These approaches and our method have in common that they assume that the underlying causal structures are similar across the different datasets. \citet{triantafillou2010learning,tillman2011learning}, for instance, assume that a single causal mechanism generates the data and that the dependencies and independencies are captured by a maximal ancestral graph (MAG) and the m-separation criterion \citep{richardson2002ancestral}. Various methods have also been proposed to discover the causal graph from multiple datasets containing measurements in different environments. Some combine statistics or constraints from the different datasets to construct a single causal graph \citep{claassen2010causal,tillman2011learning,hyttinen2013discovering,hyttinen2014constraint,triantafillou2015constraint,rothenhausler2015backshift,forre2018constraint}, while others directly combine the data from the different datasets and construct a causal graph from the pooled data \citep{cooper1997simple,hauser2012characterization,cooper2013causal,mooij2013cyclic,peters2016causal,oates2016estimating,zhang2017causal,mooij2020joint}. In \citet{mooij2020joint}, for instance, the union of causal graphs in each dataset (or context) is found by jointly modelling the context variables and the observed variables. The main difference between these approaches and ours is that they all rely on statistical information that reveals CIs in each dataset individually. Hence they can only be applied if at least three variables have been observed jointly, while our approach can also be used if only pairwise observations are available. In \cite{gresele2022causal}, the structural marginal question was asked: can marginal causal models over subsets of variables with a known causal graph be consistently merged? They proved that certain SCMs can be falsified using only interventional and observational data and a known graph structure. Their work differs from ours in that we are interested in interventional quantities, while they focus on counterfactual ones. As a result, the questions that can be answered with our framework are different. In \cref{app:add_related_work} we comment on less related -- but still interesting -- literature on gaining statistical information from causal knowledge and other entropy-based approaches to extract and exploit causal information. 
\section{ADDITIONAL RELATED WORK}\label{app:add_related_work} \paragraph{Gaining statistical information from causal knowledge} One approach to using causal information to improve the approximation to the true joint distribution is {\it causal} MAXENT \citep{sun2006causal,janzing2009distinguishing}, a particular case of conditional MAXENT, where the entropy of the variables is maximised along the causal order. For cause-effect relations, it simply amounts to first maximising the entropy of the cause subject to all constraints that refer to it, and then maximising the conditional entropy of the effect given the cause subject to all constraints. Maximising the entropy in the causal order results in a distribution with lower entropy than maximising the entropy jointly. Consequently, the distribution learned in the causal order will have better predictive power. Another simple example of how causal information can help to gain statistical insights is the following: imagine we are given the bivariate marginal distributions $P(X_1,X_2)$ and $P(X_2,X_3)$. In the general case, where we do not know the causal graph, we cannot identify the joint distribution. However, when we know that the three variables form a causal chain $X_1\to X_2\to X_3$, this causal information is enough to identify the joint distribution uniquely \citep{overlapping}; see the short numerical sketch below.\footnote{In general, it is a non-trivial problem to decide whether a set of marginal distributions of different but non-disjoint sets of variables is consistent with a joint distribution (the so-called \enquote{marginal problem} \citep{Vorobev1962}).} For less simplistic scenarios, even perfect causal knowledge does not uniquely determine the joint distribution. But causal information may still help to recover {\it some} properties of the joint distribution. In \citet{Tsamardinos}, for instance, the causal structure is used to predict CI of variables that have not been observed together. This paper approaches a complementary problem: gaining causal insights by merging statistical information from different datasets. \paragraph{Entropy based approaches to extract or exploit causal information} Several methods that exploit the relationship between information theory and causality have been proposed in the literature. In \citet{kocaoglu2017entropic,compton2021entropic} properties of the entropy are used to infer the causal direction between categorical variable pairs. Their main idea is that if the entropy of the exogenous noise of a functional assignment in a structural causal model (SCM) is low, then the causal direction often becomes identifiable. Their approach differs from ours in several respects: first, we investigate the presence and absence of causal edges from merged data, as opposed to trying to infer the causal direction; second, we are not constrained to variable pairs; finally, we use the entropy directly as the objective function, whereas they compare the entropies of the noise variables to decide the causal direction. In \citet{ziebart2010modeling,ziebart2013principle} the maximum causal entropy is introduced to solve inverse reinforcement learning problems. Their approach is based on having knowledge about a possible causal graph, making the MAXENT computation cheaper by exploiting the causal structure of the data. Their work differs from ours in the type of insights drawn from the MAXENT estimation: while they aim to save computation, we aim to identify causal edges from the Lagrange multipliers. 
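To make the chain example above concrete, the following minimal sketch (our own illustration with hypothetical numbers, not code from any of the cited works) reconstructs the joint distribution of three binary variables from the two bivariate marginals $P(X_1,X_2)$ and $P(X_2,X_3)$, using only the chain assumption $X_1\to X_2\to X_3$, i.e., $P(x_1,x_2,x_3)=P(x_1,x_2)\,P(x_3\mid x_2)$.
\begin{verbatim}
import numpy as np

# Ground-truth chain X1 -> X2 -> X3 over binary variables (used only to
# generate test marginals; in practice only the marginals are observed).
p_x1 = np.array([0.3, 0.7])
p_x2_given_x1 = np.array([[0.8, 0.2], [0.1, 0.9]])    # rows: x1, cols: x2
p_x3_given_x2 = np.array([[0.6, 0.4], [0.25, 0.75]])  # rows: x2, cols: x3
joint = np.einsum('a,ab,bc->abc', p_x1, p_x2_given_x1, p_x3_given_x2)

# Only the two bivariate marginals are assumed to be observed.
p_12 = joint.sum(axis=2)   # P(X1, X2)
p_23 = joint.sum(axis=0)   # P(X2, X3)

# Knowing the chain X1 -> X2 -> X3 (hence X1 _||_ X3 | X2), the joint is
# identified from the two marginals alone.
p_2 = p_23.sum(axis=1)              # P(X2)
p_3_given_2 = p_23 / p_2[:, None]   # P(X3 | X2)
reconstructed = np.einsum('ab,bc->abc', p_12, p_3_given_2)

print(np.allclose(reconstructed, joint))   # True
\end{verbatim}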
\paragraph{Semi-supervised learning (SSL)}The relation to semi-supervised learning (SSL) is interesting but still unexplored. At a high level, the connection is that SSL uses $P(X)$ to infer properties of $P(X,Y)$, which has been claimed to be possible only if $Y$ is the cause and $X$ the effect, but not vice versa \cite{anticausalSSL}. Hence, SSL also infers joint properties from a marginal, but it relies on model assumptions such as the cluster assumption, the manifold assumption, or smoothness of decision boundaries. Relating this inductive bias to MAXENT probably requires defining the correct type of functions $f$. \section{ADDITIONAL RESULTS}\label{sec:add_results} In this section, we provide exemplary plots of the Lagrange multipliers for the synthetic experiments discussed in \cref{sec:experiments}, as well as further results for the experiments on the two real-world datasets. \paragraph{Synthetic Data} \begin{figure}[t] \sbox0{\begin{subfigure}[t]{.3\textwidth} \centering \adjustbox{max width=.9\textwidth}{\input{graph_ex_a_example.tex}} \subcaption{Exemplary graph (a)\label{fig:graph_exp_a_example}} \end{subfigure}} \sbox1{\begin{subfigure}[t]{.3\textwidth} \centering \adjustbox{max width=.9\textwidth}{\input{graph_ex_b_example.tex}} \subcaption{Exemplary graph (b)\label{fig:graph_exp_b_example}} \end{subfigure}} \sbox2{\begin{subfigure}[t]{.3\textwidth} \centering \adjustbox{max width=.9\textwidth}{\input{graph_ex_c_example.tex}} \subcaption{Exemplary graph (c)\label{fig:graph_exp_c_example}} \end{subfigure}} \sbox3{\begin{subfigure}[t]{.3\textwidth} \centering \includegraphics[width=\textwidth]{example_multipliers_a} \subcaption{Exemplary result for the Lagrange multipliers for graph (a)\label{fig:first_multipliers}} \end{subfigure}} \sbox4{\begin{subfigure}[t]{.3\textwidth} \centering \includegraphics[width=\textwidth]{example_multipliers_b} \subcaption{Exemplary result for the Lagrange multipliers for graph (b)\label{fig:second_multipliers}} \end{subfigure}} \sbox5{\begin{subfigure}[t]{.3\textwidth} \centering \includegraphics[width=\textwidth]{example_multipliers_c} \subcaption{Exemplary result for the Lagrange multipliers for graph (c)\label{fig:third_multipliers}} \end{subfigure}} \centering \begin{tabular}{ccc} \usebox0 & \usebox1 & \usebox2 \\[2em] \usebox3 & \usebox4 & \usebox5 \end{tabular} \caption{Lagrange multipliers found for one randomly picked dataset for each of the displayed graphs. One can see that in all three cases the multipliers are very close to constant whenever an edge is missing. On the other hand, the differences between them are significant whenever there is an edge from $X_i$ to $X_0$.\label{fig:synth_multipliers}} \end{figure} In \cref{fig:synth_multipliers} we show exemplary results for the Lagrange multipliers for the experiments with synthetic data discussed in \cref{sec:experiments}. For each of the three graphs in \cref{fig:graph_exp_a,fig:graph_exp_b,fig:graph_exp_c} we randomly picked one dataset, for which we show the exact graph structure used in \cref{fig:graph_exp_a_example,fig:graph_exp_b_example,fig:graph_exp_c_example} and the resulting Lagrange multipliers in \cref{fig:first_multipliers,fig:second_multipliers,fig:third_multipliers}. In all three cases, the difference between the multiplier associated with $\Exp{}{X_0\mid X_i=0}$ and the one associated with $\Exp{}{X_0\mid X_i=1}$ is very small whenever there is no edge from $X_i$ to $X_0$, and relatively large whenever there is an edge connecting $X_i$ and $X_0$. 
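As a complement to these plots, the following self-contained sketch illustrates how Lagrange multipliers of this kind can be obtained in practice. It is a generic illustration written by us (the feature choice and target moments are hypothetical and do not reproduce the exact setup of \cref{sec:experiments}): the MAXENT problem over binary variables with moment constraints is solved by minimising its convex dual, the log-partition function minus the constrained moments.
\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import minimize

# All joint configurations of three binary variables (X0, X1, X2).
states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)

# Constraint functions: univariate means and two pairwise products.
def features(s):
    x0, x1, x2 = s
    return np.array([x0, x1, x2, x0 * x1, x0 * x2])

F = np.array([features(s) for s in states])   # shape (8, 5)

# Hypothetical target moments (in practice: empirical moments of the data).
target = np.array([0.5, 0.4, 0.6, 0.3, 0.3])

def dual(lam):
    # Convex dual of entropy maximisation: log Z(lambda) - lambda . target.
    logits = F @ lam
    return np.log(np.exp(logits).sum()) - lam @ target

res = minimize(dual, np.zeros(F.shape[1]), method="BFGS")
lam = res.x
p = np.exp(F @ lam)
p /= p.sum()

print("Lagrange multipliers:", np.round(lam, 3))
print("fitted moments:      ", np.round(F.T @ p, 3))   # matches `target`
\end{verbatim}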
\paragraph{Real Data} First, we further investigate the results from the experiment on the data from \cite{Gapminder}. For this, we consider different subsets of the variables \begin{itemize} \item children per woman / fertility (FER) \citep{Gapminder}; \item inflation-adjusted Gross Domestic Product (GDP) per capita \citep{WB:GDPPC}; \item Human Development Index (HDI) \citep{UNHDR:HDI}; and \item life expectancy (LE) in years \citep{Gapminder}. \end{itemize} as potential causes of the target variable, CO2 emissions in tonnes per capita \citep{osti_1389331}. We always use data from 2017 for all variables and standardise it before estimation. As constraints, we always use the unconditional mean and variance of the CO2 emissions and the pairwise covariance between the CO2 emissions and each considered variable. We compare our results with the output of the KCI-test, where we use a significance threshold of $\alpha=0.05$. The results in \cref{fig:exp_results_gapminder} and \cref{tab:gapminder_compare} show that the conclusions drawn from the Lagrange multipliers are consistent across the different sets of considered potential causes. For the KCI-test, on the other hand, we obtain different statements about the CI of CO2 and HDI, and of CO2 and FER, depending on the considered conditioning set. At first glance, this might not be surprising since, of course, the CI relationships can change when more variables are considered. For instance, one could imagine that the causal effect of HDI on CO2 is only via FER. This would explain the behaviour of the KCI-test w.r.t.\ HDI. To check whether this is the case -- which would contradict the result from the Lagrange multipliers -- we perform another KCI-test for CO2 and HDI conditioned only on FER. Summarising the obtained CI statements, we get: \begin{alignat}{2} CO2 &\centernot{\CI} HDI &&\mid GDP \label{eq:CIfirst} \\ CO2 &\centernot{\CI} HDI && \mid FER \label{eq:CIsecond}\\ CO2 &\mathrel{\perp\mspace{-10mu}\perp} HDI &&\mid GDP, FER \label{eq:CIthird}\\ CO2 &\mathrel{\perp\mspace{-10mu}\perp} GDP &&\mid HDI \label{eq:CIforth}\\ CO2 &\mathrel{\perp\mspace{-10mu}\perp} GDP && \mid HDI, FER \label{eq:CIfifth}\\ CO2 &\centernot{\CI} FER &&\mid HDI, GDP \label{eq:CIsixth} \end{alignat} If we now try to draw a causal DAG for these four variables based on the CI statements in \cref{eq:CIfirst,eq:CIsecond,eq:CIthird,eq:CIforth,eq:CIfifth,eq:CIsixth}, we find that this is in fact not possible: no DAG over these four variables is consistent with all of \cref{eq:CIfirst,eq:CIsecond,eq:CIthird,eq:CIforth,eq:CIfifth,eq:CIsixth}. There are, of course, many possible reasons why the KCI-test provides these seemingly inconsistent results. For instance, one could argue that choosing $\alpha=0.05$ as a threshold for the decision is rather arbitrary and possibly suboptimal. All obtained p-values were relatively small (less than 0.2), which indicates that the question \enquote{CI or no CI?} may not be easy to answer in this case. Furthermore, we do not know whether the considered example violates some of the assumptions made in the KCI-test. This shows that the KCI-test should not be mistaken for \enquote{ground truth}, and the fact that the conclusions drawn from the Lagrange multipliers do not always coincide with the conclusions drawn from the KCI-test is not necessarily a problem of our proposed approach. 
\begin{figure}[t] \centering \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\columnwidth]{experiment_gapminder_reduced_covariance} \caption{considering GDP and HDI \label{fig:co2_gdp_hdi}} \end{subfigure} \hfill \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\textwidth]{experiment_gapminder_no_le_covariance} \caption{considering FER, GDP, and HDI \label{fig:co2_fer_gdp_hdi}} \end{subfigure} \hfill \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\textwidth]{experiment_gapminder_covariance} \caption{considering FER, GDP, HDI, and LE \label{fig:co2_fer_gdp_hdi_le}} \end{subfigure} \caption{\label{fig:exp_results_gapminder} Lagrange multipliers for the real-world dataset from \citet{Gapminder} when considering different sets of variables as potential causes for CO2 emissions. We see that in all three cases the results for GDP and HDI are consistent, indicating that HDI is directly linked to CO2 emissions whereas GDP is not.} \end{figure} \begin{table}[t] \centering \caption{We show the Lagrange multipliers $\lambda_i$ for the MAXENT solution (see (\subref{tab:co2_gdp_hdi}) to (\subref{tab:co2_fer_gdp_hdi_le})) together with the p-values for the KCI-test (see (\subref{tab:co2_gdp_hdi_p}) to (\subref{tab:co2_fer_gdp_hdi_le_p})). We mark where the multipliers and p-values indicate the presence of a direct edge connecting $X_i$ and CO2 emissions, or, respectively, that the two are not CI given the other considered variable(s). We see that the conclusions drawn from the Lagrange multipliers are consistent across the different considered sets of potential causes, while the CI statements of the KCI-test change when changing the conditioning set. \label{tab:gapminder_compare}} \begin{subtable}[b]{.3\textwidth} \centering \caption{considering GDP and HDI as potential causes\label{tab:co2_gdp_hdi}} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lrc} \toprule variable $X_i$ & $\lambda_i$ & edge \\ \midrule &&\\ GDP & -0.29 & \ding{55} \\ HDI & 3.26 & \ding{51} \\ &&\\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \hfill \begin{subtable}[b]{.3\textwidth} \centering \caption{considering FER, GDP, and HDI as potential causes\label{tab:co2_fer_gdp_hdi}} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lrc} \toprule variable $X_i$ & $\lambda_i$ & edge \\ \midrule FER & -3.22 & \ding{51} \\ GDP & -0.29 & \ding{55} \\ HDI & 3.69 & \ding{51} \\ && \\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \hfill \begin{subtable}[b]{.3\textwidth} \centering \caption{considering FER, GDP, HDI, and LE as potential causes \label{tab:co2_fer_gdp_hdi_le}} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lrc} \toprule variable $X_i$ & $\lambda_i$ & edge \\ \midrule FER & -3.98 & \ding{51} \\ GDP & -0.27 & \ding{55} \\ HDI & 5.40 & \ding{51} \\ LE & -1.15 & \ding{51} \\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \\[2em] \begin{subtable}[b]{.3\textwidth} \centering \caption{considering GDP and HDI as potential causes\label{tab:co2_gdp_hdi_p}} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lrc} \toprule variable $X_i$ & p-value & no CI \\ \midrule &&\\ GDP & 0.19 & \ding{55}\\ HDI & 0.02 & \ding{51}\\ &&\\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \hfill \begin{subtable}[b]{.3\textwidth} \centering \caption{considering FER, GDP, and HDI as potential causes\label{tab:co2_fer_gdp_hdi_p}} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lrc} \toprule variable $X_i$ & p-value & no CI \\ \midrule FER 
& 0.03 & \ding{51} \\ GDP & 0.13 & \ding{55}\\ HDI & 0.13 & \ding{55}\\ & & \\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \hfill \begin{subtable}[b]{.3\textwidth} \centering \caption{considering FER, GDP, HDI, and LE as potential causes\label{tab:co2_fer_gdp_hdi_le_p}} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lrc} \toprule variable $X_i$ & p-value & no CI \\ \midrule FER & 0.08 & \ding{55} \\ GDP & 0.16 & \ding{55}\\ HDI & 0.14 & \ding{55}\\ LE & 0.00 & \ding{51}\\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \end{table} Finally, we show in \cref{fig:exp_results_depression} the Lagrange multipliers for the experiment on the depression rate w.r.t.\ place of residence, age, and sex. We see that for all three factors the multipliers are {\it not} constant across the various conditions. This indicates that all three factors might directly cause the depression rate. \begin{figure}[t] \centering \includegraphics[width=.7\textwidth]{experiment_depression_bivariates} \caption{\label{fig:exp_results_depression} Lagrange multipliers for the depression dataset. We see that for none of the three potential causes (place of residence, age, and sex) are the multipliers close to constant. Hence we conclude that all three factors can have a direct impact on the depression rate.} \end{figure}
\section{Introduction} \label{Introduction} The properties of dark matter (DM), which is believed to be mostly cold and collisionless, have been extensively explored in dynamical simulations. A much touted result from these simulations is the ``universal'' density profile (Navarro, Frenk, \& White 1997, hereafter NFW; Moore et al.\ 1998) of galaxies and galaxy clusters. As the main mass constituent of galaxy clusters, DM largely self-gravitates and dominates the hydrodynamics of intracluster (IC) gas and the dynamics of member galaxies. Cluster DM density profiles deduced from X-ray observations (Pointecouteau, Arnaud, \& Pratt 2005; Vikhlinin et al.\ 2006; Schmidt \& Allen 2007; Arnaud, Pointecouteau, \& Pratt 2008), galaxy velocity distribution (Diaferio, Geller, \& Rines 2005), SZ measurements (Atrio-Barandela et al.\ 2008), and strong and weak lensing measurements (Broadhurst et al.\ 2005a, hereafter B05a; Broadhurst et al.\ 2005b, hereafter B05b; Limousin et al.\ 2007; Medezinski et al.\ 2007; Lemze et al.\ 2008, hereafter L08; Broadhurst et al.\ 2008; Zitrin et al.\ 2009, 2010; Umetsu et al.\ 2010) are broadly claimed to be consistent with NFW profiles. However, the shape of the profile in the inner cluster region, where it is predicted to have a characteristic radial slope of $-1$, has been deduced to be shallower in some studies (Kravtsov et al.\ 1998; Ettori et al.\ 2002; Sanderson et al.\ 2004) and steeper, around $-1.5$, in others (Fukushige \& Makino 1997, 2001, 2003; Moore et al.\ 1999; Ghigna et al.\ 2000; Klypin et al.\ 2001; Navarro et al.\ 2004; Limousin et al.\ 2008). A more complete description of the clustering properties of DM necessitates characterization of its phase space distribution. Whereas the DM density profile can be determined directly from analysis of X-ray, lensing, and SZ measurements, deducing the DM velocity distribution is considerably more challenging. On the other hand, the dynamical properties of member galaxies can be studied directly in terms of their density and velocity profiles, including a study of the radial behavior of the velocity dispersion. Also, the location of the velocity `caustics' can now be studied both in individual massive clusters where the data quality is high (Lemze et al.\ 2009, hereafter L09) and in composite surveys for which only lower quality measurements are available but statistical results can be derived (e.g., Biviano \& Girardi 2003). The dynamical evolution of the galaxy population in a cluster is presumed to be largely collisionless following an initial phase of mean-field (`violent') relaxation of the main sub-cluster progenitors that merged to form the cluster. This collisionless behavior is expected particularly outside the central cluster region. As such, the basic dynamical characteristics of cluster galaxies are expected to resemble those of DM, which is strictly collisionless. For example, in the cluster 1E0657-56 -- the ``bullet cluster'' -- despite a recent collision of two massive clusters, the spatial distributions of DM and galaxies are quite similar. A conical shock front is visible, indicating that the two clusters have passed through each other with a clearly collisionally merged gas distribution, while the galaxies and the lensing mass are largely intact, implying straightforwardly that the DM and galaxies are collisionless (Markevitch et al.\ 2002; Clowe et al.\ 2004; Brada{\v c} et al.\ 2006). 
In the colliding cluster A520, on the other hand, a massive dark core is claimed to coincide with the central X-ray emission peak, but the region is largely devoid of galaxies (Mahdavi et al.\ 2008), though this depends on the way the weak lensing analysis is formulated (Okabe \& Umetsu 2008). With increasingly extensive and precise data, such as we have acquired for A1689, it is now possible to assess the collisionless nature of galaxies and DM by measuring the degree of consistency between the measured galaxy and DM density profiles and the profile of the DM velocity anisotropy. A1689 seems well relaxed (with possibly a small deviation from a relaxed state; Andersson \& Madejski 2004). It has a centrally located cD galaxy, and an X-ray emission region that is spherically symmetric (Xu \& Wu 2002; L08; Riemer-Sorensen et al.\ 2008). The cluster has well-defined galaxy velocity caustics with no major infall of matter close to the virial radius (L09), and only a low level of substructure (Broadhurst et al.\ 2005a,b; Umetsu \& Broadhurst 2008). We have previously determined the DM and gas density profiles in A1689 from a combined analysis of lensing and X-ray measurements (L08), using an approach that we refer to as model-independent, since we did not assume particular functional forms for the profiles. Additionally, we extended the analysis by including photometric and spectroscopic measurements of a very large number of galaxies in the A1689 field, from which we deduced the positions and radial velocities of $476^{+27}_{-43}$ cluster members. These results made it possible to deduce the galaxy velocity anisotropy profile, which was found to exhibit the expected behavior, varying from predominantly radial orbits at large radii to more tangential orbits near the center (L09). Here we show that with the above information we can infer the DM velocity anisotropy, which we compare with the galaxy velocity anisotropy profile and with results from simulations. The paper is organized as follows: in \textsection~\ref{Methodology} we describe our method for determining the DM velocity anisotropy. In \textsection~\ref{Results} we compare the DM and galaxy density profiles (\textsection~\ref{Density profiles}), derive the DM velocity anisotropy and compare it with the galaxy velocity anisotropy profile and results from simulations (\textsection~\ref{Velocity anisotropy}), and estimate the collisionless profile of cluster galaxies (\textsection~\ref{The collisionless profile of cluster galaxies}). We conclude with a summary and discussion in \textsection~\ref{Discussion}. \section{Methodology} \label{Methodology} In this section we present the procedure for deriving the DM velocity anisotropy using results from our previous analyses of the galaxy dynamics (L09) and the total mass density profile (L08) of A1689. In L08 we combined lensing and X-ray measurements to determine model-independent gas and total mass density profiles (i.e., without assuming particular functional forms). In the second stage of the work (L09) the galaxy surface number density and the projected velocity dispersion were included and analyzed using the Jeans equation, from which we obtained profiles of the 3D galaxy number density and the galaxy velocity anisotropy. 
The dynamics of a collisionless gas are governed by the Jeans equation (Binney \& Tremaine 1987) \begin{equation} \frac{1}{\rho_{i}}\frac{d}{dr}\left(\rho_{i} \sigma_{i,r}^2 \right) +\frac{2\beta_i \sigma_{i,r}^2}{r}= -\frac{GM}{r^2}\ , \label{Jeans eq} \end{equation} where $i=$DM, gal, and $\rho_{i}(r)$ is the density of component $i$. The velocity anisotropy profile $\beta_i(r)$ is \begin{equation} \beta(r)\equiv 1-\frac{\sigma_t^2(r)}{\sigma_r^2(r)}\ , \end{equation} where $\sigma_r(r)$ is the radial velocity dispersion, and $\sigma_t(r) = \sigma_{\theta}(r) = \sigma_{\phi}(r)$ is the (1D) transverse velocity dispersion. Using eq.~\ref{Jeans eq} for the galaxies, the degeneracy between $\sigma_{\rm gal,r}$ and $\beta_{\rm gal}$ can be removed with sufficient spectroscopic data (L09). This procedure is obviously inapplicable in the case of DM, for which we have to adopt an alternative approach. The orbit of a test particle in a collisionless gravitational system is independent of the particle mass. This would presumably imply that once hydrostatic equilibrium is attained, most likely as a result of mixing and mean-field relaxation, DM and galaxies should have the same mean specific kinetic energy, i.e., \begin{equation} \sigma_{\rm DM,tot}^2(r)=\sigma_{\rm gal,tot}^2(r)\ , \label{sigma_total} \end{equation} where \begin{equation} \sigma_{i,\rm tot}^2(r)= \sigma_{i,r}^2(r)+ \sigma_{i,\theta}^2(r)+ \sigma_{i,\phi}^2(r)=\sigma_{i,r}^2(r)\left(3-2\beta_i(r) \right)\ . \label{sigma_total derivation} \end{equation} Additionally, it is expected that the total specific kinetic energy of DM particles is proportional to that of the gas (e.g., Mahdavi 2001; Host et al.\ 2008), $T\propto \sigma_{\rm tot}^2$. Observational evidence for this scaling relation comes from combined X-ray and optical observations of groups and clusters for which the mean emission-weighted gas temperature scales roughly as the second power of the total galaxy velocity dispersion (Mulchaey \& Zabludoff 1998; Xue \& Wu 2000). The temperature and density profiles of IC gas can be deduced from X-ray spectral and surface brightness measurements. These profiles can then be used to determine the total mass distribution from a solution of the hydrostatic equilibrium equation. We have recently developed a model-independent joint lensing/X-ray analysis procedure (L08) to examine the consistency of X-ray temperature and emission profiles with the lensing-based mass profile, finding that the cluster temperature profile is systematically $\sim$30--40\% lower than expected when solving the equation of hydrostatic equilibrium using the lensing-deduced mass profile. This discrepancy may reflect in part the ambiguity in deriving 3D temperatures from projected spectral X-ray data, stemming from the sampling of a range of gas temperatures along any given line of sight (Mazzotta et al.\ 2004; Vikhlinin 2006). This could also be partly related to the small-scale structure of the gas [possibly including relatively dense cooler clouds found in simulations (Kawahara et al.\ 2007)], which may result in a significant downward bias in temperature estimates from spectral X-ray observations. The inferred temperature is also sensitive to instrumental effects, as has recently been deduced in the analysis of Chandra observations of A1689 (Peng et al.\ 2009). Other possible sources of temperature bias are deviations from equilibrium and non-thermal pressure (Molnar et al.\ 2010). 
Even in the best-case scenario the temperature can be used as a reliable tracer for $\sigma_{\rm tot}$ only in the inner part of the cluster, where hydrostatic equilibrium applies, and also only at radii larger than about $0.1r_{\rm vir}$, since at smaller radii the specific energy of the gas and DM may be different (Rasia, Tormen, \& Moscardini 2004). For our analysis we assume an NFW-like profile for the DM density \begin{equation} \rho_{\rm DM}(r)\propto [(r/r_s)(1+r/r_s)^{\alpha}]^{-1}\ , \end{equation} and take the DM velocity anisotropy to be constant, i.e., $\beta_{\rm DM} ={\rm const}$, since the data are not sufficient to meaningfully constrain more than one free parameter in this quantity. We then relate our model parameters to the DM radial velocity dispersion, $\sigma_{\rm DM,r}$, using the Jeans equation, where the (total) mass profile is taken from L08. Here we allow for the possibility of a difference between the total density profile (directly measured by lensing) and the profile of just the DM. From the DM velocity anisotropy and radial velocity dispersion we can then deduce the DM total specific kinetic energy. Best-fit values of $\rho_{\rm DM}(r)$ and $\beta_{\rm DM}$ are determined by fitting the DM total specific kinetic energy to the galaxy total specific kinetic energy. More specifically, $\beta_{\rm DM}$ and $\rho_{\rm DM}(r)$ are determined by minimizing $\chi^2= \sum_i V^{T}(r_i)\cdot C^{-1} \cdot V(r_i)$, where $V(r_i) = \sigma_{\rm DM,tot}^2(r_i)-\sigma_{\rm gal,tot}^2(r_i)$ and $C$ is the covariance matrix of the measured $\sigma_{\rm gal,tot}^2(r_i)$. The galaxy total specific kinetic energy itself is not a direct measurement. It was derived as shown in eq.~\ref{sigma_total derivation} from $\sigma_{\rm gal,r}$ and $\beta_{\rm gal} (r)$. Since the values of $\sigma_{\rm gal,tot}$ at the various radial positions $r_i$ are derived from underlying parameterized expressions (as given below for $\beta_{\rm gal} (r)$), their uncertainties are correlated. The derivation of $\beta_{\rm gal}(r)$ and $\sigma_{\rm gal,tot}(r_i)$ is carried out by following the same procedure as in L09, except that we allow greater freedom in $\beta_{\rm gal}(r)$ at large radii. N-body simulations for a variety of cosmologies show that the velocity anisotropy has a nearly universal radial profile (Cole \& Lacey 1996; Carlberg et al.\ 1997). In accord with the work of L09, $\beta_{\rm gal}(r)$ is taken to have the following form: \begin{equation} \beta_{\rm gal}(r) = (\beta_0+\beta_{\infty})\frac{(r/r_c)^2}{(r/r_c)^2+1}-\beta_0\ , \label{beta analytic expression} \end{equation} where we note that $\beta_{\infty}=1$ was adopted by L09. Now, the total number of $\sigma_{\rm gal,tot}(r_i)$ bins cannot exceed the total number of free parameters in the expressions for $\sigma_{\rm gal,r}$ and $\beta_{\rm gal}(r)$, which is $6$, since a larger number would cause a complete degeneracy among the various values of $\sigma_{\rm gal,tot}(r_i)$. Even taking $6$ bins resulted in an unphysical correlation matrix (i.e., one having a negative eigenvalue), which still indicates a near-degeneracy. Therefore we adopted 5 radial bins of $\sigma_{\rm gal,tot}(r_i)$, which was the maximum number for which degeneracy is not significant and error estimates are reasonable. 
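Before turning to the results, we give a minimal numerical sketch of the procedure just described (our own illustration; the profile shapes, parameter values, binned data points, and covariance below are placeholders and not the measured quantities of L08/L09). For a constant $\beta$, the Jeans equation can be solved with the integrating factor $r^{2\beta}$, i.e., $\rho\sigma_r^2(r)\,r^{2\beta}=\int_r^{r_{\rm max}}s^{2\beta}\rho(s)\,GM(s)/s^2\,ds$, after which $\sigma_{\rm tot}^2=(3-2\beta)\sigma_r^2$ and $\chi^2=V^{T}C^{-1}V$ can be evaluated.
\begin{verbatim}
import numpy as np

G = 4.301e-6  # kpc (km/s)^2 / M_sun

# Illustrative (placeholder) total mass profile and NFW-like DM density shape.
def M_tot(r, M0=1.0e15, rs_tot=300.0):
    x = r / rs_tot
    return M0 * (np.log(1 + x) - x / (1 + x)) / (np.log(6.0) - 5.0 / 6.0)

def rho_dm(r, rs=1330.0, alpha=2.79):
    return (r / rs) ** -1 * (1 + r / rs) ** -alpha   # normalisation cancels

def sigma2_dm_tot(r_eval, beta, r_max=1.0e4, n=4000):
    # Jeans equation for constant beta via the integrating factor r^(2 beta).
    r = np.logspace(np.log10(r_eval.min()), np.log10(r_max), n)
    integrand = r ** (2 * beta) * rho_dm(r) * G * M_tot(r) / r ** 2
    cum = np.concatenate(([0.0],
          np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    outer = cum[-1] - cum                      # integral from r to r_max
    sigma2_r = np.interp(r_eval, r, outer) / (rho_dm(r_eval) * r_eval ** (2 * beta))
    return (3.0 - 2.0 * beta) * sigma2_r       # total specific kinetic energy

# Hypothetical binned galaxy values and covariance matrix.
r_bins   = np.array([200.0, 500.0, 900.0, 1400.0, 2000.0])    # kpc
sig2_gal = np.array([1.9e6, 1.6e6, 1.3e6, 1.0e6, 0.8e6])      # (km/s)^2
C = np.diag((0.15 * sig2_gal) ** 2)

def chi2(beta):
    V = sigma2_dm_tot(r_bins, beta) - sig2_gal
    return V @ np.linalg.solve(C, V)

betas = np.linspace(-0.5, 0.9, 57)
print("best-fit beta:", betas[np.argmin([chi2(b) for b in betas])])
\end{verbatim}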
\section{Results} \label{Results} \subsection{Velocity anisotropy profiles} \label{Velocity anisotropy} The DM velocity anisotropy, $\beta_{\rm DM}$, was determined as described in \textsection~\ref{Methodology}, with the constraint that the DM total specific kinetic energy must satisfy $\sigma_{\rm DM,tot}^2\geq 0$. The resulting acceptable fit, with $\chi^2/{\rm dof}=3.5/(5-3)$, is shown in figure~\ref{sigma_tot_fit}. The best-fit parameters of the analytical expressions for the DM density and velocity anisotropy profiles are given in table~\ref{best-fit parameters table}. It is important to note that while the errors on the parameters are rather large, in this fit we have constrained the DM parameters based only on the fit to equation~\ref{sigma_total}, without assuming the DM density profile to be similar to the total mass density profile. Thus, the results allow a largely independent comparison between the consequences of assuming equation~\ref{sigma_total} and the results of other observational probes of the cluster. \begin{table} \caption{The values of the parameters of DM density and velocity anisotropy. The errors are 1-$\sigma$ confidence. \label{best-fit parameters table}} \begin{center} \begin{tabular}{|l|c|} \hline Parameter & Value \\ \hline $r_s$ \;\;\; [h$^{-1}$ kpc] & $1330_{-605}^{+1210}$ \\ $\alpha$ & $2.79_{-0.76}^{+1.27}$ \\ $\beta_{\rm DM}$ & $0.49_{-0.27}^{+0.13}$ \\ \hline \end{tabular} \end{center} \end{table} The corresponding galaxy and DM velocity anisotropy profiles are plotted in figure~\ref{beta_profile_DM_Vs_gal} (top panel) together with their respective 1--$\sigma$ uncertainties. In figure~\ref{beta_profile_DM_Vs_gal} (lower panel) we compare the derived DM velocity anisotropy value to the profile derived from simulations. The current data only allow us to determine an overall, typical value of $\beta_{\rm DM}$ in the cluster, and given the rather large uncertainties, there is fair agreement between this value and the typical value of $\beta_{\rm gal}$ in this cluster, and also with the typical values of $\beta_{\rm DM}$ seen in simulated clusters. Note from the figure that while we allowed $\beta_{\infty}$ to be a free parameter for the galaxies, the best-fit value came out quite close to unity. \begin{figure} \centering \epsfig{file=sigma_tot_fit.eps, width=8cm, clip=} \caption{Profile of the total specific kinetic energy of the galaxies (blue circles and 1--$\sigma$ errorbars) and the fitted total specific kinetic energy of the DM (red line). \label{sigma_tot_fit} } \end{figure} We briefly describe the derivation of the DM velocity anisotropy profile from results of an ENZO simulation (for a more complete description of this particular simulation, please refer to Hallman et al.\ 2007). Since A1689 is a moderately-distant (z=0.183), high mass cluster, $M_{\rm vir} \sim 1.5\cdot10^{15}$ h$^{-1}$ M$_{\odot}$ (Broadhurst et al.\ 2005a; Oguri et al.\ 2005; Limousin et al.\ 2007; L08; Umetsu \& Broadhurst 2008; L09; Umetsu et al.\ 2009; Corless et al.\ 2009; Coe et al.\ 2010), we derive the DM velocity anisotropy profiles for a sample of high-mass clusters with $M_{\rm vir} > 10^{15}$h$_{0.7}^{-1}$ M$_{\odot}$ at $z=0.2$. The clusters were drawn from a cosmological adaptive mesh refinement (AMR) simulation performed with the ENZO code developed by Bryan \& Norman (1997) and Norman \& Bryan (1999) assuming a spatially flat CDM model (very similar to the concordance model). 
The AMR simulation assumed adiabatic gas dynamics (i.e., no radiative heating, cooling, star formation, or feedback was included). The box was 512 h$^{-1}$ Mpc comoving on a side with $512^3$ DM particles, giving a DM mass resolution of about $10^{11}$ h$_{0.7}^{-1}$ M$_{\odot}$. The root grid contained $512^3$ grid cells, and the grid was refined by a factor of two, up to seven levels, providing a spatial resolution of $6.5$ kpc h$^{-1}$ at $z=0.2$. We derive the DM velocity anisotropy profile for each of the 51 halos and average over all of them, with an uncertainty taken to be the standard deviation, as determined by Lemze et al.\ (2010, in preparation). The derived profile is in agreement with those found in other studies (e.g., Colin, Klypin, \& Kravtsov 2000; DM04; Mamon \& Lokas 2005; Valdarnini 2006). \begin{figure} \centering \epsfig{file=beta_profile_DM_Vs_gal.eps, width=8cm, clip=} \epsfig{file=beta_profile_DM_data_Vs_simulations.eps, width=8cm, clip=} \caption{Top panel: comparison between $\beta_{\rm DM}$ and $\beta_{\rm gal}$. Shown are the velocity anisotropy profiles of galaxies (blue dotted lines) and DM (black solid lines). Bottom panel: comparison between the DM velocity anisotropy inferred from data (black solid lines) and that derived from simulations (blue dot-dashed lines). For each set the central line shows the best fit and the other two lines show the $\pm$ 1--$\sigma$ uncertainty range. \label{beta_profile_DM_Vs_gal} } \end{figure} \subsection{Density profiles} \label{Density profiles} To further explore the phase space occupation of DM and galaxies, we compare the deduced DM density profile to the galaxy density profile and to the total density profile. The galaxy density profile is represented by a cored form, and the total density by either a model-independent profile, a `universal' NFW form, or a cored form. In figure~\ref{density_profiles_comparison} we compare our results for these profiles, i.e., we compare the DM density from our current fitting to equation~\ref{sigma_total} (green dot-dashed curves showing the best-fit result as well as the 1--$\sigma$ uncertainty region) to our previous results for the galaxy density (blue solid curves) and the total mass profile, whose various versions are shown by the points with error bars (model-independent fit), dashed black curve (NFW), and dotted red curve (core). In order to include all these in the same figure, and allow a comparison of the relative shapes, we arbitrarily scaled the profiles so that they match at 700 h$^{-1}$ kpc $\sim\frac{1}{3}r_{\rm vir}$ (with arbitrary units on the $y$-axis). \begin{figure} \centering \epsfig{file=density_profiles_comparison_new.eps, width=8cm, clip=} \caption{The galaxy density (blue solid curve with upper and lower solid curves marking the 1--$\sigma$ uncertainty) is compared to the deduced DM density (green dot-dashed curves showing the best fit and 1--$\sigma$ uncertainty) and to the total matter density for the model-independent fit (points with error bars), the NFW fit (black dashed curve), or the core fit (red dotted curve). We show {\em relative}\/ density profiles, all scaled to match at 700 h$^{-1}$ kpc (except for the galaxy 1--$\sigma$ lines). The left and right black vertical lines indicate $\frac{1}{3} r_{\rm vir}$ and $r_{\rm vir}$, respectively. Note that the galaxy density line at low radii is an extrapolation due to lack of data in this region (and therefore no error bars are shown). 
\label{density_profiles_comparison} } \end{figure} While the DM scale radius derived here from the velocity dispersion fit has a large error, its best-fit value is significantly higher than that derived previously (L08, table 4) for the total (mostly DM) density. This is reflected by the slower rise of the DM density at small radii compared to the total density (figure~\ref{density_profiles_comparison}). This may indicate that our assumption of eq.~\ref{sigma_total} is invalid at small radii. Another perspective on the assumption that the DM follows the same dynamics as the galaxies is provided by the steepness of the directly measured density profiles. To assess the steepness of the fitted profiles we plot in figure~\ref{gamma_comparison} the radial profiles of their power-law indices, \begin{equation} \gamma(r)=\frac{d\log[\rho(r)]}{d\log[r]}\ . \end{equation} The power-law index of the galaxy profile is shown by the blue lines, and that of the total mass profile by the black (NFW) and red (core) lines. We did not also plot the power-law slope of the model-independent fit, since it is clear from figure~\ref{density_profiles_comparison} that it would look very similar to the results for the NFW and core profiles. Since the DM profile should be rather similar to the total mass profile, the conclusion from these figures is, here again, that the galaxies and the DM have consistent density profiles for $r \ga \frac{1}{3} r_{\rm vir}$, but the profiles are significantly different at smaller radii. \begin{figure} \centering \epsfig{file=gamma_comparison.eps, width=8cm, clip=} \epsfig{file=gamma_ratio_profile.eps, width=8cm, clip=} \caption{Top panel: power-law indices of the galaxy and total mass profiles. The index of the galaxy profile is shown by the blue solid line, with upper and lower solid lines indicating the 1--$\sigma$ uncertainty, and that of the total mass is shown by the black dashed (NFW) and dotted red (core) lines, with 1--$\sigma$ uncertainties. Bottom panel: ratio of the power-law index of the galaxy density to that of the total matter (NFW -- black dashed, with 1--$\sigma$ uncertainty; core -- red solid, with 1--$\sigma$ uncertainty). The left and right black vertical lines mark $\frac{1}{3} r_{\rm vir}$ and $r_{\rm vir}$, respectively. \label{gamma_comparison} } \end{figure} \subsection{The collisionless profile of cluster galaxies} \label{The collisionless profile of cluster galaxies} In \textsection~\ref{Velocity anisotropy} we derived the DM velocity anisotropy profile assuming that both galaxies and DM are collisionless, finding that the best-fit value of the DM density scale radius is higher than that inferred from direct lensing observations for the total mass (L08). As was mentioned above, this may indicate that eq.~\ref{sigma_total} is not valid at all radii. To quantify differences between $\sigma_{\rm DM,tot}^2$ and $\sigma_{\rm gal,tot}^2$ we use the ratio \begin{equation} f_{\rm coll}(r)\equiv\frac{\sigma_{\rm DM,tot}^2(r)}{\sigma_{\rm gal,tot}^2(r)}\ , \end{equation} which we expect to be very close to unity if both components are fully collisionless (or if both deviate comparably from the purely collisionless limit). We have thus far assumed that $f_{\rm coll}(r)$ must equal unity and tried to obtain acceptable fits under this assumption, deriving other results in the process. 
Here we try a different approach, where we start by assuming that the DM velocity anisotropy profile in A1689 matches the profile derived from simulations (see \textsection~\ref{Velocity anisotropy}). We used our previously determined DM mass density, $\rho_{\rm DM} \simeq \rho_{\rm tot}-\rho_{\rm gas}$, derived by assuming profiles for the total and gas densities. Two kinds of profiles were assumed: a model-independent one and a model-dependent one, where in the model-dependent case we took an NFW and a double beta model for the total and gas density profiles, respectively. Given the total and gas density profiles, we can then solve the Jeans equation to determine $\sigma_{\rm DM,tot}^2$. In figures~\ref{f_coll_model_independent} and \ref{f_coll_model_dependent} we show the resulting ratio $f_{\rm coll}(r)$, or equivalently the corresponding velocity bias of the galaxies relative to the DM \begin{equation} b(r)=\sqrt{1/f_{\rm coll}(r)}\ , \label{f_coll b connection} \end{equation} for the model-independent and model-dependent profiles, respectively. Our results are consistent with both the galaxies and the DM being purely collisionless at $r\gtrsim \frac{1}{3} r_{\rm vir}$, but there is some evidence for a significant deviation from this limit at smaller radii (except for the very smallest radii, where the observational constraints are weak). \begin{figure} \centering \epsfig{file=f_coll_fp.eps, width=8cm, clip=} \epsfig{file=b_fp.eps, width=8cm, clip=} \caption{Profiles of $f_{\rm coll}$ (top panel, blue solid curve) and the velocity bias $b$ (lower panel, blue solid curve) taking the model-independent total and gas density profiles from L08, along with their 1--$\sigma$ uncertainty regions (marked by blue dot-dashed and dashed lines for the cases when the uncertainty of the DM velocity anisotropy from simulations is and is not included, respectively). The vertical black line marks $\frac{1}{3} r_{\rm vir}$, and the horizontal dotted line indicates the expected value if both DM and galaxies are purely collisionless. \label{f_coll_model_independent} } \end{figure} \begin{figure} \centering \epsfig{file=f_coll_NFW.eps, width=8cm, clip=} \epsfig{file=b_NFW.eps, width=8cm, clip=} \caption{The same as figure~\ref{f_coll_model_independent} except that here we take the total and gas density profiles to be an NFW and a double beta model (as described in L08), respectively. \label{f_coll_model_dependent} } \end{figure} \section{Discussion} \label{Discussion} The work reported here is a continuation of our comprehensive study of the dynamical properties of DM and galaxies, and the hydrodynamical properties of IC gas, in the well-observed cluster A1689 (L08, L09). In L09 we derived the galaxy density and velocity distributions, from which we deduced the specific kinetic energy of the galaxies. Here we assumed that if DM and galaxies are fully collisionless they should have the same average specific kinetic energy (as manifested in eq.~\ref{sigma_total}). Using the mass profile which was previously derived in L08, together with the Jeans equation, we fitted the DM specific kinetic energy to that of the galaxies and determined the DM density and the DM velocity anisotropy. For the DM density we obtained a best-fit value for the power-law index at large radii, $\alpha+1$, which is higher (i.e., steeper) than the NFW value of 3 by about $1\sigma$. 
The best-fit value of the scale radius is $\simeq 1.9\sigma$ higher than the scale radius of the total density. This was the first of several indications that the dynamical state of the galaxies and the DM may differ at small radii, below $\sim \frac{1}{3} r_{\rm vir}$. In particular, our assumption of eq.~\ref{sigma_total} resulted in a derived DM density profile at small radii that was inconsistent with (specifically, shallower than) the profile we have previously measured for the total matter density. In order to quantify the possible difference between $\sigma_{\rm DM,tot}^2$ and $\sigma_{\rm gal,tot}^2$, we used another approach to determine their ratio $f_{\rm coll}$. Specifically, we adopted the profile of $\beta_{\rm DM}$ deduced from simulations and our derived DM mass density (from L08), together with the Jeans equation. The deduced profile of $f_{\rm coll}$ can be interpreted as a measure of how collisionless the galaxies are with respect to the DM (equivalently expressed by $b$, the velocity bias of eq.~\ref{f_coll b connection}). Note that in some simulation work the velocity bias is defined in terms of the velocity dispersions of DM subhalos (rather than galaxies) compared to the DM background. We find that $f_{\rm coll}<1$ (at $> 1\sigma$) at some radii below $\frac{1}{3} r_{\rm vir}$, but that it is consistent with unity at larger radii. This implies that at $r\gtrsim \frac{1}{3} r_{\rm vir}$ we can indeed assume the validity of eq.~\ref{sigma_total}, but at $r\approx 450$ h$^{-1}$ kpc we deduce $b \approx 1.5^{+0.3}_{-0.2}$ and $b \approx 1.3^{+0.4}_{-0.2}$, for the model-independent and model-dependent profiles, respectively, an indication that $b$ increases from large radii towards the center. This is in accord with results from simulations which found $b$ to vary from slightly higher than one (by $\sim10\%$) at large radii (Ghigna et al.\ 2000; Colin, Klypin, \& Kravtsov 2000; Diemand, Moore, \& Stadel 2004, hereafter DMS04; Gill et al.\ 2004) to $b \simeq 1.3$ in the central region ($r<0.3r_{\rm vir}$) (Ghigna et al.\ 2000; DMS04). We note that there is some uncertainty regarding the dependence of this bias on the features of the subhalos or galaxies considered; e.g., whereas DMS04 found the bias to be independent of the subhalo mass, Ghigna et al.\ (2000) claim that there is no significant bias when only high-mass subhalos are considered. In earlier work, Diaferio et al.\ (1999) found a bias for blue galaxies of about $1.5-2$, but no bias for red galaxies. Their interpretation was that the bias is due to the fact that blue galaxies are not in equilibrium. The velocity bias can be explained by the fact that slow subhalos are much less common, due to tidal disruption early in the merging process, before the cluster was virialized. Halos with low relative velocities can merge shortly after entering the cluster, thus decreasing the number of small subhalos with low velocities; after virialization, mergers are suppressed (Gnedin 2003). Large subhalos, $>10^{-3}$ M$_{\rm cluster}$, are more likely to merge with the central halo, rather than in subhalo-subhalo mergers (Angulo et al.\ 2008). Thus, in the central part of the cluster there is also a decrease in the number of massive subhalos with low velocities. This may explain why the value of $b$ increases going from the outer region towards the cluster center. 
The baryon content of halos may also affect the velocity bias: due to ram-pressure stripping of galactic gas, which is more effective in the central cluster region (e.g., Arieli, Rephaeli, \& Norman 2008), the subhalo mass is more easily reduced by tidal disruption. The deduced subhalo mass function is reduced relative to that deduced from a corresponding DM-only simulation (Dolag et al.\ 2008). The current data only allow determining an overall typical value of $\beta_{\rm DM}$; its best-fit value is in general agreement with the $\beta_{\rm gal}$ deduced from the data and with the value of $\beta_{\rm DM}$ found in simulations. As we noted, our assumption of eq.~\ref{sigma_total} is likely valid at large radii, which implies that the procedure specified in \textsection~\ref{Methodology} is more reliable there. Host et al.\ (2009) used the X-ray temperature as a surrogate measurement for deriving the DM velocity anisotropy profile at intermediate radii by assuming a connection between $T_{\rm gas}$ and $T_{\rm DM}$. They applied their analysis to 11 low-redshift and 5 intermediate-redshift clusters, including A1689. Their deduced values for $\beta_{\rm DM}$ are in very good agreement with our best-fit value. Together, the two methods enable estimating the DM velocity anisotropy at all radii. Our approach here is less prone to the substantial uncertainty inherent in an attempt to determine the DM velocity anisotropy from a similar treatment which is based on IC gas properties. A possible difficulty with the latter approach may be due to a nonthermal pressure component that could appreciably affect results inferred from a solution to the hydrostatic equilibrium equation (e.g., Molnar et al.\ 2010). It can be seen from figures~\ref{density_profiles_comparison} and \ref{gamma_comparison} that at $r\sim700$ h$^{-1}$ kpc $\sim\frac{1}{3}r_{\rm vir}$ there is a change in the relation between the dynamical properties of the DM and galaxies. Carlberg (1994) showed analytically that if one assumes a power-law mass density profile and isotropic velocity dispersion, this yields a power-law profile for the velocity dispersion. He also found that in general a cooler tracer has a steeper density profile and is more centrally concentrated. We find that at small radii, $r\lesssim\ \frac{1}{3} r_{\rm vir}$, $f_{\rm coll}<1$, implying that DM is cooler than galaxies. Indeed, at these radii the DM profile is steeper (see figure~\ref{gamma_comparison}) and more concentrated (see figure~\ref{density_profiles_comparison}) than that of galaxies. The subhalo distribution would naively be expected to be closely related to the distribution of galaxies. In fact, galaxies and subhalos represent different populations and are not directly comparable since subhalo masses are more strongly affected by tidal stripping than galactic baryonic matter. It was shown in simulations that subhalos within $0.3r_{\rm vir}$ typically lose more than 70\% of their mass during the merging phase, while subhalos at $r>0.5r_{\rm vir}$ typically lose only $\lesssim 40\%$ of their mass (Nagai \& Kravtsov 2005, hereafter NK05). It was previously inferred from simulations that there is a spatially anti-biased subhalo distribution, in the sense that the DM subhalo profile has a larger core radius than the background DM profile (DMS04; Gao et al.\ 2004a). DMS04 interpreted their result as being due to the loss of half of the halo population caused by the known numerical overmerging problem. 
However, NK05 used a high-resolution cosmological cluster simulation to show that the subhalo radial distribution is significantly less concentrated than that of DM due to tidal stripping, rather than this being a numerical artifact. They demonstrated that the radial bias disappears almost entirely if subhalos are selected using their mass or circular velocity during the merger phase. A similar result was obtained by Gao et al.\ (2004b) who found that defining the subhalo population by requiring a minimum circular velocity gives a subhalo distribution which is more concentrated than when selection is based on a minimum mass. This results from the fact that the subhalo distribution is easier to track by using the maximum circular velocity values, since mass loss is also accompanied by a decrease in the maximum circular velocity, but the decrease is slower than in the mass (NK05). In many previous studies the observed galaxy distribution was taken to be cuspy and similar to that of DM (e.g., DMS04; Gao et al.\ 2004a; NK05), using the galaxy distribution from Carlberg, Yee, \& Ellingson (1997, hereafter CYE97), who co-added observations of 14 clusters at various redshifts. The superposed sample contained 1150 galaxies, including background galaxies. In the L09 analysis of A1689, 500 cluster members were identified from spectroscopic data, and about 1900 cluster members from photometric data. L09 showed that the galaxy density profile is best fitted with a core profile; although a cuspy profile was found to be acceptable, the deduced values of the scale radius and the power-law index were questionable. Indeed, the cuspy profile in CYE97 gave a better fit with a higher power-law index (preferring Hernquist (1990) rather than an NFW profile), yielding a quite high value of the scale radius, $r_s=(0.66\pm 0.09)r_{200}$. Moreover, Adami et al.\ (1998) examined a sample of 62 clusters and found that most galaxy density profiles are better fitted with a cored rather than a cuspy profile, though for individual clusters the preference for a cored profile is rarely significant at the 90\% confidence level. When Adami et al.\ (1998) composed a superposed sample they obtained a clear preference for a core (King) profile (at more than 95 \% confidence). Adami et al.\ (1998) showed that CYE97, as well as Beers \& Tonry (1986), obtained a cuspy profile due to selection bias caused by not taking into account the effect of elongation. In some recent simulations (e.g., Saro et al.\ 2006) that included gas cooling and star formation, a detailed treatment of stellar evolution and chemical enrichment, as well as SN energy feedback in the form of galactic winds, the galaxy density profile has a shallower core, quite different from the cuspy DM profile at small radii, $r<0.4r_{200}$. Finally, we note that our approach of inferring $\beta_{\rm gal}$ using the Jeans equation, which is derived from the collisionless Boltzmann equation, is not fully self-consistent since at small radii, $r\lesssim\ \frac{1}{3} r_{\rm vir}$, we find possible deviation from a fully collisionless behavior. In addition, for simplicity we assumed that the cluster is spherically symmetric, though there are claims that this cluster has a significantly triaxial shape (Oguri et al.\ 2005; Morandi et al.\ 2010). However, biases due to the triaxial nature of the cluster should affect the DM and galaxies fairly similarly since they have a similar distribution. 
More importantly, it should be kept in mind that the results reported here are based on a comprehensive analysis of only one cluster. Obviously, the results should be viewed as preliminary until reproduced by a similar analysis of a sufficiently large sample of clusters. We plan to do so with a larger sample of relaxed X-ray selected clusters in the CLASH program\footnote{PI: Marc Postman; http://www.stsci.edu/$\sim$postman/CLASH/}. \section*{ACKNOWLEDGMENTS} We thank Alexey Vikhlinin, Greg Bryan, and Steen Hansen for many helpful discussions. We also acknowledge discussions with Shay Zucker, Ole Host, and Sharon Sadeh. DL acknowledges generous support by the Dan David Foundation. The work of YR and MN was supported in part by US-Israel Binational Science Foundation grant 452/2008 at Tel Aviv University. RB acknowledges Israel Science Foundation grant 823/09. TJB and YR are supported by Israel Science Foundation grant 1218/06.
\section{Introduction} \vspace{-14pt} Since the twistor string formulation of scattering amplitudes in $\mathcal{N}=4$ super Yang-Mills \cite{Witten:2003nn, Roiban:2004vt, Roiban:2004yf}, much effort has been expended in developing worldsheet approaches to scattering amplitudes in more general quantum field theories. An important advance was made by the scattering equation formalism of Cachazo, He and Yuan \cite{Cachazo:2013gna,Cachazo:2013hca,Cachazo:2013iea,Cachazo:2014nsa,Cachazo:2014xea}, which realised scattering amplitudes in a large class of theories at tree level as integrals over the moduli space $\mathcal{M}_{0,n}$ of marked Riemann spheres of the form \eq{\int_{\mathcal{M}_{0,n}}\frac{d^{n}\sigma}{\mathrm{SL}(2,\mathbb{C})}\mathcal{I}_{L}\mathcal{I}_{R}\prod^{n-3}_{i=1}\delta(E_{i})} where the $E_{i}$ are conditions known as the scattering equations that serve to fully localise the integrand at discrete points on the moduli space. The half integrands $\mathcal{I}_{L,R}$ encode kinematical or colour degrees of freedom. The freedom in choosing these objects leads to integral formulae for a large class of quantum field theories. Soon after the development of the CHY formalism, it was observed that the half integrands $\mathcal{I}_{L,R}$ used in evaluating amplitudes in gauge theory and gravity\footnote{It is a fact that \emph{any} quantum field theory admits such a representation \cite{Baadsgaard:2015hia,Baadsgaard:2015ifa,Baadsgaard:2016fel}. However, the integrands giving gauge and gravity amplitudes at tree level are known to enjoy properties such as manifest colour-kinematics duality \cite{Cachazo:2013gna,Bjerrum-Bohr:2016axv} that make them especially interesting.} can be equivalently computed as correlation functions of a two-dimensional theory known as the \emph{ambitwistor string} \cite{Mason:2013sva}. Since this model provided an explicit worldsheet interpretation for the CHY approach, the restriction to tree level amplitudes could be overcome. The genus expansion of the ambitwistor string has since been used to derive moduli space formulae at one-loop \cite{Adamo:2013tsa,Geyer:2015bja,Geyer:2015jch} and two-loop \cite{Geyer:2018xwu} orders. Although the ambitwistor string provides a consistent scheme through which the half integrands in the CHY framework may be determined, its relationship with the more conventional Ramond-Neveu-Schwarz (RNS) string remains to be fully understood. While it may be formally obtained as a low-energy ($\alpha'\rightarrow 0$) limit of the RNS string \cite{Mason:2013sva}, a more careful analysis seems to indicate that the two are better related by the \emph{tensionless} ($\alpha'\rightarrow \infty$) limit \cite{Ohmori:2015sha,Casali:2016atr}. In this letter, taking the latter viewpoint, we directly compute the chiral integrand \cite{DHoker:1989cxq} for superstring NS states and demonstrate that it reduces to the corresponding half integrand of the ambitwistor string as the tensionless limit is approached. More concretely, we show that the superstring integrand (we denote the spin structure by $\delta$) takes the form \eq{\label{eq:2}\mathcal{A}_{g,n}[\delta] = \exp(\mathrm{KN})\times\mathcal{I}^{\alpha'}_{g,n}[\delta] } (where $\mathrm{KN}$ generalises the Koba-Nielsen factor to higher loops) such that in the limit of infinite $\alpha'$, $\mathcal{I}^{\alpha'}_{g,n}[\delta]$ equals the chiral half integrand computed by the ambitwistor string. 
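As a brief orientation (a standard check in the CHY literature, included only for illustration and not a new result of this letter), consider $n=4$ massless particles with Mandelstam invariants $s_{ij}=(k_i+k_j)^2$. Fixing the $\mathrm{SL}(2,\mathbb{C})$ redundancy by $\sigma_1=0$, $\sigma_2=1$, $\sigma_4\rightarrow\infty$, the single independent scattering equation becomes \eq{E_{3}=\frac{s_{13}}{\sigma_{3}}+\frac{s_{23}}{\sigma_{3}-1}=0\ ,} with the unique solution \eq{\sigma_{3}=\frac{s_{13}}{s_{13}+s_{23}}=-\frac{s_{13}}{s_{12}}\ ,} where the last equality uses momentum conservation, $s_{12}+s_{13}+s_{23}=0$. The delta functions in the CHY formula therefore localise the moduli integral completely on this point; for general $n$ there are $(n-3)!$ such solutions. 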
\vspace{-15pt} \section{Holomorphic Factorization of Superstring Scattering Amplitudes} \vspace{-14pt} The scattering of $n$ NS states in superstring perturbation theory is defined by a formal integral over the supermoduli space $\mathfrak{M}_{g,n}$ (with measure $d\mu_{g,n}$) of super Riemann surfaces with $n$ NS punctures \eq{\int_{\mathfrak{M}_{g,n}}d\mu_{g,n}\langle{ |\delta(H_{A}|B)|^{2}\rangle}_{B,C}\times \mathcal{O}_{n}} where $H_{A}$ is a basis of Beltrami superdifferentials and $B$ and $C$ are ghost superfields encoding the $bc$ and $\beta\gamma$ systems such that \eq{B(z,\theta) = \beta(z)+ \theta b(z),} and \eq{C(z,\theta) = c(z)+ \theta \gamma(z).} We compute the quantity $\mathcal{O}_{n}$ by making use of the chiral splitting theorem due to d'Hoker and Phong \cite{DHoker:1989cxq}, which says \eq{\begin{aligned} \mathcal{O}_{n} = \int_{\mathbb{R}^{10g}}d^{10}p_{I} &\bigg|\bigg\langle\exp\left(\frac{i}{\alpha'}\int\chi(z)\psi^{\mu}\partial x^{\mu}(z)\right)\times\\ &\prod_{i}\mathcal{V}(z_{i},\theta_{i},k_{i},\epsilon_{i})\bigg\rangle\bigg |^{2} \end{aligned}} where $\chi(z)$ is the gravitino, parametrising odd moduli on the supermoduli space and \eq{\begin{aligned} &\mathcal{V}(z_{i},\theta_{i},k_{i},\epsilon_{i}) = \int d\widetilde{\theta}_{i} \exp\bigg(ik^{\mu}_{i}x^{\mu}_{+}(z_{i})+\\ &\frac{2i}{\alpha'}\theta_{i}\widetilde{\theta}_{i}\epsilon^{\mu}_{i}\partial x^{\mu}_{+}(z_{i}) + \theta_{i}k^{\mu}_{i}\psi^{\mu}(z_{i})+\widetilde{\theta}_{i}\epsilon^{\mu}_{i}\psi^{\mu}(z_{i})\bigg). \end{aligned}} The chiral fields $x_{+}(z)$ and $\psi(z)$ are purely holomorphic and obey (given a spin structure $\delta$) the operator product expansions \eq{x^{\mu}_{+}(z)x^{\nu}_{+}(z') \sim -\eta^{\mu\nu}\alpha'\ln(E(z,z'))} and \eq{\psi^{\mu}(z)\psi^{\nu}(z') \sim \eta^{\mu\nu}S_{\delta}(z,z'),} where $E(z_{i},z_{j})$ is the prime form on a genus $g$ Riemann surface and $S_{\delta}(z_{i},z_{j})$ is the genus $g$ Szego kernel for spin structure $\delta$. The loop momenta $p_{I}$ are defined as monodromies of the chiral $\partial x_{+}$ fields around the $\mathfrak{A}_{I}$ cycles of the Riemann surface \eq{ \oint_{\mathfrak{A}_{I}}\partial x^{\mu}_{+}(z)dz = -i\alpha' p^{\mu}_{I}.} This condition is satisfied by performing the effective replacement \eq{\partial x^{\mu}(z) \rightarrow \partial x^{\mu}(z) -i\alpha' p^{\mu}_{I}\omega_{I}(z).} With these definitions, we define the \emph{chiral correlation function} \eq{\label{eq:11} \begin{aligned} \mathcal{A}_{g,n}[\delta] = & \bigg\langle\prod_{A}\delta(\langle{H_{A}|B\rangle})\exp\left(\frac{i}{\alpha'}\int\chi(z)\psi^{\mu}\partial x^{\mu}(z)\right)\\&\times\prod_{i}\mathcal{V}(z_{i},\theta_{i},k_{i},\epsilon_{i})\bigg\rangle_{B,C,x_{+},\psi}. \end{aligned}} The full superstring integrand is then expressed as a sum over two complex conjugate copies thereof \eq{\mathcal{I}_{g,n} = \sum_{\delta,\delta'}\eta_{\delta}\widetilde{\eta}_{\delta'}\mathcal{A}_{g,n}[\delta]\overline{\mathcal{A}_{g,n}[\delta']},} where the complex conjugate chiral integrand is to be evaluated at loop momenta $-p_{I}$. The constants $\eta_{\delta}$ and $\widetilde{\eta}_{\delta'}$ take values $\pm 1$ and perform the GSO projection based on the combination chosen. \vspace{-15pt}\section{Finding the Chiral Correlator on the Supermoduli Space}\vspace{-14pt} We start with the evaluation of (\ref{eq:11}) on $\mathfrak{M}_{g,n}$. 
In doing this, we note that we will implicitly absorb the integrals over the auxiliary Grassmann parameters $\widetilde{\theta}_{i}$ into the measure of $\mathfrak{M}_{g,n}$. This is only done to make the resulting expressions more compact. We begin by computing the matter part of the chiral correlator, defined intrinsically on the supermoduli space $\mathfrak{M}_{g,n}$. To do this we integrate over the chiral $x_{+}(z)$ and $\psi(z)$ in turn. The $x_{+}(z)$ integration introduces additional terms quadratic in $\psi(z)$. The $\psi(z)$ integral is then performed using the effective propagator \eq{\begin{aligned} \hat{S}_{\delta}(z,z') = &S_{\delta}(z,z') +\\ &\frac{1}{\alpha'}\int S_{\delta}(z,w)K(w,v)S_{\delta}(v,z')dwdv \end{aligned}} where \eq{K(w,v) = \chi(w)\partial_{w}\partial_{v}\ln(E(w,v))\chi(v).} This modified contraction is effected by factoring out \eq{\mathcal{X}_{\mathrm{PCO}} = \left\langle\exp\left(\frac{i}{\alpha'}\int\chi(z)\partial x_{+}(z)\cdot\psi(z)dz\right)\right\rangle.} When this is done, the chiral correlator is evaluated as a conventional Gaussian integral. The result is given by \eq{\mathcal{X}_{\mathrm{PCO}} \times \exp\left(\mathrm{KN} + H_{0,0} + H_{0,1}+ \mathcal{O}\left(\frac{1}{\alpha'}\right)\right)} where \eq{\begin{aligned} \mathrm{KN} = &\alpha'\sum_{i,I}k_{i}\cdot p_{I}\int^{z_{i}}_{P}\omega_{I}(z)dz + \\ &\alpha'\sum_{i\neq j}\frac{1}{2}k_{i}\cdot k_{j}\ln (E(z_{i},z_{j})) \end{aligned}} and \eq{\begin{aligned} H_{0,0} = &\int \chi(z)\chi(z')P(z)\cdot P(z')S_{\delta}(z,z')dzdz' \\ &+2\sum_{i}\bigg(\int [\chi(z)\theta_{i}P(z)\cdot k_{i}S_{\delta}(z,z_{i})+\\ &\chi(z)\widetilde{\theta}_{i}P(z)\cdot\epsilon_{i}S_{\delta}(z,z_{i})]dz\bigg)+\\ &\sum_{i\neq j}[\theta_{i}\theta_{j}k_{i}\cdot k_{j}S_{\delta}(z_{i},z_{j}) + \widetilde{\theta_{i}}\widetilde{\theta}_{j}\epsilon_{i}\cdot\epsilon_{j}S_{\delta}(z_{i},z_{j})\\ &-2\theta_{i}\widetilde{\theta}_{j}\epsilon_{j}\cdot k_{i}S_{\delta}(z_{j},z_{i})] + \sum_{i}2\theta_{i}\widetilde{\theta}_{i}\epsilon_{i}\cdot P(z_{i}). \end{aligned}} The quantity $H_{0,1}$ vanishes for even spin structure. For odd spin structure it is given by \eq{\begin{aligned} H_{0,1} = &\int\chi(z)\psi^{(0)}\cdot P(z)dz +\\ &\sum_{i}[\theta_{i}k_{i}\cdot\psi^{(0)}+ \widetilde{\theta}_{i}\epsilon_{i}\cdot\psi^{(0)}] \end{aligned}} where $\psi^{(0)}$ is the (ten dimensional) zero mode of the worldsheet fermion. In these expressions \eq{ P^{\mu}(z) = \sum_{i}k^{\mu}_{i}\ln(E(z,z_{i})) + \sum_{I}p^{\mu}_{I}\omega_{I}(z). } Note that $\omega_{I}$ are Abelian differentials of the first kind. We now make contact with the representation (\ref{eq:2}) by pointing out that the factor $\mathrm{KN}$ is not dependent on any Grassmann valued variables. Accordingly, we factor it out, leaving behind $H_{0,0}$, $H_{0,1}$ and terms subleading in $\frac{1}{\alpha'}$ contributing to $\mathcal{I}^{\alpha'}_{g,n}[\delta]$. \vspace{-15pt} \section{Comparison to the Ambitwistor String} \vspace{-14pt} To compare our result to the corresponding chiral integrand in the ambitwistor string, we need to reduce our result to an expression on the ordinary moduli space $\mathcal{M}_{g,n}$. However, projecting down to the ordinary moduli space from the supermoduli space is not always possible\footnote{It is known that $\mathfrak{M}_{g}$ is not projected \cite{Donagi:2013dua,Donagi:2014hza} while the question remains unanswered in the presence of punctures.}. 
Due to the general difficulties involved in projecting down to $\mathcal{M}_{g,n}$, it is best to work in small enough neighbourhoods $\mathcal{U}\subset \mathcal{M}_{g,n}$ onto which holomorphic projections always exist\footnote{I am grateful to Seyed Faroogh Moosavian and Edward Witten for discussions on this point.}. Let us then take some (arbitrary) point $\Sigma\in \mathcal{M}_{g,n}$. For a small enough neighbourhood $\mathcal{U}_{\Sigma}$, the moduli of $\mathfrak{M}_{g,n}$ can be parametrised by the ordinary bosonic moduli and a gravitino $\chi(z)$ that can be expanded as \cite{DHoker:2002hof} \eq{\label{eq:19} \chi(z) = \sum_{i=1}^{N_{\mathrm{odd}}}\chi_{\alpha_{i}}\delta(z-z_{\alpha_{i}})} where the insertion points $z_{\alpha_{i}}$ are holomorphic functions of the bosonic moduli\footnote{The requirement that the PCOs depend on the moduli is due to the presence of spurious singularities. See \cite{Witten:2012bh,Sen:2014pia,Moosavian:2017fta} for discussions of this issue.}. The number $N_{\mathrm{odd}}$ of Grassmann directions spanned by the $\chi_{\alpha_i}$ is $0$ for genus zero and genus one with even spin structure. For genus one with odd spin structure $N_{\mathrm{odd}} = 1$ and for genus $g\geq 2$ we have $N_{\mathrm{odd}} = 2g-2$. In this local patch, the basis for the Beltrami superdifferentials is \eq{H_{A} = (\mu_{a}|\delta_{z,z_{\alpha_i}}),} where the $\mu_{a}$ are the $N_{\mathrm{even}}$\footnote{For genus $g=0$ $N_{\mathrm{even}}=0$, for $g=1$ $N_{\mathrm{even}}=1$ and for $g\geq 2$ $N_{\mathrm{even}}=3g-3$.} ordinary Beltrami differentials labelling deformations of the bosonic moduli and $\delta_{z,z_{\alpha_i}}$ are $N_{\mathrm{odd}}$ delta functions evaluating at $z_{\alpha_i}$. We have shown that in this setup, upon integrating away the fermionic degrees of freedom, the matter part of the chiral correlation function and the ghost partition function precisely match the corresponding results of the ambitwistor string \cite{Geyer:2018xwu}. Putting these together, we have proved that the chiral correlator of the RNS superstring reduces to the corresponding chiral half integrand of the ambitwistor string in the $\alpha'\rightarrow \infty$ limit. Concretely we have (specialising to the case of even spin structure) \eq{\label{eq:23} \mathcal{I}^{\alpha'}_{g,n}[\delta] = \mathcal{Z}_{gh}[\delta]\left(\mathrm{Pf}\Psi_{g,n}[\delta] + \mathcal{O}\left(\frac{1}{\alpha'}\right)\right).} Here, $\mathcal{Z}_{gh}[\delta]$ is the ghost partition function \eq{ \begin{aligned} \mathcal{Z}_{gh}[\delta] = \frac{1}{Z^{5}}\bigg\langle&\prod^{N_{\mathrm{even}}}_{i=1}\langle{\mu_{i}|b\rangle}\prod^{N_{\mathrm{odd}}}_{i=1}\delta(\beta(z_{\alpha_i}))\\ &\prod^{N_B}_{i=1}c(z_{a_i})\prod^{N_F}_{i=1}\delta(\gamma(z_{b_{i}}))\bigg\rangle_{\delta} \end{aligned}} where $\langle{\mu_{i}|b\rangle}$ is used to denote the inner product between the Beltrami differential $\mu_{i}$ and the field $b$ and it is to be understood that the expectation value is to be taken with respect to the $bc$ and $\beta\gamma$ systems. $Z$ is the chiral scalar partition function (see for example \cite{Geyer:2018xwu,DHoker:2001jaf} for an explicit representation in terms of the prime form and theta functions on $\mathcal{M}_{g,n}$). $N_{B}$ and $N_{F}$ are only nonzero at genus zero and one\footnote{At $g=0$, $N_B = 3$ and $N_{F}=2$, at $g=1$ for even spin structure $N_B = 1$ and $N_F = 0$ and for odd spin structure $N_B = N_F = 1$.}, corresponding to insertions of $c$ and $\gamma$ fields to account for superconformal Killing vectors. 
The $z_{a_i}$ and $z_{b_i}$ are arbitrarily chosen external punctures, not to be confused with PCO insertions. $\mathcal{Z}_{gh}[\delta]$ is computed by bosonisation \cite{Verlinde:1986kw,Verlinde:1987sd}. The quantity $\Psi_{g,n}$ is a matrix, given by\footnote{In evaluating the Pfaffian in (\ref{eq:23}), it is defined after removing a pair of rows and columns and one row and column corresponding to external states respectively at genus zero and genus one with odd spin structure. This is due to the fact that $N_{F}$ states with marked points $z_{b_{i}}$ are in the $-1$ picture.} \eq{\Psi_{g,n} = \begin{pmatrix} \mathbf{A}&-\mathbf{C}^{T}\\ \mathbf{C}&\mathbf{B}\end{pmatrix}} where \eq{\begin{aligned} &\mathbf{A}_{\alpha_i\alpha_j} = P(z_{\alpha_i})\cdot P(z_{\alpha_{j}})S_{\delta}(z_{\alpha_{i}},z_{\alpha_{j}}),\\ &\mathbf{A}_{\alpha_i i} = P(z_{\alpha_i})\cdot k_{i}S_{\delta}(z_{\alpha_i},z_{i}), \\ &\mathbf{C}_{\alpha_i j} = P(z_{\alpha_i})\cdot \epsilon_{i}S_{\delta}(z_{i},z_{\alpha_i}),\\ & \mathbf{A}_{ij} = k_{i}\cdot k_{j}S_{\delta}(z_{i},z_{j}),\;\; \mathbf{B}_{ij} = \epsilon_{i}\cdot\epsilon_{j}S_{\delta}(z_{i},z_{j}),\\ &\mathbf{C}_{ij} = \epsilon_{i}\cdot k_{j}S_{\delta}(z_{i},z_{j}),\;\; \mathbf{C}_{ii} = -k_{i}\cdot P(z_{i}). \end{aligned}} Indeed, the result (\ref{eq:23}) is in agreement with the genus $g$ chiral correlator defined in \cite{Geyer:2018xwu} once the $\alpha'\rightarrow \infty$ limit is taken. We remark that the result is analogous for the case of odd spin structure; the chiral correlator is resolved into the ambitwistor correlator followed by corrections of order $\frac{1}{\alpha'}$ \cite{1851730}. Regarding this choice of local projection, there is one subtlety that to our knowledge has not been explicitly considered in previous analyses of the ambitwistor string. The choice of PCO insertions must be made in a manner that avoids spurious singularities. Indeed, this can always be done locally by picking PCOs that depend holomorphically on the bosonic moduli. In piecing together such local descriptions, it is not known if a globally consistent prescription to perform localisation in the ambitwistor string can be found. In practice however, it turns out that such a selection of PCOs exists for the ambitwistor string up to two loop order \cite{Geyer:2018xwu,1851730} as a result of the fact that the integral ultimately localises on a discrete set of points in $\mathcal{M}_{2,n}$. Proving this at higher genus requires further study. For readers familiar with the picture changing operator formalism, we point out that what we have done is essentially to derive the ansatz due to Friedan, Martinec and Shenker \cite{Friedan:1985ge} using the language of supermoduli spaces. Historically, it was observed that this ansatz was poorly defined and led to amplitudes dependent on the choice of PCO insertions \cite{Verlinde:1987sd}. This is due to the fact that the projection was done globally, which as we have noted is generically not possible. It is in accordance with this that we have chosen to work in small neighbourhoods. In the context of the full superstring, local descriptions must be carefully glued together \cite{Sen:2014pia,Sen:2015hia}. Accordingly, it should be kept in mind that we have proved that the integrand $\mathcal{I}^{\alpha'}_{g,n}[\delta]$ matches the half integrand in the ambitwistor string in the tensionless limit given a specific local configuration of picture changing operators. 
The subtleties involved with gluing might however pose a problem if we actually want to perform the integration over the moduli space for the ambitwistor string at higher genus. \vspace{-15pt} \section{Conclusion} \vspace{-14pt} In this letter, we have proved the claim that the chiral superstring integrand in the RNS formalism is resolved into a Koba-Nielsen term and an $\alpha'$ dependent integrand that reduces to the half integrand of the ambitwistor string in the tensionless limit. We made use of the chiral splitting theorem due to d'Hoker and Phong to define a chiral correlation function for $n$ NS states, which we evaluated on the supermoduli space $\mathfrak{M}_{g,n}$. Due to potential issues involving non-projectedness at higher genus, we had to work on small neighbourhoods in $\mathcal{M}_{g,n}$, onto which a projection is possible. Evaluating the chiral correlation function on the ordinary moduli space in this fashion, we see that in the limit of vanishing tension the chiral correlator reproduced the chiral correlation function of the ambitwistor string. A full understanding of the tensionless limit of RNS superstrings and its relation to the ambitwistor string requires clarifying the behaviour of the higher genus Koba-Nielsen factor in the Gross-Mende limit \cite{Gross:1987ar,Gross:1987kza}. Due to an infinite number of saddle points on the universal covering space of $\mathcal{M}_{g,n}$, the na\"ive sum over solutions of the scattering equations cannot be taken (see \cite{Mizera:2019vvs} for the complications arising even at genus zero). Understanding this limit on the non-separating divisors of $\mathcal{M}_{g,n}$ would be especially relevant in understanding the $\alpha'\rightarrow 0$ limit (see \cite{Bjerrum-Bohr:2014qwa} for a discussion of the genus zero case.). The interplay (and potential equivalence or inequivalence) of the two limits would have important implications for the duality between colour and kinematics \cite{Mizera:2019gea,Mizera:2019blq}. \vspace{-15pt} \section*{Acknowledgements} \vspace{-15pt} I thank Jacob Bourjaily for encouragement and constructive comments that considerably improved the draft. I have benefited from exchanges with Yvonne Geyer, Alok Laddha, Sebastian Mizera, Ricardo Monteiro, Seyed Faroogh Moosavian, Oliver Schlotterer and Edward Witten. I am especially grateful to Seyed Faroogh Moosavian for early conversations on chiral splitting which motivated this analysis. This project has been supported by an ERC Starting Grant (No. 757978) and a grant from the Villum Fonden (No. 15369). \bibliographystyle{utphys}
1,108,101,562,702
arxiv
\section{Introduction} Einstein, Podolsky, and Rosen (EPR) \cite{epr} believed that the results of measurements on a local subsystem of a composite physical system which can be predicted with certainty would be determined by the local variables of the subsystems. However, the violation of the Bell inequality \cite{bell} rules out all putative local hidden-variable (LHV) theories, and indicates that quantum nonlocality of entangled states is one of the most profound features inherent in quantum mechanics. Moreover, Clauser, Horne, Shimony and Holt derived the well-known CHSH inequality, which provides a way of testing LHV models experimentally \cite{Clauser}. In fact, nonlocality is intimately related to quantum entanglement. It has been shown that the CHSH inequality is satisfied for every separable pure two-qubit state, but violated for all entangled pure two-qubit states, with the amount of violation increasing with the entanglement \cite{N.Gisin, S.P}. Nevertheless, this conclusion is not true for mixed entangled states, as Werner presented a mixed entangled state satisfying the CHSH inequality \cite{R. F. Werner1989}. Hence the CHSH inequality is only a necessary, but not sufficient, condition for the separability of two-qubit states. Starting with the Bell and CHSH inequalities, many Bell-type inequalities have also been proposed for different quantum systems \cite{bellineqs}. For three-qubit systems, Svetlichny introduced an inequality whose violation is a sufficient condition for genuine tripartite nonlocality \cite{G. Sve}. Ghose {\it et al.} further derived analytical expressions for the violation of the Svetlichny inequality for states in the Greenberger-Horne-Zeilinger (GHZ) class \cite{S.Ghose}. However, it is still intractable to determine whether a given state, especially a mixed state, violates a certain Bell inequality or not, as one has to find the mean value of the related Bell operators for suitable observables \cite{rmp}. As an important class of mixed states from a quantum dynamical perspective, Schmidt-correlated (SC) states have received much attention \cite{Rains,rains1,virmani,ming}. As Khasin {\it et al.} \cite{Khasin06} proposed, bipartite SC states naturally appear in system dynamics with additive integrals of motion. In fact, SC states $\rho = \sum_{m,n=0 }^{N-1} a_{mn} |m \cdots m \rangle \langle n \cdots n|$, $\sum_{m=0}^{N-1} a_{mm}=1$, are defined as mixtures of pure states sharing the same Schmidt basis \cite{Rains,ming}. The SC states exhibit some elegant properties. For example, for any local quantum measurement on SC states, the result does not depend on which party the measurement is performed on. Moreover, their separability is determined by the positivity of the partial transposition \cite{ming}. In this paper we investigate the violation of the CHSH inequality and the Svetlichny inequality for SC states. By presenting an analytical expression for the maximum expectation value $F_{max}$ of the CHSH inequality for two-qubit systems, we show that whether an SC state violates the CHSH inequality is equivalent to whether it is entangled. For three-qubit systems, we give an analytical expression for the maximum expectation value $S_{max}$ of the Svetlichny inequality, and prove that there exist genuinely entangled SC states which obey the Svetlichny inequality. Furthermore, the relations between $F_{max}$ and the concurrence \cite{woot}, and between $S_{max}$ and the relative entropy of entanglement \cite{relative entropy}, are derived for SC states. 
At last we illustrate $F_{max}$ and $S_{max}$ are not monotonic under local operations and classical communications (LOCC) by explicit examples. This paper is organized as follows: in section II, we introduce the CHSH inequality and investigate the maximum expectation value $F_{max}$ for two-qubit SC states. Then the relation between $F_{max}$ and concurrence is provided. In Sec. III, the maximum expectation value $S_{max}$ of the Svetlichny inequality and its relation to the relative entropy entanglement are studied for three-qubit SC states. Finally, we conclude with a summary of our results in Sec. IV. \section{two-qubit SC states} The well-known CHSH inequality is shown to be both necessary and sufficient for the separability of a two-qubit pure state. The corresponding Bell operator for the CHSH inequality is given by \begin{eqnarray} F=AB+ A B^\prime+ A^\prime B- A^\prime B^\prime, \end{eqnarray} where the observables $A=\vec{a} \cdot \vec{\sigma}$ and $A^\prime =\vec{a}^\prime \cdot \vec{\sigma}$ are associated with the first qubit, $B=\vec{b} \cdot \vec{\sigma}$ and $B^\prime=\vec{b}^\prime \cdot \vec{\sigma}$ are associated with the second qubit, while $\vec{a}$, $\vec{a}^\prime$, $\vec{b}$ and $\vec{b}^\prime$ are unit vectors, $\vec{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ with $\sigma_x$, $\sigma_y$, $\sigma_z$ the Pauli matrices. $|\la\psi|F|\psi\ra|\leq 2$ holds if and only if the pure state $|\psi\ra$ is separable. For any mixed two-qubit state $\rho$, the expectation value $F(\rho)=Tr(\rho\,F)$ satisfies \be\label{2} |F(\rho)|\leq 2 \ee if $\rho$ admits local hidden variable model. Violation of the inequality (\ref{2}) implies that the state $\rho$ is entangled. Let $F_{max}(\rho)=\max_{A, A^\prime, B, B^\prime}F(\rho)$ be the maximal value of $F(\rho)$ under all possible observables $A$, $A^\prime$, $B$ and $B^\prime$. One can then decide whether a state $\rho$ is entangled in terms of the maximum expectation value. To find the maximum expectation value $F_{max}$ for a given state $\rho$, we define $\vec{a}=(\sin \theta_a \cos \phi_a, \sin \theta_a \sin \phi_a, \cos \theta_a)$, and similarly for the unit vectors $\vec{a}^\prime$, $\vec{b}$ and $\vec{b}^\prime$. In addition, we define unit vectors $\vec{d}, \vec{d}^\prime$ such that $\vec{b}+\vec{b}^\prime = 2\vec{d}\cos \phi$ and $\vec{b}-\vec{b}^\prime=2\vec{d}^\prime\sin \phi$. Thus \begin{eqnarray}\label{operator d equaltity} \vec{d} \cdot \vec{d}^\prime \!=\! \cos \theta_d \cos \theta_{d^\prime} \!+\!\sin \theta_d \sin \theta_{d^\prime} \cos (\phi_d -\phi_{d^\prime})=0. \end{eqnarray} Set $D=\vec{d} \cdot \vec{\sigma}$ and $D^\prime=\vec{d}^\prime \cdot \vec{\sigma}$, the expectation value $F(\rho)$ can be written as \begin{eqnarray}\label{F} F(\rho)&=&\langle AB \rangle + \langle AB^\prime \rangle +\langle A^\prime B \rangle - \langle A^\prime B^\prime \rangle \\\nonumber&=&\langle A(B+B^\prime) \rangle +\langle A^\prime (B-B^\prime) \rangle \\\nonumber &=& 2(\langle AD \rangle \cos \phi + \langle A^\prime D^\prime \rangle \sin \phi)\\\nonumber &\leq & 2 (\langle AD \rangle^2 + \langle A^\prime D^\prime \rangle^2)^{1/2}, \end{eqnarray} where we have used the fact that \begin{eqnarray}\label{x cos} x \cos \theta + y \sin \theta \leq (x^2 +y^2)^{1/2}, \end{eqnarray} with the equality holding when $\tan \theta = y/x$. For a two-qubit SC state $\rho_1$: \begin{eqnarray*} \rho_1=a_1 |00\rangle \langle 00| \!+\! a_2 |00\rangle \langle 11|\!+\! 
a_2^* |11\rangle \langle 00| + a_4 |11\rangle \langle 11|, \end{eqnarray*} with $a_1, a_4 \geq 0 $, $a_1+a_4=1$ and $a_1a_4\geq |a_2|^2$. The first term in Eq. (\ref{F}) with respect to this mixed state $\rho_1 $ turns out to be \begin{eqnarray} \label{first term of F rho 1} \langle AD \rangle &=& \cos \theta_a \cos \theta_d + 2 ({\rm Re} (a_2) \cos (\phi_a +\phi_d ) -{\rm Im} (a_2) \sin (\phi_a +\phi_d ))\sin \theta_a \sin \theta_d \nonumber \\ &\leq & \{\cos^2 \theta_d + 4 [{\rm Re} (a_2) \cos (\phi_a +\phi_d ) -{\rm Im}(a_2)\sin(\phi_a+\phi_d)]^2 \sin^2 \theta_d \}^{1/2} \nonumber\\ &\leq & \left[ \cos^2 \theta_d + 4 |a_2|^2 \sin^2 \theta_d \right]^{1/2}\nonumber\\ &=& \left[(1-4 |a_2|^2)\cos^2 \theta_d + 4 |a_2|^2 \right]^{1/2}, \end{eqnarray} where the inequality (\ref{x cos}) has been taken into account. From Eq. (\ref{F}) and Eq. (\ref{first term of F rho 1}) we have \begin{eqnarray} \label{F rho 1} F(\rho_1) &\!\!\leq\!\!& 2 [(1-4 |a_2|^2)(\cos^2 \theta_d\!+\! \cos^2 \theta_{d^\prime}) \!+\! 8 |a_2|^2 ]^{1/2} \nonumber \\ &\!\!\leq\!\!& 2[ 1+ 4 |a_2|^2 ]^{1/2}. \end{eqnarray} Here we have employed the fact that the maximum of $\cos^2 \theta_d + \cos^2 \theta_{d^\prime}$ is 1 according to Eq. (\ref{operator d equaltity}). The equality in Eq. (\ref{F rho 1}) holds when $\vec{a}=\vec{z}$, $\vec{a}^\prime= \vec{x}$, $\vec{b}= \sin \phi \cos \phi_d\, \vec{x} + \sin \phi \sin \phi_d\, \vec{y} + \cos \phi \,\vec{z}$ and $\vec{b}^\prime=-\sin \phi \cos \phi_d \,\vec{x} - \sin \phi \sin \phi_d \,\vec{y} + \cos \phi \,\vec{z}$ with $\tan \phi = 2 |a_2|$ and $\tan \phi_d = -\frac{{\rm Re} (a_2)}{{\rm Im} (a_2)}$. Therefore, we obtain \begin{eqnarray}\label{F rho 1 1} F_{max}(\rho_1)=2\{ 1+ 4 |a_2|^2 \}^{1/2}. \end{eqnarray} Furthermore, the maximum expectation value $F_{max}(\rho_1)$ has a direct relation with its concurrence \cite{woot}, which is an entanglement measure. The concurrence for a bipartite pure state $|\psi \rangle$ is defined by $C(|\psi \rangle)= \sqrt{2(1-Tr \rho_A^2)}$, where the reduced density matrix $\rho_A$ is given by $\rho_A=Tr_B(|\psi \rangle \langle \psi|)$. The concurrence is then extended to mixed states $\rho$ by the convex roof, $C(\rho) \equiv \min _{\{p_i, |\psi _i \rangle \}} \sum_i p_i C(|\psi _i \rangle)$, for all possible ensemble realizations $\rho= \sum _i p_i |\psi_i \rangle \langle \psi_i|$, where $p_i \geq 0$ and $\sum_i p_i=1$. For the state $\rho_1$ one has $C(\rho_1)=2|a_2|$. Hence we get \begin{eqnarray}\label{2 pure F and N } F_{max}(\rho_1)=2[1+C^2(\rho_1) ]^{1/2}, \end{eqnarray} which shows that $F_{max}(\rho_1)$ increases monotonically with $C(\rho_1)$. The violation of the CHSH inequality has also relations to the dense coding, which uses previously shared entangled states to send possibly more information than classical information encoding. The capacity of dense coding for a given shared bipartite state $\rho^{AB}$ is given by $\chi=\log_2 d_A +S(\rho_A)-S(\rho)$, with $S(\rho)=-tr(\rho\log_2 \rho)$ \cite{DC}. $\rho$ is useful for dense coding if its capacity is larger than $\log_2 d_A$. 
It is straightforwardly verified that for the two-qubit SC state $\rho_1$, \begin{eqnarray*} \chi=&&1-a_1\log_2a_1-a_4\log_2a_4\\ &&+(\frac{1\!\!+\!\!\sqrt{1\!\!-\!\!4a_1a_4\!\!+\!\!4|a_2|^2}}{2}\log_2\frac{1\!\!+\!\!\sqrt{1\!\!-\!\!4a_1a_4\!\!+\!\!4|a_2|^2}}{2}\\ &&+\frac{1\!\!-\!\!\sqrt{1\!\!-\!\!4a_1a_4\!\!+\!\!4|a_2|^2}}{2}\log_2\frac{1\!\!-\!\!\sqrt{1\!\!-\!\!4a_1a_4\!\!+\!\!4|a_2|^2}}{2}), \end{eqnarray*} which also increases monotonically with the maximum expectation value $F_{max}(\rho_1)$ for given $a_1$ and $a_4$. Hence one has the following equivalent statements for the SC state $\rho_1$: (i) it is entangled; (ii) its concurrence is greater than zero; (iii) it violates the CHSH inequality; (iv) it is useful for dense coding. Now we generalize the two-qubit SC state $\rho_1$ to the mixed state $\rho_2$ \begin{eqnarray} \label{ge-state1} \rho_2&& \!=\! b_1|00\rangle \langle 00| + b_2 |01\rangle \langle 01| + b_3 |10\rangle \langle 10| +b_4 |11\rangle \langle 11| + c_1 |00\rangle \langle 11| +c_1^* |11\rangle \langle 00| \end{eqnarray} with $b_i\geq 0$, $i=1,2,3,4$, $\sum_{i=1}^{4} b_i=1$, $b_1b_4\geq |c_1|^2$. By a similar calculation we obtain its maximum expectation value \begin{eqnarray} F_{max}(\rho_2)\!=\!2 \{ (b_1 +b_4-b_2-b_3)^2+4|c_1|^2\}^{1/2}, \end{eqnarray} which can be obtained by $\vec{a}=\vec{z}$, $\vec{a}^\prime= \vec{x}$, $\vec{b}= \sin \phi \cos \phi_d \vec{x} + \sin \phi \sin \phi_d \vec{y} + \cos \phi \vec{z}$ and $\vec{b}^\prime=-\sin \phi \cos \phi_d \vec{x} - \sin \phi \sin \phi_d \vec{y} + \cos \phi \vec{z}$ with $\tan \phi = \frac{2 |c_1|}{b_1 +b_4-b_2-b_3}$ and $\tan \phi_d = -\frac{{\rm Re} (c_1)}{{\rm Im} (c_1)}$. Although the amount of maximum violation of the CHSH inequality increases with the entanglement for the SC states, the maximum expectation value $F_{max}$ is not a legitimate entanglement measure for two-qubit states, because it does not decrease monotonically under LOCC. For example, consider a transverse noise channel \cite{T. Yu} operating on the Bell state $|\psi\rangle = \frac{1}{\sqrt{2}}(|00\rangle +|11\rangle)$. The output state takes the following form, $\rho_3=\sum_{i,j=1,2}K_i\otimes K_j |\psi\rangle \langle \psi|K_i^\dagger \otimes K_j^\dagger$, where the Kraus operators $K_1$ and $K_2$ denote the transverse noise channel, \begin{eqnarray} K_1=\left( \begin{array}{cc} \gamma & 0\\ 0 & 1 \end{array} \right),~~~ K_2=\left( \begin{array}{cc} 0 & 0\\ \omega & 0 \end{array} \right), \end{eqnarray} with time-dependent parameters $\gamma=\exp(-\Gamma t/2), ~~~ \omega=\sqrt{1-\gamma^2}$. After simplification, the final state, $\rho_3=\frac{1}{2}[ \gamma^4 |00\rangle \langle 00| + \gamma^2 (|00\rangle \langle 11| +|11\rangle \langle 00|) + (1 + \omega^4)|11\rangle \langle 11|+ \gamma^2 \omega^2 (|01\rangle \langle 01| +|10\rangle \langle 10|)]$, is just of the form in Eq. (\ref{ge-state1}). Therefore the maximum expectation value of $\rho_3$ is given by \begin{eqnarray} \label{F rho 2} F_{max}(\rho_3)=2\{(2\gamma^4-2\gamma^2+1)^2+\gamma^4\}^{1/2}. \end{eqnarray} It is obvious from Eq. (\ref{F rho 2}) that the maximum expectation value $F_{max}$ is not a monotonic function of $\gamma$. Hence it is not monotonic with time under LOCC, i.e., $F_{max}$ is not a legitimate entanglement measure. On the other hand, we find that the concurrence of $\rho_3$, $C(\rho_3)=\gamma^4$, is monotonic in $\gamma$. For $t>0.265805/\Gamma$, $\rho_3$ does not violate the CHSH inequality (see FIG. 1). 
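This threshold can be cross-checked numerically. The following short script (our illustration only, not part of the original derivation; it sets $\Gamma=1$) evaluates Eq. (\ref{F rho 2}) together with $C(\rho_3)=\gamma^4$ and solves $F_{max}(\rho_3)=2$ for $\Gamma t$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def F_max(t, Gamma=1.0):
    # Eq. (F rho 2): F_max(rho_3) = 2*sqrt((2g^4 - 2g^2 + 1)^2 + g^4),
    # with gamma = exp(-Gamma*t/2)
    g = np.exp(-Gamma * t / 2)
    return 2 * np.sqrt((2 * g**4 - 2 * g**2 + 1)**2 + g**4)

def concurrence(t, Gamma=1.0):
    # C(rho_3) = gamma^4, monotonically decreasing in time
    return np.exp(-Gamma * t / 2)**4

# CHSH is violated while F_max > 2; find the time where F_max drops to 2
t_star = brentq(lambda t: F_max(t) - 2.0, 0.1, 1.0)
print(f"F_max = 2 at Gamma*t = {t_star:.6f}")            # ~0.265805
print(f"concurrence there   = {concurrence(t_star):.4f}")  # still > 0
\end{verbatim}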
Thus, CHSH inequality can not detect entanglement of such states, though in fact some of these states are distillable \cite{Bennett}, as shown in the experimental demonstration of the ''hidden nonlocality'' in \cite{Kwiat}. \begin{figure}[!h] \begin{center} \scalebox{0.56}[0.5]{\includegraphics {rho3prime.eps}}\label{figure Ftrho2prime}\caption{Dashed line: $F_{max}(\rho_3)$ versus $\Gamma t$. Solid line: concurrence $C(\rho_3)$ versus $\Gamma t$.} \end{center} \end{figure} \section{three-qubit SC states} For three-qubit SC states, we take into account the Svetlichny inequality. The Svetlichny operator is defined by \begin{eqnarray*} S\!&=&\!ABC+ ABC^\prime+ AB^\prime C - AB^\prime C^\prime + A^\prime BC- A^\prime BC^\prime - A^\prime B^\prime C- A^\prime B^\prime C^\prime, \end{eqnarray*} where observables $A=\vec{a} \cdot \vec{\sigma}$ and $A^\prime =\vec{a}^\prime \cdot \vec{\sigma}$ are associated with the qubit 1, $B=\vec{b} \cdot \vec{\sigma}$ and $B^\prime=\vec{b}^\prime \cdot \vec{\sigma}$ with qubit 2, and $C=\vec{c} \cdot \vec{\sigma}$ and $C^\prime=\vec{c}^\prime \cdot \vec{\sigma}$ with qubit 3. If a theory is consistent with a hybrid model of nonlocal-local realism, then the expectation value for any three-qubit state is bounded by Svetlichny inequality: $|S(\rho)| \leq 4$, where $S(\rho)=Tr(S\rho)$ is the expectation value of $S$ with respect to state $\rho$. In this section we are going to derive the analytical expression of maximum expectation value $S_{max}(\rho) = \max_{A,A^\prime,B,B^\prime,C,C^\prime} S(\rho)$ for three-qubit SC states. In order to find the maximum expectation value $S_{max}$, we implement the same transformation for $\vec{b}$ and $\vec{b}^\prime$ as in the two-qubit case. The expectation value $S(\rho)$ can be written as: \begin{eqnarray} \label{S} S(\rho) &=&\langle ABC \rangle + \langle ABC^\prime \rangle + \langle AB^\prime C \rangle - \langle AB^\prime C^\prime \rangle +\langle A^\prime BC \rangle - \langle A^\prime BC^\prime \rangle -\langle A^\prime B^\prime C \rangle - \langle A^\prime B^\prime C^\prime\rangle \nonumber\\ &=& \langle A(B+B^\prime)C \rangle + \langle A(B-B^\prime) C^\prime\rangle + \langle A^\prime (B-B^\prime)C \rangle - \langle A^\prime (B+B^\prime) C^\prime \rangle\nonumber\\ &=&2(\cos \phi \langle ADC \rangle + \sin \phi \langle AD^\prime C^\prime \rangle + \sin \phi \langle A^\prime D^\prime C \rangle-\cos\phi\langle A^\prime D C^\prime \rangle) \nonumber\\ &\leq& 2[(\langle ADC \rangle^2 + \langle AD^\prime C^\prime \rangle^2 )^{1/2} +( \langle A^\prime D^\prime C \rangle^2 + \langle A^\prime D C^\prime \rangle^2 )^{1/2}], \end{eqnarray} where we have made use of Eq. (\ref{x cos}) again. For the three-qubit SC state: \begin{eqnarray*} \rho_4\!&=&\!a_1 |000\rangle \langle 000|\! + \!a_2 |000\rangle \langle 111| \! + \!a_2^* |111\rangle \langle 000| \!+\! a_4 |111\rangle \langle 111| \end{eqnarray*} with $a_1, a_4 \geq 0 $, $a_1+a_4=1$ and $a_1a_4\geq |a_2|^2$. The first term in Eq. (\ref{S}) with respect to $\rho_4$ is given by \begin{eqnarray} \label{first term of S rho 4} \langle ADC \rangle &=& (a_1-a_4)\cos \theta_a \cos \theta_d \cos \theta_c \\\nonumber &&+ 2 [{\rm Re} (a_2) \cos(\phi_a+\phi_d+\phi_c) -{\rm Im} (a_2) \sin (\phi_a+\phi_d+\phi_c)]\sin\! \theta_a \sin \theta_d \sin \theta_c \nonumber\\ &\!\leq&\! [(a_1\! -\!a_4)^2\!\cos^2 \!\theta_a \cos^2\! \theta_d \!+\! 4 |a_2|^2 \sin^2 \!\theta_a \sin^2\! \theta_d ]^{\frac{1}{2}}. \nonumber\\ \end{eqnarray} From Eq. (\ref{S}) and Eq. 
(\ref{first term of S rho 4}) we get \begin{eqnarray} S(\rho_4) & \leq & 2 \{ [ (a_1-a_4)^2 \cos^2 \theta_a (\cos^2 \theta_d + \cos^2 \theta_{d^\prime}) \nonumber\\ &&+4 |a_2|^2 \sin^2 \theta_a (\sin^2 \theta_d +\sin^2 \theta_{d^\prime}) ]^{1/2} \nonumber\\ &&+ [(a_1-a_4)^2 \cos^2 \theta_{a^\prime} (\cos^2 \theta_d + \cos^2 \theta_{d^\prime}) \nonumber\\&&+ 4 |a_2|^2 \sin^2 \theta_{a^\prime} (\sin^2 \theta_d +\sin^2 \theta_{d^\prime}) ]^{1/2} \}. \end{eqnarray} Due to the constraint condition Eq. (\ref{operator d equaltity}), one has $\cos^2 \theta_d+ \cos^2 \theta_{d^\prime } \leq 1$ and $\sin^2 \theta_d + \sin^2 \theta_{d^\prime } \leq2$. Therefore we arrive at \begin{eqnarray}\label{S rho 4} S_{max}(\rho_4)= \max \{ 4|1-2a_1|, 8\sqrt{2}|a_2| \} \end{eqnarray} from the fact that \begin{eqnarray}\label{x cos2} x \cos^2 \theta + y \sin^2 \theta \leq \left\{ \begin{array}{cc}x, & x \geq y;\\ y, & x\leq y, \end{array}\right. \end{eqnarray} where the equality holds when $\theta=0$ for the first case, and when $\theta=\pi/2$ for the second case. Accordingly, $S_{max}(\rho_4)= 4 |1-2a_1|$ holds when $\vec{a},\, \vec{a}^\prime,\, \vec{b},\, \vec{b}^\prime $ are all aligned along $\vec{z}$, $\vec{c} =sign(1-2a_1)\vec{z}$ and $\vec{c}^\prime =- \vec{c}$, whereas $S_{max}(\rho_4)= 8\sqrt{2} |a_2| $ holds when all the measurement vectors lie in the $x-y$ plane with $\tan (\phi_a + \phi_d +\phi_c) =\tan( \phi_a +\phi_{d^\prime} +\phi_{c^\prime} )=\tan( \phi_{a^\prime} + \phi_{d^\prime} + \phi_c) = - \frac {{\rm Im}(a_2)}{{\rm Re}(a_2)}$, $\tan (\phi_{a^\prime} + \phi_{d} + \phi_{c^\prime}) =\pi$, $\phi_d - \phi_{d^\prime} = \frac{\pi}{2}$ and $\phi=\frac{\pi}{4}$. Eq. (\ref{S rho 4}) implies that $\rho_4$ violates the Svetlichny inequality if and only if $|a_2|> \frac{1}{2\sqrt{2}}$. However $\rho_4$ is always genuine tripartite entangled for nonzero $a_2$. Hence the violation of the Svetlichny inequality is only a sufficient condition for the genuine nonlocality of three-qubit SC states. Now we contrast the violation of Svetlichny inequality with entanglement. In terms of the reference \cite{lz}, the generalized concurrence \cite{albev} of three-qubit SC state $\rho_4$ can be obtained, $C(\rho_4)= \sqrt{6}|a_2|$. Then, the Svetlichny inequality does not hold when $C(\rho_4)\geq \frac{\sqrt{3}}{2}$, and its violation satisfies the following equation \begin{eqnarray} S_{max}(\rho_4)=\frac{8C(\rho_4)}{\sqrt{3}}. \end{eqnarray} Moreover, $S_{max}(\rho_4)$ has also direct relations to the relative entropy entanglement, $ E(\rho) =\min _{ \sigma \in D} S( \rho \parallel \sigma ) =\min_{ \sigma \in D} Tr [ \rho \log \rho - \rho \log \sigma ]$, where $D$ is the set of all fully separable states. It has been proven that $ \varrho =a_{1} |000 \rangle \langle 000 |+a_{4} |111 \rangle \langle 111 |$ is the optimal separable state for $\rho_4$ such that $E(\rho_4)=\min_{ \sigma \in D}S( \rho_4 \parallel \sigma ) = S( \rho_4 \parallel \varrho )$ \cite{ming}. Hence, when $\rho_4$ violates Svetlichny inequality, we have \begin{eqnarray*} E(\rho_4)\label{relative entropy of rho 4} &=&f(a_1,a_4,a_2,a_2^*)-f(a_1\log_2a_1,a_4\log_2a_4,a_2\log_2a_4,a_2^*\log_2a_1)\\ &=&g(a_1,a_4,S_{max}^2)-g(a_1\log_2a_1,a_4\log_2a_4,S_{max}^2\log_2a_1\log_2a_4) \end{eqnarray*} where $f(x_1,x_2,x_3,x_4)=f_+ \log_2 f_+ +f_- \log_2 f_-$, $f_{\pm}=[(x_1 +x_2)\pm\sqrt{(x_1-x_2)^2+4x_3x_4}]/{2}$ and $g(x_1,x_2,x_3)=g_+ \log_2 g_+ +g_- \log_2 g_-$, $g_{\pm}=[(x_1+x_2)\pm\sqrt{(x_1 -x_2)^2+\frac{x_3}{32}}]/{2}$. 
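As a numerical cross-check of Eq. (\ref{S rho 4}), one may maximise $\langle S\rangle$ directly over the six measurement directions. The sketch below is our illustration only; the coefficients $a_1=0.6$, $a_4=0.4$, $a_2=0.3+0.2i$ are arbitrary sample values satisfying $a_1a_4\geq|a_2|^2$, and with a few random restarts the numerical optimum should approach the analytic maximum $8\sqrt{2}|a_2|$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(theta, phi):
    # observable n . sigma for the unit vector n(theta, phi)
    return (np.sin(theta) * np.cos(phi) * sx
            + np.sin(theta) * np.sin(phi) * sy
            + np.cos(theta) * sz)

def svetlichny(angles, rho):
    # Tr(S rho) for measurement angles of A, A', B, B', C, C'
    A, Ap, B, Bp, C, Cp = [obs(angles[2*k], angles[2*k + 1]) for k in range(6)]
    kron = np.kron
    S = (kron(kron(A, B), C) + kron(kron(A, B), Cp) + kron(kron(A, Bp), C)
         - kron(kron(A, Bp), Cp) + kron(kron(Ap, B), C) - kron(kron(Ap, B), Cp)
         - kron(kron(Ap, Bp), C) - kron(kron(Ap, Bp), Cp))
    return np.real(np.trace(S @ rho))

# sample SC state rho_4 (basis ordering |000>, |001>, ..., |111>)
a1, a4, a2 = 0.6, 0.4, 0.3 + 0.2j            # |a2|^2 = 0.13 <= a1*a4 = 0.24
rho = np.zeros((8, 8), dtype=complex)
rho[0, 0], rho[7, 7], rho[0, 7], rho[7, 0] = a1, a4, a2, np.conj(a2)

# maximise over the 12 measurement angles with random restarts
rng = np.random.default_rng(0)
best = max(-minimize(lambda x: -svetlichny(x, rho),
                     rng.uniform(0, 2 * np.pi, 12),
                     method="Nelder-Mead",
                     options={"maxiter": 4000, "fatol": 1e-10}).fun
           for _ in range(40))

print("numerical S_max ~", round(best, 4))
print("analytic  S_max =", round(max(4 * abs(1 - 2 * a1),
                                     8 * np.sqrt(2) * abs(a2)), 4))
\end{verbatim}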
Now we consider the generalization of the three-qubit SC state $\rho_4$ to mixed state $\rho_5$: \begin{eqnarray} \label{rho5} \rho_5 &=& b_1 |000\rangle \langle 000|+ b_2 |001\rangle \langle 001| + b_3 |010\rangle \langle 010| + b_4|100\rangle \langle 100| + b_5|011\rangle\langle 011| +b_6 |101\rangle \langle 101| \nonumber\\ &&+ b_7 |110\rangle \langle 110| +b_8 |111\rangle \langle 111| +c_1 |000\rangle \langle 111| + c_1^* |111\rangle \langle 000|. \end{eqnarray} For such state, the $S_{max}$ becomes \begin{eqnarray*} S_{max}(\rho_5) =\max\{ 4 |b_1-b_2-b_3-b_4+b_5+b_6+b_7-b_8|,8\sqrt{2} |c_1|\}. \end{eqnarray*} Thus $\rho_5$ violates the Svetlichny inequality when $|c_1| > \frac{1}{2\sqrt{2}}$. Here $S_{max}(\rho_5)= 4 |b_1-b_2-b_3-b_4+b_5+b_6+b_7-b_8|$ holds when $\vec{a},\, \vec{a}^\prime,\, \vec{b},\, \vec{b}^\prime $ are all aligned along $\vec{z}$, $\vec{c} =sign(b_1-b_2-b_3-b_4+b_5+b_6+b_7-b_8)\vec{z}$ and $\vec{c}^\prime =- \vec{c}$. $S_{max}(\rho_5)= 8\sqrt{2} |c_1|$ holds when all the measurement directions lie in the $x-y$ plane with $\tan (\phi_a + \phi_d +\phi_c) =\tan( \phi_a +\phi_{d^\prime} +\phi_{c^\prime} )=\tan( \phi_{a^\prime} + \phi_{d^\prime} + \phi_c) = -\frac{{\rm Im}(c_1)}{{\rm Re}(c_1)}$, $\tan (\phi_{a^\prime} + \phi_{d} + \phi_{c^\prime}) =\pi$, $\phi_d - \phi_{d^\prime} = \frac{\pi}{2}$ and $\phi=\frac{\pi}{4}$. In particular, let's consider a transverse noise channel operating on the GHZ state $|\phi\rangle = \frac{1}{\sqrt{2}}(|000\rangle + |111\rangle)$. Then the final state $\rho_6=\sum_{i,j,l=1,2} K_i \otimes K_j \otimes K_l |\phi\rangle \langle \phi| K_i^\dagger \otimes K_j^\dagger \otimes K_l^\dagger=\frac{1}{2}[ \gamma^6 |000\rangle \langle 000| + \gamma^4 \omega^2 (|001\rangle \langle 001| + |010\rangle \langle 010| +|100\rangle \langle 100|) + \gamma^2 \omega^4 (|011\rangle \langle 011| + |101\rangle \langle 101| +|110\rangle \langle 110|) + (1+ \omega^6) |111\rangle \langle 111| + \gamma^3 (|000\rangle \langle 111| + |111\rangle \langle 000|)]$, which is just of the form in Eq. (\ref{rho5}). Therefore we have \begin{eqnarray} \label{S rho 5 prime} S_{max}(\rho_6) &=&\max\{ 2 |\gamma^6 +3 \gamma^2 \omega^4 - 3 \gamma^4 \omega^2 -1- \omega^6|, 4\sqrt{2} \gamma^3\}\nonumber\\ &=&\left\{ \begin{array}{cc} 2 (1\!-\!\gamma^6 \!-\!3 \gamma^2 \omega^4 \!+\! 3 \gamma^4 \omega^2 \!+\! \omega^6), & 0\!\leq\! \gamma\! \leq\! \frac{1}{\sqrt{2}};\\ 4\sqrt{2} \gamma^3, & \frac{1}{\sqrt{2}}\! \leq\! \gamma \!\leq\! 1, \end{array}\right. \end{eqnarray} which shows that $\rho_6$ violates the Svetlichny inequality when $t<0.693147/\Gamma$. Namely the Svetlichny inequality can not detect the hidden nonlocality any more for $t>0.693147/\Gamma$. From Eq. (\ref{S rho 5 prime}) and FIG. 2, we can see that $S_{max}(\rho_6)$ is not a monotonic function of time; accordingly we assert that $S_{max}$ is also not a suitable entanglement measure. \begin{figure}[!h] \begin{center} \scalebox{0.56}[0.5]{\includegraphics {Strho5prime.eps}}\caption{$S_{max}(\rho_6)$ versus $\Gamma\, t$} \end{center} \end{figure} \section{Conclusions} In summary, we have obtained an analytical formula of maximum expectation value $F_{max}$ of CHSH inequality for two-qubit SC states, from which we have shown that this inequality is both necessary and sufficient for the nonlocality of two-qubit SC states, though this is not true for general two-qubit mixed states. In addition, the relations between $F_{max}$, entanglement and capacity of dense coding for SC states have been also derived. 
Moreover, unlike the entanglement measure, $F_{max}$ is not monotonic with time under LOCC. For three-qubit systems, we have demonstrated that the violation of the Svetlichny inequality is only a sufficient condition for the genuine nonlocality of three-qubit SC states. Furthermore we have presented a relation between $S_{max}$ and relative entropy entanglement, which gives a way to determine the relative entropy entanglement of SC states experimentally. \section{Acknowledgements} Ming-Jing Zhao thanks Max-Planck- Institute for Mathematics in the Sciences for its hospitality. This work is supported by the NSFC 10875081, NSFC 10871227, KZ200810028013 and PHR201007107 and NSF of Beijing 1092008.
1,108,101,562,703
arxiv
\section{Acknowledgement}\label{sec:acknowledgement} This work was partly funded by the Austrian Research Promotion Agency (FFG) through the project DeepRUL (Project ID: 871357). \section{Background}\label{sec:background} \subsection{Plasma Etching} Semiconductor device production aims to create structures with specific material properties on the surface of a silicon wafer. This can be achieved by selectively adding or removing material and locally changing the chemical structure of the wafer material (e.g., doping and oxidation). Today, plasma etching is widely chosen for material removal as it provides the high precision level required for the efficiency of modern devices. Prior to the etching stage, areas of the wafer that should not be etched are masked. Next, the wafer is placed in a low-pressure chamber where a plasma is ignited. Ions in this plasma are accelerated towards the wafer and as they hit the surface, they either remove material by mechanical impact, or they form a chemical reaction with the material. The ions in the plasma must be chosen to mostly react with the substrate and not the mask. Additionally, reaction products should be volatile. Otherwise, they might cause deposits on the wafer. To ensure a reliable etching process, pressures, gas temperatures, gas compositions, voltages, wafer cooling and the plasma composition must be controlled. Modern plasma etching equipment operates fully automatically and processes batches of wafers with predefined recipes, where each recipe is a sequence of predefined process parameters. In general, each piece of equipment has a loading mechanism with vacuum locks and several process chambers. Figure~\ref{fig:Scheme_equipment} illustrates the central functional components of modern multi-chamber plasma etching equipment. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{sections/graphics/Scheme_plasma_etching.jpg} \caption{\small{Schematic illustration of chambers and loading mechanism of a modern plasma etching equipment}} \label{fig:Scheme_equipment} \end{figure} \subsection{Data-Driven Predictive Maintenance Approaches} Most data-driven predictive maintenance approaches in the literature are based on statistical, probabilistic or machine learning methods and can be categorized into two groups~\cite{Si2013Wienerprocess}: the first group consists of prognostic models that directly observe production state processes. They either apply regression-based models~\cite{Barraza2017ARX} or Markovian-based models~\cite{Xiang2012Markovian}. Furthermore, Wang et al.~\cite{Wang2012HealthPrognostics} proposed a generic model for probabilistic health condition estimation and tested it on two scenarios: (i) electric cooling fans and (ii) an engine dynamic simulation. Machine learning algorithms such as Support Vector Machines (SVM)~\cite{Patil2015SVMRUL}, Support Vector Regression (SVR)~\cite{Ding2014BatteryRUL} and binary logistic regression~\cite{Phillips2015MachineryConditionclassification} form the second group and have also proved to be an effective solution for estimating TTF and other machine health descriptor variables. Deep Neural Networks (DNNs) have recently shown strong performance on a variety of complex applications such as speech recognition \cite{Amodei2016deepSpeech}, image classification \cite{Krizhevsky2017NIPS} and acoustic sound classification and detection \cite{Dang2018SED}. However, as explainability of trained models is a key requirement in the semiconductor industry, other algorithms are typically preferred. 
\subsection{Predictive Maintenance in the Semiconductor Industry} Outside the specific context of plasma etching, a number of data-driven predictive maintenance approaches have been proposed for monitoring machine health degradation in the semiconductor industry. Munirathinam et al.~\cite{munirathinam2016predictive} constructed a decision model for a semiconductor fabrication plant by applying a variety of standard machine learning algorithms (e.g., KNNs, SVM). In order to reduce data dimensionality, they applied reduction approaches such as Principal Component Analysis (PCA), Variable Importance Analysis (VIA) and Chi Square statistics. However, although their models can predict whether a product passes a final quality inspection, they were not designed to predict possible equipment breakdowns. Luo et al.~\cite{luo2015data} proposed a two-step maintenance framework for degradation prediction on a real case study in the semiconductor manufacturing industry and achieved 74.1\% accuracy in predicting a machine's health degradation. Their work is divided into three stages: first, they adopted a back-propagation neural network (BPNN) to forecast the machine's health. Second, as a backup for the failed cases of the first stage, they employed Restricted Boltzmann Machines (RBM) and Deep Belief Networks (DBN), which have strong inductive learning abilities. Finally, using multiple regression forecasting, they checked the prediction accuracy in both stages. However, the model is specialized to predict only the contamination of the process chamber. Failures unrelated to the contamination are not considered in the model. Susto and McLoone~\cite{susto2013} adopted an SVM classifier to predict mechanical faults in semiconductor manufacturing. This type of failure is caused by usage and stress of the equipment parts (e.g., filament breaks). In their work, they classify a machine's run as faulty when the SVM's decision boundary falls below a threshold. They specified this threshold by considering two factors: unexpected breakdowns and unexploited lifetime. Their approach showed robustness to cost changes associated with unexpected breaks and inefficient lifetime of an equipment part. However, they do not discuss breakdowns of plasma etching equipment. Additionally, they focus on specific parts (e.g., breakdown of a tungsten filament) and not on a whole plasma etching process chamber. \section{Conclusion} In this paper, we described three different machine learning tasks that can be used for predicting Time-to-Failure (TTF) or the health state of plasma etching equipment in the semiconductor industry. Our results show that trained prediction models provide acceptable effectiveness for periods exceeding roughly 24 hours, which allows maintenance planners to react to those predictions. We also highlighted the importance of alarms and limit violations, which carry high degrees of domain knowledge. Since model transparency was a key requirement, we restricted ourselves to manual feature engineering and well-known machine learning techniques. In the future, however, we will investigate explainable Deep Learning techniques for those prediction tasks. \section{Discussion} With the overall goal of predicting Time-to-Failure of plasma etching equipment in the semiconductor industry, we experimented with three different machine learning approaches. In all three tasks we were able to outperform comparable benchmarks resembling human breakdown predictions based on historic mean TTF observations. 
We highlighted the importance of precise breakdown recordings and subsequent TTF computations for building accurate and effective prediction models. We also emphasized the importance of including manufacturer-defined alarms and expert-defined limit violations, which both capture a high degree of domain knowledge. One limitation of our approach lies in the relatively high manual effort required for data cleansing, normalization and feature engineering. In order to support genericity and applicability of our overall method to other equipment types with similar data sources, we strongly focused on building generic features that are derived from standard data sources (sensor data, alarms, limit violations) and therefore applicable across equipment. However, we believe that further boosts in effectiveness are possible by modeling equipment-specific behaviors. A possible strategy is to investigate Deep Learning methods, which support automated feature learning but still have the drawback of being non-transparent and hardly explainable. Activation-based or gradient-based methods for feature isolation are possible solutions for those problems. Finally, we can identify two main orthogonal challenges that need to be tackled when implementing data-driven predictive maintenance in a production setting. First, data quality, especially recordings of breakdowns, is a key prerequisite for building effective prediction models. The most elaborate machine learning method will not provide more precision if the target variable (TTF) is flawed. Second, data-driven breakdown predictions must be embedded into existing maintenance management workflows and operations. This typically requires the formation of dedicated groups of professionals having both knowledge of machine and process design as well as an understanding and intrinsic interest in novel data-driven maintenance technologies. \section{Experimental Setup}\label{sec:experiments} In our experimental setup, we followed the typical data science workflow: first, we cleansed and normalized our dataset, which had been aggregated from a number of heterogeneous sources. Then, in collaboration with domain experts and based on the observations made in the previous exploratory analysis phase, we engineered a number of features that were potentially useful to the prediction model. Next, in order to evaluate the effectiveness of our models, we built a benchmark resembling human decision making. Finally, we built predictive models for three different TTF prediction and interpretation methods and evaluated their effectiveness against the respective benchmarks. In the upcoming subsections we describe each of those steps in more detail. \subsection{Data Cleansing and Normalization}\label{sec:experiments_cleansing} In this step, we removed non-relevant data points such as system-internal identifiers, user names or null-values that do not contribute valuable information to our models. Given the recipe dependencies of sensor data correlations described in Section~\ref{sec:subsec_correlations}, we standardized all sensor features by the sample mean and standard deviation of the corresponding recipe. This makes the values of each feature in the data have zero mean and unit variance and allows comparison across recipes. We also eliminated recipes used by non-productive runs (e.g., cleaning, experiments). Filtering by relevant recipes reduced the number of missing sensor parameter values to 53\%. 
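For illustration, the recipe filtering and per-recipe standardization just described might be sketched as follows (a simplified sketch; column names and the recipe whitelist are hypothetical and this is not the exact code of our pipeline):
\begin{verbatim}
import pandas as pd

def clean_and_standardize(df, sensor_cols, productive_recipes):
    """Keep productive recipes only, then z-score every sensor column
    using the mean and standard deviation of its recipe."""
    df = df[df["recipe"].isin(productive_recipes)].copy()
    grouped = df.groupby("recipe")[sensor_cols]
    df[sensor_cols] = ((df[sensor_cols] - grouped.transform("mean"))
                       / grouped.transform("std"))
    return df

# toy example with a hypothetical helium-flow sensor column
runs = pd.DataFrame({"recipe":  ["R1", "R1", "R2", "R2", "CLEAN"],
                     "he_flow": [1.0, 3.0, 10.0, 14.0, 0.0]})
print(clean_and_standardize(runs, ["he_flow"], {"R1", "R2"}))
\end{verbatim}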
We set remaining missing values to zero when we could safely assume that a certain process parameter was not used. Having a NaN in a helium flow, for instance, means that no helium was used in a process step. This also holds for voltage, intensity, and power values. Finally, we computed the TTF target variable for each chamber, assigned unique identifiers to each segment between breakdowns and normalized that variable to zero mean and unit variance. \subsection{Feature Engineering}\label{sec:experiments_feature_engineering} Given the cleansed and normalized datasets, we then engineered a number of feature sets, which fall into three main groups: features derived from APC sensor data (\emph{APC}), alarm features (\emph{AL}), and features derived from limit violations (\emph{LV}). For alarms we constructed a cumulative feature that sums the number of alarm occurrences (\emph{counter alarms}) in each productive interval. Additionally, we weighted each occurrence with a penalty value that captures the severity of an alarm. The penalty for each alarm is computed by inverting its median TTF ($AP_x = 1/\mathrm{median}(TTF)_{x}$) and is then added for each occurrence of that alarm. In analogy to alarms, we created a similar feature (\emph{counter violations}) for limit violations. Figure~\ref{fig:alarm_violation_counter} illustrates those weighted alarm and limit violation occurrence features. In sub-figure (a) we can clearly see an increase of violations during the run-time of a chamber and spikes shortly before breakdowns. Sub-figure (b) shows similar behavior, which can be modeled by computing the gradient of those features. \begin{figure} \centering \includegraphics[width=\columnwidth]{sections/graphics/counter_features_modified_Alex.jpg} \caption{\small{Limit violation and alarm counter features compared to Time-to-Failure of one chamber. Sub-figure (a) depicts the cumulative weighted sum of the limit violations over TTFs of multiple productive intervals. Sub-figure (b) shows the cumulative weighted sum of the alarms within the same interval.}} \label{fig:alarm_violation_counter} \end{figure} Table~\ref{tb:featuresets} summarizes the feature sets used in our experiments. We selected a number of feature combinations based on feedback from domain experts who defined limit violations and thereby indirectly pointed out the importance of features based on past observations. In the following, we denote, for readability purposes, our feature set combinations as follows: \\[0.5cm] $FS_{1} = APC_{V} + APC_{R}$\\ $FS_{2} = APC_{V} + LV_{P}$\\ $FS_{3} = APC_{V} + APC_{R} + LV_{P} + AL_{P}$\\ $FS_{4} = APC_{V} + APC_{R} + LV_{P} + AL_{P} + \mbox{Voltage Dips}$\\ $FS_{5} = APC_{V} + LV_{P} + AL_{P}$\\ $FS_{6} = LV_{P} + AL_{P}$\\ $FS_{7} = AL_{P}$\\ \begin{table} \centering \caption{\small{Feature sets used in the model}} \label{tb:featuresets} \begin{tabular}[c]{ |p{1.5cm}|p{6cm}| } \hline Feature Set & Description \\[0.01cm] \hline $APC_{V}$ & Subset of APC process data containing only features with defined limit violations. \\ $APC_{R}$ & Contains information on the recipe mix of $x$ runs (e.g., gradient sum, gradient max). \\ $LV_{P}$ & Engineered cumulative sum of limit violations with penalty (gradient sum, gradient max). \\ $AL_{P}$ & Engineered cumulative sum of alarms with penalty (gradient sum, gradient max). \\ \hline \end{tabular} \end{table} \subsection{Model Building} We have chosen a number of machine learning algorithms that are known (cf. 
Section~\ref{sec:background}) for their robustness in classification and regression applications for health monitoring and prognosis. For regression tasks we used Linear Regression (LR), Support Vector Machines (SVM), Decision Trees (DT), Random Forest (RF), and Multilayer Perceptrons (MLP). For the classification task, we used the same algorithms except LR. Additionally, we chose Gradient Boosting Classifier (GBC), SVM with Stochastic Gradient Descent (SGD) and K-Nearest Neighbours (KNN). For all our models we used implementations from Scikit-learn \cite{pedregosa2011scikit}. The dataset is split in a way that ensures that recordings from the same productive interval (the time between two breakdowns) are always assigned to the same fold. This is necessary because data in a productive interval is autocorrelated and would reveal information on the next breakdown, if it was used for both training and testing. To ensure the reliability of our results, we applied 4-fold cross-validation and iteratively trained our models on three folds of data and tested them on the remaining fold not included in the training data set. \subsection{Benchmark Definition}\label{Benchmarks} We evaluated the effectiveness of our models in comparison to three benchmarks resembling human decision making: for the \emph{na{\"i}ve benchmark} (B1), we assume that the TTF is always constant and simply take the value of the mean TTF over all runs. The \emph{visionary benchmark (B2)} serves as an idealized reference and considers an adjusted mean TTF for each productive segment, which is unrealistic because it requires knowledge of future breakdowns. The \emph{realistic benchmark (B3)} is closer to reality and assumes that a breakdown occurs after $x$ productive hours where $x$ denotes the mean productive time between breakdowns based on historic data. Figure~\ref{fig:benchmarks} visualizes all three benchmarks. \begin{figure} \centering \includegraphics[width=\columnwidth]{sections/graphics/Benchmarks.jpg} \caption{\small{Illustration of three benchmarks used for model evaluation. The red line represents the first benchmark (mean TTFs over entire observation period); the green line the second benchmark (adjusted mean TTF for each productive interval) and the blue line represents the third benchmark (based on historic means).}} \label{fig:benchmarks} \end{figure} \subsection{Model Evaluation} We evaluate our models using standard metrics used in machine learning. For regression tasks we compute the root mean squared error (RMSE) of the predicted and the actual value. Then we compute the relative difference to our realistic benchmark B3 as follows: $ x_{rel} = (x -B3)/B3 $ with $x$ denoting the RMSE of the prediction and $B3$ denoting the RMSE of B3. Thus, a negative $x_{rel}$ characterizes an improvement over the realistic benchmark, whereas positive values denote a prediction that is less effective than the benchmark. Although the RMSE is a suitable metric for finding the prediction that is on average closest to the actual TTF curve, it does not necessarily evaluate a model's practical applicability. A constant prediction of the mean value, for instance, might result in a sound RMSE, but is less useful for maintenance purposes as no degradation resembling the decay of the TTF is shown. Therefore, we define the best useful model as the model having the lowest RMSE and showing degradation upon visual inspection of predictions. 
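For illustration, the grouped split and the relative-RMSE comparison can be sketched as follows (a simplified sketch assuming numpy arrays \texttt{X} and \texttt{y}, a \texttt{groups} vector holding the productive-interval identifier of each run, and \texttt{b3\_pred} holding the realistic-benchmark prediction; the Random Forest regressor merely stands in for any of the models listed above):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GroupKFold

def relative_rmse_cv(X, y, groups, b3_pred, n_splits=4):
    """4-fold CV keeping each productive interval in a single fold;
    returns the mean x_rel = (x - B3) / B3 (negative = better than B3)."""
    scores = []
    for train, test in GroupKFold(n_splits=n_splits).split(X, y, groups):
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X[train], y[train])
        rmse = np.sqrt(mean_squared_error(y[test], model.predict(X[test])))
        rmse_b3 = np.sqrt(mean_squared_error(y[test], b3_pred[test]))
        scores.append((rmse - rmse_b3) / rmse_b3)
    return float(np.mean(scores))
\end{verbatim}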
Furthermore, we evaluate the effectiveness of the classification task with the standard metrics precision (P), recall (R) and F1-score (a trade-off between precision and recall). For a more detailed explanation of machine learning evaluation metrics, we refer the reader to related literature such as Powers \cite{powers2011evaluation}. \section{Exploratory Analysis}\label{sec:exmploratory_analysis} Before focusing on prediction model building, we first gathered data from several nearly identically constructed plasma etching equipment and process chambers. Then we computed the TTF for each chamber, which represents the target variable for all our prediction tasks and the ground truth for validating trained models. We also conducted an exploratory analysis of available data points in order to identify possible correlations and to reduce the dimensionality of our data. In order to minimize competitive intelligence risks, all figures in this section are schematic and illustrate our findings without exposing details of the underlying manufacturing process. \subsection{Dataset Characteristics} Our dataset encompasses data recorded over a 6-month period and has been drawn from the following sources: \begin{enumerate} \item \emph{Automatic Process Control (APC)}: contains 492 distinct sensor data recordings for a single wafer from the underlying plasma etching process control system. This includes statistics of measurements, for instance relevant gas flows and voltages, as well as process-related information such as the time needed for a single etching step and the applied recipes. \item \emph{APC Limit Violations}: limits are upper- and lower-limit thresholds, which were defined by domain experts for selected APC process control parameters. Limit violations are categorized by severity (\emph{error} vs. \emph{information}) and can trigger actions ranging from automated equipment shutdowns to sending informational emails to domain experts. In total, our dataset contains recordings of 58 different limit violations. \item \emph{APC Alarms}: alarms are defined by the machine manufacturer and are categorized into five classes: \emph{warning}, \emph{information}, \emph{critical}, \emph{errors} and \emph{other}. Alarms can, analogous to limit violations, also trigger a number of possible actions, including equipment shutdowns. In total, our dataset contains recordings of 603 distinct alarms per process chamber. \item \emph{Real Time Clock (RTC)}: is a system that records state changes (e.g., standby, productive, breakdown) of plasma etching equipment and their corresponding parts. We retrieved those state changes for all equipment over the entire observation period. \item \emph{Voltage Dips}: describe the voltage reduction in the power supplies. The occurrence of this feature is very sparse. However, domain experts assume that voltage dips are problematic and can cause equipment breakdowns. \end{enumerate} \subsection{TTF Computation}\label{sec:ttf_computation} In order to build a ground truth for subsequent prediction tasks, we reverse-engineered the TTF for each process chamber by joining a machine's state from RTC with process durations extracted from APC. This metric yields a descending counter of productive hours before the next recorded breakdown. Figure~\ref{fig:breakdowns} illustrates the zigzag shape of an arbitrary TTF with four breakdowns.
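As an illustration of this reverse-engineering step, the following sketch assigns each productive run to the next recorded breakdown and accumulates the remaining productive hours. It is a simplified example rather than our actual pipeline; the column names \texttt{run\_end} and \texttt{duration\_h} are assumed placeholders for the joined RTC/APC schema.
\begin{verbatim}
import numpy as np
import pandas as pd

def compute_ttf(runs: pd.DataFrame, breakdowns: pd.Series) -> pd.DataFrame:
    """Add a descending TTF counter (productive hours to next breakdown).

    runs:       one row per productive run, with 'run_end' (timestamp)
                and 'duration_h' (productive hours of that run).
    breakdowns: sorted Series of breakdown timestamps for the chamber.
    """
    runs = runs.sort_values("run_end").copy()
    # Index of the next breakdown after each run; runs after the last
    # recorded breakdown have no target and are dropped.
    runs["segment"] = np.searchsorted(breakdowns.values, runs["run_end"].values)
    runs = runs[runs["segment"] < len(breakdowns)].copy()
    # Within a segment, the TTF is the reversed cumulative sum of the
    # productive hours of the runs remaining before the breakdown.
    runs["ttf_h"] = runs[::-1].groupby("segment")["duration_h"].cumsum()[::-1]
    return runs
\end{verbatim}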
\begin{figure} \centering \includegraphics[width=\columnwidth]{sections/graphics/pm3_breakdown_modified.jpg} \caption{\small{Time-to-Failure (y-axis) of a single chamber over the observation period (x-axis). Red marks indicate recorded breakdowns.}} \label{fig:breakdowns} \end{figure} \subsection{Analysis of APC Limit Violations} Limit violations can, as discussed before, trigger further actions. An example of a severe violation leading to an immediate shutdown is a helium flow above a certain threshold, which is typically caused by dust particles in a chamber. Other violations are only recorded as warnings and do not cause immediate actions. To understand the impact of limit violations on future process chamber breakdowns, we analyzed historical limit violation recordings and computed the median TTF per defined violation. Figure~\ref{fig:violations} shows a selection of limit violation occurrences (scatter plot) and computed median TTFs (box plots) in relation to TTF. The individual box plots are arranged by their median TTF from lowest to highest, which shows that some violations tend to occur closer to a machine's failure (TTF equals 0) than others. Limit violations with zero TTF are bound to immediate shutdown actions. Intuitively, limit violations that have a median TTF slightly above zero are the most interesting parameters as they indicate upcoming breakdowns but do not immediately cause them. \begin{figure} \centering \includegraphics[width=\columnwidth]{sections/graphics/violations_modified_without_dots.jpg} \caption{\small{Limit violation occurrences (scatter plot) in relation to TTF. Box plots show medians, colors indicate severity (orange: error; green: information).}} \label{fig:violations} \end{figure} \subsection{Analysis of APC Alarms} Plasma etching equipment can raise a number of alarms while processing a wafer. Figure~\ref{fig:alarms} illustrates APC alarm occurrences in relation to a chamber's TTF with each alarm colored by category. Analogous to the previous illustration of limit violations, it arranges alarms in ascending order by their median TTF. Thus, a lower median TTF indicates that an alarm was often raised near an upcoming breakdown. Again, we see that some alarms are more and some are less relevant for our prediction tasks and can assume that the most informative alarms show a median TTF slightly above zero. We also see that alarm severity does not necessarily correspond to the alarm categories defined by the equipment manufacturer. \begin{figure} \centering \includegraphics[width=\columnwidth]{sections/graphics/intresting_alarms_no_dots.jpg} \caption{\small{APC alarm occurrences (scatter plot) in relation to TTF. Box plots show medians, colors indicate alarm categories (red: critical; yellow: error; brown: warning; green: information; blue: other).}} \label{fig:alarms} \end{figure} \subsection{Analysis of APC Sensor Data Correlations}\label{sec:subsec_correlations} We computed correlations between APC sensor data points in order to identify linear relationships between parameters. This allowed us to reduce the dimensionality of our dataset by 87.2\% by discarding parameters that correlate strongly with others and therefore add little information. While conducting a principal component analysis, we also found that many process control data recordings strongly depend on the recipe used in an etching process run. This can also be observed when correlating two APC process variables with each other, as shown in Figure~\ref{fig:cluster-correlation}.
It shows that correlations are clustered by some external factor, which, in this case, is the configured and applied recipe. \begin{figure} \centering \includegraphics[width=\columnwidth]{sections/graphics/cluster-volt-temp-correlation.jpg} \caption{\small{Illustration of APC process variable correlations. Arrows show positive (green) or negative (red) correlations.}} \label{fig:cluster-correlation} \end{figure} \section{Introduction} Plasma etching is a key procedure in semiconductor wafer fabrication and can, in case of equipment failures or breakdowns, lead to significant production losses. Therefore, maintenance of plasma etching equipment, which aims at minimizing unplanned breakdowns, has become a crucial task. Recently, there has been a clear shift from reactive maintenance planning strategies such as \emph{Run-to-Failure} or \emph{Scheduled Maintenance} to a more proactive strategy, which is called \emph{Predictive Maintenance (PdM)}. The goal of that strategy is to monitor the health state of the equipment and to predict upcoming failures by estimating the \emph{Time-to-Failure (TTF)} before the next breakdown \cite{matyas2018instandhaltungslogistik}. Known predictive maintenance approaches can roughly be categorized into \emph{model-based} and \emph{data-driven} methods~\cite{susto2015machine}. Model-based methods rely on domain expertise and knowledge about the physical model of a system in order to predict its degrading behavior. Data-driven approaches, on the other hand, are used when it is not possible to draw a complete picture of a system's physical properties and behaviors. They usually employ machine learning techniques to model and detect changes in machine behavior. Their effectiveness heavily depends on so-called \emph{Health Indicators}~\cite{Guo2017HIRUL}, which are quantitative features extracted from available sensor, product quality, and production process data. A selection of relevant features is then used to train a model that describes a machine's health degradation to eventually estimate its remaining lifetime. A number of studies have already focused on prediction tasks in the plasma etching context: Cheng et al.~\cite{Cheng2003} developed a fault detection and isolation system for plasma etching process chambers. Luo et al.~\cite{luo2015data} used information about a chamber's contamination and employed neural networks for predicting the degradation in semiconductor manufacturing processes. Puggini and McLoone~\cite{puggini2015} applied \emph{Extreme Learning Machines} to predict the etch rate of each wafer. Munirathinam et al.~\cite{munirathinam2016predictive} predicted a machine's state for maintenance scheduling by focusing on product quality parameters. In summary, existing work focuses on a specific fault type or specific physical properties such as the thickness of the walls caused by particle contamination. However, none of those approaches focuses on predicting the TTF of an entire plasma-etching chamber without directly measuring its health status. Therefore, we studied three different TTF prediction approaches in the context of plasma etching equipment and can summarize our contributions as follows: \begin{enumerate} \item \emph{Task 1}: We modeled TTF prediction as a \emph{regression task} and found that a simple Linear Regression model can be trained to predict the TTF trend and outperform a comparable benchmark resembling human judgment.
\item \emph{Task 2}: We demonstrated that Task 1 can be transformed into a more effective health state prediction task (\emph{regression}) by converting the TTF target variable. \item \emph{Task 3}: We modeled TTF prediction as a \emph{classification task} in order to predict whether breakdowns will occur within defined intervals (e.g., upcoming 0-8h, 8-16h, etc.). We showed that standard machine learning algorithms can outperform comparable benchmarks. \end{enumerate} All three tasks showed that alarm data, which is defined by the equipment manufacturer, and sensor data limit violations, which are defined by experts operating this equipment, provide the most informative features. We also found that prediction effectiveness is higher in 50-200h intervals than shortly before breakdowns (0-50h). In the following, we briefly introduce related background information on plasma etching and data-driven predictive maintenance approaches (Section~\ref{sec:background}). Then, in Section~\ref{sec:exmploratory_analysis}, we present key findings of our exploratory analysis before describing our experimental setup in Section~\ref{sec:experiments} and our results in Section~\ref{sec:results}. \section{Results}\label{sec:results} In the following, we present the results for three types of models we built for predicting the TTF for plasma etching equipment. \subsection{TTF Prediction (Regression)} The main goal of this task is to predict the remaining time to a machine's failure. In Section~\ref{sec:ttf_computation} we explained the calculation of the zigzag shaped TTF curve, which is the target variable of this task. To find the most informative set (or sets) of features, we used different feature set combinations as input to our regression models with the objective of minimizing RMSE. Afterwards, we compared the results of each experiment with our previously defined benchmarks. \begin{table} \centering \caption{\small{Results of TTF prediction with \(B3_{RMSE} = 223.96\).}} \label{tb:chamber-part-based-RMSE} \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Features & B1 & B2 & B3 & LR & SVM & MLP & Tree & RF \\ \hline FS1 & \multirow{7}{*}{-0.29} & \multirow{7}{*}{-0.49} & \multirow{7}{*}{0} & -0.36 & -0.40 & -0.34 & -0.42 & -0.33 \\ FS2 & & & & -0.31 & -0.40 & -0.3 & -0.18 & -0.29 \\ FS3 & & & & -0.26 & -0.40 & -0.27 & -0.08 & -0.31 \\ FS4 & & & & -0.24 & -0.40 & -0.35 & -0.07 & -0.31 \\ FS5 & & & & -0.26 & -0.40 & -0.29 & -0.05 & -0.31 \\ FS6 & & & & -0.28 & -0.42 & -0.31 & -0.08 & -0.28 \\ FS7 & & & & -0.28 & -0.42 & -0.28 & -0.05 & -0.28 \\ \hline \end{tabular} } \end{table} Table~\ref{tb:chamber-part-based-RMSE} presents the results of our experiments for one plasma etching chamber after cleaning erroneous breakdown recordings. It shows that Support Vector Machines (SVM) outperformed the other models in terms of RMSE. However, when inspecting predictions visually, we observed that Linear Regression (LR) had the lowest RMSE among the models that show a degradation in their predictions. Trained with FS1 it outperformed B3 by 36\%, showed 7\% improvement over B1, and was 13\% worse than the second, visionary benchmark B2. In summary, our first experiments on a single process chamber showed that a trained regression model can predict the degrading trend of a TTF curve, albeit with a relatively high RMSE.
When analyzing the errors, we found that some sequences of relatively short consecutive breakdowns were, according to domain experts' opinions, erroneous recordings caused by equipment starts and almost immediate shutdowns within maintenance operations. \subsection{Health State Prediction (Regression)} The goal of this task is to predict a machine's health status. For this purpose, we transformed the target variable (TTF) into a range between 0 and 1 (see Figure~\ref{fig:scaled_TTF_health_state}), where a health state of 1 is considered healthy and 0 a failure (breakdown). All other pre-processing steps were the same as in the first task. \begin{figure} \centering \includegraphics[width=\columnwidth]{sections/graphics/Transformed_TTF.jpg} \caption{\small{Illustration of the time-to-failure in hours and the corresponding machine health state.}} \label{fig:scaled_TTF_health_state} \end{figure} Table~\ref{tb:chamber-part-based-Health} shows that Multilayer Perceptron (MLP) models outperform all benchmarks on every feature set combination, as does Linear Regression (LR) trained on FS1; visual inspection of predictions indicates degradation in both model families. Figure~\ref{fig:health-state-visualization} illustrates health state prediction results of a trained Linear Regression model from one cross-validation fold. \begin{table} \caption{\small{Results of Machine Health prediction with \(B3_{RMSE} = 0.36\).}} \label{tb:chamber-part-based-Health} \resizebox{\columnwidth}{!}{ \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Features & B1 & B2 & B3 & LR & SVM & MLP & Tree & RF \\ \hline FS1 & \multirow{7}{*}{-0.19} & \multirow{7}{*}{-0.19} & \multirow{7}{*}{0} & -0.22 & -0.19 & -0.22 & -0.03 & -0.19 \\ FS2 & & & & -0.17 & -0.19 & -0.28 & -0.03 & -0.19 \\ FS3 & & & & -0.11 & -0.19 & -0.25 & 0.11 & -0.17 \\ FS4 & & & & -0.11 & -0.19 & -0.25 & 0.11 & -0.19 \\ FS5 & & & & -0.11 & -0.19 & -0.25 & 0.11 & -0.17 \\ FS6 & & & & -0.14 & -0.22 & -0.25 & 0.08 & -0.19 \\ FS7 & & & & -0.14 & -0.22 & -0.25 & 0.06 & -0.19 \\ \hline \end{tabular} } \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{sections/graphics/1_prediction_vs_benchmarks} \caption{\small{Machine's health state prediction result of a Linear Regression model from one cross-validation fold.}} \label{fig:health-state-visualization} \end{figure} \subsection{TTF Prediction (Classification)} \label{Task3} In this task, we consider TTF prediction as a binary classification task that predicts whether a machine will face a breakdown within a predefined time interval between $0$ and $x$, where $x \in \{8h, 16h, 24h, 48h, 72h, 96h, 120h, 144h, 168h, 336h \}$. For a given interval, a run is labeled true when its TTF lies within that interval; otherwise it is labeled false. Figure~\ref{fig:fscore-visualization} shows the most effective (highest F1-score) model and feature set combination for each interval. We can clearly observe that trained models can outperform the realistic benchmark (B3) in all intervals. Furthermore, we see that prediction models trained for shorter intervals (e.g., 0-8h) have a lower F1-score than those for longer intervals. This reflects our findings from the previous regression tasks, which also yielded higher RMSE shortly before breakdowns. \begin{figure} \centering \includegraphics[width=\columnwidth]{sections/graphics/F1_long.jpg} \caption{\small{F1-score values in multiple experiments.
Each selected feature set is indicated by a different color and each model by a different marker shape.}} \label{fig:fscore-visualization} \end{figure} A summary of our experimental classification results is presented in Table~\ref{tb:classification-results}, which compares the effectiveness of each interval-specific trained model to the corresponding interval-specific benchmark. As in Tasks 1 and 2, we can observe the importance of alarms and limit violations for shorter prediction intervals, as they are included in feature sets FS4, FS5 and FS7 but not in FS1. \begin{table}[h] \caption{\small{Summary of best classification results per interval}} \label{tb:classification-results} \resizebox{\columnwidth}{!}{ \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Interval}} & \multicolumn{3}{c|}{Realistic Benchmark} & \multicolumn{5}{c|}{Best Results} \\ \cline{2-9} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{{P}} & \multicolumn{1}{c|}{{R}} & \multicolumn{1}{c|}{{F1}} & \multicolumn{1}{c|}{Features} & \multicolumn{1}{c|}{Model} & \multicolumn{1}{c|}{P} & \multicolumn{1}{c|}{R} & \multicolumn{1}{c|}{F1} \\ \hline 0-8 h & 0.09 & 0.35 & 0.15 & FS4 & Tree & 0.25 & 0.36 & 0.29 \\ 0-16 h & 0.17 & 0.35 & 0.23 & FS5 & Tree & 0.36 & 0.53 & 0.42 \\ 0-24 h & 0.24 & 0.37 & 0.29 & FS7 & SVM & 0.39 & 0.75 & 0.51 \\ 2 days & 0.41 & 0.39 & 0.4 & FS7 & SVM & 0.59 & 0.76 & 0.66 \\ 3 days & 0.54 & 0.41 & 0.46 & FS1 & GBC & 0.68 & 0.91 & 0.78 \\ 4 days & 0.64 & 0.43 & 0.51 & FS1 & GBC & 0.74 & 0.96 & 0.83 \\ 5 days & 0.72 & 0.44 & 0.55 & FS1 & GBC & 0.79 & 0.98 & 0.88 \\ 6 days & 0.8 & 0.46 & 0.58 & FS1 & GBC & 0.84 & 0.99 & 0.91 \\ 7 days & 0.85 & 0.47 & 0.6 & FS1 & GBC & 0.88 & 0.99 & 0.93 \\ 14 days & 1 & 0.47 & 0.64 & FS6 & MLP & 1 & 1 & 1 \\ \hline \end{tabular} } \end{table}
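For reference, the binary labeling underlying these interval-specific classifiers can be sketched as follows. This is a minimal illustration only; the function name and example values are placeholders, and the threshold list mirrors the intervals defined above.
\begin{verbatim}
import numpy as np

# Prediction horizons in hours, as used for the classification task.
INTERVALS_H = [8, 16, 24, 48, 72, 96, 120, 144, 168, 336]

def label_runs(ttf_hours, horizon):
    """A run is labeled True iff its TTF lies within [0, horizon] hours."""
    ttf = np.asarray(ttf_hours, dtype=float)
    return (ttf >= 0) & (ttf <= horizon)

# Example: labels for the 0-24h classifier.
labels_24h = label_runs([3.5, 30.0, 12.0, 250.0], horizon=24)
\end{verbatim}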
\section{Introduction} Axial algebras are a new class of non-associative algebras introduced recently by Hall, Rehren and Shpectorov \cite{Axial1} as a broad generalization of the class of Majorana algebras of Ivanov \cite{I09}. The key features of these algebras come from the theory of vertex operator algebras (VOAs), which first arose in connection with 2D conformal field theory and which were used by Frenkel, Lepowsky and Meurman \cite{FLM} in their construction of the moonshine VOA $V^\natural$ whose automorphism group is the Monster $M$, the largest sporadic finite simple group. The rigorous theory of VOAs was developed by Borcherds \cite{B86} as part of his proof of the monstrous moonshine conjecture. Roughly speaking, VOAs are infinite-dimensional graded vector spaces $V = \bigoplus_{i=0}^\infty V_i$ with infinitely many products linked in an intricate way. The Monster was originally constructed by Griess \cite{G82} as the automorphism group of a $196,883$-dimensional non-associative real algebra, called the Griess algebra, and the Moonshine VOA $V^\natural$ contains a unital deformation of the Griess algebra as its weight $2$ part $V_2^\natural$. One of the key properties that axial algebras axiomatise was first observed in VOAs by Miyamoto \cite{Miy96}. He showed that one can associate involutory automorphisms $\tau_a$ of a VOA $V$, called \emph{Miyamoto involutions}, to special conformal vectors $a$ in $V_2$ called \emph{Ising vectors} \cite{Miy96}. Moreover, in the Moonshine VOA, $\frac{a}{2}$ is an idempotent in the Griess algebra $V_2^\natural$, called a \emph{$2\mathrm{A}$-axis} because the corresponding involution $\tau_a$ lies in the class $2A$ of the Monster $M$. The subalgebras of the Griess algebra generated by two $2\mathrm{A}$-axes, which we call \emph{dihedral subalgebras}, were first studied by Norton \cite{C85}. He showed that the isomorphism class of the dihedral subalgebra generated by $2A$-axes $a$ and $b$ is determined by the conjugacy class of the product $\tau_a \tau_b$. There are nine classes in $M$ containing products of two $2A$ involutions, labelled $1\textup{A}$, $2\textup{A}$, $2\textup{B}$, $3\textup{A}$, $3\textup{C}$, $4\textup{A}$, $4\textup{B}$, $5\textup{A}$ and $6\textup{A}$. Remarkably, Sakuma \cite{sakuma} showed that each sub-VOA generated by two Ising vectors is also one of nine isomorphism types. Therefore, the above nine classes in $M$ are used as labels for the $2$-generated VOAs arising in Sakuma's theorem. Sakuma's result was extended to Majorana algebras in \cite{IPSS} and later to axial algebras with the Monster fusion law and a Frobenius form\footnote{Franchi, Mainardis and Shpectorov announced at the Axial Algebra Focused Workshop in Bristol in May 2018 that the Frobenius form condition has been removed.} in \cite{Axial1}. Majorana algebras were introduced by Ivanov \cite{I09} to abstract the properties of $2A$-axes. Axial algebras provide a further broad generalisation removing the less essential restrictions of Majorana algebras. An \emph{axial algebra} is a commutative non-associative algebra generated by \emph{axes}, that is, primitive semisimple idempotents whose adjoint eigenvectors multiply according to a certain fusion law. We say that an axial algebra is of \emph{Monster type} if its fusion law is the Monster fusion law (see Table \ref{tab:monsterfusion}). For the exact details see Section \ref{sec:background}. A Majorana algebra is then an axial algebra of Monster type which satisfies some additional conditions.
Whenever the fusion law is $T$-graded, where $T$ is an abelian group, associated to each axis $a$ we get an automorphism $\tau_a(\chi)$ for every linear character $\chi\in T^*$. We define $T_a=\langle\tau_a(\chi):\chi\in T^*\rangle$, which has size at most $|T|$. The group generated by the $T_a$ for all axes $a$ is called the \emph{Miyamoto group}. For the important motivating example of the Griess algebra, the fusion law is $\mathbb{Z}_2$-graded and so, for every axis $a$, there is an involutory automorphism $\tau_a:=\tau_a(\chi_{-1})$ corresponding to the unique non-trivial character $\chi_{-1}$ of $\mathbb{Z}_2$. The Miyamoto group generated by all the $\tau_a$ is the Monster $M$ and the $\tau_a$ are the whole $2A$ conjugacy class. Another example of a class of axial algebras with a different fusion law is the class of algebras of Jordan type, comprising Matsuo algebras, whose Miyamoto groups are $3$-transposition groups, and Jordan algebras, whose Miyamoto groups include classical groups and groups of exceptional Lie type $F_4$ and $G_2$. \begin{problem*} For a given fusion law, which groups $G$ occur as the Miyamoto group of an axial algebra? \end{problem*} Seress \cite{seress} addressed this question for the class of Majorana algebras by developing an algorithm that computes, for a given $6$-transposition group $G$, possible $2$-closed Majorana algebras. (An axial algebra is $2$-closed if it is spanned by axes and by products of two axes.) He also provided a GAP implementation of his algorithm. However, his code was lost when he sadly died. Pfeiffer and Whybrow \cite{Maddycode} have recently developed a new and improved GAP implementation of Seress' algorithm. In this paper we describe a new algorithm for addressing the above question and present results obtained using a {\sc magma} implementation \cite{ParAxlAlg, magma} of this algorithm. The new algorithm differs from Seress' algorithm in several key ways. Our algorithm works for a general axial algebra over an arbitrary field with any $T$-graded fusion law, rather than just for the Monster fusion law (which is $\mathbb{Z}_2$-graded) over $\mathbb{R}$. Crucially, we do not assume that the algebra is $2$-closed. Indeed, we find quite a few examples that are not $2$-closed. We also do not assume that the algebra has an associating bilinear form (a \emph{Frobenius form}), whereas Seress assumes this and also that the form is positive definite. We do not assume the so-called 2Aa, 2Ab, 3A, 4A, 5A conditions (see \cite[page 314]{seress}) which restrict the configuration of the dihedral subalgebras. Finally, we do not require that the axes $a$ be in bijection with the axis subgroups $T_a$. Let $\mathcal{F}$ be a $T$-graded fusion law and $G$ be a group acting on a set $X$. We aim to build an axial algebra where the action on the axes by (a supergroup of) the Miyamoto group is given by the action of $G$ on $X$. In Section \ref{sec:shape} we rigorously define admissible $\tau$-maps and the shape of an algebra. Roughly speaking, $\tau\colon X\times T^*\to G_0\leq G$ is an \emph{admissible $\tau$-map} if it has the same properties as the map $(a,\chi)\mapsto\tau_a(\chi)$ arising in an axial algebra. The subgroup $G_0\unlhd G$ generated by the image of this map will be our Miyamoto group. The \emph{shape} is a choice of $2$-generated subalgebra for each pair of axes $a,b\in X$.
Since the isomorphism class of $2$-generated subalgebras is preserved under automorphisms, in particular, under the action of the Miyamoto group, we need only make one choice for each conjugacy class of pairs of axes. In fact, there are some additional constraints on the shape given by containment of $2$-generated subalgebras in one another as described in Section \ref{sec:shape}. Our algorithm takes $\mathcal{F}$, $G$, $X$, $\tau$ and the shape as its input. We show the following: \begin{theorem*} Suppose that the algorithm terminates and returns $A$. Then $A$ is a \textup{(}not necessarily primitive\textup{)} axial algebra generated by axes $X$ with Miyamoto group $G_0$, $\tau$-map $\tau$ and of the given shape. Moreover, the algebra $A$ is universal. That is, given any other axial algebra $B$ with the same axes $X$, Miyamoto group $G_0$, $\tau$-map $\tau$ and shape, $B$ is a quotient of $A$. \end{theorem*} We find several new examples of axial algebras with the Monster fusion law. Some of these are $3$-closed examples (in fact we find some examples which are $5$-closed), but we also find many examples that do not satisfy the so-called M8-condition. This condition severely restricts the allowable intersections of certain dihedral subalgebras in the shape. We also see in our results several shapes which do not satisfy the 2Aa, 2Ab, 3A, 4A, 5A conditions (see Section \ref{sec:dihedral}), but still produce good axial algebras. Interestingly, all the algebras we construct have a Frobenius form which is non-zero on the axes and invariant under the action of the Miyamoto group, even though we do not require this in our algorithm. Moreover, in all our examples, the form is positive semi-definite. It is known that axial algebras of Jordan type (those with three eigenvalues, $1$, $0$ and $\eta$) all have Frobenius forms and it has previously been observed that the other known examples also have Frobenius forms. Such a form, if it exists, is uniquely determined by its values on the axes. So we make the following conjecture. \begin{conjecture*} All primitive axial algebras of Monster type admit a Frobenius form which is non-zero on the axes and invariant under the action of the Miyamoto group. \end{conjecture*} The structure of the paper is as follows. In Section \ref{sec:background}, we define axial algebras and discuss various properties such as Miyamoto involutions and dihedral subalgebras. We define the shape of an algebra in Section \ref{sec:shape}. Section \ref{sec:preliminaries} gives some lemmas and further properties of axial algebras which we will need. Our main result is the algorithm which is described in Section \ref{sec:algorithm}. Finally, in Section \ref{sec:results}, we present examples computed by our {\sc magma} implementation of the algorithm. \medskip We thank Simon Peacock for some useful comments on an early draft of this paper. \section{Background}\label{sec:background} We will review the definition and some properties of axial algebras which were first introduced by Hall, Rehren and Shpectorov in \cite{Axial1}. We will pay particular attention to the motivating examples coming from the Monster sporadic finite simple group and also indicate the extra conditions for such an axial algebra to be a Majorana algebra. \begin{definition} Let $\mathbb{F}$ be a field, $\mathcal{F}\subseteq\mathbb{F}$ a subset, and $\star\colon\mathcal{F}\times\mathcal{F}\to 2^{\mathcal{F}}$ a symmetric binary operation. We call the pair $(\mathcal{F},\star)$ a \emph{fusion law over $\mathbb{F}$}.
A single instance $\lambda\star\mu$ is called a \emph{fusion rule}. \end{definition} Abusing notation, we will often just write $\mathcal{F}$ for $(\mathcal{F},\star)$. We can also extend the operation $\star$ to subsets $I,J\subseteq\mathcal{F}$ in the obvious way: $I\star J$ is the union of all $\mu\star\nu$ for $\mu\in I$ and $\nu\in J$. We note that after extending the operation, $(2^\mathcal{F},\star)$ is closed and so is a commutative magma. We will further abuse notation and mix subsets and elements. Let $A$ be a commutative non-associative (i.e.\ not-necessarily-associative) algebra over $\mathbb{F}$. For an element $a\in A$, the \emph{adjoint endomorphism} $\ad_a\colon A\to A$ is defined by $\ad_a(v):=av$, for all $v \in A$. Let $\Spec(a)$ be the set of eigenvalues of $\ad_a$, and for $\lambda\in\Spec(a)$, let $A_\lambda^a$ be the $\lambda$-eigenspace of $\ad_a$. Where the context is clear, we will write $A_\lambda$ for $A_\lambda^a$. We will also adopt the convention that for subsets $I\subseteq\mathcal{F}$, $A_I:=\bigoplus_{\lambda\in I}A_\lambda$. \begin{definition}\label{axialalgebra} Let $(\mathcal{F},\star)$ be a fusion law over $\mathbb{F}$. An element $a\in A$ is an \emph{$\mathcal{F}$-axis} if the following hold: \begin{enumerate} \item $a$ is \emph{idempotent} (i.e.\ $a^2=a$); \item $a$ is \emph{semisimple} (i.e.\ the adjoint $\ad_a$ is diagonalisable); \item $\Spec(a)\subseteq\mathcal{F}$ and $A_\lambda A_\mu\subseteq A_{\lambda\star\mu}$ for all $\lambda,\mu\in\Spec(a)$. \end{enumerate} Furthermore, we say that the $\mathcal{F}$-axis $a$ is \emph{primitive} if $A_1=\langle a\rangle$. \end{definition} Note that, when $\Spec(a)\neq\mathcal{F}$, we can still talk of $A_\lambda^a$ for all $\lambda\in\mathcal{F}$: if $\lambda\notin\Spec(a)$ then $A_\lambda^a=0$. With this understanding, the last condition means that $A_\lambda A_\mu\subseteq A_{\lambda\star\mu}$ for all $\lambda,\mu\in\mathcal{F}$. \begin{definition} An \emph{$\mathcal{F}$-axial algebra} is a pair $(A,X)$ such that $A$ is a commutative non-associative algebra and $X$ is a set of $\mathcal{F}$-axes generating $A$. An axial algebra is \emph{primitive} if it is generated by primitive axes. \end{definition} Where the fusion law is clear from context, we will drop the $\mathcal{F}$ and simply use the terms \emph{axis} and \emph{axial algebra}. Although an axial algebra has a distinguished generating set $X$, we will abuse the above notation and just write $A$ for the pair $(A,X)$. Note that it has been usual in the literature to drop the adjective primitive and consider only primitive axial algebras. The fusion law over $\mathbb{R}$ associated to the Monster is given by Table \ref{tab:monsterfusion}. \begin{table}[!htb] \setlength{\tabcolsep}{4pt} \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{c||c|c|c|c} & $1$ & $0$ & $\frac{1}{4}$ & $\frac{1}{32}$ \\ \hline \hline $1$ & $1$ & & $\frac{1}{4}$ & $\frac{1}{32}$ \\ \hline $0$ & & $0$ &$\frac{1}{4}$ & $\frac{1}{32}$ \\ \hline $\frac{1}{4}$ & $\frac{1}{4}$ & $\frac{1}{4}$ & $1, 0$ & $\frac{1}{32}$ \\ \hline $\frac{1}{32}$ & $\frac{1}{32}$ & $\frac{1}{32}$ & $\frac{1}{32}$ & $1, 0, \frac{1}{4}$ \end{tabular} \caption{Monster fusion law}\label{tab:monsterfusion} \end{table} This fusion law is exhibited by the so-called $2A$-axes in the Griess algebra. Indeed, noting that these generate the Griess algebra shows that it is an axial algebra. We say that an axial algebra is of \emph{Monster type} if it is an axial algebra with the Monster fusion law.
By definition, an axial algebra $A$ is spanned by products of the axes. We say that $A$ is \emph{$m$-closed} if $A$ is spanned by products of length at most $m$ in the axes. \begin{definition} A \emph{Frobenius form} on an axial algebra $A$ is a non-zero (symmetric) bilinear form $(\cdot,\cdot)\colon A\times A\to\mathbb{F}$ such that the form associates with the algebra product. That is, for all $x,y,z\in A$, \[ (x,yz)=(xy,z). \] \end{definition} We will be particularly interested in Frobenius forms such that $(a,a)\neq 0$, for all $a\in X$. That is, they are non-zero on the set of axes $X$. Note that an associating bilinear form on an axial algebra is necessarily symmetric \cite[Proposition 3.5]{Axial1}. Also, the eigenspaces for an axis in an axial algebra are perpendicular with respect to the Frobenius form. \begin{lemma}\label{formunique}\textup{\cite[Lemma 4.17]{axialstructure}} Suppose that $A$ is a primitive axial algebra admitting a Frobenius form. Then the form is uniquely determined by the values $(a,a)$ on the axes $a\in X$. \end{lemma} Majorana algebras were introduced by Ivanov by generalising certain properties found in subalgebras of the Griess algebra \cite{I09}. Axial algebras were developed as a generalisation of Majorana algebras, so Majorana algebras can be thought of as the precursor of axial algebras. As such, we can give a definition of them in terms of axial algebras. \begin{definition} A \emph{Majorana algebra} is a primitive axial algebra $A$ of Monster type over $\mathbb{R}$ such that \begin{enumerate} \item[M$1$] $A$ has a positive definite Frobenius form $(\cdot,\cdot)$; furthermore, $(a,a)=1$ for every axis $a$. \item[M$2$] \emph{Norton's inequality} holds. That is, for all $x,y\in A$, \[ (x\cdot y,x\cdot y)\leq(x\cdot x,y\cdot y). \] \end{enumerate} \end{definition} In some papers, the M2 axiom is not assumed and in others additional axioms on the subalgebras are assumed such as the $\textup{M}8$ axiom, which we will explain later in Section \ref{sec:dihedral}. \subsection{Gradings and automorphisms} The key property that axial algebras and Majorana algebras generalise from the Griess algebra is that there is a natural link between involutory automorphisms and axes. This link occurs precisely when we have a graded fusion law. \begin{definition} The fusion law $\mathcal{F}$ is \emph{$T$-graded}, where $T$ is a finite abelian group, if $\mathcal{F}$ has a partition $\mathcal{F}=\cup_{t\in T}\mathcal{F}_t$ such that \[ \mathcal{F}_s\star\mathcal{F}_t\subseteq\mathcal{F}_{st} \] for all $s,t\in T$. \end{definition} Note that, in the same way as we allow trivial eigenspaces, we also allow empty parts in the partition in the above definition. Let $A$ be an algebra and $a\in A$ an $\mathcal{F}$-axis (we do not require $A$ to be an axial algebra here). If $\mathcal{F}$ is $T$-graded, then this induces a \emph{$T$-grading} on $A$ with respect to the axis $a$. The weight $t$ subspace $A_t$ of $A$ is \[ A_t=A_{\mathcal{F}_t}=\bigoplus_{\lambda\in\mathcal{F}_t}A_\lambda. \] This leads to automorphisms of the algebra. Let $T^*$ denote the group of linear characters of $T$. That is, the homomorphisms from $T$ to $\mathbb{F}^\times$. For $\chi\in T^*$, we define a map $\tau_a(\chi)\colon A\to A$ by \[ v\mapsto\chi(t) v \] for $v\in A_t$ and extend linearly to $A$. Since $A$ is $T$-graded, this map $\tau_a(\chi)$ is an automorphism of $A$. Furthermore, the map sending $\chi$ to $\tau_a(\chi)$ is a homomorphism from $T^*$ to $\Aut(A)$. 
The subgroup $T_a:=\mathrm{Im}(\tau_a)$ of $\Aut(A)$ is called the \emph{axis subgroup} corresponding to $a$. We are particularly interested in $\mathbb{Z}_2$-graded fusion laws. In this case, we write $\mathbb{Z}_2$ as $\{+,-\}$ with the usual multiplication of signs. For example, the Monster fusion law $\mathcal{F}$ is $\mathbb{Z}_2$-graded where $\mathcal{F}_+=\{1,0,\frac{1}{4}\}$ and $\mathcal{F}_-=\{\frac{1}{32}\}$. When the fusion law is $\mathbb{Z}_2$-graded and $\mathrm{char}(\mathbb{F})\neq 2$, $T^*=\{\chi_1,\chi_{-1}\}$, where $\chi_1$ is the trivial character and $\chi_{-1}$ is the alternating character of $T=\mathbb{Z}_2$. Here the axis subgroup contains just one non-trivial automorphism, $\tau_a:=\tau_a(\chi_{-1})$. We call this the \emph{axial involution}, or \emph{Miyamoto involution}, associated to $a$. It is given by the linear extension of \[ v^{\tau_a}=\begin{cases} v&\mbox{if }v\in A_+;\\ -v&\mbox{if }v\in A_-. \end{cases} \] Let $Y\subseteq X$ be a set of axes in $A$. We define \[ G(Y):=\langle T_a:a\in Y\rangle. \] We call $G(X)$ the \emph{Miyamoto group}. For a subset $Y\subseteq X$ of axes, we define $\overline{Y}=Y^{G(Y)}$. By \cite[Lemma 3.5]{axialstructure}, $G(\overline{Y})=G(Y)$ and so $\overline{Y}^{G(\overline{Y})}=\overline{Y}$. We call $\overline{Y}$ the \emph{closure} of $Y$ and we say that $Y$ is \emph{closed} if $Y=\overline{Y}$. \begin{table}[p] \setlength{\tabcolsep}{4pt} \renewcommand{\arraystretch}{1.5} \centering \footnotesize \begin{tabular}{c|c|c} Type & Basis & Products \& form \\ \hline $2\textrm{A}$ & \begin{tabular}[t]{c} $a_0$, $a_1$, \\ $a_\rho$ \end{tabular} & \begin{tabular}[t]{c} $a_0 \cdot a_1 = \frac{1}{8}(a_0 + a_1 - a_\rho)$ \\ $a_0 \cdot a_\rho = \frac{1}{8}(a_0 + a_\rho - a_1)$ \\ $(a_0, a_1) = (a_0, a_\rho)= (a_1, a_\rho) = \frac{1}{8}$ \vspace{4pt} \end{tabular} \\ $2\textrm{B}$ & $a_0$, $a_1$ & \begin{tabular}[t]{c} $a_0 \cdot a_1 = 0$ \\ $(a_0, a_1) = 0$ \vspace{4pt} \end{tabular} \\ $3\textrm{A}$ & \begin{tabular}[t]{c} $a_{-1}$, $a_0$, \\ $a_1$, $u_\rho$ \end{tabular} & \begin{tabular}[t]{c} $a_0 \cdot a_1 = \frac{1}{2^5}(2a_0 + 2a_1 + a_{-1}) - \frac{3^3\cdot5}{2^{11}} u_\rho$ \\ $a_0 \cdot u_\rho = \frac{1}{3^2}(2a_0 - a_1 - a_{-1}) + \frac{5}{2^{5}} u_\rho$ \\ $u_\rho \cdot u_\rho = u_\rho$, $(a_0, a_1) = \frac{13}{2^8}$ \\ $(a_0, u_\rho) = \frac{1}{4}$, $(u_\rho, u_\rho) = \frac{2^3}{5}$ \vspace{4pt} \end{tabular} \\ $3\textrm{C}$ & \begin{tabular}[t]{c} $a_{-1}$, $a_0$, \\ $a_1$ \end{tabular} & \begin{tabular}[t]{c} $a_0 \cdot a_1 = \frac{1}{2^6}(a_0 + a_1 - a_{-1})$ \\ $(a_0, a_1) = \frac{1}{2^6}$ \vspace{4pt} \end{tabular} \\ $4\textrm{A}$ & \begin{tabular}[t]{c} $a_{-1}$, $a_0$, \\ $a_1$, $a_2$ \\ $v_\rho$ \end{tabular} & \begin{tabular}[t]{c} $a_0 \cdot a_1 = \frac{1}{2^6}(3a_0 + 3a_1 - a_{-1} - a_2 - 3v_\rho)$ \\ $a_0 \cdot v_\rho = \frac{1}{2^4}(5a_0 - 2a_1 - a_2 - 2a_{-1} + 3v_\rho)$ \\ $v_\rho \cdot v_\rho = v_\rho$, $a_0 \cdot a_2 = 0$ \\ $(a_0, a_1) = \frac{1}{2^5}$, $(a_0, a_2) = 0$\\ $(a_0, v_\rho) = \frac{3}{2^3}$, $(v_\rho, v_\rho) = 2$ \vspace{4pt} \end{tabular} \\ $4\textrm{B}$ & \begin{tabular}[t]{c} $a_{-1}$, $a_0$, \\ $a_1$, $a_2$ \\ $a_{\rho^2}$ \end{tabular} & \begin{tabular}[t]{c} $a_0 \cdot a_1 = \frac{1}{2^6}(a_0 + a_1 - a_{-1} - a_2 + a_{\rho^2})$ \\ $a_0 \cdot a_2 = \frac{1}{2^3}(a_0 + a_2 - a_{\rho^2})$ \\ $(a_0, a_1) = \frac{1}{2^6}$, $(a_0, a_2) = (a_0, a_{\rho^2})= \frac{1}{2^3}$ \vspace{4pt} \end{tabular} \\ $5\textrm{A}$ & \begin{tabular}[t]{c} $a_{-2}$, $a_{-1}$,\\ $a_0$, $a_1$,\\ $a_2$, 
$w_\rho$ \end{tabular} & \begin{tabular}[t]{c} $a_0 \cdot a_1 = \frac{1}{2^7}(3a_0 + 3a_1 - a_2 - a_{-1} - a_{-2}) + w_\rho$ \\ $a_0 \cdot a_2 = \frac{1}{2^7}(3a_0 + 3a_2 - a_1 - a_{-1} - a_{-2}) - w_\rho$ \\ $a_0 \cdot w_\rho = \frac{7}{2^{12}}(a_1 + a_{-1} - a_2 - a_{-2}) + \frac{7}{2^5}w_\rho$ \\ $w_\rho \cdot w_\rho = \frac{5^2\cdot7}{2^{19}}(a_{-2} + a_{-1} + a_0 + a_1 + a_2)$ \\ $(a_0, a_1) = \frac{3}{2^7}$, $(a_0, w_\rho) = 0$, $(w_\rho, w_\rho) = \frac{5^3\cdot7}{2^{19}}$ \vspace{4pt} \end{tabular} \\ $6\textrm{A}$ & \begin{tabular}[t]{c} $a_{-2}$, $a_{-1}$,\\ $a_0$, $a_1$,\\ $a_2$, $a_3$ \\ $a_{\rho^3}$, $u_{\rho^2}$ \end{tabular} & \begin{tabular}[t]{c} $a_0 \cdot a_1 = \frac{1}{2^6}(a_0 + a_1 - a_{-2} - a_{-1} - a_2 - a_3 + a_{\rho^3}) + \frac{3^2\cdot5}{2^{11}}u_{\rho^2}$ \\ $a_0 \cdot a_2 = \frac{1}{2^5}(2a_0 + 2a_2 + a_{-2}) - \frac{3^3\cdot5}{2^{11}}u_{\rho^2}$ \\ $a_0 \cdot u_{\rho^2} = \frac{1}{3^2}(2a_0 - a_2 + a_{-2}) + \frac{5}{2^5}u_{\rho^2}$ \\ $a_0 \cdot a_3 = \frac{1}{2^3}(a_0 + a_3 - a_{\rho^3})$, $a_{\rho^3} \cdot u_{\rho^2} = 0$\\ $(a_0, a_1) = \frac{5}{2^8}$, $(a_0, a_2) = \frac{13}{2^8}$ \\ $(a_0, a_3) = \frac{1}{2^3}$, $(a_{\rho^3}, u_{\rho^2}) = 0$, \end{tabular} \end{tabular} \caption{Norton-Sakuma algebras}\label{tab:sakuma} \end{table} \subsection{Subalgebras generated by two axes}\label{sec:dihedral} Since the defining property of axial algebras is that they are generated by a set of axes, it is natural to ask: What are the axial algebras that are generated by just two axes? We call such axial algebras \emph{$2$-generated} and, if the fusion law is $\mathbb{Z}_2$-graded, we also call them \emph{dihedral} because the Miyamoto group in this case is dihedral. In the Griess algebra, the dihedral subalgebras, called \emph{Norton-Sakuma algebras}, were investigated by Norton and shown to be one of nine different types \cite{C85}. In particular, for each pair of axes $a_0$, $a_1$ in the Griess algebra, the isomorphism class of the subalgebra which they generate is determined by the conjugacy class in the Monster of the product $\tau_{a_0} \tau_{a_1}$ of the two involutions $\tau_{a_0}$ and $\tau_{a_1}$ associated to the axes. The nine different types are: $1\textup{A}$ (when $a_0=a_1$), $2\textup{A}$, $2\textup{B}$, $3\textup{A}$, $3\textup{C}$, $4\textup{A}$, $4\textup{B}$, $5\textup{A}$ and $6\textup{A}$. The algebra $1\textup{A}$ is just one dimensional, but the remaining eight Norton-Sakuma algebras are given in Table \ref{tab:sakuma} whose content we will now explain. The notation is from \cite[Section 2]{seress}. Let $n\textrm{L}$ be one of the dihedral algebras. Since its generating axes $a_0$ and $a_1$ give involutions $\tau_{a_0}$ and $\tau_{a_1}$ in the Monster, we have the dihedral group $D_{2n} \cong \langle \tau_{a_0}, \tau_{a_1} \rangle$ acting as automorphisms of $n\textrm{L}$ (possibly with a kernel). In particular, let $\rho = \tau_{a_0}\tau_{a_1}$. We define \[ a_{\epsilon + 2k} = a_\epsilon^{\rho^k} \] for $\epsilon = 0,1$. It is clear that these $a_i$ are all axes as they are conjugates of $a_0$ or $a_1$. The orbits of $a_0$ and $a_1$ under the action of $\rho$ (in fact, under the action of $D_{2n}$) have the same size. If $n$ is even, then these two orbits have size $\frac{n}{2}$ and are disjoint, whereas if $n$ is odd, the orbits coincide and have size $n$. The map $\tau$ associates an involution to each axis $a$ and $\tau_a^g = \tau_{a^g}$ for all $g \in \Aut(n\textrm{L})$. 
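As a small concrete illustration of the fusion law and Miyamoto involution discussed above, the following numerical sketch builds the $3\textrm{C}$ algebra from the products in Table \ref{tab:sakuma}, namely $a_i^2 = a_i$ and $a_i \cdot a_j = \frac{1}{64}(a_i + a_j - a_k)$ for distinct $i,j$ with $k$ the remaining index, diagonalises $\ad_{a_0}$ and constructs $\tau_{a_0}$. It is an illustrative aside written in Python and plays no role in the algorithm of Section \ref{sec:algorithm}.
\begin{verbatim}
import numpy as np

# The 3C Norton-Sakuma algebra in the basis (a_0, a_1, a_{-1}).
e = np.eye(3)

def mult(u, v):
    out = np.zeros(3)
    for i in range(3):
        for j in range(3):
            if i == j:
                out += u[i] * v[j] * e[i]
            else:
                k = 3 - i - j
                out += u[i] * v[j] * (e[i] + e[j] - e[k]) / 64
    return out

a0 = e[0]
ad_a0 = np.column_stack([mult(a0, e[j]) for j in range(3)])

# The spectrum is {1, 0, 1/32}, consistent with the Monster fusion law.
eigvals, P = np.linalg.eig(ad_a0)
print(sorted(round(float(v), 6) for v in eigvals))  # approx. [0.0, 0.03125, 1.0]

# Miyamoto involution tau_{a_0}: negate the 1/32-eigenspace, fix the rest.
signs = np.where(np.isclose(eigvals, 1 / 32), -1.0, 1.0)
tau = P @ np.diag(signs) @ np.linalg.inv(P)
print(np.round(tau, 6))  # numerically the permutation swapping a_1 and a_{-1}
\end{verbatim}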
In almost all cases, the axes $a_i$ are not enough to span the algebra. We index the additional basis elements by powers of $\rho$. Using the action of $D_{2n}$, it is enough to just give the products in Table \ref{tab:sakuma} to fully describe each algebra. The axes in each algebra are primitive and each algebra admits a Frobenius form that is non-zero on the set of axes and invariant under the Miyamoto group; the values for this are also listed in the table. Amazingly the classification of dihedral algebras also holds, and is known as Sakuma's theorem \cite{sakuma}, if we replace the Griess algebra by the weight two subspace $V_2$ of a vertex operator algebra (VOA) $V = \bigoplus_{n = 0}^\infty V_n$ over $\mathbb{R}$ where $V_0 = \mathbb{R} 1$ and $V_1 = 0$ (those of OZ-type). After Majorana algebras were defined generalising such VOAs, the result was reproved for Majorana algebras by Ivanov, Pasechnik, Seress and Shpectorov in \cite{IPSS}. In the paper introducing axial algebras, the result was also shown to hold in axial algebras of Monster type over a field of characteristic $0$ which have a Frobenius form \cite{Axial1}. It is conjectured that the Frobenius form is not required. \begin{conjecture}\label{conj:dihedral} A dihedral axial algebra of Monster type over a field of characteristic $0$ is one of the nine Norton-Sakuma algebras.\footnote{A proof of this conjecture was recently announced by Franchi, Mainardis and Shpectorov at the Axial Algebra Focused Workshop in Bristol in May 2018.} \end{conjecture} For Majorana algebras, the following axiom is also often assumed. \begin{enumerate} \item[$\mathrm{M}8$] Let $a_i \in X$ be axes for $0 \leq i \leq 2$. If $a_0$ and $a_1$ generate a dihedral subalgebra of type $2\mathrm{A}$, then $a_\rho \in X$ and $\tau_{a_\rho} = \tau_{a_0}\tau_{a_1}$. Conversely, if $\tau_{a_0}\tau_{a_1}\tau_{a_2}=1$, then $a_0$ and $a_1$ generate a dihedral subalgebra of type $2\mathrm{A}$ and $a_2 = a_\rho$. \end{enumerate} This severely restricts the possible configuration of subalgebras. We will explain this later in Section \ref{sec:shape} once we have introduced shapes. Seress \cite{seress} also assumed that the map $\tau$ was a bijection between the set of axes $X$ and a union of conjugacy classes of involutions in $G$. Moreover the following conditions which restrict the intersections of subalgebras were also assumed. Let $a_i, b_i \in X$ and $\rho(a_0, a_1) = \tau_{a_0} \tau_{a_1}$. \begin{enumerate} \item[$2\mathrm{Aa}$] If $\tau_{a_0} \tau_{a_1} \tau_{a_2} = 1$ and $\langle a_0, a_1 \rangle \cong 2\A$, then $a_2 \in \langle a_0, a_1 \rangle$ and $a_2 = a_{\rho}$. \item[$2\mathrm{Ab}$] If $\langle a_0, a_1 \rangle$ and $\langle b_0, b_1 \rangle$ are both of type $2\A$ and $\langle \rho(a_0, a_1) \rangle = \langle \rho(b_0, b_1) \rangle$, then the extra basis elements $a_\rho(a_0, a_1)$ and $a_\rho(b_0, b_1)$ are equal. \item[$3\A$] If $\langle a_0, a_1 \rangle$ and $\langle b_0, b_1 \rangle$ are both of type $3\A$ and $\langle \rho(a_0, a_1) \rangle = \langle \rho(b_0, b_1) \rangle$, then the extra basis elements $u_\rho(a_0, a_1)$ and $u_\rho(b_0, b_1)$ are equal. \item[$4\A$] If $\langle a_0, a_1 \rangle$ and $\langle b_0, b_1 \rangle$ are both of type $4\A$ and $\langle \rho(a_0, a_1) \rangle = \langle \rho(b_0, b_1) \rangle$, then the extra basis elements $v_\rho(a_0, a_1)$ and $v_\rho(b_0, b_1)$ are equal. 
\item[$5\A$] If $\langle a_0, a_1 \rangle$ and $\langle b_0, b_1 \rangle$ are both of type $5\A$ and $\langle \rho(a_0, a_1) \rangle = \langle \rho(b_0, b_1) \rangle$, then the extra basis elements $w_\rho(a_0, a_1)$ and $w_\rho(b_0, b_1)$ are equal up to a change of sign. \end{enumerate} We can also consider a wider class of axial algebras. Axial algebras of Jordan type $\eta$ were considered in \cite{Axial2}. Here there are just three eigenvalues, $1$, $0$ and $\eta$. When $\eta \neq \frac{1}{2}$, all algebras were classified and they relate to $3$-transposition groups. The \emph{Ising fusion law} $\Phi(\alpha, \beta)$ is given in Table \ref{tab:Ising}. \begin{table}[!htb] \setlength{\tabcolsep}{4pt} \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{c||c|c|c|c} & $1$ & $0$ & $\alpha$ & $\beta$ \\ \hline \hline $1$ & $1$ & & $\alpha$ & $\beta$ \\ \hline $0$ & & $0$ &$\alpha$ & $\beta$ \\ \hline $\alpha$ & $\alpha$ & $\alpha$ & $1, 0$ & $\beta$ \\ \hline $\beta$ & $\beta$ & $\beta$ & $\beta$ & $1, 0, \alpha$ \end{tabular} \caption{Ising fusion law $\Phi(\alpha, \beta)$}\label{tab:Ising} \end{table} In particular, note that the Monster fusion law is just $\Phi(\frac{1}{4}, \frac{1}{32})$. In \cite{felix}, Rehren studies dihedral axial algebras over $\Phi(\alpha, \beta)$ with a Frobenius form and shows that the nine algebras above can be generalised and live in families which exist for values of $\alpha$ and $\beta$ lying in certain varieties. It turns out that $(\alpha, \beta) = (\frac{1}{4}, \frac{1}{32})$ is a distinguished point. \section{Shapes}\label{sec:shape} The shape of an axial algebra $A$ specifies which $2$-generated subalgebras arise in $A$. Clearly, a precondition for such a description is the knowledge of the possible $2$-generated algebras; that is, for the class of axial algebras under consideration we either should have classified all $2$-generated algebras or, minimally, we should have an explicit list of such algebras that we want to allow in $A$. Note that the 2-generated algebras should be classified not up to an abstract algebra isomorphism, but rather up to the (unique possible) isomorphism sending the two generating axes of one algebra to the two generating axes of the other algebra. That is, we consider the $2$-generated algebras as having marked generators and isomorphisms must respect them: if $B$ has marked generators $a$ and $b$ and $B'$ has marked generators $a'$ and $b'$ then $(B,(a,b))$ is isomorphic to $(B',(a',b'))$ only if there is an isomorphism $\phi \colon B\to B'$ such that $\phi(a)=a'$ and $\phi(b)=b'$. In principle, an algebra may have non-equivalent pairs of generators and then this algebra must accordingly appear on the list several times. Note that for algebras of Monster type, Sakuma's theorem classifies dihedral algebras exactly in this sense: in each of the eight Norton-Sakuma algebras the marked generators are $a_0$ and $a_1$ and any other pair of generators is equivalent to $(a_0,a_1)$. Therefore, in order to motivate the general case, we consider first the case of an axial algebra of Monster type. Let $A$ be an axial algebra of Monster type and suppose that $X$ is a set of axes which generates $A$. Note that by enlarging our set $X$, we may assume that $X$ is closed under the action of the Miyamoto group $G$ of $A$. \begin{lemma}\label{Gfaithful} The action of $G$ on $X$ is faithful. \end{lemma} \begin{proof} Suppose that $g \in G$ fixes all the axes in $X$.
Then the subspace of $A$ fixed by $g$ is a subalgebra and, since it contains $X$, it is the whole algebra $A$. Hence $g$ fixes $A$ pointwise, so $g=1$ and the action is faithful. \end{proof} As $G$ is a group of automorphisms of $A$, if $a, b \in X$ generate a dihedral subalgebra $B$, then, for any $g \in G$, the subalgebra generated by $a^g, b^g$ is isomorphic to $B$. In this way, we obtain the \emph{shape} of the algebra which is a map $S$ from the set of $G$-orbits on $X \times X$ to the set of dihedral algebras. Given a pair of axes $(a, b)$, let $D_{a,b}$ be the dihedral group generated by $\tau_a$ and $\tau_b$. Define $X_{a,b} = a^D \cup b^D$, where $D := D_{a,b}$. It is clear that $D_{a,b} = D_{b,a}$ and $X_{a,b} = X_{b,a}$. A Sakuma algebra has type $n\textrm{L}$. We wish to show that $n$ can be determined solely from the action of the dihedral group $D_{a,b}$. \begin{lemma}\label{dihedralorbs} Let $a,b \in X$ and $D := D_{a,b}$. Then, $|a^D| = |b^D|$. If $a$ and $b$ are in the same orbit, then the length of this orbit is $1$, $3$, or $5$. Otherwise, if $a$ and $b$ are in different orbits, then the length of each orbit is $1$, $2$, or $3$. Moreover, the Sakuma algebra generated by $a$ and $b$ has type $n\textrm{L}$, where $n = |X_{a,b}|$. \end{lemma} \begin{proof} A direct proof would be long and computational. So instead we observe that each Norton-Sakuma algebra is contained in the Griess algebra and there we have a bijection between axes and $2\A$-involutions in the Monster $M$. So, we may take the dihedral subgroup $H \leq M$ generated by the involutions associated to each axis (in the Griess algebra). In particular, up to the kernel, the action of $H$ on $X$ is the same as the action of $D$ on $X$. Since in the Griess algebra we have a bijection between axes and $2\A$-involutions and $\tau_x^g = \tau_{x^g}$ for $g \in H$, we may consider the orbits of involutions in $H$ rather than the orbits of axes. The result now follows from properties of dihedral groups and Sakuma's theorem. \end{proof} Thus, when we know the action of $G$ on $X$, $n$ is known for each orbit and the shape is determined by choices of $\textrm{L}$. Furthermore, these choices are not independent. If $a,b,c,d \in X$ then we say $(a,b)$ \emph{dominates} $(c,d)$ if $c,d \in X_{a,b}$. In particular, when this happens, $X_{c,d} \subseteq X_{a,b}$ and $D_{c,d} \leq D_{a,b}$. Note also that the subalgebra $\langle c,d \rangle$ is contained in $\langle a, b\rangle$. Hence, if $(a,b)$ dominates $(c,d)$, then the choice of dihedral subalgebra $\langle a,b \rangle$ determines the choice for $\langle c,d \rangle$. For the Monster fusion law, we have the following non-trivial inclusions \[ \begin{array}{c|c} \langle a,b \rangle & \langle c,d \rangle \\ \hline 4\textrm{A} & 2\textrm{B} \\ 4\textrm{B} & 2\textrm{A} \\ 6\textrm{A} & 2\textrm{A} \\ 6\textrm{A} & 3\textrm{A} \end{array} \] Note that here, not only does the choice of $\langle a,b \rangle$ determine the choice for $\langle c,d \rangle$, but also the choice for $\langle c,d\rangle$ uniquely determines the choice for $\langle a,b\rangle$. Additionally, note that the pair $(a,b)$ always dominates $(b,a)$ and vice versa, so in the next concept which describes the totality of choices, we may just work with the set $\{a,b\}$ instead of the pairs $(a,b)$ and $(b,a)$. Notice also that since $X_{a,b} = X_{b,a}$, the concept of domination is not affected by the switch to sets. Let ${X\choose 2}$ denote the set of $2$-subsets of $X$.
The orbits of $G$ on ${X\choose 2}$ are the vertices of a directed graph, called the \emph{shape graph}, with the edges given by domination. By the above comment, there is at most one choice of dihedral subalgebra for each weakly connected component (i.e. a connected component of the underlying undirected graph). So, the shape of an algebra is fully described by assigning one dihedral subalgebra per weakly connected component. Sometimes there is no choice for a given component. Namely, when that component contains a $6\A$ or a $5\A$. Additionally, if the M8 axiom is assumed, then this further restricts the allowable shapes. Suppose that $a$ and $b$ are such that $X_{a,b} = \{a,b\}$ and $\tau_a$ and $\tau_b$ are the involutions associated to $a$ and $b$. Then $\tau_a\tau_b$ has order two. If $\tau_a\tau_b$ is in the image of the $\tau$-map, then M8 demands that the dihedral subalgebra $B = \langle a,b \rangle$ generated by $a$ and $b$ be a $2\A$. Conversely, if $\tau_a\tau_b$ is not in the image of $\tau$, then the dihedral subalgebra $B$ must be a $2\B$. In both cases, this defines the shape on the connected component containing the orbit of $\{a,b\}$. However, the only connected components which do not contain any dihedral subalgebras with $n=2$ are those which just contain a single dihedral subalgebra with $n=3$. So, if the M8 condition is assumed, the only remaining choice in a shape is whether those connected components which consist of a single dihedral subalgebra of type $3\textrm{L}$ are of type $3\A$ or $3\C$. We now turn to the general case of a fusion law $\mathcal{F}$ which is $T$-graded and an abstract group of permutations $G$ acting faithfully on a set $X$. We are thinking of an unknown axial algebra $A$ with fusion law $\mathcal{F}$ and the action of the Miyamoto group on the axes being the action of (a normal subgroup of) $G$ on $X$. It is clear that we may just consider actions up to isomorphism. We will define analogous concepts to above. \begin{definition} A map $\tau\colon X \times T^* \to G$ is called a \emph{$\tau$-map} if for all $x \in X$, $\chi \in T^*$, $g \in G$ \begin{enumerate} \item $\tau_x \colon T^* \to G$ is a group homomorphism; \item $\tau_x(\chi)^g = \tau_{x^g}(\chi)$. \end{enumerate} We call the image $G_0:=\langle \tau_x(\chi) : x \in X, \chi \in T^* \rangle\unlhd G$ the \emph{Miyamoto group} of $\tau$. \end{definition} As previously, we define $T_x := \langle \tau_x(\chi) : \chi \in T^* \rangle\leq G_0$. \begin{lemma}\label{taufix} $T_x \subseteq Z(G_x)$. \end{lemma} \begin{proof} Let $g \in G_x$. Then for $\chi \in T^*$, \[ [\tau_x(\chi), g] = \tau_x(\chi)^{-1} \tau_x(\chi)^g = \tau_x(\chi)^{-1} \tau_{x^g}(\chi) =\tau_x(\chi)^{-1} \tau_{x}(\chi) = 1\qedhere \] \end{proof} We define $D = D_{a,b} := \langle T_a, T_b \rangle$ for $a,b \in X$. Unlike the Monster type case, $D$ does not have to be a dihedral group. In an $\mathcal{F}$-axial algebra, $D_{a,b}$ acts on the subalgebra $\langle a,b \rangle$. Suppose that we know a list $\mathcal{L}$ of $2$-generated subalgebras with marked generators for the fusion law $\mathcal{F}$. We wish to impose conditions on $\tau$ so that $D_{a,b}$ has an action on $X_{a,b}:=a^D \cup b^D$ which is an action observed on the axes of some $2$-generated algebra in our list. Otherwise, $\tau$ cannot lead to a valid $\mathcal{F}$-axial algebra.
\begin{definition} A $\tau$-map $\tau \colon X \times T^* \to G$ is called \emph{admissible} if for every set $\{a,b\} \in {X \choose 2}$, the action of $D_{a,b}$ on $X_{a,b}$ agrees with at least one algebra in the list $\mathcal{L}$. \end{definition} For example, let $\mathcal{F}$ be the Monster fusion law. Then a complete list of the dihedral subalgebras is known. In particular, the orbits of $a$ and $b$ under $D$ must have the properties given in Lemma \ref{dihedralorbs}. That is, \begin{enumerate} \item $k := |a^D| = |b^D|$. \item If $a$ and $b$ are in the same $D$-orbit, then $k = 1$, $3$, or $5$. \item If $a$ and $b$ are in different $D$-orbits, then $k = 1$, $2$, or $3$. \end{enumerate} From now on, we only consider admissible $\tau$-maps. The normaliser $N = N_{\mathrm{Sym}(X)}(G)$ of the action of $G$ on $X$ acts on the set of admissible $\tau$-maps by \[ \tau \mapsto \tau^n \qquad \mbox{where } (\tau^n)_x(\chi) := \tau_{x^{n^{-1}}}(\chi)^n \] for $n \in N$. Note that, by the definition of a $\tau$-map, $G$ acts trivially on each $\tau$. So an action of $N/G$ is induced on the set of $\tau$-maps. Thus, we may just consider admissible $\tau$-maps up to the action of $N/G$. Next we introduce domination. \begin{definition} For $\{a,b\},\{c,d\} \in {X \choose 2}$, we say $\{a,b\}$ \emph{dominates} $\{c,d\}$ if $c,d \in X_{a,b}$. \end{definition} \begin{definition} The \emph{shape graph} $\Gamma$ is a directed graph with vertices given by orbits of $G$ on ${X \choose 2}$ and edges given by domination between pairs from those orbits. \end{definition} As observed above, for the Monster fusion law, any one choice of $2$-generated subalgebra for a weakly connected component of the shape graph determines all other $2$-generated algebras in that component. For a general fusion law, the dominated algebra may not always determine the larger algebra uniquely. However, the larger, dominating algebra always determines the smaller algebra. We will call these the \emph{domination restrictions}. \begin{definition} Given an abstract group $G$ acting faithfully on a set $X$ and an admissible $\tau$-map, a \emph{shape} on $X$ is a set of choices of $2$-generated algebra for all orbits of $G$ on ${X \choose 2}$ which satisfy the domination restrictions. \end{definition} Given a group $G$ acting faithfully on a set $X$ and an admissible $\tau$-map $\tau$, we may consider all the possible shapes. Let $K = \stab_N(\tau)$. As noted above, $G$ acts trivially on each $\tau$, and in fact it also fixes every shape. On the other hand, $K$ (or rather $K/G$) permutes the $G$-orbits of ${X \choose 2}$, and so may act non-trivially on the set of shapes. So, we may consider shapes for $\tau$ up to the action of $K$. In summary, given an action of a group $G$ on a putative set of axes $X$, we can determine all the possible admissible $\tau$-maps. Given a particular $\tau$-map $\tau$, we can further determine all the possible shapes that an axial algebra with Miyamoto group $G_0$ and $\tau$-map $\tau$ could have. \section{Useful lemmas}\label{sec:preliminaries} In this section, we will discuss some properties which must hold in axial algebras. We will use these later in the algorithm to discover relations and to build up eigenspaces. Recall that we adopt the notation that for a subset $I \subseteq \mathcal{F}$, \[ A_I = \bigoplus_{\lambda \in I} A_\lambda \] We begin by noting that, since we allow $I$ to be a subset, we can add and intersect the $A_I$. 
\begin{lemma}\label{sumup&int} Let $I,J \subseteq \mathcal{F}$, then \begin{enumerate} \item[$1.$] $A_I + A_J = A_{I \cup J}$ \item[$2.$] $A_I \cap A_J = A_{I \cap J}$\qed \end{enumerate} \end{lemma} By an abuse of terminology, we will call the $A_I$ eigenspaces of $a$. \begin{lemma}\label{multiplydown} Let $a$ be an axis, $I \subseteq \mathcal{F}$, $\lambda \in I$ and $A_I = A_I^a$. Then, for all $u \in A_I$ \[ ua - \lambda u \in A_{I - \lambda} \] \end{lemma} \begin{proof} We may decompose $u \in A_I$ as $u = \sum_{\mu \in I} u_\mu$, where $u_\mu \in A_\mu$. Multiplying by $a$ and subtracting $\lambda u$, we have \begin{align*} ua - \lambda u &= \sum_{\mu \in I} u_\mu a - \lambda u \\ &= \sum_{\mu \in I} (\mu - \lambda) u_\mu \end{align*} Since the coefficient of $u_\lambda$ is zero, the above is in $A_{I - \lambda}$. \end{proof} Recall that we extended the operation $\star$ to all subsets of $\mathcal{F}$, turning the fusion law into a magma. Moreover, the eigenspaces $A_I$ satisfy the fusion law. However, not all fusion rules on subsets are equally useful for our algorithm. In particular, assuming that $\mathcal{F}$ is $T$-graded, we only need to consider $I$ fully contained in a part $\mathcal{F}_t$ for some $t\in T$. We call such subsets \emph{pure}. \begin{definition} Let $I \subseteq \mathcal{F}_s$ and $J \subseteq \mathcal{F}_t$ for $s, t \in T$. We define a fusion rule $I\star J = K$ to be \emph{useful} if \begin{enumerate} \item $K \subsetneqq \mathcal{F}_{s \star t}$; and \item there does not exist $I \subsetneqq I'\subseteq \mathcal{F}_s$, or $J \subsetneqq J' \subseteq \mathcal{F}_t$ such that \[ I' \star J = K \qquad \mbox{or} \qquad I \star J' = K \] \end{enumerate} \end{definition} In particular, given a useful fusion rule $I \star J = K$, if we require it to hold, all other rules $X \star Y = K$ for subsets $X \subseteq I$ and $Y \subseteq J$ will automatically be satisfied. In this way, it is enough to impose just the useful fusion rules and the grading to capture all the information from the fusion law. To calculate the useful fusion rules for any fusion law $\mathcal{F}$ we begin by writing out the expanded fusion table for all pure subsets of $\mathcal{F}$ with rows and columns partially ordered by inclusion. We then consider all sets $K$ which occur as entries in the table. The useful rules are precisely those where $K$ is not a full part $\mathcal{F}_t$, for $t\in T$, and it does not appear in the expanded table below in that column, or to the right in that row. Doing this to the Monster fusion law results in the following list. \begin{lemma}\label{Monsteruseful} The useful fusion rules for the Monster fusion table are \[ \begin{gathered} 1 \star 0 = \emptyset \qquad 1 \star \{1,0\} = 1 \qquad 1 \star \{0,\tfrac{1}{4}\} = \tfrac{1}{4} \qquad 1 \star \{1,0,\tfrac{1}{4}\} = \{1,\tfrac{1}{4}\} \\ 0 \star \{1,0\} = 0 \qquad 0 \star \{1,\tfrac{1}{4}\} = \tfrac{1}{4} \qquad 0 \star \{1,0,\tfrac{1}{4}\} = \{0,\tfrac{1}{4}\} \\ \tfrac{1}{4} \star \tfrac{1}{4} = \{1,0\} \qquad \tfrac{1}{4} \star \{1,0\} = \tfrac{1}{4} \\ \{1,0\} \star \{1,0\} = \{1,0\} \qquad \{1,0\} \star \{1,\tfrac{1}{4}\} = \{1,\tfrac{1}{4}\} \qquad \{1,0\} \star \{0,\tfrac{1}{4}\} = \{0,\tfrac{1}{4}\} \end{gathered} \] \end{lemma} Note that all useful fusion rules for the Monster fusion law come from the even part. That is because the values of $\star$ involving the odd part $\{\tfrac{1}{32}\}$ are fully determined by the grading. 
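This calculation can be checked mechanically. The following short Python script (an illustrative sketch, not part of the algorithm or of our implementation) applies the definition of a useful rule directly to the Monster fusion law and, up to swapping $I$ and $J$, reproduces exactly the twelve rules of Lemma~\ref{Monsteruseful}:
\begin{verbatim}
from fractions import Fraction as F
from itertools import combinations

# Monster fusion law on {1, 0, 1/4, 1/32}; even part {1, 0, 1/4}, odd part {1/32}.
Q, T = F(1, 4), F(1, 32)
EVEN, ODD = frozenset({1, 0, Q}), frozenset({T})
LAW = {(1, 1): {1}, (1, 0): set(), (1, Q): {Q}, (1, T): {T},
       (0, 0): {0}, (0, Q): {Q}, (0, T): {T},
       (Q, Q): {1, 0}, (Q, T): {T}, (T, T): {1, 0, Q}}

def star(I, J):   # extension of the fusion law to subsets
    out = set()
    for a in I:
        for b in J:
            out |= LAW.get((a, b), LAW.get((b, a)))
    return frozenset(out)

def subsets(part):   # non-empty subsets of a part
    return [frozenset(c) for r in range(1, len(part) + 1)
            for c in combinations(part, r)]

useful = []
for PI in (EVEN, ODD):
    for PJ in (EVEN, ODD):
        full = EVEN if PI == PJ else ODD   # the part F_{s*t} given by the grading
        for I in subsets(PI):
            for J in subsets(PJ):
                K = star(I, J)
                if K == full:
                    continue   # K must be a proper subset of the target part
                if all(star(I2, J) != K for I2 in subsets(PI) if I < I2) and \
                   all(star(I, J2) != K for J2 in subsets(PJ) if J < J2):
                    useful.append((set(I), set(J), set(K)))   # maximal, hence useful
# Up to swapping I and J, `useful` lists exactly the twelve rules of the lemma.
\end{verbatim}
In particular, only pairs of pure subsets of the even part survive: any product involving the odd part equals the full target part and is discarded, in agreement with the remark above.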
If $A$ is primitive, then for an axis $a$, $G_a$ certainly fixes every vector in $A_1^a$. We now describe another trick which uses this weaker condition. \begin{lemma}\label{wh-w} Let $1 \in I \subset \mathcal{F}$ and $u \in A_I(a)$ for an axis $a$. Suppose further that $G_a$ fixes every vector in $A_1^a$. Then, for all $g$ in the stabiliser $G_a$, \[ u^g - u \in A_{I - 1} \] \end{lemma} \begin{proof} We decompose $u = \sum_{\mu \in I} u_\mu$ with respect to the eigenspaces of $a$. Since $g$ fixes $a$, it preserves every eigenspace of $a$. Furthermore, since $g$ fixes every vector in $A_1^a$, we have the following \begin{align*} u^g - u &= \sum_{\mu \in I} u_\mu^g - \sum_{\mu \in I} u_\mu\\ &= u_1^g - u_1 + \sum_{\mu \in I-1} u_\mu^g - u_\mu \\ &= \sum_{\mu \in I-1} u_\mu^g - u_\mu \in A_{I - 1}\qedhere \end{align*} \end{proof} \section{Algorithm}\label{sec:algorithm} In this section, we describe our main result which is an algorithm for constructing an axial algebra. A very similar algorithm can also be used to build a module for a known axial algebra. However, we don't want to complicate this paper with extra definitions and so we just deal with the task of constructing an axial algebra. In principle, there is no reason to believe that an axial algebra which is generated by a finite set of axes is even finite dimensional. Clearly, if it is infinite dimensional, our algorithm will not finish. However, in practice, we can compute a large number of examples as we shall see in Section \ref{sec:results}. As described in Section \ref{sec:shape}, associated with a $T$-graded $\mathcal{F}$-axial algebra $A$ we have a group $G$ acting faithfully on a set $X$, an admissible $\tau$-map $\tau \colon X \times T^*\to G_0 \unlhd G$ and a shape. Given such a $G$, $X$, $\tau$ and shape, the algorithm builds an axial algebra $A$ with axes $X$ and Miyamoto group $G_0$. It does so by defining a partial algebra and completing it step by step into a full algebra. As input to our algorithm, we take a field $\mathbb{F}$, a $T$-graded fusion law $\mathcal{F}$, a group $G$ acting faithfully on a set $X$, an admissible $\tau$-map $\tau$ and a shape. These are fixed throughout the rest of this section. \subsection{Partial algebras} At the core of the algorithm is a concept which we call a partial algebra. We write $S^2(V)$ for the symmetric square of $V$. \begin{definition} Given a group $G$, a \emph{partial $G$-algebra} is a triple $W = (W, V, \mu)$ where $W$ is a $G$-module over $\mathbb{F}$, $V \subseteq W$ is a $G$-submodule and $\mu \colon S^2(V) \to W$ is a linear map which is $G$-equivariant. \end{definition} Where it is clear, we will abuse notation and write $uv$ for $\mu(u,v)$. \begin{lemma} Given a $G$-invariant set $Y$ in $W$, there exists a unique smallest submodule $W(Y)$ of $W$ such that \[ W(Y) = \langle Y \rangle + \mu(S^2(W(Y) \cap V)) \] \end{lemma} \begin{proof} Define $U_0 := \langle Y \rangle$ and inductively define \[ U_{i+1} := U_i + \mu(S^2(U_i \cap V)). \] Then the union of the $U_i$ is $W(Y)$. \end{proof} We call $W(Y) = (W(Y), W(Y) \cap V, \mu|_{S^2(W(Y) \cap V)})$ the \emph{partial subalgebra generated by $Y$}. If $W(Y) = W$, then we say $Y$ \emph{generates} $W$. For example, an axial algebra $A$ is a partial $G$-algebra, where $G$ is the Miyamoto group, and the set of axes $X$ generates $A$. \begin{definition} Let $(W, V, \mu)$ be a partial $G$-algebra and $(W', V', \mu')$ be a partial $G'$-algebra. 
A homomorphism of partial algebras is a pair $(\phi, \psi)$ where \begin{enumerate} \item $\phi \colon W \to W'$ is a vector space homomorphism such that $\phi(V) \subseteq V'$. \item $\psi \colon G \to G'$ is a group homomorphism such that \[ \phi(w^g) = \phi(w)^{\psi(g)} \] for all $w \in W$, $g \in G$. \item $\phi(\mu(u,v)) = \mu'(\phi(u), \phi(v))$ for all $u,v \in V$. \end{enumerate} \end{definition} In other words, we have the following commutative diagram and additionally the action of $G$ (sometimes acting through $\psi$) commutes with the diagram. \begin{center} \begin{tikzcd}[step=large] S^2(V) \arrow[r, "\mu"] \arrow[d, "\phi \otimes \phi"] & W \arrow[d, "\phi"] & V \arrow[l, hookrightarrow]\arrow[d, "\phi"] \\ S^2(V') \arrow[r, "\mu'"] & W' & V' \arrow[l, hookrightarrow] \end{tikzcd} \end{center} \subsection{Gluings} In order to correctly build an axial algebra, we must impose the conditions coming from the shape. We do this by gluing in subalgebras corresponding to the shape. First, consider an axial algebra $A$ and let $B$ be a subalgebra in the shape. Then there is a $K$-submodule $U$ of $A$ such that $\phi \colon U \to B$ is an algebra isomorphism which is invariant under the action of the induced Miyamoto group \[ K := \langle T_y : y \in Y \rangle \] where $Y = X \cap U$ is the subset of axes in $X$ which are in $U$. However, since $Y$ is a subset of $X$, $K$ does not necessarily act faithfully on $Y$. Let $N$ be the kernel of the action and $H := K/N$. Then, the Miyamoto group of $B$ is isomorphic to $H$ and so there exists a group homomorphism $\psi \colon K \to H$ with the property that \[ \phi(u^g) = \phi(u)^{\psi(g)} \] for all $g \in K$, $u \in U$. With this in mind, we make the following definition. \begin{definition} Let $(W, V, \mu)$ be a $G$-partial algebra generated by $X$ and $(W', V', \mu')$ be a partial $H$-algebra generated by a set $X'$. A \emph{gluing} of $W'$ onto a closed set of axes $Y \subseteq X$ is a homomorphism of partial algebras $(\phi, \psi)$ from the restricted $K$-partial subalgebra $(W(Y), W(Y) \cap V, \mu|_{S^2(W(Y) \cap V)})$ to $(W', V', \mu')$ such that \begin{enumerate} \item $K := \langle T_y : y \in Y \rangle \leq G$. \item $\phi \colon W(Y) \to W'$ is surjective and $\phi(Y) = X'$. \item $\psi \colon K \to H$ is surjective. \end{enumerate} \end{definition} \subsection{The algorithm} Our task is to build an algebra of the correct shape. We will do this by defining a sequence of partial algebras and at each stage `discovering' more of the multiplication. Throughout our algorithm $W = (W, V, \mu)$ will be a partial $G$-algebra generated by the set $X$, our putative set of axes on which $G$ acts faithfully. Our algorithm will terminate when $V = W$. That is, when we know all the multiplication. We begin with $W$ having basis indexed by the set $X$. That is, $W$ is a permutation module for the action of $G$ on $X$. No products are known at this stage, so $V = 0$. Throughout our algorithm, we keep track of various (sums of) eigenspaces for each axis. These are key to finding enough relations to allow our algorithm to terminate. Recall that the sum of eigenspaces is denoted by $W_I = \bigoplus_{\lambda \in I} W_\lambda$, for a subset $I \subseteq \mathcal{F}$. Note that at any given stage in our algorithm, we may not know the full $\lambda$-eigenspace and so we do not necessarily know the decomposition $W = \bigoplus_{\lambda \in \mathcal{F}} W_\lambda$. 
Indeed, we may know that a vector lies in $W_I$, for some $I \subset \mathcal{F}$, but not know how to decompose it into the sum of eigenvectors for eigenvalues $\lambda \in I$. For this reason, we keep track of sums of eigenspaces $W_I$. Note that relations are vectors in $W_\emptyset$. Since $G$ acts on $W$, we may just consider axes and their associated decompositions up to the action of $G$. It turns out that it is enough to keep track of just the $W_I$, for pure subsets $I$. That is, the $W_I$ for $I \subseteq \mathcal{F}_t$, for $t \in T$. We show that this holds, provided we make a mild assumption on the grading group $T$. Indeed, by assumption, for each axis $a \in X$, there is a decomposition $W = \bigoplus_{t \in T} W_t$. We claim that we can recover the decomposition $W = \bigoplus_{t \in T/R} W_t$, where $R := \bigcap_{\chi \in T^*} \ker(\chi)$, from the action on $T_a$ on $W$. Indeed, recall from the definition that $\tau_a(\chi) \in T_a$ acts on $W_t$ by scalar multiplication by $\chi(t)$. Since this must hold in any axial algebra we build, we can distinguish the $T$-grading up to the kernel $R = \bigcap_{\chi \in T^*} \ker(\chi)$. If $T^* \cong T$, then $R =1$. However if $R$ is non-trivial, for example when the characteristic divides $|T|$, or when the field doesn't contain the suitable roots of unity, we can only detect a coarser grading by $T/R \cong T^*$. Since we may always consider a more coarse grading, from now on, we may assume that $T^* \cong T$ and hence $T_a$ detects the $T$-grading. Note that for a $\mathbb{Z}_2$-grading, provided the field is not of characteristic two, $-1$ is always in the field and hence we can detect a $\mathbb{Z}_2$-grading using the axis subgroup. Let $J \subset \mathcal{F}$. Since we know the decomposition $W = \bigoplus_{t \in T} W_t$, this induces a decomposition $W_J = \bigoplus_{t \in T} W_{J_t}$, where $J_t := J \cap \mathcal{F}_t$. Now, the only results we will use in our algorithm are those found in Section \ref{sec:preliminaries}, namely, summation and intersection of subspaces, being an eigenvector, obeying the fusion law and, optionally, Lemma \ref{wh-w}. It is easy to see that for all of these, the information gained about $W_J$ is precisely the sum of the information gained about the $W_{J_t}$. For example, if $\lambda \in J$, then by Lemma \ref{multiplydown}, $ua-\lambda u \in A_{J-\lambda}$. But, since we may decompose $u = \sum_{t \in T} u_t$, we have \[ u_t a - \lambda u_t \in A_{J_t-\lambda} = \begin{cases} A_{J_t} & \mbox{if } \lambda \notin J_t \\ A_{J_t-\lambda}& \mbox{if } \lambda \in J_t \end{cases} \] In particular, we recover the only non-trivial result by just considering the pure subset $J_t \subseteq \mathcal{F}_t$. This justifies our claim that it is enough to keep track of the $W_I$, for pure subsets $I$. The information for the multiplication, and so also for the eigenspaces, will come from gluing in subalgebras to our partial algebra according to the shape. In order to fully describe our axial algebra, we must glue in enough subalgebras to cover all those $2$-generated subalgebras given in the shape. However, we may glue in known subalgebras of the correct shape which are generated by three or more axes. These have the advantage of containing more information. (We may also glue in some partial subalgebras, so long as we also glue in enough known subalgebras to cover those given in the shape.) 
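Schematically, the data that the algorithm maintains at any point in time can be pictured as follows (a simplified Python sketch for illustration only; the actual implementation, discussed in Section~\ref{sec:results}, is written in {\sc magma} and works with genuine $G$-modules over $\mathbb{F}$):
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class PartialAlgebraState:
    # spanning vectors of W; initially the putative axes X
    basis: list
    # basis of the submodule V <= W whose pairwise products are known
    known: list
    # the map mu, recorded on pairs of elements of V
    products: dict = field(default_factory=dict)
    # per axis a and pure subset I, a basis of the (partial) eigenspace W_I
    eigenspaces: dict = field(default_factory=dict)
    # basis of W_empty, i.e. the relations factored out at the next reduction
    relations: list = field(default_factory=list)
    # one entry (Y, B, phi, psi) per subalgebra glued in according to the shape
    gluings: list = field(default_factory=list)
\end{verbatim}
The three stages described below grow \texttt{known} and \texttt{products} (expansion), then the \texttt{eigenspaces} and \texttt{relations} (building up eigenspaces), and finally shrink everything by the relations (reduction), until \texttt{known} spans all of \texttt{basis}.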
Since no multiplication is known when we start and $W$ is spanned by the axes, for each gluing of a subalgebra $B$ onto a closed subset of axes $Y$, we have $W(Y) = \langle Y \rangle$ and $\phi$ is the corresponding injection on these axes compatible with the action. The algorithm has three main stages: \begin{enumerate} \item Expansion by adding the formal products of vectors we do not already know how to multiply. \item Work to discover relations and construct the eigenspaces for the axes. \item Reduction by factoring out by known relations. \end{enumerate} We continue applying these three stages until $V = W$ and our algorithm terminates. Again, we note that since we use the action of the group, we need only consider subalgebras and axes up to the action of $G$. If our algorithm does terminate, then we have the following result, which we will prove after describing our algorithm. \begin{theorem}\label{algorithmthm} Suppose that the algorithm terminates and returns $A$. Then $A$ is a \textup{(}not necessarily primitive\textup{)} axial algebra generated by axes $X$ with Miyamoto group $G_0$, $\tau$-map $\tau$ and of the given shape. Moreover, provided we do not use the optional Lemma $\ref{wh-w}$ in stage $2$ of the algorithm, the algebra is universal. That is, given any other axial algebra $B$ with the same axes $X$, Miyamoto group $G_0$, $\tau$-map $\tau$ and shape, $B$ is a quotient of $A$. \end{theorem} Note that, if we do use Lemma \ref{wh-w} in stage 2 of the above algorithm, then we have assumed that $G_a$ fixes every vector in $A_1^a$ for each axis $a$. This holds in primitive axial algebras, but not necessarily in the non-primitive case. \subsubsection*{Stage 1: Expansion} We expand $W$ to a larger partial algebra $W_\new$ by adding vectors which are the formal products of elements we do not yet know how to multiply. \begin{description}[leftmargin =0.5cm] \item[Step 1.] We begin by finding a complement subspace $C$ for $V$ in $W$. Hence, as a vector space \[ W = V \oplus C \] \end{description} Wherever possible, we choose $C$ to be a $G$-submodule. For example, in characteristic $0$, this is always possible. Since we know the multiplication on $V$ and our multiplication is commutative, we just need to add the products of $V$ with $C$ and products of $C$ with $C$. \begin{description}[leftmargin =0.5cm] \item[Step 2.] Form a new partial algebra $W_\new = (W_\new, V_\new, \mu_\new)$ with \begin{align*} W_\new &= W \oplus V\otimes C \oplus S^2(C) \\ V_\new &= W \end{align*} and $\mu$ extended in the obvious way to $\mu_\new$. \end{description} Note that if $C$ is a $G$-submodule, then the summands in $W_\new$ are all $G$-submodules and hence $W_\new$ can be seen to be a $G$-module in a natural way. Otherwise, we must compute the action of $G$ on $W_\new$. \begin{description}[leftmargin =0.5cm] \item[Step 3.] For each subalgebra $B$ glued onto a set of axes $Y$, we extend the gluing as follows. Since $U := W(Y) \subset W$ and $V_\new = W$, we now know all the products of elements in $U$, so we adjust the gluing. Specifically, let $U_V = U \cap V$ and find a complement $D$ so that \[ U = U_V \oplus D \] Then \[ U_\new := U \oplus \mu(U_V, D) \otimes \mu(D, D) \] is the subalgebra of $W_\new$ generated by $Y$. Note that $\mu(U_V, D) \cong U_V \otimes D$ and $\mu(D,D) \cong S^2(D)$. We extend the map $\phi$ to $\phi_\new$ in the obvious way, by mapping the new products in $U_\new$ to the corresponding products in $B$. Hence, $\phi_\new$ preserves multiplication. 
Observe that $U_\new$ is also a $K$-submodule and so the homomorphism $\psi$ is unchanged. Hence, $(\phi_\new, \psi)$ is a gluing of $B$ onto $Y$ in $W_\new$. \end{description} \begin{description}[leftmargin =0.5cm] \item[Step 4.] For each gluing, we add the kernel of $\phi_\new$ to the space of relations. \end{description} Indeed, if $\phi_\new(v) = 0$, then $v$ must be the zero vector in any final axial algebra, hence it is a relation. \begin{description}[leftmargin =0.5cm] \item[Step 5.] For each axis $a$ and subalgebra $U_\new$ which contains $a$, we use $\phi_\new$ to pull back the eigenspaces of $B \cap \phi_\new(U_\new)$ to add to the eigenspaces in $W_\new$. \end{description} Since we only consider axes and gluings up to $G$-orbit, we must be careful as one orbit of axes may split into several orbits when intersected with the subalgebra. We note that the above expansion step can be made to work if we do not expand to the whole of $W$, but just to some $G$-submodule $U$ of $W$ which contains $V$. That is, we choose some subspace complement $C$ to $V$ in $U$ (picking it to be a $G$-submodule if possible) and we expand to \[ W_\new = W \oplus V\otimes C \oplus S^2(C) \] and have $V_\new = U$. The gluing for the subalgebras and the eigenspaces are updated similarly to above. This partial expansion has the advantage that it is easier to do computationally as it is smaller and we may still be able to find relations. \subsection*{Stage 2: Building up eigenspaces} We begin by recovering the grading on $W_\new$, before finding further eigenvectors and relations. Recall that relations are simply elements of the eigenspace $W_{\new,\emptyset}$. \begin{description}[leftmargin =0.5cm] \item[Step 1.] For each axis $a$, we compute the action of $T_a$ on $W_\new$ and hence find the decomposition $W_\new = \bigoplus_{t \in T} W_{\new, t}$ with respect to $a$. \end{description} For example, in the Monster fusion law case, we have the $\mathbb{Z}_2$-decomposition $W_\new = W_{\new, +} \oplus W_{\new, -}$, where $W_{\new,+}$ and $W_{\new,-}$ are the $1$- and $-1$-eigenspaces of $\tau_a$, respectively. If $C$ is a submodule, then the calculation can be simplified as follows \[ W_{\new, t} = W_t \oplus \bigoplus_{s \in T} (V_s \otimes C_{s^{-1}t}) \oplus \bigoplus_{s \in T} C_s \times C_{s^{-1}t} \] where $V_s$ and $C_s$ are the $T$-graded parts of $V$ and $C$ respectively. We no longer need the old $W$, so we now drop the subscript and write $W$ for $W_\new$ and similarly $V$ for $V_\new$. \begin{description}[leftmargin =0.5cm] \item[Step 2.] We repeatedly apply the following techniques until the pure eigenspaces $W_I$ (including the relation eigenspace $W_{\emptyset}$) stop growing. \begin{enumerate} \item For each $t \in T$, we sum together and take intersections of the $W_I$ for each pure subset $I \subsetneqq \mathcal{F}_t$ as per Lemma \ref{sumup&int}. \item For each $t \in T$, let $\lambda \in I \subseteq \mathcal{F}_t$. For each $u \in W_I \cap V$, we add $ua - \lambda u$ to $W_{I - \lambda}$ as per Lemma \ref{multiplydown}. \item We apply each useful fusion rule $I \star J = K$. That is, for all $u \in W_I \cap V$ and $v \in W_J \cap V$, we add their product $uv$ to $W_K$. \end{enumerate} \end{description} Note that in parts (2) and (3), we of course may just do these for a basis of the eigenspaces concerned. In the case of the Monster fusion law, $\mathcal{F}_- = \{ \frac{1}{32} \}$. So, for the odd subspace $W_-$, there are no subspaces to sum or intersect in part (1) above. 
Also in part (2) for $W_-$, since the only choice for $\lambda$ is $\frac{1}{32}$, we obtain that $ua - \frac{1}{32}u\in W_{\emptyset}$ is a relation. Since $W_- = W_\frac{1}{32}$ will not grow in size, we need only apply part (2) once. Also, as noted after Lemma \ref{Monsteruseful}, all the useful fusion rules for the Monster fusion law come from the even part. Therefore, for the Monster fusion law, we only need apply part (2) once to the odd part and then just work on the even part. \begin{description}[leftmargin =0.5cm] \item[Step 3. (Optional)] If additionally we want to force that $G_a$ fixes every vector in $W_{1_T}$ (as is true for primitive algebras), then we may apply the technique from Lemma \ref{wh-w} to get $u^g - u \in W_{1_T-1}$ for all $g \in G_a$ and $u \in W_{1_T}$. \end{description} By the assumptions in Lemma \ref{wh-w}, we may only apply this lemma to subsets such that $1 \in I$. We claim that it is enough to just apply it to $1_T$. By the discussion at the beginning of the section, since $1 \in 1_T$ we need just consider pure subsets $I \subset {\cal F}_{1_T}$ with $1\in I$. Let $u \in W_I \subset W_{1_T}$. So, the vector $v = u^g -u$ is found in both $W_{I - 1}$ and $W_{1_T -1}$. Since the action of $g \in G_a$ preserves the eigenspaces, we know trivially that $v \in W_I$. So, by intersecting as in Step 2 (1), we recover that $v \in W_{I - 1} = W_{1_T -1} \cap W_I$. Moreover, once we have done the expansion step, we know the decomposition given by the $T$-grading and this does not change until the next expansion step. Hence, we need only apply Step 3 once per expansion. \subsection*{Stage 3: Reduction} If we have found some relations for our algebra (i.e. $W_{\emptyset}\neq 0$), we may reduce our partial algebra $W$ by factoring out by the relations. Let $R$ be the $G$-submodule generated by the $W_{\emptyset}$. Before forming the quotient, we search for additional relations by using the two following techniques. First, if $R$ intersects $V$ non-trivially, then we may multiply $R \cap V$ by elements of $V$. Since elements $r \in R$ are relations and must become zero in the target algebra, so are $vr$, for all $r \in R \cap V$ and $v \in V$. So we repeatedly multiply by elements of $V$ to grow $R$ until the dimension of $R$ stabilises. Secondly, suppose that $R$ intersects a subspace $U = W(Y)$ where we have glued in a subalgebra $B$. Let $(\phi, \psi)$ be the gluing map. Then $R' := \phi(U \cap R)$ are relations in the subalgebra $B$. Since we know the multiplication in $B$, we can use the first technique to multiply by elements of $B$ to grow $R'$ (this may include multiplying by elements we do not yet know how to multiply by in $W$, hence giving us extra information). We then pull back $R'$ to $W$ using $\phi^{-1}$ to get additional relations. \begin{description}[leftmargin =0.5cm] \item[Step 1.] We use the above two techniques repeatedly, until we find no further relations. Let $\pi \colon W \to W/R$ be the quotient map. We define $W_\new$ as the image $\pi(W)$, $V_\new = \pi(V)$ and $\mu_\new$ is the map induced by $\mu$. \end{description} \begin{description}[leftmargin =0.5cm] \item[Step 2.] For each gluing, we update both the subspace and the subalgebra by taking $U_\new = \pi(U)$ and $B_\new = B/\pi(U \cap R)$ and updating the gluing maps accordingly. \end{description} \begin{description}[leftmargin =0.5cm] \item[Step 3.] We transfer the axes and eigenspaces $W_I$ to $W_{\new}$ by applying $\pi$. 
\end{description} Note that if $R$ contains any relations of the form $a-b$ for axes $a$ and $b$, then we have reduced the (potential) algebra to one generated by a smaller set of axes $X'$. Hence we may exit the algorithm. Now that we have described our algorithm, we shall prove Theorem \ref{algorithmthm}. \begin{proof}[Proof of Theorem $\ref{algorithmthm}$.] It is clear from the construction of the algorithm that $A$ is spanned by products of axes in $X$. Since each axis is contained in its own $1$-eigenspace, they are idempotents. At stage 2 we use Lemma \ref{multiplydown}, so each axis must be semisimple. Also at stage 2 we impose the fusion law, therefore the multiplication must satisfy this and hence $A$ is an axial algebra for the required fusion law. By construction, for each axis $a \in X$ and $\chi \in T^*$, $\tau_a(\chi)$ is the corresponding Miyamoto automorphism and hence $G_0$ is the Miyamoto group. Observe that any axial algebra $B$ with the same axes, Miyamoto group, $\tau$-map and shape must satisfy the relations we have factored by in our algorithm. If we do not use Lemma \ref{wh-w} in stage 2, then we have not factored by any other relations and so $B$ must be a quotient of $A$. \end{proof} In practice, for reasons of efficiency, we perform some of the steps above in a different order. For example, we may perform the reduction step at any stage. In particular, it may be computationally advantageous to reduce once we find enough relations as any further calculations will be performed in a smaller space and hence may be quicker. \section{Results}\label{sec:results} In Table \ref{tab:results}, we present some of the results that the implementation of our algorithm \cite{ParAxlAlg} in {\sc magma} \cite{magma} has found. Our current implementation is restricted to a $\mathbb{Z}_2$-graded fusion law with one eigenvalue in the negative part and the examples given in the table are all for the Monster fusion law. All the results here are also over $\mathbb{Q}$, although our implementation works over finite fields and even function fields. Note that, although in our algorithm and implementation we do not require that the $\tau$-map be bijective, this is the case we concentrate on in the table as this is the situation considered by Seress \cite[Table 3]{seress}. The columns in the table are \begin{itemize} \item Miyamoto group $G_0$. \item Axes, where we give the size decomposed into the sum of orbit lengths. \item Shape. Here we omit shapes of type $5\textrm{A}$ and $6\textrm{A}$ as where these occur they are uniquely defined. If an algebra contains a $4\textrm{A}$, or $4\textrm{B}$, we omit to mention the $2\textrm{B}$, or $2\textrm{A}$, respectively, that is contained in it. Likewise, we omit the $2\A$ and $3\A$ that are contained in a $6\A$. \item Dimension of the algebra. A question mark indicates that our algorithm did not complete and a $0$ indicates that the algebra collapses. \item The minimal $m$ for which $A$ is $m$-closed. Recall that an axial algebra is $m$-closed if it is spanned by products of length at most $m$ in the axes. \item Whether the algebra has a $G_0$-invariant Frobenius form that is non-zero on the set of axes $X$. If it is additionally positive definite or positive semi-definite, we mark this with a pos, or semi, respectively. \end{itemize} In addition to the results in the table, we have computed many of the smaller groups acting on larger numbers of axes. 
For example, we have computed $S_4$ acting on $6$, $6+6$, $6+6+6$, $12$, $12+12$, $12+12+12$, $1+3+6$, $1+3+6+6$, $1+3+3+6+6$, $3+6$, $3+3+6$ and $3+6+6$ axes, but we do not present these results here. Several of these are useful for gluing in to complete examples for larger groups $G_0 \geq S_4$. Compared to Seress \cite{seress}, we find several new algebras. This includes several new examples that are $3$-closed, only one of which was previously known. It also includes many examples that do not satisfy the M8 condition, or the $2\mathrm{Aa}$, $3\A$, or $4\A$ conditions, but nevertheless lead to examples. Note that Seress considers both $A_6$ and the non-split extension $3^{\textstyle\cdot} A_6$. However, $3^{\textstyle\cdot} A_6$ does not have a faithful transitive action on $45$ points with an admissible $\tau$-map. Indeed, its only actions on $45$ points with an admissible $\tau$-map have kernel $C_3$ and $A_6$ acting faithfully. So, there is no axial algebra with Miyamoto group $3^{\textstyle\cdot} A_6$ acting on $45$ axes. We now note some interesting results coming from the computed examples: In all the cases below, there is at most one class of admissible $\tau$-map, however this is not true in general. For example, the group $2^4$ acting on $2+2+2+2$ axes has four classes of admissible $\tau$-maps at least three of which lead to non-trivial axial algebras. All the examples found so far are primitive (although in most cases the optional step 3 in stage 2 using Lemma \ref{wh-w} was used to construct them). The largest $m$ for which we have examples which are $m$-closed but not $(m-1)$-closed is $5$. There are two such examples which are $S_4$ acting on $1+3+6$ axes with shapes $4\A3\A2\A2\B2\B$ and $4\A3\C2\A2\B2\B$. These have dimension $52$ and $27$, respectively. All the examples computed have a $G_0$-invariant Frobenius form that is non-zero on the axes and all these forms are positive semi-definite. Although the vast majority are positive definite, there are examples for which the form is positive semi-definite but not positive definite. For example there is an algebra for the group $V_4$ acting on $2+2+1$ axes with shape $4\A2\A2\A$ and it has dimension $14$. The radical of the form is $3$-dimensional which gives an ideal in the algebra. Once we factor out by this, the resulting algebra also has the same group, orbit structure of axes and shape and is primitive of dimension $11$ with a positive definite Frobenius form. We have also found several different examples which do not satisfy the $2\mathrm{Aa}$, $3\A$, or $4\A$ conditions. In particular, when $\tau$ is not bijective, or is bijective and $1_G$ is in the image of the $\tau$-map (so there is an isolated axis) it is easy to find such examples. However, we should not expect these conditions to hold even when $\tau$ is a bijection. The example for $A_6$ on $45$ axes of shape $4\B3\A3\C$ has dimension $105$, but does not satisfy the $3\A$ axiom. There are pairs of $3\A$ subalgebras which are disjoint, but have the same induced Miyamoto group. Finally, consider the example which we cannot complete for $S_3\times S_3$ with $3+3$ axes and shape $3\A3\A2\A$. An algebra of this shape can be found in the algebra $A$ of shape $3\A2\A$ on $15$ axes. Namely, if we consider the subalgebra generated by the $3+3$ axes this has the required shape. Moreover, this subalgebra is in fact the full algebra $A$, but it is $4$-closed with respect to these $3+3$ axes. 
Since $A$ is a quotient of the algebra $\hat{A}$ we are trying to compute, the algebra $\hat{A}$ of shape $3\A3\A2\A$ is at least $4$-closed, which may be one reason it is hard to construct even though it is a small group. \begin{longtable}{cccccc} $G_0$ & axes & shape & dim & $m$ & form\\ \hline $S_3\times S_3$ & 3+3 & 3A3A2A & ? \\ $S_3\times S_3$ & 3+3 & 3A3A2B & 8 & 2 & pos\\ $S_3\times S_3$ & 3+3 & 3A3C2A & 0 & 0 & -\\ $S_3\times S_3$ & 3+3 & 3A3C2B & 7 & 2 & pos\\ $S_3\times S_3$ & 3+3 & 3C3C2A & 0 & 0 & -\\ $S_3\times S_3$ & 3+3 & 3C3C2B & 6 & 1 & pos\\ $S_3\times S_3$ & 3+9 & 3A3A & 18 & 2 & pos\\ $S_3\times S_3$ & 3+9 & 3A3C & 0 & 0 & -\\ $S_3\times S_3$ & 3+9 & 3C3A & 0 & 0 & -\\ $S_3\times S_3$ & 3+9 & 3C3C & 0 & 0 & -\\ $S_3\times S_3$ & 3+3+9 & 3A2A & 18 & 2 & pos\\ $S_3\times S_3$ & 3+3+9 & 3A2B & 25 & 3 & pos\\ $S_3\times S_3$ & 3+3+9 & 3C2A & 0 & 0 & -\\ $S_3\times S_3$ & 3+3+9 & 3C2B & 0 & 0 & -\\ &&&&\\ $S_4$ & 6 & 3A2A & 13 & 2 & pos \\ $S_4$ & 6 & 3A2B & 13 & 3 & pos\\ $S_4$ & 6 & 3C2A & 9 & 2 & pos \\ $S_4$ & 6 & 3C2B & 6 & 1 & pos \\ $S_4$ & 6+3 & 4A3A2A & 23 & 3 & pos\\ $S_4$ & 6+3 & 4A3A2B & 25 & 3 & pos\\ $S_4$ & 6+3 & 4A3C2A & 0 & 0 & -\\ $S_4$ & 6+3 & 4A3C2B & 12 & 2 & pos \\ $S_4$ & 6+3 & 4B3A2A & 13 & 2 & pos \\ $S_4$ & 6+3 & 4B3A2B & 16 & 2 & pos \\ $S_4$ & 6+3 & 4B3C2A & 9 & 1 & pos \\ $S_4$ & 6+3 & 4B3C2B & 12 & 2 & pos \\ &&&&\\ $A_5$ & 15 & 3A2A & 26 & 2 & pos\\ $A_5$ & 15 & 3A2B & 46 & 3 & pos\\ $A_5$ & 15 & 3C2A & 20 & 2 & pos\\ $A_5$ & 15 & 3C2B & 21 & 2 & pos\\ &&&&\\ $S_5$ & 10 & 3A2A & ? & \\ $S_5$ & 10 & 3A2B & ? & \\ $S_5$ & 10 & 3C2A & 0 & 0 & - \\ $S_5$ & 10 & 3C2B & 10 & 1 & pos\\ $S_5$ & 10+15 & 4A & 61 & 2 & pos\\ $S_5$ & 10+15 & 4B & 36 & 2 & pos\\ &&&&\\ $L_3(2)$ & 21 & 4A3A & ? & \\ $L_3(2)$ & 21 & 4A3C & 57 & 3 & pos\\ $L_3(2)$ & 21 & 4B3A & 49 & 2 & pos\\ $L_3(2)$ & 21 & 4B3C & 21 & 1 & pos\\ &&&&\\ $A_6$ & 45 & 4A3A3A & ? & \\ $A_6$ & 45 & 4A3A3C & 0 & 0& -\\ $A_6$ & 45 & 4A3C3C & 187 & 3 & pos\\ $A_6$ & 45 & 4B3A3A & 76 & 2 & pos\\ $A_6$ & 45 & 4B3A3C & 105 & 2 & pos\\ $A_6$ & 45 & 4B3C3C & 70 & 2 & pos\\ &&&&\\ $S_6$ & 15 & 3A2A & ? & \\ $S_6$ & 15 & 3A2B & ? & \\ $S_6$ & 15 & 3C2A & 0 & 0 & -\\ $S_6$ & 15 & 3C2B & 15 & 1 & pos\\ $S_6$ & 15+15 & 4A3A3A2A & ? & \\ $S_6$ & 15+15 & 4A3A3A2B & ? & \\ $S_6$ & 15+15 & 4A3A3C2A & 0 & 0 & -\\ $S_6$ & 15+15 & 4A3A3C2B & 0 & 0 & -\\ $S_6$ & 15+15 & 4A3C3C2A & 0 & 0 & -\\ $S_6$ & 15+15 & 4A3C3C2B & 0 & 0 & -\\ $S_6$ & 15+15 & 4B3A3A2A & 0 & 0 & -\\ $S_6$ & 15+15 & 4B3A3A2B & ? & \\ $S_6$ & 15+15 & 4B3A3C2A & 0 & 0 & -\\ $S_6$ & 15+15 & 4B3A3C2B & 0 & 0 & -\\ $S_6$ & 15+15 & 4B3C3C2A & 0 & 0 & -\\ $S_6$ & 15+15 & 4B3C3C2B & 0 & 0 & -\\ $S_6$ & 15+45 & 4A4A3A2A & 151 & 2 & pos\\ $S_6$ & 15+45 & 4A4A3A2B & 0 & 0 & -\\ $S_6$ & 15+45 & 4A4A3C2A & 0 & 0 & -\\ $S_6$ & 15+45 & 4A4A3C2B & 0 & 0 & -\\ $S_6$ & 15+45 & 4B4B3A2A & 0 & 0 & -\\ $S_6$ & 15+45 & 4B4B3A2B & 91 & 2 & pos\\ $S_6$ & 15+45 & 4B4B3C2A & 0 & 0 & -\\ $S_6$ & 15+45 & 4B4B3C2B & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4A2A2A2A & 151 & 2 & pos\\ $S_6$ & 15+15+45 & 4A2A2A2B & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4A2A2B2B & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4A2B2A2A & 151 & 2 & pos\\ $S_6$ & 15+15+45 & 4A2B2A2B & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4A2B2B2B & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4B2A2A2A & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4B2A2A2B & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4B2A2B2B & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4B2B2A2A & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4B2B2A2B & 0 & 0 & -\\ $S_6$ & 15+15+45 & 4B2B2B2B & 106 & 2 & pos\\ &&&&\\ $3.S_6$ & 45 & 3A & ? 
\\ $3.S_6$ & 45 & 3C & 0 & 0 & - \\ $3.S_6$ & 45+45 & 3A2A & 0 & 0 & - \\ $3.S_6$ & 45+45 & 3A2B & 0 & 0 & - \\ $3.S_6$ & 45+45 & 3C2A & 0 & 0 & - \\ $3.S_6$ & 45+45 & 3C2B & 136 & 2 & pos\\ &&&&\\ $(S_4\times S_3) \cap A_7$ & 18 & 3A3A3A & ? \\ $(S_4\times S_3) \cap A_7$ & 18 & 3A3A3C & 0 & 0 & - \\ $(S_4\times S_3) \cap A_7$ & 18 & 3A3C3C & ? \\ $(S_4\times S_3) \cap A_7$ & 18 & 3C3C3C & ? \\ $(S_4\times S_3) \cap A_7$ & 18+3 & 4B3A3A3A2A & ? \\ $(S_4\times S_3) \cap A_7$ & 18+3 & 4B3A3A3A2B & ? \\ $(S_4\times S_3) \cap A_7$ & 18+3 & 4B3A3A3C2A & 0 & 0 & - \\ $(S_4\times S_3) \cap A_7$ & 18+3 & 4B3A3A3C2B & 0 & 0 & - \\ $(S_4\times S_3) \cap A_7$ & 18+3 & 4B3A3C3C2A & ? \\ $(S_4\times S_3) \cap A_7$ & 18+3 & 4B3A3C3C2B & ? \\ $(S_4\times S_3) \cap A_7$ & 18+3 & 4B3C3C3C2A & 24 & 2 & pos \\ $(S_4\times S_3) \cap A_7$ & 18+3 & 4B3C3C3C2B & 27 & 2 & pos \\ &&&&\\ $L_2(11)$ & 55 & 6A5A5A & 101 & 2 & pos\\ &&&&\\ $L_3(3)$ & 117 & 3A & 0 & 0 & -\\ $L_3(3)$ & 117 & 3C & 144 & 2 & pos\\ &&&&\\ $(S_5\times S_3) \cap A_8$ & 30 & 3A3A & ? \\ $(S_5\times S_3) \cap A_8$ & 30 & 3A3C & 0 & 0 & -\\ $(S_5\times S_3) \cap A_8$ & 30 & 3C3A & 0 & 0 & -\\ $(S_5\times S_3) \cap A_8$ & 30 & 3C3C & 0 & 0 & - \\ $(S_5\times S_3) \cap A_8$ & 15+30 & 3A & 67 & 2 & pos\\ $(S_5\times S_3) \cap A_8$ & 15+30 & 3C & 0 & 0 & - \\ \hline \caption{Results}\label{tab:results} \end{longtable}
\section{Introduction} \label{sec:intro} Machine learning (ML) and data-science are evolving as a multi-disciplinary field, comprising software engineering on one end and domain-specific knowledge on the other. The ML community has largely adopted Jupyter notebook\xspace{}s as the de-facto standard for developing ML solutions. Notebooks are based on the principle of \textit{literate programming}~\cite{knuthLiterateProgramming1984a} that advocates the combination of code, documentation and visualization as a single document. The central idea of literate programming is to enhance comprehension and sharing of solutions to complex problems. This can be achieved by following literate programming principles such as: (1) enriching code with rich descriptive texts and figures, (2) creating a narrative structure in the program by adding headers to code snippets, and (3) logically dividing and labeling reusable sections of the program. In notebook\xspace{}s, executable code is written in code cells and documentation is written in markdown cells. An example notebook showing Python\xspace code and markdown cells can be seen in Figure \ref{motivatingnb}. Note that the most used language for developing ML-based solutions in notebook\xspace{}s is Python\xspace{} \cite{ruleExplorationExplanationComputational2018}. Enriching code snippets with explanatory text enhances the overall comprehensibility of notebooks and further promotes collaboration~\cite{wagemann2022five}. Furthermore, Wagemann et al.~\cite{wagemann2022five} suggest that a markdown/code cell ratio of 2, i.e., twice the number of markdown cells compared to code cells, is an indication of good literate programming practice. In addition, Samuel and Mietchen~\cite{samuel2022computational} also report that notebooks with a higher markdown/code cell ratio are expected to have better reproducibility, which is a critical indicator in scientific studies. While Jupyter notebook\xspace{}s enable the easy creation of computational narratives according to literate programming principles, this is often not practiced in real-world notebooks \cite{keryStoryNotebookExploratory2018}. Instead, studies have shown that code-smells and bad practices are common in publicly available notebooks \cite{wangBetterCodeBetter2020}. According to a study by Rule et al.~\cite{ruleExplorationExplanationComputational2018}, interviewees described their notebooks as personal scratch-pads and ``messy'', in other words, their notebooks lack a narrative structure. The authors also highlighted that data scientists often do not annotate their notebooks, citing either lack of time or being ``too lazy''. In a later study, Pimentel et al.~\cite{pimentelLargeScaleStudyQuality2019} found that 30.93\% of the 1.4 million real-world notebooks they studied had no markdown cells. This finding is consistent with the latest study by Quaranta et al.~\cite{quarantaBestPractices2022}. On assessing the extent to which data scientists are familiar with, and follow, best practices, the authors note that there is a lack of effort in annotating notebooks with markdown cells. Yet, striving to adhere to literate programming principles becomes crucial in educational and sharing communities, for instance, in platforms such as Kaggle \cite{KaggleYourMachine}, as bad coding practices can lead to mistakes being carried over to the next generation of developers. Therefore, we argue that there is a strong need for the software engineering research community to develop tools for notebook users.
To this end, this paper proposes HeaderGen\xspace{}, a tool-based approach to enhance the comprehension and navigation of undocumented Python\xspace{}-based Jupyter notebooks by automatically creating a narrative structure in the notebook. Figure~\ref{t:ml_operations} shows a taxonomy of ML operations inspired by the work of Wang et al.~\cite{wangDocumentationMattersHumanCentered2022a}. Data scientists build an ML-based solution notebook by first preparing the data, then extracting key features, and then creating and training the model. HeaderGen\xspace leverages this implicit narrative structure of an ML notebook to add structural headers as annotations to the notebook. HeaderGen\xspace works by precisely detecting every function call in the notebook, classifying it according to the ML operations taxonomy, and then using this classification information to create a structural map of the notebook. This map is displayed as an \textit{``index of ML operations''} at the top of the notebook, giving the notebook a narrative structure. Additionally, each code cell is annotated with a markdown header indicating the ML operations being performed (see the example on page \pageref{fig:annotations}). To yield useful results, HeaderGen\xspace{} requires a fast and accurate program analysis that can precisely identify all function calls in the notebook. However, we found that none of the existing techniques were able to statically identify all function calls in a notebook\xspace{} with acceptable precision, recall, and run time. This is attributed to the complex features of Python\xspace{}, such as duck typing, dynamic code execution, reflection, etc., which are challenging to static analyzers \cite{salisPyCGPracticalCall2021a,kummitaQualitativeQuantitativeAnalysis2021}. Moreover, unlike other programming languages such as Java, Python\xspace{} lacks tool support for many state-of-the-art static analysis (SA) techniques. Instead, most tools available for Python\xspace{} today are based on a makeshift analysis of abstract syntax trees (AST) of Python\xspace{} source code \cite{yangComplexPythonFeatures2022}. Furthermore, due to the dynamically typed nature of Python\xspace{}, concrete static type-inference of variables is required for precise static analysis. A recently published call-graph generation technique called PyCG \cite{salisPyCGPracticalCall2021a} is based on an intermediate representation of the AST and handles several complex Python\xspace{} features. Yet, PyCG fails to analyze function calls to external libraries and its analysis is flow-insensitive, making it impossible to precisely identify function calls in real-world programs. HeaderGen\xspace rectifies these limitations. \begin{figure} \renewcommand\DTstyle{\rmfamily} \DTsetlength{0.2em}{0.7em}{0.2em}{0.4pt}{0pt} \dirtree{%
.1 \fbox{\textbf{Generic Operations}}\vspace{-.6ex}. .2 Library Loading. .2 Visualization. } \vspace{.7em} \DTsetlength{0.2em}{0.7em}{0.2em}{0.4pt}{0pt} \dirtree{%
.1 \fbox{\textbf{Data Preparation and Exploration}}\vspace{-.6ex}. .2 Data Loading. .2 Exploratory Data Analysis. .2 Data Cleaning Filtering. .2 Data Sub-sampling and Train-test Splitting. } \vspace{.7em} \DTsetlength{0.2em}{0.7em}{0.2em}{0.4pt}{0pt} \dirtree{%
.1 \fbox{\textbf{Feature Engineering}}\vspace{-.6ex}. .2 Feature Transformation. .2 Feature Selection. } \vspace{.7em} \DTsetlength{0.2em}{0.7em}{0.2em}{0.4pt}{0pt} \dirtree{%
.1 \fbox{\textbf{Model Building and Training}}\vspace{-.6ex}. .2 Model Training. .2 Model Parameter Tuning.
.2 Model Validation and Assembling. } \caption{Taxonomy of machine learning operations based on \cite{wangDocumentationMattersHumanCentered2022a}.} \label{t:ml_operations} \end{figure} To summarize, the challenge that HeaderGen\xspace addresses is two-fold: (1) Inaccurate static analysis: the absence of a static program analysis technique that can precisely identify function calls in a Python\xspace program. To mitigate this, HeaderGen\xspace extends the call-graph analysis in PyCG with the ability to resolve function calls to external libraries and adds flow-sensitivity. (2)~Undocumented notebooks: many publicly available notebook\xspace{}s are undocumented, which hampers comprehension and goes against the principle of literate programming. HeaderGen\xspace uses precise SA to automatically annotate the notebook with structural headers and creates a narrative structure to aid comprehension of undocumented notebook\xspace{}s. To assess HeaderGen\xspace's static function-call analyzer, we use an extended version of PyCG's micro-benchmark and, in addition, a real-world benchmark with 15 notebook\xspace{}s from Kaggle. On the real-world benchmark, HeaderGen\xspace achieved 96.4\% precision and 95.9\% recall, outperforming PyCG and other function call analyzers based on off-the-shelf tools such as \emph{pyright} \cite{StaticTypeChecker2022} and \emph{Jedi} \cite{halterJediAwesomeAutocompletion2022}. On the same benchmark we also evaluated HeaderGen\xspace for header annotation and achieved 82.2\% precision and 96.8\% recall. Furthermore, we conducted a user study with eight data-science practitioners and found clear evidence that HeaderGen\xspace improves the speed of navigation and comprehension. The contributions of this work are summarized as follows: \begin{itemize} \item We propose a novel static-analysis-based approach for Python\xspace Jupyter notebooks that can automatically annotate them with structural, commentary, and navigational text, aiming to facilitate literate programming practice. \item We implement a static function call extraction technique for Python\xspace{} with 96.4\% precision and 95.9\% recall on our real-world benchmark. \item We give an evaluation of our approach based on extensive experimental results. \item We implement the prototype named HeaderGen\xspace and make it publicly available for our community to reuse. \end{itemize} The remainder of this paper is organized as follows: we present the challenges in supporting computational notebooks with static header generation in Section \ref{sec:motivating_example}, followed by our design in Section \ref{sec:method}. We then present the evaluation from Section \ref{sec:evaluation} to Section \ref{subsec:RQ4}, and discuss existing research in Section \ref{sec:relatedwork}. The limitations of HeaderGen\xspace are discussed in Section \ref{sec:discussion}, and finally the paper is concluded in Section \ref{sec:conclusion}. \textbf{Availability.} HeaderGen\xspace is published on GitHub as open-source software under the Apache 2.0 license: \url{https://github.com/ashwinprasadme/headergen} \section{Motivating Example} \label{sec:motivating_example} As a motivating example, consider the notebook\xspace{} in Figure~\ref{motivatingnb}. It consists of one markdown cell, which is rendered as an HTML header, and five code cells that can be identified by the comments in the first line of each code cell.
The example notebook in Figure \ref{motivatingnb} is a concise version of a real-world notebook containing a machine learning (ML) based solution. \begin{figure}[t] \centerline{\includegraphics[width=.95\linewidth]{images/motivating_example}} \caption{Machine learning Jupyter notebook example.} \label{motivatingnb} \end{figure} In cell 1, various ML libraries are imported. In cell 2, a sample dataset called ``iris" from the seaborn library is loaded, and further feature selection operations are performed to retain only the essential columns from the dataset. Values are type-cast to numpy based float64 type. Finally, the dataset is checked for null values. In cell 3, the dataset is split into training and test datasets. In cells 4 and 5, with the processed dataset, two different ML models are defined, trained, and their accuracies are reported. In cell 4, a basic linear model based on logistic regression is used. In cell 5, a deep learning based sequential model is used. Note that this notebook is undocumented and does not contain any explanatory text or structural headers as markdown cells, violating the literate programming principle. One in three notebooks found in the wild does not contain any markdown cells~\cite{pimentelLargeScaleStudyQuality2019}. In absence of explanatory text or structural headers, ML~practitioners, especially beginners, must spend more time to navigate and comprehend different aspects of the notebook. Particularly considering that nearly a third of all notebooks in the real-world contain at least 50 cells~\cite{pimentelLargeScaleStudyQuality2019}. On the other hand, the example notebook\xspace{} poses several challenges to SA, including: \begin{itemize}[leftmargin=0.2cm] \item \textbf{Import aliasing:} different ways of importing libraries, and importing libraries with aliases. \item \textbf{Dynamic typing:} in cell 2, the type of the variable \texttt{iris\_dataset} is not known statically, i.e., the return-type of the function \texttt{load\_dataset()} is not known statically. As a result, subsequent statements that involve the variable \texttt{iris\_dataset} cannot be resolved, i.e., in cell~2 lines 4--7. \item \textbf{Chained function calls:} consider the function call in cell~2 line~4, \texttt{iris\_dataset.values[].astype()}, here, the variable \texttt{iris\_dataset} is of type \textit{Dataframe} from the \textit{Pandas} library. \texttt{iris\_dataset.values} refers to an attribute of the class \textit{Dataframe}, which is in-turn defined as a \textit{Numpy} array. Furthermore, \texttt{astype()} refers to a function from the \textit{Numpy} library. Existing SA tools fail to resolve all this information statically. \item \textbf{Variable reuse:} the same variable \texttt{model} is reused in cells 4 and 5, for different model objects, i.e., \texttt{Sequential} and \texttt{LogisticRegressionCV} objects. Reuses of the same variable names are common in notebook\xspace{}s. Therefore, for precise annotation of code cells, the analyzer should know the type of an object at a specific location in the notebook. In other words, the analysis should be flow-sensitive. \end{itemize} In summary, for HeaderGen\xspace to accurately classify code cells based on function calls, the static analyzer needs to: (1)~handle complex Python\xspace features, (2)~statically resolve return-types of external library calls, and (3)~be flow-sensitive. \section{Approach} \label{sec:method} Figure \ref{fig:overview} gives a high-level overview of HeaderGen\xspace{}. 
First, it converts a notebook\xspace{} into a native Python\xspace{} script for analysis. This strips metadata from the notebook\xspace that are irrelevant for analysis. HeaderGen\xspace then analyzes the Python\xspace{} script to create an extended assignment graph (EAG). Further, HeaderGen\xspace extracts flow-sensitive callsite information based on the EAG, and finally annotates the notebook\xspace{} with headers based on the identified callsites and adds the index of ML operations based on the library-to-taxonomy mapping database. We discuss the details of constructing an EAG and extracting flow-sensitive callsite information in Sections \ref{subs:eag} and \ref{subs:flowsens}. Then, in Section \ref{subs:jnannotation}, we discuss the process of annotating the notebook based on the output of the analyzer. \subsection{Extended Assignment Graph} \label{subs:eag} To extract all possible callsites in the program, we add flow-sensitivity and the ability to analyze external libraries to the existing state-of-the-art context-insensitive and inter-procedural call-graph (CG) generation technique, PyCG \cite{salisPyCGPracticalCall2021a}. PyCG works on a custom intermediate representation of a Python\xspace{} AST and generates an assignment graph (AG) that represents assignment relations between program identifiers. The CG is then generated based on the AG by resolving all function calls that a program variable might point-to. Figure \ref{fig:agpycg} shows the AG generated by PyCG for the variable \texttt{model} in our motivating example. Since PyCG cannot analyze calls to external libraries, it does not add any edges to the \texttt{model} node. However, callsite information from real-world notebooks cannot be extracted with high accuracy without analyzing external library functions. To wit, in our motivating example, without analyzing the function \texttt{load\_dataset()} from the seaborn library, further references to the variable \texttt{iris\_dataset} cannot be resolved. Moreover, PyCG's analysis is flow-\emph{insensitive}, therefore the generated AG fails to distinguish between different assignments to the same variable. For instance, in our motivating example, \texttt{model} is redefined in cell 5 (cf. Figure \ref{motivatingnb}), however, the generated AG shown in Figure \ref{fig:agpycg} maintains only a single node for the \texttt{model} variable. PyCG over-approximates \texttt{model} with weak-updates to the AG, thereby, compromising on precision. Therefore, we extend PyCG's AG by an extended assignment graph (EAG) based on an additional helper analysis to enable flow-sensitive callsite recognition and further add a return-type approximation technique to resolve calls to external libraries. \begin{figure}[tbp] \centerline{\includegraphics[width=1\linewidth]{images/overview}} \caption{High-level overview of HeaderGen\xspace{}.} \label{fig:overview} \end{figure} \textbf{Definition-use Chain for Flow-sensitivity.} A definition-use chain \cite{kennedyUsedefinitionChainsApplications1978} (DUC) is a data structure that represents a definition, or assignment, of a program variable and all the subsequent uses without any re-definitions in between. DUCs are generated by analyzing all assignment statements in the program with consideration of variable scopes. We use an existing tool, Beniget \cite{serge-sans-pailleGastBeniget2022}, a DUC generation tool that works by analyzing the AST of Python\xspace{} programs. 
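To illustrate the role of the DUC, recall from Section~\ref{sec:motivating_example} that the name \texttt{model} is defined twice, once in cell 4 and once in cell 5, and each definition has its own uses. The following toy snippet (illustrative only; HeaderGen\xspace obtains this information from Beniget rather than re-implementing it, and the cell contents are stand-ins for the notebook's actual code) records every definition together with the location at which it occurs, which is precisely the separation needed to build a flow-sensitive EAG:
\begin{verbatim}
import ast

# Simplified stand-ins for cells 4 and 5 of the motivating example, in which
# the same name `model` is bound to two different objects:
cells = {
    "C4": "model = LogisticRegressionCV()\nmodel.fit(X_train, y_train)",
    "C5": "model = Sequential()\nmodel.fit(X_train, y_train)",
}

# A first step towards the location map described below: every *definition*
# is kept separate, keyed by (cell, line), so the two assignments to `model`
# are never conflated.
definitions = {}
for cell, src in cells.items():
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            definitions[(cell, node.lineno)] = node.targets[0].id

print(definitions)  # {('C4', 1): 'model', ('C5', 1): 'model'}
\end{verbatim}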
While a tool exists for Python\xspace{} to compute the DUC, no existing implementation makes use of the DUC to construct flow-sensitive call-graphs for Python\xspace{}. HeaderGen\xspace first uses the DUC generated by Beniget to create a location map that gives information about what variables are used at particular locations of a notebook. Then, this map is used to create the EAG that can differentiate variables based on the location of their definition. For instance, the EAG shown in Figure \ref{fig:ex_ag} captures multiple definitions of the \texttt{model} variable separately. \textbf{Return-type Resolution of Machine Learning Libraries.} Consider the variable \texttt{iris\_dataset} assigned to the return of function \texttt{load\_dataset()} at location cell 2 line 2, represented as (C2,2) in the motivating example (Figure \ref{motivatingnb}). Within the seaborn library, the call to \texttt{load\_dataset()} is resolved to \texttt{seaborn.utils.load\_dataset}, which returns an object of type \texttt{pandas.Dataframe}. For HeaderGen\xspace, this type information is crucial: only if HeaderGen\xspace knows \texttt{iris\_dataset}'s type can it statically analyze calls on this variable. For instance, \texttt{iris\_dataset} is used at (C2,4), (C2,5), and (C2,7), none of which can be resolved without knowing that \texttt{iris\_dataset} is of type \texttt{pandas.Dataframe}. Yet, since Python\xspace{} is a dynamically typed language, return-type information is not readily available for most library code. Although Python Enhancement Proposals (PEPs) such as PEP~484\cite{PEP484Type} allow type annotations to be written directly in source code, recent work suggests that such annotations are still largely missing in practice \cite{di2022evolution}. Although it remains an open challenge, type inference for Python has received considerable attention from researchers. While leading tech giants like Google, Meta, and Microsoft rely on static tools (e.g., \emph{pytype} \cite{Pytype2022}) to ensure the quality of their codebases, the majority of current research efforts employ deep learning techniques. Unfortunately, none of the available tools can accomplish what we need. This is mainly because external function calls frequently create dataflow disruptions in notebook programs. Existing learning-based approaches such as Typilus~\cite{allamanisTypilusNeuralType2020} often only leverage the source code's contextual information to generate probabilistic type candidates. Static tools such as \emph{pytype} and \emph{pyre} often ship with tailored type stubs and provide no support for user type stubs. The two tools also do not infer types for local variables, leaving class method calls hard to resolve. \emph{pyright}~\cite{StaticTypeChecker2022}, a type checking tool, supports custom type stubs for external libraries, but does not model library-specific behavior, leading to a loss of recall. Moreover, \emph{pyright} needs to be further re-engineered to obtain inferred type hints as it is designed for type checking \cite{yang2022data}. Furthermore, the well-known open-source project \emph{Jedi}~\cite{halterJediAwesomeAutocompletion2022} cannot analyze complex Python\xspace features, and suffers from performance issues.
\begin{figure}[t] \centering \subfloat[Assignment Graph]{\includegraphics[width=.27\linewidth]{images/AG.png} \label{fig:agpycg}} \hfill \raisebox{13mm}{\Large $\Rightarrow$} \hfill \subfloat[Extended Assignment Graph]{\includegraphics[width=.625\linewidth]{images/EAG.png} \label{fig:ex_ag}} \caption{Generated assignment graphs for the variable ``model'' in the motivating example, left in PyCG (empty), right in HeaderGen (flow-sensitive).} \label{fig:af_compare} \end{figure} \emph{PyCG} is of no help here: it does not analyze calls to external libraries, but simply ignores them. We attempted to force \emph{PyCG} to analyze ML libraries such as \emph{Numpy} and \emph{Pandas}. Yet, we failed to obtain results due to crashes and out-of-memory exceptions. External libraries, especially ML libraries, can contain millions of lines of code, and PyCG's fixed-point algorithm does not terminate within reasonable time and memory. Even after (unsoundly) limiting the number of iterations of PyCG's fixed-point algorithm, the resulting AG was unsuitable for real-world application because of its low precision and recall. An analysis of the ML libraries' code thus seems out of reach with current tooling. We further explore these limitations with a quantitative comparison of \emph{PyCG}, \emph{Jedi}, and \emph{pyright} with HeaderGen\xspace in the evaluation Section \ref{subsec:RQ4}. We thus instead designed a tool-assisted approximate technique for resolving return-types of function calls to external libraries. Figure~\ref{fig:ext_workflow} shows HeaderGen\xspace{}'s approach for return-type approximation. First, we created a database of stub files for popular ML libraries such as \emph{Keras}, \emph{Numpy}, \emph{Pandas}, etc. Stub files contain type hints defined relative to the original Python\xspace{} source code and are stored as \textit{.pyi} files. To build the database, we first created scaffolding .pyi files for all ML libraries we selected. This was followed by a manual inspection of function documentation and, in some instances, confirmation by manual function execution, to create type annotations for individual function calls. We note that this is still a work in progress and does not yet cover the entire source code of all the ML libraries that we selected. We intend to fully automate type-stub generation in the future utilizing type-inference systems such as pytype \cite{Pytype2022} which are currently under development. However, no accurate and maintained type-inference implementation for Python\xspace currently exists. \begin{figure}[tbp] \centerline{\includegraphics[width=.8\linewidth]{images/ext_library}} \caption{Workflow of imported library function return-type resolution.} \label{fig:ext_workflow} \end{figure} As shown on the bottom left of Figure \ref{fig:ext_workflow}, additional steps are required to make return-type resolution work for Python. We took for granted that \texttt{sns.load\_dataset()} resolves to \texttt{seaborn.utils.load\_dataset}. But looking at the example in Figure~\ref{motivatingnb} at (C2,2), this fully qualified function name is not at all apparent. HeaderGen\xspace thus must implement two additional steps that resolve application-side function calls to their fully qualified names. First, the external function call in the notebook is resolved based on the import information and EAG. For instance, consider location (C1,2) in our motivating example.
Here, \texttt{seaborn} is imported with alias \texttt{sns} and therefore the function call is resolved as \texttt{seaborn.load\_dataset}, as shown in red text at the bottom left of Figure~\ref{fig:ext_workflow}. But in Python, top-level modules can access function definitions in submodules by transitive imports, mapping full path API names to shorter names. For instance, \texttt{seaborn} exports functions from the submodules that actually implement them (\textit{utils.py} here). Fortunately, given that the module \texttt{seaborn} has now been determined, HeaderGen\xspace can next perform a \emph{dynamic} fully qualified name resolution using the builtin Python\xspace{} reflection mechanism \texttt{inspect}, and dynamic execution using the function \texttt{eval} on that module. During startup, HeaderGen\xspace{} imports a selected set of popular ML libraries into memory. Then, during analysis, the \texttt{eval} function is used to dynamically evaluate strings as Python\xspace expressions. In our motivating example, \texttt{eval(`seaborn.load\_dataset')} is evaluated and a reference to the returned function is stored. Note that the function \texttt{load\_dataset} is not called---only a reference to the function is dynamically created. Further, this reference is examined using the builtin \texttt{inspect} module, which can retrieve information about live Python\xspace{} objects. HeaderGen\xspace uses it to fetch the location of the function's definition in the source code, i.e., the fully qualified name. \subsection{Flow-sensitive Callsite Extraction} \label{subs:flowsens} The EAG generated in the previous step is used to construct a flow-sensitive CG using PyCG's CG construction algorithm. Here, the intermediate representation of the program is iterated over while looking for callable objects based on the EAG, which are then added to the CG. Then, the callsites are mapped according to the location of their definition in the notebook. This is achieved by mapping the line numbers of the Python\xspace script to the notebook during conversion. In addition, note that when a user-defined function that is defined elsewhere in the notebook, say \texttt{x()}, is called from a code cell, any other function called from inside \texttt{x()} is also added as originating from that particular location in the notebook, i.e., the transitive closure of the CG. This step is needed to ensure that HeaderGen\xspace can annotate code cells that are only calling functions defined in some other code cell. \subsection{Jupyter Notebook Annotation} \label{subs:jnannotation} The goal of HeaderGen\xspace{} is to aid data scientists in easily navigating and comprehending undocumented notebook\xspace{}s. To this end, the callsite information output by HeaderGen\xspace's analyzer is used to add helpful information to the notebook. First, function calls found by the analysis are classified based on the ML operations in Figure \ref{t:ml_operations}. The classification is based on a manually curated database that maps individual API calls of popular ML libraries to ML operations. The ML operation mapping was created by manually inspecting the official function documentation and cross-referencing real-world usages. Some functions can be easily mapped to one of the operation categories. For instance, \texttt{pyplot.plot} is classified as \textit{Visualization}.
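Conceptually, this classification step reduces to a lookup of each cell's fully qualified callsites in the curated mapping; the sketch below illustrates the idea. The database entries and the helper shown here are ours and only mimic HeaderGen\xspace's actual, much larger mapping database.
\begin{lstlisting}[language=python]
# Illustrative excerpt of a call-to-taxonomy mapping (not HeaderGen's
# shipped database); a call may map to more than one ML operation.
TAXONOMY_DB = {
    "matplotlib.pyplot.plot": ["Visualization"],
    "numpy.reshape": ["Data Cleaning Filtering", "Feature Transformation"],
}

def classify_cell(callsites):
    """Collect the ML operation categories for one code cell."""
    categories = []
    for call in callsites:
        for category in TAXONOMY_DB.get(call, []):
            if category not in categories:
                categories.append(category)
    return categories

# classify_cell(["matplotlib.pyplot.plot", "numpy.reshape"])
# -> ["Visualization", "Data Cleaning Filtering", "Feature Transformation"]
\end{lstlisting}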
Calls such as \texttt{numpy.reshape}, however, can belong to both \textit{Data Cleaning Filtering} and \textit{Feature Transformation}, and are therefore classified into both categories. \textbf{Pattern Matching.} Notebooks can contain code cells that perform ML operations without explicit function calls, but rather use other Python\xspace constructs that alter objects. For instance, consider the first pattern in Table~\ref{table:usage_patterns} that represents a \emph{Feature Engineering} operation, i.e., $ \texttt{df{[}`xy'{]} = df.x * df.y} $. Here, a new column \emph{xy} is being created in the Dataframe object \texttt{df} by multiplying columns \emph{x} and \emph{y}. In the absence of a function call, HeaderGen\xspace resorts to AST-based pattern matching to identify ML operations. In this specific case, HeaderGen\xspace first consults the EAG to infer that the type of the variable \texttt{df} is a Dataframe. Then, both sides of the binary operator `$*$', i.e., \texttt{df.x} and \texttt{df.y}, are checked to confirm that they are indeed Dataframe accesses. From this, HeaderGen\xspace concludes that this statement is a \emph{Feature Engineering} operation. Table \ref{table:usage_patterns} further lists some of the Dataframe access patterns that HeaderGen\xspace currently supports. \begin{table}[t] \renewcommand{\arraystretch}{1.2} \centering \caption{Dataframe usage patterns mapped to ML operations} \label{table:usage_patterns} \begin{tabular}{lll} \hline \multicolumn{1}{c}{\textbf{ID}} & \multicolumn{1}{c}{\textbf{Pattern}} & \multicolumn{1}{c}{\textbf{ML Operation}} \\ \hline 1 & \texttt{df{[}`xy'{]} = df.x * df.y} & Feature Engineering \\ \hline 2 & \texttt{df.x = 1} & \begin{tabular}[c]{@{}l@{}}Feature Transformation\\ Data Preparation\end{tabular} \\ \hline 3 & \texttt{df.x{[}df.x == 1{]} = 1} & \begin{tabular}[c]{@{}l@{}}Feature Transformation\\ Data Preparation\end{tabular} \\ \hline 4 & \texttt{x = df.x{[}{[}`f1', `f2'{]}{]}} & Feature Selection \\ \hline 5 & \texttt{print(df{[}0:20{]})} & Exploratory Data Analysis \\ \hline \end{tabular} \end{table} \textbf{Text Annotation Generation.} Based on this classification and pattern matching, the following annotations are added to the notebook: (1) Index of ML Operations, (2) Code cell headers, and (3) Table of contents. \textbf{1) Index of ML Operations:} The index provides a clickable and nested list of all function calls in the notebook classified according to the taxonomy of ML operations shown in Figure~\ref{t:ml_operations}. Figure \ref{fig:index_ml_ops} shows the index of ML operations generated for our motivating example. The index is displayed on top of the notebook using HeaderGen\xspace's notebook\xspace{} plugin. If no functions are found for a particular ML operation category, the category is displayed struck out. Each ML operation category and cell list can be expanded or collapsed as required. Function calls are organized based on the library, as seen in the figure. Additionally, different areas of the notebook are hyperlinked, which makes it easy for the user to move back and forth in the notebook. For instance, cell~5 can be quickly visited by pressing \textit{``goto cell \# 5''}, and the index is reached again by pressing \textit{``back to top''}. \textbf{2) Code cell headers:} High-level ML operation categories from the taxonomy are added as headers for individual code cells. Note that when code cells contain ML operations from more than one category, all of these are added to the header.
The headers can be further expanded to show all the functions used in the code cell below, along with the docstrings that were fetched from the source code during analysis. \textbf{3) Table of contents:} Code cell headers are attached with anchors that allow in-page navigation. Using this information, the table of contents combines the headers of all code cells and adds an anchor-link to each entry. This simplifies access to relevant sections of the notebook based on the taxonomy. \begin{figure}[t] \centering {\includegraphics[width=.95\linewidth]{images/indexof_ml_ops} \label{fig:index_ml_ops}} \caption{Index of ML operations for our motivating example. \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}} ML operation category ``Model Training" is expanded to view all code cells that are performing model training operations. \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}} Cell \# 5 is expanded to view all function calls in the cell. \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {3}}} Fully qualified function names are displayed. \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {4}}} Expanded view showing the arguments used and the docstring.} \label{fig:annotations} \end{figure} \section{Evaluation} \label{sec:evaluation} We evaluated HeaderGen\xspace to answer the following four research questions: \begin{itemize}[leftmargin=1cm] \itemsep.5em \item[\textbf{\textit{RQ1:}}] \textit{Does HeaderGen improve comprehension and navigation of undocumented Jupyter Notebooks?} \item[\textbf{\textit{RQ2:}}] \textit{How accurate is HeaderGen's callsite recognition?} \item[\textit{\textbf{RQ3:}}] \textit{How accurately can HeaderGen classify code cells using callsites?} \item[\textit{\textbf{RQ4:}}] \textit{How does HeaderGen compare to other tools?} \end{itemize} We first describe the benchmarks we developed for evaluating HeaderGen\xspace, and then examine the research questions. \subsection{Benchmarks} We evaluate HeaderGen\xspace by building two benchmarks: (1) a micro-benchmark containing 121 notebook\xspace{}s, and (2) a real-world benchmark containing 15 notebook\xspace{}s from Kaggle.
\begin{table*}[] \renewcommand{\arraystretch}{1.2} \centering \caption{Evaluation of HeaderGen\xspace on our real-world benchmark for callsite recognition and header annotation} \label{table:realworldbench_results} \begin{tabular}{cllrrlrrlrr} \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \multirow{2}{*}{\textbf{Competition}} & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Name}}} & & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Votes}}} & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Views}}} & & \multicolumn{2}{c}{\textbf{Callsite Recognition}} & & \multicolumn{2}{c}{\textbf{Header Annotation}} \\ \cline{7-8} \cline{10-11} & \multicolumn{1}{c}{} & & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & & \multicolumn{1}{c}{\textbf{Precision}} & \multicolumn{1}{c}{\textbf{Recall}} & & \multicolumn{1}{c}{\textbf{Precision}} & \multicolumn{1}{c}{\textbf{Recall}} \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Titanic - \\ Machine Learning \\ from Disaster\end{tabular}} & bulentsiyah/keras-deep-learning-to-solve-titanic & & 65 & 1,926 & & 100 & 90 & & 71.4 & 100 \\ & hongdnghuy/relu-sigmoid & & 13 & 693 & & 100 & 100 & & 80 & 100 \\ & vaidicjain/titanic-easy-deeplearning-acc-78 & & 9 & 449 & & 95.8 & 95.8 & & 100 & 87.5 \\ & tanvikurade/complete-analysis-of-titanic & & 18 & 277 & & 100 & 100 & & 72.7 & 98 \\ & alexanderbader/mytitanic & & 10 & 97 & & 94.7 & 93.5 & & 83.3 & 90.9 \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \multirow{5}{*}{Predict Future Sales} & econdata/predicting-future-sales-with-lstm & & 7 & 2,935 & & 88.4 & 100 & & 100 & 100 \\ & lhavanya/predict-future-sales & & 3 & 457 & & 94.3 & 91.7 & & 85 & 100 \\ & elvinagammed/stacked-lstm-top-5-4-mae & & 9 & 419 & & 100 & 100 & & 91.3 & 100 \\ & ashishkapasiya/prediction-future-sales-with-keras & & 3 & 494 & & 90.9 & 97.2 & & 80.9 & 100 \\ & the0electronic0guy/keras-begineer-friendly & & 12 & 264 & & 100 & 100 & & 82.2 & 98.7 \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Santander Customer \\ Transaction Prediction\end{tabular}} & higepon/starter-keras-simple-nn-kfold-cv & & 20 & 4,145 & & 100 & 87.5 & & 61.1 & 100 \\ & vishesh17/keras-nn-with-scaling-and-regularization & & 32 & 3,052 & & 100 & 100 & & 85.7 & 94.7 \\ & christofhenkel/nn-with-magic-augmentation & & 19 & 1,408 & & 94.2 & 94.3 & & 100 & 100 \\ & naivelamb/multibranch-nn-baseline-magic & & 10 & 569 & & 96.6 & 94.6 & & 64.5 & 87 \\ & miklgr500/nn-embedding & & 10 & 502 & & 91.2 & 93.9 & & 74.2 & 95.8 \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \multicolumn{2}{l}{\multirow{2}{*}{}} & & \textbf{240} & \textbf{17,687} & & \textbf{96.4} & \textbf{95.9} & & \textbf{82.2} & \textbf{96.8} \\ \cline{4-5} \cline{7-8} \cline{10-11} \multicolumn{2}{l}{} & & \multicolumn{2}{c}{\textbf{Total}} & & \multicolumn{2}{c}{\textbf{Average}} & & \multicolumn{2}{c}{\textbf{Average}} \end{tabular} \vspace{-.25cm} \end{table*} \textbf{Jupyter Notebook Micro-benchmark.} We evaluate HeaderGen\xspace by adopting the benchmark created by Salis et al.~\cite{salisPyCGPracticalCall2021a} as part of PyCG. PyCG's benchmark does not have specific challenges targeting flow-sensitive analysis, and the benchmark contains ground truth only for flow-insensitive call-graphs. Yet, to evaluate HeaderGen\xspace's analysis, flow-sensitive callsite information is required, i.e., information about function calls associated with line numbers.
To address this, we first converted Python\xspace{} scripts from PyCG's benchmark into notebook\xspace{}s, and then created ground truth by manually mapping callsites to line numbers. Furthermore, we created eight new test cases that pose specific challenges to flow-sensitivity. \textbf{Real-world Benchmark.} To assess HeaderGen\xspace in real-world scenarios, we tested for precision and recall on 15 notebook\xspace{}s from Kaggle, a community where data-science practitioners come together to create and share ML based solutions written in notebook\xspace{}s. The platform hosts open competitions where data scientists around the world compete against each other to build the best solution. Kaggle encourages beginners to learn from experts in the field by making their submissions public. However, these notebooks often lack documentation. We found that 99 of the top 500 notebooks submitted to the most popular competition on Kaggle contained no markdown cell. Therefore, we base our real-world benchmark on such undocumented notebooks, which are nonetheless still being viewed. We selected notebooks from the three most popular competitions on Kaggle, based on the number of submissions, to encourage variation in the benchmark: (1) Titanic - Machine Learning from Disaster, (2) Predict Future Sales, and (3) Santander Customer Transaction Prediction. We downloaded the top 30 notebooks according to votes for each competition with the search term ``Keras'', since Keras \cite{KerasPythonDeep} is a popular ML library among novices. We used the Kaggle API to search and download notebooks. All 30 notebooks from each competition were further filtered to target those without any markdown cells. Finally, we selected the top five most viewed notebooks from each competition. The selected notebooks in our benchmark are listed in Table~\ref{table:realworldbench_results}. These notebooks have a median of 20 code cells, compared to the 13 cells found in real-world notebooks as reported by Pimentel et al. \cite{pimentelLargeScaleStudyQuality2019}. Note that these undocumented notebooks still have 240 upvotes and 17,687 views as of October 2022. The ground truth was then created manually by inspecting code cells in each notebook and listing the fully qualified names of all function calls. Notebooks were executed cell-by-cell and dynamically analyzed using Python\xspace{}'s reflection module \texttt{inspect} to gather the fully qualified names. Multiple iterations were carried out to avoid errors in the ground truth. \section{RQ1: Comprehension and Navigation Study} \label{sec:userstudy} The goal of HeaderGen\xspace{} is to increase comprehension and navigation in undocumented notebook\xspace{}s. We therefore conducted a user-study to quantitatively measure the improvements of HeaderGen\xspace over undocumented notebook\xspace{}s. \subsection{Study Design} The study is aimed at recreating the exploration of notebooks that data scientists routinely do. The study is designed as a within-subject study where the participants were given two notebooks from our real-world benchmark and asked to complete five comprehension tasks on each notebook, one after the other. To minimize learning effects, we chose a Latin-square design: participants were divided into two groups. While participants in group-1 were given the undocumented notebook first, followed by the HeaderGen\xspace{} annotated version, participants in group-2 saw the annotated notebook first.
Each study was conducted in a one-on-one online session lasting about one hour using a video-conferencing tool. First, an overview of the study protocol was presented to the participant, including a walk-through of HeaderGen\xspace{}. Next, participants were provided access to the remote Jupyter instance along with a questionnaire containing step-wise instructions on how to proceed. Before proceeding to the study, participants were instructed to examine an example notebook annotated with HeaderGen\xspace{} in order to become comfortable with its features. The entire session was recorded with the consent of the participant for further analysis. Upon completion of the comprehension tasks, participants were asked to fill in a Likert-scale questionnaire to capture their perception of the improvements provided by HeaderGen\xspace{}. Finally, participants were asked if they had any general comments about the tool. \textbf{Comprehension Tasks.} We created a set of tasks to simulate typical questions that arise when a data scientist is exploring an unseen notebook. The tasks were finalized after discussions with a data-science expert. For each task, participants were expected to select the right answers from all the choices given to them. Overall, six comprehension tasks were created, as listed in Table~\ref{table:comprehensiontasks}. For each notebook given to the participant, five tasks from the table were assigned to them based on their relevance to the notebook. \begin{table}[t] \renewcommand{\arraystretch}{1.2} \centering \caption{Comprehension tasks} \label{table:comprehensiontasks} \begin{tabular}{ll} \hline \rowcolor[HTML]{FFFFFF} \textbf{Id} & \textbf{Question} \\ \hline Q1 & What are the deep learning layers used in the model? \\ \hline Q2 & What are the different data cleaning \& data preparation operations? \\ \hline Q3 & \begin{tabular}[c]{@{}l@{}}Which of the following cells are used for model building \\ and model training?\end{tabular} \\ \hline Q4 & Select ML and visualization libraries that are used in the notebook \\ \hline Q5 & What are the different visualizations used in the notebook? \\ \hline Q6 & How is the dataset split into test and train subsets? \\ \hline \end{tabular} \end{table} \textbf{Likert-scale Questionnaire.} Following the completion of the session, participants were asked to rate their level of agreement with statements about the usefulness of HeaderGen\xspace{}. The level of agreement was based on a 5-point Likert scale, where ``1'' is \textit{Strongly disagree} and ``5'' is \textit{Strongly agree}. The statements given to the participants are listed in Table \ref{table:likertstatements}. \begin{table}[t] \renewcommand{\arraystretch}{1.2} \centering \caption{Statements about perception of usefulness} \label{table:likertstatements} \begin{tabular}{ll} \hline \textbf{Id} & \textbf{Statement} \\ \hline S1 & \begin{tabular}[c]{@{}l@{}}The classification of cells according to ML phases and headers\\ helped me navigate the undocumented notebook.\end{tabular} \\ \hline S2 & \begin{tabular}[c]{@{}l@{}}The generated list of functions used in the notebook\\ helped me understand the notebook better.\end{tabular} \\ \hline S3 & \begin{tabular}[c]{@{}l@{}}The header annotations added to the notebook is rather\\ hindering the understanding of the notebook.\end{tabular} \\ \hline S4 & I would install HeaderGen\xspace if it is made available as a plugin. \\ \hline \end{tabular} \end{table} \subsection{Participants} The study comprised eight participants.
Three of them were master's students from the computer science department, three were full-time employees working in the data-science domain, and two were computer science researchers. Students were recruited by contacting the group leaders in the data-science research department. Professional employees were contacted using LinkedIn \cite{LinkedIn} based on their job titles. The researchers were contacted based on their publications in common research topics. Due to privacy concerns, personal information about the participants is omitted. Participation was voluntary and did not involve monetary incentives. \subsection{Metrics} \begin{description} \item[(1) Time:] Time taken to complete all five tasks per notebook. \item[(2) Accuracy:] Inspired by a similar comprehension study by Adeli et al. \cite{adeliSupportingCodeComprehension2020a}, accuracy is measured using the F1-score, which takes into account both precision and recall. \item[(3) Navigability:] The perceived navigability based on responses to Likert-scale questions. \item[(4) Usefulness:] The perceived usefulness based on responses to Likert-scale questions. \end{description} \subsection{Results} The study resulted in 80 ($8 \times 5 \times 2$) measurements for accuracy, from eight participants performing five tasks on two treatments (undocumented and annotated), and 16 ($8 \times 2$) measurements for time, from the two treatments. We compare accuracy and time measurements between treatments using the non-parametric two-sided Wilcoxon Signed Rank (WSR) test, as the measurements between treatments are paired and the sample size is small. In addition, all measurements are analyzed based on descriptive statistics. Figure \ref{fig:combined_graph} shows the box plots of accuracy scores, time measurements, and perception ratings. \textbf{Time.} Both the mean and median values of the time taken for the annotated treatment (\textit{mean=336.6s, median=328.5s}) are lower than for the undocumented variant (\textit{mean=486.4s, median=464.5s}). Moreover, the WSR test on the time measurements showed statistical significance (\textit{p-value=0.025, statistic=34.0}). The large difference in completion time for the undocumented variant is associated with the back-and-forth navigation in the notebook while trying to find relevant areas. This shows that participants took significantly more time to complete comprehension tasks when given an undocumented notebook. \textbf{Accuracy.} The mean accuracy of all comprehension tasks was greater for the annotated treatment, except for task T6, where it was equal. The variance of accuracy across the tasks was three times higher for the undocumented treatment, showing that annotated notebooks are more likely to yield consistently good accuracy. However, the median is greater for the annotated treatment only in T4 and T5. In addition, the WSR test showed that the difference in accuracy scores between the two treatments is not statistically significant (\textit{p-value=0.106, statistic=55.0}). Nonetheless, note that the study was not time-boxed. Participants thus took significantly longer to solve the tasks correctly for undocumented notebooks. \textbf{Navigability and Usefulness.} The ratings for the statements in Table \ref{table:likertstatements} showed that the participants found HeaderGen\xspace considerably helpful in completing the tasks. None of the participants disagreed with statements S1, S2 and S4, and none of them agreed with statement S3. All participants showed interest in actually installing the tool when it is published.
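The treatment comparisons reported above (completion time and accuracy) rely on the paired, two-sided Wilcoxon signed-rank test, available in SciPy; a minimal sketch is shown below. The numbers are placeholders for illustration only and are not the study's measurements.
\begin{lstlisting}[language=python]
from scipy.stats import wilcoxon

# Placeholder data (illustrative only): per-participant completion times
# in seconds under the two treatments, paired by participant.
time_undocumented = [512, 431, 498, 540, 455, 470, 502, 483]
time_annotated = [340, 310, 365, 402, 298, 330, 350, 298]

statistic, p_value = wilcoxon(time_undocumented, time_annotated,
                              alternative="two-sided")
print(statistic, p_value)
\end{lstlisting}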
\textbf{Qualitative Results.} Participants noted that HeaderGen\xspace would be especially useful when dealing with very large undocumented notebooks, as it provides a ``map'' of the notebook. Participants also found the function documentation to be useful, given that the libraries are continuously evolving and that they often come across methods they have not seen before. Furthermore, minor recommendations to improve the taxonomy categories were noted and added to the final version. Recommendations to change the layout of the plugin were also noted and will be considered in future versions. \textbf{Threats to Validity.} The study we conducted is subject to some common limitations of user studies. Due to the small number of participants, it may not be representative of a larger population. However, participants were selected from all fields: students, professionals, and academics, to obtain input from different perspectives. Furthermore, since the study follows a within-subject design, the order of tasks and treatments can have an effect on the outcome. Therefore, to limit the learning effect, we used a Latin-square design to randomize the order of treatments, tasks, and multiple choices. However, using notebooks that only use the Keras API might have had a learning effect as the study progressed. Although the participants were experienced in working with the default notebook\xspace environment, HeaderGen\xspace adds additional interfaces that might seem confusing at first. As a result, some participants did not make full use of HeaderGen\xspace's capabilities. \begin{figure*}[] \centerline{\includegraphics[width=.9\linewidth]{images/combined_graph.png}} \caption{\textbf{Left:} Box plots of accuracy for participant responses grouped by treatment. \textbf{Center:} Box plots of time measurements for the two treatments. \textbf{Right:} Box plots of responses to Likert questions about perception.} \label{fig:combined_graph} \end{figure*} \section{RQ2: Accuracy of Callsite Recognition} \textbf{Micro-benchmark Results.} We evaluate HeaderGen\xspace{} for complete and sound recognition of callsites. The analysis is complete when there are no false positives, and sound when there are no false negatives. In total, the analysis is sound in 113 of the 121 test cases, and complete in 113 of the 121 test cases. The lack of soundness in eight of the 121 test cases is due to the missing implementation for analyzing challenging Python\xspace{} features such as decorators. On the other hand, of the eight test cases that are incomplete, only three are due to the missing implementation of challenging features. The remaining five test cases are not complete because our analysis is context-insensitive. As a result, it over-approximates the solution in certain scenarios. Note that we do not perform a direct comparison of HeaderGen\xspace with PyCG because the micro-benchmark does not pose specific challenges to flow-sensitivity, except for the new \textit{flow\_sensitive} category with eight test cases that we added. When compared to PyCG for this category, as expected, PyCG is incomplete for all eight test cases. Furthermore, note that this micro-benchmark contains no challenges associated with handling external library function calls. \textbf{Real-world Benchmark Results.} Table \ref{table:realworldbench_results} lists the precision and recall values of HeaderGen\xspace{} for real-world notebooks. HeaderGen\xspace{} achieves an average of 96.4\% precision and 95.9\% recall.
Note that in four instances, the analysis achieves 100\% precision and recall. The precision loss is due to our type-stub database's over-approximation of return-types. For instance, a call \texttt{x.isnull()} can be either \texttt{Series.isnull} or \texttt{DataFrame.isnull}, depending on whether \texttt{x} is a \textit{Series} or \textit{Dataframe}, which is determined by the underlying structure of the data. However, this is not straightforward to infer and needs advanced data-flow analysis. Where recall is lost, it is because our analysis lacks support for some complex Python\xspace{} features. \section{RQ3: Accuracy of Generated Headers} HeaderGen\xspace uses identified function calls in code cells to automatically add relevant headers based on the taxonomy of ML operations. We evaluated the headers generated by HeaderGen\xspace{} for precision and recall against manually annotated headers. Again, we use our real-world benchmark as a basis. The 15 notebooks from the benchmark were divided and assigned to four data scientists working in industry for manual annotation of each code cell. Notebooks were distributed such that each notebook was seen by at least two reviewers. Based on the taxonomy of ML operations, each annotator inspected and classified each code cell into relevant categories. The inter-rater reliability score, as measured by Cohen's kappa coefficient \cite{cohenCoefficientAgreementNominal1960}, was improved by conducting follow-up interviews with all four reviewers. Finally, a score of 0.89 was achieved, which signals an almost perfect agreement. \textbf{Results.} The resulting precision and recall are listed on the right side of Table \ref{table:realworldbench_results}. The headers generated by HeaderGen\xspace are matched against the high-level categories of the taxonomy listed in Figure \ref{t:ml_operations}. HeaderGen\xspace{} achieves a precision of 82.2\% and a recall of 96.8\%. Precision is lost because some functions can be mapped to more than one ML operation. \section{RQ4: Comparison with Existing Tools} \label{subsec:RQ4} We compare HeaderGen\xspace in terms of callsite recognition and header annotation with \emph{PyCG}, \emph{pyright}, and \emph{Jedi} using our real-world benchmark. Since both \emph{pyright} and \emph{Jedi} are designed for type checking and auto-completion, we added helper functions to output type information and callsite information as required by HeaderGen\xspace. Furthermore, note that our type-stub database of ML libraries was provided to \emph{pyright} and \emph{Jedi} for analysis. \textbf{Results.} The precision and recall values are listed in Table~\ref{table:tools_call_sites_compare}. Since header annotation is based on identified callsites, it is evident that a higher recall in callsite recognition leads to a higher recall in header annotation. HeaderGen\xspace achieves the highest recall of $95.9\%$, which leads to a $96.8\%$ recall in header annotation of code cells. The closest competitor, \emph{pyright}, achieves $87.2\%$ recall for callsite recognition, which leads to $82.7\%$ recall for header annotation. Note that without our type-stub database, these tools would perform even worse. The loss of precision is attributed to the over-approximation of return-types in our type-stub database, as discussed earlier.
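To make the role of the stub database of Section~\ref{sec:method} concrete, a hypothetical excerpt of a hand-written Pandas stub is shown below. The listed functions are real Pandas API, but the layout and the chosen annotations are illustrative only and do not reproduce HeaderGen\xspace's shipped stubs.
\begin{lstlisting}[language=python]
# pandas/__init__.pyi (illustrative excerpt, not HeaderGen's actual stub)
from numpy import ndarray

class Series:
    def isnull(self) -> "Series": ...
    def fillna(self, value: object) -> "Series": ...

class DataFrame:
    values: ndarray                    # lets df.values.astype(...) be resolved
    def isnull(self) -> "DataFrame": ...
    def reset_index(self) -> "DataFrame": ...

def read_csv(filepath_or_buffer: str) -> DataFrame: ...
\end{lstlisting}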
\begin{table}[t] \renewcommand{\arraystretch}{1.2} \centering \caption{Comparison with existing tools on our real-world benchmark} \label{table:tools_call_sites_compare} \begin{tabular}{c|cc|cc} \hline \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Tool}}} & \multicolumn{2}{c|}{\textbf{Callsite Recognition}} & \multicolumn{2}{c}{\textbf{Header Annotation}} \\ \cline{2-5} \multicolumn{1}{c|}{} & \textbf{Precision} & \textbf{Recall} & \textbf{Precision} & \textbf{Recall} \\ \hline HeaderGen & 96.4 & 95.9 & 82.2 & 96.8 \\ Pyright & 96.7 & 87.2 & 83.8 & 82.7 \\ Jedi & 84.6 & 65.8 & 85.1 & 69.8 \\ PyCG & 41.7 & 23.3 & 84.6 & 26.2 \\ \hline \end{tabular} \end{table} \textbf{Modeling of Pandas Behavior.} Listing~\ref{lst:typingschallenges} shows simplified data manipulation methods of the Pandas library based on our real-world benchmark. Furthermore, Table \ref{table:tools_type_inference} lists the type of each variable used in Listing~\ref{lst:typingschallenges} as inferred by the tools being compared. It can be seen that both \emph{pyright} and \emph{Jedi} fail to infer the return-types of variables \texttt{x1} through \texttt{x6}, whereas HeaderGen\xspace can model these complex Pandas accesses. For instance, in line 6, the dot-notation access \texttt{df.a} is ignored by the other two tools, while HeaderGen\xspace models it as a \texttt{Series}. \begin{lstlisting}[language=python, numbers=right,label=lst:typingschallenges,caption=Common uses of Pandas DataFrame that existing tools fail to infer.,style=pstyle,float=h]
import pandas as pd

df = pd.read_csv("./input.csv")
x1 = df["a"].map(lambda x: x + 1.0)
x2 = df.iloc[[False]].reset_index().copy()
x3 = df.a.fillna(0)
x4 = df.groupby(["a"])[["b"]].agg({"b": ["min"]})
x5 = df[["b", "c"]]
x6 = df.c.values.astype(int)
\end{lstlisting} \vspace{-.5cm} \begin{table}[h] \renewcommand{\arraystretch}{1.2} \centering \caption{Comparison of type inference by existing tools for Listing \ref{lst:typingschallenges}} \label{table:tools_type_inference} \begin{tabular}{c|llll} \hline \textbf{Var} & \textbf{Actual} & \textbf{HeaderGen} & \textbf{Pyright} & \textbf{Jedi} \\ \hline df & DataFrame & DataFrame & DataFrame & DataFrame \\ \hline x1 & Series & Series & Any & Any \\ \hline x2 & DataFrame & DataFrame & Any & Any \\ \hline x3 & Series & Series & Any & Any \\ \hline x4 & DataFrame & DataFrame & Any & Any \\ \hline x5 & DataFrame & DataFrame & Any & Any \\ \hline x6 & Ndarray & Ndarray & Any & Any \\ \hline \end{tabular} \end{table} \input{related_work.tex} \section{Limitations \& Future work} \label{sec:discussion} To obtain sound function name resolution, our approach uses the reflection mechanism of the Python runtime, which somewhat reduces the coverage of APIs depending on the actual library version installed. To address this, we will explore static API mapping techniques to resolve transitive imports in Python. Second, our approach currently relies on manual classification of library function calls to ML operations. To address this, we are currently investigating natural language processing techniques to automatically classify library functions into ML operations based on function docstrings. Lastly, our analysis is limited to the scope of machine learning applications. However, the framework we designed is not tied to this scope: given domain-specific return-type stubs and a library-to-taxonomy mapping, HeaderGen\xspace can annotate notebooks from other domains as well.
Additionally, input from HeaderGen\xspace can be used to automatically restructure code cells in notebook\xspace{}s for better readability, for instance, by splitting up complex code cells that perform multiple ML operations into sequential code cells. Furthermore, the fast and precise function call analysis of HeaderGen\xspace can facilitate large-scale mining studies of Python\xspace code bases. \section{Conclusion} \label{sec:conclusion} Many notebook\xspace{}s encountered in the wild are undocumented, making program comprehension and navigation difficult. To address this, HeaderGen\xspace utilizes precise static analysis to automatically annotate notebook\xspace{}s with structural headers based on a taxonomy of machine learning operations. HeaderGen\xspace achieved high precision and recall on both our micro and real-world benchmarks. We further showed that HeaderGen\xspace can annotate headers with adequate precision and high recall when evaluated against ground truth manually curated by experts. Finally, we conducted a user-study demonstrating that data scientists find HeaderGen\xspace helpful in improving program comprehension and navigation. \bibliographystyle{IEEEtran} \section{Related Work} \label{sec:relatedwork} \textbf{Tool-support for Jupyter Notebooks.} In recent years, many publications\cite{keryStoryNotebookExploratory2018, ruleExplorationExplanationComputational2018, pimentelLargeScaleStudyQuality2019, koenzenCodeDuplicationReuse2020, wangBetterCodeBetter2020, eppersonStrategiesReuseSharing2022a, quarantaBestPractices2022} have experimentally analyzed notebooks to gather insights on coding patterns and highlight that notebook quality is poor and needs attention from the software engineering community. However, there has been little research into developing tools that address the highlighted issues. To this end, Wang et al. \cite{wangDocumentationMattersHumanCentered2022a} propose \textit{Themisto}, a tool that encourages data scientists to write documentation for code cells by first applying a deep learning based approach to automatically generate documentation in natural language and then recommending to the user whether to adopt it or use it directly. \emph{Themisto} directly uses the AST of the Python\xspace code to train its model and does not explore SA based approaches to extract contextual information from source code. We expect that analysis results from HeaderGen\xspace can aid deep learning based approaches in achieving better results. In another study, Pimentel et al. \cite{pimentelLargeScaleStudyQuality2019} studied 1.4 million notebooks for features that affect reproducibility and suggested a set of best practices. Following this, Wang et al. \cite{wangAssessingRestoringReproducibility2020} propose \textit{Osiris}, a tool-based approach to restore reproducibility in notebook\xspace{}s by using AST parsing for data-flow analysis to find dependencies of variables between code cells. Furthermore, Yang et al.~\cite{yang2022data} design a SA approach to detect data leakage in notebooks. Our work automatically annotates code cells and provides tool-support for literate programming. \textbf{Static Analysis for Python.} Although Python is one of the most popular programming languages, there is still a shortage of SA infrastructure for Python, as noted by Yang et al.'s~\cite{yangComplexPythonFeatures2022} empirical investigation of Python's features. Yang et al.
further argue that analysis for Python cannot simply adopt the algorithms developed over past decades of scientific research due to its unique language features. Dynamic features such as duck typing, which make Python stand out for fast prototyping, result in difficulties for its analysis. Call graph construction, which is a foundational technique in SA, remained an open problem until 2021, when a practical call graph generation approach named PyCG was proposed\cite{salisPyCGPracticalCall2021a}. However, this call graph generator does not consider the flow of values and has no support for Jupyter notebooks. Moreover, there is still no general-purpose SA framework for Python that can provide data-flow IRs. The closest one is the Scalpel project \cite{liScalpelPythonStatic}. Nevertheless, Scalpel does not infer return-types for external function calls and does not take notebook cells into consideration. In this work, we supplement existing SA work for real-world application by offering return-type resolution of external library APIs and flow-sensitive function callsite extraction using def-use relations.
\section{Introduction} Block copolymer (BCP) melts can self assemble into well-ordered mesophases\cite{bates_block_1999,bates_block_1990,bates_fluctuations_1994,matsen_unifying_1996}, which are repeated periodically with a domain size $H_0$, typically of the order of $1-300$ nm \cite{hashimoto_domain-boundary_1980}. This periodicity makes BCPs excellent matrices to host nanoparticles (NPs), which can be localised in specific regions of the phase-separated BCP\cite{okumura_nanohybrids_2000,kim_effect_2006}. Mixtures of BCPs and colloids have long been studied using theory\cite{pryamitsyn_strong_2006,pryamitsyn_origins_2006}, simulations \cite{huh_thermodynamic_2000,thompson_block_2002,thompson_predicting_2001} and experiments\cite{bockstaller_size-selective_2003,bockstaller_block_2005} due to the interesting behaviour resulting from the co-assembly of selective nanoparticles and phase-separated block copolymer. Nanorods (NRs) have attracted considerable attention as constituents of functional polymer nanocomposite materials\cite{hore_functional_2014}. The orientational degree of freedom of anisotropic colloids introduces new possibilities of BCP/NP co-assembly, thanks to the intrinsically ordered structures of the neat BCP (lamellar, cylindrical, etc.). For instance, gold NRs have been found to orient along the lamellar domain axis when confined in one of the symmetrical phases\cite{deshmukh_two-dimensional_2007,tang_self-assembly_2009}. Similarly, gold NRs template the direction of the cylindrical domains in an asymmetrical diblock copolymer mixture\cite{laicer_gold_2005}. Ordered arrays of aligned NRs were achieved by Thorkelsson et al.\cite{thorkelsson_direct_2012,thorkelsson_end--end_2013} in the co-assembly of BCP and anisotropic particles, where NRs were organised in an end-to-end configuration. The alignment of nanoplates in a lamellar-forming BCP has recently been studied\cite{krook_alignment_2018}. Experiments have reported the existence of an ordered phase when NRs are mixed with asymmetric diblock copolymer \cite{ploshnik_co-assembly_2010,ploshnik_hierarchical_2010} in thin films. Shenhar and Banin studied polystyrene-\textit{block}-poly(methyl methacrylate) (PS-\textit{b}-PMMA) copolymers mixed with PS-modified CdSe NRs, and found that NRs preferentially organized in a side-to-side configuration, forming long rows of particles in the PS domains, with an orientation normal to the interface between BCP domains, i.e., perpendicular to the direction of the lamellar domain. Furthermore, the number of rows and the degree of order could be related to the size of the NRs and the copolymer spacing. Theoretical and computational works have studied the self-assembly of BCP and anisotropic NPs. Dissipative Particle Dynamics (DPD) has been widely used, thanks to the ability to combine several beads into rod-like sequences. Zhang et al.\ studied the phase behavior of such systems and the orientation of nanoparticles \cite{he_effect_2009,he_phase_2009,he_mono-_2010}, and the effect of shear on the global orientation has also been quantified \cite{pan_dynamic_2011}. Osipov et al.\ \cite{osipov_spatial_2016,osipov_induced_2017,osipov_phase_2018} used strong and weak segregation theory to determine the distribution of anisotropic particles in a diblock copolymer, with a low fraction of NPs present in the system. Here, we make use of the considerably fast Cell Dynamic Simulation (CDS) method to simulate the BCP dynamics, while Brownian Dynamics describes the assembly of ellipsoidal colloids.
These simulations are compared with experiments involving CdSe NRs, in order to study the co-assembly of colloids within BCP domains. Simulations are used to gain insight into the behavior and occurrence of the orientational order of anisotropic colloids. Ellipses are used to mimic the shape of NRs. The Cell Dynamic Simulation method has been used extensively both in pure BCP systems \cite{ren_cell_2001,pinna_large_2012,pinna_diblock_2011,dessi_cell_2013} and in nanocomposite systems\cite{pinna_modeling_2011}, reproducing experiments such as the aggregation of incompatible colloids \cite{diaz_cell_2017,ploshnik_hierarchical_2013} and NP-induced phase transitions \cite{diaz_phase_2018}. Its relative computational speed makes it suitable to study properties that involve large systems over extended times, while the phenomenological approach in its model limits its validity in the microscopic realm. This hybrid method permits the exploration of the high NP filling fraction regime, in which the presence of NPs introduces considerable perturbations to the neat BCP matrix, such as morphological phase transitions. We aim to systematically study the phase behaviour of a polymer composite system made of diblock copolymer and anisotropic NPs, restricting ourselves to the case of NPs which are compatible with one of the two blocks of the copolymer. The size, shape and number of particles were explored, to address their effect on both the diblock copolymer morphology and especially the colloidal assembly. Several length scales are present in polymer nanocomposite systems\cite{langner_mesoscale_2012}, especially in the case of NRs or elliptical particles. This variety of sizes has been shown to result in interesting confinement effects, and presents a challenge for their study. Simulations are compared with experimental results, first to assess their validity and then to explore several parameters and configurations which are experimentally more challenging. \section{Model} The evolution of the BCP/colloids system is determined by the excess free energy, which can be separated as \begin{equation} \mathcal{F} _{tot} = \mathcal{F} _{pol}+ \mathcal{F} _{cc} + \mathcal{F} _{cpl} \end{equation} with $ \mathcal{F} _{pol}$ being the free energy functional of the BCP melt, $ \mathcal{F} _{cc}$ the colloid-colloid interaction, and the last contribution being the coupling term between the BCP melt and the colloids. The diblock copolymer is characterized by the order parameter $\psi ( \mathbf{r} ,t )$, which represents the difference in the local volume fractions of the A and B monomers, \begin{equation} \psi ( \mathbf{r} ,t )= \phi_A ( \mathbf{r} ,t) - \phi_B ( \mathbf{r} ,t) +(1-2f_0) \end{equation} with respect to the relative volume fraction of A monomers in the diblock, $f_0= N_A/ (N_A +N_B)$.
The order parameter must follow the continuity equation in order to satisfy the mass conservation of the polymer: \begin{equation} \frac{\partial\psi ( \mathbf{r} , t )}{\partial t}= -\nabla\cdot \mathbf{j} ( \mathbf{r} ,t ) \end{equation} If the polymer relaxes diffusively towards equilibrium, the order parameter flux can be expressed in the form \begin{equation} \mathbf{j } ( \mathbf{r} ,t )= -M \ \nabla \mu ( \mathbf{r} , t ) \end{equation} as a linear function of the order parameter chemical potential \begin{equation} \mu ( \mathbf{r} , t )= \frac{\delta \mathcal{F} _{tot} [ \psi] }{ \delta \psi} \end{equation} Introducing these equations into the continuity equation and taking into account the thermal fluctuations, we obtain the Cahn-Hilliard-Cook (CHC) equation \begin{equation} \frac{\partial\psi ( \mathbf{r} , t )}{\partial t}= M\ \nabla^2 \left[ \frac{\delta \mathcal{F} _{tot} [ \psi] }{ \delta \psi}\right] + \xi ( \mathbf{r} , t) \label{eq:ellipse.cahn} \end{equation} where $M$ is a phenomenological mobility constant and $\xi$ is a white Gaussian random noise which satisfies the fluctuation-dissipation theorem\cite{ball_spinodal_1990}. The copolymer free energy is a functional of the local order parameter which can be expressed in terms of the thermal energy $k_B T$ as \begin{equation} \mathcal{F} _{pol} [ \psi ( \mathbf{r} ) ]= \int d \mathbf{r} \left[ H(\psi) +\frac{1}{2} D | \nabla\psi |^2 \right] + \\ \frac{1}{2} B \int d \mathbf{r} \int d \mathbf{r} ' \ G( \mathbf{r} - \mathbf{r} ' )\psi( \mathbf{r} )\psi( \mathbf{r} ') \end{equation} where the first and second terms are the short and the long-range interaction terms, respectively; the coefficient $D$ is a positive constant that accounts for the cost of local polymer concentration inhomogeneities; the Green function $G( \mathbf{r} - \mathbf{r} ' )$ for the Laplace equation satisfies $\nabla^2 G( \mathbf{r} - \mathbf{r} ') = -\delta ( \mathbf{r} - \mathbf{r} ')$; and $B$ is a parameter that introduces a chain-length dependence to the free energy\cite{hamley_cell_2000}. The lamellar periodicity is $H_0 \propto 1/\sqrt{B}$. The local free energy is\cite{hamley_cell_2000,ren_cell_2001} \begin{equation} H(\psi ) = \frac{1}{2}\left[ -\tau_0+ A(1-2f_0)^2 \right] \psi ^2 \\ +\frac{1}{3} v (1-2f_0)\psi^3 +\frac{1}{4}u \psi^4 \label{eq:Hpsi} \end{equation} where $\tau_0,A,v,u $ are phenomenological parameters\cite{ren_cell_2001} which can be related to the block-copolymer molecular specificity. Previous works\cite{pinna_modeling_2011,ren_cell_2001,ohta_equilibrium_1986} describe the connection of these effective parameters to the BCP molecular composition. The parameters $\tau ' = -\tau_0+A(1-2f_0)^2$, $D$ and $B$ can be expressed\cite{ohta_equilibrium_1986} in terms of the degree of polymerization $N$, the segment length $b$ and the Flory-Huggins parameter $\chi$ (inversely proportional to temperature). Subsequently, we will consider $u$ and $v$ constants\cite{leibler_theory_1980}, which define all the parameters identifying the BCP local free energy $H(\psi)$. As previously shown \cite{sevink_selective_2011,pinna_mechanisms_2009}, CDS can be used along with more detailed approaches like dynamic self-consistent field theory (DSCFT), using CDS as a precursor in exploring parameter space due to the computationally inexpensive nature of CDS.
We can express the time evolution of $\psi$, Equation \ref{eq:ellipse.cahn}, using CDS as \begin{equation} \psi ( \mathbf{r} _i , t+1 )= \psi ( \mathbf{r} _i,t )- \delta t \left[ \langle \langle \Gamma ( \mathbf{r} _i, t) \rangle\rangle - \Gamma ( \mathbf{r} _i, t ) + B \left[ 1- P ( \mathbf{r} _i, t) \psi ( \mathbf{r} _i,t )\right] -\eta \xi ( \mathbf{r} _i, t) \right] \label{eq:ellipse.time_evol} \end{equation} with $ \mathbf{r} _i$ being the position of the node $i$ at a time $t\delta t$, and the isotropic discrete Laplacian for a quantity $X$ given by \cite{oono_study_1988} $\frac{1}{\delta x^2} [ \langle\langle X \rangle\rangle -X ] $. Specifically, we will use \begin{equation} \langle \langle \psi \rangle \rangle = \frac{1}{6} \sum_{NN} \psi +\frac{1}{12} \sum _{NNN} \psi \end{equation} with NN and NNN meaning nearest neighbours and next-nearest neighbours, respectively, for the two-dimensional case. The lattice spacing is $\delta x $. In Equation \ref{eq:ellipse.time_evol} we have introduced the auxiliary function \begin{equation} \Gamma ( \mathbf{r} , t ) = g( \psi ( \mathbf{r} , t) )- \psi ( \mathbf{r} , t) D \left[ \langle\langle \psi ( \mathbf{r} , t) \rangle \rangle -\psi ( \mathbf{r} , t) \right] \end{equation} and the map function \cite{bahiana_cell_1990,ren_cell_2001} \begin{equation} g (\psi)= -\tau ' \psi -v (1-2f_0)\psi^2 -u \psi^3 \end{equation} \subsection{Polymer/colloid interaction } Contrary to the polymeric matrix, which is described by a continuous field, the suspension of $N_p$ nanoparticles is described by the individual degrees of freedom of each colloidal NP: its center of mass $ \mathbf{R} _i$ and orientation $\phi_i$. The interaction between the polymer and colloids is introduced through a contribution to the free energy $ \mathcal{F} _{cpl}$, which must take into account the fact that colloids may have a preference for the A-block of the A-\textit{b}-B BCP. The simplest free energy that satisfies that is \begin{equation} \mathcal{F} _{cpl} = \sum _{i=1}^{N_p} \sigma \int d \mathbf{r} \ \psi_{c } ( \mathbf{r} , \mathbf{R} _i,\phi_i) \left[ \psi ( \mathbf{r} )-\psi_0 \right]^2 \label{eq:ellipse.coupling} \end{equation} where $\sigma$ defines the strength of the interaction between polymer and colloids, and $\psi_0$ describes the affinity of NPs with the BCP. In previous works\cite{pinna_modeling_2011}, the size, shape and core/shell properties of the NP are described through the tagged function\cite{tanaka_simulation_2000} $\psi_c( \mathbf{r} )$. In order to account for non-spherical colloids, we generalise the spherical shape into a non-rotated ellipse placed at $ \mathbf{R} _i=(0,0)$ as \begin{equation} \psi_{c}(x,y)=\exp\left[ 1-\frac{1}{1- \left(\frac{x}{a}\right)^2-\left(\frac{y}{b}\right)^2} \right] \label{eq:ellipse.psic} \end{equation} which can be trivially extended for an arbitrary rotation $\phi_i$. The particle shape and size are characterised by a major semiaxis $a$ and a hard-core major semiaxis $a_0=a/\sqrt{1+1/\ln 2}$; the same relationship holds for the minor semiaxis $b$. The ratio $e=b/a$ accounts for the anisotropy of the ellipse. The tagged function is $\psi_c( \mathbf{r} ) = 0$ outside of the ellipses, that is, for $(x/a)^2+(y/b)^2>1$.
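For concreteness, the two numerical ingredients introduced in this section, the isotropic discrete Laplacian entering Eq.~(\ref{eq:ellipse.time_evol}) and the tagged function of Eq.~(\ref{eq:ellipse.psic}), can be written in a few lines of numpy. The sketch below is purely illustrative and is not the production code used for the simulations.
\begin{lstlisting}[language=python]
import numpy as np

def cds_laplacian(psi, dx=0.5):
    """(1/dx^2) * (<<psi>> - psi), with <<psi>> = 1/6 sum_NN + 1/12 sum_NNN,
    on a periodic two-dimensional grid."""
    nn = (np.roll(psi, 1, axis=0) + np.roll(psi, -1, axis=0) +
          np.roll(psi, 1, axis=1) + np.roll(psi, -1, axis=1))
    nnn = (np.roll(np.roll(psi, 1, axis=0), 1, axis=1) +
           np.roll(np.roll(psi, 1, axis=0), -1, axis=1) +
           np.roll(np.roll(psi, -1, axis=0), 1, axis=1) +
           np.roll(np.roll(psi, -1, axis=0), -1, axis=1))
    return (nn / 6.0 + nnn / 12.0 - psi) / dx**2

def tagged_function(X, Y, Rx, Ry, phi, a, b):
    """psi_c of Eq. (eq:ellipse.psic) for an ellipse centred at (Rx, Ry)
    with orientation phi and semiaxes a, b, evaluated on meshgrid arrays
    X, Y; psi_c is zero outside the ellipse."""
    xr = (X - Rx) * np.cos(phi) + (Y - Ry) * np.sin(phi)
    yr = -(X - Rx) * np.sin(phi) + (Y - Ry) * np.cos(phi)
    s = (xr / a) ** 2 + (yr / b) ** 2
    psi_c = np.zeros_like(s, dtype=float)
    inside = s < 1.0
    psi_c[inside] = np.exp(1.0 - 1.0 / (1.0 - s[inside]))
    return psi_c
\end{lstlisting}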
The GB potential has been widely used to describe liquid crystals\cite{de_miguel_liquid_1991,berardi_monte_1993}. The interparticle potential can be written as \begin{equation} V(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2, \mathbf{r} )= \epsilon(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2,\hat{\textbf{r}}) \left[ \left( \frac{1}{r-\sigma(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2, \hat{\textbf{r}} )} \right)^{12} - \left( \frac{1}{r-\sigma(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2, \hat{\textbf{r}} )} \right)^{6} \right] \end{equation} which is a modified Lennard-Jones interaction with anisotropic length and energy scales, $\sigma(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2,\hat{\textbf{r}})$ and $\epsilon(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2,\hat{\textbf{r}})$, respectively. The centre-to-centre distance is $r$ while $\hat{\textbf{u}}_i$ stands for the orientation of the major axis of particle $i$. This potential provides a length scale that describes the anisotropy of the ellipsoid \begin{equation} \sigma(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2,\hat{\textbf{r}})= 2b \left\lbrace 1- \frac{1}{2}\chi \left[ \frac{( \hat{\textbf{r}} \cdot \hat{\textbf{u}}_1+ \hat{\textbf{r}} \cdot\hat{\textbf{u}}_2)^2}{1+\chi(\hat{\textbf{u}}_1\cdot\hat{\textbf{u}}_2)} + \frac{( \hat{\textbf{r}} \cdot \hat{\textbf{u}}_1- \hat{\textbf{r}} \cdot\hat{\textbf{u}}_2)^2}{1-\chi(\hat{\textbf{u}}_1\cdot\hat{\textbf{u}}_2)} \right] \right\rbrace^{-1/2} \end{equation} and takes a value $2b$ at the side-to-side configuration. The energetic anisotropy is described with two parameters: $U_0$ describes the strength of the interaction while $\epsilon_r=\frac{\epsilon_e}{\epsilon_s}$ quantifies the anisotropy of the potential wells. The depth of the well is given by \begin{equation} \epsilon(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2,\hat{\textbf{r}})= \epsilon(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2) \epsilon'^2(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2,\hat{\textbf{r}}) \end{equation} with \begin{equation} \epsilon(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2)= U_0 \left[ 1-\chi^2(\hat{\textbf{u}}_1\cdot \hat{\textbf{u}}_2)^2 \right]^{-1/2} \end{equation} and \begin{equation} \epsilon'(\hat{\textbf{u}}_1,\hat{\textbf{u}}_2,\hat{\textbf{r}})= 1-\frac{1}{2}\chi' \left[ \frac{( \hat{\textbf{r}} \cdot \hat{\textbf{u}}_1+ \hat{\textbf{r}} \cdot\hat{\textbf{u}}_2)^2}{1+\chi'(\hat{\textbf{u}}_1\cdot\hat{\textbf{u}}_2)} + \frac{( \hat{\textbf{r}} \cdot \hat{\textbf{u}}_1- \hat{\textbf{r}} \cdot\hat{\textbf{u}}_2)^2}{1-\chi'(\hat{\textbf{u}}_1\cdot\hat{\textbf{u}}_2)} \right] \end{equation} where two anisotropy parameters, for length and energy respectively, are introduced, \begin{equation} \chi=\frac{a^2-b^2}{a^2+b^2}; \ \chi'=\frac{\epsilon_s^{1/2}-\epsilon_e^{1/2}}{\epsilon_s^{1/2}+\epsilon_e^{1/2}} \end{equation} \subsection{Colloid Dynamics: Brownian Dynamics } Since the NPs are anisotropic, the equation of motion does not involve only scalar friction constants but a diffusion tensor, $ \mathcal{D} $. 
In general\cite{han_brownian_2006}, \begin{equation} \frac{d \mathbf{r} }{dt }= \mathcal{D} _t \cdot \mathbf{f} \end{equation} while the particle's orientational degree of freedom relates to the random ($M_r$) and exerted torques as \begin{equation} \frac{\partial \phi_i}{\partial t }= \left( M_i+M_r \right)/\gamma_\phi; \ \ M_i= -\frac{\partial \mathcal{F} }{\partial \phi_i} \end{equation} with \cite{hagen_brownian_2011} \begin{equation} \mathcal{D} _t= \bar{ \mathcal{D} } \mathcal{I}+\frac{1}{2} \Delta \mathcal{D} \begin{pmatrix} \cos 2\phi & \sin 2\phi \\ \sin 2\phi & -\cos 2\phi \end{pmatrix} \end{equation} where $\bar{ \mathcal{D} }=\frac{1}{2} (D_a+D_b)$ and $\Delta \mathcal{D} = D_a-D_b$, $D_a$ and $D_b$ being the diffusion constants along each axis. The values of $D_a$, $D_b$ and $\gamma_{\phi}$ are derived from the expressions obtained by Perrin\cite{zheng_self-diffusion_2010,happel_low_1983,perrin_mouvement_1934}. \subsection{Order parameter} To describe the orientation of the anisotropic NPs, an order parameter can be used, which has been extensively employed in nematic liquid crystal systems, \begin{equation} S=\langle 2(\hat{\textbf{u}}\cdot \mathbf{P})^2 -1 \rangle \label{eq:ellipse.S} \end{equation} which is an average over all particles involving the scalar product of the orientation unit vector $\hat{\textbf{u}}$ and a local unit vector $\mathbf{P}$ that is related to the gradient of the polymer order parameter $\psi( \mathbf{r} ,t)$. This unit vector $\mathbf{P}$ is normal to the interface between copolymer domains. \section{Results and discussion} As a first approach, we study the condition for the appearance of an ordered phase in a BCP with different compositions $f_0$, which gives rise to a variety of BCP morphologies. After that, the role of the NR length will be described, in relation to the BCP periodicity. Finally, the role of the NP-NP interaction is assessed, taking into account several initial conditions. We introduce dimensionless parameters by rescaling $D\to D/\delta x ^2$ and $B\to B \delta x ^2$. Lengths are expressed in terms of grid points. The standard CDS values\cite{ren_cell_2001,pinna_large_2012,pinna_modeling_2011} $ \tau_0=0.35, u=0.5,v=1.5,A=1.5,D=1.0 $ will be used, while the BCP/NP interaction is set to $\sigma=1.0$. A cell spacing $\delta x =0.5$ and time discretisation $\delta t=0.1$ are chosen. Unless otherwise specified, the NP size is set to $a_0=2$ and $e=0.3$ while the BCP periodicity is determined by the CDS parameter $B=0.002$. The NP thermal energy is set to $k_BT=0.1$. The simulation box size is $128\times 128$, except for larger systems, which are explicitly stated in the text. This work focuses on A-block compatible NPs inspired by experimental ordered hierarchical structures of NRs in BCP\cite{ploshnik_co-assembly_2010,ploshnik_hierarchical_2010}, by selecting a value of the affinity $\psi_0=-1$, which in reduced units corresponds to the equilibrium value of $\psi$. The anisotropy of experimental NRs is modelled with ellipsoidal NPs with a Gay-Berne potential. Recently, a generalised approach to NP shape has been presented to simulate superellipses immersed in BCP \cite{diaz_nonspherical_2019}, including rectangular-shaped NPs. Nonetheless, the lack of an appropriate NP-NP potential limits the realistic comparison with experiments. 
We expect that, despite the differences in shape, the inclusion of anisotropic shape and orientation-dependent NP-NP potential will be sufficient to mimic experimental results, while limiting the possibility to establish a one-to-one comparison between experiments and simulations. \subsection{Phase diagram of A-compatible ellipsoidal colloids} The phase diagram of diblock copolymer/colloids has been widely studied both for nanospheres\cite{huh_thermodynamic_2000} and anisotropic NPs\cite{tang_self-assembly_2009}. The presence of NPs which are compatible with one of the blocks increases the effective overall volume fraction of the hosting domain, which in turn results in a phase transition. As a first approach to a system of BCP and anisotropic NPs, we explore the effect that ellipse-shaped colloids have on the BCP morphology, by analysing the phase of BCP with arbitrary composition $f_0$ in the presence of a filling fraction $\phi_p$ of NPs. \begin{figure}[hbtp] \centering \includegraphics[width=0.75\linewidth]{figure1} \caption{Phase diagram of a diblock copolymer nanocomposite system characterised by a filling fraction $\phi_p$ of ellipsoidal colloids and $f_0$ volume fraction of the A blocks in the neat BCP. Squares, blue circles and red circles stand for the lamellar, cylindrical and inverted cylindrical phases, respectively. Dotted markers represent phase points in which $S>0.3$ (Eq. \ref{eq:ellipse.S}), i.e., where ellipsoids are aligned mostly normal to the interface. } \label{fig:ellipse.phase_diagram} \end{figure} In Figure \ref{fig:ellipse.phase_diagram} the filling fraction of ellipsoids is explored for different BCP compositions, $f_0$. The NP-NP interaction scale is set to $U_0/\sigma = 0.01$ so that the interparticle potential does not dominate over the BCP-NP interaction. As expected, at low filling fraction $\phi_p$, particles are simply segregated within their preferred phase (blue), which within the tested range of $f_0$ is the minority phase. In the absence of constraints by the BCP (i.e., a low local filling fraction), ellipsoids display no orientational order. Furthermore, the BCP maintains a cylindrical phase (circular domains, in two dimensions). At higher filling fractions, the ellipsoids enlarge the hosting domains to the point at which a cylinder-to-lamellae phase transition is induced. At the same time, higher filling fractions lead to a particular ordered phase in the colloids: ellipsoids prefer to orient normal to the interface and with a side-to-side interparticle configuration. This phase has been reported experimentally by Shenhar and Banin\cite{ploshnik_co-assembly_2010,ploshnik_hierarchical_2010}, where ordering was attributed to both the attractive NP-NP interaction and the minimisation of the repulsive interactions between the NRs and the B phase. The orientation of the ellipsoids relative to the local interface is tracked by using the order parameter $S$ defined in equation \ref{eq:ellipse.S}. In Figure \ref{fig:ellipse.phase_diagram} a black dot is added for phase points in which $S>0.3$, that is, where orientational order is considerably high. Furthermore, we can define an effective filling fraction on the basis of A-compatible colloids having a reduced volume $V_A=f_0 V_{total}$ of available space to occupy. 
This effective filling fraction is \cite{huh_thermodynamic_2000} \begin{equation} \phi_p^{eff}=\frac{\phi_p}{f_{eff}}=\frac{\phi_p}{\phi_p +(1-\phi_p)f_0} \label{eq:ellipse.phipeff} \end{equation} A plot of the orientational order parameter $S$ against the defined effective filling fraction of ellipsoids is shown in Figure \ref{fig:ellipse.phipeff}, where disorder ($S \sim 0$) is found for low effective filling fraction. A rapid change in $S$ occurs as a moderate effective filling fraction is reached, while at the same time the cylinder-to-lamellae transition is induced. This suggests that the orientational order strongly depends on the filling fraction of ellipsoids relative to the hosting domain, that is, ellipsoids need to be considerably constrained within their hosting domains. It is noted that inverted cylindrical phases (i.e., ellipsoids occupying the majority of the space) display slightly lower order than lamellar ones, despite being at a higher effective filling fraction. This is due to the higher local curvature of the interfaces, which is characteristic of BCP cylindrical phases. This hypothesis is corroborated by analysing the snapshots in detail in the next figures. \begin{figure}[hbtp] \centering \includegraphics[width=0.75\linewidth]{figure2} \caption{Values of the orientational order parameter $S$ for different values of the effective volume fraction of ellipsoids $\phi_p^{eff}$, as defined in Eq. \ref{eq:ellipse.phipeff}. Squares, blue circles and red circles stand for the lamellar, cylindrical and inverted cylindrical phases, respectively.} \label{fig:ellipse.phipeff} \end{figure} Figure \ref{fig:ellipse.comparison-highphip} (a) shows an instance of ordering at a moderate filling fraction $\phi_p = 0.16$ and $f_0=0.4$ in a $256 \times 256$ grid system. One can notice that the side-to-side configuration is not homogeneous along the domains. Instead, we observe coexistence of both 1 and 2 rows, along with disordered states and even parallel (along the interface) orientation. However, this behaviour appears mostly near curved interfaces, as well as near defects of the lamellar structure. These features can also be found in experiments involving CdSe NRs mixed with PS-\textit{b}-PMMA at a filling fraction $\phi = 0.26$: the high-resolution SEM image in Figure \ref{fig:ellipse.comparison-highphip} (b) displays the side-to-side configuration of NRs within the PS domains. In Fig. \ref{fig:ellipse.comparison-highphip} (b), 1 and 2 rows occur for the same NR size, and instances of disorder or parallel configuration appear, especially at the ends of domains or at intersections, that is, at defects in the lamellar structure. Details of the experimental setup and initial conditions can be found in reference \cite{ploshnik_hierarchical_2010}. \begin{figure}[hbtp] \centering \includegraphics[width=0.9\linewidth]{figure3} \caption{ Moderate filling fraction of anisotropic NPs in diblock copolymer mixture. Comparison between (a) simulations with inset and (b) SEM image showing $33$ nm-long CdSe NRs co-assembled with PS-\textit{b}-PMMA with $H_0=132$ nm periodicity (PS domain size is $L_0=75$ nm). The experimental NR diameter is $4.6$ nm with a filling fraction $0.26$. } \label{fig:ellipse.comparison-highphip} \end{figure} \subsection{Relative size of hosting domain/nanoparticle} In Figure \ref{fig:ellipse.comparison-highphip} (b) the size of the NR major axis is chosen to fit two rows into a BCP lamellar domain. Similarly, in Figure \ref{fig:ellipse.comparison-highphip} experiments and simulations show coexistence of 1 and 2 rows of anisotropic NPs. 
The role of the relative size $2a/(H_0/2)$ can be explored for a higher number of rows by simulating a larger periodicity, given by the parameter $B$ in the Ohta-Kawasaki free energy, which determines the value of the BCP periodicity $H_0$. Figure \ref{fig:ellipse.rows} shows simulations of 3 different sizes. The values are chosen to fit $3$, $4$ and $5$ rows. While the larger sizes in Figure \ref{fig:ellipse.rows} (b) and (c) show a well-ordered configuration, the global order of the $a=2$ case (a) is low. This is in accordance with experiments \cite{ploshnik_co-assembly_2010} in which smaller NRs displayed lower ordering. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure4} \caption{ Final step of a system of ellipsoids with minor semiaxis $b=0.6$ and three values of the major semiaxis $a_0=2$, $2.5$ and $3.33$ for (a), (b) and (c), respectively. The BCP constant $B=0.0002$ is used to produce a large lamellar domain. } \label{fig:ellipse.rows} \end{figure} Composto et al.\ \cite{deshmukh_two-dimensional_2007} showed that when the major dimension of NRs is larger than the lamella domain width, colloids tend to orient along the domain axis. While experiments have shown that smaller NRs orient normal to the domain direction, intermediate sizes can be explored using simulations. In Figure \ref{fig:ellipse.tilted} we explore the role of the ratio $2a/L$, with $2a$ being the effective major length of the ellipsoid and $L=H_0/2-2\xi$, which is a measure of the available horizontal spacing for NPs. We should note that inspection of the $\psi$ profile shows a clear weak segregation regime for the BCP, which makes it difficult to delimit an interface/bulk region. In any case, the curve of $S$ along with the snapshots in Figure \ref{fig:ellipse.tilted} clearly shows an $S\sim 1$ regime when the ellipsoids can easily fit into the domains and normal to the interface. As the size of the NPs is increased, a slight rotation appears, which results in a decrease in $S$. In conclusion, we observe a tilted configuration when the ratio between the major dimension of the ellipsoid and the BCP spacing is slightly larger than $1$. \begin{figure}[hbtp] \centering \includegraphics[width=0.75\linewidth]{figure5} \caption{Decrease of orientational ordering of ellipsoids when the size of the major axis $2a$ is larger than the available normal spacing $H_0/2-2\xi$, with $H_0$ the lamellar periodicity and $\xi$ the interface half-width.} \label{fig:ellipse.tilted} \end{figure} \subsection{Low volume fraction of nanoparticles} The side-to-side, perpendicular-to-domain-axis colloid configuration is shown to appear when the anisotropic NP occupies most of the hosting domain, so that the surrounding B-block boundary exerts a pressure on it. In Figure \ref{fig:ellipse.lowphip} we show that even at low filling fraction the normal configuration holds. This is due to the attractive interaction between colloids, which minimizes the free energy even at relatively low filling fractions. The simulations (a) show a resemblance to the experiments in (b), where NPs indeed form aggregates within their preferred domain. Defects (perpendicular and disordered orientation) are present both in simulations and experiments. \begin{figure}[hbtp] \centering \includegraphics[width=0.9\linewidth]{figure6} \caption{Low filling fraction of anisotropic NPs in BCP, comparing (a) simulations and (b) SEM image of $33$ nm long CdSe NRs at a $0.15$ filling fraction. The diameter of the experimental NRs is $4.6$ nm. 
Unoccupied lamellar domain which is available for NPs is shown in light gray/white in (a) and as gray areas in (b). } \label{fig:ellipse.lowphip} \end{figure} \subsection{Role of the initial condition} The hierarchical co-assembly displayed by NRs in experiments depends on the initial arrangement of particles within the BCP before annealing\cite{ploshnik_co-assembly_2010,ploshnik_hierarchical_2010}. In particular, the BCP was unable to break already-formed NP clusters. For that reason, simulations can be used to study the co-assembly starting from different initial conditions. A competition between the tendency of attractive NPs to form aggregates on the one hand, and the equilibrium periodic morphology of the BCP on the other hand needs to be studied in detail. For that reason two limiting regimes are selected: weakly and strongly interacting NPs with $U_0=0.001$ and $1.0$, respectively. \subsubsection*{Weakly interacting nanoparticles} Figure \ref{fig:ellipse.initial1} shows five instances of the evolution of a system of relatively short particles with respect to the lamellar spacing of the diblock copolymer. The initial and final states are shown, while the order parameter $S(t)$ plot over time can be found in the right-most column. In all cases a dotted line marks the horizontal line $S=0$, so that instances of ordering $S>0$ are clear. In this figure, the BCP concentration profile is initialised as a sinusoidal, therefore, it is initially ordered. In (a), the NPs are randomly oriented and placed within the white domains. The system is then evolved into a final, ordered configuration of side-to-side ellipses. The system also exhibits alternating one and two rows of ellipses. (b), (c) and (d) show three different initial conditions regarding the orientation of the ellipses at $t=0$, respectively, $\phi_i=0,\pi/2$ and $\pi/4$. Regardless of the initial condition, the final configuration is similar, meaning that this configuration is energetically favourable. The order parameter $S$ in (a), (c) and (d) reaches a final (approximately steady) state only at the very long stages of the simulation. Instead, the already-horizontal ordering of the ellipses in (b) barely changes $S$ over time. Finally, (e) shows ellipses which are initially forming clusters in their preferred BCP domains. The final BCP morphology lacks the global orientation of the previous instances, since the NPs are initially forming clusters. Nonetheless, the orientation and ordering of ellipses is equally normal to the interface, which can be checked visually and by the positive $S$ value of the orientational order parameter. \begin{figure}[hbtp] \centering \includegraphics[width=0.8\linewidth]{figure7} \caption{Initial and final snapshot of several systems with different initial conditions. The time evolution of the orientational order parameter $S(t)$ is also shown for each case. In all cases the BCP is initially set to a sinusoidal concentration profile. In (a), NPs are randomly oriented and positioned, within the white domain. In (b),(c) and (d), positions are again randomly chosen, while the orientation is $\phi_i=0,\pi/2,\pi/4$, respectively. In (e), the ellipses are placed randomly within clusters. } \label{fig:ellipse.initial1} \end{figure} Figure \ref{fig:ellipse.initial2} shows four instances of NR initial configuration in an initially disordered BCP. While the position of the particles is randomly chosen, the orientation is random for (a) and $\phi_i=0,0.35\pi$ for (b) and (c), respectively. 
The final state is shown in the central column whereas the evolution of the order parameter is displayed in the right column. In (d) the NPs are initially forming clusters without a collective orientation (disordered). In all of these cases the final state is a side-to-side configuration with ellipses oriented normal to the interface, that is, $S> 0$. Observing the evolution of $S(t)$ one can notice that the (b) and (c) cases reach the final $S$ value on a much shorter timescale than the angularly disordered cases (a) and (d). \begin{figure}[hbtp] \centering \includegraphics[width=0.8\linewidth]{figure8} \caption{Initial and final snapshot of several systems with different initial conditions. The time evolution of the orientational order parameter $S(t)$ is also shown for each case. In all cases the BCP is initially random (disordered). In (a) the orientation and position of all particles is chosen randomly. In (b) and (c), the position is chosen randomly, while the orientation with respect to the horizontal axis is $\phi_i=0,0.35\pi$, respectively. In (d) the particles are initially forming clusters. } \label{fig:ellipse.initial2} \end{figure} Figure \ref{fig:ellipse.initial3} shows an initially ordered diblock copolymer, with NPs forming clusters with a low internal order (contrary to Figure \ref{fig:ellipse.initial1} (e), in which the internal orientation was random). The time evolution suggests that the NPs are dispersed within the BCP, which is modified to accommodate the existing long-ordered sequences of ellipses. NPs form ordered arrays of side-to-side orientation within the white domains. \begin{figure}[hbtp] \centering \includegraphics[width=0.8\linewidth]{figure9} \caption{Initially ordered BCP with NP forming ordered clusters at $t=0$. Initial and final snapshots are shown in the left and center figures while the order parameter $S(t)$ is displayed in the right-most figure. } \label{fig:ellipse.initial3} \end{figure} \subsubsection*{Strongly interacting nanoparticles} In all of the above-described cases, the BCP is able to acquire a stripe-like morphology, while NPs tend to appear relatively dispersed within the A domains, regardless of the initial condition. This was valid for particles which interact weakly with each other, such as metal NPs that lack a fixed dipole moment. Nonetheless, semiconductor NRs, such as CdSe, exhibit a dipole moment which gives rise to a strong interparticle attractive interaction that leads to a strong tendency towards particle aggregation\cite{ploshnik_hierarchical_2010}. A strong interparticle potential scale parameter $U_0=1$ can be used to understand the co-assembly behavior in the strong interparticle potential limit, as shown in figures \ref{fig:ellipse.initial_u0_1},\ref{fig:ellipse.initial_u0_2} and \ref{fig:ellipse.initial_u0_3}. These are directly related to figures \ref{fig:ellipse.initial1}, \ref{fig:ellipse.initial2} and \ref{fig:ellipse.initial3} by selecting the same initial conditions. Comparing figures \ref{fig:ellipse.initial1} (weakly interacting) and \ref{fig:ellipse.initial_u0_1} (strongly interacting) we can clearly draw the conclusion that at $U_0=1$ the NPs are driving the co-assembly, with the BCP domains being formed around aggregates of colloids (except for (b)), in contrast with the weakly interacting case, in which the BCP tended to form elongated lamellar-like domains. 
Ordering in the NPs is also strongly dominated by the initial configuration in the $U_0=1$ regime, with $S$ reaching positive values only in cases (b) and (e). A comparison between the curve of $S(t)$ in Figure \ref{fig:ellipse.initial1} (c) and \ref{fig:ellipse.initial_u0_1} (c) leads to the conclusion that in the weakly interacting regime the NPs first disperse within the BCP domains, while on a slower time scale they achieve a global orientation normal to the BCP interface. This global ordering is not present at $U_0=1$, where the strong interparticle potential rapidly assembles the NPs into vertically oriented groups of particles in aggregates. Similarly, in (e) the aggregated particles at $t=0$ tend to form clusters also after the time evolution, with the BCP clearly being forced to form less elongated domains. \begin{figure}[hbtp] \centering \includegraphics[width=0.8\linewidth]{figure10} \caption{Initial and final snapshot with different initial conditions (same as Figure \ref{fig:ellipse.initial1}) in the strong interaction regime $U_0=1$. The time evolution of the orientational order parameter $S(t)$ is also shown for each case. In all cases the BCP is initially fixed to a sinusoidal concentration profile. In (a), NPs have initial random orientation and their position is random and confined to the white domains. In (b),(c) and (d), positions are again randomly chosen, while the orientation is $\phi_i=0,\pi/2,\pi/4$, respectively. In (e), the ellipses are placed randomly within small clusters.} \label{fig:ellipse.initial_u0_1} \end{figure} The initial orientation of NPs when the BCP is initially disordered also affects the co-assembly in the strongly interacting NP regime. Figure \ref{fig:ellipse.initial_u0_2}, compared to its equivalent shown in Figure \ref{fig:ellipse.initial2}, clearly displays less elongated domains, with NPs more prone to form well-ordered structures, while at the same time forming more aggregates that enhance the local size of the hosting domains. Again, this suggests that the BCP is unable to complete its assembly by disassembling the aggregates; instead, it merely forms domains around aggregates of ellipses. \begin{figure}[hbtp] \centering \includegraphics[width=0.8\linewidth]{figure11} \caption{Initial and final snapshot with different initial conditions (same as Figure \ref{fig:ellipse.initial2}) in the strong interaction regime $U_0=1$. The time evolution of the orientational order parameter $S(t)$ is also shown for each case. In all cases the BCP is initially random (disordered). In (a) the orientations and positions of all particles are chosen randomly. In (b) and (c), the position is chosen randomly, while the orientation with respect to the horizontal axis is $\phi_i=0 $ and $0.35\pi$, respectively. In (d) the particles are initially forming clusters. } \label{fig:ellipse.initial_u0_2} \end{figure} Similarly, in Figure \ref{fig:ellipse.initial_u0_3} the initial NP aggregates cannot be broken into elongated domains to the same degree as occurred in Figure \ref{fig:ellipse.initial3}. Instead, the BCP forms domains around clusters of particles. Since the initial aggregates were already made of considerably well-ordered NPs, the final $S$ value is particularly high, meaning that a high ordering is achieved. \begin{figure}[hbtp] \centering \includegraphics[width=0.8\linewidth]{figure12} \caption{ Initially ordered BCP with NP forming ordered clusters at $t=0$. 
Initial and final snapshots are shown in the left and center figures while the order parameter $S(t)$ is displayed in the right-most figure. The NP-NP interaction parameter is $U_0=1$. } \label{fig:ellipse.initial_u0_3} \end{figure} In summary, the role of the initial condition is crucial when the NPs interact strongly with each other. This strong interaction leads to the formation of aggregates in the early stages of the time evolution of the system that are not broken up by the BCP evolution (whether it is from disorder to order, or from an already phase-separated BCP). Weakly interacting NPs, on the other hand, undergo a co-assembly on a similar time scale to the BCP; therefore, the side-to-side, normal-to-interface configuration is easily obtained under any initial condition. \subsection{Role of energy parameters} Simulations can be used to gain insight into the effect that the interaction parameters have on the formation of the side-to-side configuration normal to the interface between domains. A simple energy analysis for NRs suggests that this configuration is energetically preferential both for the inter-colloidal potential and the NP-polymer coupling \cite{ploshnik_co-assembly_2010,ploshnik_hierarchical_2010}. The interparticle Gay-Berne potential described in the Model section sets the interaction between two ellipses with two energetic parameters: $U_0$ sets the scale of the interaction, while $\epsilon_r$ describes the anisotropy in the depth of the potential minima. For that reason, in Figure \ref{fig:ellipse.epsr} we explore these two parameters via the orientational order parameter $S$. It is clear that the anisotropy value $\epsilon_r$ is key to the formation of the side-to-side configuration, as we find $S\sim 0$ when the anisotropy of the potential is close to $1$. This leads to more tip-to-tip configurations, which in turn are better accommodated with the NPs oriented along the domains, as can be found in the two snapshots on the right-hand side of Figure \ref{fig:ellipse.epsr}. Contrary to that, high anisotropy leads to a well-ordered side-to-side configuration. Because the lamellar domain spacing is similar to the major size of the ellipses, the BCP accommodates only one row of ellipses. \begin{figure}[hbtp] \centering \includegraphics[width=.75\linewidth]{figure13} \caption{Orientational order parameter $S$ for different final states obtained by tuning the interparticle potential parameters $U_0 $ and $\epsilon_r$, the strength and anisotropy of the Gay-Berne potential, respectively. Three snapshots for representative parameters are shown. } \label{fig:ellipse.epsr} \end{figure} Figure \ref{fig:ellipse.epsr} shows that the energetic scale $U_0$ plays a less relevant role than the anisotropy factor $\epsilon_r$. We observe that even at low values of $U_0$ the ordering remains at $S>0.4$, which indicates that the configuration is stable even at low interparticle potential strengths. These results are in accordance with a simple energy analysis shown in the Supplementary Information, where we conclude that in order to have an energy minimum in the side-to-side configuration, $e\gg\epsilon_r$ should be satisfied. Such an energy analysis applies only in the case of a relatively high filling fraction of NPs in the system, a regime in which the particle-to-polymer coupling is considerably strong, as the contacts between the colloids and the interfaces become important. 
Figure \ref{fig:ellipse.phd-U0-phip} shows a phase diagram of the filling fraction of particles in the system $\phi_p$, and the strength of the potential $U_0$. The cylinder-forming neat BCP is chosen by setting $f_0=0.3$. The orientational order is characterised by the order parameter $S$. In the low filling fraction regime we can find ellipsoids segregated within the minority white domains without a particular orientation with respect to the BCP interface. On the other hand, at high filling fraction the nanoparticles can induce a transition into elongated BCP domains. In this regime, a lower value of $U_0$ leads to a higher ordering. Contrary to that, a large interaction strength leads to the interparticle potential driving the ordering behaviour of the system. In this regime the NPs are less prone to minimise the contacts with the black domains, and minimisation of the interparticle potential is dominant enough. This result agrees with the conclusion drawn from the comparison between figures \ref{fig:ellipse.initial1}-\ref{fig:ellipse.initial3} and figures \ref{fig:ellipse.initial_u0_1}-\ref{fig:ellipse.initial_u0_3} which showed that a lower interaction strength led to a higher degree of ordering. \begin{figure}[hbtp] \centering \includegraphics[width=0.75\linewidth]{figure14} \caption{Phase diagram of the assembly of ellipses in a diblock copolymer with $f_0=0.3$. The number of particles is explored in the X axis $\phi_p$ and the strength of the interparticle potential is tuned via $U_0$. Markers relate to the value of the orientational order parameter as: blue cross {\color{blue} x} for $S<0.01$; red dots {\color{red} $\boldsymbol{\cdot}$} for $0.01<S<0.3$; and black plus sign {\color{black} +} for $S>0.3$ } \label{fig:ellipse.phd-U0-phip} \end{figure} We can therefore conclude that at low volume fraction, strong interparticle interaction is necessary for obtaining ordered NP superstructures. This is not the case at higher filling fraction, in which merely the NP-BCP interaction is enough to ensure that the NP will assemble in the described configuration. \section{Conclusions} The co-assembly of anisotropic nanoparticles in BCPs has been studied by means of mesoscopic simulations in the case of A-modified NPs with an elliptical shape. Ellipsoidal nanoparticles have been shown to induce phase transitions in the block copolymer matrix due to an increase in the effective concentration of the hosting copolymer. In turn, the combination of BCP-NP coupling and intercolloidal attractive forces leads to a well-ordered configuration of anisotropic colloids which reproduces experimental results \cite{ploshnik_co-assembly_2010,ploshnik_hierarchical_2010}. When confined within one of the BCP phases, ellipsoids are found to orient normal to the domain axis in order to minimise the contacts with the surrounding incompatible phase while minimising the angle-dependent intercolloidal potential. Ellipsoidal colloids are used to mimic CdSe nanorods mixed with PS-\textit{b}-PMMA used in experiments. Direct comparison between microscopy images and simulations shows considerable similarity between simulations and experimental results. Despite the differences in the dynamic path between the experiments (solvent vapour annealing and three dimensional thickness of the film) and the simulations, we show that the final state of the ordered configuration is reproducible with different initial conditions resulting in side-to-side nanorods. 
This suggests that the simulated model captures the most dominant factors in the co-assembly, that is, interactions between the components of the system and the filling fraction. The size of the NP with respect to the BCP periodicity plays a crucial role in the assembly of anisotropic colloids. Smaller NPs tend to form more rows than large ones for a given BCP periodicity, whereas the ordering increases with larger particles, which is in accordance with experiments. Furthermore, NPs that are slightly larger than the lamellar spacing undergo a rotation with respect to the interface, while maintaining the side-to-side intercolloidal organisation. A study of several different initial conditions (both initially ordered and disordered) has led to the conclusion that weakly interacting NRs organise side-to-side within a phase-separated BCP that achieves a lamellar morphology, regardless of the initial condition. In this regime the BCP can undergo the usual phase separation even in the case of an initially clustered NP configuration. On the other hand, the initial configuration of colloids is crucial in the case of strongly interacting nanoparticles, which are trapped in a metastable state that does not allow the system to reach a side-to-side organisation. This occurs, for example, if NPs are initially forming aggregates, which the BCP is unable to break up. Weakly interacting NRs' behaviour can be related to metal NRs, which lack a fixed dipole moment and for which the co-assembly is mostly dictated by the block copolymer morphology. Semiconductor NRs such as CdSe display a fixed dipole moment ($3.3 \times 10^{-28}$ C m for $33$ nm rods\cite{li_origin_2003}, for example) leading to side-to-side organisation even at low concentrations. Even higher NR-NR interactions, such as for ZnO NRs (dipole moment of $4.1 \times 10^{-26}$ C m for $33$ nm rods\cite{dag_large_2011}), can be related to a high value of $U_0$. Finally, a high energy anisotropy in the colloid-colloid potential has been shown to be crucial for determining the final side-to-side/normal-to-domain-axis configuration. In summary, we have presented a computational method that mimics a complex co-assembly process of anisotropic NPs in BCPs. We have been able to gain insight into the role of several parameters that are experimentally difficult to explore. We have identified the importance of the relative size between the NP main axis and the lamellar spacing, which dictates the relative orientation and the number of rows in the assembly. Two different energy regimes have been identified: weakly and strongly interacting NPs undergo different types of co-assembly with the BCP, corresponding to metallic and semiconductor NPs, respectively. Weakly interacting NPs with a high energetic anisotropy have been shown to display the highest level of side-to-side configuration while allowing the BCP lamellar morphology to fully develop. \begin{acknowledgement} I. P. acknowledges support from MINECO (Grant No. PGC2018-098373-B-100), DURSI (Grant No. 2017 SGR 884) and SNF Project No. 200021-175719. The authors thank Elina Ploshnik, Asaf Salant and Uri Banin for their contribution to the experimental results shown in the paper. Financial support was provided by the Israeli Science Foundation, grant number 229/17. JD thanks the BritishSpanish Society for financial support. \end{acknowledgement} \begin{suppinfo} A simplified energetic analysis to justify the side-to-side, normal-to-interface orientation of nanorods. \end{suppinfo}
\section{Introduction} In the Bayesian paradigm for presenting forensic evidence to court, it is recommended that the weight of the evidence be summarized as a \emph{likelihood ratio} (LR) between two opposing hypotheses of how the evidence could have been produced. Such LRs are necessarily based on probabilistic models, the parameters of which may be uncertain. It has been suggested by some authors that the value of the LR, being a function of the model parameters should therefore also be considered uncertain and that this uncertainty should be communicated to the court. In this tutorial, we consider a simple example of a \emph{fully Bayesian} solution, where model uncertainty is integrated out to produce a value for the LR which is \emph{not} uncertain. We show that this solution agrees with common sense. In particular, the LR magnitude is a function of the amount of data that is available to estimate the model parameters. Bayesian methods are often criticised because of the difficulty of choosing appropriate priors, especially when the priors are non-informative. We do not deny these difficulties, but the problem is not solved by adopting frequentist methods that effectively sweep the prior under the carpet and pretend it does not exist. In this tutorial we do need to choose a non-informative prior and we choose it by examining the effect it has on the end-result. We shall reference the following books: E.T.~Jaynes, \emph{Probability Theory: The Logic of Science}, Cambridge University Press 2003, which we shall abbreviate as PTLOS; and D.J.~Balding, Weight-of-evidence for Forensic DNA Profiles, Wiley 2005, abbreviated as WEFDNA. \section{Simplified DNA model} \def\pvec{\mathbf{p}} \def\nvec{\mathbf{n}} \def\xvec{\mathbf{x}} \def\qvec{\mathbf{q}} \def\avec{\mathbf{a}} In this tutorial we shall derive the details of how to compute the LR with a \emph{simplified} DNA-like model. The idea is not to provide a recipe that can be used in real forensic DNA analysis, but rather to choose a model that facilitates better understanding of the basic look and feel of a fully Bayesian solution. We need the model to be very simple so that we can perform the Bayesian integrals in closed form. More realistic models would require more complex methods, which would obscure the primary purpose of this tutorial. We suppose that the \emph{DNA profile} of every individual has $K$ different binary \emph{loci} the \emph{state} of each of which can be either 1 or 0. Every individual is therefore categorized by $K$ binary variables, which gives a total number of $2^K$ states.\footnote{In real DNA profiling, there are different locus types, with more complex state spaces. For example, STR loci consist of two parts with independent states, one inherited from the father and the other from the mother. Each part has 2 or more states, called \emph{alleles}. DNA profiling technology can detect the state of each part, but does not show which comes from the mother and which from the father.} We represent a DNA profile by a vector of the form $\avec=(a_1,a_2,\ldots,a_K)$, where $a_k\in\{0,1\}$ represents the state of locus $k$. We assume that given a DNA sample (either recovered at the crime scene where it was left by the \emph{perpetrator}, or obtained from the \emph{suspect}), the state of each locus may be determined without error. The main complication is when all suspect and perpetrator loci match, that there is a non-zero probability that some person other than the suspect could have the same DNA profile. 
To compute this probability, we need to model profile distributions. \section{Profile distribution model} Here we define a generative model that is probably about as simple as it can be. Again, our goal is just to illustrate the basic principles of a fully Bayesian approach to this kind of problem. The goal of this exercise is not to reproduce a realistic DNA model---in real population genetics, the models are more complex. Let the probability that locus $k$ of a randomly chosen person has state 1 be $q_k$, and the probability that it has state 0 be $1-q_k$. According to this model we assume the following independencies: \begin{itemize} \item The locus states are independent: knowing the state of locus $k$ for one or more individuals, tells us nothing about the states of other loci $k'$. \item For each locus $k$, the binary state for each person is sampled as an \emph{iid} Bernoulli trial with parameter $q_k$. \end{itemize} We can collect the locus probabilities in the vector\footnote{Note that the elements of $\qvec$ usually do not sum to one. These are $K$ independent probabilities, not one $K$-ary categorical distribution.} $\qvec=(q_1,q_2,\ldots,q_K)$. We refer to $\qvec$ as the \emph{model parameter}, which encodes everything there is to know (under the above modelling assumptions) about how locus states are distributed in the population. The model can be summarized by: \begin{align} P(\avec|\qvec) &= \prod_{k=1}^K q_k^{a_k}(1-q_k)^{1-a_k} \end{align} which is the probability that a randomly chosen individual has DNA profile $\avec=(a_1,a_2,\ldots,a_K)$ in a population characterized by the model parameter $\qvec=(q_1,q_2,\ldots,q_K)$. The complication is that we are \emph{not} given $\qvec$. Its value has to be inferred from prior assumptions and from data. \section{Inferring the model parameter} \def\pivec{\boldsymbol{\pi}} We do a Bayesian inference for the value of $\qvec$, by computing a posterior distribution. \subsection{Prior} As prior for $q_k$, we assign a \emph{beta distribution}. This choice has a threefold motivation: (i) The beta distribution is a conjugate prior for this problem, which allows for closed-form Bayesian calculations. (ii) It is commonly used in forensic DNA practice. (iii) It is general enough to include various non-informative priors, which will be of special interest to us. We assign independently for each $q_k$ a beta distribution with hyper-parameter $\pi_k = (\alpha_k,\beta_k)$, so that: \begin{align} P(\qvec|\pivec) &= \prod_{k=1}^K \BD(q_k|\alpha_k,\beta_k) \\ &= \prod_{k=1}^K \frac{q_k^{\alpha_k-1}(1-q_k)^{\beta_k-1}}{B(\alpha_k,\beta_k)} \end{align} where we have defined $\pivec=(\pi_1,\pi_2,\ldots,\pi_K)$. The normalization constant of the beta distribution is given by the \emph{beta function}, defined as: \begin{align} B(\alpha,\beta)&=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} = \int_0^1 q^{\alpha-1}(1-q)^{\beta-1} \,dq \end{align} where $\Gamma$ is the gamma function. For the beta distribution to be normalized, we need $\alpha_k,\beta_k>0$ and unless stated otherwise, we shall assume this condition holds for all our calculations below. In places, we will however consider the limit as $\alpha_k=\beta_k\to0$. When we do this, we will follow the advice of PTLOS and complete the whole calculation under the assumption $\alpha_k,\beta_k>0$ and apply the limit only to the final result. 
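As a concrete, purely illustrative sketch of this generative model in Python (the function names are ours and are not part of the formal development), one could sample a population parameter from the beta prior and then sample profiles from it as follows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_population_parameter(alpha, beta):
    """Draw q = (q_1, ..., q_K) from independent Beta(alpha_k, beta_k) priors."""
    return rng.beta(alpha, beta)

def sample_profile(q):
    """Draw one profile a = (a_1, ..., a_K) of independent Bernoulli(q_k) loci."""
    return (rng.random(len(q)) < q).astype(int)

K = 10
alpha = np.full(K, 0.5)   # a symmetric choice alpha_k = beta_k = 1/2, for example
beta = np.full(K, 0.5)
q = sample_population_parameter(alpha, beta)
print(sample_profile(q))
\end{verbatim}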
\subsubsection{Non-informative priors} \label{sec:non} If we want to use a non-informative prior, we let $\alpha=\alpha_k=\beta_k$ by symmetry, and we can choose some $\alpha$, for example in the range $0<\alpha\le1$. The case $\alpha\to0$ is called the \emph{Haldane} prior, the case $\alpha=0.5$ is the \emph{Jeffreys} prior and $\alpha=1$ is the \emph{Laplace} prior. The Haldane prior is \emph{flat} in the sense that the probability density for $\log\frac{q}{1-q}$ is uniform, but since this reparametrization of $q$ covers the whole real line, this prior is improper. The Jeffreys prior is flat in the sense that the probability density for $\arcsin(2q-1)$ is uniform between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. The Laplace prior is flat in the sense that the probability density for $q$ is uniform between 0 and 1. As these names show, different workers in probability theory have arrived at different conclusions about which prior should be used to encode non-informativeness about the Bernoulli model parameter. To make our calculations concrete, we will have to make a definite choice of prior. We shall solve this problem in a later section, by examining the effect of the prior on the end-result of our calculation. \subsubsection{Informative prior} In forensic DNA\footnote{See WEFDNA pp. 63-64.} it is customary to reparametrize the beta prior as: \begin{align} \alpha_k &= \frac{1-\theta}{\theta}p_k\,, & \beta_k &= \frac{1-\theta}{\theta}(1-p_k) \end{align} where $0<p_k<1$ and $0<\theta<1$. Here $\theta$ is known as the \emph{population structure parameter}. With this parametrization, $\BD(q_k|\alpha_k,\beta_k)$ has the following mean and variance: \begin{align} \langle q_k\rangle &= \frac{\alpha_k}{\alpha_k+\beta_k} = p_k & \langle(q_k-p_k)^2\rangle &= \theta p_k(1-p_k) \end{align} For small values of $\theta$, one obtains an \emph{informative} prior, with a small variance and a sharp peak near $p_k$. In the extreme as $\theta\to0$, we get a strongly informative prior, which will override contributions made by finite data and therefore asserts $q_k=p_k$. For the case $p_k=\frac{1}{2}$ and $\theta=\frac{1}{2\alpha+1}\ge\frac{1}{3}$, we recover the above-mentioned non-informative priors: Laplace at $\theta=\frac{1}{3}$, Jeffreys at $\theta=\frac{1}{2}$ and in the extreme as $\theta\to1$, the Haldane prior, which gives maximum weight to the data. These effects will be shown below. \subsection{Database} \label{sec:db} \def\Amat{\mathbf{A}} We make provision in our calculation to optionally use a database of examples to help us infer values for $\qvec$. Let $\Amat=(\avec_1,\avec_2,\ldots,\avec_L)$ be a database of DNA profiles for $L$ different individuals, where the profile for individual $\ell$ is $\avec_\ell=(a_{1\ell},a_{2\ell},\ldots,a_{K\ell})$ and where $a_{k\ell}\in\{0,1\}$ is the binary state of locus $k$ of individual $\ell$. We assume the DNA profiles in $\Amat$: \begin{itemize} \item have been sampled iid from the same population as the suspect and perpetrator and are therefore relevant to inferring the parameter $\qvec$, \item but the individuals are distinct from the suspect and the perpetrator. \end{itemize} Our calculations will allow for the case of the empty database, where $L=0$. 
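The two prior parametrizations and the sufficient statistics of the database can be collected in a few lines of Python (again illustrative only; variable names are ours):
\begin{verbatim}
import numpy as np

def pseudo_counts(p, theta):
    """Convert the (p_k, theta) parametrization into beta pseudo-counts."""
    alpha = (1.0 - theta) / theta * p
    beta = (1.0 - theta) / theta * (1.0 - p)
    return alpha, beta

def database_counts(A):
    """A: an L x K array of 0/1 locus states (L may be 0); returns (n_k, L)."""
    A = np.asarray(A)
    return A.sum(axis=0), A.shape[0]

# Example: p_k = 1/2 and theta = 1/2 give the pseudo-counts alpha_k = beta_k = 1/2
print(pseudo_counts(0.5, 0.5))
\end{verbatim}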
\subsection{Likelihood} Because of our independence assumptions in the model, the likelihood for $\qvec$, given the database $\Amat$ is: \begin{align} P(\Amat | \qvec) &= \prod_{\ell=1}^L \prod_{k=1}^K q_k^{a_{k\ell}}(1-q_k)^{1-a_{k\ell}} \\ &= \prod_{k=1}^K q_k^{n_k}(1-q_k)^{L-n_k} \end{align} where $n_k=\sum_{\ell=1}^L a_{k\ell}$ is the number of times locus $k$ has state 1 and $L-n_k$ is the number of times it has state 0. \subsection{Posterior} We can now infer the value of $\qvec$ by computing the posterior: \begin{align} P(\qvec|\Amat,\pivec) &= \frac {P(\qvec|\pivec)P(\Amat|\qvec)} {\int P(\qvec'|\pivec)P(\Amat|\qvec') \,d\qvec'} \\ &= \prod_{k=1}^K \frac { q_k^{\alpha_k+n_k-1}(1-q_k)^{\beta_k+L-n_k-1}} {\int_0^1 q_k'^{\,\alpha_k+n_k-1}(1-q_k')^{\beta_k+L-n_k-1} \,dq_k'}\\ \label{eq:qpost} &= \prod_{k=1}^K \BD(q_k|\alpha_k+n_k,\beta_k+L-n_k) \end{align} where the integral in the denominator was solved by inspection, by recognizing the numerator as another beta distribution. This is due to the fact that the beta distribution is conjugate to the Bernoulli likelihood and therefore should result in a beta posterior. Notice that if the database is empty, then $n_k=L=0$ and the posterior is just the prior. The prior parameters $\alpha_k$ and $\beta_k$ play the same roles mathematically as the event counts $n_k$ and $L-n_k$ and are consequently referred to as \emph{pseudo-counts}. The total pseudo count, $\alpha+\beta$ can be interpreted as the \emph{size} of some pseudo database, which is then effectively pooled with $\Amat$ by the additions in~\eqref{eq:qpost}. In the alternative prior parametrization, $\frac{1-\theta}{\theta}p_k$ and $\frac{1-\theta}{\theta}(1-p_k)$ are the pseudo counts and $\frac{1-\theta}{\theta}$ is the size of the pseudo database. The posterior $P(\qvec|\Amat,\pivec)$ represents our total state of knowledge about $\qvec$ and can be used in all calculations in place of the unknown $\qvec$. \section{Forensic LR} \def\rvec{\mathbf{r}} \def\LR{\text{LR}} We are given two DNA profiles: One for the \emph{suspect}, $\svec=(s_1,s_2,\ldots,s_K)$ and one for the \emph{perpetrator}, $\rvec=(r_1,r_2,\ldots,r_K)$. We work with two hypotheses and assume they are the \emph{only} possible explanations for the observed data $\svec,\rvec$: \begin{itemize} \item The \emph{prosecution hypothesis}, $H_p$, asserts that suspect and perpetrator are the same person. \item The \emph{defence hypothesis} $H_d$, asserts that they are different individuals. \end{itemize} Below we compute the likelihoods under each hypothesis. For now, we assume that if they don't match, $\rvec\ne\svec$, then in the absence of DNA measurement errors, this proves deductively that $H_d$ is true and $H_p$ is false. In the matched case, $\rvec=\svec$, however, we need probabilistic reasoning. The most natural way to do this would be to compute the \emph{posterior}, \begin{align} P(H_p|\rvec,\svec,\Amat,\pivec,\Pi) = 1-P(H_d|\rvec,\svec,\Amat,\pivec,\Pi) \end{align} where we have introduced the \emph{prior for the prosecution hypothesis}, \begin{align} \Pi=P(H_p|\Pi)=1-P(H_d|\Pi) \end{align} which is assigned by a reasoning process not involving DNA profiles. However, in the Bayesian paradigm for presenting evidence in court one equivalently considers the \emph{posterior odds for $H_p$ against $H_d$}, which can be separated\footnote{In the real world, this simple factorization applies only in a limited number of cases. 
If different alternative culprits, with different levels of relatedness to the suspect, are considered, somewhat more general formulas have to be used, as explained in WEFDNA.} into two factors: \emph{likelihood ratio} and \emph{prior odds}, respectively representing the contributions of the DNA analysis and all other evidence not related to DNA: \begin{align} \label{eq:odds} \frac{P(H_p|\rvec,\svec,\Amat,\pivec,\Pi)}{P(H_d|\rvec,\svec,\Amat,\pivec,\Pi)} &= \LR \, \frac{\Pi}{1-\Pi} \end{align} where \begin{align} \LR &= \frac{P(\rvec,\svec|H_p,\Amat,\pivec)}{P(\rvec,\svec|H_d,\Amat,\pivec)} \end{align} is referred to as \emph{the likelihood ratio}. It is then recommended that the end-goal of the forensic DNA analysis is to compute $\LR$, which can be done independently of $\Pi$. We derive expressions for both likelihoods below and then form the ratio. Finally, notice that if $\LR=1$, then the DNA analysis is \emph{completely non-informative} about $H_p$ versus $H_d$: in this case the posterior (odds) is the same as the prior (odds). \subsection{Prosecution likelihood} Under the prosecution hypothesis, $\rvec$ and $\svec$ come from the same individual, so that $P(\rvec,\svec|H_p,\qvec)=\delta(\rvec,\svec)P(\svec|\qvec)$, where $\delta(\rvec,\rvec)=1$, or $\delta(\rvec,\svec)=0$ if $\rvec\ne\svec$. Since we are not given $\qvec$, but instead we are given the prior $\pivec$ and the database $\Amat$, we must condition on what we have and instead compute: \begin{align} P(\rvec,\svec|H_p,\pivec,\Amat) &=\delta(\rvec,\svec) P(\svec|\pivec,\Amat) \end{align} where \begin{align} P(\svec|\pivec,\Amat) &= \int_0^1\int_0^1\cdots\int_0^1 P(\svec|\qvec)P(\qvec | \pivec,\Amat)\,dq_1 dq_2\cdots dq_K \\ &= \int_Q P(\svec|\qvec)P(\qvec | \pivec,\Amat)\,d\qvec \end{align} where $Q$ is short-hand for the $K$-cube over which we are integrating. Note $P(\svec|\pivec,\Amat)$ is called the \emph{predictive distribution} for $\svec$, because it predicts the value of an as yet unseen profile, given that we have already seen the profiles in $\Amat$. Again by virtue of the conjugate prior, the predictive distribution can be found in closed form: \begin{align} P(\svec|\pivec,\Amat) &= \int_Q P(\svec |\qvec) P(\qvec | \pivec,\Amat) \,d\qvec \\ &= \prod_{k=1}^K \int_0^1 q_k^{s_k}(1-q_k)^{1-s_k} \frac{q_k^{\alpha_k+n_k-1}(1-q_k)^{\beta_k+L-n_k-1}}{B(\alpha_k+n_k,\beta_k+L-n_k)} \,dq_k \\ &= \prod_{k=1}^K \frac {\int_0^1 q_k^{\alpha_k+s_k+n_k-1}(1-q_k)^{\beta_k+L+1-s_k-n_k-1} \,dq_k} {B(\alpha_k+n_k,\beta_k+L-n_k)} \\ &= \prod_{k=1}^K \frac {B(\alpha_k+s_k+n_k,\beta_k+L+1-s_k-n_k)} {B(\alpha_k+n_k,\beta_k+L-n_k)} \\ &= \prod_{k=1}^K P(s_k|\pi_k,n_k,L) \end{align} Now we can expand the beta functions in terms of gamma functions and simplify the ratios of gammas with the identity $\Gamma(x+1)=x\Gamma(x)$, to find the predictive probability:\footnote{Notice~\eqref{eq:pred1i} agrees with equation 5.6 on page 64 in WEFDNA.} \begin{align} \label{eq:pred1n} P(s_k=1|\pi_k,n_k,L) &= \frac{\alpha_k+n_k}{\alpha_k+\beta_k+L} \\ \label{eq:pred1i} &= \frac{(1-\theta)p_k+\theta n_k}{(1-\theta)+\theta L} \end{align} For the informative prior case, notice that $\theta$ gives \emph{interpolation} weights between data and the prior parameter $p_k$. At the one extreme if $\theta\to1$ (Haldane prior), we disregard the prior parameter $p_k$ and end up with just the data proportion $\frac{n_k}{L}$. At the other extreme if $\theta=0$, we disregard the data $\Amat$ and end up with the prior parameter $p_k$. 
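In code, the predictive probability of Eq.~\eqref{eq:pred1n} is a one-liner (illustrative Python, continuing the sketch above):
\begin{verbatim}
def predictive_prob_one(alpha, beta, n, L):
    """P(s_k = 1 | pseudo-counts alpha, beta and database counts n, L)."""
    return (alpha + n) / (alpha + beta + L)

# Sanity check: with an empty database and a symmetric prior the prediction is 1/2
assert predictive_prob_one(0.5, 0.5, 0, 0) == 0.5
\end{verbatim}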
(If we use the non-informative Laplace prior, with $\alpha_k=\beta_k=1$, then~\eqref{eq:pred1n} is known as Laplace's \emph{rule of succession}.) Finally, the predictive probability\footnote{Notice $P(s_k=0|\alpha_k,\beta_k,n_k,L)+P(s_k=1|\alpha_k,\beta_k,n_k,L)=1$.} for the event $s_k=0$ is: \begin{align} P(s_k=0|\pi_k,n_k,L) &= \frac{\beta_k+L-n_k}{\alpha_k+\beta_k+L} \\ &= \frac{(1-\theta)(1-p_k)+\theta (L-n_k)}{(1-\theta)+\theta L} \end{align} Note that even for an empty database ($L=n_k=0$), our assumption $\alpha_k,\beta_k>0$ guarantees non-zero predictive probabilities. \subsection{Defence likelihood} Under the defence hypothesis, $\rvec$ and $\svec$ come from different individuals and their probabilities are independent given $\qvec$, so that $P(\rvec,\svec|\qvec)=P(\rvec|\qvec)P(\svec|\qvec)$. However, $\qvec$ is not given, so the independence no longer holds: knowledge of one profile changes the probability for $\qvec$, which in turn changes the probability for the other profile. This dependency is automatically taken care of by applying the rules of probability theory by integrating out the unknown $\qvec$: \begin{align} P(\rvec,\svec|H_d,\pivec,\Amat) &= \int_Q P(\rvec |\qvec) P(\svec |\qvec) P(\qvec | \pivec,\Amat) \,d\qvec \\ &= \prod_{k=1}^K \frac {B(\alpha_k+s_k+r_k+n_k,\beta_k+L+2-r_k-s_k-n_k)} {B(\alpha_k+n_k,\beta_k+L-n_k)} \\ &= \prod_{k=1}^K P(r_k,s_k|\pi_k,n_k,L) \end{align} where we can expand and simplify again to find the predictive probability: \begin{align} \label{eq:pred11n} P(r_k=s_k=1|\pi_k,n_k,L) &= \frac{\alpha_k+n_k}{\alpha_k+\beta_k+L} \; \frac{\alpha_k+n_k+1}{\alpha_k+\beta_k+L+1} \\ &=P(r_k=1|\pi_k,n_k,L) P(s_k=1|\pi_k,n_k+1,L+1) \end{align} Notice the similarity between the two factors in the RHS: the right factor is obtained from the left by adding 1's to the observation counts. Notice also that if $\alpha+n_k\gg1$, then $P(r_k=s_k=1|\pi_k,n_k,L)\approx P(r_k=1|\pi_k,n_k,L)P(s_k=1|\pi_k,n_k,L)$, making the two events \emph{almost} independent. The probability for the other event of interest\footnote{We don't need the events $(0,1)$ and $(1,0)$ here, because we are interested in the case where profiles match.} is obtained similarly as: \begin{align} P(r_k=s_k=0|\pi_k,n_k,L) &= P(r_k=0|\pi_k,n_k,L) P(s_k=0|\pi_k,n_k,L+1) \end{align} \subsection{LR} Forming the likelihood-ratio, we find: \begin{align} \LR&=\frac{P(\rvec,\svec|H_p,\pivec,\Amat)} {P(\rvec,\svec|H_d,\pivec,\Amat)} = \prod_{k=1}^K \LR_k(r_k,s_k) \end{align} where \begin{align} \LR_k(r,s)&= \frac {\delta(r,s)P(s|\pi_k,n_k,L)} {P(r,s|\pi_k,n_k,L)} \\ &= \frac{\delta(r,s)P(s|\pi_k,n_k,L)} {P(s,s|\pi_k,n_k,L)} \\ &=\frac{\delta(r,s)P(s|\pi_k,n_k,L)}{P(s|\pi_k,n_k,L)P(s|\pi_k,n_k+s,L+1)} \\ &=\frac{\delta(r,s)}{P(s|\pi_k,n_k+s,L+1)} \end{align} More explicitly, for the mismatched cases we have \begin{align} \LR_k(0,1)=\LR_k(1,0)=0 \end{align} and for the matched cases we have \begin{align} \LR_k(1,1) &=\frac{\alpha_k+\beta_k+L+1}{\alpha_k+n_k+1}\,, & \LR_k(0,0) &= \frac{\alpha_k+\beta_k+L+1}{\beta_k+L+1-n_k} \end{align} or, with the other prior parametrization: \begin{align} \label{eq:lr11} \LR_k(1,1) &= \frac{(1-\theta)+\theta(L+1)}{(1-\theta)p_k+\theta(n_k+1)} \\ \intertext{and} \label{eq:lr00} \LR_k(0,0) &= \frac{(1-\theta)+\theta(L+1)}{(1-\theta)(1-p_k)+\theta(L+1-n_k)} \end{align} Notice again, that $\theta$ interpolates between data and the prior parameter $p_k$. The minimum value (for the matched case $r_k=s_k$) is 1. This is a consequence of the error-free measurement assumption. 
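Collecting the matched-case expressions above, a minimal Python sketch of the per-locus likelihood ratios and of their product over loci (illustrative only; not a forensic tool) is:
\begin{verbatim}
import numpy as np

def lr_locus(s, alpha, beta, n, L):
    """LR_k for a matched locus with common state s (0 or 1)."""
    if s == 1:
        return (alpha + beta + L + 1.0) / (alpha + n + 1.0)
    return (alpha + beta + L + 1.0) / (beta + L - n + 1.0)

def lr_total(profile, alpha, beta, n, L):
    """Product of per-locus LRs for a fully matched pair of profiles."""
    return np.prod([lr_locus(s, a, b, nk, L)
                    for s, a, b, nk in zip(profile, alpha, beta, n)])
\end{verbatim}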
If non-zero error probabilities were considered, values of less than 1 would be possible. \section{Plug-in recipe} In this section, we shall refer to: \begin{itemize} \item One or more \emph{reference populations}, from which one or more databases are drawn to help to estimate the parameters $p_k$ and $\theta$ for an \emph{informative} prior. \item The \emph{relevant population}, from which the suspect and perpetrator were drawn. \end{itemize} In the general case, all these populations are assumed different from each other in the sense that locus state \emph{frequencies may differ} between them. For forensic DNA applications, WEFDNA motivates a \emph{plug-in} recipe to compute the $\LR$, where values for $\theta$ and the $p_k$ are point-estimates made from one or more reference databases. In this recipe, the $p_k$ are representative of the frequencies in the \emph{reference} populations, while the value of $\theta$ is chosen to reflect by how much the corresponding frequencies in the \emph{relevant} population may differ. Small values of $\theta$ encode small expected differences and larger values encode larger expected differences. WEFDNA motivates for values in the range $1\%\le\theta\le5\%$ to be used for most applications. Our database $\Amat$, as defined in section~\ref{sec:db}, is assumed to be drawn from the \emph{relevant} population, but in the usual forensic scenario, additional profiles from the relevant population are not available. In our notation, this means $\Amat$ is empty. In summary, in the WEFDNA plug-in recipe we set $L=n_k=0$, the $p_k$ are generally different from $\frac{1}{2}$ and $\theta$ is smallish. This forms an \emph{informative} prior for the $q_k$. This gives, for $r_k=s_k$: \begin{align} \LR_k(1,1) &= \frac{1}{(1-\theta)p_k+\theta}\,& \LR_k(0,0) &= \frac{1}{(1-\theta)(1-p_k)+\theta} \end{align} \section{Fully Bayesian recipe} Now we turn to the main purpose of this document, namely to explore a fully Bayesian recipe, where we start with a \emph{non-informative} prior and use only the given data, $\Amat,\rvec,\svec$, to infer the model parameter. It must be emphasized that this fully Bayesian recipe cannot be used as is to replace the plug-in recipe, because here we use the luxury of database $\Amat$, sampled from the relevant population. As noted above, in a realistic scenario, we do not have this luxury: instead we have to make do with data sampled from some other, somewhat different, reference population. Although a fully Bayesian recipe could in principle be derived for this more realistic scenario, this would come at the cost of a considerable increase in both conceptual difficulties as well as computational complexity. In this section, therefore we assume we \emph{do} have a database, $\Amat$, sampled from the relevant database and the only difficulty that remains is to choose the non-informative prior. \subsection{Which prior?} We are now faced with making a choice amongst the different flavours of non-informative priors. That is, we have to choose $\alpha_k$ and $\beta_k$, or equivalently $p_k$ and $\theta$. We concede that we are choosing a prior under the perhaps arbitrary constraint that it should be a beta distribution. A more thorough motivation for the prior should perhaps involve solving functional equations in the style of PTLOS. We feel however that the beta distribution already provides a rich enough space for the choice of prior. 
Moreover, as mentioned above, the non-informative Haldane, Jeffreys and Laplace priors all members of the beta family. To start, we motivate the choice $\alpha=\alpha_k=\beta_k$, or equivalently $p_k=\frac{\alpha_k}{\alpha_k+\beta_k}=\frac{1}{2}$. Before we have seen any data, all loci are on an equal footing, so that the priors for all $k$ must be the same. Next consider a database $\Amat$ with an equal number of 0's and 1's for some locus $k$, so that $n_k=L-n_k$. In this situation, there is no reason to prefer one state to the other, so that the model parameter posterior should satisfy the symmetry condition: $P(q_k|\alpha_k,\beta_k,L,n_k) = P(1-q_k|\alpha_k,\beta_k,L,n_k)$, which is obtained at $\alpha_k=\beta_k$. Another way to see this is simply to require $\LR_k(0,0)=\LR_k(1,1)$ when $n_k=L-n_k$. Now we have $p_k=\frac{1}{2}$ and we still need to choose $\theta$. To do this, consider the case of the \emph{empty} database, with $L=n_k=0$, for which case we still want our recipe to give a sensible answer. Now~\eqref{eq:lr11} and~\eqref{eq:lr00} give: \begin{align} \LR_k(1,1)&=\LR_k(0,0)=\frac{1}{(1-\theta)\frac{1}{2}+\theta}=\frac{2}{1+\theta} \end{align} When $\Amat$ is empty, we now argue that we don't even know whether the locus state varies in the population. So we are not justified in concluding that the match at the locus modifies the probabilities for $H_p$ vs $H_d$. If we maximize $\theta$ at the limit $\theta\to1$, then we obtain the non-informative value of $\LR_k=1$, so that the DNA evidence is effectively \emph{disregarded}. \subsection{Analysis} Here we analyse the behaviour of $\LR_k(r_k,s_k)$, when $r_k=s_k$ and $\theta=1$. We get: \begin{align} \LR_k(1,1) &= \frac{L+1}{n_k+1}, & \LR_k(0,0) &= \frac{L+1}{L+1-n_k} \end{align} We make several observations: \begin{itemize} \item The matched likelihood ratios are bounded: $1\le LR_k(s,s) \le L+1$. We have already commented on the lower bound. The upper bound is determined by the database size, $L$. This makes intuitive sense, the larger the database, the more our maximum confidence grows. Note however, that this maximum should be a relatively rare occurrence, as shown below. \item For an empty database, if $L=n_k=0$, then as discussed, $\LR_k(1,1)=\LR_k(0,0)=1$. \item For a non-empty database, as long as a locus $k$ has the same state in all of the observed data, $\Amat,\rvec,\svec$, then the $\LR$ is \emph{still} unity: If $n_k=L$, then $\LR_k(1,1)=1$ and if $n_k=0$, then $\LR_k(0,0)=1$. \item Conversely, for a given database size $L$, the maximum $\LR$ value is reached when the locus state observed in $s_k=r_k$ has never been observed in $\Amat$. This implies the trait shared by the suspect and perpetrator is \emph{rare}. The larger the database size, $L$, the more we are convinced of the rarity and the more we are convinced of the identity of suspect and perpetrator. \item For a large database, where both $n_k\gg1$ and $L-n_k\gg1$, the likelihood ratio for $s_k=r_k$ is the inverse of the frequency of the corresponding event in the database: $\LR_k(1,1)\approx\frac{L}{n_k}$ and $\LR_k(0,0)\approx\frac{L}{L-n_k}$. \end{itemize} We can briefly compare this recipe to a very naive recipe, where we simply assign $q_k=\frac{n_k}{L}$, irrespective of the size of the database. This would give $\LR(1,1)=\frac{L}{n_k}$ and $\LR_k(0,0)=\frac{L}{L-n_k}$. This agrees with the last case above of the Bayesian recipe, but in any other cases it could give overconfident results. 
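The contrast is easy to see numerically; the following sketch (with made-up counts) evaluates both recipes side by side for the matched case $r_k=s_k=1$ under the Haldane prior ($\theta=1$):
\begin{verbatim}
# Sketch: fully Bayesian LR (Haldane prior, theta = 1) versus the naive
# plug-in q_k = n_k / L, for the matched case r_k = s_k = 1.
# The (L, n_k) pairs are made-up illustration values.
def lr_bayes(n_k, L):
    return (L + 1) / (n_k + 1)

def lr_naive(n_k, L):
    return L / n_k if n_k > 0 else float("inf")

for L, n_k in [(1, 0), (10, 0), (10, 1), (100, 50), (1000, 10)]:
    print(L, n_k, round(lr_bayes(n_k, L), 2), lr_naive(n_k, L))
# For n_k = 0 the naive recipe returns an infinite LR even when L = 1,
# while the Bayesian recipe is bounded by L + 1; for large n_k and L
# the two recipes approximately agree.
\end{verbatim}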
In particular, if $n_k=0$, or $n_k=L$, one could get infinite $\LR$ values, which would be ridiculous in the extreme if $L=1$. The fully Bayesian recipe agrees with the naive recipe when data is plentiful, but continues to give sensible answers even when the data gets scarce to the point of vanishing. \subsubsection{Comment on Haldane prior} With a more realistic DNA model, where each STR locus has two independent sides (paternal and maternal), we can gain some extra insight into the nature of the Haldane prior. In this case, it can be shown (WEFDNA, section 6.2.2) that when $L=0$, the LR for a locus can nevertheless reach a maximum of 3. If the paternal and maternal sides are the same, then we get LR=1, but if they are different, we get LR=3. From this fact and the third bullet above, we learn that: \begin{quote} The LR at locus $k$ becomes non-informative ($\LR_k=1$) under the Haldane prior, if and only if no state change has been observed at locus $k$ in \emph{all} of the data, $\Amat,\rvec,\svec$. \end{quote} One may argue that loci used for forensic DNA profiling have been chosen for the purpose of giving good discrimination between individuals, precisely because they \emph{do} vary appreciably between individuals, and that therefore the Haldane prior is too extreme. However, we are concerned here with \emph{sub-populations}, about which we cannot assume that every locus is informative---it may well be that a certain locus is constant over the whole sub-population. We therefore argue that the behaviour of the Haldane prior is appropriate: the LR for a locus remains non-informative ($\LR_k=1$), until we have observed at least one state change in our data. \section{Appendix} \subsection{Jeffreys prior} As shown in section~\ref{sec:non}, flatness depends on the particular parametrization, so that it may be difficult to motivate the use of a prior because of flatness. The Jeffreys prior however does have a special motivation, in the sense that it is defined irrespective of the parametrization. The Jeffreys prior is a functional of the likelihood and it does not matter which parametrization we use for the likelihood. If we start with the likelihood $L(q|X)=P(X|q)$, where $X$ is some data set, then the Jeffreys prior is\footnote{There is a generalization for vector valued parameters, with similar invariance properties. See e.g. \url{http://en.wikipedia.org/wiki/Jeffreys_prior}.} \begin{align} \label{eq:jprior} P(q|L)&\propto \sqrt{\left\langle\left(\frac{d}{dq}\log L(q|X)\right)^2\right\rangle_{X|q}} \end{align} Let us now reparametrize to some other parameter, say $\gamma$, via a bijection $f$, so that $\gamma=f(q)$, and define $L'(\gamma|X)=L(f^{-1}(\gamma)|X)$. Then some calculus shows that~\eqref{eq:jprior} gives an equivalent prior $P(\gamma|L')$, for which: \begin{align} P(\gamma|L')d\gamma &=P(q|L)dq \end{align} so that any inference where we integrate out $q$ or $\gamma$, involving definite integrals of the form \begin{align} \int h(f^{-1}(\gamma))P(\gamma|L')\,d\gamma &= \int h(q)P(q|L)\,dq \end{align} where $h(q)$ is some quantity of interest, will arrive at identical results. The benefit of this reparametrization property is subtle. If we assign \emph{any} prior according to some criterion, e.g. flatness, and then reparametrize, such a reparametrization also \emph{won't} change the result of inference. But the reparametrized prior usually won't be flat anymore, so that it does not meet our criterion for a suitable prior anymore.
In contrast, if we reparametrize the Jeffreys prior, it is still a Jeffreys prior. Put another way, the advantage of the Jeffreys prior is that it doesn't matter with which parametrization of our likelihood function we start---we will get the same result. If we assign a prior according to some other principle, we may arrive at different results if we start with different parametrizations.
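As a concrete instance of~\eqref{eq:jprior}, the sketch below evaluates the Jeffreys prior for a single Bernoulli observation, $L(q|x)=q^x(1-q)^{1-x}$, and recovers (up to normalization) the $\mathrm{Beta}(\frac{1}{2},\frac{1}{2})$ density $\propto q^{-1/2}(1-q)^{-1/2}$:
\begin{verbatim}
# Sketch: Jeffreys prior for a single Bernoulli observation.
# The expectation in eq. (jprior) is over x ~ Bernoulli(q).
import math

def jeffreys_unnormalized(q):
    # d/dq log L(q|x) = x/q - (1-x)/(1-q); square and average over x
    expected_sq = q * (1.0 / q) ** 2 + (1 - q) * (1.0 / (1 - q)) ** 2
    return math.sqrt(expected_sq)          # = 1 / sqrt(q (1 - q))

for q in [0.1, 0.25, 0.5, 0.75, 0.9]:
    direct = 1.0 / math.sqrt(q * (1 - q))  # Beta(1/2, 1/2) up to a constant
    print(q, jeffreys_unnormalized(q), direct)
\end{verbatim}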
\section{Introduction} \label{sec:intro} Szemer\'edi's regularity lemma \cite{Sz76} gives a rough structural decomposition for all graphs and is one of the most powerful tools in graph theory. A major drawback of the regularity lemma is that the number of parts in the decomposition grows as an exponential tower of $2$'s of height a power of $1/\epsilon$, where $\epsilon$ is the regularity parameter~\cite{Gow97}. A natural question that has been studied by many researchers is: in what circumstances can one get a more effective bound? Namely, under what conditions does every graph in a family $\mathcal{F}$ of graphs necessarily have a partition with much fewer parts, say polynomial in $1/\epsilon$? One natural condition for a family $\mathcal{F}$ of graphs is that it is \emph{hereditary}, that is, if $G \in \mathcal{F}$ then every induced subgraph of $G$ is also in $\mathcal{F}$. For hereditary families, it turns out that the bound on the number of parts in a regular partition is polynomial in $1/\epsilon$ if the neighborhood set system of every graph in the family has bounded VC dimension, and otherwise the bound is tower-type. This gives a satisfactory answer to the problem. A \emph{set system} $\mathcal{S}$ is a collection of subsets of some ground set $\Omega$. Here we only consider finite $\Omega$. We say that $U \subseteq \Omega$ is \emph{shattered} by $\mathcal{S}$ if for every $U' \subseteq U$ there is some $T \in \mathcal{S}$ with $T \cap U = U'$. The \emph{Vapnik--Chervonenkis dimension} (or \emph{VC dimension}) of $\mathcal{S}$, denoted $\vcdim \mathcal{S}$, is the size of the largest shattered set. Let $G$ be a graph. The \emph{neighborhood} $N(v)$ of a vertex $v \in V(G)$ is the set of vertices adjacent to $v$. The \emph{VC dimension} of a graph $G$ is defined to be $\vcdim\{N(v):v \in V\}$. Given a bipartite graph $F$ with vertex bipartition $V(F) = U \cup V$, we say that a map $\phi \colon V(F) \to V(G)$ \emph{bi-induces} $F$ if for every $(u,v) \in U \times V$, the pair $uv$ is an edge of $F$ if and only if $\phi(u)\phi(v)$ is an edge of $G$. Note that we have no requirements about edges in $G$ between vertices in the image of $U$, and likewise with $V$. We say that $G$ contains a \emph{bi-induced copy of $H$} if there exists a map $\phi$ as above that is injective on each of $U$ and $V$.\footnote{Having a bi-induced copy of $F$ is weaker than having an \emph{induced} copy of $F$, where in the latter we also require that there are no edges in $G$ between vertices in the image of $U$, and likewise with $V$. Also, an alternative notion of bi-induced copy of $H$ assumes that $\phi$ is injective. The discussed results hold for this alternative notion as well.} It is known that the following are equivalent for a hereditary family $\mathcal{F}$ of graphs: \begin{enumerate} \item[(1)] The VC dimension of the graphs in $\mathcal{F}$ is uniformly bounded. \item[(2)] There is a bipartite graph $F$ such that none of the graphs in $\mathcal{F}$ has a bi-induced copy of $F$. \item[(3)] The family $\mathcal{F}$ has a forbidden induced bipartite graph, a forbidden induced complement of a bipartite graph, and a forbidden induced split graph. \item[(4)] The number of graphs in $\mathcal{F}$ on $n$ vertices is at most $2^{n^{2-\epsilon}}$ for some $\epsilon=\epsilon(\mathcal{F})>0$. In contrast, every other hereditary family of graphs contains at least $2^{n^2/4}$ labeled graphs on $n$ vertices. 
\item[(5)] There is a constant $k = k(\mathcal{F})$ such that every $n$-vertex graph in $\mathcal{F}$ has an equitable vertex partition into at most $\epsilon^{-k}$ parts such that all but at most an $\epsilon$-fraction of the pairs of parts have edge density at most $\epsilon$ or at least $1-\epsilon$. In contrast, every other hereditary family of graphs has a graph that requires a tower in a power of $1/\epsilon$ parts in any $\epsilon$-regular equitable vertex partition. \end{enumerate} The above characterizations give an interesting dichotomy between hereditary families of graphs of bounded VC dimension versus those of unbounded VC dimension. It shows that families of graphs with bounded VC dimension have smaller growth and are more structured. The equivalence of (1) and (4) was given by Alon, Balogh, Bollob\'as, and Morris~\cite{ABBM}. Alon, Fischer, and Newman~\cite{AFN} proved a bipartite version of the regularity lemma for graphs of bounded VC dimension, and the version for all graphs is due to Lov\'asz and Szegedy \cite{LS}. The proof was simplified with improved bounds by Fox, Pach, and Suk \cite{FPS}. Further results related to the above equivalences for tournaments can be found in \cite{FGSY}. A \emph{half-graph} is a bipartite graph on $2k$ vertices $\{u_1,\ldots,u_k\} \cup \{v_1,\ldots,v_k\}$ such that $u_i$ is adjacent to $v_j$ if and only if $i \leq j$. Malliaris and Shelah \cite{MaSh} proved if a graph has no bi-induced copy of the half-graph on $2k$ vertices, then one can partition the vertex set into $\epsilon^{-O_k(1)}$ many parts such that every pair of parts is $\epsilon$-regular (there are no irregular pairs). Bi-inducing a half-graph is related to a notion of stability in model theory, and for this reason Malliaris and Shelah called their result a ``stable regularity lemma''. The above discussion summarizes some relevant results for graphs. We now turn our attention to subsets of groups and their associated Cayley graphs. Let $G$ be a finite abelian group, written additively. Let $A \subseteq G$. Consider the \emph{Cayley sum graph} formed by taking the elements of $G$ as vertices, where $x,y \in G$ are adjacent if $x+y \in A$ (we may end up with some loops; alternatively, we can consider a bipartite version of this construction). The VC dimension of the graph corresponds to the VC dimension of the collection of translates of $A$, which we simply call the \emph{VC dimension of $A$}, defined as \[ \vcdim A := \vcdim\{ A + x : x \in G\}. \] For a bipartite graph $F$ with vertex bipartition $U \cup V$, we say that a map $\phi\colon V(F) \to G$ \emph{bi-induces $F$ in $A$} if, for every $(u,v) \in U \times V$, $uv$ is an edge of $F$ if and only if $\phi(u) + \phi(v) \in A$. We say that $A$ has a \emph{bi-induced copy of $F$} if there exists a map $\phi$ as above that is injective on each of $U$ and $V$. Observe that $A$ has a bi-induced copy of $F$ if its VC dimension is large enough. To see this, first note that if no pair of vertices in $U$ have identical neighborhoods in $V$, and $A$ has VC dimension at least $\abs{V}$, then $A$ has a bi-induced copy of $F$. Indeed, we can construct $\phi$ by mapping $V$ to a subset of $G$ shattered by translates of $A$ (such a choice exists since $\vcdim A \ge \abs{V}$). Since $\phi(V)$ is shattered, for every $u \in U$, there is some $y_u \in G$ such that $(A - y) \cap \phi(V) = \phi(N(u))$. Let $\phi$ send $u$ to this $y_u$, for each $u \in U$. 
We obtain a map $\phi \colon V(F) \to G$ that bi-induces $F$, though this map may not be injective on $U$ (it is always injective on $V$) if some pairs of vertices of $U$ have identical neighborhoods, but this can be easily fixed\footnote{\label{ft:vc-bi-induce}Consider the bipartite graph $F_+$ obtained from $F$ by adding $\lceil \log_2 \abs{U} \rceil$ new vertices to the vertex set $V$, and add edges from the new vertices to $U$ so that no two vertices in $U$ have identical neighborhoods in $F_+$. By earlier arguments, if $\vcdim A \ge \abs{V} + \lceil \log_2 \abs{V}\rceil$, then $A$ necessarily contains an bi-induced copy of $F_+$, and hence a bi-induced copy of $F$.}. Green \cite{Green05} proved an arithmetic analogue of Szemer\'edi's regularity lemma for abelian groups. The statement is much simpler in the case of abelian groups of bounded exponents, which is the main focus of our paper (some remarks regarding general groups are given in the final section). For an abelian group $G$ and a subset $A \subseteq G$, a coset $H+x$ of a subgroup $H$ is called \emph{$\epsilon$-regular} if all the nontrivial Fourier coefficients of $A \cap (H+x)$, when interpreted as a subset of $H+x$, are at most $\epsilon$. For each $\epsilon>0$ and positive integer $r$, Green's arithmetic regularity lemma states that there is $K=K(r,\epsilon)$ such that the following holds. If $G$ has exponent at most $r$ and $A \subseteq G$, then there is a subgroup $H \subseteq G$ of index at most $K$ such that all but an $\epsilon$-fraction of the cosets of $H$ are $\epsilon$-regular. Recently, an arithmetic analog of the Malliaris--Shelah stable regularity lemma was proved by Terry and Wolf \cite{TeWo} for $G=\mathbb{F}_p^n$ with $p$ fixed. It was shown that if $A \subseteq G$ has no bi-induced copy of a half-graph on $2k$ vertices, then there is a subgroup $H$ of $G$ of index at most $e^{\epsilon^{-O_{k,p}(1)}}$ such that for every $x \in G$, one has either $\abs{A\cap (H+x)} \le \epsilon \abs{H}$ or $\abs{A\cap (H+x)} \ge (1-\epsilon) \abs{H}$. Here the subscripts on the $O_{k,p}(1)$ mean that the constant is allowed to depend on $k$ and $p$. The result was subsequently extended to general groups by Conant, Pillay, and Terry \cite{CPT}, who showed that for every finite group $G$, if $A \subseteq G$ has no bi-induced copy of the half-graph on $2k$ vertices, then there is a normal subgroup $H$ of $G$ of index $O_{k,\epsilon}(1)$ such that there is some union $S$ of $H$-cosets such that $\abs{A \Delta S} \le \epsilon \abs{H}$, where $A \Delta B = (A \setminus B) \cup (B \setminus A)$ denotes the symmetric difference. However, the general group version of the theorem~\cite{CPT} gives no quantitative bounds on the index of $H$ due to the model theoretic tools involved in its proof. We saw earlier that forbidding a fixed bi-induced bipartite graph implies bounded VC dimension. Our first main result generalizes a variant of Terry and Wolf's result to sets of bounded VC dimension, and gives bounds of polynomial order in $1/\epsilon$. Its proof can be found in Section~\ref{sec:reg}. \begin{theorem}[Regularity lemma] \label{thm:reg} Fix positive integers $r$ and $d$. If $G$ is a finite abelian group with exponent at most $r$, and $A \subseteq G$ has VC dimension at most $d$, then for every $\epsilon > 0$ there is a subgroup $H$ of $G$ of index at most $\epsilon^{-d -o(1)}$ such that $\abs{A\Delta S} \le \epsilon\abs{G}$ for some $S \subseteq G$ which is a union of cosets of $H$. 
Here $o(1)$ is some quantity that goes to zero as $\epsilon \to 0$, at a rate possibly depending on $r$ and $d$. \end{theorem} We also prove a removal lemma for bi-induced copies of a fixed bipartite graph. Let us first recall the classical graph removal lemma. We say that an $n$-vertex graph is \emph{$\epsilon$-far} from some property if one needs to add or delete more than $\epsilon n^2$ edges to satisfy the property. The triangle removal lemma\footnote{The removal lemma is often stated in the contrapositive, which better explains the name ``removal lemma'': if triangle density of a graph is at most $\delta(\epsilon)>0$, then the graph can be made triangle-free by deleting $\epsilon n^2$ edges} says that if an $n$-vertex graph is $\epsilon$-far from triangle-free, then its triangle density is at least $\delta(\epsilon)>0$. The original graph regularity proof \cite{RS} of the triangle removal lemma shows that we may take $1/\delta(\epsilon)$ to be a tower of 2's of height $\epsilon^{-O(1)}$, which was improved to height $O(\log (1/\epsilon))$ in \cite{Fox11}. It is known that there exists a constant $c>0$ such that the bound in the triangle removal lemma cannot be improved to $\delta = \epsilon^{-c \log (1/\epsilon)}$ (see \cite{CF13} for a survey on graph removal lemmas). There is also a removal lemma for induced subgraphs \cite{AFKS}, initially proved using a so-called \emph{strong regularity lemma}, though better bounds were later obtained in \cite{CF12}. An arithmetic analog of the graph removal lemma was first proved by Green~\cite{Green05} for ``complexity 1'' patterns such as $x+y+z=0$ using his arithmetic regularity lemma. Kr\'al', Serra, and Vena~\cite{KSV09} later showed that Green's arithmetic removal lemma can be deduced as a consequence of the graph removal lemma. More general arithmetic removal lemmas for linear systems were later proved as a consequence of the hypergraph removal lemma~\cite{KSV12,Sha10}. We refer to the references for precise statements. Note that the reduction from the arithmetic removal lemma to the (hyper)graph removal lemma fails for induced patterns. It remains open to find a general induced arithmetic removal lemmas \cite[Conjecture 5.3]{Sha10}. Our second main result gives an arithmetic analog of the removal lemma, with polynomial bounds, for bi-induced patterns. We say that $A\subset G$ is \emph{$\epsilon$-far from bi-induced-$F$-free} if $A'\subset G$ contains a bi-induced copy of $F$ whenever $\abs{A \Delta A'} \le \epsilon \abs{G}$. Here is our second main result, whose proof can be found in Section~\ref{sec:removal}. \begin{theorem}[Removal lemma] \label{thm:removal} Fix a positive integer $r$ and a bipartite graph $F$. Let $G$ be a finite abelian group with exponent at most $r$. For every $0 < \epsilon <1/2$, if $A\subseteq G$ is $\epsilon$-far from bi-induced-$F$-free, then the probability that a uniform random map $\phi \colon V(F) \to G$ bi-induces $F$ is at least $\epsilon^{O(\abs{V(F)}^3)}$. \end{theorem} We mention an application to property testing. The removal lemma gives a polynomial-time randomized sampling algorithm for distinguishing sets $A \subseteq G$ that are bi-induced-$F$-free from those that are $\epsilon$-far from bi-induced-$F$-free. Indeed, sample a random map $\phi \colon V(F) \to G$, and output YES if $\phi$ bi-induces $F$ and is injective on each vertex part of $F$, and otherwise output NO. If $A$ is bi-induced-$F$-free, then the algorithm always outputs NO. 
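A minimal sketch of this sampling tester, written in Python and assuming a membership oracle for $A$ and a routine implementing the group operation (the function and parameter names are ours, purely for illustration), is:
\begin{verbatim}
# Sketch of the one-sided sampling tester described above.
# Assumptions: `group` is a list of the elements of G, `add` implements the
# group operation, and `in_A` is a membership oracle for A. F is given by
# its bipartition (U, V) and edge set E as a set of index pairs.
import random

def sample_bi_induces(group, add, in_A, U, V, E):
    """One trial: pick a uniform random map V(F) -> G and check whether it
    bi-induces F in A. (Injectivity on U and V is not enforced here; for a
    large group a random map is injective with high probability.)"""
    x = {u: random.choice(group) for u in U}
    y = {v: random.choice(group) for v in V}
    for u in U:
        for v in V:
            if in_A(add(x[u], y[v])) != ((u, v) in E):
                return False
    return True

def tester(group, add, in_A, U, V, E, trials):
    """Report True if some trial exhibits a bi-induced copy of F,
    in which case A cannot be bi-induced-F-free."""
    return any(sample_bi_induces(group, add, in_A, U, V, E)
               for _ in range(trials))
\end{verbatim}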
On the other hand, if $F$ is $\epsilon$-far from bi-induced-$F$-free, then by the theorem above, the algorithm outputs YES with probability at least $\epsilon^{O_F(1)}$, provided that $G$ is large enough, so that $\phi$ is injective with high probability. We can then repeat the experiment $\epsilon^{-O_F(1)}$ times to obtain a randomized algorithm that succeeds with high probability. \section{Regularity lemma} \label{sec:reg} In this section, we prove Theorem~\ref{thm:reg}. We say that a set system $\mathcal{S}$ on a finite ground set $\Omega$ is \emph{$\delta$-separated} if $\abs{S \Delta T} \ge \delta \abs{\Omega}$ for all distinct $S,T \in \mathcal{S}$. We quote a bound on the size of a $\delta$-separated system. \begin{lemma}[Haussler's packing lemma~\cite{H95}] \label{lem:haussler} Let $d,\delta >0$. If $\mathcal{S}$ is a $\delta$-separated set system of VC dimension at most $d$, then $\abs{\mathcal{S}} \le (30/\delta)^d$. \end{lemma} By taking a maximal $\delta$-separated collection of translates of $A \subseteq G$, we deduce, below, that $A$ must be $\delta$-close to many of its own translates. \begin{lemma} \label{lem:invariant} Let $G$ be a finite abelian group, and $A \subseteq G$ a subset with VC dimension at most $d$, and $\delta > 0$. Then \[ \abs{\{x : \abs{A \Delta (A + x)} \le \delta \abs{G}\}} \ge (\delta/30)^d \abs{G}. \] \end{lemma} \begin{proof} Let $W$ be a maximal subset of $G$ such that $\abs{(A + w) \Delta (A + w')} > \delta \abs{G}$ for all distinct $w,w' \in W$. We have $\abs{W} \le (30/\delta)^d$ by Lemma~\ref{lem:haussler}. Let \[ B = \{x \in G : \abs{A \Delta (A+x)} \le \delta \abs{G}\}. \] Since $W$ is maximal, for every $x \in G$, there is some $w \in W$ such that $\abs{(A+x)\Delta (A+w)} \le \delta \abs{G}$, which implies $x - w \in B$. Hence $G = \bigcup_{w \in W}(B+w)$. Therefore $\abs{B} \ge \abs{G}/\abs{W} \ge (\delta/30)^d\abs{G}$. \end{proof} We quote a result from additive combinatorics. We use the following standard notation: $A+A = \{a+b : a,b\in A\}$, $A - A = \{a-b : a,b\in A\}$, and $k A = A + \cdots + A$ ($k$ times). \begin{theorem}[Bogolyubov--Ruzsa lemma for groups with bounded exponent] \label{thm:bog-exp} Let $G$ be an abelian group of exponent at most $r$, and $A \subseteq G$ a finite subset with $\abs{A+A}\le K\abs{A}$. Then $2A-2A$ contains a subgroup of $G$ of size at least $c_r(K)\abs{A}$ for some constant $c_r(K) > 0$. \end{theorem} The name ``Bogolyubov--Ruzsa lemma'' was given by Sanders~\cite{San12}, who proved the theorem with the current best bound $c_r(K) = e^{-O_r(\log^4 2K)}$ (see \cite[Theorem 11.1]{San12}). We refer the readers to the introductions of \cite{San12,San13} for the history of this result. A version of the theorem for $G = \ZZ$ was initially proved by Ruzsa~\cite{Ruz94} as a key step towards his proof of Freiman's theorem. The assertion of the polynomial Freiman--Ruzsa conjecture, a central open problem in additive combinatorics, would follow from an improvement of the bound to $c_r(K) = K^{-O_r(1)}$. In our next lemma, we start from the conclusion of Lemma~\ref{lem:invariant}, which gives us a large set $B$ such that $A \approx A+x$ for all $x \in B$. Consider the sequence $B, 2B, 4B, 8B, \dots$. Since $B$ is large, the size of $2^iB$ cannot keep on growing, so we can find a set $B' = 2^iB$ with small doubling $\abs{B'+B'} \le K\abs{B'}$, and $i$ not too large. 
Theorem~\ref{thm:bog-exp} then implies that $2B'-2B'$ contains a large subgroup, in which every element $x$ satisfies $A \approx A +x$, which is close to what we need. \begin{lemma} \label{lem:reg-key} Fix a positive integer $r$. Let $G$ be a finite abelian group of exponent at most $r$. Let $0 < \delta < 1/2$, $C > 0$, and $A \subseteq G$. Let $B = \{x \in G : \abs{A\Delta(A+x)}\le \delta \abs{G}\}$. Suppose $\abs{B} \ge \delta^C \abs{G}$. Then there exists a subgroup $H$ of $G$ with $\abs{H} \ge \delta^{o(1)} |B|$ such that $\abs{A \Delta (A+x)} \le \delta^{1-o(1)} \abs{G}$ for all $x \in H$, and furthermore there exists a union $S$ of $H$-cosets such that $\abs{A \Delta S} \le \delta^{1-o(1)} \abs{G}$. Here $o(1)$ is a quantity that goes to zero as $\delta \to 0$, at a rate that may depend on $r$ and $C$. \end{lemma} \begin{proof} Let $K = K(\delta) > 1$ to be decided. We cannot have $|2^{i+1} B| > K |2^{i} B|$ for every $0 \le i \le \log_K(|G|/|B|)$ since otherwise we would have $|2^i B| > |G|$ for some $i$, which is impossible as $2^i B$ is a subset of $G$. Thus $|2^{i+1} B| \le K |2^{i} B|$ for some $i \le \log_K(|G|/|B|) \le C \log_K (1/\delta)$, and letting $\ell = 2^i$, we have \begin{equation} \label{eq:2l-K} |2\ell B| \le K|\ell B| \quad \text{with} \quad \ell \le 2^{C \log (1/\delta)/\log K} = \delta^{- O(1/\log K)}. \end{equation} Since $|(A + x) \Delta A | \le \delta |G|$ for all $x \in B$, we have, by the triangle inequality, \[ |(A + x + y) \Delta A| \le |(A + x + y) \Delta (A + y)| + |(A + y) \Delta A| = |(A + x) \Delta A| + |(A + y) \Delta A| \text{ for all } x,y \in G. \] Thus \begin{equation} \label{eq:2l-2l} |A \Delta (A + x) | \le 4\ell \delta \abs{G} \quad \text{ for all } x \in 2\ell B - 2\ell B. \end{equation} By Theorem~\ref{thm:bog-exp} and \eqref{eq:2l-K}, $2\ell B - 2\ell B$ contains a subgroup $H$ of $G$ with $\abs{H} \ge c_r(K) \abs{\ell B} \ge c_r(K) \abs{B}$. This would complete the proof of the first claim in the lemma provided that $K = K(\delta) \to \infty$ slowly enough as $\delta \to 0$ so that $c_r(K) = \delta^{o(1)}$ (then $\ell \le \delta^{-O(1/\log K)} = \delta^{-o(1)}$). Concretely, Theorem~\ref{thm:bog-exp} with Sander's $c_r(K) = e^{-O_r(\log^4 2K)}$ allows us to take $K(\delta) = \exp((\log 1/\delta)^{1/5})$, say, so that all the $o(1)$'s in the exponents decay as $(\log(1/\delta))^{-1/5}$. For the second claim, let $S$ be the union of all $H$-cosets $y+H$ with $\abs{A \cap (y+H) } \ge \abs{H}/2$. Then \begin{align*} \abs{A \Delta S} &= \sum_{y \in G/H} \min\set{\abs{A \cap (y+H)}, \abs{H} - \abs{A \cap (x+H)}} \\ &\le \sum_{y \in G/H} \frac{2}{\abs{H}}\abs{A \cap (y+H)}( \abs{H} - \abs{A \cap (y+H)}) \\ &= \frac{1}{\abs{H}} \sum_{x \in H} \abs{A \Delta (A+x)} \quad \text{\footnotesize[counting pairs in $A \times (G\setminus A)$ lying in the same $H$-coset]} \\ &\le 4\ell \delta \abs{G} = \delta^{1-o(1)} \abs{G}. \quad \text{\footnotesize[by \eqref{eq:2l-2l}]} \end{align*} \end{proof} The regularity lemma, Theorem~\ref{thm:reg}, then follows immediately after combining Lemmas~\ref{lem:invariant} and \ref{lem:reg-key}. \medskip Instead of applying the Bogolyubov--Ruzsa lemma as we do above, it is also possible to prove Lemma~\ref{lem:reg-key} using Freiman's theorem for groups of bounded exponent: \begin{theorem} [Ruzsa~\cite{Ruz99}] \label{thm:Freiman-groups} If $A$ is a finite subset of an abelian group of exponent at most $r$ such that $\abs{A+A} \le K\abs{A}$, then $A$ is contained in a subgroup of size $O_{r, K}(1)\abs{A}$. 
\end{theorem} At the point in the proof of Lemma~\ref{lem:reg-key} where we apply Theorem~\ref{thm:bog-exp}, we can instead apply Theorem~\ref{thm:Freiman-groups} to contain $\ell B$ inside a subgroup of size $\delta^{-o(1)}\abs{\ell B}$. Now we apply a corollary of Kneser's theorem. \begin{theorem}[Kneser's theorem~\cite{Kn}; see~{\cite[Theorem 5.5]{TV}}] Let $G$ be an abelian group and $A, B$ finite non-empty subsets. If $|A| + |B| \leq |G|$ then there is a finite subgroup $H$ of $G$ such that \[ \abs{A + B} \ge \abs{A + H} + \abs{B + H} - \abs{H} \ge \abs{A} + \abs{B} - \abs{H}. \] The subgroup $H$ can be taken to be the stabilizer of $A+B$: \[ H = \{ g \in G : g + ( A + B ) = ( A + B ) \}. \] \end{theorem} \begin{corollary} If $G$ is an abelian group, $t$ is a positive integer, and $A \subset G$ has $|A| \geq |G|/t$ and $A$ generates $G$, then $2t A = G$. \end{corollary} \begin{proof} For any $i$ such that $(i+1) A \ne G$, applying Kneser's theorem to the sets $iA$ and $A$ gives us a subgroup $H$ so that $|(i+1) A| \ge |iA+H| + |A+H| - |H| \ge |iA| + |A|/2$ (since $A$ generates $G$, $A+H$ is a union of at least two cosets of $H$, so $|H| \le |A+H|/2$ and $|A+H| \geq |A|$). Iterating gives $|2tA| \ge t |A| \ge |G|$. \end{proof} Let us continue with our discussion of the alternative approach to proving Lemma~\ref{lem:reg-key}. Since $\ell B$ occupies a $\delta^{o(1)}$-fraction of some subgroup, by the above corollary, $\ell' B$ is a subgroup (playing the role of $H$ in the first proof) for some $\ell' = \delta^{-o(1)}\ell$. From this point we can proceed as the rest of the proof of Lemma~\ref{lem:reg-key}. \section{A strengthened regularity lemma} In the next section, we prove a removal lemma for bi-induced patterns. The regularity lemma we stated in Theorem~\ref{thm:reg} seems not quite strong enough to establish the removal lemma. Below we prove a strengthening, where the VC dimension hypothesis is weakened to a more robust one. Instead of requiring that $A$ has bounded VC dimension, we will ask that, with probability at least 0.9, say, the VC dimension of the collection of translates of $A$ is bounded if we restrict the ground set $G$ to a random set. We state the result below in the form of two alternatives: either $A$ has high VC dimension when sampled, or it satisfies a regularity lemma with polynomial bounds. \begin{proposition}[Regularity lemma with robust VC dimension hypothesis] \label{prop:robust-reg} Fix positive integers $r$ and $d$. Let $G$ be a finite abelian group of exponent at most $r$. Let $A \subseteq G$. One of the following must be true for every small $\epsilon>0$: \begin{enumerate} \item[(a)] For some $k = \epsilon^{-d-o(1)}$, if $X$ and $Y$ are random $k$-element subset of $G$, then we have $\vcdim\{(A+ x) \cap Y : x \in X\} > d$ with probability at least $0.9$. \item[(b)] There exists a subgroup $H$ of $G$ of index at most $\epsilon^{-d-o(1)}$ such that $\abs{A \Delta S} \le \epsilon\abs{G}$ for some union $S$ of $H$-cosets. \end{enumerate} Here $o(1)$ refers to a quantity that goes to zero as $\epsilon \to 0$, at a rate that can depend on $r$ and $d$. \end{proposition} Recall that Lemma~\ref{lem:invariant} tells us that if $\vcdim A \le d$, then $B = \{x \colon \abs{A \Delta (A+x)} \le \delta \abs{G}\}$ has size at least $(\delta/30)^d\abs{G}$. 
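For intuition, this set $B$ of approximate periods can be computed by brute force in a small group; the toy sketch below does so in $(\ZZ/2\ZZ)^n$, with the choices of $A$, $n$ and $\delta$ purely illustrative:
\begin{verbatim}
# Toy sketch: compute B = {x : |A symm.diff. (A + x)| <= delta |G|} by brute
# force in G = (Z_2)^n, represented as integers with XOR as the group law.
# The set A, the dimension n and delta are arbitrary illustration choices.
n, delta = 8, 0.25
G = range(2 ** n)
A = {g for g in G if bin(g).count("1") <= 2}   # a Hamming ball, say

def shift(S, x):
    return {a ^ x for a in S}                  # S + x in (Z_2)^n

B = [x for x in G
     if len(A ^ shift(A, x)) <= delta * 2 ** n]  # symmetric difference
print(len(B), "approximate periods out of", 2 ** n)
\end{verbatim}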
We will derive a similar bound for $B$ under the weaker hypothesis, namely the negation of (a), from which we can deduce (b) using Lemma~\ref{lem:reg-key} as in the proof of the previous regularity lemma Theorem~\ref{thm:reg}. \begin{lemma} \label{lem:ind-set} Let $k \le n/2$ be positive integers. In an $n$-vertex graph with maximum degree at most $n/k$, a random $k$-element subset of the vertices contains an independent set of size at least $k/4$ with probability at least $1 - e^{-k/8}$. \end{lemma} \begin{proof} Let $v_1, \dots, v_k$ be a sequence of $k$ vertices chosen uniformly at random without replacement. Let $I$ be the independent set formed greedily by, starting with the empty set, putting each $v_i$, sequentially as $i=1, 2, \dots$, into $I$ if doing so keeps $I$ an independent set. During the process, when at most $k/4$ elements are added to $I$, the probability that a new $v_i$ is added to $I$ is at least $1 - \frac{(k/4)(n/k)}{n-k} \ge \frac{1}{2}$, since among the remaining $n-k$ vertices, at most $(k/4)(n/k)$ of them are adjacent to vertices already added to $I$ at this point. It follows that $|I|$ stochastically dominates $\min\{X, k/4\}$, where $X$ is distributed as $\operatorname{Binomial}(k, 1/2)$. Thus $\PP(|I| < k/4) \le \PP(X < k/4) \le e^{-k/8}$ by the Chernoff bound. Therefore, $\{v_1, \dots, v_k\}$ contains an independent set $I$ of size at least $k/4$ with probability at least $1 - e^{-k/8}$. \end{proof} We recall a basic result on VC dimension. \begin{theorem}[Sauer--Perles--Shelah theorem~\cite{Sau,She,VC}] \label{thm:vc} If $\mathcal{S}$ is a set system on a ground set of $n$ elements with VC dimension at most $d$, then $\abs{\mathcal{S}} \le \sum_{i=0}^d \binom{n}{i} \le 2n^d$. \end{theorem} \begin{lemma} \label{lem:sep-sample} Let $0 < \delta < 1$, and let $m$ and $d$ be positive integers. Let $\mathcal{S}$ be a $\delta$-separated set system. Suppose that for a uniformly random $m$-element subset $M$, the restricted set system $\mathcal{S}|_M := \{T \cap M : T \in \mathcal{S}\}$ has VC dimension at most $d$ with probability at least $3m^{2d}(1-\delta)^m$. Then $\abs{\mathcal{S}} \le 2m^d$. \end{lemma} \begin{proof} Assume for contradiction that there exists such a set system with $\abs{\mathcal{S}} = 2m^d + 1$. Let $n$ be the size of the ground set. We have $\abs{S \Delta T} \ge \delta n$ for all distinct $S,T \in \mathcal{S}$. Then, for each pair of distinct $S,T \in \mathcal{S}$, with probability at least $1 - (1-\delta)^m$, $M$ intersects $S \Delta T$, so that $S$ and $T$ remain distinct when restricted to $M$. Taking a union bound over all pairs of sets in $\mathcal{S}$, we see that with probability at least $1 - \binom{\abs{\mathcal{S}}}{2}(1-\delta)^m \ge 1 - 3m^{2d}(1-\delta)^m$, all sets in $\mathcal{S}$ remain distinct when restricted to $M$, in which case $\vcdim(\mathcal{S}|_M) > d$ by Theorem~\ref{thm:vc} as $\abs{\mathcal{S}} > 2m^d$, a contradiction to the hypothesis. \end{proof} \begin{lemma} \label{lem:robust-vc-to-ball} Let $m$ and $d$ be positive integers and $0 < \delta < 1$. Let $G$ be a finite abelian group of order at least $24m^d$. Let $X$ be a random $12m^d$-element subset of $G$, and $Y$ a random $m$-element subset of $G$. If $\vcdim\{(A+x) \cap Y : x \in X\} \le d$ with probability at least $e^{-m^d} + 3m^{2d}(1-\delta)^m$, then $B = \{x : \abs{A \Delta(A+x)} \le \delta \abs{G}\}$ has at least $\abs{G}/(12m^d)$ elements. \end{lemma} \begin{proof} Suppose, on the contrary, that $\abs{B} < \abs{G}/(12m^d)$. 
Consider the Cayley graph on $G$ generated by $B \setminus \{0\}$, i.e., there is an edge between $x,y \in G$ whenever $x-y \in B$. Applying Lemma~\ref{lem:ind-set} with $k = 12m^d$ to this graph, we find that with probability at least $1-e^{-m^d}$, a random $12m^d$-element subset $X \subseteq G$ contains an independent set $I\subseteq X$ with $\abs{I} \ge 3m^d$ with respect to this graph, i.e., $\abs{(A + x) \Delta (A+y)} > \delta \abs{G}$ for all distinct $x,y \in I$. It follows, by the union bound and averaging, that we can fix such a set $X$ so that $\vcdim\{(A+x)\cap Y : x \in X\} \le d$ with probability at least $3m^{2d}(1-\delta)^m$ for the random $m$-element set $Y \subseteq G$. Note that $\{A + x : x \in I\}$ is a $\delta$-separated set system with ground set $G$. Furthermore, $\vcdim\{(A+x)\cap Y : x \in I\} \le \vcdim\{(A+x)\cap Y : x \in X\} \le d$ with probability at least $3m^{2d}(1-\delta)^m$. So by Lemma~\ref{lem:sep-sample}, we have $\abs{I} \le 2m^d$, which contradicts the bound $\abs{I} \ge 3m^d$ above. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:robust-reg}] Let $0 < \delta <1/2$. Consider $B = \{x : \abs{A \Delta (A+x)} \le \delta \abs{G}\}$. Choose $m = C \delta^{-1}\log(1/\delta)$ where $C$ is a sufficiently large constant. Then $e^{-m^d} < 1/20$ and $3m^{2d}(1-\delta)^m < 2m^{2d} e^{-\delta m} < 1/20$. If $\abs{B} < \abs{G}/(12m^d)$, then by Lemma~\ref{lem:robust-vc-to-ball}, if $X$ and $Y$ are random $2m^d$-element subsets of $G$, then $\vcdim\{(A+x) \cap Y : x \in X\} > d$ with probability at least $0.9$. On the other hand, if $\abs{B} \ge \abs{G}/(12m^d)$, then by Lemma~\ref{lem:reg-key} there exists a subgroup $H$ of $G$ with $\abs{H} \ge \delta^{o(1)} |B| \ge \delta^{d + o(1)} |G|$ such that $\abs{A \Delta S} \le \delta^{1-o(1)} \abs{G}$ for some union $S$ of $H$-cosets. By choosing $\delta = \epsilon^{1+o(1)}$ so that $\abs{A \Delta S} \le \epsilon \abs{G}$, we obtain the desired result. \end{proof} \section{Removal lemma} \label{sec:removal} In this section, we prove the removal lemma, Theorem~\ref{thm:removal}, for bi-induced patterns. The result is analogous to the induced removal lemma \cite{AFKS}, which can be proved using a strong version of the graph regularity lemma. The usual way of proving the strong graph regularity lemma involves iteratively applying the graph regularity lemma. For our arithmetic setting, as we are concerned with bi-induced patterns, the situation is a bit easier: we simply apply the regularity lemma, Proposition~\ref{prop:robust-reg}, twice, where the second time we choose a smaller error parameter compared to the first time. If option (a) holds either time, then we can extract a bi-induced copy of $F$ from each sample with high VC dimension. Otherwise, (b) holds, and we can modify $A$ by a small amount to $A'$, which must also have a bi-induced copy of $F$ (since $A$ is $\epsilon$-far from bi-induced-$F$-free). The set $A'$ is a union of $H$-cosets where $H$ is a subgroup of bounded index, and we will show that a single bi-induced copy of $F$ in $A'$ leads to many copies. \begin{proof}[Proof of Theorem \ref{thm:removal}] Let $V(F) = U \cup V$ be the vertex bipartition of $F$, where $\abs{U} \ge \abs{V}$. Let $d =\abs{U} + \lceil \log_2\abs{U} \rceil$. We may assume that $\abs{G} \ge \epsilon^{-\Omega(\abs{V(F)}^2)}$, or else the conclusion is automatic from just a single bi-induced copy of $F$ in $A$.
Suppose, for some $k = \epsilon^{-O(\abs{V(F)})}$, with probability at least 0.9, random $k$-element subsets $X, Y \subseteq G$ satisfy $\vcdim\{(A + x) \cap Y : x \in X\} > d$, in which case there exist injective maps $U \to X$ and $V \to Y$ that bi-induce $F$ in $A$ by footnote~\ref{ft:vc-bi-induce}. Then the probability that random injections $U \to G$ and $V \to G$ bi-induce $F$ is at least $0.9 \binom{k}{|U|}^{-1}\binom{k}{|V|}^{-1} \ge 0.9 k^{-|U|-|V|} \ge \epsilon^{O(\abs{V(F)}^2)}$, since we can choose the random injection $U \to G$ by first choosing the random $k$-element subset $X \subset G$ and then taking a random injection $U \to X$, and similarly with $V$. With probability $1-O_F(\abs{G}^{-1})$ a random map $V(F) \to G$ is injective on $U$ and $V$, so it bi-induces $F$ with probability at least $\epsilon^{O(\abs{V(F)}^2)}$. We apply Proposition~\ref{prop:robust-reg} with two different parameters $\epsilon_1 = \epsilon/10$ and some $\epsilon_2$ to be specified later. If option (a) is true in either case, then the previous paragraph implies the conclusion of the Theorem. Otherwise, we obtain subgroups $H_1$ and $H_2$ of $G$, such that for each $i \in \{1,2\}$, one has $h_i := \abs{G}/\abs{H_i} \le \epsilon_i^{-d-o(1)}$ and there exists some union $S_i$ of $H_i$-cosets satisfying $\abs{A \Delta S_i} \le \epsilon_i\abs{G}$. Furthermore, we choose $\epsilon_2$ so that $h_1 \epsilon_2 \abs{U}\abs{V} = 1/8$. In particular, $\epsilon_2 \ge \epsilon^{d + o(1)}$. Let $H = H_1 \cap H_2$. So $|G|/|H| \le h_1h_2 \le \epsilon^{-d^2-d-o(1)}$. We say that a coset $x + H$ of $H$ is \emph{good} if $\abs{A \Delta (x+H)}/\abs{H}$ is within $\eta := 1/(2|U||V|)$ of $0$ or $1$, and \emph{bad} otherwise. At most an $\epsilon_2/\eta$-fraction of $H$-cosets are bad, since otherwise bad $H$-cosets would together contribute more than $(\epsilon_2/\eta) \eta \abs{G}$ elements to $A \Delta S_2$ as $S_2$ is also a union of $H$-cosets, but this is impossible as $\abs{A \Delta S_2}\le \epsilon_2\abs{G}$. Pick an arbitrary subgroup $K$ of $G$ containing exactly one element from each coset of $H_1$ (so that $G = H_1 \oplus K$ as a direct sum). Let $z \in H_1$ be chosen uniformly at random. Then $z + K + H$ is a union of $\abs{K} = h_1$ many $H$-cosets. For each $y \in K$, the random $H$-coset $z+ y + H$ is uniformly chosen from all $H$-cosets in $y+H_1$. Applying the union bound, we see that the probability that $z + K + H$ contains a bad $H$-coset is at most $h_1 \epsilon_2/\eta < 2h_1\epsilon_2 \abs{U}\abs{V} < 1/2$. Let $A' \subseteq G$ be the union of $H_1$-cosets $y + H_1$, ranging over all $y \in K$ with $\abs{A \cap (z + y + H)} \ge \abs{H}/2$. Since $A'$ and $S_1$ are both unions of $H_1$-cosets, we can apply linearity of expectation over $H_1$-cosets to deduce that $\EE[\abs{A' \Delta S_1}] \le 2\abs{A \Delta S_1} \le 2\epsilon_1\abs{G}$, and hence $\EE[\abs{A' \Delta A}] \le \EE[\abs{A' \Delta S}] + \abs{A \Delta S} \le 3\epsilon_1\abs{G}$. Thus, with probability at least $1/2$, one has $\abs{A' \Delta A}/\abs{G} \le 6\epsilon_1 < \epsilon$. Therefore there is some instance such that $\abs{A' \Delta A} < \epsilon \abs{G}$, and $z + K + H$ is a union of good $H$-cosets. Since $A$ is $\epsilon$-far from bi-induced-$F$-free, $A'$ contains a bi-induced-copy of $F$. So there exist $x'_u, y'_v \in G$ over $u \in U$ and $v \in V$ such that for all $(u,v)\in U \times V$, one has $x'_u + y'_v \in A'$ if and only if $uv\in E(F)$. 
Since $A'$ is a union of $H_1$-cosets, and there is an element of $K$ in every $H_1$-coset, we may assume that $x'_u \in K$ for each $u \in U$ and $y'_v \in z + K$ for each $v \in V$. Consider independent and uniform random elements $x_u \in x'_u + H$ for each $u \in U$, and $y_v \in y'_v + H$ for each $v \in V$. For each $(u,v) \in U\times V$, the random element $x_u + y_v$ is distributed uniformly in the $H$-coset $x'_u + y'_v + H$, which is a good $H$-coset since $x'_u + y'_v \in z + K$ as $K$ is a subgroup. So with probability at least $1-\eta$, one has $x_u+y_v \in A$ if and only if $x'_u + y'_v \in A'$, which in turn occurs if and only if $uv \in E(F)$. Taking a union bound over $(u,v)\in U \times V$, the following holds with probability at least $1 - \abs{U}\abs{V} \eta= 1/2$: for every $(u,v)\in U\times V$, one has $x_u + y_v \in A$ if and only if $uv \in E(F)$. Since each $x_u$ and $y_v$ is restricted to a single $H$-coset, it follows that a uniform random map $\phi \colon V(F) \to G$ bi-induces $F$ with probability at least $\frac12 (\abs{H}/\abs{G})^{\abs{V(F)}} \ge \epsilon^{(d^2+d+o(1))\abs{V(F)}}$. \end{proof} \section{Concluding remarks} We conjecture that the result can be extended to general groups, not necessarily abelian. \begin{conjecture} Fix positive integers $r$ and $d$. Let $G$ be a group of exponent at most $r$, and $A \subseteq G$ a subset with VC dimension at most $d$. Then, for every $\epsilon > 0$, there is a normal subgroup $H$ of $G$ of index at most $\epsilon^{-O_{r,d}(1)}$ so that $\abs{A \Delta S} \le \epsilon \abs{G}$ for some union $S$ of $H$-cosets. \end{conjecture} A special case of the conjecture, though with a somewhat stronger but non-quantitative conclusion, where one forbids a half-graph of fixed size (instead of assuming bounded VC dimension), was recently established by Conant, Pillay, and Terry~\cite{CPT} using model theoretic tools. Note that the bounded exponent hypothesis in the conjecture above cannot be dropped. Indeed, if $G = \ZZ/p\ZZ$ with $p$ prime, and $A = \{1, 2, \dots, \lfloor p/2 \rfloor\}$, then $\vcdim A \le 3$, while $G$ has no non-trivial subgroups, so the conclusion of the conjecture is false. Nonetheless, there may be regularity lemmas using other structures in addition to subgroups. An example of such a result is discussed later in this section. We also conjecture that the removal lemma should generalize to arbitrary groups as well, although it seems to be open even for general abelian groups. \begin{conjecture} Fix a bipartite graph $F$. Let $G$ be a finite group. For every $0 < \epsilon < 1/2$, if $A \subseteq G$ is $\epsilon$-far from bi-induced-$F$-free, then the probability that a uniform random map $\phi \colon V(F) \to G$ bi-induces $F$ is at least $\epsilon^{O_F(1)}$. \end{conjecture} It seems likely that the theory developed by Breuillard, Green, and Tao \cite{BGT1,BGT2} on the structure of approximate groups should be useful in the case of nonabelian groups. We hope to study these problems in the future. \medskip In classical results in additive combinatorics, such as Freiman's theorem, generalized progressions and Bohr sets often play the role of subgroups when the ambient group does not have many subgroups.
For example, in Green and Ruzsa's~\cite{GR} extension of Freiman's theorem to general abelian groups, the basic structural objects are \emph{coset progressions}, which are sets of the form $P = Q + H$, where $H$ is a subgroup, and $Q$ is some generalized arithmetic progression $\{x_0 + i_1 x_1 + \cdots + i_d x_d : 0 \le i_j < \ell_j \text{ for each } j\}$, and the sum $Q+H$ is a direct sum in the sense that every element in $Q+H$ can be written as $q+h$ with $q\in Q$ and $h \in H$ in a unique way. We say that the progression is \emph{proper} if all the terms $x_0 + i_1 x_1 + \cdots + i_d x_d$ in $Q$ are distinct. We call $d$ the \emph{dimension} of the progression. The Bogolyubov--Ruzsa lemma, Theorem~\ref{thm:bog-exp}, holds for general abelian groups (see~\cite[Section 5]{GR}; also see \cite{San12}). \begin{theorem}[Bogolyubov--Ruzsa lemma for general abelian groups] \label{thm:bog} Let $G$ be an abelian group, and $A \subseteq G$ a finite set such that $|A + A| \le K|A|$. Then $2A - 2A$ contains a proper coset progression $P$ of dimension at most $d(K)$ and size at least $c(K)|A|$, for some constants $c(K), d(K) > 0$. \end{theorem} By modifying the proof of Theorem~\ref{thm:bog-exp} so that we apply Theorem~\ref{thm:bog} instead of \ref{thm:bog-exp}, we obtain an analog of the first claim in Theorem~\ref{thm:bog-exp} for general finite abelian groups. We are not sure if some variant of this result can be used to prove a removal lemma. \begin{proposition} For every $\epsilon > 0$ and $D = D(\epsilon) \to \infty$ as $\epsilon \to 0$, if $G$ is a finite abelian group, and $A \subseteq G$ has VC dimension at most $d$, then there exist some proper coset progression $P$ of dimension at most $D$ and size $|P| \ge \epsilon^{d + o(1)} |G|$, such that $|(A + x) \Delta A|\le \epsilon |G|$ for all $x \in P$. Here $o(1)$ is some quantity that goes to zero as $\epsilon \to 0$, at a rate depending on $d$ and $D$. \end{proposition} We conclude with the following related question that we do not know how to answer (even for $k=2$). An affirmative answer would strengthen Szemer\'edi's theorem. \begin{question} Let $k$ be a positive integer and $\delta > 0$. Let $p$ be a sufficiently large prime, and $A \subseteq \ZZ/p\ZZ$ with $\delta p \le \abs{A} \le (1-\delta)p$. Can we always find a $2k$-term arithmetic progression in $\ZZ/p\ZZ$ where the first $k$ terms lie in $A$ and the last $k$ terms lie outside of $A$? \end{question} If $p$ had a small prime factor, then taking $A$ to be a non-trivial subgroup of $\ZZ/p\ZZ$ gives a counterexample. To see the relevance to the rest of this paper, observe that such a $2k$-term arithmetic progression would bi-induce a half-graph on $2k$ vertices. For example, if $x - (k-1)d, x-(k-2)d, \dots, x \in A$ and $x+d, \dots, x + kd \notin A$, then $x_i = x - id$ and $y_j = jd$ have the property that, for $1 \le i, j\le d$, $x_i + y_j \in A$ if and only if $j \le i$.
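For small parameters the question can be explored by brute force; the sketch below searches $\ZZ/p\ZZ$ for such a progression, with $p$, $k$ and $A$ chosen purely for illustration:
\begin{verbatim}
# Toy sketch: search Z/pZ for a 2k-term AP whose first k terms lie in A
# and whose last k terms lie outside A. p, k and A are illustration choices.
def find_split_ap(p, k, A):
    A = set(A)
    for x in range(p):            # x is the k-th term of the progression
        for d in range(1, p):     # common difference
            first = all((x - i * d) % p in A for i in range(k))
            last = all((x + j * d) % p not in A for j in range(1, k + 1))
            if first and last:
                return x, d
    return None

p, k = 31, 3
A = set(range(1, p // 2 + 1))     # the interval {1, ..., floor(p/2)}
print(find_split_ap(p, k, A))     # a witness (x, d), or None
\end{verbatim}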
\section{Introduction} The presence of polycyclic aromatic molecules in the interstellar medium (ISM) is established by the ubiquitous observations of the unidentified infrared (UIR) emission bands at 3.3, 6.2, 7.7, 8.6, 11.2, 12.7 and 16.4 $\mu \rm m$ (3030, 1610, 1280, 1150, 885, 787 and 609 cm$^{-1}$) towards various Galactic and extra-galactic sources (Gillet et al. 1973; Cohen et al. 1986; Li 2020). These bands have been ascribed to infrared fluorescence of polycyclic aromatic hydrocarbon (PAH) molecules excited by UV and optical photons (Leger \& Puget 1984; Allamandola et al. 1985). The UIR bands exhibit diversity in terms of peak position, width and intensity depending on the local environment of the ISM (Hony et al. 2001; Peeters et al. 2003; Sakon et al. 2004; Tielens 2008). These variations suggest the presence of varying forms of PAHs. Neutral PAHs have been proposed to produce strong 3.3 and 11.2 $\mu \rm m$ bands whereas the emissions at 6.2, 7.7 and 8.6 $\mu \rm m$ have been attributed to ionized PAHs (Schutte et al. 1993; Peeters et al. 2002; Tielens 2008; Schmidt et al. 2009). Besides this, several complex organics having disorganized structures show the potential to emit the UIR bands. These include Hydrogenated Amorphous Carbon (HAC) (Jones et al. 1990; Jones et al. 2017), Quenched Carbonaceous Composites (QCCs) (Sakata et al. 1987), coal (Guillois et al. 1996; Papoular et al. 1989) and Mixed Aromatic/Aliphatic Organic Nanoparticles (MAONs) (Kwok \& Zhang 2011; Kwok \& Zhang 2013). The 5--9 $\mu \rm m$ region shows major variations from source to source with bands at 5.2, 5.7, 6.0, 6.2, 6.8, 7.7 and 8.6 $\mu \rm m$, in which the 7.7 $\mu \rm m$ band is stronger than others (Tielens et al. 2008). Peeters et al. (2002) found that the 7.7 $\mu \rm m$ band consists of subfeatures at $\sim$7.6 and $\sim$7.8 $\mu \rm m$, while the peak position of the 6.2 $\mu \rm m$ band lies either at $\sim$6.2 $\mu \rm m$ or at a slightly redder position of $\sim$6.3 $\mu \rm m$. The varying size of PAHs can successfully explain the emission at 6.3 $\mu \rm m$, while polycyclic aromatic nitrogen heterocycle (PANH) cations have been attributed to reproduce the 6.2 $\mu \rm m$ feature (Hudgins et al. 2005). Importance of PANHs has been noticed in photon-dominated regions (PDRs) as spectra of their cations are required to fit the 6.2 and 11.0 $\mu \rm m$ observed features towards NGC 7023 (Boersma et al. 2013). Nitrogen has large electronegativity that affects the dipole derivatives of C-C stretching modes and changes the IR characteristic of these modes. This induces the shifting of the 6.2 $\mu \rm m$ band towards higher frequency, which becomes prominent when N is incorporated deeper in PAHs (Hudgins et al. 2005; Tielens 2008). In the ISM, the detection of benzo-nitrile (McGuire et al. 2018) and cyano-naphthalene (McGuire et al. 2021) strengthens the idea of existence of PANHs in the ISM. PAH and PANH cations (same structure and geometry) are found to be equally efficient in reaction with atomic H, and association of H at the nitrogen is more exothermic than association at the carbon (Demarais et al. 2014). If the lone-pair electrons of the nitrogen are not delocalized in the $\pi$ system, N atom has chance to make bond with ionized atomic H (Alvaro Galu\'e et al. 2010; Hudgins et al. 2005), leading to the formation of N-H and N-H$_{2}$. The high proton affinity of PANHs may also help in the formation of their protonated forms. 
The electronic spectra of protonated PANHs have also been studied experimentally for comparison with the diffuse interstellar bands (DIBs) (Noble et al. 2015). \begin{figure*} \centering \includegraphics[width=.17\textwidth]{c1011.eps}\hfill \includegraphics[width=.13\textwidth]{c161.eps}\hfill \includegraphics[width=.16\textwidth]{c201.eps}\hfill \includegraphics[width=.18\textwidth]{c241.eps}\hfill\\ \includegraphics[width=.32\textwidth]{c321.eps}\hspace{15mm} \includegraphics[width=.39\textwidth]{c481.eps}\\ \includegraphics[width=.335\textwidth]{c541.eps} \caption{PAHs with N-substituted variants studied in this work. The black dots show the site of nitrogen (\enquote*{a} and \enquote*{b}); for \enquote*{a}, CH is replaced by N at the periphery (exo N-PAH) and for site \enquote*{b}, N substitutes a C inside the structure (endo N-PAH). Two other possibilities are considered for H bonding with the peripheral N (site \enquote*{a}) --- a single H atom bonded with N (NH-PAH) and two H atoms bonded with N (NH$_2$-PAH). H$^+$ shows the protonation site.} \label{fig:my_label} \end{figure*} IR spectra of PANHs have been reported in the context of the UIR bands, experimentally for small neutrals and cations by Mattioda et al. (2003) and for protonated forms (with H$^{+}$ at N) by Alvaro Galu\'e et al. (2010), and theoretically for large cations by Hudgins et al. (2005). Theoretical IR spectra of PAHs with CN, NH and NH$_2$ as side groups have been studied for naphthalene (C$_{10}$H$_{8}$) and anthracene (C$_{14}$H$_{10}$) (Bauschlicher et al. 1998). That study concludes that PAHs with N-containing side groups may not be abundant and will not contribute significantly to the UIR bands, based on the absence of N-H and C-N stretching features in astronomical spectra, while N incorporated in the PAH ring structure, as proposed by Hudgins et al. (2005), Alvaro Galu\'e et al. (2010) and Noble et al. (2015), remains a potentially promising carrier of the UIR bands. In this paper, calculations have been done for four variants of these PAHs with N, NH and NH$_2$ substitutions in neutral, cationic and protonated forms to present a systematic study of the effects of nitrogen incorporation in a large sample of PAHs. This work includes medium and large sized PAHs (having up to 54 carbon atoms) that are relevant to the interstellar medium. Species with NH and NH$_2$ within the PAH ring along with their protonated (with H$^{+}$ at C) forms are reported for the first time. Section 2 describes the structures and the theoretical methodology. The results are presented and discussed in Section 3 and the astrophysical implications are given in Section 4. \section{Calculation Method} Density functional theory (DFT) is an appropriate computational approach to simulate the quantum states of PAH molecules and is used extensively to investigate IR spectra in the astrophysical context, e.g., Langhoff (1996); Hudgins et al. (2004); Pathak and Rastogi (2005); Pathak and Rastogi (2006); Pathak and Rastogi (2007); Pauzat et al. (2011); Buragohain et al. (2020). We perform the DFT calculations using Gaussian 09 with the B3LYP/6-31G++(d, p) basis set to optimize the geometries of the PAHs, which are further used to calculate the harmonic frequencies and IR intensities. Mode-dependent scaling factors have been used to scale the theoretical frequencies in order to bring them in accordance with experiments. A single scaling factor may not be sufficient, as different vibrational motions require different values of the scaling factors.
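As an illustration of this step, a minimal sketch of applying mode-dependent scaling to a list of computed harmonic modes might look as follows (the mode labels and frequency values are placeholders; the scaling factor values are the ones quoted in the text that follows):
\begin{verbatim}
# Sketch: apply mode-dependent scaling factors to DFT harmonic frequencies.
# The frequencies and mode labels below are placeholders; the factor values
# are those quoted in the text that follows.
SCALE = {"CH_oop": 0.974, "CH_ip_CC_stretch": 0.972, "CH_stretch": 0.965}

def scale_modes(modes):
    """modes: list of (frequency_cm1, intensity_km_mol, mode_type)."""
    return [(freq * SCALE[mode_type], inten, mode_type)
            for freq, inten, mode_type in modes]

example_modes = [(3065.0, 25.0, "CH_stretch"),
                 (1620.0, 10.0, "CH_ip_CC_stretch"),
                 (880.0, 40.0, "CH_oop")]
print(scale_modes(example_modes))
\end{verbatim}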
Here, the scaling factors are taken from Buragohain et al. (2015), where they were determined by comparing calculations for selected PAHs with experimental data. These are 0.974 for C-H out-of-plane bending modes, 0.972 for C-H in-plane and C-C stretching modes and 0.965 for C-H stretching modes. \begin{figure*} \centering \subfloat{\includegraphics[height=22.2cm, width=18.2cm]{alto.eps}} \caption{Theoretical IR spectra of the 0-15 $\mu \rm m$ region for naphthalene (blue), pyrene (red), perylene (green), coronene (cyan), ovalene (yellow), C$_{48}$H$_{18}$ (magenta) and circumcoronene (black) in five variants---PAH (row I), exo N-PAH (row II), NH-PAH (row III), NH$_2$-PAH (row IV) and endo N-PAH (row V) in neutral (column I), cationic (column II) and protonated (column III) forms, where N denotes the nitrogen atom. Dotted vertical lines show the observed positions of the PAH bands. Spectra are plotted assuming a Gaussian profile with a FWHM of 30 cm$^{-1}$. The diagonal lines in the horizontal axes denote the removed section between 2700-1900 cm$^{-1}$ due to the absence of features in this region.} \end{figure*} \begin{figure*} \ContinuedFloat \subfloat{\includegraphics[height=14.7cm, width=18.2cm]{bto.eps}} \caption{continued} \end{figure*} In this paper, we focus only on the position and intensity of the bands that correlate with the observed UIR bands. We do not take into account the anharmonicity that affects the band profiles. For this purpose, DFT-B3LYP provides adequate results. Higher-level theoretical methods such as MP2 are computationally very expensive, especially for large molecules such as those considered in this paper. Since the accuracy of the B3LYP band wavelengths (once corrected by the scaling factors) and band strengths is not significantly compromised, while the computational cost is far lower than that of MP2 calculations, the present results should provide a useful general outlook on N-containing PAHs. Seven PAHs have been considered (Figure 1), namely naphthalene (C$_{10}$H$_{8}$), pyrene (C$_{16}$H$_{10}$), perylene (C$_{20}$H$_{12}$), coronene (C$_{24}$H$_{12}$), ovalene (C$_{32}$H$_{14}$), C$_{48}$H$_{18}$ and circumcoronene (C$_{54}$H$_{18}$), with substitution of N, NH and NH$_2$ in four different N-incorporated variants. At site \enquote*{a} in Figure 1, N replaces CH, with the lone pair on the N atom not being part of the $\pi$ system (exoskeletal or exo N-PAH). This peripheral N atom may be bonded with one or two H atoms, giving rise to NH-PAH and NH$_2$-PAH, respectively. N may also be incorporated within the PAH structure at site \enquote*{b} (endoskeletal or endo N-PAH). An emission model is employed to transform the theoretically computed absorption spectra of PAHs into emission spectra for a meaningful comparison with the observed UIR bands. The model is based on studies by Schutte et al. (1993), Cook and Saykally (1998), Pech et al. (2002) and Pathak and Rastogi (2008). The model considers a PAH in an interstellar radiation field corresponding to a blackbody temperature of T = 40,000 K. The PAH molecule becomes internally excited to a peak temperature (T$_{p}$) that depends on the heat capacity of the PAH. The excited PAH then relaxes by emitting a cascade of photons associated with its vibrational modes. The emitted energy is computed in steps of $\Delta$T = 1 K and is integrated over the range T$_{p}$ to 50 K. Below 50 K, the emitted energy is found to be insignificant.
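To make the cascade integration just described concrete, the following Python sketch integrates the cooling of an excited PAH from T$_{p}$ down to 50 K in steps of $\Delta$T = 1 K, distributing the energy lost at each step over the vibrational bands in proportion to their strengths weighted by a Planck factor. This is a minimal illustration under simplifying assumptions (a placeholder heat capacity and a hypothetical band list), not the implementation of Pech et al. (2002) or Pathak and Rastogi (2008).
\begin{verbatim}
import numpy as np

H_PLANCK, C_CM, K_B = 6.626e-34, 2.998e10, 1.381e-23  # SI; c in cm/s so wavenumbers are in cm^-1

def planck_weight(nu_cm, T):
    """Relative Planck emissivity at wavenumber nu_cm (cm^-1) and temperature T (K)."""
    x = H_PLANCK * C_CM * nu_cm / (K_B * T)
    return nu_cm**3 / np.expm1(x)

def cascade(nu_cm, strength, T_p, T_min=50.0, dT=1.0, heat_capacity=lambda T: 1.0):
    """Distribute the energy lost while cooling from T_p to T_min (steps of dT)
    over the bands, in proportion to strength * Planck weight at each step."""
    emitted = np.zeros_like(strength, dtype=float)
    for T in np.arange(T_p, T_min, -dT):
        rate = strength * planck_weight(nu_cm, T)   # relative emission rate per band
        emitted += (rate / rate.sum()) * heat_capacity(T) * dT
    return emitted

# hypothetical band list (positions in cm^-1, strengths in arbitrary units)
bands = np.array([3060.0, 1600.0, 1300.0, 890.0])
strengths = np.array([10.0, 30.0, 25.0, 40.0])
energy_per_band = cascade(bands, strengths, T_p=900.0)
\end{verbatim}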
The emission model considers the rate of absorption of photons to calculate the emitted energy, and to produce the emission spectrum. This emitted energy is summed up over the whole distribution of photon absorption. The intensity unit for this model is $10^{-13}$W$C^{-1}${$\mu \rm m$}$^{-1}$ (Pech et al. 2002). The spectra are convolved with 30 cm$^{-1}$ FWHM as the chosen value is typical for the PAH emission (Allamandola et al. 1989). We have also studied the ionization potential (IP) and H loss (from C atom) energy for each studied variant. Ionization potential for PAHs ranges from 6 to 8 eV, which remains similar when N replaces CH in exo N-PAH. For NH-PAH and endo N-PAH, the range for IP is 4 to 5 eV. The variant where N is attached with two H atoms (NH$_2$-PAH) has the IP values from 5 to 6 eV. The value decreases with increasing size as C$_{10}$H$_{8}$ and its N-substituted variants show larger values (up to 8 eV) than C$_{54}$H$_{18}$ and its N-substituted counterpart (up to 6 eV) following the typical behavior of PAHs. The H loss energy is 4 to 5 eV for every variant. \begin{table*} \tbl{Summary of the theoretical mid IR behavior of PANH variants along with PAHs in neutral, cationic($^+$) and protonated (H$^+$) forms.} {\begin{tabular}{lcccc} \noalign{\vskip3pt} \hline Species & \multicolumn{4}{c}{Normalized intensity (Int$\rm_{rel}$)}\\\cmidrule{2-5} & NH stretching & C-H stretching & C-C stretching/C-H in-plane & CH oop$^\Upsilon$\\ & (2.8-3.10 $\mu \rm m$)$^\Pi$ & (3.21-3.23 $\mu \rm m$)$^\Pi$ & (6.11-8.89 $\mu \rm m$)$^\Pi$ & (10.76-13.94 $\mu \rm m$)$^\Pi$\\ \hline PAHs & & very strong & weak & strong$^\P$\\ $^+$ & & weak & very strong & weak$^\Sigma$\\ H$^+$ & & weak & very strong & weak$^\Sigma$\\\hline N-PAHs (exo) & & very strong & weak & strong$^\P$\\ $^+$ & & weak & very strong & weak$^\Sigma$\\ H$^+$ & & weak & very strong & weak$^\Sigma$\\\hline NH-PAHs & weak & very strong & weak & strong$^\P$\\ $^+$ & moderate & weak & very strong & weak$^\Sigma$\\ H$^+$ & moderate & weak & very strong & weak$^\Sigma$\\\hline NH$_2$-PAHs & weak & moderate & very strong & moderate\\ $^+$ & moderate & weak & strong & very strong\\ H$^+$ & moderate & weak & very strong & strong\\\hline N-PAHs (endo) & & strong & strong & very strong\\ $^+$ & & weak & strong & very strong\\ H$^+$ & & weak & strong & very strong\\\hline \hline \end{tabular}} \begin{tabnote} The data is presented for naphthalene (C$_{10}$H$_8$), pyrene (C$_{16}$H$_{10}$), perylene (C$_{20}$H$_{12}$), coronene (C$_{24}$H$_{12}$), ovalene (C$_{32}$H$_{14}$), C$_{48}$H$_{18}$ and circumcoronene (C$_{54}$H$_{18}$). $^\Pi$this paper. very strong (0.8 $\leq$ Int$_{rel}$ $\leq$ 1.0), strong (0.4 $\leq$ Int$_{rel}$ $\leq$ 0.7), moderate (0.1 $\leq$ Int$_{rel}$ $\leq$ 0.4) and weak (Int$_{rel}$ $\leq$ 0.1) are defined according to the relative intensity (Int$_{rel}$), obtained by taking the ratio of all intensities to the maximum. $^\P$very strong for C$_{48}$H$_{18}$ and circumcoronene. $^\Sigma$moderate for C$_{48}$H$_{18}$ and circumcoronene. $^\Upsilon$C-H oop bending features follow their emergence on the account of CH-groups present in any PANH. oop stands for out-of-plane. \end{tabnote} \end{table*} In the present work, the site for nitrogenation and protonation has been chosen by calculating the ground state energy of structures having different unique positions of substitution in a PAH, from which the structure with the lowest energy is used for further analysis. 
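As a rough illustration of how the stick spectra are turned into the plotted curves, the sketch below first applies the mode-dependent scaling factors quoted in Section 2 and then broadens each scaled band with a Gaussian of 30 cm$^{-1}$ FWHM. The example mode list is hypothetical and the script is only a schematic of the procedure described above, not the code used in this work.
\begin{verbatim}
import numpy as np

# mode-dependent scaling factors quoted in Section 2 (Buragohain et al. 2015)
SCALE = {"CH_oop": 0.974, "CH_ip_CC": 0.972, "CH_stretch": 0.965}

def broaden(nu_cm, intensity, fwhm=30.0, grid=None):
    """Convolve a stick spectrum with Gaussian profiles of the given FWHM (cm^-1)."""
    if grid is None:
        grid = np.linspace(300.0, 3500.0, 6400)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    spectrum = np.zeros_like(grid)
    for nu0, inten in zip(nu_cm, intensity):
        spectrum += inten * np.exp(-0.5 * ((grid - nu0) / sigma) ** 2)
    return grid, spectrum

# hypothetical harmonic modes: (frequency in cm^-1, intensity, mode type)
modes = [(3170.0, 12.0, "CH_stretch"), (1620.0, 25.0, "CH_ip_CC"), (900.0, 40.0, "CH_oop")]
scaled = np.array([f * SCALE[kind] for f, _, kind in modes])
intens = np.array([i for _, i, _ in modes])
grid, spectrum = broaden(scaled, intens)
\end{verbatim}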
Table 1 shows normalized intensities to summarize the mid-IR behavior of the chosen species, whereas Tables 2 and 3 are presented on an absolute intensity scale to study the size effect of PAHs and their N-containing variants. \section{Results and Discussion}~ Figure 2 shows the spectra of the seven PAHs in their four N-substituted variants (Figure 1), calculated in the present study for the neutral, cationic and protonated forms of naphthalene (blue), pyrene (red), perylene (green), coronene (cyan), ovalene (yellow), C$_{48}$H$_{18}$ (magenta) and circumcoronene (black). Each row represents a different variant of the chosen molecules---pure PAH (Figure 2, row I), exo N-PAH (Figure 2, row II), NH-PAH (Figure 2, row III), NH$_2$-PAH (Figure 2, row IV) and endo N-PAH (Figure 2, row V), respectively. Likewise, from left to right, the columns depict the neutral, cationic and protonated forms. \subsection{The C-H Stretching Vibrations (3.2--3.3 $\mu \rm m$)}~ The IR behavior of the C-H stretching features for every PANH variant is summarized in Table 1 and the spectra are shown in Figure 2. The C-H stretching region is very similar for all the PANH variants (Figure 2). This resemblance holds for the neutral, cationic and protonated forms in terms of both peak position and intensity, and no significant effect of nitrogenation is recognized in this region for any variant (Table 1). The C-H stretching feature shows a small blue shift ($\sim$0.02 $\mu \rm m$) upon ionization\footnote{Ions are used for referring to both the cations and protonated PAHs.}, which is true for every PANH variant. In the present sample, the intensity of the 3.3 $\mu \rm m$ feature increases with size because of the increasing number of C-H bonds (Bauschlicher et al. 2008). This trend is followed by each PANH variant as well. \subsection{The C-C stretching/C-H in-plane bending Vibrations (6.0--9.0 $\mu \rm m$)}~ The IR behavior for each PANH variant is summarized in Table 1, while the peak positions and emitted intensities of the C-C stretching/C-H in-plane bending modes for each species studied here are listed in Table 2 along with the C/N ratio, where the molecules are arranged in order of increasing size. For the 6.2 $\mu \rm m$ band, the dominant band peaks between 6.0-6.6 $\mu \rm m$, while for the 7.7 and 8.6 $\mu \rm m$ bands, the dominant features peak at 7.21-8.28 and 8.4-8.89 $\mu \rm m$, respectively. The C-C stretching/C-H in-plane bending region of the exo N-PAH (Figure 2, row II) and NH-PAH (Figure 2, row III) variants exhibits the same behavior as that of pure PAHs (Figure 2, row I), where ions display stronger intensities compared to neutrals (Table 1). However, for NH$_2$-PAH (Figure 2, row IV), this region shows larger intensities for neutrals compared to ions (Figure 2 \& Table 1). The endo N-PAH variant (Figure 2, row V) behaves differently in this region---as the size increases, the intensity of the 6.2 $\mu \rm m$ band is higher for ions, whereas the neutrals have strong 7.7 and 8.6 $\mu \rm m$ bands (Table 2). The intensity of the 6.2 and 7.7 $\mu \rm m$ bands increases with size for the pure PAH and N-PAH (Figure 2).
\begin{center} \begin{longtable}[hbt!]{lclcc} \caption{Calculated peak positions ($\mu \rm m$) and intensities ($10^{-13}$W$C^{-1}${$\mu \rm m$}$^{-1}$) of the dominant 6.2, 7.7 and 8.6 $\mu \rm m$ bands for PANH variants in neutral, cationic ($^+$) and protonated (H$^+$) forms}\\ \noalign{\vskip3pt} \hline Molecules & C/N ratio & 6.2 (I) & 7.7 (I) & 8.6 (I)\\ \hline \endfirsthead \multicolumn{5}{c}% {{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\ \hline Molecules & C/N ratio & 6.2 (I) & 7.7 (I) & 8.6 (I)\\ \hline \endhead \hline \multicolumn{5}{|r|}{{Continued on next page}} \\ \hline \endfoot \endlastfoot Naphthalene & & & & \\ N-C$_{10}$H$_{8}$ (Exo) (Figure 2-d-blue) & 9 & 6.22 (0.04) & 7.55 (0.02) & 8.81 (0.02) \\ NH-C$_{10}$H$_{8}$ (Figure 2-g-blue) & 9 & 6.45 (0.06) & 7.56 (0.13) & 8.43 (0.05) \\ NH$_2$-C$_{10}$H$_{8}$ (Figure 2-j-blue) & 9 & 6.43 (0.65) & 7.79 (0.73) & 8.42 (0.21) \\ N-C$_{10}$H$_{8}$ (Endo) (Figure 2-m-blue) & 9 & 6.60 (0.29) & 7.87 (0.08) & 8.44 (0.14) \\ N-C$_{10}$H$_{8}^+$ (Exo) (Figure 2-e-blue) & 9 & 6.60 (0.14) & 8.28 (0.28) & 8.40 (0.22) \\ NH-C$_{10}$H$_{8}^+$ (Figure 2-h-blue) & 9 & 6.31 (0.17) & 7.38 (0.11) & 8.42 (0.03) \\ NH$_2$-C$_{10}$H$_{8}^+$ (Figure 2-k-blue) & 9 & 6.28 (0.08) & 7.50 (0.07) & 8.49 (0.03) \\ N-C$_{10}$H$_{8}^+$ (Endo) (Figure 2-n-blue) & 9 & 6.11 (0.10) & 7.31 (0.97) & 8.66 (0.37) \\ H$^+$N-C$_{10}$H$_{8}$ (Exo) (Figure 2-f-blue) & 9 & 6.26 (0.38) & 7.95 (0.35) & 8.68 (0.12) \\ H$^+$NH-C$_{10}$H$_{8}$ (Figure 2-i-blue) & 9 & 6.37 (0.21) & 7.30 (0.17) & 8.71 (0.04) \\ H$^+$NH$_{2}$-C$_{10}$H$_{8}$ (Figure 2-l-blue) & 9 & 6.27 (0.07) & 8.20 (0.33) & 8.47 (0.16) \\ H$^+$N-C$_{10}$H$_{8}$ (Endo) (Figure 2-o-blue) & 9 & 6.44 (0.17) & 7.36 (0.09) & 8.65 (0.05) \\ \hline Pyrene & & & & \\ N-C$_{16}$H$_{10}$ (Exo) (Figure 2-d-red) & 15 & 6.19 (0.02) & 7.90 (0.02) & 8.64 (0.04) \\ NH-C$_{16}$H$_{10}$ (Figure 2-g-red) & 15 & 6.23 (0.13) & 7.71 (0.04) & 8.61 (0.04) \\ NH$_2$-C$_{16}$H$_{10}$ (Figure 2-j-red) & 15 & 6.13 (0.61) & 7.63 (0.36) & 8.61 (0.22) \\ N-C$_{16}$H$_{10}$ (Endo) (Figure 2-m-red) & 15 & 6.46 (0.62) & 7.93 (0.27) & 8.47 (0.14) \\ N-C$_{16}$H$_{10}^+$ (Exo) (Figure 2-e-red) & 15 & 6.50 (0.43) & 7.40 (0.23) & 8.41 (0.09) \\ NH-C$_{16}$H$_{10}^+$ (Figure 2-h-red) & 15 & 6.23 (0.13) & 7.30 (0.04) & 8.61 (0.04) \\ NH$_2$-C$_{16}$H$_{10}^+$ (Figure 2-k-red) & 15 & 6.26 (0.16) & 7.53 (0.11) & 8.46 (0.07) \\ N-C$_{16}$H$_{10}^+$ (Endo) (Figure 2-n-red) & 15 & 6.14 (0.20) & 7.75 (0.11) & 8.46 (0.05) \\ H$^+$N-C$_{16}$H$_{10}$ (Exo) (Figure 2-f-red) & 15 & 6.29 (0.60) & 7.24 (0.40) & 8.40 (0.16) \\ H$^+$NH-C$_{16}$H$_{10}$ (Figure 2-i-red) & 15 & 6.27 (0.61) & 7.45 (0.30) & 8.43 (0.08) \\ H$^+$NH$_{2}$-C$_{16}$H$_{10}$ (Figure 2-l-red) & 15 & 6.58 (0.27) & 7.44 (0.17) & 8.48 (0.07) \\ H$^+$N-C$_{16}$H$_{10}$ (Endo) (Figure 2-o-red) & 15 & 6.20 (0.15) & 7.81 (0.37) & 8.80 (0.17) \\ \hline Perylene & & & & \\ N-C$_{20}$H$_{12}$ (Exo) (Figure 2-d-green) & 19 & 6.33 (0.19) & 7.37 (0.07) & 8.80 (0.02) \\ NH-C$_{20}$H$_{12}$ (Figure 2-g-green) &19 & 6.22 (0.41) & 7.57 (0.16) & 8.48 (0.07) \\ NH$_2$-C$_{20}$H$_{12}$ (Figure 2-j-green) & 19 & 6.36 (1.12) & 7.84 (1.11) & 8.50 (0.21) \\ N-C$_{20}$H$_{12}$ (Endo) (Figure 2-m-green) & 19 & 6.38 (0.61) & 7.96 (0.43) & 8.80 (0.12) \\ N-C$_{20}$H$_{12}^+$ (Exo) (Figure 2-e-green) & 19 & 6.48 (0.33) & 7.71 (0.37) & 8,44 (0.11) \\ NH-C$_{20}$H$_{12}^+$ (Figure 2-h-green) & 19 & 6.37 (1.12) & 7.78 (0.67) & 8.48 (0.12) \\ NH$_2$-C$_{20}$H$_{12}^+$ (Figure 2-k-green) & 19 & 6.36 
(0.08) & 7.83 (1.11) & 8.43 (0.24) \\ N-C$_{20}$H$_{12}^+$ (Endo) (Figure 2-n-green) & 19 & 6.33 (0.41) & 7.59 (0.08) & 8.48 (0.05) \\ H$^+$N-C$_{20}$H$_{12}$ (Exo) (Figure 2-f-green) & 19 & 6.48 (0.47) & 7.89 (0.41) & 8.42 (0.15) \\ H$^+$NH-C$_{20}$H$_{12}$ (Figure 2-i-green) & 19 & 6.37 (1.02) & 7.33 (0.74) & 8.42 (0.20) \\ H$^+$NH$_{2}$-C$_{20}$H$_{12}$ (Figure 2-l-green) & 19 & 6.30 (0.21) & 7.63 (0.31) & 8.47 (0.07) \\ H$^+$N-C$_{20}$H$_{12}$ (Endo) (Figure 2-o-green) & 19 & 6.37 (0.41) & 7.84 (0.38) & 8.42 (0.10) \\\hline Coronene & & & & \\ N-C$_{24}$H$_{12}$ (Exo) (Figure 2-d-cyan) & 23 & 6.23 (0.20) & 7.63 (0.19) & 8.89 (0.20) \\ NH-C$_{24}$H$_{12}$ (Figure 2-g-cyan) & 23 & 6.27 (0.20) & 7.63 (0.19) & 8.89 (0.20) \\ NH$_2$-C$_{24}$H$_{12}$ (Figure 2-j-cyan) & 23 & 6.19 (0.47) & 7.78 (0.78) & 8.63 (0.28) \\ N-C$_{24}$H$_{12}$ (Endo) (Figure 2-m-cyan) & 23 & 6.19 (0.12) & 7.60 (0.37) & 8.50 (0.14) \\ N-C$_{24}$H$_{12}^+$ (Exo) (Figure 2-e-cyan) & 23 & 6.40 (1.03) & 7.26 (0.70) & 8.43 (0.22) \\ NH-C$_{24}$H$_{12}^+$ (Figure 2-h-cyan) & 23 & 6.20 (0.59) & 7.42 (0.32) & 8.62 (0.08) \\ NH$_2$-C$_{24}$H$_{12}^+$ (Figure 2-k-cyan) & 23 & 6.24 (0.51) & 7.41 (0.14) & 8.73 (0.05) \\ N-C$_{24}$H$_{12}^+$ (Endo) (Figure 2-n-cyan) & 23 & 6.15 (0.72) & 7.52 (0.15) & 8.64 (0.06) \\ H$^+$N-C$_{24}$H$_{12}$ (Exo) (Figure 2-f-cyan) & 23 & 6.25 (0.81) & 7.35 (0.70) & 8.46 (0.18) \\ H$^+$NH-C$_{24}$H$_{12}$ (Figure 2-i-cyan) & 23 & 6.23 (0.61) & 7.39 (0.38) & 8.68 (0.13) \\ H$^+$NH$_{2}$-C$_{24}$H$_{12}$ (Figure 2-l-cyan) & 23 & 6.31 (0.35) & 7.65 (0.20) & 8.45 (0.09) \\ H$^+$N-C$_{24}$H$_{12}$ (Endo) (Figure 2-o-cyan) & 23 & 6.25 (0.53) & 7.75 (0.28) & 8.42 (0.10) \\\hline Ovalene & & & & \\ N-C$_{32}$H$_{14}$ (Exo) (Figure 2-d-yellow) & 31 & 6.21 (0.25) & 7.88 (0.06) & 8.63 (0.04) \\ NH-C$_{32}$H$_{14}$ (Figure 2-g-yellow) & 31 & 6.22 (0.33) & 7.41 (0.21) & 8.62 (0.20) \\ NH$_2$-C$_{32}$H$_{14}$ (Figure 2-j-yellow) & 31 & 6.22 (0.76) & 7.76 (0.61) & 8.48 (0.32) \\ N-C$_{32}$H$_{14}$ (Endo) (Figure 2-m-yellow) & 31 & 6.21 (0.52) & 7.89 (0.35) & 8.42 (0.33) \\ N-C$_{32}$H$_{14}^+$ (Exo) (Figure 2-e-yellow) & 31 & 6.30 (0.79) & 7.50 (0.68) & 8.52 (0.29) \\ NH-C$_{32}$H$_{14}^+$ (Figure 2-h-yellow) & 31 & 6.29 (0.53) & 7.38 (0.28) & 8.46 (0.08) \\ NH$_2$-C$_{32}$H$_{14}^+$ (Figure 2-k-yellow) & 31 & 6.20 (0.46) & 7.40 (0.17) & 8.50 (0.07) \\ N-C$_{32}$H$_{14}^+$ (Endo) (Figure 2-n-yellow) & 31 & 6.13 (0.59) & 7.62 (0.23) & 8.56 (0.07) \\ H$^+$N-C$_{32}$H$_{14}$ (Exo) (Figure 2-f-yellow) & 31 & 6.27 (1.03) & 7.46 (0.94) & 8.48 (0.35) \\ H$^+$NH-C$_{32}$H$_{14}$ (Figure 2-i-yellow) & 31 & 6.27 (1.10) & 7.43 (0.51) & 8.42 (0.25) \\ H$^+$NH$_{2}$-C$_{32}$H$_{14}$ (Figure 2-l-yellow) & 31 & 6.28 (0.42) & 7.30 (0.59) & 8.50 (0.20) \\ H$^+$N-C$_{32}$H$_{14}$ (Endo) (Figure 2-o-yellow) & 31 & 6.22 (0.39) & 7.88 (0.22) & 8.62 (0.07) \\\hline C$_{48}$H$_{18}$ & & & & \\ N-C$_{48}$H$_{18}$ (Exo) (Figure 2-d-magenta) & 47 & 6.44 (0.33) & 8.00 (0.06) & 8.46 (0.08) \\ NH-C$_{48}$H$_{18}$ (Figure 2-g-magenta) & 47 & 6.41 (0.57) & 7.74 (0.18) & 8.50 (0.14) \\ NH$_2$-C$_{48}$H$_{18}$ (Figure 2-j-magenta) & 47 & 6.29 (0.87) & 7.62 (0.54) & 8.49 (0.22) \\ N-C$_{48}$H$_{18}$ (Endo) (Figure 2-m-magenta) & 47 & 6.36 (0.77) & 7.46 (0.42) & 8.43 (0.46) \\ N-C$_{48}$H$_{18}$ $^+$ (Exo) (Figure 2-e-magenta) & 47 & 6.35 (1.02) & 7.42 (0.80) & 8.52 (0.20) \\ NH-C$_{48}$H$_{18}$ $^+$ (Figure 2-h-magenta) & 47 & 6.26 (0.97) & 7.38 (1.02) & 8.59 (0.56) \\ NH$_2$-C$_{48}$H$_{18}$ $^+$ (Figure 2-k-magenta) & 47 & 6.43 (0.72) & 7.33 
(0.18) & 8.45 (0.11) \\ N-C$_{48}$H$_{18}$ $^+$ (Endo) (Figure 2-n-magenta) & 47 & 6.28 (1.44) & 7.46 (0.38) & 8.49 (0.30) \\ H$^+$N-C$_{48}$H$_{18}$ (Exo) (Figure 2-f-magenta) & 47 & 6.29 (0.58) & 7.49 (0.93) & 8.43 (0.38) \\ H$^+$NH-C$_{48}$H$_{18}$ (Figure 2-i-magenta) & 47 & 6.41 (0.51) & 7.66 (1.92) & 8.49 (0.59) \\ H$^+$NH$_{2}$-C$_{48}$H$_{18}$ (Figure 2-l-magenta) & 47 & 6.29 (0.48) & 7.53 (1.00) & 8.45 (0.29) \\ H$^+$N-C$_{48}$H$_{18}$ (Endo) (Figure 2-o-magenta) & 47 & 6.44 (0.72) & 7.60 (0.21) & 8.46 (0.14) \\\hline Circumcoronene & & & & \\ $^{\dagger}$C$_{54}$H$_{18}$ (Figure 2-a-black) & . . . & 6.28 (0.22) & 7.68 (0.89) & 8.52 (0.10) \\ N-C$_{54}$H$_{18}$ (Exo) (Figure 2-d-black) & 53 & 6.21 (0.24) & 7.70 (0.86) & 8.50 (0.06) \\ NH-C$_{54}$H$_{18}$ (Figure 2-g-black) & 53 & 6.20 (0.27) & 7.71 (0.44) & 8.54 (0.79) \\ NH$_2$-C$_{54}$H$_{18}$ (Figure 2-j-black) & 53 & 6.20 (1.02) & 7.64 (1.21) & 8.48 (0.41) \\ N-C$_{54}$H$_{18}$ (Endo) (Figure 2-m-black) & 53 & 6.19 (0.43) & 7.79 (1.01) & 8.50 (0.38) \\ $^{\dagger}$C$_{54}$H$_{18}^+$ (Figure 2-b-black) & . . . & 6.30 (2.37) & 7.85 (0.93) & 8.41 (0.34) \\ N-C$_{54}$H$_{18}^+$ (Exo) (Figure 2-e-black) & 53 & 6.27 (2.29) & 7.87 (1.01) & 8.45 (0.37) \\ NH-C$_{54}$H$_{18}^+$ (Figure 2-h-black) & 53 & 6.15 (0.63) & 7.44 (0.98) & 8.46 (0.22) \\ NH$_2$-C$_{54}$H$_{18}^+$ (Figure 2-k-black) & 53 & 6.17 (0.88) & 7.41 (1.02) & 8.49 (0.28) \\ N-C$_{54}$H$_{18}^+$ (Endo) (Figure 2-n-black) & 53 & 6.14 (1.36) & 7.70 (0.48) & 8.44 (0.19) \\ $^{\dagger}$H$^+$C$_{54}$H$_{18}$ (Figure 2-c-black) & . . . & 6.22 (1.29) & 7.45 (1.17) & 8.46 (0.59) \\ H$^+$N-C$_{54}$H$_{18}$ (Exo) (Figure 2-f-black) & 53 & 6.20 (1.33) & 7.44 (1.14) & 8.48 (0.37) \\ H$^+$NH-C$_{54}$H$_{18}$ (Figure 2-i-black) & 53 & 6.17 (0.52) & 7.47 (0.98) & 8.48 (0.18) \\ H$^+$NH$_{2}$-C$_{54}$H$_{18}$ (Figure 2-l-black) & 53 & 6.20 (1.08) & 7.55 (1.19) & 8.42 (0.29) \\ H$^+$N-C$_{54}$H$_{18}$ (Endo) (Figure 2-o-black) & 53 & 6.15 (1.04) & 7.57 (0.87) & 8.43 (0.31) \\\hline \vspace{0.3mm} {\raggedright \scriptsize $^{\dagger}$ same data for pure larger PAH} \end{longtable} \end{center} The 6.2 $\mu \rm m$ UIR band has been observed at two wavelengths, either near 6.3 $\mu \rm m$ (longer wavelength component) towards planetary nebulae, post-AGB objects and Herbig AeBe stars or near 6.2 $\mu \rm m$ (shorter wavelength component) towards H~$\textsc{ii}$ regions, reflection nebulae and galaxies (Peeters et al. 2002). The emergence of the shorter wavelength component is an enigma that has been explained by cationized PANHs (Hudgins et al. 2005) and protonated PANHs (Alvaro Galu\'e et al. 2010). This paper shows that the feature at 6.2 $\mu \rm m$ is seen for the neutrals as well. The substitution of N atom into a PAH redistributes its electron density, leading to the changes in the C-C bonds force constants and the dipole derivatives, which further result in a blue-shift for the 6.2 $\mu \rm m$ band. The blueshifting of the 6.2 $\mu \rm m$ band ceases, once the uniformity of the electron distribution over carbon skeletan is attained (Hudgins et al. 2005) The role of the symmetry among PANHs may not be significant in relation to the peak position of the 6.2 $\mu \rm m$ band, however it seems related to the symmetry of their parent PAH molecule. Table 2 reveals that compact PAHs achieve more blue shifting in the peak position of 6.2 $\mu \rm m$ after inclusion of N atom into the PAH structure. 
C$_{48}$H$_{18}$, a less symmetrical PAH, shows redder positions compared to more symmetric ones; circumcoronene, ovalene and coronene after N inclusion (Table 2). The shortest wavelength for the 6.2 $\mu \rm m$ band is attained by endo N-PAH ions. Pure circumcoronene cation produces the C-C stretching feature at 6.35 $\mu \rm m$, which shifts to 6.14 $\mu \rm m$ after incorporation of a N atom within the structure (Table 2). Neutral NH$_2$-PAHs produce strong 6.2 $\mu \rm m$ feature that shifts blue-wards with increasing \enquote*{parent} PAH's size and symmetry. For naphthalene, this feature is at 6.43 $\mu \rm m$ and shifts to 6.19 $\mu \rm m$ for circumcoronene (Table 2). Endo N-PAH cations have the band shifting to a shorter wavelength with increasing PAH size. Thus, among all PANH molecules variants, the 6.2 $\mu \rm m$ band shifts towards bluer positions more elegantly with increasing size when N is incorporated within a compact PAH. For ions, the C-C stretching 6.2 $\mu \rm m$ band is stronger in exo N-PAH, NH-PAH and endo N-PAH compared to the remaining variants. The strong C-C stretching feature of protonated endo N-PAHs appears at wavelengths shorter than 6.2 $\mu \rm m$ for circumcoronene (Figure 2 (o)-black). \begin{table*} \tbl{Calculated peak positions and intensities ($10^{-13}$W$C^{-1}${$\mu \rm m$}$^{-1}$) of 9.0--15 $\mu \rm m$ features for PAHs and PANH variants in neutral, cationic ($^+$) and protonated forms (H$^+$)} {\begin{tabular}{lccccc} \noalign{\vskip3pt} \hline PAHs & pure & exo N- & NH- & NH$_2$- & endo N-\\ \hline C$_{16}$H$_{10}$ (duo) & 11.89 (0.76) & 12.16 (0.49) & 12.15 (0.52) & 12.22 (0.21) & 11.93 (0.36)\\ C$_{16}$H$_{10}^+$ (duo) & 11.70 (0.31) & 11.93 (0.21) & 11.93 (0.33) & 11.96 (0.39) & 11.78 (0.58)\\ H$^+$C$_{16}$H$_{10}$ (duo) & 11.60 (0.10) & 11.98 (0.09) & 11.95 (0.11) & 12.19 (0.26) & 11.98 (0.46)\\\cmidrule{2-6} C$_{20}$H$_{12}$ (trio) & 12.50 (0.88) & 12.42 (0.54) & 13.33 (0.46) & 13.33 (0.19) & 13.16 (0.78)\\ C$_{20}$H$_{12}^+$ (trio) & 12.51 (0.37) & 12.44 (0.25) & 12.56 (0.45) & 12.96 (0.39) & 12.39 (0.57)\\ H$^+$C$_{20}$H$_{12}$ (trio) & 12.64 (0.12) & 12.76 (0.13) & 12.42 (0.12) & 12.96 (0.17) & 12.91 (0.20)\\\cmidrule{2-6} C$_{24}$H$_{12}$ (duo) & 11.72 (1.44) & 11.78 (0.93) & 11.92 (0.45) & 11.95 (0.20) & 11.92 (0.47)\\ C$_{24}$H$_{12}^+$ (duo) & 11.61 (0.39) & 11.61 (0.26) & 11.64 (0.40) & 11.69 (0.55) & 11.64 (0.67)\\ H$^+$C$_{24}$H$_{12}$ (duo) & 11.61 (0.28) & 11.64 (0.19) & 11.69 (0.28) & 11.68 (0.34) & 11.69 (0.57)\\\cmidrule{2-6} C$_{32}$H$_{14}$ (solo) & 11.36 (1.23) & 11.37 (0.72) & 11.39 (0.28) & 11.40 (0.35) & 11.29 (0.35)\\ C$_{32}$H$_{14}^+$ (solo) & 11.11 (0.35) & 11.10 (0.23) & 11.08 (0.27) & 11.15 (0.48) & 11.19 (0.89)\\ H$^+$C$_{32}$H$_{14}$ (solo) & 11.23 (0.26) & 11.21 (0.19) & 11.19 (0.21) & 11.09 (0.39) & 11.10 (0.69)\\ C$_{32}$H$_{14}$ (duo) & 12.03 (0.67) & 12.05 (0.73) & 12.15 (0.12) & 12.20 (0.11) & 12.20 (0.06)\\ C$_{32}$H$_{14}^+$ (duo) & 11.82 (0.30) & 11.88 (0.29) & 11.95 (0.21) & 11.77 (0.33) & 11.69 (0.65)\\ H$^+$C$_{32}$H$_{14}$ (duo) & 11.88 (0.17) & 11.81 (0.12) & 11.90 (0.18) & 11.80 (0.11) & 11.77 (0.21)\\\cmidrule{2-6} C$_{48}$H$_{18}$ (solo) & 11.04 (2.93) & 11.10 (2.07) & 11.04 (1.09) & 11.01 (0.62) & 11.08 (1.11)\\ $C_{48}$H$_{18}^+$ (solo) & 10.98 (0.85) & 10.99 (0.69) & 11.00 (0.75) & 10.91 (1.02) & 10.97 (0.89)\\ H$^+$C$_{48}$H$_{18}$ (solo) & 10.92 (0.75) & 10.94 (0.59) & 10.98 (0.57) & 10.99 (0.57) & 11.00 (1.94)\\ C$_{48}$H$_{18}$ (duo) & 11.81 (0.14) & 11.84 (0.27) & 11.67 (0.50) & 11.55 
(0.21) & 11.56 (0.19)\\ C$_{48}$H$_{18}^+$ (duo) & 12.48 (0.24) & 12.57 (0.12) & 11.59 (0.16) & 11.58 (0.25) & 11.87 (0.09)\\ H$^+$C$_{48}$H$_{18}$ (duo) & 12.48 (0.13) & 12.57 (0.19) & 12.78 (0.26) & 12.22 (0.15) & 12.53 (0.23)\\\cmidrule{2-6} C$_{54}$H$_{18}$ (solo) & 10.99 (3.23) & 10.95 (2.18) & 10.95 (1.18) & 11.04 (0.67) & 11.06 (1.19)\\ C$_{54}$H$_{18}^+$ (solo) & 10.79 (0.78) & 10.80 (0.57) & 10.87 (0.90) & 10.87 (1.08) & 10.76 (1.82)\\ H$^+$C$_{54}$H$_{18}$ (solo) & 10.76 (0.74) & 10.79 (0.54) & 10.87 (0.62) & 10.87 (0.52) & 10.79 (1.23)\\ C$_{54}$H$_{18}$ (duo) & 11.81 (0.13) & 11.85 (0.22) & 11.65 (0.52) & 11.58 (0.22) & 11.53 (0.20)\\ C$_{54}$H$_{18}^+$ (duo) & 12.53 (0.20) & 12.49 (0.14) & 11.65 (0.13) & 11.57 (0.20) & 11.81 (0.07)\\ H$^+$C$_{54}$H$_{18}$ (duo) & 12.47 (0.18) & 12.41 (0.16) & 13.02 (0.23) & 12.38 (0.16) & 12.42 (0.28)\\ \hline \end{tabular}} \begin{tabnote} solo, duo and trio stand for the number of C-H groups present in a PAH ring. \end{tabnote} \end{table*} The peak position of the 7.7 $\mu \rm m$ band does not follow any specific manner with increasing size---this emerges with a blue shifting around 7.5 $\mu \rm m$ for endo N-PAH ions, while it is consistently near 7.6--7.7 $\mu \rm m$ for NH$_2$-PAH neutrals (Table 2). The peak position for the 8.6 $\mu \rm m$ band does not change with the molecular size. PAH cations produce strong 7.7 and 8.6 $\mu \rm m$ features (Peeters et al. 2002; Tielens 2008). In ions, both the bands are strong in the spectra of exo N-PAH and NH-PAH variants. Noticeably, neutral NH$_2$-PAH and neutral endo N-PAH also produce strong 7.7 and 8.6 $\mu \rm m$ bands (Table 2) unlike typical neutral PAHs. The strength of both bands seems to increase with size, and also with the protonation. \subsection{The C-H out-of-plane Bending Vibrations (9.0--15 $\mu \rm m$)}~ The peak positions and intensities of the C-H out-of-plane bending modes for each selected PANH variant are summarized in Table 1, while their detailed information is listed in Table 3. The corresponding spectra are shown in Figure 2. Figure 1 shows that there are solo, duo and trio C-H groups on an outer ring of the PAHs chosen here. In general, the C-H out-of-plane band for solo C-H groups falls at 10.8--11.4 $\mu \rm m$, while for duo and trio C-H groups, this falls at 11.4--13.2 $\mu \rm m$ (Hudgins \& Allamandola 1999). The peak positions due to C-H out-of-plane bending modes do not shift with the PANH variants. However, these are blue-shifted upon ionization for most PAHs, while red shift is seen due to the C-H out-of-plane band in larger PAHs with duo C-H groups (C$_{48}$H$_{18}$ and C$_{54}$H$_{18}$). The intensities in this region change upon ionization. For neutrals, these are larger in pure, exo N-PAH and NH-PAH variants, but, on the contrary, NH$_2$ and endo N-PAH ions have stronger intensities (Table 1). The C-H out-of-plane bending feature for PAHs blue-shifts with the increasing number of solo C-H groups that is reported in numerous studies (Bauschlicher et al. 2008, Ricca et al. 2012, Candian \& Sarre 2015), while the physical reason is not well understood. The similar trend is followed by each PANH variant as well. This falls between 11.11--11.40 $\mu \rm m$ for C$_{32}$H$_{14}$ (having 2 solo C-H groups) and 10.76--11.06 $\mu \rm m$ for C$_{54}$H$_{18}$ (having 6 solo C-H groups) (Table 3), whereas their corresponding ions fall at shorter wavelengths than neutrals for every PANH variant considered here. 
Figure 2 illustrates that the spectral features in the 9.0-15 $\mu \rm m$ range for PAH (Figure 2, row I), exo N-PAH (Figure 2, row II) and NH-PAH (Figure 2, row III) are stronger for neutrals and weaker for ions (Table 1). However, for NH$_2$-PAH (Figure 2, row IV) and endo N-PAH (Figure 2, row V) neutrals and ions, the spectra tend to produce similar intensities for both C-H out-of-plane bending and C-C stretching vibrations at $\sim$11--14 $\mu \rm m$ and $\sim$6.1--6.4 $\mu \rm m$, respectively (Table 1). The band strengths in the 9.0--15.0 $\mu \rm m$ region increase as a larger number of C-H groups contribute to the solo, duo and trio C-H out-of-plane bending vibrations of the PANH variant. For instance, the intensity of the C-H out-of-plane band in C$_{54}$H$_{18}$ (6 solo C-H groups) is significantly stronger than in C$_{32}$H$_{14}$ (2 solo C-H groups). The intensities due to solo C-H out-of-plane bending vibrations are larger than those due to duos. \begin{figure} \centering \includegraphics[height=10cm, width=8cm]{anbondsnews.eps}\\ \caption{Theoretical IR spectra of N-H stretching for the NH and NH$_2$ bonds in NH-PAH (Column I) and NH$_2$-PAH (Column II), respectively. The blue, red and green curves show the neutral, cationic and protonated forms. The dotted vertical line shows the position of the 3.3 $\mu \rm m$ band in celestial objects. The spectra are plotted on an absolute scale with an intensity unit of $10^{-13}$W$C^{-1}${$\mu \rm m$}$^{-1}$.} \label{Figure 3.} \end{figure} \subsection{Vibrations of N-related bonds}~ The C-N stretching and in-plane bending vibrations appear in the 6--9 $\mu \rm m$ range but are extremely weak compared to the regular PAH bands. Therefore, their contribution towards the UIR features is suggested to be insignificant. N-H stretching, on the other hand, gives rise to intense features. Figure 3 shows the spectra for NH-PAH and NH$_2$-PAH, which illustrate the N-H stretching of the N-H bond (Column I) and the N-H$_2$ bond (Column II) alongside the 3.3 $\mu \rm m$ band. The spectra are given for the neutral (blue), cationic (red) and protonated (green) forms. The peak wavelength regions and intensities of the N-H stretching modes are summarized in Table 1. N-H stretching vibrations give rise to features near 2.8--2.9 $\mu \rm m$ in NH-PAH, where N is attached to one H atom (Figure 3, Column I). For NH$_2$-PAH, in which N is associated with two H atoms, the symmetric and antisymmetric stretching modes of the N-H$_2$ bonds produce bands at 3.0--3.1 $\mu \rm m$ (Figure 3, Column II). The N-H stretching features are strong for ions and are stronger than the C-H stretching features for small PAH ions (C$_{10}$H$_{8}$, C$_{16}$H$_{10}$ and C$_{20}$H$_{12}$). They shift red-wards for NH-PAH ions (Figure 3, Column I). The N-H stretching features are very weak for the N-H$_2$ bonds of neutrals and become weaker for large PAHs (Figure 3). The N-H stretch intensity decreases for C$_{48}$H$_{18}$ and C$_{54}$H$_{18}$. We consider here only one N-H bond per PAH and thus the effect of N-inclusion on the distribution of charge becomes smaller as the PAH size increases. \section{Astrophysical Implications}~ Observations of the UIR features towards different astronomical objects suggest variations in terms of peak position, profile and intensity. These variations in the UIR bands have been grouped into four classes---A, B, C (Peeters et al. 2002; van Diedenhoven et al. 2004) and D (Matsuura et al. 2014).
Class A objects show the C-C stretching band at $\sim$6.2 $\mu \rm m$, and the 7.6 $\mu \rm m$ component dominates over the 7.8 $\mu \rm m$ component in the 7.6--7.8 $\mu \rm m$ complex. Class B shows a band around 6.3 $\mu \rm m$, and the 7.8 $\mu \rm m$ component is more intense. Classes C and D are characterized by a very broad feature at around 8 $\mu \rm m$, while Class D also shows a broad 6.2 $\mu \rm m$ band. Class A objects are found to be more prevalent in observations (Canelo et al. 2018; Pino et al. 2008). Hudgins et al. (2005) conclude that N-containing PAHs (PANHs) are viable carriers: large endoskeletal PANHs ($\geq$ 50 C atoms) mainly in Class A objects and small exoskeletal PANHs ($\leq$ 50 C atoms) in Class B objects. Since the PANHs considered here do not reproduce the broad features, Classes C and D are not relevant to the present study. This work presents a theoretical IR analysis of PAHs ranging in size from naphthalene (10 C atoms) to circumcoronene (54 C atoms) containing nitrogen in their structure, in neutral, cationic and protonated forms. Four PANH variants have been studied -- exo N-PAH, NH-PAH, NH$_2$-PAH and endo N-PAH (shown in Figure 1). NH and NH$_2$ embedded in the PAH ring are reported for the first time. Protonated forms of the PANH variants considered here are also studied for the first time, as are the larger neutrals of exo N-PAHs and endo N-PAHs. The present study extends previous work by enlarging the sample of PAH species, including larger PAHs and N within the ring structure, which are more relevant in an astronomical context. The mid-IR spectra of exo N-PAH (Figure 2, row II) and NH-PAH (Figure 2, row III) have features similar to pure PAHs, i.e., the C-H stretch features (near 3.3 $\mu \rm m$) and the C-H out-of-plane features (11 - 13 $\mu \rm m$) are intense for the neutrals, and the C-C stretch features and C-H in-plane bending features (in the 6-9 $\mu \rm m$ region) are strong for the ions. Apart from the intensity, the positions of the features also match well with those of pure PAHs. NH-PAH is an exception, as it shows a strong band at 8.54 $\mu \rm m$ for C$_{54}$H$_{18}$. The exo N-PAH variant of C$_{54}$H$_{18}$ matches the Class B positions of the 6.2 $\mu \rm m$ and 7.7 $\mu \rm m$ UIR bands. In contrast, large endoskeletal PANH cations (Hudgins et al. 2005) and protonated PANHs (Alvaro Galu\'e et al. 2010) tend to match the Class A positions of the UIR bands. Our results show that the protonated exo N-PAHs and NH-PAHs show a feature at 7.4 $\mu \rm m$, at a much shorter wavelength than the 7.7 $\mu \rm m$ UIR band. It may be noted that the 7.7 $\mu \rm m$ UIR band also shows a subfeature at 7.4 $\mu \rm m$ (Peeters et al. 2002). We also note that exo N-PAH cations with increasing size and symmetry match the Class B positions of the 6.2 and 7.7 $\mu \rm m$ bands. In the case of pure PAHs, it is well known that their cations match the intensity of the UIR bands in the 6-9 $\mu \rm m$ region. In the present work, we find that neutral NH$_2$-PAHs and endo N-PAHs show intense features in this region. The intensity increases with the size of these PAHs. In both these variants, the C-H solo out-of-plane feature (corresponding to the 11.2 $\mu \rm m$ UIR band) is found at a longer wavelength for C$_{54}$H$_{18}$ (at $\sim$11.05 $\mu \rm m$) compared to pure PAHs (10.99 $\mu \rm m$). Thus, neutral NH$_2$-PAHs and endo N-PAHs with more than 50 carbon atoms match the Class A position of the 11.2 $\mu \rm m$ band better.
It should be noted that pure PAHs of similar size have these bands at wavelengths shorter than 11.0 $\mu \rm m$ (Ricca et al. 2012). The NH$_2$-PAHs and endo N-PAHs (neutrals, cations and protonated forms) also match the Class A position of the 6.2 $\mu \rm m$ UIR band well. A similar result has already been reported, but only for endo N-PAH cations (Hudgins et al. 2005). Apart from cations, our results support the presence of the neutral and protonated forms of these PAHs in the ISM. It is also seen that large neutral NH$_2$-PAHs match the position of the 7.7 $\mu \rm m$ UIR band better, which is not so consistent for the endo N-PAHs. While large neutral NH$_2$-PAHs match well the Class A positions of the 6.2, 7.6 and 11.2 $\mu \rm m$ UIR features, endo N-PAHs match the 6.2 and 11.2 $\mu \rm m$ bands fairly well. The emergence of a new feature at 2.9 $\mu \rm m$ due to the stretching of N-H bonds is observed for NH-PAH. For ionized NH-PAHs, this feature is as intense as the 3.3 $\mu \rm m$ band. The NH$_2$ symmetric and antisymmetric stretching modes in NH$_2$-PAHs produce new features near 3.0--3.1 $\mu \rm m$ for ions. This 3.0 $\mu \rm m$ band is much fainter than the 3.3 $\mu \rm m$ band. Spectra of Class A \& B objects taken by ISO/SWS (Sloan et al. 2003) and AKARI (Mori et al. 2014) do not show any positive evidence for the presence of features at 2.9--3.0 $\mu \rm m$. If these PAH variants are responsible for the UIR bands, we should be able to detect a band at 2.9 $\mu \rm m$ for NH-PAH, for which we already have a good upper limit. Thus, their contribution must not be very large, while the 3.0 $\mu \rm m$ band of NH$_{2}$-PAH neutrals is very weak and the currently available data do not place a strong constraint on it. IR spectra of NH- and NH$_{2}$-containing PAHs have been reported previously (Bauschlicher 1998) along with CN-containing PAHs, concluding that CN and NH$_{2}$ side groups in PAHs do not contribute to the UIR bands. For NH and NH$_{2}$ embedded in the PAH ring as well, the present study confirms the conclusions of Bauschlicher (1998) with a larger sample of NH-PAHs and NH$_{2}$-PAHs. On the other hand, exo N-PAHs (with N at the periphery without H) and endoskeletal PANHs do not show features at 2.9--3.1 $\mu \rm m$, since they do not have N-H bonds on the periphery, and there is no such constraint on their contribution. While N-PAHs with N at the periphery match the peak positions of the Class B UIR bands, they have an affinity to bond with H or H$^{+}$, suggesting that N incorporation inside the structure might be more relevant. The absence of N-H stretch features in the spectra of celestial objects can be accounted for either by N being present in exo N-PAHs (with N at the periphery without H) or endo N-PAHs (with N inside the structure), or by a low nitrogen abundance in PAHs. The presence of PANHs should not be ignored, as the recent discovery of a CN-containing PAH (McGuire et al. 2021) is an encouraging sign for their presence in the ISM. \begin{figure} \centering \includegraphics[height=10cm,width=8cm]{a6.2.11.2.eps}\\ \caption{Theoretical mid-IR spectra of circumcoronene in its neutral (blue), cationic (red) and protonated (green) forms. The dashed line represents the spectrum of the Orion bar taken from Sloan et al. (2003). The shaded vertical region spans the range of the observed positions of the 6.2, 7.7 and 8.6 $\mu \rm m$ bands, as given by Peeters et al. (2002), and the 11.2 $\mu \rm m$ band, as given by van Diedenhoven et al. (2004). 
} \label{Figure 4.} \end{figure} Observations reveal that the bands in the 6--9 $\mu \rm m$ region and the 11.2 $\mu \rm m$ band arise from different populations (Galliano et al. 2008). The ascription of the 3.3 and 11.2 $\mu \rm m$ bands to neutral PAHs and of those in the 6-9 $\mu \rm m$ region to ionized PAHs is well established (Peeters et al. 2002; Tielens 2008; Schmidt et al. 2009; Rigopoulou et al. 2021). Despite this, various correlations have been found between PAH bands, along with some distinct spatial morphologies (Peeters et al. 2017; Sidhu et al. 2021; Knight et al. 2021), which points to more complexity in the variations between PAH bands. The ionization fraction is determined by the balance between photoionization and recombination, and this may account for the observed variations in relative band intensities. These variations also suggest that factors other than ionization may play a role in the appearance of the UIR bands. Figure 4 compares the mid-IR spectrum of the Orion bar with those of endoskeletal PANHs and NH$_{2}$-PAHs (with N at the periphery). We clearly see a significant resemblance between the theoretically calculated spectra and the observed one. The Orion bar spectrum has almost equally strong 6.2 and 11.2 $\mu \rm m$ bands with a very strong 7.7 $\mu \rm m$ band. Several of the PAHs shown in Figure 4 exhibit similar spectral behaviour. The present study shows that neutral endoskeletal PANHs have strong 6.2 and 11.2 $\mu \rm m$ bands together. Thus, if they contribute significantly to the observed UIR bands, then there should be some N--rich astronomical regions where the 6.2 to 11.2 $\mu \rm m$ band ratio may not always be a direct indicator of the ionization degree of PAHs, as has been assumed in previous studies. The spatial variations of such regions could involve various effects, including ionization and nitrogen inclusion. Endo N-PAHs may be prospective candidates to inhabit interstellar environments where the 6.2 and 11.2 $\mu \rm m$ bands are equally strong. \section{Summary}~ We have calculated four N-containing variants of seven PAHs (ranging in size from 10 carbon atoms up to 54) with N, NH and NH$_{2}$ incorporation in neutral, cationic and protonated forms. Large exo N-PAH and endo N-PAH cations have been considered previously (Hudgins et al. 2005). Small PAHs with NH and NH$_{2}$ as side groups have also been studied (Bauschlicher 1998), with the conclusion that they contribute negligibly to the UIR bands. In the present study, NH and NH$_{2}$ have been incorporated in the outer ring of the PAH molecules. Neutral and protonated PANHs also match some of the UIR bands well, along with PANH cations (Hudgins et al. 2005), suggesting that the interstellar PANH population could be a mixture of these. The present work covers all the general UIR bands and their possible attribution to PANHs, while Hudgins et al. (2005) focus mainly on the 6.2 $\mu \rm m$ band. The present work shows that PANHs may contribute significantly to the 11.2 $\mu \rm m$ band in some interstellar regions. PANHs may also contribute to the 7.7 and 8.6 $\mu \rm m$ complex, but a more detailed study is required to analyse the quantitative contribution. The main conclusions of this study are summarized below: \vspace{1mm} 1. Ionization increases the intensities of the bands in the 6--9 $\mu \rm m$ region for the exo N-PAH and NH-PAH variants, and the intensities become stronger with increasing size, which is consistent with previous findings (Hudgins et al. 2005).
The present study confirms these findings for a larger sample of PAH species. The present study also suggests that exo N-PAH cations may show the Class B UIR positions of the 6.2 and 7.7 $\mu \rm m$ bands. 2. The spectra of PANH neutrals in the 6--9 $\mu \rm m$ region show larger intensities when N is bonded with two H atoms (NH$_2$-PAH) and also when N is located within the ring (endo N-PAH). This behavior for endo N-PAHs is reported for the first time, and for NH$_2$-PAHs, it agrees with Bauschlicher (1998). The present study confirms their conclusions for larger PAHs and also for NH$_2$ within the ring, suggesting further that NH$_2$-PAHs and endo N-PAHs of larger size (C atoms $>$ 50) may produce the 6.2, 7.7 and 11.2 $\mu \rm m$ features as Class A UIR bands. 3. New features at $\sim$2.9 $\mu \rm m$ and 3.0--3.1 $\mu \rm m$ arise due to N-H stretching in NH-PAH and NH$_2$-PAH, respectively. Both features look stronger than the 3.3 $\mu \rm m$ band for NH-PAH and NH$_2$-PAH ions (Figure 3), while the 3.0-3.1 $\mu \rm m$ band of neutral NH$_2$-PAH is very faint and sensitive observations are required to detect it. These features have not been detected in observations as yet, although sensitive observations available in this range may be limited. 4. The absence of N-H stretching features in observations constrains the contribution of NH-PAHs and NH$_2$-PAHs, which is consistent with the conclusions of Bauschlicher (1998) for larger PAHs as well. 5. Large endo N-PAH neutrals and ions produce strong 6.2 and 11.2 $\mu \rm m$ bands. This behavior is also present in the spectra of NH$_2$-PAHs, suggesting that there might be some regions of nitrogen dominance in the ISM where the 6.2 and 11.2 $\mu \rm m$ band ratio loses its exclusive role as an indicator of PAH ionization. 6. Endo N-PAH cations with increasing size account for the Class A position of the 6.2 $\mu \rm m$ band, which confirms the conclusion of Hudgins et al. (2005) for cations. Here, we find that protonated (additional H$^{+}$ at C) endo N-PAHs with increasing size and symmetry of their parent PAH molecules also show the same behavior as the cations. \vspace{1mm} N at the periphery of PAHs has an affinity to bond with H or H$^{+}$ to form NH-PAH and NH$_{2}$-PAH. The absence of their corresponding stretching features in observations constrains the contribution of NH- and NH$_{2}$-related PAHs. On the other hand, endoskeletal PANHs are free from this constraint. Further theoretical and experimental investigations are required for a more extensive understanding. \section*{Acknowledgements} We are thankful to the anonymous reviewer for comments that helped in bringing clarity to the manuscript. AV acknowledges a research fellowship from DST SERB EMR grant, 2017 (SERB-EMR/2016/005266). AP acknowledges financial support from DST SERB EMR grant, 2017 (SERB-EMR/2016/005266), IoE incentive grant, BHU (incentive/2021-22/32439), Banaras Hindu University, Varanasi and thanks the Inter-University Centre for Astronomy and Astrophysics, Pune for associateship. The authors also acknowledge support from DST JSPS grant (DST/INT/JSPS/P-238/2017). MB acknowledges JSPS for research support (Fellowship ID P19029). TO is supported by a Grant-in-Aid for Scientific Research of JSPS (18K03691).
1,108,101,562,711
arxiv
\section{Introduction} Nuclear Magnetic Resonance (NMR) allows information to be relayed through magnetic nuclei in a non-destructive and powerful approach to structural elucidation. It is a fundamental tool in a broad range of scientific disciplines and is a cornerstone of modern spectroscopy. NMR spectra yield a wealth of information, the most commonly reported property being the chemical shift. This parameter relates an externally applied magnetic field to the resulting change in the local electronic environment of the magnetic nuclei, thereby providing key insight into the underlying atomic structure. NMR J-coupling or indirect nuclear spin-spin coupling is an indirect interaction of the nuclear magnetic moments mediated by the bonding electrons. It is manifested as the fine-structure in NMR spectra, providing a direct measure of bond-strength and a map of the connectivities of the system. The J-coupling mechanism is an essential component of many NMR experiments.\cite{levitt} In solution-state, J-coupling measurements can often be obtained from one dimensional spectra where the multiplet splitting in the peaks is clearly resolved. However, in the solid-state this is not the case as these splittings are typically obscured by the broadenings from anisotropic interactions. Fortunately this technical challenge has not prevented the determination of J-coupling in the solid-state, as recent work employing spin-echo Magic Angle Spinning (MAS) techniques\cite{duma04} has resulted in accurate measurements of J-coupling in both inorganic materials\cite{amoureux05,coelho06,cadars07, coelho07} and molecular crystals.\cite{lai06, brown04,brown02,brown02b,pham07} In combination with the advances in solid-state experiments, there has also been an increased interest from the biomolecular community as J-coupling has been found to be a direct measure of hydrogen bond strength.\cite{dingley98,dingley05, pham07} Both of these factors have provided a strong impetus to develop first principles approaches to compute the NMR J-coupling constants in order to support experimental work, particularly for solid-state systems. For finite systems, NMR parameters, including both chemical shifts and J-couplings, can be routinely calculated using traditional quantum chemistry approaches based on localised orbitals.\cite{vaara02, helgaker99} Such calculations have been widely applied to assign the solution-state NMR spectra of molecular systems and establish key conformational and structural trends.\cite{nmr_book} In particular NMR J-couplings have been used to quantify hydrogen bonding\cite{grzesiek04} in biological systems. In order to apply these techniques to solid-state NMR, it is necessary to devise finite clusters of atoms which model the local environment around a site of interest in the true extended structure. While this has led to successful studies of NMR chemical shifts in systems such as molecular crystals,\cite{facelli93} supra-molecular assemblies\cite{ochsenfeld01} and organo-metallic compounds,\cite{salzmann98} it is clear that there are advantages in an approach that inherently takes account of the long-range electrostatic effects in extended systems. This observation has led to the recent development of the Gauge Including Projector Augmented Wave (GIPAW)\cite{pickard01} method which enables NMR parameters to be calculated at all-electron accuracy within the planewave-pseudopotential formalism\cite{payne1} of density functional theory (DFT). 
The technique has been applied, in combination with experimental NMR spectroscopy, to systems such as minerals,\cite{profeta03,ashbrook06,farnan03} glasses\cite{benoit05,charpentier04} and molecular crystals.\cite{yates04,yates05-malt,gervais05} In this paper we introduce a theory to compute NMR J-Couplings in extended systems using periodic boundary conditions and supercells with the planewave-pseudopotential approach. Like the GIPAW approach to calculating NMR chemical shifts, our method is formulated within the planewave-pseudopotential framework using density functional perturbation theory (DFPT). We use the projector-augmented-wave\cite{blochl1} (PAW) reconstruction technique to calculate J-couplings with all-electron accuracy. In the following section we discuss the physical mechanism of the indirect spin-spin interaction, the basis of the PAW approach and the supercell technique. In Sections \ref{sec:mag} and \ref{sec:cur} we show how the J-coupling tensor maybe be calculated using PAW and DFPT. The method has been implemented in a parallel plane-wave electronic structure code and we discuss details of the implementation and provide validation results in Section \ref{sec:res}. \section{NMR J-Coupling}\label{sec:intro2} We consider the interaction of two nuclei, ${\rm K}$ and ${\rm L}$, with magnetic moments, ${\bm \mu}_{{\rm K}}$ and ${\bm \mu}_{{\rm L}}$, mediated by the electrons. The first complete analysis of this indirect coupling was provided by Ramsey\cite{ramsey52,ramsey53} who decomposed the interaction into four distinct physical mechanisms; two involving the interaction of the nuclear spins through the electron spin and two through the electron charge. In the absence of spin-orbit coupling i.e, for relatively light elements, the charge and spin interactions can be treated separately. We can write the magnetic field at atom ${\rm L}$ induced by the magnetic moment of atom ${\rm K}$ as \begin{eqnarray}\label{eq:b_ind} {\bf B}^{(1)}_{\rm in}({\bf R}_{{\rm L}}) & = & \frac{\mu_{0}}{4\pi}\int \left[\frac{3({\bf m}^{(1)}({\bf r})\cdot {\bf r}_{{\rm L}}){\bf r}_{{\rm L}} - {\bf m}^{(1)}({\bf r})|{\bf r}_{{\rm L}}|^{2}}{|{\bf r}_{{\rm L}}|^{5}}\right]\,{\rm d}^{3}{\bf r} \nonumber \\ & + & \frac{\mu_{0}}{4\pi}\frac{8\pi}{3}\int {\bf m}^{(1)}({\bf r}) \delta({\bf r}_{{\rm L}})\,{\rm d}^{3}{\bf r} \nonumber \\ & + & \frac{\mu_{0}}{4\pi}\int {\bf j}^{(1)}({\bf r})\times \frac{{\bf r}_{{\rm L}}}{|{\bf r}_{{\rm L}}|^{3}}\,{\rm d}^{3}{\bf r}. \end{eqnarray} ${\bf r}_{{\rm L}} = {\bf R}_{{\rm L}} - {\bf r}$, where ${\bf R}_{{\rm L}}$ is the position of nucleus ${\rm L}$, $\mu_{0}$ is the permeability of a vacuum and $\delta$ is the Dirac delta function. ${\bm \mu}_{{\rm K}}$ interacts with the electron spin through a magnetic field generated by a Fermi-contact term, which is due to the finite probability of the presence of an electron at the nucleus, and a spin-dipolar interaction. Both of these terms give rise to a first order spin magnetisation density, ${\bf m}^{(1)}({\bf r})$. This magnetisation density then induces a magnetic field at the receiving nucleus by the same mechanisms, which in this case are given respectively by the first and second terms of Eqn.~\ref{eq:b_ind}. The interaction between $\bm{\mu}_{{\rm K}}$ and the electronic charge gives rise to an induced current density. To first-order this is given by ${\bf j}^{(1)}({\bf r})$ and can be divided into a paramagnetic and a diamagnetic contribution. 
The J-coupling tensor, $\overleftrightarrow{\bf J}_{{\rm LK}}$, between ${\rm L}$ and ${\rm K}$, can be related to the induced field by \begin{equation}\label{eq:J} {\bf B}^{(1)}_{\rm in}({\bf R}_{{\rm L}}) = \frac{2\pi}{\hbar\gamma_{{\rm L}}\gamma_{{\rm K}}}{\overleftrightarrow{\bf J}}_{{\rm LK}} \cdot {\bm \mu}_{{\rm K}}, \end{equation} where $\gamma_{{\rm L}}$ and $\gamma_{{\rm K}}$ are the gyromagnetic ratios of nuclei ${\rm L}$ and ${\rm K}$. Although the physical interpretation of J-coupling is simplified by considering the interaction in terms of a responding and a perturbing nucleus, it is a symmetric coupling and either atom ${\rm L}$ or ${\rm K}$ can be considered as the perturbing site. Experimental interest is focused primarily on the isotropic coupling constant, ${\rm J}^{n}_{{\rm LK}}$, which is obtained from the trace of ${\overleftrightarrow{\bf J}}_{{\rm LK}}$ and is measured in Hz. The superscript, $n$, denotes the order of the coupling in terms of the number of bonds separating the coupled nuclei. In a typical NMR experiment J-coupling can be measured across a maximum of three bonds.\cite{marquez01,edden04} In this paper we concentrate solely on obtaining the isotropic or scalar value. To calculate ${\overleftrightarrow{\bf J}_{{\rm KL}}}$ we obtain ${\bf m}^{(1)}$ and ${\bf j}^{(1)}$ within density functional perturbation theory using a planewave expansion for the wavefunctions with periodic boundary conditions and pseudopotentials to represent the ionic cores. The use of pseudopotentials generates a complication as the J-coupling tensor depends critically on the wavefunction in the regions close to the perturbing and receiving nuclei, precisely the regions where the pseudo-wavefunctions have a non-physical form. To compensate for this we perform an all-electron reconstruction of the valence wavefunctions in the core region using Bl\"{o}chl's projector augmented wave (PAW) scheme.\cite{walle1} Within this scheme, the expectation value of an operator $O$, applied to the all-electron wavefunctions, $\ket{\psi}$, is expressed in terms of the pseudised wavefunctions, $\ket{\widetilde{\psi}}$, as: $\bracket{\psi}{O}{\psi} = \bracket{\widetilde{\psi}}{\widetilde{O}}{\widetilde{\psi}}$. Here, for an all-electron local or semi-local operator, $O$, the corresponding pseudo-operator, $\widetilde{O}$, is given by \begin{eqnarray}\label{eq:paw} \widetilde{O}& = & O + \sum_{{\bf R},n,m}\ket{\widetilde{p}_{{\bf R},n}}\left[\bracket{\phi_{{\bf R},n}}{O}{\phi_{{\bf R},m}}\right. \nonumber \\ & & - \bracket{\widetilde{\phi}_{{\bf R},n}}{O}{\widetilde{\phi}_{{\bf R},m}}\left.\right]\bra{\widetilde{p}_{{\bf R},m}}. \end{eqnarray} ${\bf R}$ labels the atomic site, or augmentation region, and $n$ and $m$ are composite indexes which account for the angular momentum channels and the number of projectors. $|\phi_{{\bf R},n}\rangle$ are the all-electron partial waves obtained as eigenstates of an atomic calculation within $r_{c}$, the pseudopotential core radius and $|\widetilde{\phi}_{{\bf R},n}\rangle$ are the corresponding pseudo partial waves. $|\widetilde{p}_{{\bf R},n}\rangle$ are the localised projectors which weight the superposition of partial waves where $\langle\widetilde{p}_{{\bf R},n}|\widetilde{\phi}_{{\bf R}',m}\rangle = \delta_{{\bf RR}'}\delta_{nm}$. 
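To illustrate how Eqn.~\ref{eq:paw} is used in practice, the following Python sketch reconstructs the all-electron expectation value of a local operator from pseudo quantities for a single augmentation region. The array names (projector coefficients and partial-wave matrix elements) are our own illustrative labels, assumed to be supplied by the underlying electronic-structure code; this is a schematic of the reconstruction, not an excerpt from any particular implementation.
\begin{verbatim}
import numpy as np

def paw_expectation(o_smooth, c, o_ae, o_ps):
    """All-electron <psi|O|psi> reconstructed from pseudo quantities
    for one augmentation region (cf. Eqn. 3).

    o_smooth : <psi~|O|psi~> evaluated with the smooth pseudo-wavefunction
    c        : projector coefficients c_n = <p~_{R,n}|psi~>
    o_ae     : matrix <phi_{R,n}|O|phi_{R,m}> over all-electron partial waves
    o_ps     : matrix <phi~_{R,n}|O|phi~_{R,m}> over pseudo partial waves
    """
    # sum_nm c_n^* [O_ae - O_ps]_nm c_m  (np.vdot conjugates its first argument)
    correction = np.vdot(c, (o_ae - o_ps) @ c)
    return o_smooth + correction.real   # expectation of a Hermitian O is real
\end{verbatim}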
The PAW method has been used to calculate several all-electron properties from pseudopotential calculations including: EPR hyperfine parameters,\cite{walle1} electric field gradient tensors\cite{efg1} and Electron Energy Loss Spectroscopy.\cite{eels1} To calculate J-couplings in the solid-state using periodic boundary conditions, the perturbing nucleus can be viewed similar to a defect in a defect calculation. This allows us to use the standard technique of constructing supercells from the unit cell which are large enough to inhibit the interaction between the periodic defects or perturbations. This corresponds to extending the system-size to facilitate the decay of the induced magnetisation and current densities within the simulation cell. Figure. \ref{fig:supercell} is a schematic of a $2\times2\times2$ supercell constructed from eight unit cells. The perturbing atom now lies at the corner of a much larger cell which decreases the interaction between the perturbation and its periodic image. This approach works very well for localised properties such as J-coupling. To calculate the J-coupling for molecules, we use a vacuum supercell technique. In both cases, the J-couplings must be converged with respect to the cell-size. \begin{figure} \includegraphics*[width=8.0cm]{supercell.eps} \caption{\label{fig:supercell}Schematic of the supercell technique. The unit cell is on the left, a 2$\times$2$\times$2 supercell of the unit cell is on the right. This supercell doubles the distance between the perturbing atom (black) and its periodic image in the next cell.} \end{figure} \section{The Spin Magnetisation Density}\label{sec:mag} We now obtain the contribution to the J-coupling tensor which arises from the interaction of the nuclear spins mediated by the electron spin. We first obtain an expression for the pseudo-Hamiltonian in the presence of a perturbing nuclear spin and show how it can be used to obtain the induced magnetisation density. We then use the magnetisation density to calculate the magnetic field induced at the receiving nucleus. \subsection{Pseudo-Hamiltonian} The all-electron Hamiltonian for a system containing ${\rm N}$ magnetic moments which interact through the electron spin, ${\bf S}$, is expanded to first order in the magnetic moment of the perturbing site, ${\bm \mu}_{{\rm K}}$, to give \begin{equation}\label{eq:ae-1st} {\rm H} = \frac{1}{2}{\bf p}^{2} + {\rm V}^{(0)} ({\bf r}) + {\rm V}^{(1)}({\bf r}) + {\rm H}_{\rm SD} + {\rm H}_{\rm FC}. \end{equation} where \begin{equation} {\rm H}_{\rm SD}=g\beta{\bf S}\cdot {\bf B}_{{\rm K}}^{\rm SD}, \end{equation} and \begin{equation} {\rm H}_{\rm FC}=g\beta{\bf S}\cdot {\bf B}_{{\rm K}}^{\rm FC}. \end{equation} ${\bf B}_{{\rm K}}^{\rm SD}$ is the magnetic field generated by a spin-dipole interaction, \begin{equation} {\bf B}_{{\rm K}}^{{\rm SD}} = \frac{\mu_{0}}{4\pi}\frac{3{\bf r}_{{\rm K}}({\bf r}_{{\rm K}} \cdot{\bm \mu}_{{\rm K}}) - r_{{\rm K}}^{2}{\bm \mu}_{{\rm K}}{\cal I}}{|r_{{\rm K}}|^{5}}, \end{equation} and ${\bf B}_{{\rm K}}^{\rm FC}$ is the Fermi contact interaction, \begin{equation} {\bf B}_{{\rm K}}^{{\rm FC}} = \frac{8\pi}{3}\delta({\bf r}_{{\rm K}}){\bm \mu}_{{\rm K}}. \end{equation} We have defined ${\bf r}_{{\rm K}} = {\bf R}_{{\rm K}}-{\bf r} $, where ${\bf R}_{{\rm K}}$ is the position of nucleus ${\rm K}$ and ${\cal I}$ is the identity matrix. Here ${\rm V}^{(0)} ({\bf r})$ is the ground-state all-electron local potential and ${\rm V}^{(1)}({\bf r})$ is the corresponding first order variation. 
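As a side illustration, the classical spin-dipolar field ${\bf B}_{{\rm K}}^{\rm SD}$ defined above can be evaluated numerically at a set of points; a minimal sketch in SI units is given below. The delta-function Fermi-contact field cannot be represented on a grid in this way and is instead handled through the matrix elements discussed in the text; the function is purely illustrative and is not part of the method itself.
\begin{verbatim}
import numpy as np

MU0_OVER_4PI = 1.0e-7  # SI

def b_spin_dipole(r, R_K, mu_K):
    """Spin-dipolar field B_K^SD at points r (shape (N, 3), metres) from a point
    moment mu_K (A m^2) located at R_K; note that it diverges as r approaches R_K."""
    r_K = R_K - r                                   # r_K = R_K - r, as in the text
    d = np.linalg.norm(r_K, axis=1)
    dot = r_K @ mu_K
    return MU0_OVER_4PI * (3.0 * r_K * dot[:, None]
                           - mu_K * (d**2)[:, None]) / (d**5)[:, None]
\end{verbatim}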
The latter term is due to the change in magnetisation density induced by ${\bm \mu}_{{\rm K}}$. The perturbation does not give rise to a first order change in the charge density and so there is no change in the Hartree potential at linear order. This can be understood by considering the effect of time reversal; the charge density is even under time inversion, while the spin-magnetisation and magnetic field change sign. As a result, the perturbation does not induce a first-order change in the charge density; there is, however, a first-order induced spin-magnetisation density. ${\rm V}^{(1)}({\bf r})$ therefore accounts for the first-order variation of the exchange-correlation term which we label as ${\rm H}_{\rm xc}^{(1)}$. We now use the PAW transformation (Eqn.~\ref{eq:paw}) to obtain the pseudo-Hamiltonian. As Eqn.~\ref{eq:paw} does not contain any field dependence it is sufficient to apply it to each term of Eqn.~\ref{eq:ae-1st} individually. The pseudo-Hamiltonian at zeroth order in the perturbation is \begin{equation}\label{eq:h-zero} \widetilde{\rm H}^{(0)} = \frac{1}{2}{\bf p}^{2} + {\rm V}_{\rm loc}({\bf r}) + \sum_{{\bf R}}{\rm V}^{{\bf R}}_{\rm nl}, \end{equation} where ${\rm V}_{\rm loc}({\bf r})$ is the local part of the pseudopotentials, which includes the self-consistent part of the Hamiltonian, and ${\rm V}^{{\bf R}}_{\rm nl}$ is the non-local part, which is given by ${\rm V}^{{\bf R}}_{\rm nl} = \sum_{n,m}\ket{\widetilde{p}_{{\bf R},n}} a^{{\bf R}}_{n,m}\bra{\widetilde{p}_{{\bf R},m}}$. $a^{\bf R}_{n,m}$ are the strengths of the nonlocal potential in each channel at each ionic site. Collecting terms to linear order in the perturbation gives, \begin{equation}\label{eq:h1_sd} \widetilde{\rm H}^{(1)} = \widetilde{\rm H}_{\rm xc}^{(1)}+ \widetilde{\rm H}_{\rm SD} + \widetilde{\rm H}_{\rm FC}. \end{equation} $\widetilde{\rm H}_{\rm SD}$ describes the spin-dipolar interaction induced by ${\bm \mu}_{{\rm K}}$ and is given by \begin{equation}\label{eq:h_sd} \widetilde{\rm H}_{\rm SD} = g\beta{\bf S}\cdot {\bf B}_{{\rm K}}^{\rm SD}+ g\beta{\bf S}\cdot \Delta {\bf B}_{{\rm K}}^{\rm SD}. \end{equation} The first term on the right-hand side is the all-electron operator and the second term is the augmentation to this, \begin{eqnarray}\label{eq:delta_b_sd} \Delta {\bf B}_{{\rm K}}^{\rm SD}& = &\sum_{n,m}\ket{\widetilde{p}_{{\bf R},n}} \left[\right.\bracket{\phi_{{\bf R},n}}{{\bf B}_{{\rm K}}^{\rm SD}}{\phi_{{\bf R},m}} \\ & & - \bracket{\widetilde{\phi}_{{\bf R},n}}{{\bf B}_{{\rm K}}^{\rm SD}} {\widetilde{\phi}_{{\bf R},m}}\left.\right]\bra{\widetilde{p}_{{\bf R},m}}, \nonumber \end{eqnarray} with ${\bf R} = {\bf R}_{{\rm K}}$. In Eqn.~\ref{eq:h_sd} we have only included the augmentation of the spin-dipolar operator at the site of the perturbing atom. This on-site approximation is fully justified given the localised nature of this operator. $\widetilde{\rm H}_{\rm FC}$ is the Fermi-contact operator and can be constructed in a similar manner to the spin-dipole operator, giving an all-electron and an augmentation contribution. However, as the Fermi-contact operator contains a Dirac delta-function and is therefore localised within the augmentation region, $\widetilde{\rm H}_{\rm FC}$ can be simplified considerably.
The pseudo-partial waves and projectors, $\sum_{n}\ket{\widetilde{\phi}_{n}}\bra{\widetilde{p}_{n}}$, form a complete set within the augmentation region, which enables us to rewrite the all-electron contribution in terms of the pseudo-partial waves and so we can equivalently express the operator as \begin{equation}\label{eq:h_fc} \widetilde{\rm H}_{\rm FC} = g\beta{\bf S}\cdot\sum_{n,m}\ket{\widetilde{p}_{{\bf R},n}} \bracket{{\phi}_{{\bf R},n}}{{\bf B}_{{\rm K}}^{\rm FC}}{{\phi}_{{\bf R},m}}\bra{\widetilde{p}_{{\bf R},m}}, \end{equation} where ${\bf R} = {\bf R}_{{\rm K}}$. This form is more suitable for a practical calculation as it avoids an explicit representation of the delta-function. \subsection{Magnetisation Density} To construct the magnetisation density, we define ${\bf m}_{i}^{(1)}({\bf r})$ to be the linear response to the magnetic field, ${\bf B}_{i}$, induced along the direction ${\bf u}_{i}$ by the spin-dipolar and Fermi-contact interactions. The total magnetisation density is obtained as ${\bf m}^{(1)} = \sum_{i=x,y,z} {\bf m}_{i}^{(1)}({\bf r})$, the sum running over the Cartesian directions. By choosing ${\bf u}_{i}$ as the spin quantisation axis, ${\rm H}^{(1)}$ is diagonal in the spin-up and spin-down basis. The eigenstates of ${\rm H}^{(0)} + {\rm H}^{(1)} $ are also eigenstates of ${\bf u}_{i}\cdot {\bf S}$ and so the magnetisation density is parallel to ${\bf u}_{i}$, giving \begin{equation} {\bf m}_{i}^{(1)}({\bf r}) = [{\bf u}_{i}\cdot{\bf m}_{i}^{(1)}({\bf r})]{\bf u}_{i} = {m}_{i}^{(1)}({\bf r}){\bf u}_{i}. \end{equation} Here \begin{equation}\label{eq:m_1} {m}_{i}^{(1)}({\bf r}) = g\beta\left[{n}_{i,\uparrow}^{(1)}({\bf r}) - {n}_{i,\downarrow}^{(1)}({\bf r})\right] = 2g\beta{n}_{i,\uparrow}^{(1)}({\bf r}), \end{equation} where g and $\beta$ were defined previously and ${n}_{i,\sigma}^{(1)}$ is the induced density for either spin-up or spin-down. The simplification of the magnetisation density in this way is a consequence of time reversal symmetry, namely the absence of a first-order charge density. This means that the spin-up and spin-down ground-state wavefunctions are equivalent, so that $|\widetilde{\psi}^{(0)}_{\uparrow o}\rangle = |\widetilde{\psi}^{(0)}_{\downarrow o}\rangle$. Also, the linear variations of the wavefunctions induced by the spin magnetisation are related through $|\widetilde{\psi}^{(1)}_{\uparrow o}\rangle = -|\widetilde{\psi}^{(1)}_{\downarrow o}\rangle$. Within PAW, Eqn.~\ref{eq:m_1} becomes \begin{eqnarray}\label{eq:mag_den} {m}_{i}^{(1)}({\bf r})& = & 4g\beta {\rm Re} \sum_{ o} \langle\widetilde{\psi}^{(1)}_{\uparrow o}|{\bf r} \rangle\langle{\bf r}|\widetilde{\psi}^{(0)}_{\uparrow o}\rangle \nonumber \\ & & + \sum_{{\bf R},n,m} \braket{\widetilde{\psi}_{ o\uparrow}^{(1)}} {\widetilde{p}_{{\bf R},n}}\Bigl[ \braket{\phi_{{\bf R},n}}{{\bf r}}\braket{{\bf r}}{\phi_{{\bf R},m}} \nonumber \\ & & - \braket{\widetilde{\phi}_{{\bf R},n}}{{\bf r}}\braket{{\bf r}}{\widetilde{\phi}_{{\bf R},m}}\Bigr] \braket{\widetilde{p}_{{\bf R},m}}{\widetilde{\psi}_{ o\uparrow}^{(0)}}. \\ \nonumber \end{eqnarray} ${\rm Re}$ signifies taking the real component; $|\widetilde{\psi}^{(0)}_{\uparrow o}\rangle$ are the eigenstates of the unperturbed Hamiltonian, ${\rm H}^{(0)}$; $|\widetilde{\psi}^{(1)}_{\uparrow o}\rangle$ are the perturbed pseudowavefunctions; and $o$ indexes the occupied bands. The first term on the right hand side of Eqn.~\ref{eq:mag_den} is the pseudo-magnetisation density $\widetilde{m}_{i}^{(1)}$, and the second term is the corresponding augmentation.
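The pseudo part of Eqn.~\ref{eq:mag_den} amounts to a simple band sum on the real-space grid. The sketch below uses placeholder wavefunction arrays (the perturbed states are obtained as described in the next paragraph) and evaluates only the first term; the augmentation term would be added from the projector overlaps in the same way as in the previous sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_occ, ngrid = 4, 4096          # occupied bands, real-space grid points
g, beta = 2.0023, 0.5           # electron g-factor, Bohr magneton (a.u.)

psi0 = (rng.standard_normal((n_occ, ngrid))
        + 1j * rng.standard_normal((n_occ, ngrid)))   # placeholder psi^(0)
psi1 = (rng.standard_normal((n_occ, ngrid))
        + 1j * rng.standard_normal((n_occ, ngrid)))   # placeholder psi^(1)

# m~(r) = 4 g beta Re sum_o psi1_o*(r) psi0_o(r)
m1 = 4.0 * g * beta * np.real(np.sum(psi1.conj() * psi0, axis=0))
print(m1.shape)
\end{verbatim}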
For simplicity, we drop the spin indexing on the ground-state wavefunctions from now on as the spin-dependence enters only through the perturbation. To calculate $|\widetilde{\psi}^{(1)}_{\uparrow o}\rangle$ we employ a Green's function method where \begin{equation} \label{eq:green_mag} |\widetilde{\psi}^{(1)}_{\uparrow o}\rangle = {\cal G}(\epsilon_{o})\widetilde{\rm H}_{i}^{(1)} \ket{\widetilde{\psi}^{(0)}_{o}} = \sum_{e}\frac{\ket{\widetilde{\psi}_{e}^{(0)}}\bra{\widetilde{\psi}_{e}^{(0)}}}{\epsilon_{o}- \epsilon_{e}}\widetilde{\rm H}_{i}^{(1)} \ket{\widetilde{\psi}^{(0)}_{o}}. \end{equation} $\widetilde{\rm H}_{i}^{(1)}$ is the first order Hamiltonian given by Eqn.~\ref{eq:h1_sd} with the spin quantised along the ${\bf u}_{i}$ direction. ${\cal G}$ is the Green's function, and $\epsilon_{o}$ and $\epsilon_{e}$ are the eigenvalues of the occupied and empty bands. Rather than explicitly sum over the empty states, we multiply Eqn.~\ref{eq:green_mag} through by $(\epsilon_{o} - \widetilde{\rm H}^{(0)})$ and use the projection onto the empty states, which can be constructed from the occupied bands alone. We define ${\cal P} = \sum_{e}\ket{\widetilde{\psi}_{e}^{(0)}}\bra{\widetilde{\psi}_{e}^{(0)}} = 1 - \sum_{ o}\ket{\widetilde{\psi}_{ o}^{(0)}} \bra{\widetilde{\psi}_{ o}^{(0)}}$ and rewrite Eqn.~\ref{eq:green_mag} as \begin{equation}\label{eq:green} (\epsilon_{ o} - \widetilde{\rm H}^{(0)})\ket{\widetilde{\psi}_{\uparrow o}^{(1)}} = {\cal P}{\widetilde{\rm H}}_{i}^{(1)}\ket{\widetilde{\psi}_{ o}^{(0)}}. \end{equation} This is then solved using a conjugate gradient minimisation scheme, with an additional self-consistency condition to account for the dependence of ${\rm H}_{\rm xc}^{(1)}$ on the spin-density. For a more detailed account of this type of approach, see Ref.~\onlinecite{gonze97}. \subsection{Induced Magnetic Field} The induced magnetic field at atom ${\rm L}$, and subsequently the J-coupling between ${\rm L}$ and ${\rm K}$, due to the spin magnetisation is obtained by combining Eqns. \ref{eq:b_ind} and \ref{eq:mag_den} to give \begin{equation} {\bf B}^{(1)}_{{\bf m}^{(1)}}({\bf R}_{{\rm L}}) = \widetilde{\bf B}^{(1)}_{\rm SD}({\bf R}_{{\rm L}}) + \Delta{\bf B}^{(1)}_{\rm SD}({\bf R}_{{\rm L}}) + \Delta{\bf B}^{(1)}_{\rm FC}({\bf R}_{{\rm L}}), \end{equation} where we have taken advantage of the linearity of Eqn.~\ref{eq:b_ind} to yield three separate terms. The first term is the pseudo spin-dipolar contribution, the second term is the spin-dipole augmentation, and the third term is the Fermi-contact augmentation. The notation used here implicitly assumes that a rotation over the spin-axis has been performed. In practice, the pseudo-spin dipole term can be constructed from the Fourier transform of Eqn.~\ref{eq:b_ind}, \begin{equation} {\bf B}^{(1)}_{\rm SD}({\bf G}) = -\frac{\mu_{0}}{3}\left[\frac{3(\widetilde{\bf m}^{(1)}({\bf G})\cdot{\bf G}){\bf G} -\widetilde{\bf m}^{(1)}({\bf G})|{\bf G}|^{2}}{ G^{2}}\right], \end{equation} where $\widetilde{\bf m}^{(1)}({\bf G})$ is the Fourier transform of the pseudo-magnetisation density and ${\bf G}$ is a reciprocal space lattice vector. The ${\bf G}=0$ term is neglected as its contribution is expected to be small. The induced magnetic field at atom ${\rm L}$ is then recovered by performing a slow inverse Fourier transform at the position of each responding nucleus.
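In reciprocal space the preceding construction is a few lines of array arithmetic. The sketch below builds ${\bf B}^{(1)}_{\rm SD}({\bf G})$ from hypothetical Fourier components of the pseudo-magnetisation density and evaluates the field at a receiving nucleus by the slow Fourier transform described above; normalisation factors (cell volume, unit conventions) are omitted.
\begin{verbatim}
import numpy as np

mu0 = 4.0e-7 * np.pi
rng = np.random.default_rng(2)
nG = 500
G = rng.standard_normal((nG, 3))            # reciprocal vectors, G != 0
m1_G = (rng.standard_normal((nG, 3))
        + 1j * rng.standard_normal((nG, 3)))   # m~(G), placeholder data

G2 = np.sum(G * G, axis=1)[:, None]
mdotG = np.sum(m1_G * G, axis=1)[:, None]
B_G = -(mu0 / 3.0) * (3.0 * mdotG * G - m1_G * G2) / G2

R_L = np.array([1.2, 0.7, 3.4])             # receiving nucleus (placeholder)
B_at_L = np.real(np.sum(B_G * np.exp(1j * (G @ R_L))[:, None], axis=0))
print(B_at_L)
\end{verbatim}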
The spin dipole augmentation term is obtained by using an on-site approximation and evaluating terms of the form, \begin{equation}\label{eq:b_sd_aug} \Delta{\bf B}^{(1)}_{\rm SD}({\bf R}_{{\rm L}}) = g\beta\frac{\mu_{0}}{2\pi}{\rm Re} \sum_{o}\bracket{\widetilde{\psi}_{ o}^{(0)}} {\Delta{\bf B}^{\rm SD}_{{\rm L}}}{\widetilde{\psi}_{\uparrow o}^{(1)}}. \end{equation} $\Delta{\bf B}^{\rm SD}_{{\rm L}}$ is defined in Eqn.~\ref{eq:delta_b_sd} but now the subscript indicates the responding rather than the perturbing nucleus. To evaluate this term and Eqn.~\ref{eq:delta_b_sd}, we note that $\ket{\phi_{n}}$ can be decomposed into the product of a radial ($\ket{{\rm R}_{nl}}$) and an angular ($\ket{Y_{lm}}$) term. $B_{{\rm L}}^{\rm SD}$ can also be rewritten as the product of a radial and angular component such that the computation of the augmentation term involves the on-site calculation of $\bracket{R_{nl}}{\frac{1}{r_{{\rm L}}^{3}}}{R_{n'l'}}\bracket{Y_{n}} {\frac{3{\bf r}_{{\rm L}}{\bf r}_{{\rm L}}^{\rm T}}{r_{{\rm L}}^{2}} - {\cal I}}{Y_{m}}$. The latter quantity reduces to an integral over spherical harmonics given by the Gaunt coefficients. The Fermi-contact contribution, $\Delta{\bf B}^{(1)}_{\rm FC}$, is obtained by following the same argument used in constructing $\widetilde{\rm H}_{\rm FC}$ (Eqn.~\ref{eq:h_fc}) and is given by \begin{eqnarray} &&\Delta{\bf B}^{(1)}_{\rm FC}({\bf R}_{{\rm L}}) = \nonumber \\ && \frac{4\mu_{0}}{3}{\rm Re}\sum_{o}\sum_{n,m} \braket{\widetilde{\psi}_{ o}^{(0)}}{\widetilde{p}_{{\bf R},n}} \bracket{\phi_{{\bf R},n}}{\delta({\bf r}_{{\rm L}})}{\phi_{{\bf R},m}}\braket{\widetilde{p}_{{\bf R},m}} {\widetilde{\psi}_{\uparrow o}^{(1)}}, \nonumber \\ \end{eqnarray} which is the all-electron reconstruction of the induced magnetisation density at the responding nucleus. \section{Current Density}\label{sec:cur} We now obtain the contribution to the J-coupling tensor arising from the interaction of the nuclear spins mediated by the electron charge current. The derivation of the current density is similar to that of the magnetisation density and much of the notation is conserved throughout. We first obtain an expression for the pseudo-Hamiltonian in the presence of a perturbing nuclear spin and show how it can be used to obtain the induced current density. We then use this current density to calculate the magnetic field induced at the receiving nucleus. \subsection{Pseudo-Hamiltonian} The all-electron Hamiltonian for a system of N magnetic nuclei which interact through the nuclear vector potential, ${\bf A}({\bf r}) = \sum_{{\rm N}}{\bf A}_{{\rm N}}({\bf r}) = \frac{\mu_{0}}{4\pi}\sum_{{\rm N}}\frac{{\bm \mu}_{{\rm N}} \times{\bf r}_{{\rm N}}}{|{\bf r}_{{\rm N}}|^{3}}$, can be expanded to first order in ${\bm \mu}_{{\rm K}}$ to give, \begin{equation}\label{eq:ham_orb_b} {\rm H}_{{\rm K}} = \frac{1}{2}{\bf p}^{2} + {\rm V}^{(0)}({\bf r}) + {\rm H}_{{\bf A}_{\rm K}}, \end{equation} where \begin{equation} {\rm H}_{{\bf A}_{{\rm K}}} = \frac{\mu_{0}}{4\pi}{\bf p}\cdot{\bf A}_{{\rm K}}({\bf r}), \end{equation} and ${\rm V}^{(0)}({\bf r})$ is the ground-state local potential. The perturbation does not induce a first order change in either the charge or magnetisation densities and so, unlike Eqn.~\ref{eq:ae-1st}, there is no linear variation of the self-consistent potential in Eqn.~\ref{eq:ham_orb_b}.
We have used the symmetric gauge for the vector potential and taken the natural choice of gauge-origin; namely that for the ${\rm N}$th nuclear spin the gauge origin is the ${\rm N}$th atomic site giving ${\bf A}_{{\rm N}}({\bf r})=1/2 {\bf B}({\bf r}) \times {\bf r}_{\rm N}$. This gauge-choice preserves the translational invariance of the system and is much simpler than in the otherwise analogous case of NMR chemical shielding. In the latter situation, due to the use of finite basis sets a rigid translation of the system in the uniform external magnetic field introduces an additional phase factor. For the planewave-pseudopotential approach the problem was solved by Pickard and Mauri with the development of the Gauge-Including Projector Augmented-Wave (GIPAW) approach, an extension which is unnecessary here. To obtain the pseudo-Hamiltonian we now apply the PAW transformation of Eqn.~\ref{eq:paw} to Eqn.~\ref{eq:ham_orb_b}. The zeroth order term is again given by Eqn.~\ref{eq:h-zero} and the first order term by \begin{eqnarray}\label{eq:orb_ham} \widetilde{\rm H}_{{\bf A}_{{\rm K}}}& = & \frac{\mu_{0}}{4\pi} \bm{\mu}_{\rm K}\cdot \frac{{\bf r}_{{\rm K}}\times{\bf p}}{|{\bf r}_{{\rm K}}|^{3}} + \nonumber \\ & &\frac{\mu_{0}}{4\pi} \bm{\mu}_{\rm K}\cdot \sum_{n,m}\ket{{\widetilde p}_{{\bf R},n}} \left[\bracket{\phi_{{\bf R},n}}{\frac{{\bf L}_{{\rm K}}}{|{\bf r}_{{\rm K}}|^{3}}} {\phi_{{\bf R},m}}\right. \nonumber \\ & & - \left.\bracket{\widetilde{\phi}_{{\bf R},n}} {\frac{{\bf L}_{{\rm K}}}{|{\bf r}_{{\rm K}}|^{3}}}{\widetilde{\phi}_{{\bf R},m}}\right] \bra{\widetilde{p}_{{\bf R},m}}. \end{eqnarray} The first term on the right-hand side is the all-electron component and the second term is the augmentation which is constructed at the site of the perturbing nucleus. ${\bf L}_{\rm K} = {\bf r}_{{\rm K}}\times {\bf p}$ is the angular momentum operator centred on the perturbing atomic site. \subsection{Current Density} The current density operator, ${\bf J}({\bf r})$, is given by the sum of a paramagnetic and a diamagnetic term, \begin{equation} {\bf J}({\bf r}) = {\bf J}^{\rm p}({\bf r}) + {\bf J}^{\rm d}({\bf r}), \end{equation} where the paramagnetic term is given by \begin{equation}\label{eq:jp_op} {\bf J}^{\rm p}({\bf r}) = -\left[{\bf p}\ket{\bf r}\bra{\bf r} + \ket{\bf r}\bra{\bf r}{\bf p}\right]/2, \end{equation} and the diamagnetic term is \begin{equation} {\bf J}^{\rm d}({\bf r}) = -{\bf A}({\bf r})\ket{\bf r}\bra{\bf r}. \end{equation} If we consider the current due only to the perturbing nucleus, and with our atomic choice of gauge origin, the diamagnetic term can be written as \begin{equation}\label{eq:jd_op} {\bf J}^{\rm d}_{{\rm K}}({\bf r}) = -\frac{\mu_{0}}{4\pi}\frac{{\bm \mu}_{{\rm K}}\times{\bf r}_{{\rm K}}}{r_{{\rm K}}^{3}} \ket{\bf r}\bra{\bf r}. \end{equation} By applying Eqn.~\ref{eq:paw} to both Eqns. \ref{eq:jp_op} and \ref{eq:jd_op}, we obtain the pseudo-current density operator within PAW \begin{equation} \widetilde{\bf J}({\bf r}) = {\bf J}^{\rm p}({\bf r}) + {\bf J}^{\rm d}_{{\rm K}}({\bf r}) + \left[\Delta{\bf J}^{\rm p}({\bf r}) + \Delta{\bf J}^{\rm d}_{{\rm K}}({\bf r})\right], \end{equation} where the paramagnetic augmentation operator is \begin{eqnarray} \Delta{\bf J}^{\rm p}({\bf r})& = &\sum_{{\bf R},n,m}\ket{\widetilde{p}_{{\bf R},n}}\left[\bracket{\phi_{{\bf R},n}}{{\bf J}^{\rm p}} {\phi_{{\bf R},m}}\right. 
\nonumber \\ && -\bracket{\widetilde{\phi}_{{\bf R},n}}{{\bf J}^{\rm p}} {\widetilde{\phi}_{{\bf R},m}}\left.\right]\bra{\widetilde{p}_{{\bf R},m}}, \end{eqnarray} and the corresponding diamagnetic operator is \begin{eqnarray} \Delta{\bf J}^{\rm d}_{{\rm K}}({\bf r})& = & \sum_{{\bf R},n,m}\ket{\widetilde{p}_{{\bf R},n}} \left[\bracket{\phi_{{\bf R},n}}{{\bf J}^{\rm d}_{{\rm K}} }{\phi_{{\bf R},m}}\right. \nonumber \\ & & -\bracket{\widetilde{\phi}_{{\bf R},n}}{{\bf J}^{\rm d}_{{\rm K}}} {\widetilde{\phi}_{{\bf R},m}}\left.\right] \bra{\widetilde{p}_{{\bf R},m}}. \end{eqnarray} Arranging terms in ${\bf J}({\bf r})$ to zeroth and linear order in ${\bm \mu}_{\rm K}$ gives \begin{equation}\label{eq:j_zero} {\bf J}^{(0)}({\bf r}) = {\bf J}^{\rm p}({\bf r}) + \Delta{\bf J}^{\rm p}({\bf r}), \end{equation} and \begin{equation}\label{eq:j_one} {\bf J}^{(1)}({\bf r}) = {\bf J}^{\rm d}_{{\rm K}}({\bf r}) + \Delta{\bf J}^{\rm d}_{{\rm K}}({\bf r}). \end{equation} Using Eqns.~\ref{eq:j_zero} and \ref{eq:j_one} we are now able to obtain the first-order induced current density, which within density functional perturbation theory is given by \begin{eqnarray}\label{eq:current} {\bf j}^{(1)}({\bf r})& = & 2\sum_{o} 2\,{\rm Re} \bracket{\widetilde{\psi}_{o}^{(0)}}{\widetilde{{\bf J}}^{(0)}} {\widetilde{\psi}_{o}^{(1)}} + \bracket{\widetilde{\psi}_{o}^{(0)}}{\widetilde{\bf J}^{(1)}} {\widetilde{\psi}_{o}^{(0)}}, \nonumber \\ & = & {\bf j}^{(1)}_{\rm p}({\bf r}) + {\bf j}^{(1)}_{\rm d}({\bf r}). \end{eqnarray} Here $\ket{\widetilde{\psi}_{o}^{(0)}}$ is the unperturbed wavefunction, $\ket{\widetilde{\psi}_{o}^{(1)}}$ is the perturbed wavefunction and $o$ indexes the occupied bands. The first term on the right hand side is the paramagnetic contribution to the induced current and the second term is the diamagnetic contribution. The first order wavefunction is again obtained using Eqn.~\ref{eq:green} where $\widetilde{\rm H}^{(1)}$ is now given by Eqn.~\ref{eq:orb_ham}. \subsection{Induced Magnetic Field} We can now combine Eqns.~\ref{eq:b_ind} and \ref{eq:current} to calculate the J-coupling between nuclei ${\rm L}$ and ${\rm K}$ arising from the magnetic field induced by the orbital current. The magnetic field at nucleus ${\rm L}$ due to the orbital current can be expressed as the sum of 4 terms, \begin{eqnarray}\label{eq:b_orb} {\bf B}^{(1)}_{{\bf j}^{(1)}}({\bf R}_{{\rm L}})& = &\widetilde{\bf B}_{\rm p}^{(1)}({\bf R}_{{\rm L}}) + \widetilde{\bf B}_{\rm d}^{(1)}({\bf R}_{{\rm L}}) + \Delta{\bf B}_{\rm p}^{(1)}({\bf R}_{{\rm L}}) \nonumber \\ && + \Delta{\bf B}_{\rm d}^{(1)}({\bf R}_{{\rm L}}), \end{eqnarray} the pseudised contributions from the paramagnetic and diamagnetic currents and their respective augmentation terms. To calculate the pseudised contributions to the current density we obtain the Fourier transform of $\widetilde{\bf B}_{\rm p}^{(1)}({\bf R}_{{\rm L}})$ and $\widetilde{\bf B}_{\rm d}^{(1)}({\bf R}_{{\rm L}})$ giving \begin{equation}\label{eq:cur-for} {\bf B}^{(1)}({\bf G}) = \mu_{0}\frac{i{\bf G}\times {\bf j}_{\rm p/d}^{(1)}({\bf G})}{G^{2}}, \end{equation} where ${\bf j}_{\rm p/d}^{(1)}({\bf G})$ is the Fourier transform of either the paramagnetic or diamagnetic current. To obtain the induced field at the atom site ${\rm L}$ we perform a slow Fourier transform of Eqn.~\ref{eq:cur-for}. We again note that the G=0 contribution to ${\bf B}^{(1)}({\bf G})$ is neglected as the contribution is expected to be small. 
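The reciprocal-space step of Eqn.~\ref{eq:cur-for} is directly analogous to the spin-dipolar case: only the kernel changes, and the slow Fourier transform back to the nuclear site proceeds exactly as in the sketch given in Sec.~\ref{sec:mag}. A minimal version, with placeholder Fourier components of the induced current, is:
\begin{verbatim}
import numpy as np

mu0 = 4.0e-7 * np.pi
rng = np.random.default_rng(3)
nG = 500
G = rng.standard_normal((nG, 3))            # reciprocal vectors, G != 0
j1_G = (rng.standard_normal((nG, 3))
        + 1j * rng.standard_normal((nG, 3)))   # j^(1)(G), placeholder data

G2 = np.sum(G * G, axis=1)[:, None]
B_G = mu0 * 1j * np.cross(G, j1_G) / G2     # mu0 i G x j(G) / G^2
print(B_G.shape)
\end{verbatim}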
The augmentation to the paramagnetic current is calculated using an on-site approximation (${\bf R} = {\bf R}_{\rm L}$) with \begin{eqnarray} \Delta{\bf B}_{\rm p}^{(1)}({\bf R}_{{\rm L}})& = & \frac{\mu_{0}}{4\pi}\sum_{n,m} \braket{\widetilde{\psi}_{o}^{(0)}}{\widetilde{p}_{{\bf R},n}} \left[\bracket{\phi_{{\bf R},n}}{\frac{{\bf L}_{\rm L}}{{\bf r}_{{\rm L}}^{3}}} {\phi_{{\bf R},m}}\right.\nonumber \\ && -\left.\bracket{\widetilde{\phi}_{{\bf R},n}}{\frac{{\bf L}_{\rm L}}{{\bf r}_{{\rm L}}^{3}}} {\widetilde{\phi}_{{\bf R},m}}\right] \braket{\widetilde{p}_{{\bf R},m}}{\widetilde{\psi}_{o}^{(1)}}, \end{eqnarray} where ${\bf L}$ is the angular momentum operator evaluated with respect to the augmentation regions. The augmentation to the diamagnetic current is given by \begin{eqnarray} \Delta{\bf B}_{\rm d}^{(1)}& = & \frac{\mu_{0}}{4\pi}\sum_{{\bf R},n,m} \braket{\widetilde{\psi}_{o}^{(0)}}{\widetilde{p}_{{\bf R},n}} \nonumber \\ && \left[\bracket{\phi_{{\bf R},n}}{\frac{ r_{{\rm L}} r_{{\rm K}} - {\bf r}_{{\rm L}}{\bf r}_{{\rm K}}^{\rm T}}{r_{{\rm L}}^{3}r_{{\rm K}}^{3}}}{\phi_{m}}\right.\nonumber \\ &-& \left.\bracket{\widetilde{\phi}_{{\bf R},n}}{\frac{ r_{{\rm L}} r_{{\rm K}} - {\bf r}_{{\rm L}}{\bf r}_{{\rm K}}^{\rm T}}{r_{{\rm L}}^{3}r_{{\rm K}}^{3}}}{\widetilde{\phi}_{m}}\right] \braket{\widetilde{p}_{{\bf R},m}}{\widetilde{\psi}_{o}^{(1)}}.\nonumber \\ \end{eqnarray} This is much more difficult to evaluate than any other term as an on-site approximation cannot be used due to the presence of the ${\bf r}_{{\rm K}}$ position vector within the augmentation summation. However, previous quantum chemical studies have shown that the overall contribution to the J-coupling from the diamagnetic term is very small compared with any of the other three contributions. In light of this, we have neglected this term in our current implementation. \section{Results}\label{sec:res} We have implemented our theory into a parallelised plane-wave electronic structure code.\cite{clark05} The ground-state wavefunctions and Hamiltonian are obtained self-consistently after which the isotropic J-coupling constant is calculated using the outlined approach. In our implementation we use norm-conserving Troullier-Martins pseudopotentials.\cite{troullier1} For the all-electron reconstruction we used two projectors per angular momentum channel. In the following sections we compare our approach to existing quantum chemistry approaches which use localised basis sets and with experiment. \subsection{Molecules} To validate our method we have calculated isotropic coupling constants for a range of small molecules\cite{notemol} and compared them with experiment. There are several studies of calculated J-couplings for small molecules reported in the literature, using a variety of theoretical approaches, see Ref.~\onlinecite{vaara02} and references therein. We compare our results to calculations presented by Lantto {\it et al}\cite{lantto02} which were obtained within DFT using the BLYP functional and with the Multi-configurational Self-Consistent Field (MCSCF) approach. For consistency we use the molecular geometries reported in their work. To obtain the isotropic J-coupling we use a supercell of size 1728 \AA$^{3}$ for each molecule with the exception of benzene which required a larger cell-size of 3375 \AA$^{3}$. The exchange-correlation was approximated by GGA-PBE\cite{perdew1} and an energy cut-off of 80 Ry was imposed on the planewave expansion. 
All calculations sample the Brillouin zone at the gamma-point and used norm-conserving Trouillier-Martins pseudopotentials.\cite{troullier1} \begin{figure} \includegraphics*[width=8.0cm]{graph3.eps} \caption{\label{fig:molecules}J-couplings calculated for a set of molecules. Both the MCSCF and BLYP results were taken from Ref.~\onlinecite{lantto02}. All of the experimental values were also taken from this paper. The PW(PBE) results are from the present work. The lines are obtained from a linear regression of the calculated values with experiment. All values are quoted in $10^{19}{\rm T}^{2} {\rm J}^{-1}$, the unit of the reduced coupling constant.} \end{figure} The calculated J-coupling against experiment for the molecules are shown in Fig. \ref{fig:molecules} alongside the results of a linear regression for each set of data. These results are presented as reduced spin-coupling constants, which are given by $\overleftrightarrow{\bf K}_{{\rm LK}} = \frac{2\pi\overleftrightarrow{\bf J}_{{\rm LK}}}{\hbar \gamma_{{\rm L}}\gamma_{{\rm K}}}$, and so are independent of nuclear species. The graph indicates an excellent overall agreement with both experiment and the other approaches. The accuracy of the planewave PBE calculations is comparable with the all-electron BLYP results, with regression coefficients of 0.92 and 1.08 respectively. The correlation coefficients for the BLYP, MCSCF and PW data are 0.97, 0.97 and 0.99 respectively, suggesting a smaller random error in the planewave approach. Unsurprisingly, the most accurate couplings are given by MCSCF, which provides a more comprehensive description of electron correlation but is computationally more demanding than DFT. \begin{table} \caption{\label{tab:f_ben}J-coupling[Hz] in benzene. PW(PBE) labels the current planewave approach. The BLYP, MCSCF and experimental values were taken from Ref.~\onlinecite{lantto02}. D, P, FC and SD label the diamagnetic, paramagnetic, Fermi-contact and spin-dipolar contributions respectively. All values are in Hz.} \begin{ruledtabular} \begin{tabular}{lcdddddd} $J^{n}_{{\rm KL}}$& Method& \multicolumn{1}{c}{$\mathrm{D}$}& \multicolumn{1}{c}{$\mathrm{P}$}& \multicolumn{1}{c}{$\mathrm{FC}$}& \multicolumn{1}{c}{$\mathrm{SD}$}& \multicolumn{1}{c}{$\mathrm{Total}$}& \multicolumn{1}{c}{$\mathrm{Expt[Hz]}$} \\ \hline J$_{\rm CC}^{1}$ &PW(PBE)& 0.2 &-6.9 & 58.2 & 1.9 & 53.4 &55.8 \\ &BLYP & 0.2 &-6.8 & 63.8 & 1.0 & 58.2 & \\ &MCSCF & 0.4 &-6.6 & 75.1 & 1.5 & 70.9 & \\ J$_{\rm CC}^{2}$ &PW(PBE)& 0.0 & 0.0 & -1.2 & -0.3 & -1.5 &-2.5 \\ &BLYP & 0.0 & 0.1 & 0.0 & -0.5 & -0.4 & \\ &MCSCF &-0.2 & 0.0 & -3.7 & -1.1 & -5.0 & \\ J$_{\rm CC}^{3}$ &PW(PBE)& 0.0 & 0.6 & 7.3 & 1.1 & 9.0 &10.1 \\ &BLYP &0.0 & 0.5 & 8.4 & 1.5 & 10.4 & \\ &MCSCF &0.1 & 0.4 & 16.8 & 1.8 & 19.1 & \\ J$_{\rm CH}^{1}$ &PW(PBE)& 0.5 & 0.9 &132.5 &-0.2 & 133.7 & 158.3 \\ &BLYP & & & & & 155.2 & \\ &MCSCF & & & & & 185.1 & \\ J$_{\rm CH}^{2}$ &PW(PBE)&-0.1 & 0.2 & 5.1 & 0.1 & 5.3 & 1.0 \\ &BLYP & & & & & 1.1 & \\ &MCSCF & & & & & -9.8 & \\ J$_{\rm CH}^{3}$ &PW(PBE)&-0.2 & -1.1 & 6.0 & 0.0 & 4.7 & 7.6 \\ &BLYP & & & & & 7.4 & \\ &MCSCF & & & & & 12.9 & \\ J$_{\rm CH}^{4}$ &PW(PBE)&-0.1 & 0.3 & -0.4 & 0.0 & -0.2 & -1.2 \\ &BLYP & & & & & -1.3 & \\ &MCSCF & & & & & -6.1 & \\ \end{tabular} \end{ruledtabular} \end{table} In Table \ref{tab:f_ben} we present the J-coupling values calculated for benzene. The results compare favourably with both the existing approaches and with experiment. 
The MCSCF approach systematically overestimates the J-coupling for both J$_{\rm CH}$ and J$_{\rm CC}$ compared with experiment. This is due to the use of a restricted basis set which was necessary given the size of the system, for further details see Ref.~\onlinecite{kaski96}. The decomposition of the J-coupling into the four components serves as an illustration of the relative strengths of each contribution and the trends over several bonds. Lantto {\it et al} have only presented this separation for J$_{\rm CC}$. It is clear that the Fermi-contact is the dominant mechanism in the coupling and that the diamagnetic component is consistently the smallest and is often negligible. \subsection{Crystals} Due to the difficulties encountered measuring J-coupling in solid-state systems there are very few values to be found in the literature that are suitable for validation of our approach. Recently Coelho {\it et al}.\cite{coelho06} provided an estimate for the two bond coupling between $^{29}$Si and $^{31}$P pairs in the silicophosphate Si$_{5}$O(PO$_{4}$)$_{6}$. Subsequently this was followed by a more accurate determination \cite{coelho07} which identified the four Si-O-P couplings. We have calculated NMR chemical shifts and $J^{2}_{\rm P-O-Si}$ for Si$_{5}$O(PO$_{4}$)$_{6}$ to validate our approach. The structure of Si$_{5}$O(PO$_{4}$)$_{6}$ is trigonal (a=7.869\AA, c=24.138\AA, 36 atoms per primitive cell) and contains one unique P site and three inequivalent Si sites. Two of these Si sites are 6-fold coordinated, Si$_{1}$ and Si$_{2}$, and the third site, Si$_{3}$, is four-fold coordinated. Si$_{1}$ is bonded to six equivalent oxygen atoms, Si$_{2}$ is bonded to six oxygen atoms which are comprised of two distinct sites. Si$_{3}$ is bonded to three equivalent oxygen atoms and one oxygen from an Si$_{3}$-O tetrahedron. Thus there is one $^{31}$P chemical shift, three $^{29}$Si chemical shifts and four unique $^{2}$J$_{\rm P-O-Si}$ couplings. We obtained the structure from the Chemical Database Service at Daresbury.\cite{cds} Prior to calculating the NMR parameters, we performed a full geometry optimisation on the structure, using a planewave cut-off of 70 Ryd and norm-conserving pseudopotentials. The GGA-PBE\cite{perdew1} exchange-correlation functional was used and a Monkhorst-Pack k-point grid with a maximum of 0.1 \AA$^{-1}$ between sampling points. We calculated the NMR chemical shifts using the GIPAW\cite{pickard01} approach with the same parameters used for the geometry optimisation. The J-coupling between P and Si was obtained using the approach outlined above. A slightly higher maximum planewave energy (80Ryd) was required to give J-couplings converged to within 0.1Hz. We tested the convergence of the induced magnetization density and current density with respect to supercell size. The results of these calculations for three cell sizes are presented in table \ref{tab:sipo_cell}. From this we can see that both the induced magnetization and current densities have decayed substantially within the single unit cell. The largest of these calculations (144 atoms) was parallelised over 16 dual-core AMD processors and took 45 hours to run. The groundstate calculation took approximately 14 hours and the J-coupling terms; Fermi-contact, spin-dipolar and orbital, required 3.5 hours, 15.5 hours and 11.4 hours respectively. 
\begin{table} \caption{\label{tab:sipo_cell}Calculated J-coupling for silicophosphate Si$_{5}$O(PO$_{4}$)$_{6}$ using the unit cell and two supercells constructed with 2$\times$1$\times$1 and 2$\times$2$\times$1 unit cells.} \begin{ruledtabular} \begin{tabular}{llll} Coupling & 1$\times$1$\times$1 & 2$\times$1$\times$1 & 2$\times$2$\times$1 \\\hline\hline $^{2}$J$_{\rm P-O_{3}-Si_{1}}$ & -17.37 & -17.07 & -17.12\\ $^{2}$J$_{\rm P-O_{2}-Si_{2}}$ & -16.16 & -16.18 & -16.26\\ $^{2}$J$_{\rm P-O_{5}-Si_{2}}$ & -1.30 & -1.20 & -1.17 \\ $^{2}$J$_{\rm P-O_{4}-Si_{3}}$ & -13.83 & -14.18 & -14.13\\ \end{tabular} \end{ruledtabular} \end{table} The results for the 2$\times$2$\times$1 cells are presented in comparison with experiment in Table \ref{tab:sipo}. \begin{table} \caption{\label{tab:sipo}Calculated NMR chemical shifts\cite{noteref} and J-coupling for silicophosphate Si$_{5}$O(PO$_{4}$)$_{6}$. The experimental values are in brackets and were taken from Ref.~\onlinecite{coelho07}.} \begin{ruledtabular} \begin{tabular}{llll} Coupling & $^{31}$P [ppm] & $^{29}$Si [ppm] & Calc. [Hz] \\ \hline ${J}^{2}_{\rm P-O_{3}-Si_{1}}$& -47.4 (-43.8) & -214.8 (-213.3) & -17.12 (15$\pm$2)\\ ${J}^{2}_{\rm P-O_{2}-Si_{2}}$& & -218.7 (-217.0) & -16.26 (14$\pm$2)\\ ${J}^{2}_{\rm P-O_{5}-Si_{2}}$& & -218.7 (-217.0) & -1.17 (4$\pm2$) \\ ${J}^{2}_{\rm P-O_{4}-Si_{3}}$& & -128.6 (-119.1) & -14.13 (12$\pm2$)\\ \end{tabular} \end{ruledtabular} \end{table} From Table \ref{tab:sipo} it is clear that the calculated J-couplings are in excellent agreement with experiment and fully reproduce the surprisingly large spread in the J-coupling values. Our calculations verify the novel experimental work and also identify the sign of the couplings which are not determined by the experimental spin-echo based approaches. The NMR chemical shifts are also in good agreement with experiment, particularly for $^{29}$Si. For both $^{29}$Si and $^{31}$P the difference between the calculated and experimental values is a very small fraction of the total shift range. We note that our assignment of the three Si sites in Si$_{5}$O(PO$_{4}$)$_{6}$ agrees with the assignment based on experimental intensities as discussed by Coelho {\it et al}\cite{coelho06}. \begin{table} \caption{\label{tab:sipo2}Decomposition for the J-coupling in Si$_{5}$O(PO$_{4}$)$_{6}$. D is the diamagnetic term, P is the paramagnetic term, SD is the spin-dipolar and FC is the Fermi-contact.} \begin{ruledtabular} \begin{tabular}{lccccc} Coupling & D & P & SD & FC & Total \\ \hline $^{2}$J$_{\rm P-O_{3}-Si_{1}}$ &-0.05 &-0.27 &-0.03 & -16.77 & -17.12\\ $^{2}$J$_{\rm P-O_{2}-Si_{2}}$ &-0.02 & -0.50 & -0.23 &-15.51 & -16.26\\ $^{2}$J$_{\rm P-O_{5}-Si_{2}}$ &-0.10 & -0.07 & 0.18 &-1.18 & -1.17\\ $^{2}$J$_{\rm P-O_{4}-Si_{3}}$ &-0.09 & -0.49 & 0.23 &-13.79 & -14.13\\ \end{tabular} \end{ruledtabular} \end{table} In Table~\ref{tab:sipo2} we present the decomposition of the silicophosphate J-coupling into their constituent terms. As with benzene, the Fermi-contact is found to be consistently the largest component while the diamagnetic and spin-dipolar contributions are very small. \section{Conclusions}\label{sec:conc} We have developed an all-electron approach for calculating NMR J-coupling constants using planewaves and pseudopotentials within DFT. Our method is applicable to both solution and solid state systems using supercell techniques. We have validated our theory against existing quantum chemical approaches and experiment for molecules. 
We have calculated the J-coupling between Si and P in a silicophosphate polymorph, for which we have determined the sign of the coupling. Given the recent experimental interest in J-coupling, we expect that our approach will prove useful in determining both the range and strength of coupling in systems not yet investigated and whether or not such couplings can feasibly be determined by experiment. By combining J-coupling calculations with computations of other NMR parameters, one now has a comprehensive set of computational tools to complement experimental understanding and design. \section{Acknowledgments} SAJ would like to acknowledge postdoctoral funding by TCM Group under Grant No. S61263/01 and Science Foundation Ireland. JRY thanks Corpus Christi College for a research fellowship. Computational facilities were provided by the Tyndall National Institute and the SFI/HEA Irish Centre for High-End Computing (ICHEC). We would also like to thank S.P. Brown for valuable discussions on J-coupling experiments.
\section*{Key points/Objectives} \begin{itemize} \item Theoretical foundation of density functional theory is reviewed. \item Relations to experimentally observable quantities are highlighted. \item Modern applications of density functional theory are introduced. \end{itemize} \section{Introduction} \label{sec_overview} The motion of the electrons in atoms, molecules, and solids is described by the Schr\"{o}dinger (or Dirac) equation. The computational complexity to obtain the exact solution grows exponentially with the number of electrons $N$. Obtaining the exact ground-state wave function is therefore prohibitively expensive when $N$ is more than a few dozen, and it has been a great challenge to develop powerful and accurate numerical methods for calculating electronic structures. There are mainly two numerical approaches. One is the wave function theory. It directly deals with the many-body wave function itself and attempts to find good approximations to the exact wave function. The other is density functional theory (DFT), for which we will give a brief review. By employing the electron density $\rho({\bf r})$ (a function of three coordinate variables) as the fundamental variable instead of the many-body wave function (a function of $3N$ coordinate variables), DFT has drastically reduced the computational cost. Therefore, DFT is the most widely used method for electronic structure calculations of solids. Here, we briefly review the fundamental and practical aspects of DFT. Sec.~\ref{sec:formalism} reviews the Hohenberg-Kohn theorem and the Kohn-Sham equation, which give the foundation of DFT. We also describe several extensions of DFT. In DFT, as described below, the exchange-correlation functional is one of the most fundamental quantities. Sec.~\ref{sec:XC} is devoted to a discussion of approximations to the exchange-correlation functional. Sec.~\ref{sec:electronic_structure} presents several topics related to electronic structure calculations. In Sec.~\ref{sec:lattice}, we show that DFT can also be used to calculate the properties of atomic vibrations. Sec.~\ref{sec:practical} provides a practical guide to DFT. Finally, we give a summary in Sec.~\ref{sec:summary}. \section{Basic formalism} \label{sec:formalism} We start from the full non-relativistic Hamiltonian, in which electrons and atomic nuclei interact with each other. Because the nuclei are much heavier than the electrons, the kinetic energies of the nuclei are usually much smaller than those of the electrons. Then, to a good approximation, we can treat electron and nuclear dynamics separately. DFT is a theory for dealing with the electronic part under this approximation. In this section, we first briefly discuss the Born-Oppenheimer approximation~\citep{Born_Oppenheimer}, which is the most widely used framework to derive separate equations for the electron and nuclear dynamics. Then, we discuss the Hohenberg-Kohn theorem~\citep{Hohenberg_Kohn} and the Kohn-Sham equation~\citep{Kohn_Sham}, which form the basis of DFT. \subsection{Born-Oppenheimer approximation} The Hamiltonian for interacting electrons and nuclei reads \begin{eqnarray}\label{Eq:H_all} \hat{\mathcal H} = -\sum_{I}\frac{\hbar^{2}}{2M_{I}}\frac{\partial^{2}}{\partial{\bf R}_{I}^{2}} -\sum_{i}\frac{\hbar^{2}}{2m} \frac{\partial^{2}}{\partial{\bf r}_{i}^{2}} +V\bigl(\{ {\bf r} \}, \{ {\bf R}\} \bigr), \end{eqnarray} where \begin{eqnarray} \hspace{-0.6cm} V(\{ {\bf r} \} , \{ {\bf R} \} ) = \sum_{i < j} \frac{e^2}{|{\bf r}_{i} \! - \!
{\bf r}_{j}|} - \sum_{i,I} \frac{Z_{I}e^{2}}{|{\bf r}_{i} \! - \! {\bf R}_{I}|} + \sum_{I < J} \frac{Z_{I}Z_{J} e^2}{|{\bf R}_{I} \! - \! {\bf R}_{J}|} \nonumber \end{eqnarray} with $\{ {\bf r} \}$ and $\{ {\bf R} \}$ being the sets of electron and nuclear coordinates, respectively, and $i$ and $I$ labelling electrons and nuclei. The Born-Oppenheimer approximation makes use of the large difference in mass between the nuclei and the electrons. Within this approximation, the electron and nuclear motions are treated separately, and the wave function of electrons and nuclei is given by a product of the electron part $\Psi_{\rm e}$ and the nuclear part $\Phi_{\rm n}$. In the following, we briefly describe the equations obtained from the Born-Oppenheimer approximation. If we neglect the nuclear kinetic energy, which is much smaller than that of the electrons, in the Hamiltonian, we obtain the electronic Schr\"{o}dinger equation: \begin{eqnarray} \label{Eq:BO_electron} \biggl [ - \sum_{i}\frac{\hbar^{2}}{2m} \frac{\partial^{2}}{\partial{\bf r}_{i}^{2}} + V \bigl(\{ {\bf r} \}, \{ {\bf R} \} \bigr) \biggr] \Psi_{\rm e} \bigl( \{ {\bf r} \} \ \! | \ \! \{ {\bf R} \} \bigr ) \nonumber \\ = E \bigl( \{ {\bf R} \} \bigr ) \Psi_{\rm e} \bigl (\{ {\bf r} \} \ \! | \ \! \{ {\bf R} \} \bigr ). \end{eqnarray} Here, the equation is solved for fixed nuclear positions. The nuclear dynamics with a smaller energy scale are treated separately from the electron dynamics. The Schr\"{o}dinger equation for the nuclear part is given by \begin{eqnarray} \label{Eq:lattice} \biggl[-\sum_{I}\frac{\hbar^{2}}{2M_{I}} \frac{\partial^{2}}{\partial{\bf R}_{I}^{2}} +E\bigl( \{ {\bf R} \} \bigr ) \biggr] \Phi_{\rm n} \bigl( \{ {\bf R} \} \bigr) = \varepsilon \Phi_{\rm n} \bigl (\{ {\bf R} \} \bigr), \end{eqnarray} where the total energy of the electron part $E\bigl( \{ {\bf R} \} \bigr )$ serves as a potential for the nuclear dynamics (Born-Oppenheimer energy surface). $\varepsilon$ is the total energy of the electron-nuclear coupled system. In the following, we explain the idea of DFT for solving the electronic part [Eq.~(\ref{Eq:BO_electron})]. We also discuss the nuclear dynamics in Sec.~\ref{sec:lattice}. \subsection{Hohenberg-Kohn theorem and Kohn-Sham equation} \label{sec_HK_KS} The electronic part of the Hamiltonian is given by \begin{eqnarray} \hspace{-0.5cm} \hat{\mathcal H}_{\rm e} = \sum_{i} \Biggl( - \frac{\hbar^{2}}{2m} \frac{\partial^{2}}{\partial{\bf r}_{i}^{2}} - \sum_{I} \frac{Z_{I}e^{2}}{|{\bf r}_{i}-{\bf R}_{I}|}\Biggr) + \sum_{i<j} \frac{e^2}{|{\bf r}_{i}-{\bf r}_{j}|}. \end{eqnarray} As mentioned in Sec.~\ref{sec_overview}, when the number of electrons is more than a few dozen, it is prohibitively demanding to obtain the exact ground-state wave function. In DFT, the electron density $\rho({\bf r})$ is used as the fundamental variable instead of the many-body wave function. The electron density is much more tractable than the many-body wave function because it depends on only three coordinate variables, regardless of the number of electrons. This approach has been justified by the Hohenberg-Kohn theorem. The theorem first states a one-to-one correspondence between the ground-state electron density and the external potential. Therefore, once the ground-state electron density is known, the external potential is determined uniquely. Then, in principle, physical properties associated with the ground-state wave function are unambiguously determined by the electron density.
In particular, the kinetic and electron-electron interaction energies of the ground state can be expressed as universal functionals of the electron density ($E_{\rm kin} [ \rho]$ and $E_{\rm ee} [\rho]$, respectively). The term ``universal'' indicates that the forms of the functionals do not depend explicitly on the external potential. The theorem also gives a variational principle. When we define the energy functional \begin{eqnarray} E_\varv[ \rho] = E_{\rm kin} [ \rho] + \int \! \rho({\bf r}) \varv({\bf r}) d{\bf r} + E_{\rm ee} [\rho] \end{eqnarray} for some external potential $\varv({\bf r})$, the functional satisfies the inequality \begin{eqnarray} \label{eq_variational} E_\varv[ \rho] \geq E_\varv[\rho_0] = E_0 , \end{eqnarray} where $\rho_0$ ($E_0$) is the ground-state electron density (energy) under the potential $\varv({\bf r})$. Therefore, the ground-state electron density can be obtained by searching for the electron density that minimizes the energy functional $E_\varv[ \rho]$. Although the variational principle in Eq.~(\ref{eq_variational}) looks simple, the major problem is that the exact forms of the functionals, $E_{\rm kin} [ \rho]$ and $E_{\rm ee} [\rho]$, are unknown. To address this problem, Kohn and Sham proposed the idea of introducing ``orbitals'' to approximate the kinetic energy functional $E_{\rm kin} [ \rho]$. This has opened up a way to perform DFT calculations with sufficient accuracy for practical use. Most modern DFT implementations therefore employ the Kohn-Sham scheme, and we discuss this scheme in more detail below. As for the direct variational approach (orbital-free DFT), which is less accurate than the Kohn-Sham approach but has the advantage of being much faster, the current research focus is to construct accurate kinetic energy functionals. One strategy is to reproduce the Kohn-Sham kinetic energy as accurately as possible. The Kohn-Sham scheme introduces an auxiliary non-interacting system that is designed to give the same electron density as that of the interacting system. The non-interacting system is described by a single-particle Schr\"{o}dinger equation (the so-called Kohn-Sham equation): \begin{eqnarray} \label{eq:KS2} \left[ - \frac{\hbar^{2}}{2m} \frac{\partial^{2}}{\partial{\bf r}^{2}} + \varv_{\rm eff} ({\bf r }) \right] \phi_i ({\bf r}) = \varepsilon_{i} \phi_i ({\bf r}), \end{eqnarray} where $\varv_{\rm eff} ({\bf r })$ is an effective potential which is determined by Eq.~(\ref{eq:veff}), $\phi_i$ is a Kohn-Sham state, and $\varepsilon_{i}$ is the Kohn-Sham energy eigenvalue. The electron density is given by \begin{eqnarray} \label{eq:density} \rho({\bf r}) = \sum_{i=1}^{\rm occ.} | \phi_i({\bf r}) | ^2, \end{eqnarray} where the summation runs over occupied Kohn-Sham states. In the Kohn-Sham scheme, to reproduce the true electron density with Eq.~(\ref{eq:density}), the effective potential $\varv_{\rm eff}({\bf r})$ is determined as follows: We first introduce the kinetic energy functional \begin{eqnarray} E_{\rm kin}^{\rm KS}[\rho] = \sum_{i=1}^{\rm occ.} \int \phi^*_i ({\bf r}) \left( - \frac{\hbar^{2}}{2m} \frac{\partial^{2}}{\partial{\bf r}^{2}} \right) \phi_i ({\bf r})d {\bf r} \end{eqnarray} and the Hartree energy functional \begin{eqnarray} E_{\rm H} [\rho] = \frac{e^2}{2} \int \! \! \! \int \frac{\rho ({\bf r}) \rho ({\bf r'})} {| {\bf r} - {\bf r}'| } d {\bf r} d{\bf r'} \end{eqnarray} of the Kohn-Sham system.
Next, we introduce the so-called exchange-correlation functional, defined as the sum of the differences between the exact kinetic and electron-electron interaction energies of the original interacting system and their Kohn-Sham counterparts: \begin{eqnarray} E_{\rm xc} [\rho] = \bigl ( E_{\rm kin} [\rho] - E_{\rm kin}^{\rm KS} [\rho] \bigr) + \bigl( E_{\rm ee} [\rho] - E_{\rm H} [\rho] \bigr). \end{eqnarray} Then, one can show that $\varv_{\rm eff}({\bf r})$ should be given by \begin{eqnarray} \label{eq:veff} \varv_{\rm eff} ({\bf r} ) \!\! &=& \!\! \varv({\bf r}) + e^2 \! \int \! \frac{ \rho ({\bf r'})} {| {\bf r} - {\bf r}'| } d{\bf r'} + \varv_{\rm xc} (\bf r), \\ \varv_{\rm xc} (\bf r) \!\! &=& \!\! \frac{\delta E_{\rm xc} [\rho] } {\delta \rho ({\bf r})}. \end{eqnarray} One can see that Eqs. (\ref{eq:KS2}), (\ref{eq:density}), and (\ref{eq:veff}) form a set of self-consistent equations. Therefore, in practical DFT calculations based on the Kohn-Sham scheme, the self-consistent equations are solved iteratively with some approximate form of the exchange-correlation functional (see Sec.~\ref{sec:XC} for details). \subsection{Extensions} \label{sec:extension} The ground state is sometimes not sufficiently characterized by the charge density distribution alone. This occurs when additional quantities, generally represented by the 1-reduced density matrix \begin{eqnarray} \rho^{(1)}_{\sigma\sigma'}({\bf r}, {\bf r}') = \langle \hat{\psi}^{\dagger}_{\sigma}({\bf r}) \hat{\psi}_{\sigma'}({\bf r}') \rangle , \end{eqnarray} are involved. $\langle \hat{\mathcal O} \rangle$ denotes the expectation value of the operator $\hat{\mathcal O}$. When the system is subjected to external fields or exhibits spontaneous symmetry breaking, its nontrivial components, such as the spin density ${\bf m}({\bf r})=\sum_{\sigma\sigma'}{\bm \sigma}_{\sigma \sigma'}\rho^{(1)}_{\sigma\sigma'}({\bf r}, {\bf r}) \ \ [{\bm \sigma}=(\sigma_{x}, \sigma_{y}, \sigma_{z})]$, may become nonzero. The framework of DFT can be extended to such situations. Namely, we can introduce $\rho^{(1)}_{\sigma\sigma'}({\bf r}, {\bf r}')$ as fundamental variables in addition to the charge density distribution $\rho({\bf r})$. An analogue of the Hohenberg-Kohn theorem holds: The ground state is unambiguously characterized by $\rho({\bf r})$ in combination with the relevant components of $\rho^{(1)}_{\sigma\sigma'}({\bf r}, {\bf r}')$. A Kohn-Sham equation that reproduces the $\rho$ and $\rho^{(1)}$ of the interacting system can be constructed with additional exchange-correlation potentials $\varv_{\rm xc}^{\sigma\sigma'}({\bf r},{\bf r}')=\delta E_{\rm xc}[\rho, \rho^{(1)}]/\delta \rho^{(1)}_{\sigma\sigma'}({\bf r}, {\bf r}')$. Such an extension was first accomplished for the spin density~\citep{vonBarth-Hedin1972}. Calculations based on spin DFT are now widely used to describe the electron spin distribution of materials. More generally, the 1-reduced density matrix functional theory has also been developed, which treats all the $\rho^{(1)}_{\sigma\sigma'}({\bf r}, {\bf r}')$ components as additional variables. The DFT framework has been further extended to degrees of freedom other than the electronic charge and spin. Here we list some representative examples. In the current density functional theory, the current density ${\bm j}({\bf r})=\sum_{\sigma}\left\langle\frac{-e\hbar}{2mi}\left[\hat{\psi}^{\dagger}_{\sigma}({\bf r})\nabla\hat{\psi}_{\sigma}({\bf r})-[\nabla\hat{\psi}^{\dagger}_{\sigma}({\bf r})]\hat{\psi}_{\sigma}({\bf r})\right]\right\rangle$ is treated as a variable.
Inclusion of the off-diagonal order variable $\chi_{\sigma\sigma'}({\bf r},{\bf r}')=\langle\hat{\psi}_{\sigma}({\bf r})\hat{\psi}_{\sigma'}({\bf r}')\rangle$ has been addressed to deal with superconductivity. It is even possible to invent a DFT with a distribution of ionic sites $\Gamma({\bf R}_{1}, {\bf R}_{2}, \dots)$ (treated as points in the Born-Oppenheimer approximation) as an additional variable, which is named multicomponent DFT. Interestingly, by combined use of $\chi_{\sigma\sigma'}({\bf r},{\bf r}')$ and $\Gamma({\bf R}_{1}, {\bf R}_{2}, \dots)$ and extension to nonzero temperature, the phonon-mediated superconductivity is consistently described within the DFT framework, which is called DFT for superconductors. The concept of introducing a non-interacting reference system that reproduces the charge density of the interacting system can also be extended to the time-dependent problem. Suppose the system is described by the time-dependent Schrodinger equation \begin{eqnarray} i\hbar\frac{\partial}{\partial t}|\Psi_{\rm e}(t)\rangle =\hat{\mathcal{H}}_{\rm e}(t)|\Psi_{\rm e}(t)\rangle \end{eqnarray} with $\hat{\mathcal{H}}_{\rm e}(t)$ including a time-dependent external field $\varv({\bf r}, t)$. The Runge-Gross theorem states that, with a given initial state $|\Psi_{\rm e}(t_{0})\rangle$, a one-to-one correspondence between the time-dependent charge density $\rho({\bf r}, t)$ and external potential $\varv({\bf r}, t)$ holds \citep{Runge_Gross}. With this theorem, the time-dependent density functional theory has been established. Here, one can recast the original system to a non-interacting system \begin{eqnarray} i\hbar\frac{\partial}{\partial t}\phi_{i}({\bf r}, t) = \left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial {\bf r}^2}+\varv_{\rm eff}({\bf r}, t)\right]\phi_{i}({\bf r}, t). \end{eqnarray} This time-dependent Kohn-Sham equation has been used to simulate ionization and excitation dynamics of atoms, molecules, and solids. For further reading of the present topics, see, e.g., \citet{Parr_Yang}, \citet{Engel_Dreizler}, \citet{SCDFTI}, and \cite{SCDFTII}. \section{Approximate exchange-correlation functionals} \label{sec:XC} Several exact formulas of the exchange-correlation energy $E_{\rm xc}$ are actually known. But such formulas are not useful because they involve the exact solution of the many-body Schrodinger equation. Practically, we design calculable approximate forms as explicit functionals of $\rho$ so that they comply with correct asymptotes, and apply them to general systems. Here we look over standard approximations in use. \subsection{LDA, GGA, and orbital dependent functionals} The concept of gradient expansion gives us a useful strategy for developing the approximate functional forms. In extended systems like periodic solids, nearly uniform itinerant electrons are expected to dominate the electronic properties, and its spatial variation would be treated as a perturbation. Along this line, the accuracy of the approximate forms may be improved systematically. The exchange-correlation energy density $\varepsilon_{\rm xc}$ is introduced by the following decomposition \begin{eqnarray} E_{\rm xc}=\int \! d{\bf r} \ \! \rho({\bf r})\varepsilon_{\rm xc}({\bf r}). \end{eqnarray} Here, $\varepsilon_{\rm xc}$ is in principle a functional of the entire distribution of $\rho({\bf r})$. 
A systematic approximation is implemented by expressing the spatial dependence of $\rho$ in terms of the local values (semilocal approximation) \begin{eqnarray} \varepsilon_{\rm xc}([\rho]) =\varepsilon_{\rm xc}(\rho({\bf r}), \nabla\rho({\bf r}), \dots). \end{eqnarray} The first approximate form is the local density approximation (LDA) \begin{eqnarray} \varepsilon_{\rm xc}([\rho]) \simeq\varepsilon_{\rm xc}(\rho({\bf r})). \label{eq:LDA} \end{eqnarray} This form becomes exact in the uniform electron gas, where $\rho({\bf r})$ is constant in space. Useful approximate models within the LDA can be derived from accurate calculations for uniform electron gas by sophisticated wavefunction methods such as the diffusion Monte Carlo method. The parameters in the models are determined by fitting the reference numerical data of $\varepsilon_{\rm xc}(\rho_{\rm uni})$ as functions of the uniform density $\rho_{\rm uni}$. Such models, being accurate for the uniform systems, are applied to non-uniform systems with Eq.~(\ref{eq:LDA}). The approximation can be improved by taking into account the higher order derivatives. The generalized gradient approximation (GGA) incorporates the first-order derivative of $\rho$ \begin{eqnarray} \varepsilon_{\rm xc}([\rho]) \simeq\varepsilon_{\rm xc}(\rho({\bf r}),\nabla\rho({\bf r})). \label{eq:GGA} \end{eqnarray} The meta-GGA incorporates the Laplacian of $\rho$, as well as kinetic energy density $\tau({\bf r})=\sum_{i}|\nabla\phi_{i}({\bf r})|^2$ \begin{eqnarray} \varepsilon_{\rm xc}([\rho]) \simeq\varepsilon_{\rm xc}(\rho({\bf r}),\nabla\rho({\bf r}),\Delta\rho({\bf r}),\tau({\bf r})). \label{eq:metaGGA} \end{eqnarray} A common strategy for implementing the practical forms for those approximations is as follows: (i) derive asymptotic behavior of the exact functional in extreme cases, (ii) design an analytic model so that those asymptotes are reproduced, and (iii) determine the remaining model parameters, referring to some ``norm" systems or any principle like maximal smoothness. Adding the higher-order gradients may not be efficient for incorporating quantum effects such as the Pauli exclusion and dynamical/static correlations. To describe such effects, the Kohn-Sham orbitals, which are also implicit functionals of $\rho$, are explicitly included as variables. For the former, the exact exchange term (EXX) \begin{eqnarray} \hspace{-0.6cm} E^{\rm EXX} \! =-\frac{e^2}{2} \sum_{ij}^{\rm occ.} \! \int \! \! \! \int \! d{\bf r}d{\bf r}' \phi^{\ast}_{i}({\bf r})\phi^{\ast}_{j}({\bf r}') \frac{1}{|{\bf r} \! - \! {\bf r}'|}\phi_{i}({\bf r}')\phi_{j}({\bf r}) \nonumber \end{eqnarray} is added to $E_{\rm xc}$ instead of the exchange part of the functional. The magnitude of the $E^{\rm EXX}$ term is tuned by a prefactor and/or cutoff for the Coulomb potential, considering its partial cancellation with the correlation effects. Functionals in this form are called hybrid functionals. For including the dynamical and static correlation effects, a formally exact adiabatic continuation formula~\citep{Langreth-Perdew1975} provides a useful basis \begin{eqnarray} \! \! \! E_{\rm c} \! \! \! &=& \! \! \! - \frac{e^2}{2} \int_{0}^{1} d\lambda \int\frac{d\omega}{2\pi} \nonumber \\ && \times \! \int \! \! \! \int \! d{\bf r}d{\bf r}' \ \frac{\chi_{\lambda}({\bf r}',{\bf r}, i\omega)-\chi_{0}({\bf r}',{\bf r}, i\omega)}{|{\bf r}-{\bf r}'|}. 
\end{eqnarray} Here, $\chi_{\lambda}$ denotes the density-density response function of the ground state with the electron-electron Coulomb interaction scaled by a factor $\lambda$. $\chi_{0}$ is the Kohn-Sham response function. The $\lambda$ integral is taken with the density $\rho$ fixed. Applying approximations to $\chi_{\lambda}({\bf r}',{\bf r}, i\omega)$, we can obtain approximate $E_{\rm c}$ formulas. For example, the random phase approximation is widely used to approximate $\chi_{\lambda}$, and the corresponding $E_c$ successfully describes the van der Waals effect, a representative correlation effect involving unoccupied orbitals. For a more thorough review on the LDA, GGA, and orbital-dependent functionals, see, e.g., \citet{Engel_Dreizler} and \citet{Kummel-Kronik}. \subsection{Notes on constructing approximate exchange-correlation functionals} As explained above, analytically derived or exactly calculated asymptotic behavior of $E_{\rm xc}[\rho]$ in the $\rho$ space is utilized to specify the form of the approximate functionals. Still, there remains an ambiguity for general non-uniform $\rho$, where the exact references are unavailable. Full implementation of the approximate forms finally requires heuristic modeling with tunable parameters. One approach is to make the model for $E_{\rm xc}[\rho]$ as simple as possible. For example, in constructing the GGA-PBE (Perdew-Burke-Ernzerhof) functional~\citep{GGAPBE}, a simple smooth form has been conceived, where the parameters have been unambiguously determined so that it converges to the asymptotic formulas in the limits. A more empirical approach is to refer to data of specific systems for determining the model parameters. Experimentally observed physical quantities or those calculated with accurate wave function methods are often used for this. For example, the prefactor for $E^{\rm EXX}$ is tuned so that accurately calculated cohesive energies of several molecules are optimally reproduced. See, e.g., \citet{Head-Gordon} for a thorough review on a variety of functionals. An attempt has been recently made to remove the ambiguity stemming from the human construction of the model form. Namely, one adopts an extremely flexible model with a huge number of parameters. The model is by design capable of representing {\it any} functionals in the infinite parameter limit, so that it can mimic the ideal energy functional $E[\rho]$ as a mapping from $\rho$ to a scalar $E$. Parameters are tuned with a large amount of reference data. Modern machine learning models and parameter tuning methods are used to implement this scheme [see, e.g., \citet{Marques2019} for a review]. \section{Electronic structure} \label{sec:electronic_structure} DFT unambiguously defines the quantities related to the charge and spin densities and total energy; magnetic moment, lattice constants, bulk modulus, etc. Relating DFT to electronic single-particle properties is a more complicated issue. This is mainly due to the fact that the occupation number and Kohn-Sham orbital are auxiliary concepts (see Sec.~\ref{sec_HK_KS}) and not directly related to real single particle excitations. Nevertheless, the DFT calculations are widely in practical use for quantitative comparison with experimentally observed excitation spectra. The DFT results (Kohn-Sham wave functions, Kohn-Sham energies, etc.) can also be utilized as a basis for other methods such as the Green's function theory. In this section, we review those aspects. 
\subsection{Single-particle spectrum} Single-particle excitation can be observed with spectroscopy experiments. Among the single-particle quantities, the fundamental gap, which is the minimum energy associated with adding and removing one electron from the system at zero temperature, is a key quantity that distinguishes insulators from metals in solids. Here we discuss the discrepancy between the apparent gap in the Kohn-Sham spectrum and the experimentally observed gap (the fundamental gap), to call attention to the interpretation of the band structures obtained within the Kohn-Sham scheme. Within DFT, the level difference between ground states with different numbers of electrons is well defined. Then, the fundamental gap for an $N$ electron system is given by \begin{eqnarray} \hspace{-0.5 cm }E_{\rm gap}(N) \! \! \! &=& \! \! \! \left ( E(N \! -\! 1)-E(N) \right ) - \left ( E(N)-E(N\!+\!1) \right ) \nonumber \\ &=& \! \! \! I(N) - A(N). \label{eq:fundamental_gap} \end{eqnarray} Here, $I$ and $A$ are the ionization energy and the electron affinity, respectively. In this section, we indicate the dependence on the total number of electrons explicitly for clarity, e.g., $E(N)$ is the total energy of the $N$ electron system. The expression of Eq.~(\ref{eq:fundamental_gap}) is, however, not always practical because we cannot always perform calculations with $N\pm 1$ electrons: for example, in bulk systems the total $N$ is taken to infinity. To adapt DFT to such cases, the extension to systems with fractional electron number $N \! +\! \delta$ $(0 \! <\! \delta \! < \! 1)$ has been beneficial. There, the chemical potential $\mu$ is introduced, and the occupation number $\{n_{i}\}$ is extended to allow fractional values. The state can be a mixed state with fractional weight, formed by the pure states of integer electron numbers. The infinitesimal variation of electron numbers can be defined with this formalism. From this, Janak's theorem~\citep{Janak} is derived, which relates the change in total energy and the occupation numbers as \begin{eqnarray} \delta E = \sum_{i}\delta n_{i}\varepsilon_{i}. \label{eq:Janak} \end{eqnarray} This theorem is sometimes cited as giving the Kohn-Sham orbitals a physical meaning. However, one has to recall that $n_{i}$ and $\varepsilon_{i}$ are artificial variables. Thus, the operation ``changing the number of electrons in state $i$'' is not directly related to adding/subtracting actual electrons. Yet the frontier orbital has physical significance. Within the extended DFT, in the $N+\delta$ electron ground state, only one orbital $\phi_{\rm f}$, which is called the frontier orbital, can have a fractional occupation number ($n_{i}=1 \ \forall \varepsilon_{i}\!<\!\varepsilon_{\rm f};\ n_{i}=0 \ \forall \varepsilon_{i}\!>\!\varepsilon_{\rm f}$). Integrating Eq.~(\ref{eq:Janak}) for $i={\rm f}$ from the $N$ to $N\pm 1$ electron states, we obtain: \begin{eqnarray} I \! \! \! &=& \! \! \! \int_{N}^{N-1} dN \varepsilon_{\rm f}(N), \\ -A \! \! \! &=& \! \! \! \int_{N}^{N+1} dN \varepsilon_{\rm f}(N). \label{eq:I-A-exact} \end{eqnarray} Another important consequence of the extended DFT is that the exact $N$ dependence of the total energy is linear between integer $N$'s. The fact $I=E(N-1)-E(N)\neq E(N)-E(N+1)=A$ then requires that the derivative of the total energy is discontinuous at the integer points. Therefore, the curve of $E(N)$ appears as a polygonal line with nodes at the integer points~\citep{Perdew-straight}. 
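As a minimal numerical illustration of Eq.~(\ref{eq:fundamental_gap}) and of the piecewise-linear $E(N)$ just described, the following Python sketch computes $I$, $A$, and the fundamental gap from three total energies ($\Delta$SCF style); the energy values are purely illustrative and not taken from any real calculation.
\begin{verbatim}
import numpy as np

# Illustrative total energies (hartree) for N-1, N, N+1 electrons;
# in practice these come from three self-consistent DFT runs.
E_Nm1, E_N, E_Np1 = -74.70, -75.00, -75.10

I = E_Nm1 - E_N          # ionization energy,  I(N) = E(N-1) - E(N)
A = E_N - E_Np1          # electron affinity,  A(N) = E(N) - E(N+1)
E_gap = I - A            # fundamental gap, Eq. (fundamental_gap)
print(f"I = {I:.3f}, A = {A:.3f}, E_gap = {E_gap:.3f}")

# The exact E(N) is piecewise linear between integers: interpolate the three
# reference points; the slope (minus the frontier eigenvalue) jumps at N = 10.
def E_exact(n):
    return np.interp(n, [9.0, 10.0, 11.0], [E_Nm1, E_N, E_Np1])

slope_below = (E_exact(10.0) - E_exact(9.9)) / 0.1   # equals -I
slope_above = (E_exact(10.1) - E_exact(10.0)) / 0.1  # equals -A
print("derivative gap:", slope_above - slope_below)  # reproduces E_gap
\end{verbatim}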
With this fact, the derivative gap \begin{eqnarray} E_{\rm gap}^{\rm deriv} = \left.\frac{\partial E(N)}{\partial N}\right|_{N+\delta} - \left.\frac{\partial E(N)}{\partial N}\right|_{N-\delta} \end{eqnarray} is equated to the fundamental gap. Furthermore, $E_{\rm gap}^{\rm deriv}$ satisfies the following equation \begin{eqnarray} E_{\rm gap}^{\rm deriv} =E_{\rm gap}^{\rm KS}+\Delta_{\rm xc} , \label{eq:deriv-KS} \end{eqnarray} where $E_{\rm gap}^{\rm KS}$ is the gap between the lowest unoccupied and highest occupied Kohn-Sham states (Kohn-Sham gap) and $\Delta_{\rm xc}$ is a spatially constant term, the so-called derivative discontinuity $\left.\frac{\partial E_{\rm xc}(N)}{\partial N}\right|_{N+\delta} - \left.\frac{\partial E_{\rm xc}(N)}{\partial N}\right|_{N-\delta}$~\citep{Perdew-discont}. This equality enables us to relate the observed gap to quantities defined in the $N$-electron system. When we use approximate exchange-correlation functionals, $E_{\rm gap}^{\rm deriv}$ in Eq.~(\ref{eq:deriv-KS}) calculated with DFT deviates from the true fundamental gap $E_{\rm gap}$. Then, the fundamental gap is expressed as \begin{eqnarray} E_{\rm gap} = E_{\rm gap}^{\rm KS}+\Delta_{\rm xc} + \Delta^{\rm straight}. \label{eq:gap-approx} \end{eqnarray} The final term $\Delta^{\rm straight}$ is a correction term, corresponding to the deviation of the approximate $E(N)$ from the ideal segmented straight lines. The formula Eq.~(\ref{eq:gap-approx}) implies some interesting facts about the Kohn-Sham gap $E_{\rm gap}^{\rm KS}$. First, even with the exact functional, it does not correspond to the fundamental gap because of the nonzero $\Delta_{\rm xc}$. Second, with standard functionals such as those within the LDA and GGA, it departs from the fundamental gap because $\Delta_{\rm xc}$ is incorrectly zero and $\Delta^{\rm straight}$ is nonzero. Recently, functional design that also takes Eq.~(\ref{eq:gap-approx}) into account has been actively pursued. For example, minimization of $\Delta^{\rm straight}$ has been used as a criterion for tuning some modern functionals, which improves the accuracy for electronic properties. The generalized Kohn-Sham scheme, which includes orbital-dependent functionals like EXX, helps us to incorporate a major fraction of $\Delta_{\rm xc}$ into $E_{\rm gap}^{\rm KS}$, so that the Kohn-Sham gap better agrees with the experimental fundamental gap. See, e.g., \citet{Parr_Yang}, \citet{MoriSanchez2008}, and \citet{Burke} for further reading on the topics in this section. \subsection{Response function} The general fluctuation-dissipation relation states that the electronic response to external perturbations is related to a ground-state property. In fact, the response function is formulated exactly within the framework of DFT \citep{Hybertsen-Louie}. Suppose the system is perturbed by a change of the external potential $\delta \varv({\bf r})$ and the charge density distribution changes by $\delta\rho({\bf r})$. The density-density response function $\chi({\bf r}, {\bf r}')$ is defined as the linear coefficient relating them: \begin{eqnarray} \delta\rho({\bf r}) = \int d{\bf r}' \chi({\bf r}, {\bf r}')\delta \varv({\bf r}'). \end{eqnarray} The response function $\chi$ is given by \begin{eqnarray} \chi = \left[ 1-\chi_{0}\left(\frac{e^2}{|{\bf r}-{\bf r}'|} +K_{\rm xc}\right) \right]^{-1}\chi_{0} \end{eqnarray} with $K_{\rm xc}({\bf r}, {\bf r}')\equiv \delta^2 E_{\rm xc}/[\delta\rho({\bf r})\delta\rho({\bf r}')]$. 
Here, the quantities are matrices with indices ${\bf r}$ and ${\bf r}'$. The Kohn-Sham response function $\chi_{0}$ is defined by \begin{eqnarray} \delta \rho({\bf r})= \int d{\bf r}'\chi_{0}({\bf r},{\bf r}')\delta \varv_{\rm eff}({\bf r}'). \end{eqnarray} Applying perturbation theory to the Kohn-Sham equation, we can write $\chi_{0}$ in terms of the Kohn-Sham eigenpairs ($\varepsilon_{i}$, $\phi_{i}$). Note that those expressions are rigorous and well-defined within DFT, though all the ambiguities are condensed in $E_{\rm xc}$. Using time-dependent DFT (Sec.~\ref{sec:extension}), the time-dependent response function can also be formulated. \subsection{Kohn-Sham states in practice} DFT is in principle independent of the Green's function theory since the former does not define the electron one-particle creation/annihilation operator. Nevertheless, it is common practice in the literature to use the Kohn-Sham eigenstates as a basis for calculating quantities formulated in the Green's function theory, with remarkable success in comparisons with experiments. Calculations of this kind may be justified by the fact that the Green's function theory can be made basis-free when the perturbation effects are incorporated up to infinite order \citep{Hedin1965}. The Kohn-Sham states, which are introduced as auxiliary quantities (see Sec.~\ref{sec_HK_KS}), are then presumed to be a good zeroth-order basis, which enables relatively fast convergence with respect to the order of the perturbation. The combination of DFT and the Green's function theory has yielded tremendous success in describing the single-particle excitations of materials. The spectral function defined in the Green's function theory is directly measured by, e.g., angle-resolved photoemission spectroscopy (ARPES). Although the agreement between the Kohn-Sham and experimental spectra is not theoretically guaranteed, a number of studies have conducted comparisons between them. A classic application to copper is displayed in Fig.~\ref{Fig_elband}. Recently, with the improvement in experimental precision, such comparisons have played an increasingly essential role in analyzing the detailed electronic structure of materials: here, we also show an example for monolayer MoS$_2$ in Fig.~\ref{Fig_elband2}. These theoretical spectra should be interpreted as results of the Green's function theory using the Kohn-Sham states as the zeroth-order basis. The Kohn-Sham calculation is known to yield reasonable spectra in weakly correlated systems such as dense metals. However, the agreement with experiments becomes worse for insulators (especially their band gap) and dilute metals, where the exchange-correlation effects are stronger. Even in such cases, the accuracy can be efficiently improved by introducing a partially screened exchange term into the functional or by calculating the self-energy with perturbation theory in the Kohn-Sham eigenbasis. The GW approximation [see, e.g., \citet{Aryasetiawan_Gunnarsson} for a review] serves as a standard method for calculating the self-energy: it treats the long-range exchange-correlation effects efficiently under weak to strong Coulomb interaction. \begin{figure}[tbp] \vspace{0cm} \begin{center} \includegraphics[width=0.45\textwidth]{bandstructure.eps} \caption{Electronic band structure of copper calculated by the Kohn-Sham equation with a primary LDA functional. The points are from photoemission experiments. Reproduced from \citet{Courths_Hufner}, with permission from Elsevier. 
} \label{Fig_elband} \end{center} \end{figure} \begin{figure}[tbp] \vspace{0cm} \begin{center} \includegraphics[width=0.45\textwidth]{bandstructure2.eps} \caption{ Intensity map of the photoemission spectrum of exfoliated monolayer MoS$_{2}$ measured with micrometer-scale ARPES. The GGA-PBE Kohn-Sham band structure including spin-orbit interaction \citep{Z_Zhu_2011} is indicated by the red curves. Reproduced with permission from \citet{Jin_2013}. Copyright (2013) by the American Physical Society. } \label{Fig_elband2} \end{center} \end{figure} When the correlation effects become even stronger, the one-electron description may break down. Such a situation occurs in so-called strongly-correlated materials, where the energy scales of the interaction and kinetic energies compete. Typical strongly-correlated materials are, for example, transition-metal oxides with partially-filled $d$-electron shells and heavy-fermion materials with partially-filled $f$-electron shells. The bandwidth $W$ of partially-filled orbitals in these materials is typically narrow, and becomes comparable to or even smaller than the onsite Coulomb interaction $U$. For such systems, the band theory based on DFT becomes inaccurate, and DFT often fails to describe the spectral properties around the Fermi level. For example, when we regard the Kohn-Sham energy eigenvalues as the poles of the spectral function, DFT cannot reproduce the charge gap opening in Mott insulators, where electrons are localized due to the strong Coulomb repulsion. To incorporate such many-body effects, a combination of DFT and many-body methods, such as the dynamical mean-field theory and the variational Monte Carlo method, has been developed. For more details of the conceptual and practical aspects of the combination, see, e.g., \citet{Kotliar_2006} and \citet{Imada_2010}. \section{Nuclear dynamics} \label{sec:lattice} DFT is also useful in investigating nuclear dynamics. This is because the total energy of the electronic system plays the role of the potential for the atomic vibrations [Eq.~(\ref{Eq:lattice})]. Here, we discuss the expressions for the forces acting on nuclei and the interatomic force constants, which are used in structure optimization and phonon dispersion calculations, respectively. \subsection{Forces acting on nuclei} The force acting on the $I$th nucleus ${\bf F}_I$ is given by the derivative of the energy surface \begin{eqnarray} {\bf F}_{I} = -\frac{\partial E \bigl( \{ {\bf R} \} \bigr )}{\partial {\bf R}_{I}}. \end{eqnarray} The forces on the nuclei vanish (${\bf F}_{I} = {\bf 0}$) at the equilibrium geometry. Therefore, one can perform structure optimization by minimizing the forces on the nuclei. In practice, the forces (first derivative of the energy surface) can be estimated efficiently using the Hellmann-Feynman theorem [\citet{Hellmann}, \citet{Feynman}]. \subsection{Interatomic force constants} Here, we show how the normal vibrational modes are derived. We consider the displacement of the $I$th nucleus ${\bf u}_I$ from the equilibrium position ${\bf R}_{I}^{(0)}$. The position of the $I$th nucleus reads \begin{eqnarray} {\bf R}_{I}={\bf R}_{I}^{(0)}+{\bf u}_{I}. \end{eqnarray} Then, the kinetic energy $T$ for the atomic vibration is given by \begin{eqnarray}\label{Eq:T_kin} T = \frac{1}{2}\sum_{I }M_{I}|\dot{{\bf u}}_{I}|^{2} = \frac{1}{2}\sum_{I \alpha}M_{I}(\dot{ u}_{I}^{\alpha})^{2}. \end{eqnarray} Here, $\alpha$ is the Cartesian component of the displacement ($\alpha = x,y,z$). 
For the potential energy $U$, we apply the harmonic approximation: \begin{eqnarray} \label{Eq:U_pot} U &=& E\bigl( \{ {\bf R}^{(0)} + {\bf u} \} \bigr)-E\bigl( \{ {\bf R}^{(0)} \} \bigr) \\ &\simeq& \frac{1}{2} \sum_{I \alpha} \sum_{I^{\prime} \alpha^{\prime}} \frac{\partial^{2} E \bigl( \{ {\bf R} \} \bigr ) }{\partial R_{I}^{\alpha} \partial R_{I^{\prime}}^{\alpha^{\prime}}} u_{I}^{\alpha}u_{I^{\prime}}^{\alpha^{\prime}}. \end{eqnarray} Note that the first-order terms with respect to the displacement vanish at the equilibrium geometry. From the expressions for $U$ and $T$, one can derive the secular equation, which determines the frequency and displacement pattern of the normal vibrational modes: \begin{eqnarray} \label{Eq:motion_R} \sum_{I^{\prime} \alpha^{\prime}} \left( C_{I, I^{\prime}}^{\alpha \alpha^{\prime}} - M_I \omega^2 \delta_{I I^{\prime}} \delta_{\alpha \alpha^{\prime}} \right) u_{I^{\prime}}^{\alpha^{\prime}} = 0 , \end{eqnarray} where $C$ is the matrix of so-called interatomic force constants \begin{eqnarray} C_{I, I^{\prime} }^{\alpha \alpha^{\prime}}= \frac{\partial^{2} E \bigl( \{ {\bf R} \} \bigr ) }{\partial R_{I}^{\alpha} \partial R_{I^{\prime}}^{\alpha^{\prime}}}. \end{eqnarray} The interatomic force constants play the role of ``spring constants'' of the ``springs'' between nuclei. In crystals, Eq.~(\ref{Eq:motion_R}) becomes block diagonal in momentum space, and the normal modes are labelled by wave vectors. Then, the frequencies of the normal modes give the phonon dispersion in solids. In practical calculations, there are mainly two approaches for estimating the interatomic force constants. One is a direct method, called the frozen-phonon method, in which forces are computed for finite displacement amplitudes and the interatomic force constants are estimated from finite differences of the forces. The other method relies on the density-functional perturbation theory (DFPT)~\citep{Baroni_2001}, which shows that the interatomic force constants can be computed from the ground-state electron density and its linear response to the atomic displacements. Therefore, the frozen-phonon method performs supercell calculations with finite atomic displacements, whereas DFPT performs linear-response calculations with respect to the displacements. As an example of phonon calculations, we show, in Fig.~\ref{Fig_phonon}, the phonon dispersions of the elemental semiconductors Si and Ge, calculated using DFPT~\citep{Giannozzi_1991}. The DFPT results show a good agreement with experiments. \begin{figure}[tbp] \vspace{0cm} \begin{center} \includegraphics[width=0.48\textwidth]{phonon.eps} \caption{ Phonon dispersions and densities of states of silicon and germanium solids. The diamonds are experimental data. Reproduced with permission from \citet{Giannozzi_1991}. Copyright (1991) by the American Physical Society. } \label{Fig_phonon} \end{center} \end{figure} \subsection{Recent topics} Here, among many recent topics, we introduce two advances related to phonon properties: anharmonicity and structure prediction. Anharmonicity is the deviation of the atomic vibrations from harmonic behavior, which arises from the higher-order terms of the expansion of the energy with respect to atomic displacements [Eq.~(\ref{Eq:U_pot})]. The anharmonic terms are crucial in describing thermal properties of atomic vibrations, such as thermal expansion, thermal conductivity, and thermodynamic stability. Optimizing these properties leads to functional materials. 
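As a minimal numerical illustration of the expansion in Eq.~(\ref{Eq:U_pot}) and of the frozen-phonon idea, the following sketch extracts a harmonic and a cubic (anharmonic) force constant from energies at displaced geometries by finite differences; the one-dimensional model potential, its coefficients, and the displacement step are purely illustrative.
\begin{verbatim}
import numpy as np

# Illustrative 1D model of the energy versus a single displacement u;
# in a real frozen-phonon calculation E(u) would come from DFT total
# energies of supercells with one atom displaced by u.
def E(u, k2=12.0, k3=-30.0):
    return 0.5 * k2 * u**2 + (k3 / 6.0) * u**3   # harmonic + cubic term

h = 1e-3  # finite displacement step

# Second derivative (harmonic force constant) by central differences.
C2 = (E(h) - 2.0 * E(0.0) + E(-h)) / h**2

# Third derivative (leading anharmonic coefficient) by central differences.
C3 = (E(2*h) - 2.0*E(h) + 2.0*E(-h) - E(-2*h)) / (2.0 * h**3)

print(f"harmonic force constant  C2 ~ {C2:.3f}")
print(f"anharmonic (cubic) term  C3 ~ {C3:.3f}")

# Harmonic frequency omega = sqrt(C2 / M) for an (arbitrary) mass M.
M = 1.0
print("harmonic frequency:", np.sqrt(C2 / M))
\end{verbatim}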
For example, controlling the phonon lifetime and achieving a low phonon contribution to the thermal conductivity is essential in designing good thermoelectric materials. For more details of the recent advancements in the computation of anharmonicity, see, e.g., \citet{Tadano_2018,McGaughey_2019}. One of the dreams of materials scientists would be the theoretical prediction of functional materials. Structure prediction of materials is a very challenging task because the energy landscape $E \bigl( \{ {\bf R} \} \bigr )$ is a complicated object with a huge number of local minima. Recently, there has been a tremendous advance in numerical tools to explore the complex potential energy surface (see, e.g., \citet{Andreoni_2020}). A remarkable success of crystal-structure search algorithms so far is, for example, the prediction of high-temperature superconductors under extremely high pressure (see, e.g., \citet{Flores-Livas_2020} for details). \section{Practical guide} \label{sec:practical} \subsection{Program packages} Nowadays, there are a variety of open-source DFT packages. Some of them are introduced in \citet{Talirz_2021}. See also, e.g., \url{https://en.wikipedia.org/wiki/List_of_quantum_chemistry_and_solid-state_physics_software} for a list. \subsection{Reproducibility} In practical DFT calculations, most of the open-source programs employ the Kohn-Sham formalism (see Sec.~\ref{sec_HK_KS}). When the same exchange-correlation functional is employed, different programs solve the same Kohn-Sham equations. Ideally, all codes should give the same solution. However, depending on, for example, the type of the basis set [plane-wave, (linearized) augmented plane wave (L)APW, linear muffin-tin orbital (LMTO), and so on] and the approximate ionic potentials [norm-conserving and ultrasoft pseudopotentials, projector augmented wave (PAW) method, and so on], the accuracy may differ among available packages. Therefore, it is an important task for the community to check the reproducibility of DFT calculations. Recently, such systematic benchmarks have started to be performed. For example, \citet{Lejaeghere_2016} compared various DFT packages for solids and confirmed that widely-used codes and methods give essentially identical solutions. \section{Summary} \label{sec:summary} DFT is currently one of the most standard and established methods for electronic structure calculations. Many open-source DFT packages have been developed in recent years, allowing a wide range of researchers to perform DFT calculations. Nowadays, DFT is widely used to calculate the properties of solids. In addition to ground-state electronic structure calculations, DFT is also useful for calculating excitation spectra (both single- and two-particle) and nuclear dynamics. Care must be taken, however, in interpreting the eigenvalues of the Kohn-Sham equation. Although it is impossible to cover all of the vast DFT-related topics, we have provided an introductory explanation of some of them in this chapter. Please refer to the references for more details on advanced topics. We hope this chapter will be of some help in understanding what can be done with DFT. \section*{Acknowledgments} We are grateful for valuable comments from Hideo Aoki, Ryotaro Arita, Silke Biermann, Kieron Burke, Kenta Kuroda, Yasushi Shinohara, Terumasa Tadano, and Shinji Tsuneyuki. \bibliographystyle{elsarticle-harv}
\section{Fast Neural Dynamics vs.\ Slow Adaption Processes} Complex dynamical systems are often characterized by a variety of timescales and the brain is no exception here \cite{izhikevich2007dynamical,gros2013complex}. It has been observed that the neural dynamics is contingent, for time scales ranging from hundreds of milliseconds to minutes, on the underlying anatomical network structure in distinct ways \cite{honey2007network}. This relation between anatomy and the timescale characterizing neural activity is present even for autonomous systems, viz.\ in the absence of external stimuli. It has been proposed, complementarily, that certain temporal aspects of the brain activity may reflect the multitude of timescales present in the environment \cite{kiebel2008hierarchy}, and could be induced through adaptive processes \cite{ulanovsky2004multiple}. The neurons in the brain are faced with the problem, in a related perspective, of maintaining long-term functional stability both on the single-neuron level and on the level of network activities, in view of the fact that the constituents of the molecular and biochemical machinery, such as ion channel proteins and synaptic receptors, have lifetimes ranging only from minutes to weeks \cite{marder2006variability}. This situation results in the need to regulate homeostatically both the inter-neural synaptic strengths \cite{turrigiano2004homeostatic} and the intra-neural parameters, the latter process termed intrinsic plasticity \cite{daoudal2003long,echegoyen2007homeostatic}. Homeostatic mechanisms in the brain can be regarded as part of the generic control problem of the overall brain dynamics \cite{oLeary2011neuronal}, with the adaption of neural parameters being necessary to achieve certain targets \cite{ge2010stable}. Here we study the consequences of ongoing slow adaption for the time evolution of the landscape of adiabatic attractors, viz.\ of the attractors of the dynamical system obtained by temporarily freezing the adaption process. We find that the locus of the instantaneous attracting state guides the overall time evolution and that the study of the attractor metadynamics, which we find to be either continuous or discontinuous, constitutes a powerful tool for the study of evolving neural networks. \section{Adapting Continuous-Time Recurrent Neural Networks} We consider continuous-time neural networks \cite{beer1995dynamics,beer1992evolving}, defined by \begin{equation} \dot x_i = -\Gamma x_i + \sum_j w_{ij} y_j, \qquad\quad y_i = \frac{1}{1+\mathrm{e}^{a_i(b_i-x_i)}}~. \label{eq:dot_x_i} \end{equation} One may consider either the firing rates $y_i=y_i(t)$ as the primary dynamical variables or, equivalently, the corresponding membrane potentials $x_i=x_i(t)$, with the sigmoidal $g(z)=1/(1+\exp(z))$ constituting the standard non-linear input-output relation for a single neuron. One denotes by $a_i$ the gain (slope) of the sigmoidal and by $b_i$ the respective threshold. Here $\Gamma>0$ sets the relaxation rate for the membrane potentials $x_i$ and the $w_{ij}$ are the inter-neural synaptic weights. \begin{figure}[!t] \centering \includegraphics[width=0.65\textwidth]{latching_1000} \caption{A network with $N=1000$ neurons and $N_p=20$ encoded attractor states, see Eq.~(\ref{eq:hopfield_encoding}). The neurons adapt, trying to optimize the relative information content (\ref{eq:KL}) of their respective activities. 
Shown are vertically displaced, as a function of time $t$, time lines of the overlaps $O_p(t)\in[0,1]$, see Eq.~(\ref{eq:overlap}). Notice that the polyhomeostatic adaption (\ref{eq:dot_ab}) leads to transient-state dynamics: one attractor relict $\xi^p$ after the other is transiently visited by the state of network activities, as evident from the bumps in the respective time lines. } \label{fig:latching} \end{figure} One speaks of intrinsic adaption when the internal parameters of an individual neuron adapt slowly over time \cite{triesch2005gradient,markovic2012intrinsic}, in our case the gain $a_i$ and the threshold $b_i$. This kind of internal adaption is necessary for keeping the output $y_i(t)\in[0,1]$ within the desired dynamical range, viz.\ within the working regime of the dynamical system. Anatomical constraints such as the limited availability of energy are imposed on the long-term firing statistics of each neuron. On a functional level, the firing patterns are expected to encode maximal information. The distribution encoding maximal information is at the same time the least biased or `noncommittal' with respect to the constraints \cite{jaynes1957information} and it is obtained by maximizing Shannon's information entropy. Given a certain mean $\mu$, here the mean target firing rate \cite{gros2013complex}, the desired output distribution is an exponential, \begin{equation} p_\lambda(y)\, \propto\, \mathrm{e}^{\lambda y}, \qquad \mu=\int dy\,y\,p(y)~, \label{eq:p_lambda} \end{equation} for the neural model (\ref{eq:dot_x_i}), with $\lambda$ being the respective Lagrange multiplier. The distance of a time series of data, like the neural firing rate $y_i(t)$ for a given neuron $i$, relative to this target distribution function $p_\lambda(y)$ is captured by the Kullback-Leibler divergence $K_i$ \cite{gros2013complex} \begin{equation} K_i = \int dy\, p_i(y)\log\left(\frac{p_i(y)}{p_\lambda(y)}\right), \quad\quad p_i(y) = \lim_{T\to\infty} \int_0^T \delta\big(y-y_i(t-\tau)\big)\,\frac{d\tau}{T}~, \label{eq:KL} \end{equation} where $p_i(y)$ is the time-averaged distribution of $y_i(t)$. One can now optimize the adaption by minimizing (\ref{eq:KL}) with respect to the intrinsic parameters $a_i$ and $b_i$ and one obtains \cite{linkerhand2013self,steil2007online} \begin{equation} \begin{array}{rcl} \dot a_i & =& \epsilon_a \big( {1}/{a_i} + (x_i-b_i) \theta\big) \\[0.5pt] \dot b_i &=& \epsilon_b ( - a_i)\theta, \qquad\qquad\qquad \theta= 1 - 2 y_i + \lambda \left( 1 - y_i \right) y_i \label{eq:dot_ab} \end{array} \end{equation} with $\epsilon_a$ and $\epsilon_b$ being the adaption rates for the gain $a_i$ and the threshold $b_i$ respectively. In effect, the system is given an entire distribution function $p_\lambda(y)$ as an adaption target. The adaption rules (\ref{eq:dot_ab}) hence generalize the principle of homeostasis, which deals with regulating a single scalar quantity, and have been denoted polyhomeostatic optimization \cite{markovic2010}. \begin{figure}[!t] \centering \raisebox{0.04\textwidth}{ \includegraphics[height=0.21\textwidth]{autapse_illustration} } \hspace{2ex} \includegraphics[height=0.29\textwidth]{autapse_attractor17} \hspace{2ex} \raisebox{0.04\textwidth}{ \includegraphics[height=0.21\textwidth]{threeSites_illustration} } \caption{Left: The autapse, a neural net with a single, self-coupled neuron. The output is directly fed to the input with $w_{11}=1$. 
Middle: Depending on the values of the intrinsic parameters $a$ and $b$ there may be one or two stable fixpoints $\dot x=0$ for the autapse, and one unstable fixpoint. The number and the position of the adiabatic fixpoints change when the gain $a=a(t)$ and the threshold $b=b(t)$ slowly adapt through (\ref{eq:dot_ab}). Shown are $y(x)$ (red solid line) and $x$ (dashed black line), compare Eq.~(\ref{eq:dot_x_i}). Right: A three-site network with inhibitory (red) and excitatory (green) synaptic weights. } \label{fig:autapse_attractor} \end{figure} \section{Transient State Dynamics} A convenient way to construct networks with a predefined set $\xi^p=(\xi_1^p,\xi_2^p,..)$ of attracting states, with $p=1,..,N_p$, is by selecting the synaptic weights as \cite{hopfield1982neural} \begin{equation} w_{ij} \,\propto\, \sum_p \big(\xi_i^p - \bar\xi_i\big) \big(\xi_j^p - \bar\xi_j\big)~, \label{eq:hopfield_encoding} \end{equation} where $\bar\xi_j$ is a local activity, averaged over all encoded patterns $\xi^p$. With the Hopfield encoding (\ref{eq:hopfield_encoding}) one can hence construct attractor networks having point attractors close to the patterns $\xi^p$, with a given, predefined average activity level $\bar\xi^p=\sum_i \xi_i^p/N$, where $N$ is the number of neurons in the network. As a first application we study a network of $N=1000$ neurons with the synaptic weights selected using the Hopfield encoding (\ref{eq:hopfield_encoding}) and $N_p=20$ random binary patterns $\xi^p=(\xi_1^p,..\,,\xi_N^p)$ drawn from a uniform distribution. We define with \begin{equation} O_p(t) = \frac{\sum_i \xi_i^p y_i}{||\xi^p||\,||y||}, \qquad\quad ||z||\equiv \sqrt{\sum_i z_i^2} \label{eq:overlap} \end{equation} the overlap between the current state $y(t)=(y_1(t),..\,,y_N(t))$ and a given stored attractor state $\xi^p$, in terms of the respective normalized scalar product. In Fig.~\ref{fig:latching} we show a typical simulation result for the overlaps $O_p(t)\in[0,1]$, with the individual time lines being shown vertically displaced and color-coded. The parameters used for the simulation are $\Gamma=1$, $\epsilon_a=0.1$, $\epsilon_b=0.01$ and $\bar\xi^p=0.2$, $\mu=0.2$. Alternative values for the adaption rates $\epsilon_{a,b}$ lead qualitatively to similar behaviors, whenever the adaption process is substantially slower than the neural dynamics (\ref{eq:dot_x_i}). One observes two distinct features. \begin{itemize} \item For $\epsilon_a=\epsilon_b=0$ the dynamics would eventually settle into a steady state close to one of the stored patterns $\xi^p$. The dynamical activity is, on the other hand, continuous and autonomously ongoing when intrinsic adaption is present, as evident from Fig.~\ref{fig:latching}. This is due to the fact that the system tries to achieve exponential firing-rate distributions. Without adaption the individual $p_i(y)$ would be simple $\delta$-functions in any fixpoint state and this would lead to a very high and therefore sub-optimal Kullback-Leibler divergence (\ref{eq:KL}). \item The overlaps $O_p(t)$ are, most of the time, relatively small, with temporally well-defined characteristic bumps corresponding to dynamical states $y(t)$ approaching closely one of the initially stored patterns $\xi^p$. This type of dynamics has been termed transient-state \cite{gros2007neural} and latching \cite{russo2008free} dynamics and may be used for semantic learning in autonomously active neural networks \cite{gros2009cognitive,gros2010semantic}. 
\end{itemize} The inclusion of intrinsic adaption hence destroys all previously present attracting states. When the adaption process is slow and hence weak, with the actual values for the adaption rates $\epsilon_a$ and $\epsilon_b$ not being relevant, the system will however still notice the remains of the original point attractors and slow down when close by. The resulting type of network has been termed attractor relict network \cite{gros2009cognitive}. \begin{figure}[!t] \centering \includegraphics[height=0.37\textwidth]{autapse_phaseDiagram} \hspace{0ex} \raisebox{0.045\textwidth}{ \includegraphics[height=0.27\textwidth]{autapse_hysteresis} } \caption{Left: Phase diagram of the autapse, as illustrated in Fig.~\ref{fig:autapse_attractor}. The activity $y\in[0,1]$ of the fixpoint is color-coded, for fixed gains $a$ and thresholds $b$ of the sigmoidal, see Eq.~(\ref{eq:dot_x_i}), when only a single stable fixpoint is present. The greenish area within the two white lines denotes the region of phase space containing two stable fixpoints. The thick white line is the limiting cycle for polyhomeostatically adapting intrinsic parameters $(a(t),b(t))$, compare Eq.~(\ref{eq:dot_ab}). Right: The firing rate of the adiabatic fixpoint (stable/unstable: thick/dashed lines) as a function of the intrinsic parameters. The arrows and numbers indicate the section of the hysteresis loop in the landscape of adiabatic fixpoints corresponding to the equally labeled sections of the limiting cycle of $(a(t),b(t))$ shown in the left panel. } \label{fig:autapse_phaseDiagram} \end{figure} \section{Discontinuous Attractor Metadynamics} A complete listing of all attracting states and the study of their respective time evolution is cumbersome for a large network like the one of Fig.~\ref{fig:latching}. For an in-depth study we have therefore selected two small model systems. We start with a single, self-coupled neuron, the autapse, as illustrated in Fig.~\ref{fig:autapse_attractor} (left). The fixpoint condition is $x=y(x)$, for $\Gamma=1=w_{11}$, and it is depicted in Fig.~\ref{fig:autapse_attractor} (middle). Depending on the location of the turning point $b$ of the sigmoidal and on its steepness $a$, there may be either one or two stable fixpoints; the respective phase diagram is presented in Fig.~\ref{fig:autapse_phaseDiagram} (left). Additionally an unstable fixpoint may be present (central region). The actual values $(a(t),b(t))$ of the intrinsic parameters polyhomeostatically adapt via (\ref{eq:dot_ab}); an example of an actual state is included in Fig.~\ref{fig:autapse_attractor} (middle) and the final limiting cycle in Fig.~\ref{fig:autapse_phaseDiagram} (left). The internal parameters settle, after an initial transient, in a region of phase space crossing two first-order phase transitions at which the number of attractors changes from $1\leftrightarrow2\leftrightarrow1$, resulting in a hysteresis loop for the adiabatic attractor landscape, compare Fig.~\ref{fig:autapse_phaseDiagram} (right). In the limit of long times the internal parameters, as given by the white elongated eight-shaped loop in Fig.~\ref{fig:autapse_phaseDiagram} (left), stay for finite time intervals in the regions of the phase diagram characterized by a single fixpoint (top/bottom : blue/red). The limiting cycle of the adaption trajectory hence overshoots the hysteresis loop characterized by the vertical transitions illustrated in Fig.~\ref{fig:autapse_phaseDiagram} (right). 
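The adiabatic fixpoint structure underlying this phase diagram can be verified directly. The following is a minimal sketch, with purely illustrative parameter values, that solves the autapse fixpoint condition $x=y(x)$ (for $\Gamma=1=w_{11}$) numerically and counts the stable solutions for given $(a,b)$:
\begin{verbatim}
import numpy as np

def g(x, a, b):
    """Sigmoidal transfer function y = 1 / (1 + exp(a (b - x)))."""
    return 1.0 / (1.0 + np.exp(a * (b - x)))

def fixpoints(a, b, grid=np.linspace(-0.5, 1.5, 20001)):
    """Adiabatic fixpoints x = g(x; a, b) of the autapse (Gamma = w11 = 1).
    A fixpoint is stable when the slope of g is smaller than one there."""
    f = g(grid, a, b) - grid
    roots = []
    for i in range(len(grid) - 1):
        if f[i] == 0.0 or f[i] * f[i + 1] < 0.0:           # sign change -> root
            x0 = 0.5 * (grid[i] + grid[i + 1])
            slope = a * g(x0, a, b) * (1.0 - g(x0, a, b))  # g'(x0)
            roots.append((x0, slope < 1.0))
    return roots

# Illustrative values: a flat sigmoidal (small gain a) yields a single stable
# fixpoint, a steep one supports two stable fixpoints plus one unstable.
for a, b in [(2.0, 0.5), (12.0, 0.5)]:
    fp = fixpoints(a, b)
    n_stable = sum(1 for _, stable in fp if stable)
    print(f"a={a:5.1f}, b={b:3.1f}: {len(fp)} fixpoints, {n_stable} stable")
\end{verbatim}
Scanning such parameter pairs over a grid reproduces the one- and two-fixpoint regions of the phase diagram.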
The dynamics is relatively slow on the hysteresis branches $1\to2$ and $3\to4$ and becomes very fast when the local adiabatic fixpoint vanishes. At this point the system is forced to rapidly evolve towards the opposite branch of the hysteresis loop, an example of self-organized slow-fast dynamics. \begin{figure}[!t] \centering \includegraphics[width=0.40\textwidth]{threeSites_attractor30}\hspace{8ex} \includegraphics[width=0.40\textwidth]{threeSites_attractor45} \caption{For the three-site network illustrated in Fig.~\ref{fig:autapse_attractor} (right), the time evolution of the firing rates $(y_1(t),y_3(t))$. The green line is the trajectory of the final limiting cycle and the red line that of the single adiabatic attractor present in the system. The black arrows illustrate the instantaneous flow, attracting the current dynamical state (green filled circles) to the current position of the adiabatic attractor (red filled circles). The right panel follows in time shortly after the left panel. } \label{fig:threeSites_attractor} \end{figure} \section{Continuous Attractor Metadynamics} As a second model system we consider the three-site network depicted in Fig.~\ref{fig:autapse_attractor} (right), with $w_{12}=w_{21}=1=w_{32}=w_{23}$ and $w_{31}=w_{13}=-1$. At first sight one may expect an attractor metadynamics equivalent to the one of the autapse, since the three-site net also has two possible attracting states $(y_1^*,y_2^*,y_3^*)$, with either $y_2^*$ and $y_1^*$ large and $y_3^*$ small, or with $y_2^*$ and $y_3^*$ large and $y_1^*$ small. There is indeed a region in phase space for which these two fixpoints coexist \cite{linkerhand2013generating}, but the system adapts the six internal parameters $a_i(t)$ and $b_i(t)$ such that a single adiabatic fixpoint remains, which morphs continuously under the influence of the polyhomeostatic adaption (\ref{eq:dot_ab}). In Fig.~\ref{fig:threeSites_attractor} we present the resulting limiting cycle of the full dynamics projected onto the $(y_1,y_3)$ plane (the activity $y_2$ is intermediate and only weakly changing). One observes that the adiabatic fixpoint moves on a continuous trajectory, an adiabatic limiting cycle. This behavior contrasts with the time evolution of the attractor landscape observed for the autapse, as presented in Fig.~\ref{fig:autapse_phaseDiagram} (right), which is characterized by a discontinuous hysteresis loop. The adiabatic fixpoint approaches $(y_1^*\approx1,y_3^*\approx0)$ and $(y_1^*\approx0,y_3^*\approx1)$ repeatedly, as evident in Fig.~\ref{fig:threeSites_attractor}. The corresponding phase space trajectory then slows down, as one can observe when plotting the actual time evolution, an example of transient-state dynamics. \section{Carrot and Donkey Dynamics} In the metaphor of the donkey trying to reach the carrot it carries itself, the animal will never reach its target. The case of self-generated attractor metadynamics studied here is analogous. The current dynamical state is attracted by the nearest adiabatic attractor, but the system itself morphs the attractor continuously when the trajectory tries to close in. The locus of the attractor evolves, either continuously or discontinuously, and the trajectory is then attracted by the adiabatic fixpoint at its new locus. This feature allows one to characterize decision processes in the brain and in model task problems dynamically \cite{beer2000dynamical,deco2010synaptic}, and choice options can be extracted in terms of the corresponding adiabatic fixpoints. 
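The kind of fast-slow simulation used throughout this study can be set up in a few lines. The following is a minimal sketch, integrating Eqs.~(\ref{eq:dot_x_i}) and (\ref{eq:dot_ab}) for the three-site motif with a simple Euler scheme; the time step, the adaption rates, and the value of $\lambda$ are illustrative choices and not necessarily those used for the figures.
\begin{verbatim}
import numpy as np

# Synaptic weights of the three-site motif: excitatory 1<->2 and 2<->3,
# inhibitory 1<->3, no self-couplings.
w = np.array([[ 0.0, 1.0, -1.0],
              [ 1.0, 0.0,  1.0],
              [-1.0, 1.0,  0.0]])

Gamma, lam = 1.0, -2.0        # relaxation rate; lam taken as a fixed parameter
eps_a, eps_b = 0.1, 0.01      # slow adaption rates (illustrative)
dt, steps = 0.01, 100000      # Euler time step and number of steps

x = np.random.uniform(-0.1, 0.1, 3)   # membrane potentials
a = np.full(3, 4.0)                   # gains
b = np.full(3, 0.5)                   # thresholds

trajectory = []
for _ in range(steps):
    z = np.clip(a * (b - x), -50.0, 50.0)
    y = 1.0 / (1.0 + np.exp(z))                      # firing rates, Eq. (1)
    x += dt * (-Gamma * x + w @ y)                   # fast neural dynamics
    theta = 1.0 - 2.0 * y + lam * (1.0 - y) * y      # Eq. (4)
    a += dt * eps_a * (1.0 / a + (x - b) * theta)    # slow gain adaption
    b += dt * eps_b * (-a) * theta                   # slow threshold adaption
    trajectory.append((y[0], y[2]))                  # project onto (y1, y3)

y1, y3 = np.array(trajectory).T
print("range of y1:", y1.min(), y1.max())
print("range of y3:", y3.min(), y3.max())
\end{verbatim}
Freezing $(a,b)$ at any instant and locating the fixpoint of the fast subsystem yields the corresponding adiabatic attractor.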
Here we studied autonomous systems, starting from attractor networks, with the aim of obtaining a first overview of the possible types of self-generated attractor metadynamics. \section*{Acknowledgments} The authors would like to thank Peter Hirschfeld for illuminating suggestions. \bibliographystyle{splncs}
\section{1. Introduction} Deep learning is developing rapidly in many areas of Natural Language Understanding, but it requires a large amount of data to ensure the generalization of the model. For example, for the famous pre-trained language model ``BERT'' \cite{devlin-etal-2019-bert}, Google used about 400 million training examples, a huge amount of data needed to guarantee model performance. However, in many cases we cannot obtain a large amount of data; for example, training data is scarce for the task of word-level emotion classification.\\ Previous research \cite{DBLP:conf/acl/HatzivassiloglouM97} tried to determine the ``semantic orientation'' of words and sentences, for example positive or negative tendencies; the accuracy is measured by comparing the predicted tendencies with the true labels.\\ Further, psychological research suggests that multiple factors, rather than emotional polarity alone, determine an emotion. For example, Bradley and Lang \cite{Measuring-emotion} use the Valence-Arousal-Dominance model to determine the emotional tendency from three different factors.\\ Word embedding is an effective way to represent language features, such as similarity \cite{DBLP:conf/emnlp/PenningtonSM14}, emotional orientation \cite{DBLP:conf/naacl/RotheES16}, and sentiment \cite{DBLP:journals/ci/CalvoK13}. Word embedding is a method to vectorize strings; different dimensions of the vectors may carry different information, but the embedding representation is a ``black box'', and it is hard to know what exactly each dimension represents.\\ In this paper, we propose that emotional information can be compressed into a one-dimensional embedding, called an ultradense subspace. We hypothesize that in this subspace sentiment can be quantified as numerical values and classified with the trained ultradense word embedding. For Chinese and English, we expect that the emotional words of both languages can be classified correctly, independently of language type and grammar.\\ One advantage of this method is that the model performs well even when data is scarce. Moreover, the data does not need to be labeled, so a large amount of data is available for training. Another advantage is that, compared to the original word embedding, the ultradense subspace has higher performance and quality. For example, in classification tasks, unrelated information is considered noise and is filtered out through the orthogonal transformation. As a result, the model has fewer parameters, which improves training speed and reduces the possibility of overfitting.\\ Our goal in this paper is to produce a decent word embedding for Chinese. The output of the model will be a lexicon converted from the original embedding and presented as a one-dimensional ultradense embedding table; this table provides a clearer, easier-to-interpret representation of word emotions. \section{2. Motivation} Most previous studies trained their models on English datasets and rarely used Chinese datasets, especially web comment data, even though web comments carry stronger emotional tendencies than written text. I would like to perform Chinese emotion classification using this method, which transforms the embedding into a low-dimensional, interpretable embedding. 
Based on the output of the embedding, we can tell whether the model performs well on Chinese emotion classification. At the same time, we also look for factors that decrease the performance of the model. For example, research \cite{DBLP:conf/naacl/RotheES16} finds that factors such as the embedding size, stop words, the embedding dimensionality, and the seed words may affect model performance. Thus, we decided to evaluate which factors influence the model results.\\ \section{3. Background} In this section, we introduce some relevant background on emotional expression and word embeddings, and some of the techniques used in our methodological work. \subsection{3.1 emotion representation} Emotional expression is distinguished not only by positive and negative polarity but also by multiple factors. The VAD model (Figure 1) \cite{Measuring-emotion} considers emotions to be composed of three different dimensions, named Valence (positive-negative), Arousal (calm-excited), and Dominance (perceived degree of control over the situation). There are also studies \cite{DBLP:conf/hci/ZhongQZ19} which suggest that Dominance does not affect the expression of emotion (only the VA model matters). As the most common system for assessing emotional tendencies, the VAD model quantifies Valence, Arousal and Dominance and then evaluates them. \begin{figure} \centering \includegraphics[scale=0.6]{VAE_model.png} \caption{The emotion space covered by the Valence-Arousal-Dominance (VAD) model}~\label{fig:figure1} \end{figure} \subsection{3.2 emotion lexicon} In some psychological studies, words with emotional tendencies are represented by specific numerical values. For example, if the value of a word is greater than a certain threshold, the word is considered a positive emotion word, whereas if it is less than the threshold, it is a negative emotion word. In this paper, the emotion lexicon we build only uses positive and negative emotion as a one-dimensional evaluation. For illustration, we visualize the result as 2D figures, in which the emotional values of the words are distributed.\\ \subsection{3.3 word embedding} A word embedding is a vector representation which encodes a word into vector form. In this way, words are converted into numerical values and can be used in unsupervised learning. Some very popular embedding algorithms are the following: WORD2VEC is a popular embedding algorithm, which uses a trimmed-down neural network \cite{DBLP:conf/nips/MikolovSCCD13}. FASTTEXT is a derivative of WORD2VEC that additionally combines character n-grams \cite{DBLP:journals/tacl/BojanowskiGJM17}. Unlike the previous embedding algorithms, GLOVE trains word vectors directly on the word co-occurrence matrix, which improves training efficiency. \subsection{3.4 word-level prediction} In previous research \cite{DBLP:journals/tois/TurneyL03}, researchers distinguished the emotional tendency of words by using word polarity. The specific approach is to use two kinds of training data, the original word embeddings and the seed words. The seed words are usually positive or negative words; the distance between pairs of seed words is computed pointwise so as to obtain the emotional tendencies of words. Later, Rothe et al. \cite{DBLP:conf/naacl/RotheES16} proposed transforming the word embedding from the original space into an ultradense subspace by training an orthogonal matrix, which yields an embedding of the word's emotion.\\ \section{4.
Research Method} In order to analyze the sentiment of TikTok comment data and build a TikTok sentiment lexicon, we need to collect and process comments from the TikTok app. Our plan is to use a web crawler to crawl data from the TikTok app, then preprocess and vectorize the data: we implement a web crawler framework to collect the data, use the Python data analysis tool Pandas to pre-process the comment data, and use the WORD2VEC training tool Gensim to vectorize the comment sentences; the resulting word embedding table is saved as VEC files. For the seed words, we labeled seed word groups of size 5, 10, and 15. In the end, after training the model, we visualized and analyzed the results.\\ \section{5. Research Questions} The first question is ``Is there a relationship between the seed words and the emotions?'' When the number of words reaches a certain amount, the generalization ability of the model improves, that is, the model may learn more prior information. Since the model learns from the distances between seed words, we can check whether the distinguishing ability of the model is affected when the number of seed words changes. One hypothesis is that the model's ability to distinguish emotions grows if we expand the number of seed words. To verify this hypothesis, we set the number of seeds to 5, 10, and 15, respectively, and train the model with each seed word group. Finally, the three results are visualized to verify the hypothesis.\\ The second question is ``Does a specific video guide the commentators' emotions?'' When we collected the data, we found that the commentators' emotional inclination tends to be consistent with the emotional tendency that the video wants to express. For example, most comments are sad under videos of funerals, but most comments are cheerful under funny videos. Therefore, we hypothesize that the sentiment of a particular video will guide the emotional tendencies of its comments. To verify this hypothesis, we collected two different datasets, comment data from random videos and comment data from specific videos, trained the model on each dataset, and finally compared the distributions of words.\\ The third question is ``Compared with the previous method, is there any improvement in compressing the embedding into an ultradense subspace?'' In previous research, a commonly used approach for feature extraction is Principal Component Analysis (PCA), a common tool in data analysis and model prediction. It is used to visualize the distances and relationships between data points. PCA uses Singular Value Decomposition to select appropriate dimensions and thereby reduce the dimensionality of the data. In this paper, after processing the data with PCA and with the ultradense compression, we compare the results to find which method better reveals the emotion of words. One hypothesis is that the data processed by the ultradense compression performs better on the emotion classification task, because the ultradense compression uses seed words, which reduces the distance between words with the same emotion and increases the distance between words with different emotions. To verify this hypothesis, we use the same training data to obtain results from PCA and from the ultradense compression, and then visualize and analyze the results. \section{6. Data Collection} \subsection{6.1 collection} The data we need to collect are users' comments from random TikTok videos and users' comments from particular videos. 
The comment data from random videos can be used to analyze users' sentiment for building the sentiment lexicon, and the comment data from specific videos can be used to analyze users' sentiment expression under specific videos. People's sentiment is often reflected in words, and words with emotional appeal are often used in comments. As a result, after vectorizing the comment data, vectors with the same sentiment often cluster together in the vector space. Through the research on RQ1 we can obtain the distribution of the vectors of random video comments in the vector space, which tells us which sentiment expressions correspond to which vector distributions.\\ We randomly crawl TikTok comment data with the web crawler, and additionally crawl comment data under some videos with very high numbers of likes. These comment data can be found at \url{https://github.com/h2222/douyin_comment_dataset}; the repository contains three types of files. The dataset.csv file contains the original data crawled from TikTok, which requires preprocessing. The fixdata.py file is the script for processing the raw data. The processed data is saved in the dataupdate.xlsx file for further use. The data records the user's age, gender, nickname, comments, number of likes a comment received, etc. \subsection{6.2 preprocess} The data is saved as a csv file, and a new GitHub repository was created to store it. The data falls into two categories: random video comment data and specific video comment data. For the dataset, the relevant columns include the user's nickname, age, gender, number of likes, and the comment text; the gender column can be binarized, e.g., 0 for female and 1 for male. In addition, comments can be grouped by user age to study whether age affects sentiment expression. The Python Pandas module is used to analyze and preprocess the data, and the Gensim module is used to train the original embedding table from the pre-processed comment data; we also label emotional words as seed words. \section{7. Data Analysis} First of all, we want to evaluate the performance of the model when using different numbers of seed words. We plot the distribution of the words for the different seed word sets. Figure 2 shows the Chinese TikTok word distribution, which is based on the training results of the model; the emotional distribution is shown through the distances between words.\\ \begin{figure} \centering \includegraphics[scale=0.25]{differences.png} \caption{The word distribution based on different seed word sets; the red points are the 5-seed result, the blue points the 10-seed result, and the green points the 15-seed result.}~\label{fig:figure1} \end{figure} In the word distribution, the X axis is the index over all words, which come from the TikTok comment data after word segmentation. The Y axis is a random number between -100 and 100; we add it because the word distribution is one-dimensional (the experiment only obtains positive and negative emotion), so the distribution would otherwise be ``crowded'' on a line. In order to visualize the distribution better, we add a random number as the second dimension, so that the words are mapped onto a 2D plane.\\ Looking at the figure, we find that the model's ability to distinguish positive emotion is stronger than its ability to distinguish negative emotion. For example, if we regard the position where the value equals 0 as the location of neutral words, the minimum value reached by negative words is close to -600, whereas the maximum value reached by positive words is close to +2000. 
Judging from the semantics as well, the model distinguishes positive sentiment words better. We think the reason is that there are more positive words than negative words in the TikTok dataset, which lets the model produce better training results for positive words.\\ Further, we found that the number of seed words may affect the ability to classify emotions. For example, when training the model on the same dataset, we found that with 15 seed words (the green distribution in the figure) the maximum value of positive words is +1000 and the minimum value of negative words is -250, a span of approximately 1250. With 5 seed words (the red part in the figure), the maximum value is +2000 and the minimum value is -600, a span of 2600. Therefore, the fewer the seed words, the stronger the model's ability to separate sentiment words. In our view, during training the model distinguishes word sentiment by optimizing the distances between seed words and embedded words; adding too many seed words may increase the model's parameters, which leads to overfitting of the model. Based on the experiments, the model performs best when the number of seed words equals 5.\\ \begin{figure} \centering \includegraphics[scale=0.25]{line_graph_nosorted.png} \caption{The embedding value for each word; the red line is the 5-seed result, the blue line the 10-seed result, and the green line the 15-seed result.}~\label{fig:figure1} \end{figure} Next, we draw a line graph to show how the model separates sentiment words. As shown in Figure 3, the X axis is the index over all words, and the Y axis is the output value of each word. According to the figure, the model can distinguish the first 2000 words well, since words 1 to 2000 span a large interval on the Y axis. Beyond word 2000, the model gradually loses its ability to distinguish the emotion of the words.\\ Next, we sort the results. The model converts the word embedding table (of shape [vocabulary size, embedding size]) in the original space into a word representation (of shape [vocabulary size, 1]) in the ultradense subspace. We therefore sort all words according to their values in the new one-dimensional space. The sorted result is shown in Figure 4.\\ \begin{figure} \centering \includegraphics[scale=0.25]{line_graph_sorted.png} \caption{The line graph of the sorted words; the red line is the 5-seed result, the blue line the 10-seed result, and the green line the 15-seed result.}~\label{fig:figure1} \end{figure} We observe clear upward and downward trends on the two sides of Figure 4, which shows that the word representation in the ultradense subspace can separate the sentiment of words. With 15 seed words (the green part in the figure) the turning points are more pronounced; we suspect that increasing the number of seed words may improve the model's ability to classify ambiguous words. For example, some nouns with emotional tendencies, such as `cake' or `firework', can then be classified correctly.\\ \begin{figure} \centering \includegraphics[scale=0.25]{PCA_curve.png} \caption{The result of PCA; the X axis is the word index sorted by the PCA embedding value, and the Y axis is the PCA embedding value.}~\label{fig:figure1} \end{figure} In order to check whether the emotion is better represented by the ultradense compression method, we use PCA for a comparative experiment. 
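As a point of reference for this comparison, the following is a minimal sketch of the PCA baseline, assuming the trained embeddings are available as a NumPy matrix with one row per word; the file names are hypothetical.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: a [vocabulary size, embedding size] matrix of word
# vectors trained with Gensim on the comment data, plus the word list.
embeddings = np.load("tiktok_word2vec.npy")      # e.g. shape (20000, 100)
words = open("vocab.txt", encoding="utf-8").read().split()

# Compress the original space to a single dimension with PCA, giving a
# [vocabulary size, 1] table comparable to the ultradense output.
pca_values = PCA(n_components=1).fit_transform(embeddings).ravel()

# Sort words by their one-dimensional PCA value, as in Figure 5.
order = np.argsort(pca_values)
for idx in order[:10]:
    print(words[idx], pca_values[idx])
\end{verbatim}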
The original word embedding table (of shape [vocabulary size, embedding size]) is compressed into a new embedding of shape [vocabulary size, 1], consistent with the shape of the ultradense compression result, as shown in Figure 5.\\ In Figure 5, the X axis is the index of the words sorted by their PCA embedding value, and the Y axis is the embedding value itself. Compared with the orthogonal transformation method (Figure 4), the curve of the PCA-based method is much flatter, which shows that PCA cannot distinguish the emotion of words well. Moreover, the PCA method has insufficient predictive ability for negative emotion words. \section{8. Findings and discussion} \subsection{8.1 Is there a relationship between the seed words and the emotion?} The results show that there is indeed a relationship between the seed words and the emotional tendencies, but the word emotion may be related more strongly to the training data. An obvious observation is that most comments use positive words, while negative words do not appear widely in the comments, which can be verified by figure 6. An interesting result is that words with a large ultradense embedding value appear very frequently in the comments; these words usually have a positive meaning. Table 1 lists some of the most frequent words and their numbers of occurrences.\\ \begin{table} \centering \begin{tabular}{l r} {\small\textit{word}} & {\small \textit{occurrences}}\\ \midrule feel & 3710 \\ Awesome & 3523 \\ heart & 3365\\ praise & 2351 \\ ...\\ \end{tabular} \caption{The most frequent words in the comment dataset and their numbers of occurrences.}~\label{tab:table1} \end{table} In the comment dataset, most of the high-frequency words have positive meanings; when we chose the seed words, we found that 30\% of the seed words appear among the top 200 words of the comment dataset. Combined with the earlier model results, we conjecture that if words of a certain type appear frequently, they are easier for the model to distinguish. In addition, we found that, among the comments that contain high-frequency words, 44\% contain two or more of them. In other words, nearly half of the high-frequency words appear in pairs in the comments, which creates associations between these words when they are converted into embeddings and thereby affects the model.\\ \subsection{8.2 Does a specific video guide the commentator's emotions?} Specific videos do have an impact on the comments. We collected comments from five different categories of videos: health, stars, funny, news and entertainment. After training the model to obtain the emotion lexicon, we found that the emotional tendency of comments is biased towards positive under funny videos, while the emotional tendency of comments is more neutral under news videos. \\ We suspect the reason is that the content of the video guides the emotional tendency of the comments, so the frequency of words matching the video's emotional tendency increases, and the model can better learn the emotional information of those words, thereby improving its ability to discern the corresponding emotions. For example, when processing the comment data of funny videos, we found that some words with a clearly positive tendency have a high term frequency (the frequency of a word divided by that of the most frequent word), such as Awesome, funny, and happy face; the term frequency of these words can be close to 0.9. 
Finally, we use the model to classify the sentiments of the words and find that it is noticeably more effective on high-frequency words.
\subsection{8.3 Compared with the previous method, is there any improvement in compressing embeddings into an ultradense subspace?}
In our tests, ultradense word embeddings outperform the PCA method on the sentiment classification task. Comparing Figure 4 and Figure 5, the ultradense embedding method separates the words clearly (the turning point in Figure 4), whereas the PCA method, although it orders the words (the curve in Figure 5), does not separate them clearly: its curve is relatively smooth, so PCA does not distinguish word sentiments well.\\
We believe ultradense word embeddings perform better because the model learns sentiment information from the seed words. The model reduces the distance between seed words with the same sentiment and increases the distance between seed words with opposite sentiments, so the transformed representation captures sentiment information, which improves the sentiment representation of the words. PCA does not analyze word sentiment at all, so ultradense word embeddings are better for sentiment classification.\\
\section{9 Threats to Validity}
Our work still has some limitations. First, the volume of collected data is small. Since we could not collect English comment data, we build sentiment lexicons on Chinese data only. Because Chinese and English differ in grammar and expression, we expect differences across datasets in different languages, so a comparative test is needed to evaluate how the model performs on other languages.\\
Second, the amount of data we collected did not meet expectations. We spent a long time collecting TikTok's comment data. As a very popular application, TikTok uses a complex signing and encryption scheme to protect user privacy; this is a good thing, but researchers who want to analyze the data must spend a lot of time working around TikTok's anti-crawling mechanism. When collecting the comments, it took us a long time to handle TikTok's x-gorgon signature, a hexadecimal signature based on MD5 that is difficult to reproduce. As a result, the data we collected did not reach the expected amount. In future research we will strengthen the data collection.\\
Finally, we think the performance of the model can be optimized further. During training we need to reduce the distance between words with the same sentiment and increase the distance between words with different sentiments, so both losses must be optimized at the same time. We therefore use a loss function with a hyperparameter $\alpha$ that combines the two losses in a fixed proportion, so that the global loss can be optimized. The formula is as follows:
\begin{align*}
Loss = (1 - \alpha) \times SLoss + \alpha \times DLoss
\end{align*}
where $\alpha$ is a hyperparameter of the model, $SLoss$ is the loss over pairs of words with the same sentiment, and $DLoss$ is the loss over pairs of words with different sentiments.\\
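As an illustration of how the two losses are mixed (and only as an illustration: the concrete distance terms in this sketch are assumptions, not the exact losses used in our training), a minimal version might look like this:
\begin{verbatim}
# Illustration only: mixing a same-sentiment loss and a different-sentiment
# loss with the hyperparameter alpha, as in the formula above. The squared
# differences of projected values are an assumption made for the sketch.
import numpy as np

def combined_loss(q, same_pairs, diff_pairs, alpha=0.3):
    # SLoss: pull projections of same-sentiment seed pairs together.
    s_loss = np.mean([(x @ q - y @ q) ** 2 for x, y in same_pairs])
    # DLoss: reward distance between opposite-sentiment pairs (hence the minus).
    d_loss = -np.mean([(x @ q - y @ q) ** 2 for x, y in diff_pairs])
    return (1 - alpha) * s_loss + alpha * d_loss

rng = np.random.default_rng(1)
dim = 50
same_pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(5)]
diff_pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(5)]
q = rng.normal(size=dim)
q /= np.linalg.norm(q)
print(combined_loss(q, same_pairs, diff_pairs, alpha=0.3))
\end{verbatim}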
During training, we found that once the model reaches the 4th epoch (we train for 10 epochs), the loss hardly drops any further. We suspect this plateau is caused by a bias in the training data: as noted in the findings and discussion section, positive sentiment words have a much higher frequency in the dataset, which leads to an uneven distribution of positive and negative sentiment words. Consequently, the positive words are optimized better during training, so the model classifies positive words better than negative words.
\section{10 Related Work}
Faruqui \cite{DBLP:conf/acl/FaruquiD15} post-processes word embeddings based on word similarity; this post-processing is not based on an orthogonal transformation and does not preserve word distances, which makes it less suitable for other applications such as syntax detection, and it does not share the efficiency benefit of an ultradense embedding.\\
Within a tensor framework, Rothe and Sch\"utze \cite{DBLP:conf/acl/RotheS15} convert word embeddings into sense (synset) embeddings. In their work, all the embeddings live in the same subspace, whereas we want to keep only specific information in the subspace, creating ultradense subspaces with a specific role (sentiment embedding).\\
The method we use is also related to directed PCA \cite{PCA}; however, unlike our approach, the directed PCA solution is not orthogonal.\\
Creating a sentiment lexicon usually requires labeled training corpora: for example, manually labeling the sentiment of words in the training set, grouping words with the same meaning, or adding words with similar meanings to the seed set (\cite{DBLP:conf/acl/Turney02}, \cite{DBLP:journals/jair/KiritchenkoZM14}). Heerschop et al. \cite{DBLP:conf/bis/HeerschopHF11} used WordNet and PageRank-based algorithms to propagate the sentiment of seed words to unknown words. Scheible \cite{DBLP:conf/acl/Scheible10} proposed a semi-automatic, machine-translation-based method for building sentiment lexicons. Hamdan et al. \cite{DBLP:conf/semeval/HamdanBB15} estimated sentiment levels from the average of six sentiment lexicons. These methods are difficult to apply to low-resource languages. Our experiments show that the orthogonal transformation method can train a sentiment lexicon with a small amount of training resources and performs better than lexicons created by other semi-automatic methods.\\
\section{11 Conclusions}
We collected TikTok's Chinese comment data with a web crawler and applied a method that embeds the original word vectors into an ultradense subspace. Two experiments were used to verify that the ultradense subspace can classify word sentiment: (1) by varying the number of seed words, we showed that the model learns to separate sentiments during training; (2) by comparing with the PCA method, we showed that ultradense word embeddings separate word sentiments better. Only 5-15 seed words are needed as training examples to learn the subspace and to obtain sentiment information about the words from it.
\section{12 Future Work}
In future work, we will explore embedding other kinds of information in one or more orthogonal subspaces, rather than only sentiment.
Such a decomposition would not change the information contained in the embeddings, but it would make them more compact, more meaningful, and easier to interpret for a given application. In addition, we will train on a larger dataset, since the amount of data strongly influences the model's generalization ability.
\bibliographystyle{plain}